\newcommand{\newsection}[1]{\section{#1} \setcounter{equation}{0}} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{prop}[theorem]{Proposition} \theoremstyle{definition} \newtheorem{remark}[theorem]{Remark} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \theoremstyle{definition} \newtheorem{assumption}[theorem]{Assumption} \newcommand{{\int\hspace*{-4.3mm}\diagup}}{{\int\hspace*{-4.3mm}\diagup}} \makeatletter \def\dashint{\operatorname{\,\,\text{\bf-}\kern-.98em\DOTSI\intop\ilimits@\!\!}} \makeatother \newcommand{\WO}[2]{\overset{\scriptscriptstyle0}{W}\,\!^{#1}_{#2}} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \def\textit{\textbf{c}}{\textit{\textbf{c}}} \def\textit{\textbf{u}}{\textit{\textbf{u}}} \def\textit{\textbf{v}}{\textit{\textbf{v}}} \def\textit{\textbf{w}}{\textit{\textbf{w}}} \def\textit{\textbf{f}}{\textit{\textbf{f}}} \def\textit{\textbf{g}}{\textit{\textbf{g}}} \def\textit{\textbf{h}}{\textit{\textbf{h}}} \def\textit{\textbf{P}}{\textit{\textbf{P}}} \def\textit{\textbf{\phi}}{\textit{\textbf{\phi}}} \def\det{\text{det}} \def\tilde{\mathcal{L}_0^\sigma}{\tilde{\mathcal{L}_0^\sigma}} \def\hat{\mathcal{L}_0^\sigma}{\hat{\mathcal{L}_0^\sigma}} \def\alpha'+\sigma{\alpha'+\sigma} \def\alpha'/\sigma{\alpha'/\sigma} \defa{a} \defb{b} \defc{c} \def{\sf A}{{\sf A}} \def{\sf B}{{\sf B}} \def{\sf M}{{\sf M}} \def{\sf S}{{\sf S}} \def\mathrm{i}{\mathrm{i}} \def\.5{\frac{1}{2}} \def\mathbb{A}{\mathbb{A}} \def\mathbb{O}{\mathbb{O}} \def\mathbb{R}{\mathbb{R}} \def\mathbb{Z}{\mathbb{Z}} \def\mathbb{E}{\mathbb{E}} \def\mathbb{N}{\mathbb{N}} \def\mathbb{H}{\mathbb{H}} \def\mathbb{Q}{\mathbb{Q}} \def\mathbb{C}{\mathbb{C}} \def\tilde{G}{\tilde{G}} \def\textsl{\textbf{a}}{\textsl{\textbf{a}}} \def\textsl{\textbf{x}}{\textsl{\textbf{x}}} \def\textsl{\textbf{y}}{\textsl{\textbf{y}}} \def\textsl{\textbf{z}}{\textsl{\textbf{z}}} \def\textsl{\textbf{w}}{\textsl{\textbf{w}}} 
\def\mathfrak{L}{\mathfrak{L}} \def\mathfrak{B}{\mathfrak{B}} \def\mathfrak{O}{\mathfrak{O}} \def\mathfrak{R}{\mathfrak{R}} \def\mathfrak{S}{\mathfrak{S}} \def\mathfrak{T}{\mathfrak{T}} \def\mathfrak{q}{\mathfrak{q}} \def\text{Re}\,{\text{Re}\,} \def\text{Im}\,{\text{Im}\,} \def\mathcal{A}{\mathcal{A}} \def\mathcal{B}{\mathcal{B}} \def\mathcal{C}{\mathcal{C}} \def\mathcal{D}{\mathcal{D}} \def\mathcal{E}{\mathcal{E}} \def\mathcal{F}{\mathcal{F}} \def\mathcal{G}{\mathcal{G}} \def\mathcal{H}{\mathcal{H}} \def\mathcal{P}{\mathcal{P}} \def\mathcal{M}{\mathcal{M}} \def\mathcal{O}{\mathcal{O}} \def\mathcal{Q}{\mathcal{Q}} \def\mathcal{R}{\mathcal{R}} \def\mathcal{S}{\mathcal{S}} \def\mathcal{T}{\mathcal{T}} \def\mathcal{L}{\mathcal{L}} \def\mathcal{U}{\mathcal{U}} \def\mathcal{I}{\mathcal{I}} \newcommand\frC{\mathfrak{C}} \def\bar{P}{\bar{P}} \newcommand{\RN}[1]{\textup{\uppercase\expandafter{\romannumeral#1}}} \newcommand{\ip}[1]{\left\langle#1\right\rangle} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\norm}[1]{\lVert#1\rVert} \newcommand{\Norm}[1]{\left\lVert#1\right\rVert} \newcommand{\abs}[1]{\left\lvert#1\right\rvert} \newcommand{\tri}[1]{|\|#1|\|} \newcommand{\operatorname{div}}{\operatorname{div}} \newcommand{\text{dist}}{\text{dist}} \newcommand{\operatornamewithlimits{argmin}}{\operatornamewithlimits{argmin}} \renewcommand{\epsilon}{\varepsilon} \newcounter{marnote} \newcommand\marginnote[1]{\stepcounter{marnote}$^{\bullet\,\themarnote}$\marginpar{\tiny$\bullet\,\themarnote$:\,#1}} \begin{document} \title[Gradient estimates for the insulated conductivity problem]{Gradient estimates for the insulated conductivity problem: the case of $m$-convex inclusions} \author[Z.W. Zhao]{Zhiwen Zhao} \address[Z.W. 
Zhao]{Beijing Computational Science Research Center, Beijing 100193, China.} \email{[email protected]} \date{\today} \begin{abstract} We consider an insulated conductivity model with two neighboring inclusions of $m$-convex shapes in $\mathbb{R}^{d}$ when $m\geq2$ and $d\geq3$. We establish the pointwise gradient estimates for the insulated conductivity problem and capture the gradient blow-up rate of order $\varepsilon^{-1/m+\beta}$ with $\beta=[-(d+m-3)+\sqrt{(d+m-3)^{2}+4(d-2)}]/(2m)\in(0,1/m)$, as the distance $\varepsilon$ between these two insulators tends to zero. In particular, the optimality of the blow-up rate is also demonstrated for a class of axisymmetric $m$-convex inclusions. \end{abstract} \maketitle \section{Introduction} In this paper, we consider a bounded domain $D\subseteq\mathbb{R}^{d}\,(d\geq3)$ with $C^{2}$ boundary, which contains two $C^{2,\gamma}\,(0<\gamma<1)$ inclusions $D_{1}$ and $D_{2}$ that are a distance $\varepsilon$ apart. Moreover, these two inclusions are far away from the exterior boundary $\partial D$. Denote $\Omega:=D\setminus\overline{D_{1}\cup D_{2}}$. The insulated conductivity problem is modeled as follows: \begin{align}\label{con002} \begin{cases} \Delta u=0,&\hbox{in}\;\Omega,\\ \frac{\partial u}{\partial\nu}=0,&\mathrm{on}\;\partial D_{i},\,i=1,2,\\ u=\varphi, &\mathrm{on}\;\partial D, \end{cases} \end{align} where $\varphi\in C^{2}(\partial D)$ is the given boundary data and $\nu$ denotes the unit outer normal to the domain. The solution $u$ represents the voltage potential and its gradient $|\nabla u|$ is called the electric field. The electric field always exhibits high concentration in the thin gap between inclusions. The interest of this paper lies in establishing the optimal gradient estimates for problem \eqref{con002} with two nearly touching $m$-convex inclusions in dimensions greater than two. 
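Throughout, solutions of \eqref{con002} are understood in the usual weak sense; we record the standard formulation for completeness: $u\in H^{1}(\Omega)$ with $u=\varphi$ on $\partial D$ satisfies
\begin{align*}
\int_{\Omega}\nabla u\cdot\nabla\psi\,dx=0,\quad\text{for all}\;\psi\in H^{1}(\Omega)\;\text{with}\;\psi=0\;\mathrm{on}\;\partial D,
\end{align*}
so that the Neumann conditions on $\partial D_{1}$ and $\partial D_{2}$ are encoded in the integral identity rather than imposed pointwise.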
\subsection{Previous works} Ammari et al.~\cite{AKL2005,AKLLL2007} were the first to study the insulated conductivity problem and found that the optimal gradient blow-up rate is $\varepsilon^{-1/2}$ in dimension two. Bao, Li and Yin \cite{BLY2010} utilized a ``flipping'' technique to obtain the upper bound on the gradient as follows: \begin{align}\label{LYL90} \|\nabla u\|_{L^{\infty}(\Omega)}\leq C\|\varphi\|_{C^{2}(\partial D)}\varepsilon^{-1/2},\quad\text{for any}\;d\geq2. \end{align} Subsequently, Yun \cite{Y2016} considered a pair of unit spheres in three dimensions and established the optimal gradient estimates only in the shortest segment between these two insulators, which revealed that the blow-up rate is $\varepsilon^{\frac{\sqrt{2}-2}{2}}$. Li and Yang \cite{LY2021,LY202102} further improved and extended the upper bound \eqref{LYL90} to \begin{align}\label{TOP002} \|\nabla u\|_{L^{\infty}(\Omega)}\leq C\|\varphi\|_{C^{2}(\partial D)}\varepsilon^{-1/m+\beta},\quad\text{with}\;d\geq3,\,m\geq2, \end{align} for two adjacent $m$-convex insulators, where $\beta>0$ is not explicit. The upper bound in \eqref{TOP002} also shows that the singularity of the electric field weakens as the interfacial boundaries of the insulators become flatter. From an engineering viewpoint, it is therefore important to study general $m$-convex insulators, especially in view of the optimal shape design of insulated materials. In subsequent work, Weinkove \cite{W2021} gave an explicit $\beta(d)$, which sharpens the upper bound \eqref{TOP002} in the case $m=2$ and $d\geq4$. Recently, Dong, Li and Yang \cite{DLY2021} established the optimal gradient estimates and obtained the explicit blow-up rate as follows: \begin{align*} |\nabla u|\sim\varepsilon^{\frac{\alpha-1}{2}},\quad\alpha=\frac{-(d-1)+\sqrt{(d-1)^{2}+4(d-2)}}{2},\quad\mathrm{for}\;m=2,\,d\geq3. 
\end{align*} This demonstrates that in the case of $m=2$ and $d=3$, the blow-up rate $\varepsilon^{\frac{\sqrt{2}-2}{2}}$ captured in \cite{Y2016} is also optimal in the whole matrix region, especially covering the shortest segment. Problem \eqref{con002} is actually the limit equation of the following conductivity model with piecewise constant coefficients \begin{align}\label{pro006} \begin{cases} \mathrm{div}(a_{k}(x)\nabla u_{k})=0,&\mathrm{in}\;D,\\ u_{k}=\varphi,&\mathrm{on}\;\partial D, \end{cases}\quad a_{k}(x)=& \begin{cases} k\in(0,\infty),&\mathrm{in}\;D_{1}\cup D_{2},\\ 1,&\mathrm{in}\;\Omega, \end{cases} \end{align} with $\varphi\in C^{2}(\partial D)$, as the conductivity $k$ degenerates to zero. For problem \eqref{pro006}, Dong and Li \cite{DL2019} used the Green's function method to capture the explicit dependence of the gradient on the conductivity $k$ and the distance $\varepsilon$ between two disks in two dimensions. This, in particular, answers open problem (b) proposed by Li and Vogelius in \cite{LV2000}. For more related work on finite coefficients, we refer to \cite{BV2000,DZ2016,CEG2014,KL2019} for elliptic equations and \cite{LN2003,BASL1999} for elliptic systems, respectively. In particular, the problem of estimating the gradient of solutions in the presence of closely located inclusions was first posed in \cite{BASL1999}, which concerns a numerical investigation of the Lam\'{e} system arising from composites and greatly motivated the aforementioned and subsequent theoretical studies. When the conductivity $k\rightarrow\infty$, the limit equation of \eqref{pro006} is called the perfect conductivity equation. There is a long list of papers devoted to the study of gradient estimates and asymptotics for the perfect conductivity problem. 
In the presence of strictly convex inclusions (that is, $m=2$), the blow-up rate of the concentrated field has been proved to be \begin{align*} \rho_{d}(\varepsilon)=& \begin{cases} \varepsilon^{-1/2},&\mathrm{if}\;d=2,\\ |\varepsilon\ln\varepsilon|^{-1},&\mathrm{if}\;d=3,\\ \varepsilon^{-1},&\mathrm{if}\;d\geq4, \end{cases} \end{align*} see \cite{AKLLL2007,BC1984,BLY2009,AKL2005,Y2007,Y2009,K1993} for $d=2$, \cite{BLY2009,LY2009,BLY2010,L2012} for $d=3$, and \cite{BLY2009} for $d\geq4$, respectively. For a more precise description of the singularities of the concentrated field, see \cite{KLY2013,ACKLY2013,KLY2014,LLY2019,LWX2019,BT2013}. The results have also been extended to general $m$-convex perfect conductors, for example, see \cite{L2020,KLY2015,ZH2021}. For nonlinear equations, we refer to \cite{CS2019,CS201902,G2012}. \subsection{Main results} Before listing the main results of this paper, we first fix some notation and parameterize the domain. By picking a coordinate system appropriately, let $D_{1}$ and $D_{2}$ be translations of two touching insulators as follows: \begin{align*} D_{1}:=D_{1}^{\ast}+(0',\varepsilon/2),\quad\mathrm{and}\; D_{2}:=D_{2}^{\ast}+(0',-\varepsilon/2), \end{align*} where $D_{i}^{\ast}$, $i=1,2,$ satisfy \begin{align*} \partial D_{1}^{\ast}\cap\partial D_{2}^{\ast}=\{0\}\subset\mathbb{R}^{d},\quad\mathrm{and}\; D_{i}^{\ast}\subset\{(x',x_{d})\in\mathbb{R}^{d}\,|\,(-1)^{i+1}x_{d}>0\},\quad i=1,2. \end{align*} Here and afterwards, we will use a superscript prime to denote $(d-1)$-dimensional variables and domains (for example, $x'$ and $B'$). 
Suppose that for some $R_{0}>0$ independent of $\varepsilon$, the portions of $\partial D_{1}$ and $\partial D_{2}$ near the origin are parameterized by two smooth curved surfaces $(x',\varepsilon/2+h_{1}(x'))$ and $(x',-\varepsilon/2+h_{2}(x'))$, respectively, where $h_{i}$, $i=1,2,$ satisfy, for $m\geq2$ and $\gamma>0$, \begin{enumerate} {\it\item[(\bf{H1})] $h_{1}(x')-h_{2}(x')=\lambda|x'|^{m}+O(|x'|^{m+\gamma}),\;\mathrm{if}\;x'\in B_{2R_{0}}',$ \item[(\bf{H2})] $|\nabla_{x'}h_{i}(x')|\leq \kappa_{1}|x'|^{m-1},\;\mathrm{if}\;x'\in B_{2R_{0}}',$ \item[(\bf{H3})] $\|h_{1}\|_{C^{2}(B'_{2R_{0}})}+\|h_{2}\|_{C^{2}(B'_{2R_{0}})}\leq \kappa_{2},$} \end{enumerate} where $\lambda$, $\kappa_{1}$ and $\kappa_{2}$ are three positive $\varepsilon$-independent constants. Here and in the following, the notation $O(A)$ implies that $|O(A)|\leq CA$ for some positive constant $C$ independent of $\varepsilon$. For $z'\in B'_{R_{0}},\,0<t\leq2R_{0}$, denote \begin{align*} \Omega_{t}(z'):=&\{x\in \mathbb{R}^{d}\,|\,-\varepsilon/2+h_{2}(x')<x_{d}<\varepsilon/2+h_{1}(x'),~|x'-z'|<{t}\}. \end{align*} We simplify the notation $\Omega_{t}(0')$ as $\Omega_{t}$ if $z'=0'$, and denote its top and bottom boundaries, respectively, by \begin{align*} \Gamma^{+}_{t}:=\{x\in\mathbb{R}^{d}\,|\,x_{d}=\varepsilon/2+h_{1}(x'),\;|x'|<t\}, \end{align*} and \begin{align*} \Gamma^{-}_{t}:=\{x\in\mathbb{R}^{d}\,|\,x_{d}=-\varepsilon/2+h_{2}(x'),\;|x'|<t\}. \end{align*} Using the standard elliptic estimates, we get \begin{align}\label{AZ001} \|u\|_{C^{1}(\Omega\setminus\Omega_{R_{0}/2})}\leq C. \end{align} Then it is sufficient to quantify the singular behavior of $|\nabla u|$ in the small gap $\Omega_{R_{0}/2}$. That is, consider \begin{align}\label{problem006} \begin{cases} \Delta u=0,&\hbox{in}\;\Omega_{2R_{0}},\\ \frac{\partial u}{\partial\nu}=0,&\mathrm{on}\;\Gamma^{\pm}_{2R_{0}},\\ \|u\|_{L^{\infty}(\Omega_{2R_{0}})}\leq1. 
\end{cases} \end{align} Write \begin{align}\label{degree} \alpha=\alpha(d,m):=\frac{-(d+m-3)+\sqrt{(d+m-3)^{2}+4(d-2)}}{2}. \end{align} In fact, $\alpha(d,m)$ is monotonically increasing in $d$ and monotonically decreasing in $m$. Moreover, we have \begin{align*} \alpha(d,m)=& \begin{cases} 1-\frac{m}{d}+O(\frac{1}{d^{2}}),&\text{as }d\rightarrow\infty,\;\text{for any given }m\geq2,\\ \frac{d-2}{m}+O(\frac{1}{m^{2}}),&\text{as }m\rightarrow\infty,\;\text{for any given }d\geq3. \end{cases} \end{align*} Unless otherwise stated, in the following we let $C$ be a constant which may differ at each occurrence, depending only on $d,m,\lambda,\gamma,R_{0},\kappa_{1},\kappa_{2}$, but not on $\varepsilon$. The first result is stated as follows. \begin{theorem}\label{thm001} Suppose that $D_{1},\,D_{2}\subset D\subseteq\mathbb{R}^{d}\,(d\geq3)$ are defined as above and that conditions $\mathrm{(}${\bf{H1}}$\mathrm{)}$--$\mathrm{(}${\bf{H3}}$\mathrm{)}$ hold. Let $u\in H^{1}(\Omega_{2R_{0}})$ be the solution of \eqref{problem006}. Then for a sufficiently small $\varepsilon>0$ and $x\in\Omega_{R_{0}/2}$, \begin{align}\label{U002} |\nabla u(x)|\leq C\|u\|_{L^{\infty}(\Omega_{2R_{0}})}(\varepsilon+|x'|^{m})^{\frac{\alpha-1}{m}}, \end{align} where $\alpha$ is given by \eqref{degree}. \end{theorem} \begin{remark} Previously in \cite{LY2021,LY202102}, Li and Yang established the pointwise upper bound on the gradient as follows: \begin{align}\label{DM001} |\nabla u(x)|\leq C(\varepsilon+|x'|^{m})^{-1/m+\beta},\quad\mathrm{in}\;\Omega_{R_{0}/2}, \end{align} for some inexplicit $\beta>0$. We further improve the upper bound in \eqref{DM001} by determining the explicit value of $\beta$, namely $\beta=\frac{\alpha}{m}$, as shown in Theorem \ref{thm001}. 
\end{remark} \begin{remark} It is worth emphasizing that, in contrast to the approach of \cite{DLY2021}, we apply the change of variables \eqref{SCALING} below to every line segment in the narrow region $\Omega_{R_{0}/2}$ throughout Section \ref{SEC02}. This allows us to avoid splitting the proof of Theorem \ref{thm001} into two cases and thus simplifies the proof procedure of \cite{DLY2021}. \end{remark} \begin{remark} The shape of inclusions considered in Theorem \ref{thm001} covers a class of axisymmetric inclusions as follows. To be precise, $\partial D_{1}$ and $\partial D_{2}$ are, respectively, expressed as \begin{align}\label{ellipsoids} |x'|^{m}+|x_{d}-\varepsilon/2-r_{1}|^{m}=r_{1}^{m},\quad\mathrm{and}\;|x'|^{m}+|x_{d}+\varepsilon/2+r_{2}|^{m}=r^{m}_{2}, \end{align} where $r_{1}$ and $r_{2}$ are two positive $\varepsilon$-independent constants. Using the Taylor expansion for \eqref{ellipsoids}, we have \begin{align*} h_{1}(x')-h_{2}(x')=\lambda_{0}|x'|^{m}+O(|x'|^{2m}),\quad\mathrm{in}\;\Omega_{r_{0}}, \end{align*} where $\lambda_{0}=\frac{1}{m}\left(\frac{1}{r_{1}^{m-1}}+\frac{1}{r_{2}^{m-1}}\right)$ and $0<r_{0}<\min\{r_{1},r_{2}\}$. \end{remark} In order to prove the optimality of the blow-up rate $\varepsilon^{\frac{\alpha-1}{m}}$ obtained in Theorem \ref{thm001}, we now consider the two inclusions in \eqref{ellipsoids} with $r_{1}=r_{2}=1$. The optimal lower bound on the gradient is established as follows. \begin{theorem}\label{thm002} For $d\geq3$, let $D:=B_{5}$ and $D_{i}$, $i=1,2$ be defined by \eqref{ellipsoids} with $r_{1}=r_{2}=1$. Let $u\in H^{1}(\Omega)$ be the solution of (\ref{con002}) with $\varphi=x_{1}$. Then for a sufficiently small $\varepsilon>0$, \begin{align}\label{ARQZ001} \|\nabla u\|_{L^{\infty}(\Omega\cap B_{2\sqrt[m]{\varepsilon}})}\geq\frac{1}{C}\varepsilon^{\frac{\alpha-1}{m}}, \end{align} where $\alpha$ is given by \eqref{degree}. 
\end{theorem} In the following, we will establish the optimal pointwise upper and lower bounds on the gradient in Sections \ref{SEC02} and \ref{SEC03}, respectively. \section{Pointwise upper bound on the gradient}\label{SEC02} Without loss of generality, we let $\lambda=1$ in condition ({\bf{H1}}). Denote \begin{align*} \delta:=\delta(|y'|)=\varepsilon+|y'|^{m},\quad |y'|\leq 2R_{0}. \end{align*} For any fixed $x_{0}=(x_{0}',x_{d})\in\Omega_{R_{0}/2}$, write \begin{align*} \delta_{0}:=\delta(|x_{0}'|)=\varepsilon+|x_{0}'|^{m}. \end{align*} Define a cylinder as follows: for $s,t>0$ and $x'\in B'_{R_{0}}$, \begin{align*} Q_{s,t}(x'):=\{y=(y',y_{d})\in\mathbb{R}^{d}\,|\,|y'-x'|<s,\,|y_{d}|<t\}. \end{align*} For simplicity, let $Q_{s,t}:=Q_{s,t}(0')$ if $x'=0'$. Using a change of variables for $\Omega_{2R_{0}}$ as follows: \begin{align}\label{SCALING} \begin{cases} y'=x',\\ y_{d}=2\delta_{0}\left(\frac{x_{d}-h_{2}(x')+\varepsilon/2}{\varepsilon+h_{1}(x')-h_{2}(x')}-\frac{1}{2}\right), \end{cases} \end{align} we derive a cylinder $Q_{2R_{0},\delta_{0}}$ of thickness $2\delta_{0}$. Write $v(y)=u(x)$. 
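As a quick consistency check of \eqref{SCALING} (an elementary computation recorded for the reader's convenience), on the bottom boundary $x_{d}=-\varepsilon/2+h_{2}(x')$ we have
\begin{align*}
y_{d}=2\delta_{0}\left(\frac{0}{\varepsilon+h_{1}(x')-h_{2}(x')}-\frac{1}{2}\right)=-\delta_{0},
\end{align*}
while on the top boundary $x_{d}=\varepsilon/2+h_{1}(x')$ the fraction equals one and $y_{d}=\delta_{0}$. Hence \eqref{SCALING} maps $\Gamma^{\mp}_{2R_{0}}$ onto the flat boundaries $\{y_{d}=\mp\delta_{0}\}$, and $\Omega_{2R_{0}}$ onto $Q_{2R_{0},\delta_{0}}$.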
In light of equation \eqref{problem006}, we see that $v$ is a solution of \begin{align}\label{ZKM001} \begin{cases} -\partial_{i}(a_{ij}(y)\partial_{j}v(y))=0,&\mathrm{in}\;Q_{2R_{0},\delta_{0}},\\ a_{dj}(y)\partial_{j}v(y)=0,&\mathrm{on}\;\{y_{d}=\pm\delta_{0}\}, \end{cases} \end{align} with $\|v\|_{L^{\infty}(Q_{R_{0},\delta_{0}})}\leq1$, where \begin{align*} (a_{ij}(y))=&\frac{2\delta_{0}(\partial_{x}y)(\partial_{x}y)^{t}}{\det(\partial_{x}y)}\\ =& \begin{pmatrix}\delta&0&\cdots&0&a_{1d} \\ 0&\delta&\cdots&0&a_{2d}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\0&0&\cdots&\delta&a_{d-1\,d}\\ a_{d1}&a_{d2}&\cdots&a_{d\,d-1}&\frac{4\delta_{0}^{2}+\sum^{d-1}_{i=1}a_{id}^{2}}{\delta} \end{pmatrix}+\begin{pmatrix} e^{1}&0&\cdots&0 \\ 0&e^{2}&\cdots&0 \\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&e^{d} \end{pmatrix}, \end{align*} whose elements, by conditions ({\bf{H1}}) and ({\bf{H2}}), satisfy, for $i=1,...,d-1,$ \begin{align}\label{K001} |a_{id}|=|a_{di}|=|-2\delta_{0}\partial_{i}h_{2}(y')-(y_{d}+\delta_{0})\partial_{i}(h_{1}-h_{2})(y')|\leq C\delta_{0}|y'|^{m-1}, \end{align} and \begin{align}\label{K002} |e^{i}|=|O(|y'|^{m+\gamma})|\leq C|y'|^{m+\gamma},\quad |e^{d}|\leq C\delta_{0}^{2}|y'|^{\gamma}\delta^{-1}. \end{align} In light of $\frac{\partial u}{\partial\nu}=0$ on $\Gamma^{\pm}_{2R_{0}}$, it follows from ({\bf{H2}}) and \eqref{DM001} that \begin{align*} |\partial_{d}u(x)|\leq C|x'|^{m-1}|\nabla_{x'}u|\leq C|x'|^{m-2},\quad\mathrm{on}\;\Gamma^{\pm}_{2R_{0}}. \end{align*} This, in combination with \eqref{AZ001}, the harmonicity of $\partial_{d}u$ and the maximum principle, yields that \begin{align}\label{U001} |\partial_{d}u|\leq C,\quad\mathrm{in}\;\Omega_{2R_{0}}, \end{align} and then \begin{align}\label{AZ005} |\partial_{d} v|\leq C\delta_{0}^{-1}\delta,\quad\mathrm{in}\;Q_{2R_{0},\delta_{0}}. \end{align} Let \begin{align}\label{KL01} \bar{v}(y'):=\fint^{\delta_{0}}_{-\delta_{0}}v(y',y_{d})\,dy_{d}. 
\end{align} Then $\bar{v}$ verifies \begin{align}\label{ZK003} \mathrm{div}(\delta\nabla\bar{v})=\mathrm{div}F,\quad\mathrm{in}\;B'_{2R_{0}}, \end{align} where $F=(F_{1},...,F_{d-1})$, $F_{i}:=-\overline{a_{id}\partial_{d}v}-e^{i}\partial_{i}\bar{v}$ for $i=1,...,d-1$, $\overline{a_{id}\partial_{d}v}$ represents the average of $a_{id}\partial_{d}v$ with respect to $y_{d}$ on $(-\delta_{0},\delta_{0})$. From \eqref{DM001} and \eqref{K001}--\eqref{AZ005}, we obtain \begin{align}\label{QL001} |F_{i}|\leq C\left(|y'|^{m-1}\delta+|y'|^{m+\gamma}\delta^{-1/m}\right),\;\, i=1,...,d-1,\quad\mathrm{in}\;B'_{2R_{0}}. \end{align} For $\gamma,\sigma\in\mathbb{R}$, define a norm as follows: \begin{align}\label{QL002} \|F\|_{\varepsilon,\gamma,\sigma,B_{R}'}:=\sup\limits_{y'\in B_{R}'}|y'|^{-\gamma}(\varepsilon+|y'|^{m})^{\sigma-1}|F(y')|,\;\,\mathrm{with}\;0<R\leq2R_{0}. \end{align} For any $0<R\leq 2R_{0}$, we decompose the solution $\bar{v}$ of \eqref{ZK003} as follows: \begin{align}\label{ADAD001} \bar{v}:=\bar{v}_{1}+\bar{v}_{2},\quad\mathrm{in}\;B_{R}', \end{align} where $\bar{v}_{i},i=1,2,$ satisfy \begin{align}\label{de001} \begin{cases} \mathrm{div}(\delta\nabla\bar{v}_{1})=0,& \mathrm{in}\;B_{R}',\\ \bar{v}_{1}=\bar{v},&\mathrm{on}\;\partial B_{R}', \end{cases} \end{align} and \begin{align}\label{de002} \begin{cases} \mathrm{div}(\delta\nabla\bar{v}_{2})=\mathrm{div}F,&\mathrm{in}\;B_{R}',\\ \bar{v}_{2}=0,&\mathrm{on}\;\partial B_{R}', \end{cases} \end{align} respectively. For $\bar{v}_{1}$, we have \begin{lemma}\label{lemma001} For $d\geq 3$, let $\bar{v}_{1}$ be a solution of \eqref{de001}. Then for any $0<\rho<R$, \begin{align*} \left(\fint_{\partial B_{\rho}'}|\bar{v}_{1}-\bar{v}_{1}(0')|^{2}\right)^{\frac{1}{2}}\leq\left(\frac{\rho}{R}\right)^{\alpha}\left(\fint_{\partial B_{R}'}|\bar{v}_{1}-\bar{v}_{1}(0')|^{2}\right)^{\frac{1}{2}}, \end{align*} where $\alpha$ is defined by \eqref{degree}. 
\end{lemma} \begin{proof} Observe that $\bar{v}_{1}\in C^{\infty}(B_{R}')$ by using the standard elliptic theory. Without loss of generality, let $\bar{v}_{1}(0')=0$ and $R=1$. Write $y'=(r,\xi)\in(0,1)\times\mathbb{S}^{d-2}$. Then $\bar{v}_{1}$ verifies \begin{align*} \partial_{rr}\bar{v}_{1}+\left(\frac{d-2}{r}+\frac{mr^{m-1}}{\varepsilon+r^{m}}\right)\partial_{r}\bar{v}_{1}+\frac{1}{r^{2}}\Delta_{\mathbb{S}^{d-2}}\bar{v}_{1}=0,\quad\mathrm{in}\;B_{1}'\setminus\{0'\}. \end{align*} Adopt the following decomposition \begin{align}\label{FK001} \bar{v}_{1}(y')=\sum^{\infty}_{k=1}\sum^{N(k)}_{l=1}V_{k,l}(r)Y_{k,l}(\xi),\quad y'\in B_{1}'\setminus\{0'\}, \end{align} where $\{Y_{k,l}\}_{k,l}$ is an orthonormal basis of $L^{2}(\mathbb{S}^{d-2})$ and every element $Y_{k,l}$ denotes a $k$-th degree spherical harmonics satisfying that \begin{align*} -\Delta_{\mathbb{S}^{d-2}}Y_{k,l}=k(k+d-3)Y_{k,l}. \end{align*} Therefore, $V_{k,l}(r)\in C^{2}(0,1)$ is determined by \begin{align*} V_{k,l}(r)=\int_{\mathbb{S}^{d-2}}\bar{v}_{1}(y')Y_{k,l}(\xi)d\xi, \end{align*} and verifies \begin{align*} L_{k}V_{k,l}:=\partial_{rr}V_{k,l}(r)+\left(\frac{d-2}{r}+\frac{mr^{m-1}}{\varepsilon+r^{m}}\right)\partial_{r}V_{k,l}(r)-\frac{k(k+d-3)}{r^{2}}V_{k,l}(r)=0, \end{align*} for $r\in (0,1)$, $k\in\mathbb{N}$, $l=1,2,...,N(k)$. For every $c\in\mathbb{R}$, a direct calculation gives \begin{align*} L_{k}r^{c}=r^{c-2}\left[c^{2}+\Big(d-3+\frac{mr^{m}}{\varepsilon+r^{m}}\Big)c-k(k+d-3)\right]. \end{align*} It then follows that for a sufficiently small $c>0$, \begin{align}\label{W001} L_{k}r^{-c}\leq0,\quad\mathrm{and}\; L_{k}r^{\alpha_{k}}\leq0,\quad\mathrm{in}\;(0,1), \end{align} where $\alpha_{k}$ is given by \begin{align*} \alpha_{k}:=\frac{-(d+m-3)+\sqrt{(d+m-3)^{2}+4k(k+d-3)}}{2},\quad\text{for}\;k\in\mathbb{N}. \end{align*} Especially when $k=1$, we have $\alpha_{1}=\alpha$. 
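For completeness, the second inequality in \eqref{W001} can be seen by taking $c=\alpha_{k}$ in the identity above: since $\alpha_{k}$ solves $c^{2}+(d+m-3)c-k(k+d-3)=0$, we obtain
\begin{align*}
L_{k}r^{\alpha_{k}}=r^{\alpha_{k}-2}\alpha_{k}\left(\frac{mr^{m}}{\varepsilon+r^{m}}-m\right)\leq0,\quad\mathrm{in}\;(0,1),
\end{align*}
because $\alpha_{k}>0$ and $\frac{mr^{m}}{\varepsilon+r^{m}}\leq m$. The first inequality follows similarly, since for sufficiently small $c>0$ we have $c^{2}-\big(d-3+\frac{mr^{m}}{\varepsilon+r^{m}}\big)c-k(k+d-3)<0$ for all $k\geq1$.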
Then we obtain from \eqref{W001} that for every $\tau>0$, \begin{align*} L_{k}(\pm V_{k,l}(r)-\tau r^{-c}-|V_{k,l}(1)|r^{\alpha_{k}})\geq0,\quad\mathrm{in}\;(0,1). \end{align*} Note that $V_{k,l}$ remains bounded in $(0,1)$, since $\bar{v}_{1}\in L^{\infty}(B_{1}')$. Hence, \begin{align*} \pm V_{k,l}(r)-\tau r^{-c}-|V_{k,l}(1)|r^{\alpha_{k}}<0,\quad\mathrm{as}\;r\searrow0,\;\mathrm{or}\;r=1, \end{align*} which, together with the maximum principle, yields that \begin{align*} |V_{k,l}(r)|\leq\tau r^{-c}+|V_{k,l}(1)|r^{\alpha_{k}},\quad\mathrm{in}\;(0,1). \end{align*} Letting $\tau\rightarrow0$, we get \begin{align*} |V_{k,l}(r)|\leq|V_{k,l}(1)|r^{\alpha_{k}},\quad\mathrm{for}\;r\in(0,1). \end{align*} This, in combination with \eqref{FK001}, yields that \begin{align*} \fint_{\partial B_{\rho}'}|\bar{v}_{1}|^{2}=&\sum^{\infty}_{k=1}\sum^{N(k)}_{l=1}|V_{k,l}(\rho)|^{2}\leq\rho^{2\alpha}\sum^{\infty}_{k=1}\sum^{N(k)}_{l=1}|V_{k,l}(1)|^{2}=\rho^{2\alpha}\fint_{\partial B_{1}'}|\bar{v}_{1}|^{2}. \end{align*} \end{proof} With regard to $\bar{v}_{2}$, we make use of Moser's iteration argument to obtain the following result. \begin{lemma}\label{lemma002} For $d\geq 3$, $1+\gamma-m\sigma>0$, let $\bar{v}_{2}$ be a solution of \eqref{de002} with $R=1$. Suppose that $F\in L^{\infty}(B_{1}')$ satisfies $\|F\|_{\varepsilon,\gamma,\sigma,B_{1}'}<\infty$. Then, \begin{align*} \|\bar{v}_{2}\|_{L^{\infty}(B_{1}')}\leq C\|F\|_{\varepsilon,\gamma,\sigma,B_{1}'}, \end{align*} where $C$ is a positive constant depending only on $d,m,\gamma,\sigma$, but not on $\varepsilon$. \end{lemma} \begin{proof} For simplicity, let $\|F\|_{\varepsilon,\gamma,\sigma,B_{1}'}=1$. Write $r=|y'|$. In view of \eqref{de002}, we see that $\bar{v}_{2}$ satisfies \begin{align}\label{AWA001} \Delta \bar{v}_{2}+mr^{m-1}\delta^{-1}\partial_{r}\bar{v}_{2}=\partial_{i}(F_{i}\delta^{-1})+mF_{i}y_{i}|y'|^{m-2}\delta^{-2},\quad\mathrm{in}\;B_{1}'. 
\end{align} From \eqref{QL002}, we have \begin{align*} |F_{i}\delta^{-1}|\leq r^{\gamma-m\sigma},\quad |F_{i}y_{i}|y'|^{m-2}\delta^{-2}|\leq r^{\gamma-m\sigma-1}. \end{align*} Multiplying equation \eqref{AWA001} by $-|\bar{v}_{2}|^{p-2}\bar{v}_{2}$ with $p\geq2$, it follows from integration by parts that \begin{align*} &(p-1)\int_{B_{1}'}|\nabla\bar{v}_{2}|^{2}|\bar{v}_{2}|^{p-2}\notag\\ &\leq C(p-1)\int_{B'_{1}}|\nabla\bar{v}_{2}||\bar{v}_{2}|^{p-2}r^{\gamma-m\sigma}+C\int_{B_{1}'}|\bar{v}_{2}|^{p-1}r^{\gamma-m\sigma-1}, \end{align*} where we utilized the fact that \begin{align*} \int_{B'_{1}}mr^{m-1}\delta^{-1}\partial_{r}\bar{v}_{2}(|\bar{v}_{2}|^{p-2}\bar{v}_{2})=&\frac{1}{p}\int_{\mathbb{S}^{d-2}}\int^{1}_{0}mr^{d+m-3}\delta^{-1}\partial_{r}|\bar{v}_{2}|^{p}drd\theta\notag\\ =&-\frac{1}{p}\int_{\mathbb{S}^{d-2}}\int_{0}^{1}\partial_{r}(mr^{d+m-3}\delta^{-1})|\bar{v}_{2}|^{p}drd\theta\leq0. \end{align*} Then in view of $1+\gamma-m\sigma>0$, it follows from H\"{o}lder's inequality that \begin{align*} &(p-1)\int_{B_{1}'}|\nabla\bar{v}_{2}|^{2}|\bar{v}_{2}|^{p-2}\notag\\ &\leq C(p-1)\||\nabla\bar{v}_{2}||\bar{v}_{2}|^{\frac{p-2}{2}}\|_{L^{2}(B_{1}')}\|\bar{v}_{2}^{p-2}\|^{1/2}_{L^{\frac{d-1+2\tau}{d-3+2\tau}}(B_{1}')}\|r^{2(\gamma-m\sigma)}\|_{L^{\frac{d-1}{2}+\tau}(B'_{1})}^{1/2}\notag\\ &\quad\,+C\|\bar{v}_{2}^{p-1}\|_{L^{\frac{d-1+2\tau}{d-3+2\tau}}(B_{1}')}\|r^{\gamma-m\sigma-1}\|_{L^{\frac{d-1}{2}+\tau}(B_{1}')}\notag\\ &\leq C(p-1)\||\nabla\bar{v}_{2}||\bar{v}_{2}|^{\frac{p-2}{2}}\|_{L^{2}(B_{1}')}\|\bar{v}_{2}^{p-2}\|^{1/2}_{L^{\frac{d-1+2\tau}{d-3+2\tau}}(B_{1}')}+C\|\bar{v}_{2}^{p-1}\|_{L^{\frac{d-1+2\tau}{d-3+2\tau}}(B_{1}')}, \end{align*} where $\tau>0$ is a sufficiently small constant such that \begin{align*} &\|r^{2(\gamma-m\sigma)}\|_{L^{\frac{d-1}{2}+\tau}(B'_{1})}+\|r^{\gamma-m\sigma-1}\|_{L^{\frac{d-1}{2}+\tau}(B_{1}')}\leq C. 
\end{align*} Then using H\"{o}lder's inequality and Young's inequality, we get \begin{align}\label{QAZ001} \frac{2(p-1)}{p^{2}}\int_{B'_{1}}\big|\nabla|\bar{v}_{2}|^{\frac{p}{2}}\big|^{2}=&\frac{p-1}{2}\int_{B_{1}'}|\nabla\bar{v}_{2}|^{2}|\bar{v}_{2}|^{p-2}\notag\\ \leq&\max_{1\leq i\leq2}Cp^{i-1}\|\bar{v}_{2}\|^{p-i}_{L^{\frac{(d-1+2\tau)p}{d-3+2\tau}}(B_{1}')}. \end{align} In particular, if we pick $p=2$ in \eqref{QAZ001}, then we deduce from the Sobolev-Poincar\'{e} inequality and Young's inequality that \begin{align}\label{E001} \|\bar{v}_{2}\|_{L^{\frac{2(d-1+2\tau)}{d-3+2\tau}}(B_{1}')}\leq C. \end{align} Utilizing the Sobolev-Poincar\'{e} inequality and Young's inequality again for \eqref{QAZ001} with $p\geq2$, we obtain \begin{align}\label{E002} \|\bar{v}_{2}\|_{L^{tp}(B_{1}')}\leq&\max_{1\leq i\leq2}(Cp^{i})^{1/p}\left(\frac{p-i}{p}\|\bar{v}_{2}\|_{L^{\frac{(d-1+2\tau)p}{d-3+2\tau}}(B_{1}')}+\frac{i}{p}\right)\notag\\ \leq& (Cp^{2})^{1/p}\left(\|\bar{v}_{2}\|_{L^{\frac{(d-1+2\tau)p}{d-3+2\tau}}(B_{1}')}+\frac{2}{p}\right), \end{align} where $t:=t(d)$ is given by \begin{align*} \begin{cases} t>\frac{d-1+2\tau}{d-3+2\tau},&d=3,\\ t=\frac{d-1}{d-3},&d>3. \end{cases} \end{align*} Set \begin{align*} p_{k}=2\left(\frac{(d-3+2\tau)t}{d-1+2\tau}\right)^{k}\frac{d-1+2\tau}{d-3+2\tau},\quad k\geq0,\,d\geq3. \end{align*} Hence by iteration with \eqref{E001}--\eqref{E002}, we get \begin{align*} \|\bar{v}_{2}\|_{L^{p_{k}}(B_{1}')}\leq&\prod^{k-1}_{i=0}(Cp_{i}^{2})^{1/p_{i}}\|\bar{v}_{2}\|_{L^{p_{0}}(B_{1}')}+\sum^{k-1}_{i=0}\prod^{k-1-i}_{j=0}(Cp_{k-1-j}^{2})^{1/p_{k-1-j}}\frac{2}{p_{i}}\notag\\ \leq&C\|\bar{v}_{2}\|_{L^{\frac{2(d-1+2\tau)}{d-3+2\tau}}(B_{1}')}+C\leq C, \end{align*} where $C=C(d,m,\gamma,\sigma)$ is independent of $k$. Then Lemma \ref{lemma002} is proved by letting $k\rightarrow\infty$. 
\end{proof} Combining Lemma \ref{lemma001} and \ref{lemma002}, we obtain \begin{prop}\label{prop001} For $d\geq 3$, $\sigma\geq0$, $1+\gamma-m\sigma>0$, $1+\gamma-m\sigma\neq\alpha$, let $\bar{v}$ be a solution of \eqref{ZK003} with $\|F\|_{\varepsilon,\gamma,\sigma,B_{R_{0}}'}<\infty$. Then for any $R\in(0,R_{0})$, \begin{align*} \left(\fint_{\partial B_{R}'}|\bar{v}-\bar{v}(0')|^{2}\right)^{1/2}\leq C\|F\|_{\varepsilon,\gamma,\sigma,B_{R_{0}}'}R^{\tilde{\alpha}}, \end{align*} where $\tilde{\alpha}:=\min\{\alpha,1+\gamma-m\sigma\}$ with $\alpha$ given by \eqref{degree}. \end{prop} \begin{proof} Without loss of generality, set $\bar{v}(0')=0$ and $\|F\|_{\varepsilon,\gamma,\sigma,B_{R_{0}}'}=1$. For $0<\rho\leq R\leq R_{0}$, denote \begin{align*} \omega(\rho):=\bigg(\fint_{\partial B_{\rho}'}|\bar{v}|^{2}\bigg)^{\frac{1}{2}}. \end{align*} Write $\tilde{v}_{2}(y'):=\bar{v}_{2}(Ry')$. Then in light of \eqref{de002}, we see that $\tilde{v}_{2}$ verifies \begin{align*} \mathrm{div}\big((R^{-m}\varepsilon+|y'|^{m})\nabla\tilde{v}_{2}\big)=\mathrm{div}\tilde{F},\quad\mathrm{in}\;B_{1}', \end{align*} where $\tilde{F}(y'):=R^{-(m-1)}F(Ry')$ satisfies \begin{align*} \|\tilde{F}\|_{R^{-m}\varepsilon,\gamma,\sigma,B_{1}'}=R^{1+\gamma-m\sigma}\|F\|_{\varepsilon,\gamma,\sigma,B_{R}'}. \end{align*} Then using Lemma \ref{lemma002} for $\tilde{v}_{2}$ with $R^{-m}\varepsilon$ substituting for $\varepsilon$, we get \begin{align}\label{FNM001} \|\bar{v}_{2}\|_{L^{\infty}(B_{R}')}\leq CR^{1+\gamma-m\sigma}. 
\end{align} Recalling decomposition \eqref{ADAD001}, it follows from Lemma \ref{lemma001} and \eqref{FNM001} that \begin{align}\label{GAZ001} \omega(\rho)\leq&\bigg(\fint_{\partial B_{\rho}'}|\bar{v}_{1}-\bar{v}_{1}(0')|^{2}\bigg)^{\frac{1}{2}}+\bigg(\fint_{\partial B_{\rho}'}|\bar{v}_{2}-\bar{v}_{2}(0')|^{2}\bigg)^{\frac{1}{2}}\notag\\ \leq&\left(\frac{\rho}{R}\right)^{\alpha}\bigg(\fint_{\partial B_{R}'}|\bar{v}_{1}|^{2}\bigg)^{\frac{1}{2}}+\left(\frac{\rho}{R}\right)^{\alpha}|\bar{v}_{1}(0')|+2\|\bar{v}_{2}\|_{L^{\infty}(B_{R}')}\notag\\ \leq&\left(\frac{\rho}{R}\right)^{\alpha}\omega(R)+CR^{1+\gamma-m\sigma}, \end{align} where we also used the fact that $\bar{v}=\bar{v}_{1}$ on $\partial B_{R}'$ and $|\bar{v}_{1}(0')|=|\bar{v}_{2}(0')|$ by virtue of $\bar{v}(0')=\bar{v}_{1}(0')+\bar{v}_{2}(0')=0$. For a positive integer $k$ and $i=0,...,k-1$, let $\rho=2^{-i-1}R_{0}$ and $R=2^{-i}R_{0}$ in \eqref{GAZ001}. Since $1+\gamma-m\sigma\neq\alpha$, after $k$ iterations it follows that \begin{align*} \omega(2^{-k}R_{0})\leq&2^{-k\alpha}\omega(R_{0})+C\sum^{k}_{i=1}2^{-(k-i)\alpha}(2^{1-i}R_{0})^{1+\gamma-m\sigma}\notag\\ \leq&2^{-k\alpha}\omega(R_{0})+C2^{-k\alpha}R_{0}^{1+\gamma-m\sigma}\frac{1-2^{k(\alpha-1-\gamma+m\sigma)}}{1-2^{\alpha-1-\gamma+m\sigma}}\notag\\ \leq&2^{-k\tilde{\alpha}}\big(\omega(R_{0})+CR_{0}^{1+\gamma-m\sigma}\big). \end{align*} For every $\rho\in(0,R_{0})$, there exists some integer $k$ such that $\rho\in(2^{-k-1}R_{0},2^{-k}R_{0}]$. Hence we have \begin{align*} \omega(\rho)\leq C\rho^{\tilde{\alpha}},\quad\mathrm{for}\;\mathrm{any}\;\rho\in(0,R_{0}). \end{align*} The proof is complete. \end{proof} We are now ready to prove Theorem \ref{thm001}. \begin{proof}[Proof of Theorem \ref{thm001}] To begin with, we point out again that by making use of the change of variables in \eqref{SCALING} for every line segment in the thin gap $\Omega_{R_{0}/2}$, we can achieve a unified proof of Theorem \ref{thm001}. 
That is, we no longer need to split the proof of Theorem \ref{thm001} into two cases, which simplifies the corresponding proof procedure in \cite{DLY2021}. Suppose that $\lambda=1$, $u(0)=0$ and $\|u\|_{L^{\infty}(\Omega_{R_{0}})}=1$ without loss of generality. Let $v$ and $\bar{v}$ be defined by \eqref{ZKM001} and \eqref{KL01}--\eqref{ZK003}, respectively. From \eqref{QL001}, we obtain \begin{align*} \|F\|_{\varepsilon,\gamma,\sigma_{0},B_{R_{0}}'}<\infty,\quad\mathrm{with}\;\sigma_{0}=\frac{1}{m}, \end{align*} decreasing $\gamma$ if necessary. Using \eqref{AZ005}, we have \begin{align}\label{LZMWN001A} |v(y',y_{d})-\bar{v}(y')|\leq2\delta_{0}\max_{y_{d}\in(-\delta_{0},\delta_{0})}|\partial_{d}v(y',y_{d})|\leq C\delta,\quad\mathrm{in}\;Q_{R_{0},\delta_{0}}. \end{align} From Proposition \ref{prop001}, we have \begin{align*} \int_{B'_{2c_{0}\delta^{1/m}_{0}}(x_{0}')}|\bar{v}-\bar{v}(0')|^{2}\leq \int_{B'_{|x_{0}'|+2c_{0}\delta^{1/m}_{0}}(0')}|\bar{v}-\bar{v}(0')|^{2}\leq C\delta^{\frac{2\tilde{\alpha}+d-1}{m}}_{0}. \end{align*} This, in combination with \eqref{LZMWN001A}, yields that \begin{align*} &\fint_{Q_{2c_{0}\delta_{0}^{1/m},\delta_{0}}(x_{0}')}|v-\bar{v}(0')|^{2}dy\notag\\ &\leq\fint_{Q_{2c_{0}\delta_{0}^{1/m},\delta_{0}}(x_{0}')}2\big(|v-\bar{v}|^{2}+|\bar{v}-\bar{v}(0')|^{2})dy\leq C\delta_{0}^{\frac{2\tilde{\alpha}}{m}}, \end{align*} where $c_{0}:=2^{-(m+1)}m^{-1}$. Let \begin{align*} \tilde{v}(y)=&v(\delta_{0}^{1/m}y'+x_{0}',\delta_{0}^{1/m}y_{d})-\bar{v}(0'),\notag\\ \tilde{a}_{ij}(y)=&\delta_{0}^{-1}a_{ij}(\delta_{0}^{1/m}y'+x_{0}',\delta_{0}^{1/m}y_{d}). \end{align*} Since $Q_{2c_{0}\delta_{0}^{1/m},\delta_{0}}(x_{0}')\subset Q_{2R_{0},\delta_{0}}$ for $x_{0}'\in B_{R_{0}/2}'$, $\tilde{v}$ solves \begin{align*} \begin{cases} -\partial_{i}(\tilde{a}_{ij}(y)\partial_{j}\tilde{v}(y))=0,&\mathrm{in}\; Q_{2c_{0},\delta_{0}^{1-1/m}},\\ \tilde{a}_{dj}(y)\partial_{j}\tilde{v}(y)=0,&\mathrm{on}\;\{y_{d}=\pm\delta_{0}^{1-1/m}\}.
\end{cases} \end{align*} Observe that for $x=(x',x_{d})\in\Omega_{s}(x_{0}')$, $0<s\leq 2c_{0}\delta_{0}^{1/m}$, $c_{0}=2^{-(m+1)}m^{-1}$, we deduce \begin{align*} |\delta(x')-\delta(x_{0}')|=&||x'|^{m}-|x_{0}'|^{m}|\leq m|x'_{\theta}|^{m-1}|x'-x_{0}'|\notag\\ \leq&2^{m-2}ms(s^{m-1}+|x_{0}'|^{m-1})\leq\frac{\delta(x_{0}')}{2}, \end{align*} where $x_{\theta}'$ is some point between $x_{0}'$ and $x'$. Then, we have \begin{align}\label{QWN001} \frac{1}{2}\delta(x_{0}')\leq\delta(x')\leq\frac{3}{2}\delta(x_{0}'),\quad\mathrm{in}\;\Omega_{s}(x_{0}'). \end{align} Note that \eqref{QWN001} gives a precise characterization of the comparability of the lengths of the line segments in the small narrow region $\Omega_{s}(x_{0}')$, which was not presented in the previous work \cite{DLY2021}. Using \eqref{QWN001}, we obtain that the coefficient matrix $\tilde{a}:=(\tilde{a}_{ij})$ satisfies \begin{align*} \frac{I}{C}\leq\tilde{a}\leq CI,\quad\mathrm{and}\;\|\tilde{a}\|_{C^{\mu}(Q_{2c_{0},\delta_{0}^{1-1/m}})}\leq C,\quad\mathrm{for}\;\mathrm{any}\;\mu\in(0,1]. \end{align*} For any integer $l$, denote \begin{align*} S_{l}:=\{y\in\mathbb{R}^{d}\,|\,|y'|<2c_{0},\;(2l-1)\delta_{0}^{1-1/m}<y_{d}<(2l+1)\delta_{0}^{1-1/m}\}, \end{align*} and \begin{align*} S:=\{y\in\mathbb{R}^{d}\,|\,|y'|<2c_{0},\,|y_{d}|<2c_{0}\}. \end{align*} In particular, $S_{0}=Q_{2c_{0},\delta_{0}^{1-1/m}}$. Introduce a new function as follows: \begin{align*} \hat{v}(y):=\tilde{v}(y',(-1)^{l}(y_{d}-2l\delta_{0}^{1-1/m})),\quad\mathrm{in}\;S_{l},\,l\in\mathbb{Z}, \end{align*} which is generated by the even extension of $\tilde{v}$ with respect to $y_{d}=\delta_{0}^{1-1/m}$, followed by the periodic extension with period $4\delta_{0}^{1-1/m}$.
The corresponding coefficients become, for $k=1,\ldots,d-1$ and any $l\in\mathbb{Z}$, \begin{align*} \hat{a}_{dk}(y)=\hat{a}_{kd}(y):=(-1)^{l}\tilde{a}_{kd}(y',(-1)^{l}(y_{d}-2l\delta_{0}^{1-1/m})),\quad\mathrm{in}\;S_{l}, \end{align*} and \begin{align*} \hat{a}_{ij}(y):=\tilde{a}_{ij}(y',(-1)^{l}(y_{d}-2l\delta_{0}^{1-1/m})),\quad\mathrm{in}\;S_{l}, \end{align*} for the other indices. Therefore, $\hat{v}$ and $\hat{a}_{ij}$ are defined in $Q_{2,\infty}$. From the conormal boundary conditions, we know that $\hat{v}$ verifies \begin{align*} \partial_{i}(\hat{a}_{ij}\partial_{j}\hat{v})=0,\quad\mathrm{in}\;S. \end{align*} Applying Proposition 4.1 of \cite{LN2003} and Lemma 2.1 of \cite{LY202102}, we get \begin{align*} \|\nabla\hat{v}\|_{L^{\infty}(\frac{1}{2}S)}\leq C\|\hat{v}\|_{L^{2}(S)}\leq C\delta_{0}^{\frac{\tilde{\alpha}}{m}}. \end{align*} Returning to $u$, we obtain that for $x_{0}=(x_{0}',x_{d})\in\Omega_{R_{0}/2}$, \begin{align*} |\nabla u(x_{0})|\leq\|\nabla u\|_{L^{\infty}(\Omega_{c_{0}\delta^{1/m}}(x_{0}'))}\leq C\delta_{0}^{\frac{\tilde{\alpha}-1}{m}}=C(\varepsilon+|x_{0}'|^{m})^{\frac{\tilde{\alpha}-1}{m}}. \end{align*} We thus improve the previous upper bound $|\nabla u(x)|\leq C(\varepsilon+|x'|^{m})^{-\sigma_{0}}$ to $|\nabla u(x)|\leq C(\varepsilon+|x'|^{m})^{\frac{\tilde{\alpha}-1}{m}},$ where $\frac{\tilde{\alpha}-1}{m}=\min\{\frac{\alpha-1}{m},-\sigma_{0}+\frac{\gamma}{m}\}.$ If $1+\gamma-m\sigma_{0}>\alpha$, then the proof is finished. Otherwise, if $1+\gamma-m\sigma_{0}<\alpha$, then pick $\sigma_{1}=\sigma_{0}-\frac{\gamma}{m}$ and repeat the above argument. We may decrease $\gamma$ if necessary so that $\frac{\alpha-1}{m}\neq-\sigma_{0}+k\frac{\gamma}{m}$ for any $k\geq1$. By repeating the argument finitely many times, we complete the proof of Theorem \ref{thm001}. \end{proof} \section{Optimal lower bound on the gradient}\label{SEC03} Denote \begin{align}\label{lam} \lambda_{0}:=\frac{2}{m}.
\end{align} We start by proving the following lemma for the purpose of establishing the optimal lower bound on the gradient. \begin{lemma}\label{lemma006} For $\varepsilon>0$, there exists a unique solution $g\in L^{\infty}((0,1))\cap C^{\infty}((0,1])$ of \begin{align}\label{AMR001} Lg:=\partial_{rr}g(r)+\left(\frac{d-2}{r}+\frac{m\lambda_{0}r^{m-1}}{\varepsilon+\lambda_{0}r^{m}}\right)\partial_{r}g(r)-\frac{d-2}{r^{2}}g(r)=0,\quad0<r<1, \end{align} such that $g(1)=1$. Furthermore, $g\in C([0,1])$ is strictly increasing with $g(0)=0$, and satisfies, for $\beta\geq\frac{2\alpha^{2}+\alpha(d+m-3)}{2\alpha+d-3}$, \begin{align}\label{DAM001} \min\{r,\lambda_{0}^{\frac{\beta-\alpha}{m}}r^{\beta}(\varepsilon+\lambda_{0}r^{m})^{\frac{\alpha-\beta}{m}}\}<g(r)<r^{\alpha},\quad\mathrm{in}\;(0,1), \end{align} and \begin{align}\label{DAM002} g(r)<C_{0}(\varepsilon)r,\quad\mathrm{in}\;(0,r_{0}(\varepsilon)), \end{align} where $\alpha$ is given by \eqref{degree}, $\lambda_{0}$ is defined in \eqref{lam}, and \begin{align}\label{WZQ001} r_{0}(\varepsilon)=\left(\frac{a_{0}(b_{0}-1)(d+b_{0}-2)}{m\lambda_{0}(1+a_{0}b_{0})}\varepsilon\right)^{\frac{1}{m+1-b_{0}}},\quad C_{0}(\varepsilon)=\frac{(r_{0}(\varepsilon))^{\alpha-1}}{1-a_{0}(r_{0}(\varepsilon))^{b_{0}-1}}, \end{align} for any fixed constants $a_{0}>0$ and $1<b_{0}<m+1$. \end{lemma} \begin{remark} We improve the corresponding estimates in Lemma 3.1 of \cite{DLY2021} by providing the explicit constants in \eqref{WZQ001}. \end{remark} \begin{proof} Denote by $g_{\tau}\in C^{2}([\tau,1])$ the solution of $Lg_{\tau}=0$ in $(\tau,1)$ with $g_{\tau}(\tau)=\tau$ and $g_{\tau}(1)=1$, where $0<\tau<1$. Due to the fact that $Lr>0$ and $Lr^{\alpha}<0$ in $(0,1)$, it follows from the maximum principle and the strong maximum principle that \begin{align*} r<g_{\tau}(r)<r^{\alpha},\quad r\in(\tau,1).
\end{align*} Then there exists a solution $g\in C([0,1])\cap C^{\infty}((0,1])$ of $Lg=0$ in $(0,1)$ such that $g_{\tau}\rightarrow g$ in $C^{2}_{loc}((0,1])$ as $\tau\rightarrow0$ along a subsequence. Moreover, $r\leq g(r)\leq r^{\alpha}$ in $(0,1)$ and $g(0)=0$. Using the strong maximum principle, we further get $r<g(r)<r^{\alpha}$ in $(0,1)$. Consider $\underline{g}(r):=\lambda_{0}^{\frac{\beta-\alpha}{m}}r^{\beta}(\varepsilon+\lambda_{0}r^{m})^{\frac{\alpha-\beta}{m}}$ for $\beta\in\mathbb{R}$. Then it follows from a direct calculation that \begin{align*} L\underline{g}=&\lambda_{0}^{\frac{\beta-\alpha}{m}}r^{\beta-2}(\varepsilon+\lambda_{0}r^{m})^{\frac{\alpha-\beta}{m}}\bigg((\beta-\alpha)^{2}\Big(\frac{\lambda_{0}r^{m}}{\varepsilon+\lambda_{0}r^{m}}\Big)^{2}\notag\\ &+\big((\alpha-\beta)(d+m+2\beta-3)+m\beta\big)\Big(\frac{\lambda_{0}r^{m}}{\varepsilon+\lambda_{0}r^{m}}\Big)+(d-2+\beta)(\beta-1)\bigg),\;\,\mathrm{in}\;(0,1). \end{align*} Denote \begin{align*} p(t):=(\beta-\alpha)^{2}t^{2}+\big((\alpha-\beta)(d+m+2\beta-3)+m\beta\big)t+(d-2+\beta)(\beta-1), \end{align*} for $0\leq t\leq1$. In light of $p(1)=0$, it suffices to require that \begin{align*} p'(t)\leq&2(\beta-\alpha)^{2}+(\alpha-\beta)(d+m+2\beta-3)+m\beta\notag\\ \leq&-(2\alpha+d-3)\beta+2\alpha^{2}+\alpha(d+m-3)\leq0, \end{align*} for the purpose of $L\underline{g}\geq0$ in $(0,1)$: indeed, $p$ is then nonincreasing on $[0,1]$ and hence $p(t)\geq p(1)=0$. This shows that $\underline{g}$ is a subsolution of \eqref{AMR001} in the case of $\beta\geq\frac{2\alpha^{2}+\alpha(d+m-3)}{2\alpha+d-3}$, which, together with the strong maximum principle and the facts that $\underline{g}(0)=g(0)$ and $\underline{g}(1)<g(1)$, yields that $\underline{g}<g$ in $(0,1)$. On the other hand, let $\overline{g}:=C_{0}(\varepsilon)(r-a_{0}r^{b_{0}})$, where $a_{0}>0$ and $1<b_{0}<m+1$, and $C_{0}(\varepsilon)$ is given in \eqref{WZQ001}.
A straightforward computation shows that for $r\in(0,r_{0}(\varepsilon))$, \begin{align*} L\overline{g}=C_{0}(\varepsilon)\left(-a_{0}(b_{0}-1)(b_{0}+d-2)r^{b_{0}-2}+\frac{m\lambda_{0}r^{m-1}}{\varepsilon+\lambda_{0}r^{m}}(1-a_{0}b_{0}r^{b_{0}-1})\right)\leq0, \end{align*} where $r_{0}(\varepsilon)$ and $C_{0}(\varepsilon)$ are given by \eqref{WZQ001}. Since $\overline{g}(0)=g(0)$ and $\overline{g}(r_{0}(\varepsilon))>g(r_{0}(\varepsilon))$, we deduce from the strong maximum principle that $g(r)<C_{0}(\varepsilon)r$ in $(0,r_{0}(\varepsilon))$. In addition, $g$ is actually strictly increasing in $(0,1)$. Otherwise, there would exist a point $r^{\ast}\in(0,1)$ such that $g'(r^{\ast})=0$ and $g''(r^{\ast})\leq0$. Then we would get $Lg(r^{\ast})<0$ by virtue of $g(r^{\ast})>0$. This leads to a contradiction. It remains to prove the uniqueness of $g$. Assume that there exists another solution $g_{1}\in L^{\infty}((0,1))\cap C^{\infty}((0,1])$ of \eqref{AMR001} such that $g_{1}(1)=1$. Let $w:=g_{1}g^{-1}$ in $(0,1)$. Then \begin{align*} Lg_{1}=\frac{g}{G}(Gw')'=0,\quad\mathrm{in}\;(0,1), \end{align*} where $G=g^{2}r^{d-2}(\varepsilon+\lambda_{0}r^{m})$. Hence there are two constants $C_{1}$ and $C_{2}$ such that \begin{align*} g_{1}(r)=&g(r)\int_{r}^{1}\frac{C_{1}}{g^{2}(s)s^{d-2}(\varepsilon+\lambda_{0}s^{m})}\,ds+C_{2} g(r),\quad\mathrm{in}\;(0,1). \end{align*} From \eqref{DAM001}--\eqref{DAM002}, we obtain \begin{align*} &g(r)\int_{r}^{1}\frac{1}{g^{2}(s)s^{d-2}(\varepsilon+\lambda_{0}s^{m})}\,ds\notag\\ &\geq g(r)\int_{r}^{r_{0}(\varepsilon)}\frac{1}{g^{2}(s)s^{d-2}(\varepsilon+\lambda_{0}s^{m})}\,ds\notag\\ &\geq\frac{1}{(d-1)(C_{0}(\varepsilon))^{2}(\varepsilon+\lambda_{0}(r_{0}(\varepsilon))^{m})}\left(r^{2-d}-r(r_{0}(\varepsilon))^{1-d}\right)\rightarrow\infty,\quad\mathrm{as}\;r\rightarrow0, \end{align*} which, in combination with the fact that $g$ and $g_{1}$ are bounded, implies that $C_{1}=0$ and $C_{2}=1$, and thus $g=g_{1}$.
\end{proof} \begin{proof}[Proof of Theorem \ref{thm002}] \noindent{\bf Step 1.} To begin with, it follows from Taylor expansion that \begin{align*} h_{1}(x')=-h_{2}(x')=\frac{|x'|^{m}}{m}+O(|x'|^{2m}),\quad\mathrm{in}\;B_{1}'. \end{align*} Let \begin{align*} \bar{u}(x')=\fint_{-\frac{\varepsilon}{2}+h_{2}}^{\frac{\varepsilon}{2}+h_{1}}u(x',x_{d})\,dx_{d},\quad\mathrm{in}\;B_{1}'. \end{align*} Then $\bar{u}$ is a solution of \begin{align*} \mathrm{div}((\varepsilon+\lambda_{0}|x'|^{m})\nabla\bar{u})=\mathrm{div}F,\quad \mathrm{in}\;B_{1}', \end{align*} where $\lambda_{0}$ is given in \eqref{lam}, $F=(F_{1},...,F_{d-1})$, $F_{i}=2|x'|^{m-2}(x_{i}+O(|x'|^{m+1}))\overline{x_{d}\partial_{d}u}+O(|x'|^{2m})\partial_{i}\bar{u}$, and $\overline{x_{d}\partial_{d}u}$ denotes the average of $x_{d}\partial_{d}u$ with respect to $x_{d}$ in $(-\varepsilon/2+h_{2},\varepsilon/2+h_{1})$. In light of the fact that $|x_{d}|\leq C(\varepsilon+\lambda_{0}|x'|^{m})$, we deduce from \eqref{U002} and \eqref{U001} that \begin{align}\label{TQ001} |F|\leq C(d)|x'|^{m-1}(\varepsilon+\lambda_{0}|x'|^{m}),\quad\mathrm{in}\; B_{1}'. \end{align} Since $\varphi$ is odd with respect to $x_{1}$ and the domain $\Omega=B_{5}\setminus\overline{D_{1}\cup D_{2}}$ is symmetric, it follows from elliptic theory that $u$ is odd in $x_{1}$ and smooth. Then $\bar{u}$ is also odd in $x_{1}$ and $\bar{u}(0')=0$. Based on these facts, we use spherical harmonics to expand $\bar{u}$ as follows: \begin{align}\label{VDAZ001} \bar{u}(x')=U_{1,1}(r)Y_{1,1}(\xi)+\sum^{\infty}_{k=2}\sum^{N(k)}_{l=1}U_{k,l}(r)Y_{k,l}(\xi),\quad\mathrm{in}\;B_{1}'\setminus\{0'\}, \end{align} where $\{Y_{k,l}\}_{k,l}$, which is an orthonormal basis of $L^{2}(\mathbb{S}^{d-2})$, consists of $k$-th degree normalized spherical harmonics, and $U_{k,l}\in C([0,1))\cap C^{\infty}((0,1))$ is given by $U_{k,l}=\int_{\mathbb{S}^{d-2}}\bar{u}(r,\xi)Y_{k,l}(\xi)d\xi$.
In view of the fact that $\varepsilon+\lambda_{0}|x'|^{m}$ is independent of $\xi$ and $\bar{u}(0')=0$, we obtain that $U_{1,1}(0)=0$, and \begin{align*} LU_{1,1}:=\partial_{rr}U_{1,1}(r)+\left(\frac{d-2}{r}+\frac{m\lambda_{0}r^{m-1}}{\varepsilon+\lambda_{0}r^{m}}\right)\partial_{r}U_{1,1}(r)-\frac{d-2}{r^{2}}U_{1,1}(r)=H(r), \end{align*} for $0<r<1$, where \begin{align*} H(r)&=\int_{\mathbb{S}^{d-2}}\frac{(\mathrm{div}F)Y_{1,1}(\xi)}{\varepsilon+\lambda_{0}r^{m}}d\xi=\int_{\mathbb{S}^{d-2}}\frac{\partial_{r}F_{r}+\frac{1}{r}\nabla_{\xi}F_{\xi}}{\varepsilon+\lambda_{0}r^{m}}Y_{1,1}(\xi)d\xi\notag\\ &=\partial_{r}\left(\int_{\mathbb{S}^{d-2}}\frac{F_{r}Y_{1,1}}{\varepsilon+\lambda_{0}r^{m}}d\xi\right)+\int_{\mathbb{S}^{d-2}}\left(\frac{m\lambda_{0}r^{m-1}F_{r}Y_{1,1}}{(\varepsilon+\lambda_{0}r^{m})^{2}}-\frac{F_{\xi}\nabla_{\xi}Y_{1,1}}{r(\varepsilon+\lambda_{0}r^{m})}\right)d\xi\notag\\ &=:\partial_{r}A(r)+B(r),\quad\mathrm{in}\;(0,1), \end{align*} and $A(r),B(r)\in C^{1}([0,1))$. From \eqref{TQ001}, we know that \begin{align}\label{AB001} |A(r)|\leq C(d)r^{m-1},\quad\mathrm{and}\;|B(r)|\leq C(d)r^{m-2},\quad\mathrm{in}\;(0,1). \end{align} \noindent{\bf Step 2.} Proof of \begin{align}\label{QWMM001} U_{1,1}(r)=C_{1}(\varepsilon)g(r)+O(r^{m-1+\alpha}),\quad\mathrm{in}\;(0,1), \end{align} where $\alpha$ is defined by \eqref{degree}, $g$ is the solution of \eqref{AMR001}, $C_{1}(\varepsilon)$ is some constant satisfying that \begin{align}\label{QWMM002} C_{1}(\varepsilon)\geq\frac{1}{C_{2}},\quad\text{for some positive constant}\; C_{2}\; \text{independent of}\;\varepsilon. \end{align} Define $v:=gw$, where $g$ is the solution of \eqref{AMR001} with $g(0)=0$ and $g(1)=1$, and \begin{align*} w(r):=\int_{0}^{r}\frac{1}{g^{2}(s)s^{d-2}(\varepsilon+\lambda_{0}s^{m})}\int^{s}_{0}g(t)t^{d-2}(\varepsilon+\lambda_{0}t^{m})H(t)\,dtds,\quad\mathrm{in}\;(0,1). 
\end{align*} It then follows from a straightforward calculation that \begin{align*} Lv=L(gw)=gw''+\left(2g'+\Big(\frac{d-2}{r}+\frac{m\lambda_{0}r^{m-1}}{\varepsilon+\lambda_{0}r^{m}}\Big)g\right)w'=\frac{g}{G}(Gw')'=H, \end{align*} where $G=g^{2}r^{d-2}(\varepsilon+\lambda_{0}r^{m})$. In light of $g'>0$ and using \eqref{AB001}, we have \begin{align*} &\int^{s}_{0}g(t)t^{d-2}(\varepsilon+\lambda_{0}t^{m})H(t)dt\notag\\ &=\int^{s}_{0}g(t)t^{d-2}(\varepsilon+\lambda_{0}t^{m})A'(t)dt+O(1)g(s)s^{d+m-3}(\varepsilon+\lambda_{0}s^{m})\notag\\ &=-\int_{0}^{s}g'(t)t^{d-2}(\varepsilon+\lambda_{0}t^{m})A(t)dt+O(1)g(s)s^{d+m-3}(\varepsilon+\lambda_{0}s^{m})\notag\\ &=O(1)\left(s^{d+m-3}(\varepsilon+\lambda_{0}s^{m})\int^{s}_{0}g'(t)dt+g(s)s^{d+m-3}(\varepsilon+\lambda_{0}s^{m})\right)\notag\\ &=O(1)g(s)s^{d+m-3}(\varepsilon+\lambda_{0}s^{m}), \end{align*} which, together with \eqref{DAM001}, gives \begin{align*} |v(r)|\leq Cg(r)\int^{r}_{0}\frac{s^{m-1}}{g(s)}\,ds\leq Cr^{m-1+\alpha}. \end{align*} Since $U_{1,1}-v$ remains bounded and $L(U_{1,1}-v)=0$ for $r\in(0,1)$, it then follows from Lemma \ref{lemma006} that $U_{1,1}-v=C_{1}(\varepsilon)g$. That is, \eqref{QWMM001} holds. We now prove \eqref{QWMM002}. Based on the symmetry assumption on the domain in Theorem \ref{thm002}, let $x=(r,\xi,x_{d})\in\mathbb{R}_{+}\times\mathbb{S}^{d-2}\times\mathbb{R}$; then \eqref{con002} can be rewritten as follows: \begin{align}\label{DWQ001} \begin{cases} \partial_{rr}u+\frac{d-2}{r}\partial_{r}u+\frac{1}{r^{2}}\Delta_{\mathbb{S}^{d-2}}u+\partial_{dd}u=0,&\mathrm{in}\;B_{5}\setminus\overline{D_{1}\cup D_{2}},\\ \frac{\partial u}{\partial\nu}=0,&\mathrm{on}\;\partial D_{i},\;i=1,2,\\ u=x_{1},&\mathrm{on}\;\partial B_{5}. \end{cases} \end{align} Denote \begin{align*} \tilde{u}(r,x_{d}):=\int_{\mathbb{S}^{d-2}}u(r,\xi,x_{d})Y_{1,1}(\xi)d\xi. \end{align*} Due to the fact that $u$ is an odd function with respect to $x_{1}$, we get $\tilde{u}(0,x_{d})=0$ for any $x_{d}$.
Then multiplying the first equation of \eqref{DWQ001} by $Y_{1,1}(\xi)$ and integrating by parts on $\mathbb{S}^{d-2}$, we obtain that $\tilde{u}(r,x_{d})$ verifies \begin{align}\label{DWQ002} \begin{cases} \partial_{rr}\tilde{u}+\frac{d-2}{r}\partial_{r}\tilde{u}-\frac{d-2}{r^{2}}\tilde{u}+\partial_{dd}\tilde{u}=0,&\mathrm{in}\;\tilde{B}_{5}\setminus\overline{\tilde{D}_{1}\cup \tilde{D}_{2}},\\ \frac{\partial \tilde{u}}{\partial\nu}=0,&\mathrm{on}\;\partial \tilde{D}_{i},\;i=1,2,\\ \tilde{u}=0,&\mathrm{on}\;\{r=0\},\\ \tilde{u}=r,&\mathrm{on}\;\partial \tilde{B}_{5}, \end{cases} \end{align} where $\nu$ denotes the unit inner normal of $\partial\tilde{D}_{i}$, $i=1,2,$ and \begin{align*} \tilde{B}_{5}:=&\{(r,x_{d})\in\mathbb{R}_{+}\times\mathbb{R}\,|\,r^{2}+x_{d}^{2}<25\},\\ \tilde{D}_{i}:=&\{(r,x_{d})\in\mathbb{R}_{+}\times\mathbb{R}\,|\,r^{m}+|x_{d}+(-1)^{i}(1+\varepsilon/2)|^{m}<1\}. \end{align*} Observe that $\tilde{v}(r)=r$ verifies the first line of \eqref{DWQ002} with $\frac{\partial\tilde{v}}{\partial\nu}<0$ on $\partial\tilde{D}_{i}$, $i=1,2.$ Therefore, $r$ is a subsolution of \eqref{DWQ002}, and we thus have $\tilde{u}\geq r$. This yields that \begin{align*} U_{1,1}(r)=\fint_{-\frac{\varepsilon}{2}+h_{2}}^{\frac{\varepsilon}{2}+h_{1}}\tilde{u}(r,x_{d})\,dx_{d}\geq r, \end{align*} which, together with \eqref{DAM001} and \eqref{QWMM001}, yields \begin{align*} r\leq U_{1,1}(r)=C_{1}(\varepsilon)g(r)+O(r^{m-1+\alpha})\leq C_{1}(\varepsilon)r^{\alpha}+\frac{1}{2}r,\quad\mathrm{in}\;(0,r_{0}], \end{align*} for some small $\varepsilon$-independent constant $r_{0}$. Then we get \begin{align*} C_{1}(\varepsilon)\geq\frac{1}{2}r_{0}^{1-\alpha}. \end{align*} \noindent{\bf Step 3.} Combining the results above, we give the proof of Theorem \ref{thm002}.
Using \eqref{DAM001} and \eqref{QWMM001}--\eqref{QWMM002}, we obtain that for $\beta\geq\frac{2\alpha^{2}+\alpha(d+m-3)}{2\alpha+d-3}$, \begin{align}\label{DYDA001} U_{1,1}(r)\geq\frac{1}{C}g(r)-Cr^{m-1+\alpha}\geq\frac{1}{2C}r^{\beta}(\varepsilon+\lambda_{0}r^{m})^{\frac{\alpha-\beta}{m}},\quad\mathrm{in}\;(0,r_{0}], \end{align} where $r_{0}$ is a small positive $\varepsilon$-independent constant. From \eqref{VDAZ001} and \eqref{DYDA001}, we deduce \begin{align*} \left(\int_{\mathbb{S}^{d-2}}|\bar{u}(\sqrt[m]{\varepsilon},\xi)|^{2}d\xi\right)^{\frac{1}{2}}\geq|U_{1,1}(\sqrt[m]{\varepsilon})|\geq\frac{1}{C}\varepsilon^{\frac{\alpha}{m}}, \end{align*} which implies that $|\bar{u}(\sqrt[m]{\varepsilon},\xi_{0})|\geq\frac{1}{C}\varepsilon^{\frac{\alpha}{m}}$ for some $\xi_{0}\in\mathbb{S}^{d-2}$. In view of the fact that $\bar{u}$ is the average of $u$ in the $x_{d}$ direction, we obtain \begin{align*} |u(\sqrt[m]{\varepsilon},\xi_{0},x_{d})|\geq\frac{1}{C}\varepsilon^{\frac{\alpha}{m}},\quad\text{for some}\;x_{d}\in(-\varepsilon/2+h_{2}(x'),\varepsilon/2+h_{1}(x')). \end{align*} This, together with the fact that $u(0)=0$, implies that \eqref{ARQZ001} holds. The proof is complete. \end{proof} \noindent{\bf{\large Acknowledgements.}} The author would like to thank Prof. C.X. Miao for his constant encouragement and useful discussions. The author was partially supported by CPSF (2021M700358). \bibliographystyle{plain}
\section{Introduction}\label{sec_intro} The ability to build adequate models of the coronal magnetic field is extremely important for understanding the physics of the solar corona. The corona is believed to be generally in a force-free (or at least low-$\beta$) state \citep{Gary2001}. Destabilization of this state may lead to eruptions, with contributing factors including topological properties of the field, such as the existence of null points and excessive magnetic twist \citep{Canfield1999}. The amount of energy released in eruptions cannot exceed the amount of free magnetic energy at the time of destabilization. Moreover, as the coronal field generally evolves in such a way that its total helicity only changes due to helicity flux across the photosphere and into the heliosphere \citep{Berger1984}, the assessment of helicity at one point in time, such as prior to a CME, might be beneficial for studies of the evolution of the corona and heliosphere. Coronal heating is frequently modeled with 1-D hydrodynamic (or static) models that follow magnetic field lines, using values of the magnetic field along these field lines as important input \citep[e.g.,][]{Lundquist2008}. This modeling would therefore also benefit from better models of the magnetic field. The general problem of constructing a force-free magnetic field (hereafter FFF) to model the coronal field is formulated as follows \citep{Nakagawa1971}. The objective is to find a magnetic field $\bvec$ which satisfies the divergence-free condition \begin{equation} \nabla\cdot\bvec=0 \label{div_free} \end{equation} and the force-free equation \begin{equation} \nabla\times\bvec=\alpha\bvec, \label{fff} \end{equation} where $\alpha$ is a proportionality factor between the electric current density and the magnetic field\footnote{The parameter \als has a topological meaning associated with the amount of twist in the field; see, e.g., \citet{Gold1960}.}.
Equations~(\ref{div_free}) and~(\ref{fff}) must be solved for $\bvec$ and \als in a volume domain $\vol$ subject to boundary conditions $\bvec|_{\partial\vol}$ (or $\bvec\cdot\nhat|_{\partial\vol}$ and $\alpha|_{\partial\vol}$). The problem is not in general linear, and the solution is hence called a \textit{``nonlinear force-free field''}, hereafter NLFFF. Particular cases include a \textit{linear} force-free field (hereafter LFFF), which solves the system assuming $\alpha(\rvec)=\mbox{const}$, and a potential field (where $\alpha(\rvec)=0$), which we refer to in the text as $\bp$. Many difficulties arise when solving the problem of constructing a non-linear force-free field. The underlying reasons for these difficulties are physical, mathematical and computational. Physically, the full vector magnetic field at the lower boundary $z=0$ is presently obtained only in the photosphere, where plasma forces are significant. That is to say, Equation~(\ref{fff}) is not appropriate at the lower boundary level \citep{Gary2001}. Also, the component of $\bvec$ transverse to the line of sight at the photosphere is subject to an intrinsic 180$^\circ$ ambiguity, and measurements of boundary data at the top and side boundaries of the computational domain are not available at this time \citep[see][for an extensive discussion of these issues]{Demoulin1997b}. Typically, assumptions are made about the side boundaries, e.g., a field matching a potential source surface model \citep{Schrijver2003} is assumed, and there are various methods to resolve the azimuthal ambiguity \citep{Metcalf2006}. Mathematically, the system is nonlinear, and at the present stage neither the uniqueness nor even the existence of a solution for given boundary conditions has been proven in general.
Finally, there are computational difficulties that have to do with the high instrumental uncertainty in the measurements of the transverse component of the photospheric magnetic field and the small spatial scale of current changes, possibly below the instrumental resolution, at the lower boundary. This uncertainty has more impact than it might seem at first sight because $\bvec\cdot\nabla\alpha=0$, implying $\alpha=\mbox{const}$ along magnetic field lines\footnote{This follows from Equation~(\ref{fff}) by taking the divergence of both sides: $0=\nabla\cdot(\nabla\times\bvec)=\nabla\cdot(\alpha\bvec)=\alpha\nabla\cdot\bvec+\bvec\cdot\nabla\alpha=\bvec\cdot\nabla\alpha$.}. Hence, field lines must connect points with the same \als on positive and negative polarities at the lower boundary, so the boundaries must have equal amounts of incoming and outgoing magnetic flux for each value of \al. Noise in \als at the lower boundary and limits to the field of view prevent this condition from being satisfied, and the problem is in general ill-posed. Techniques exist for ``pre-processing'' of the boundary data to attempt to mitigate this problem \citep[e.g.,][]{Wiegelmann2006, Wiegelmann2008}. The existing methods to address the difficulties outlined above do not appear to be developed to a level such that photospheric vector magnetograms may be used to reliably model the coronal field. Different methods for solving the NLFFF problem, and even different implementations of the same method, applied to the same photospheric data, and even the same method applied to different polarities of the same data, frequently yield results inconsistent with each other and with the coronal features \citep{Schrijver2006, Metcalf2008, Schrijver2008, DeRosa2009}. Such methods include, for example, magnetofrictional relaxation \citep[e.g., ][]{Ballegooijen2004}, optimization \citep[e.g., ][]{Wiegelmann2004} and the Grad-Rubin method \citep[e.g., ][]{Wheatland2007}. Extensive studies are needed to address all of these issues.
Hence, substantial time might pass before reliable vector magnetograms consistent with the upper chromosphere become available for models of the coronal field. Until they are available, another source of information is needed for modeling the coronal fields. We propose this source to be coronal loops. Coronal loops, observed in X-ray and EUV images, are believed to follow lines of the magnetic field, and therefore they should be of help for magnetic extrapolations. Unlike vector magnetograms, this information originates in the force-free corona, where Equation~(\ref{fff}) is appropriate. Field lines spread apart with height and so do bundles of coronal loops \citep[though the field generally expands with height, individual loops are found to have nearly constant diameter with height, see][]{Klimchuk2000}. Consequently, the structure of the magnetic field in the corona should be less fine than at the photospheric level, so it might in principle be better resolved by currently available instruments. Observed loops also give an idea about the overall connectivity of the coronal field, which might otherwise be easily distorted by even minor noise present in photospheric vector magnetograms and therefore in \al, as discussed above. Even if techniques of processing vector magnetograms are developed to the point that NLFFF models are generally reliable, coronal loops as an additional constraint might be of great benefit, for example, for studies of energy release in solar flares. Vector magnetograms undergo relatively minor changes during even major flares \citep[e.g., ][found only a fractional change in the transverse component of the field in a small patch of the active region during a large X-class flare]{Wang2012}. On the contrary, the changes in the connectivity of the coronal magnetic field can be large-scale and dramatic even in smaller flares.
As the connectivity of the magnetic field manifests itself in the shapes of coronal loops, the latter provide a powerful guide for tracking sudden changes in the field. Making use of coronal loops is, however, a non-trivial task. The plasma is optically thin, and what is observed by instruments is the integrated emission of all the plasma along the line of sight. Extracting individual loops from bundles of overlapping loops is a non-trivial image processing task, with the possible exception of isolated loops far away from the core of the region. Some progress, though, has been made in this direction \citep[e.g.,][]{Aschwanden2008}. Another difficulty is that all currently existing instruments, with the exception of the STEREO satellites \citep{StereoRef}, only observe the Sun in one projection, so the three-dimensional structure of the loops is not immediately obvious. Recently, substantial progress has been made in studies of coronal loops as magnetic features. \citet{Lim2007} first fitted observed projections of coronal loops with lines of a LFFF. \citet{Malanushenko2009b} developed a semi-automatic algorithm for such fits, applicable to portions of loops, and showed that \als values obtained this way statistically correlate with \als values for a NLFFF model. Progress also has been made in obtaining information from the original images. Numerous studies \citep[e.g.,][]{Aschwanden2009} have demonstrated good results on triangulating loop trajectories using STEREO data. Two approaches to the use of image data in magnetic modeling are as follows. Reconstructed 3-D loop trajectories and \als values along them may be determined approximately using the scheme from \citet{Malanushenko2009b}, hereafter the MLM09 fit. This provides information at least about the 3-D trajectories of some field lines and \als in the corona along these field lines.
Stereoscopically-derived data offers another possibility: the inferred 3-D loop trajectories could be used in conjunction with values of the vector magnetic field at the loop foot points. Vector magnetograms are of course prone to the problems outlined above. However, in the case of using loops, the field values only need to be accessed \textit{at a sparse set of locations} in the lower boundary. If it is possible to estimate the chromospheric magnetic field (assuming, for example, that the field does not change much with height in the chromosphere) in \textit{at least a few} patches in an active region, and provided that these patches contain foot points of the stereoscopically determined loop trajectories, then this information could be used as in the first approach, but with more accurate results. In this paper we propose a new method of constructing a NLFFF using such information derived from coronal loops. We also draw attention to the value of coronal loop observations for magnetic modeling in general. Such methods might in principle be of use in areas of plasma physics other than coronal studies. It might, for example, be desirable in laboratory plasma studies to estimate what kind of force-free field would have a required topology and magnitude of currents. The paper is organized as follows. In Section~\ref{sec_method} we describe the quasi Grad-Rubin scheme enabling us to make use of coronal loops with and without vector magnetograms. In Section~\ref{sec_input_data} we discuss various inputs. Section~\ref{sec_tests} describes the general scheme of a set of tests of the method and the figures of merit obtained. The results of the tests are presented in detail in Section~\ref{sec_appl}. Section~\ref{sec_summ} discusses the results, evaluating how successful the scheme is and its value for modeling of the coronal field.
\clearpage \section{Description Of The Quasi Grad-Rubin Method}\label{sec_method} Suppose there is a domain $\vol$ with boundary $\partial\vol$ and the following are given: \begin{enumerate} \item[(a)]{$\bvec\cdot\nhat|_{\partial\vol}$ (where $\nhat$ is the normal to $\partial\vol$);} \item[(b)]{a set of trajectories $\{\path_i\}_{i=1}^{N}$ in $\vol$ along which the force-free parameter values $\{\alpha_i\}_{i=1}^{N}$ are known (and are constant along each individual trajectory).} \end{enumerate} The objective is to find the field $\bvec$ that solves Equation~(\ref{fff}) and matches the boundary conditions (a) and the volume constraints (b). The procedure is iterative and is similar to a Grad-Rubin iteration \citep{Grad1958}. It starts with the potential field $\bvec^{(0)}=\bp$ as an initial guess for the field and an initial guess $\alpha^{(0)}$ for the force-free parameter, which at each point in the domain is set equal to $\alpha_i$ for the closest point in the volume for which an $\alpha_i$ value is known. Then on every $n$-th iteration the updated cubes $\bvec^{(n)}$ and $\alpha^{(n)}$ are obtained as follows: \begin{enumerate} \item{Impose the volume constraints by setting $\alpha^{(n-1)}=\alpha_i$ along the trajectories $\path_i$.} \item{Calculate updated field values $\bvec^{(n)}$ from \begin{equation}\nabla\times\bvec^{(n)}=\alpha^{(n-1)}\bvec^{(n-1)}\label{curl_iter}\end{equation} subject to the prescribed boundary conditions. This equation is solved using a vector potential $\avec^{(n)}$ such that $\bvec^{(n)}=\nabla\times\avec^{(n)}$, so the divergence-free condition is satisfied to truncation error.} \item{Calculate an updated set of values for the force-free parameter $\alpha^{(n)}$: for every point in $\vol$, assign $\alpha^{(n)}=\langle\alpha^{(n-1)}\rangle$ \textit{averaged along the field line in $\bvec^{(n)}$ that passes through that point}.
If a field line leaves the domain through any boundary but the lower one, the value of $\alpha$ is set to zero along it (in common with the Wheatland~2007 GR scheme). This ensures that no currents go off to infinity so that the field's energy remains finite.} \item{Repeat 1.-3. until $\bvec^{(n)}\approx\bvec^{(n-1)}$ and $\alpha^{(n)}\approx\alpha^{(n-1)}$ to within a tolerance and therefore $\nabla\times\bvec^{(n)}\approx\alpha^{(n)}\bvec^{(n)}$.} \end{enumerate} This sequence is similar to an existing Grad-Rubin method of solution of the NLFFF problem \citep{Wheatland2007, Wheatland2009}. The only difference between these two schemes is how the updated $\alpha^{(n)}$ cube is calculated in Step~3. In both schemes, at each point in the domain a field line is traced in $\bvec^{(n)}$. In the original Grad-Rubin code, a polarity (positive or negative) is picked at the lower boundary, and $\alpha^{(n)}$ at each point in the volume is set to the \textit{value at the boundary point} in the chosen polarity where it is crossed by the field line. The only exception is the boundary itself, at all points in the chosen polarity, where \als is kept constant. In the Quasi Grad-Rubin scheme, $\alpha^{(n)}$ at each point is assigned the \textit{average} of \als from the \textit{previous iteration} along this field line. The only exception is the volume constraint paths, where \als keeps the volume constraint value. Let us consider some particular simple cases. First of all, it is clear that if the initial guess for $\bvec$ and \als already satisfies $\nabla\times\bvec=\alpha\bvec$, then the scheme keeps it unchanged. If the field or $\alpha$ differ from a solution to Equation~(\ref{fff}) even at one point, the field changes -- though if it is only one point that is different, convergence is achieved in a single iteration.
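The distinctive part of the scheme is Step~3, the averaging of \als along field lines. A minimal numerical sketch of this step is given below, assuming the field is available as a callable on a unit-height domain; the fixed-step RK4 tracer and all function names are illustrative, not taken from the CFit code:

```python
import numpy as np

def trace_field_line(b_func, x0, ds=0.01, n_steps=2000):
    """Trace dx/ds = B/|B| from x0 with fixed-step RK4.
    Tracing stops when the line leaves the (assumed unit-height) domain;
    a full implementation would trace in both directions."""
    def unit_b(x):
        b = np.asarray(b_func(x), dtype=float)
        return b / np.linalg.norm(b)
    pts = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        x = pts[-1]
        k1 = unit_b(x)
        k2 = unit_b(x + 0.5 * ds * k1)
        k3 = unit_b(x + 0.5 * ds * k2)
        k4 = unit_b(x + ds * k3)
        x_new = x + ds * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        if not 0.0 <= x_new[2] <= 1.0:  # leave the domain in height
            break
        pts.append(x_new)
    return np.array(pts)

def averaged_alpha(alpha_func, line):
    """Step 3: assign the mean of alpha^(n-1) along the traced field line."""
    return float(np.mean([alpha_func(p) for p in line]))
```

For a uniform vertical field the traced line is a vertical segment, and averaging $\alpha(x,y,z)=z$ along it returns approximately $1/2$, as expected for a mean over $z\in[0,1]$.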
It is also clear that if $\alpha=0$ everywhere, currents do not appear; since the scheme averages $\alpha$ at every iteration, it is incapable of introducing $|\alpha|>\max(|\alpha_i|)$. These cases make sense: if the answer is close to the correct answer, convergence is achieved rapidly, and if no currents are specified to start with, the scheme does not change the input potential field. But what happens in an intermediate situation: currents are known on \textit{some}, but not all flux tubes in the domain? Can a solution be reached at all? If several solutions are plausible given the constraints, which solution (if any) is achieved? There are proofs of existence and uniqueness of the solution for the force-free problem, achievable by Grad-Rubin iteration, if \als is sufficiently small in some sense \citep[see][]{Bineau1972}. However, the range of \als is not clearly defined and it is unclear whether solar-like fields are within this range. If they are outside of this range, that does not mean the original Grad-Rubin iteration necessarily fails. It is unclear whether similar proofs exist for the proposed Quasi Grad-Rubin scheme. In the absence of such proofs, we test the scheme by numerical experimentation. We also attempt to determine how many field line trajectories $\path_i$ are sufficient to enable convergence. In Section~\ref{sec_appl} we review some of our experiments. \section{Different Types of Input Data for the Quasi Grad-Rubin Scheme}\label{sec_input_data} The Quasi Grad-Rubin numerical scheme (hereafter ``QGR'', in contrast to ``GR'' for Grad-Rubin algorithm) could in principle be used with \als constrained at \textit{any} set of locations including the lower boundary.
Hence it may be used with vector magnetograms, setting ${\alpha_i}$ at the lower boundary to the value derived from the vector magnetogram, \begin{equation}\alpha|_{z=0}=\left.\frac{1}{B_z}\left(\frac{\partial B_y}{\partial x} -\frac{\partial B_x}{\partial y}\right)\right|_{z=0}.\end{equation} Vector magnetograms may also be used with or without the loop trajectories. We identify three different kinds of inputs for QGR: \als along loop trajectories; \als along loop trajectories and at the lower boundary; and \als at the lower boundary only\footnote{Note that \als does \textit{not} have to be constrained at all points on the lower boundary.}. Traditional schemes are designed to work for the last case only. If the \als values, wherever set, are approximate, this will introduce uncertainties. To properly test QGR we try to recover several known fields and use both approximated trajectories and those drawn from the known field. We note that there may be applications of QGR even if it did not work with approximate data, for example for problems of the following kind: constructing a magnetic field with twist prescribed along certain trajectories. It could also be used with stereoscopically triangulated loops and currents derived from the photospheric vector magnetograms, or \als values and trajectories obtained by other means. In this paper we test various possible inputs and compare the results with the reference fields. Table~\ref{input_data_types} outlines the degrees of freedom available for such tests. We refer to various inputs and schemes using this table, e.g., II.b is QGR applied to volume constraints alone, drawn from the reference field. \begin{table}[!hc] \caption{\small{Possible combinations of different inputs to QGR, in addition to $\bvec\cdot\nhat|_{\partial\vol}$.
Note that I.a has the same input as the original GR scheme but the algorithm is different, so QGR with I.a input is \textit{not} equivalent to GR.}} \centering \begin{tabular}{m{11.0cm}m{2.0cm}m{2.0cm}} & & \\ \hline & \multicolumn{2}{c}{Values of \als at $z=0$} \\ Values of \als along loop trajectories & Known & Unknown \\ \hline None & I.a & --- \\ From the field lines of the model field (``ideal'' input) & I.b & II.b \\ From the MLM09 approximation derived from 2-D projections of these field lines (realistic coronal input) & I.c & II.c \\ \hline & & \\ \end{tabular} \label{input_data_types} \end{table} The QGR scheme was implemented by modifying an existing GR code (which we refer to as CFit version 1.3), described in \citet{Wheatland2009}. It is a ``self-consistent'' scheme: it picks a polarity and finds a force-free solution using boundary data from that polarity, thereby obtaining values of \als everywhere in the volume, including at the other polarity on the boundary; the \als map at the other polarity is then updated with a weighted average of the values obtained from this new solution and the values held before this solution was found. It then repeats the cycle using the updated \als map from the other polarity. The cycles are continued until the two solutions obtained using \als values at the opposite polarities are consistent with each other to a tolerance. Since QGR does not use the lower boundary in the same way, we discard the switching between cycles and modify the way the \als values in the volume are calculated (see previous section). Also, instead of using $B_z$ at $z=0$ and a 2-D array of $\alpha$ values at $z=0$, the modified code uses $B_z$ at $z=0$ and two 3-D arrays: one holding $\alpha_i$ along the trajectories and an initial guess everywhere else, and a ``mask'', i.e., another 3-D array with entries of unity along ${\path_i}$ and zero everywhere else. These are the two principal changes to the CFit code.
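For the I-type inputs, the boundary \als values defined by the equation of this section can be evaluated from a vector magnetogram by finite differences. A minimal numpy sketch follows; the grid spacings and the $B_z$ threshold used to mask weak-field pixels (where the division amplifies measurement noise) are our assumptions, not parameters of the CFit code:

```python
import numpy as np

def alpha_boundary(bx, by, bz, dx=1.0, dy=1.0, bz_min=1e-3):
    """alpha|_{z=0} = (dBy/dx - dBx/dy) / Bz from 2-D arrays indexed [y, x].
    Pixels with |Bz| < bz_min are masked (NaN), since dividing by a weak,
    noisy Bz produces unreliable alpha values there."""
    dby_dx = np.gradient(by, dx, axis=1)
    dbx_dy = np.gradient(bx, dy, axis=0)
    alpha = np.full(bz.shape, np.nan)
    good = np.abs(bz) >= bz_min
    alpha[good] = (dby_dx - dbx_dy)[good] / bz[good]
    return alpha
```

For the linear test field $B_x=-cy$, $B_y=cx$, $B_z=1$ the finite differences are exact and the routine returns $\alpha=2c$ everywhere.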
The calculation of the field on a given iteration, i.e. the solution of $\nabla\times\bvec^{(n+1)}=\alpha^{(n)}\bvec^{(n)}$ is unchanged. The code uses a vector potential for this step and hence the field satisfies $\nabla\cdot\bvec^{(n+1)}=0$. \section{Description Of Metrics for QGR Solutions}\label{sec_tests} In this paper we try to recover several known force-free fields, namely those from \citet{Schrijver2006} and \citet{Schrijver2008}. The iterations are initialized with the same potential fields used in the original studies. We also try to construct a NLFFF based on a dipole magnetogram and two specified loop trajectories (for which the reference field is not known). We estimate the quality of the reconstruction and the relative force-freeness of the known solutions using metrics, most of which have become standard in NLFFF modeling. These are as follows. \begin{itemize} \item{$E/E\pot$ versus $E_\mathrm{ref}/E\pot$: energy in the reconstructed field versus energy of the reference field (for a perfect reconstruction the two would be equal).} \item{$H(\bvec|\bp)$ versus $H(\br|\bp)$: relative helicity\footnote{By $H(\bvec_1|\bvec_2)$ we mean the helicity of the field $\bvec_1$ relative to the field $\bvec_2$.} in the reconstructed field versus that of the reference field (for a perfect reconstruction the two would be equal).} \item{$\mbox{CWsin}=(\sum{|\sin\theta||\jvec|})/(\sum{|\jvec|})$ versus $\mbox{CWsin}_{\mathrm{ref}}$, where $|\sin\theta|=|\jvec\times\bvec|/|\jvec||\bvec|$: the total current-weighted sine of the angle between $\bvec$ and $\jvec$ (for a perfectly force-free field this is zero).} \item{Metrics of similarity between $\bvec$ and $\br$, normalized to equal unity if $\bvec=\br$: \begin{itemize} \item{$\mbox{C}_{\mathrm{CS}}= \frac{1}{N}\sum{[\bvec\cdot\br/(|\bvec||\br|)]}$ (where $N$ is the number of points in the domain): the average cosine of the angle between $\bvec$ and $\br$;} \item{$\mbox{C}_{\mathrm{vec}}=
(\sum{\bvec\cdot\br})/(\sum{|\bvec||\br|})$: same as previous but with increased weight in regions of stronger field;} \item{$E_m'=1-E_m$, where $E_m=\frac{1}{N}\sum{|\bvec-\br|/|\br|}$: the average relative difference between $\bvec$ and $\br$;} \item{$E_n'=1-E_n$, where $E_n=\sum{|\bvec-\br|}/\sum{|\br|}$: same as previous but with increased weight in regions of stronger field.} \end{itemize} } \end{itemize} We omit metrics for how well $\nabla\cdot\bvec=0$ is satisfied, because the method \citep[in common with][]{Wheatland2009} uses a vector potential to calculate the field and hence achieves a divergence-free state to truncation error \citep{Press1992}. \section{Sample Applications of QGR}\label{sec_appl} \subsection{QGR Solution for a Dipole Field}\label{sec_dipole} The first test case is a simple dipole field aligned in the E-W direction, with the North half of both magnetic poles having negative twist and the South half of both magnetic poles having matching positive twist. This model could be viewed as a simple representation of an emerged untwisted flux rope whose foot points became distorted in such a way that the field at one (leading) polarity has been inclined more than the field at the second (following) one, perhaps due to subsurface flows. Such a difference in inclinations is observed for solar active regions \citep{Howard1991_7}. To construct the field we calculate two constant-\als fields confined to half spaces \citep{Chiu1977} with equal and opposite twist ($\alpha_0=\pm1.5\pi/L$, where $L$ is the size of the domain). We draw one field line for $\path_i$ from each of these and use these field lines (which imitate coronal loops) as volume constraints. The initial guess for \als is $\pm\alpha_0$ in the S and N halves, respectively. The fields, field lines, and the locations of the constraints are shown in Figure~\ref{dipole_input}.
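For reference, the similarity metrics of Section~\ref{sec_tests} reduce to a few array operations. A sketch over flattened $(N,3)$ arrays of field vectors (all function and variable names are illustrative, not part of any published NLFFF code):

```python
import numpy as np

def similarity_metrics(b, b_ref):
    """b, b_ref: (N, 3) arrays of field vectors at the same points.
    Returns C_cs, C_vec, 1 - E_m, 1 - E_n as defined in the text;
    all equal unity when b == b_ref."""
    dot = np.sum(b * b_ref, axis=1)
    nb = np.linalg.norm(b, axis=1)
    nr = np.linalg.norm(b_ref, axis=1)
    diff = np.linalg.norm(b - b_ref, axis=1)
    c_cs = np.mean(dot / (nb * nr))          # average cosine of the angle
    c_vec = np.sum(dot) / np.sum(nb * nr)    # weighted toward strong field
    one_minus_em = 1.0 - np.mean(diff / nr)  # 1 - mean relative difference
    one_minus_en = 1.0 - np.sum(diff) / np.sum(nr)
    return c_cs, c_vec, one_minus_em, one_minus_en

def cw_sin(b, j):
    """CWsin = sum(|sin theta| |J|) / sum(|J|), |sin theta| = |J x B|/(|J||B|);
    zero for a perfectly force-free field."""
    cross = np.linalg.norm(np.cross(j, b), axis=1)
    nb = np.linalg.norm(b, axis=1)
    nj = np.linalg.norm(j, axis=1)
    return np.sum(cross / nb) / np.sum(nj)
```

Comparing a field with itself yields unity for all four similarity metrics, and a current exactly parallel to the field yields CWsin of (numerically) zero.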
\begin{figure}[!hc] \begin{center} \includegraphics{fig01.eps} \end{center} \caption{Input data for the dipole field test case: two field lines (thick red) drawn from two constant-\als fields (left and middle panels). The right panel shows points where the volume constraints are applied (i.e., the points through which the trajectories $\path_i$ pass). Points in the south and north halves of the domain are assigned $\alpha_i$ values of $\pm\alpha_0$, respectively.} \label{dipole_input} \end{figure} \begin{figure}[!hc] \begin{center} \includegraphics{fig02.eps} \end{center} \caption{The solution for the dipole test case. The QGR iteration converges as shown in panel (a), which displays the change in the free energy of the field between consecutive iterations. The constructed field $\bvec$ is force-free, as shown in panel (b) by a histogram of $|\sin(\bvec, \jvec)|$ for $\bp$ (shaded curve) and $\bvec$ (solid curve). The peak at unity is due to the current-free regions in the field, as explained in the text. The initial field $\bp$ is current-free and $\bvec$ retains current-free regions (in particular where field lines leave the domain through the side or top boundary). The field $\bvec$ is nonlinear, as seen in panel (c), which shows a horizontal slice of \als at a height of six pixels in the box (out of 64). In this panel black corresponds to negative \als and white to positive \al. The field lines of $\bvec$, however, do not match the constraints! Field lines of $\bvec$ are shown in panel (d) as solid black while the constraints are dashed red lines. Field lines are initiated at midpoints, rather than foot points, of the loops.} \label{dipole_underest} \end{figure} QGR is found to converge to a nonlinear force-free field, as shown in Figure~\ref{dipole_underest}. Achievement of the force-free state is shown by the distribution of $|\sin(\bvec, \jvec)|$ peaking at zero.
A second smaller peak at one is a contribution from the current-free regions of the field, and is due to $\jvec$ in these regions being purely numerical noise. To support this, the shaded histogram in Figure~\ref{dipole_underest} shows the distribution of the same quantity evaluated for the potential field. The nonlinearity of the field, that is, the presence of different \als values on different field lines, is illustrated by a distribution of \als on a horizontal slice. These additional diagrams are shown in Figure~\ref{dipole_underest}~(a)-(c). The field lines of the solution, interestingly, are found to follow different trajectories than the two constraining field lines. In fact, the solution, having \als close to the two imposed constraints in the two halves, appears much closer to a potential field than the highly twisted constant-\als fields we drew the loops from! This is shown in Figure~\ref{dipole_underest}~(d) which illustrates the constraining loops and two field lines of the solution initiated at the midpoints of these loops. \clearpage The fact that the NLFFF solution with currents similar in magnitude to those in the \citet{Chiu1977} constant-\als field appears to have much less twisted field lines is due to at least two effects. First, as open field lines are required to carry no currents in the NLFFF model constructed by a QGR \citep[in common with the][implementation of GR]{Wheatland2007}, the current-carrying volume has to be confined within the computational domain, while the \citet{Chiu1977} fields have currents in the entire half space. As the current-carrying volume in the NLFFF is smaller, the current density has to be stronger for the field lines to have similar shape. Secondly, currents which run in opposing directions close to each other might counteract each other in influencing the shape of field lines. To illustrate these effects we perform a simple numerical experiment.
We repeat the computation but set $\alpha_i=\pm f\alpha_0$ on the constraining paths for several values of $f$. The results are shown in Figure~\ref{dipole_factor}, first column. We also repeat the experiment for a box of smaller size (cropped on the sides and the top) but with otherwise identical setup (Figure~\ref{dipole_factor}, second column). Finally we repeat the computation with the same lower boundary but different volume constraints: a field line in the lower half of the domain is drawn from a constant-\als field with the same sign of \als as the upper half but smaller in magnitude, with $\alpha=-\alpha_0/4=-3\pi/(8L)$ (Figure~\ref{dipole_factor}, third column). For the original setup, the best match between the constraining loops (red dashed curves) and field lines of the solution (black solid curves) initiated at the midpoints of these loops is achieved with $f\approx10$--$12$, while if the same (or at least similar) field lines are required to exist in a field with currents confined to a smaller domain, the best match between the loops and the solution is achieved with $f\approx14$. If both loops have \als of the same sign, the best match is achieved for $f\approx 6$--$10$. (No steady solution is found for $f=14$ for the setup in the first and third columns; in this case the QGR solution continues to oscillate. The solutions for $f=14$ in the second column and $f=12$ in the third column exhibit oscillations, but of small magnitude. Later we discuss these oscillations, which may result from the input being inconsistent with a NLFFF solution, and describe a procedure to damp them. In this section, however, our point is to illustrate the significance of the scale factor $f$.) The scale factor $f$ cannot be evaluated \textit{a priori}, but it may be estimated using observables, i.e., coronal loops. This can be done by minimizing the difference between the shapes of coronal loops and field lines of different solutions corresponding to different values of $f$.
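The minimization just described can be sketched as follows. The \texttt{fieldlines\_for} callback is hypothetical: it stands for a full QGR solve at scale $f$ followed by field-line tracing, returning plane-of-sky polylines matched one-to-one to the loops; the nearest-vertex distance is a simplification of whatever curve metric one prefers:

```python
import numpy as np

def mean_curve_distance(curve_a, curve_b):
    """Mean, over points of curve_a, of the distance to the nearest
    vertex of curve_b (a crude polyline-to-polyline distance)."""
    d = np.linalg.norm(curve_a[:, None, :] - curve_b[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def best_scaling_factor(factors, loops, fieldlines_for):
    """loops: list of (n_i, 2) plane-of-sky polylines.
    fieldlines_for(f): matching list of solution field-line polylines
    for scale factor f (hypothetical; wraps a full solve + trace).
    Returns the f minimizing the mean loop-to-field-line distance."""
    costs = []
    for f in factors:
        lines = fieldlines_for(f)
        costs.append(np.mean([mean_curve_distance(l, m)
                              for l, m in zip(loops, lines)]))
    return factors[int(np.argmin(costs))], costs
```

In a toy check where the "loops" are a unit semicircle and the "field lines" are the same curve rescaled by $f$, the cost is minimized at $f=1$, as expected.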
In the next two sections we demonstrate that scaling factors obtained this way are indeed a proportionality constant between \als on lines from a NLFFF and \als of the approximation of these field lines by lines of constant-\als fields of the type constructed by \citet{Chiu1977}. \hoffset=-0.5cm \voffset=-0.5cm \begin{figure}[!hc] \begin{center} \includegraphics[height=22cm]{fig03.eps} \end{center} \caption{Experiments on the dipolar test case to illustrate the effect of opposing currents and domain size on the scaling factor $f$. See Section~\ref{sec_dipole} for description. The notation is the same as in Figure~\ref{dipole_underest}.} \label{dipole_factor} \end{figure} \clearpage \subsection{QGR Solutions for Low \& Lou Fields}\label{sec_llf} In this section we try to reconstruct the test field from \citet{Schrijver2006} (the ``reference field'' further in this section) using QGR. This test case is a member of a family of analytic NLFFFs introduced by \citet{LowLou1990}. For the first QGR test we use field lines of the reference field as trajectories $\path_i$ and take for $\alpha_i$ the correct \als values of the reference field. Physically, this could correspond to stereoscopically derived loops with the chromospheric vector field known around their foot points. These data are hard to obtain at present; we use them mainly to test QGR alone, that is, on ``ideal'' data not contaminated by measurement errors. We use 113 randomly selected field lines $\path_{i,\mathrm{ref}}$ (out of a bigger sample, chosen to be closed field lines, i.e., with both foot points on the lower boundary). We calculate $\alpha_{i,\mathrm{ref}}$ numerically everywhere in $\br$ and evaluate it along these 113 field lines. We discard all but 27 of these, retaining the ones that may be well fitted using MLM09, to make this test consistent with the next, realistic test presented later in this section.
We construct two solutions of the same size as the reference field ($64^3$ pixels), with and without the additional constraint of vector field data at the lower boundary (i.e., schemes I.b and II.b from Table~\ref{input_data_types}). Figures of merit are calculated in the same subdomain as in \citet{Schrijver2006}. This center subdomain is also used to estimate a best-matching scaling factor $f$. In the second test we determine $\path_i$ and $\alpha_i$ from the MLM09 fit derived from the normal field at the lower boundary, as in \citet{Malanushenko2009b}. We calculate QGR solutions using these data with and without vector field data at the lower boundary (I.c and II.c from Table~\ref{input_data_types}). This tests the applicability of QGR to realistically available coronal data. It is not obvious that using approximations to loop trajectories and approximate values of \als along them is sufficient to create a field model at least as good as those derived from vector magnetogram data. To investigate this we project the same 113 field lines as for the ideal data onto the $z=0$ plane to simulate the appearance of loops in the plane of the sky. We treat these 2-D projections as synthetic loops and use them to obtain \als values $\alpha_{i,\mathrm{MLM09}}$ along trajectories $\path_{i,\mathrm{MLM09}}$ using the MLM09 fit procedure. We discard loops which upon visual examination are poorly fitted, which leaves 27 loops that appear to have a near-perfect fit. The trajectories of these loops and their MLM09 approximations are shown in Figure~\ref{loop_fit_llf}.
\begin{figure}[!hc] \begin{center} \includegraphics{fig04.eps} \end{center} \caption{Results of MLM09 applied to the reference field from \citet{Schrijver2006}, used for I.c and II.c inputs as trajectories $\path_i$: $\br$ field lines (red) and MLM09 field lines (blue).} \label{loop_fit_llf} \end{figure} Keeping in mind the results from Section~\ref{sec_dipole}, we calculate several solutions for II.c with seven different scaling factors $f$ applied to the input \als but with the calculations otherwise identical. Three of these solutions (corresponding to $f=1.23$, 1.69 and 2.15) are shown in Figure~\ref{fact_llf}, top row. For each of the solutions we estimate how closely the field lines match the synthetic 2-D loops used to construct the trajectories $\path_i$. We calculate the average distance between the projected loops and the corresponding field lines of the solution. The result (as a function of $f$) for all seven solutions is shown in Figure~\ref{fact_llf} (bottom left plot). The solution for $f\approx1.69$ is the closest match to the loops. The bottom right panel in the same figure is a scatter plot of $\alpha_{\mathrm{fit}}$ and $\alpha_{\mathrm{ref}}$ for individual loops. The fit appears to underestimate the value of \al. The underestimation factor is remarkably close to 1.69 (dashed line in the same plot). This suggests that such underestimation \textit{could be derived a posteriori} from the observed loops. \begin{figure}[!hc] \begin{center} \includegraphics[height=12cm]{fig05.eps} \end{center} \caption{Top row: several QGR solutions for the reference field from \citet{Schrijver2006} for input II.c (see Table~\ref{input_data_types}) corresponding to the same input but different scaling factors $f$ for \al. Red dashed lines show lines of $\br$ projected onto $z=0$ and used as loops which were approximated by MLM09 to construct trajectories $\path_i$. Black lines show corresponding lines of $\bvec$ (initiated at midpoints of the lines of $\br$).
Field lines in the core of the domain are the most affected by the choice of $f$. Bottom left: average distance between loops (projections of lines of $\br$) and corresponding lines of $\bvec$ in projection onto the plane of the sky ($z=0$). The difference is smallest for $f\approx1.69$. This coefficient is remarkably close to the scaling coefficient between $\alpha_{\mathrm{ref}}$ and $\alpha_{\mathrm{fit}}$ from MLM09, shown in the lower right panel as a dashed line. Diamonds show $\alpha_{\mathrm{ref}}$ of the individual field lines of $\br$ versus $\alpha_{\mathrm{fit}}$ of the MLM09 approximations of these field lines.} \label{fact_llf} \end{figure} The results of our tests (I.c and II.c with $f=1.69$) are summarized in Table~\ref{table_llf} and Figure~\ref{llf/small_table}. Figure~\ref{llf/energies} shows that convergence is achieved for all solutions. We conclude that QGR is able to reconstruct the reference field, including the shape of field lines\footnote{Excepting the field lines leaving the computational domain, which are required to carry no current in our scheme, but in fact do carry currents in the Low \& Lou field.}, the structure of the currents and the distribution of \al, remarkably well. The figures of merit show that the QGR reconstructions are at least as close to the right answer as those from other methods, and typically clos\textit{er}. In particular, the \textit{smallest} estimate for the free energy we obtain is \textit{closer} to the correct answer than any of the estimates based on the vector field boundary values alone reported by \citet{Schrijver2006}.
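A coefficient like the one between $\alpha_{\mathrm{ref}}$ and $\alpha_{\mathrm{fit}}$ can be estimated, for example, as a zero-intercept least-squares slope over the per-loop pairs; this is a simple illustrative choice on our part, not necessarily how the dashed line in the figure was produced:

```python
import numpy as np

def scale_coefficient(alpha_fit, alpha_ref):
    """Least-squares slope f in alpha_ref ~ f * alpha_fit (zero intercept),
    i.e. the minimizer of sum((alpha_ref - f * alpha_fit)^2)."""
    alpha_fit = np.asarray(alpha_fit, dtype=float)
    alpha_ref = np.asarray(alpha_ref, dtype=float)
    return float(np.sum(alpha_fit * alpha_ref) / np.sum(alpha_fit ** 2))
```

On synthetic pairs related by an exact factor, the estimator recovers that factor.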
\hoffset=-1cm \begin{table}[!hc] \centering \begin{tabular}{m{1.0cm}m{1.9cm}m{1.9cm}m{1.9cm}m{1.9cm}m{1.9cm}m{1.9cm}c} & $\mbox{C}_{\mathrm{vec}}$ & $\mbox{C}_{\mathrm{CS}}$ & $1-E_n$ & $1-E_m$ & CWsin & $E/E\pot$ & $H(\bvec|\bp)$ \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{Reference field}}\\ & 1.00 & 1.00 & 1.00 & 1.00 & 0.01 & 1.24 & 1.00 \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{Quasi Grad-Rubin with vector magnetograms}}\\ I.a & 0.99 & 0.93 & 0.80 & 0.64 & 0.03 & 1.27 & 0.67 \\%& 1.56 \\ I.b & 0.99 & 0.96 & 0.83 & 0.70 & 0.02 & 1.19 & 0.81 \\%& 1.74 \\ I.c & 1.00 & 0.97 & 0.88 & 0.75 & 0.02 & 1.23 & 0.91 \\%& 1.68 \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{Quasi Grad-Rubin with loop trajectories alone}}\\ II.b & 0.99 & 0.96 & 0.81 & 0.68 & 0.02 & 1.17 & 0.77 \\%& 0.23 \\ II.c & 0.99 & 0.97 & 0.86 & 0.78 & 0.02 & 1.23 & 0.93 \\%& 0.16 \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{Ranges reported in \citet{Schrijver2006}}}\\ & 0.94 -- 1.00 & 0.54 -- 0.91 & 0.48 -- 0.92 & -2.2 -- 0.66 & 0.03 -- 0.57 & 0.82 -- 1.14 & --- \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{Potential field}}\\ & 0.86 & 0.87 & 0.50 & 0.44 & --- & 1.00 & 0.00 \\ \end{tabular} \caption{Metrics (defined in Section~\ref{sec_tests}) for different QGR solutions with different types of input data applied to the \citet{Schrijver2006} test case. The values for I.c and II.c are reported for the optimal solution with $f=1.69$. The values for $\br$ and $\bp$ are shown for comparison, and so are the ranges of values for different NLFFF extrapolations reported in \citet{Schrijver2006}. Relative helicity is stated in fractions of that of the reference field. For notation, refer to Table~\ref{input_data_types}.} \label{table_llf} \end{table} \voffset=-1.5cm \hoffset=-1cm \begin{figure}[!hc] \begin{center} \includegraphics{fig06.eps} \end{center} \caption{Reconstruction of the Low \& Lou field from \citet{Schrijver2006} using schemes II.b and II.c, i.e.
QGR with ideal and realistic loop input, that is, reconstructed from 2D loop projections (see Table~\ref{input_data_types}). Panels \textit{(a)}-\textit{(i)}: field lines, line-of-sight integrated magnitude of current and horizontal slices of \als for $\br$ and $\bvec$ for II.b and II.c. Panel \textit{(j)}: field lines of $\bvec_{P}$ (all field lines are traced from the same starting points). Panels \textit{(k)}, \textit{(l)}: line-of-sight integrated volume constraints for II.b and II.c.} \label{llf/small_table} \end{figure} \voffset=-1.5cm \hoffset=-1cm \begin{figure}[!hc] \begin{center} \includegraphics{fig07.eps} \end{center} \caption{Energy $E/E_P$ at each iteration as a demonstration of the convergence of the QGR iteration for the Low \& Lou field test case. The energy on these plots is shown for the entire domain \citep[while Table~\ref{table_llf} reports the numbers for the middle sub-domain identical to the one in][as do the other tables in the manuscript]{Schrijver2006}. Top row, from left to right: I.a, I.b and I.c solutions (refer to Table~\ref{input_data_types} for notation). Bottom row, from left to right: II.b, II.c.} \label{llf/energies} \end{figure} \clearpage \subsection{QGR Applied to a Solar-Like Field}\label{sec_karel} In this section, we investigate whether QGR is applicable to solar data. The \citet{LowLou1990} family of fields has axial symmetry, which is not in general observed in active regions, and both magnetic field and current vary unrealistically smoothly through the lower boundary by comparison with vector magnetogram data. Hence, we repeat the experiments from the previous section, but choosing a more realistic solar-like field as $\br$. We use two particular NLFFF solutions from \citet{Schrijver2008}, who presented NLFFF reconstructions of the coronal field for AR 10930 before and after a major flare using several extrapolation methods applied to \textit{Hinode} vector magnetograms.
They found that the extrapolations which best matched observed coronal features were GR solutions obtained with the \citet{Wheatland2007} code, using \als values from the positive polarity of the magnetograms (hereafter Wh$^{+}_{\mathrm{pp}}$). We use these as our reference fields. These solutions also had the largest free energy of all extrapolations. Another advantage of these fields for our study is that they use the same boundary conditions and nearly the same numerical implementation as the QGR scheme. We emphasize that the objective in this case is \textit{not} to create a realistic representation of the coronal field but to test the new algorithm on \textit{known} NLFFFs that are expected to more closely resemble the coronal field overlying a solar active region. For both pre- and post-flare reference fields we select random sets of field lines and evaluate $\langle\alpha\rangle$ on each field line. These field lines are used as trajectories $\path_i$ in the II.b set-up and their projections onto the $z=0$ plane are used as loops for the II.c set-up. We also use $B_z$ at the lower boundary and start with the same initial $\bp$ as the other schemes in \citet{Schrijver2008}. We perform final tests on the full-sized domain but determine the scaling factor $f$ on the domain down-sampled by a factor of 0.5 (this is done to speed up computations and to allow the possibility that currents have structure finer than the grid size, which is likely to be the case for real data). For both sets of loops (volume constraints for the down-sampled domain were imposed only at a small fraction of pixels: the smaller set covered about 1.7\% of the current-carrying volume and the bigger one about 6.6\%) the solution for the pre-flare configuration does not converge with the II.b inputs. Instead it enters a remarkably stable oscillatory cycle with a period of $\approx50$ iterations. This cycle develops at $\approx200$ iterations, as shown in Figure~\ref{alpha_oscill_fig1}.
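A period like the $\approx50$ iterations quoted above can be read off the energy history automatically; one simple diagnostic (an illustration of ours, not part of the CFit code) takes the lag of the first local maximum of the autocorrelation of the detrended history:

```python
import numpy as np

def oscillation_period(energy, min_lag=2):
    """Estimate the period of an oscillating iteration history as the lag
    of the first local autocorrelation maximum; returns None if no
    positive peak is found (i.e., no clear oscillation)."""
    e = np.asarray(energy, dtype=float)
    e = e - e.mean()                                   # detrend (remove mean)
    ac = np.correlate(e, e, mode="full")[e.size - 1:]  # lags 0 .. N-1
    ac = ac / ac[0]                                    # normalize to ac[0] = 1
    for lag in range(min_lag, e.size - 1):
        if ac[lag - 1] <= ac[lag] >= ac[lag + 1] and ac[lag] > 0:
            return lag
    return None
```

On a synthetic history oscillating with a 50-iteration period, the diagnostic returns 50.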
We ran the code for a few thousand iterations to verify that the cycle is indeed stable. As the energy slowly increases, a sheared arcade forms similar to the one in $\br$; but as the energy reaches its maximum and the field becomes most similar to $\br$, the field experiences drastic changes. Some of the current-carrying field lines rapidly ``escape'' the domain via the $y=-100$ boundary, which changes the \als values on these field lines (\als is set to zero), as explained in Section~\ref{sec_method}. In the stage of the cycle with the lowest energy, most of the field lines from the core of the region connect to the $y=-100$ boundary and so carry no currents. This may be a valid force-free solution, though it is not consistent with the volume constraints, which require \als to be non-zero at some points in the volume. When the \als values from the volume constraints are reimposed at each iteration, the currents gradually build up again and the cycle repeats. The escape of field lines does not represent a physical evolution of the field as the iterations are not related to any physically meaningful time-like variable. The same oscillatory behavior is found in numerous experiments with this particular test case. \citet{Schrijver2008} report that the pre-flare solution which we use as $\br$ did not fully converge either; it kept oscillating. Below we discuss factors possibly causing the oscillations and a way to damp them. These factors are: (1) numerical noise in \als and therefore in the electric currents that appear even in current-free areas and (2) deviations of the input data from a force-free field (as we discuss below, $\br$ in this case is not exactly force-free even at full resolution). The damping that we consider allows the calculated field to have small variations in \als along field lines as well as small deviations of the solution from the volume constraints.
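Such damping amounts to two element-wise masked updates, one per threshold. A numpy sketch of the thresholded rules (array names are illustrative; this mirrors the revised Steps 1 and 3 formulated later in this section):

```python
import numpy as np

def damped_impose_constraints(alpha, alpha_i, mask, d_alpha_err):
    """Damped Step 1: reimpose alpha_i on constraint points (mask True),
    but only where the current value deviates by at least d_alpha_err."""
    out = alpha.copy()
    update = mask & (np.abs(alpha - alpha_i) >= d_alpha_err)
    out[update] = alpha_i[update]
    return out

def damped_alpha_update(alpha_old, alpha_line_avg, d_alpha_noise):
    """Damped Step 3: accept the field-line average only where it differs
    from the previous value by at least d_alpha_noise; otherwise keep the
    old value, so sub-noise fluctuations do not propagate."""
    update = np.abs(alpha_line_avg - alpha_old) >= d_alpha_noise
    return np.where(update, alpha_line_avg, alpha_old)
```

With a threshold of 0.01, a change of 0.0005 is ignored while a change of 0.2 is accepted, which is exactly the intended filtering of numerical noise versus genuine updates.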
\begin{figure}[!hc] \begin{center} \includegraphics{fig08.eps} \end{center} \caption{Values of CWsin and $E/E\pot$ in the center of the domain (the same region as used in \citet{Schrijver2008}) for the QGR calculation for one of the solar-like fields demonstrating oscillatory behavior. Different stages of this cycle are discussed in the text. The values of $E_\mathrm{ref}$ and CWsin$_{\mathrm{ref}}$ are shown as dashed lines.} \label{alpha_oscill_fig1} \end{figure} \clearpage The first factor is the influence of numerical noise when solving $\nabla\times\bvec^{(n+1)}=\alpha^{(n)}\bvec^{(n)}$ (Step~2 in the algorithm in Section~\ref{sec_method}), especially around sharp edges in $\alpha^{(n)}$, and of the artifacts introduced by the Fourier transforms around these edges. These effects introduce noise in the $\alpha^{(n+1)}$ values obtained in the next step. Figures~\ref{alpha_noise} and~\ref{alpha_noise1} illustrate the amount of such noise and its size relative to the signal. In areas of closed field $|\alpha_{\mathrm{ref}}|\lessapprox 0.8\mbox{ arcsec}^{-1}$ and in the areas with open field (and hence no currents) $|\alpha_{\mathrm{ref}}|\lessapprox 5\times 10^{-3}\mbox{ arcsec}^{-1}$. The flux-weighted distribution of \als evaluated numerically in the current-free region has a half width at half maximum of $\approx10^{-3}$ arcsec$^{-1}$. \begin{figure}[!hc] \begin{center} \includegraphics{fig09.eps} \end{center} \caption{Left panel: image of a horizontal slice of $\alpha_{\mathrm{ref}}$ in the pre-flare Wh$^{+}_{\mathrm{pp}}$ close to the lower boundary (the grayscale goes from $-0.8$ to $0.8$ arcsec$^{-1}$) and two profiles of $\alpha_{\mathrm{ref}}$ (solid line on both profiles) and $\alpha_{\mathrm{pot}}$ (dashed line on both profiles) in this slice. Top right panel: areas with significant currents. Bottom right panel: areas with no currents in both $\br$ and $\bp$. Variations in \als due to numerical uncertainties are $|\alpha|\lessapprox 0.005$ arcsec$^{-1}$.
As we use the same numerical solver, the noise in our case is expected to be of a similar nature and magnitude.} \label{alpha_noise} \end{figure} \begin{figure}[!hc] \begin{center} \includegraphics[width=8cm]{fig10.eps} \end{center} \caption{Histograms of $\alpha_{\mathrm{ref}}$ evaluated numerically on closed field (black line) and open field regions (gray line) in the pre-flare Wh$^{+}_{\mathrm{pp}}$. No currents are allowed on the open field by the GR scheme used to calculate $\br$. Hence, the gray line shows the numerical noise. The distribution of the noise, evaluated from this plot, has a half width at half maximum of $\approx10^{-3}$ arcsec$^{-1}$.} \label{alpha_noise1} \end{figure} The second factor is errors in the volume constraint data. In the case discussed in this section, $\br$ has significant non-zero magnetic forces from the perspective of QGR. As discussed in Section~\ref{sec_method}, for convergence QGR requires not only that the Lorentz force is small everywhere in the volume, but also that the \textit{integrals} of the Lorentz force along the field lines are small. These conditions are not met for $\br$: as shown in Figure~\ref{alpha_constant}, \als changes substantially along central field lines. This is due to the Wh$_{\mathrm{pp}}^{+}$ solutions themselves not converging precisely during the GR iteration used to calculate them \citep[as mentioned in][]{Schrijver2008}. \begin{figure}[!hc] \begin{center} \includegraphics[width=12cm]{fig11.eps} \end{center} \caption{The variation in \als in the pre-flare Wh$^{+}_{\mathrm{pp}}$ field used as a solar-like test case. Left column: selected field lines of $\br$ (dashed red) and stream lines of $\jvec_{\mathrm{ref}}$ initiated at points along these field lines. Right column: profiles of $\alpha_{\mathrm{ref}}$ along these field lines. These panels indicate both small-scale and large-scale variation in $\alpha_{\mathrm{ref}}$ significantly above the noise threshold.
The values of $\langle\alpha_{\mathrm{ref}}\rangle$ and $\langle\alpha_{\mathrm{ref}}\rangle \pm \sigma$ are shown as dashed and dotted lines, respectively.} \label{alpha_constant} \end{figure} To account for these issues we introduce two uncertainty thresholds: $\Delta\alpha_{\mathrm{err}}$ and $\Delta\alpha_{\mathrm{noise}}$. The first allows the solution to have \als values slightly different from the \als values imposed along loop trajectories and the second allows small variations of \als along field lines to damp numerical noise. A revised algorithm is formulated as follows, with the modifications relative to Section~\ref{sec_method} in bold. \begin{enumerate} \item{Impose the volume constraints by setting $\alpha^{(n-1)}=\alpha_i$ along loop trajectories $\path_i$, \textbf{but only at points satisfying ${|\alpha^{(n-1)}-\alpha_i|\geq\Delta\alpha_{\mathrm{err}}}$}.} \item{Calculate updated field values $\bvec^{(n)}$ from Equation~\ref{curl_iter} subject to the prescribed boundary conditions. This equation is solved using a vector potential $\avec^{(n)}$ such that $\bvec^{(n)}=\nabla\times\avec^{(n)}$, so the divergence-free condition is satisfied to truncation error.} \item{Calculate an updated set of values for the force-free parameter $\alpha^{(n)}$: for every point in $\vol$, assign $\alpha^{(n)}=\langle\alpha^{(n-1)}\rangle$ \textit{averaged along the field line in $\bvec^{(n)}$ that passes through that point}, \textbf{but only at points satisfying ${|\alpha^{(n)}-\langle\alpha^{(n-1)}\rangle|\geq\Delta\alpha_{\mathrm{noise}}}$. Otherwise retain the value of \als from the previous iteration}. If a field line leaves the domain through any boundary but the lower one, the value of $\alpha$ is set to zero along it (in common with the Wheatland~2007 GR scheme). This ensures that no currents go off to infinity so that the field's energy remains finite.} \item{Repeat steps 1--3
until $\bvec^{(n)}\approx\bvec^{(n-1)}$ and $\alpha^{(n)}\approx\alpha^{(n-1)}$ to within a tolerance, and therefore $\nabla\times\bvec^{(n)}\approx\alpha^{(n)}\bvec^{(n)}$.} \end{enumerate} For the case of the Wh$^{+}_{\mathrm{pp}}$ field we choose $\Delta\alpha_{\mathrm{err}}=\Delta\alpha_{\mathrm{noise}}\approx 5\times 10^{-4}$ arcsec$^{-1}$. For comparison, significant \als values for the pre-flare $\br$ are $|\alpha_{\mathrm{ref}}|\lessapprox 0.8$ arcsec$^{-1}$, as shown in Figure~\ref{alpha_noise}. Two exceptions are the II.b pre-flare solution on the downsampled domain in the fewer loops case and the II.c pre-flare solution on the full-size domain in the more loops case, for which the error threshold is increased to $5\times 10^{-3}$ arcsec$^{-1}$ in order to damp oscillations. Figure~\ref{wh_pp/energies} shows convergence plots for the II.b and II.c schemes in the fewer loops case. The correction factors $f$ are determined in the same way as in Section~\ref{sec_llf}. We find a best-fit $f=4.0$ for both pre- and post-flare data. In both cases this factor matches the coefficient between $\alpha_{\mathrm{ref}}$ and $\alpha_{\mathrm{fit}}$ (see Figures~\ref{fact_wh_pp} and~\ref{fact_wh_pp_pf}) for individual loops, which provides further evidence in favor of this method of estimating $f$. \begin{figure}[!hc] \begin{center} \includegraphics{fig12.eps} \end{center} \caption{Left panel: average distance between loops from the pre-flare Wh$^{+}_{\mathrm{pp}}$ field and corresponding lines of $\bvec$ for the QGR solution, in the same manner as in Figure~\ref{fact_llf}. The factor $f=4.0$, which yields the best-matching solution, is again close to the coefficient by which $\alpha_{\mathrm{fit}}$ from MLM09 underestimates $\alpha_{\mathrm{ref}}$. Right panel: a scatter plot of $\alpha_{\mathrm{ref}}$ and $\alpha_{\mathrm{fit}}$ for individual loops and a line whose slope equals the best-matching $f$.
Vertical error bars indicate the average variation of \als along loops in the reference field.} \label{fact_wh_pp} \end{figure} \begin{figure}[!hc] \begin{center} \includegraphics{fig13.eps} \end{center} \caption{Same as Figure~\ref{fact_wh_pp}, but for the QGR solution for the post-flare reference field. The best-matching scaling factor $f$ is found to be the same as for the pre-flare field.} \label{fact_wh_pp_pf} \end{figure} The results for the pre- and post-flare reference fields are summarized in Tables~\ref{table_wh_pp}-\ref{table_wh_pp_post-flare} and Figures~\ref{wh_pp/fullres_1iic_25}-\ref{wh_postflare/fullres_1iic_25}. In each case, QGR reproduces the overall shape of field lines and the large-scale features of the current distribution. The reconstructed fields have $\ge50$\% of the free energy and $\ge25$\% of the relative helicity of the reference fields. The reconstructions using half resolution data are slightly inferior to those using full resolution data, based on the metrics in Tables~\ref{table_wh_pp}-\ref{table_wh_pp_post-flare}, but the solutions still reproduce at least half of the free energy, a quarter of the relative helicity, and the large-scale structure of currents of the reference field.
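For concreteness, the two thresholded update rules (steps 1 and 3 of the revised algorithm) can be sketched in a simplified, single-field-line form. The following Python fragment is an illustrative sketch, not the actual code: the array of \als samples and the added wiggle are toy stand-ins, with the thresholds set to the value quoted above.

```python
import numpy as np

# Toy sketch of the two damped update rules (NOT the paper's code).
# `alpha` holds alpha values sampled along one discretized field line.
D_ALPHA_ERR = 5e-4    # arcsec^-1, threshold for re-imposing volume constraints
D_ALPHA_NOISE = 5e-4  # arcsec^-1, threshold for replacing alpha by its average

def impose_constraints(alpha, constraint_idx, alpha_i, d_err=D_ALPHA_ERR):
    """Step 1: re-impose alpha_i at constrained points, but only where the
    current value deviates from the constraint by at least d_err."""
    alpha = alpha.copy()
    for idx, a_i in zip(constraint_idx, alpha_i):
        if abs(alpha[idx] - a_i) >= d_err:
            alpha[idx] = a_i
    return alpha

def average_along_field_line(alpha, d_noise=D_ALPHA_NOISE):
    """Step 3: replace alpha by the field-line average, but only at points
    deviating from that average by at least d_noise (damps numerical noise)."""
    mean = alpha.mean()
    return np.where(np.abs(alpha - mean) >= d_noise, mean, alpha)

# Noise-level wiggle (amplitude 2e-4, below both thresholds) around alpha = 0.5,
# with one constraint point at the field-line footpoint.
alpha = 0.5 + 2e-4 * np.sin(np.linspace(0.0, 10.0, 50))
alpha = impose_constraints(alpha, [0], [0.5])
alpha = average_along_field_line(alpha)
```

Because the toy noise amplitude ($2\times10^{-4}$ arcsec$^{-1}$) is below both thresholds, neither rule fires and the noise-level values are simply retained, so noise alone no longer triggers the re-imposition and averaging churn.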
\vspace{-1cm}\small{\begin{table}[!hc] \begin{tabular}{m{1.0cm}m{1.5cm}m{1.5cm}m{1.5cm}m{1.5cm}m{1.5cm}m{1.5cm}c} & & & & & & & \\ & $\mbox{C}_{\mathrm{vec}}$ & $\mbox{C}_{\mathrm{CS}}$ & $1-E_n$ & $1-E_m$ & CWsin & $E/E\pot$ & $H(\bvec|\bp)$ \\ \hline & & & & & & & \\ \multicolumn{8}{l}{\textbf{Half resolution}} \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{Reference field}}\\ & 1.00 & 1.00 & 1.00 & 1.00 & 0.35 & 1.31 & 1.00 \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{QGR with loop trajectories alone, fewer loops case}} \\ II.b & 0.98 & 0.98 & 0.83 & 0.84 & 0.37 & 1.16 & 0.62 \\ II.c & 0.97 & 0.97 & 0.79 & 0.80 & 0.32 & 1.20 & 0.36 \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{QGR with loop trajectories alone, more loops case}} \\ II.b & 0.98 & 0.98 & 0.82 & 0.83 & 0.34 & 1.23 & 0.60 \\ II.c & 0.97 & 0.97 & 0.77 & 0.76 & 0.30 & 1.27 & 0.21 \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{Potential field}}\\ & 0.86 & 0.94 & 0.62 & 0.70 & --- & 1.00 & 0.00 \\ & & & & & & & \\ \hline & & & & & & & \\ \multicolumn{8}{l}{\textbf{Full resolution}} \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{Reference field}}\\ & 1.00 & 1.00 & 1.00 & 1.00 & 0.24 & 1.32 & 1.00 \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{QGR with loop trajectories alone, fewer loops case}} \\ II.b & 0.98 & 0.99 & 0.85 & 0.86 & 0.11 & 1.18 & 0.64 \\ II.c & 0.98 & 0.98 & 0.80 & 0.81 & 0.07 & 1.27 & 0.43 \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{QGR with loop trajectories alone, more loops case}} \\ II.b & 0.98 & 0.99 & 0.83 & 0.84 & 0.09 & 1.26 & 0.62 \\ II.c & 0.97 & 0.97 & 0.77 & 0.77 & 0.08 & 1.30 & 0.29 \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{Potential field}}\\ & 0.86 & 0.94 & 0.62 & 0.70 & --- & 1.00 & 0.00 \\ \hline \end{tabular} \caption{\small{Metrics for the pre-flare reference field. The numbers for II.c solution are reported for the $f=4.0$. 
The downsampled fewer loops II.b case and the full resolution more loops II.c case are unstable for $\Delta\alpha_{\mathrm{err}}=5\times 10^{-4}$ arcsec$^{-1}$; the reported values in these cases are for $\Delta\alpha_{\mathrm{err}}=5\times 10^{-3}$ arcsec$^{-1}$.}} \label{table_wh_pp} \end{table}} \voffset=-1.5cm \hoffset=-1cm \begin{figure}[!hc] \begin{center} \includegraphics{fig14.eps} \end{center} \caption{QGR solutions for the pre-flare Wh$_{\mathrm{pp}}^+$ reference field in full resolution for the fewer loops case using schemes II.b and II.c (QGR with ideal and realistic loop input, that is, reconstructed from 2D loop projections --- refer to Figure~\ref{input_data_types}). Panels \textit{(a)}-\textit{(i)}: field lines, line-of-sight integrated magnitude of current and horizontal slice of \als for $\br$ and $\bvec$ for II.b and II.c. Panel \textit{(j)}: field lines of $\bp$ (all field lines are traced from the same starting points). Panels \textit{(h)}, \textit{(l)}: line-of-sight integrated volume constraints for II.b and II.c.} \label{wh_pp/fullres_1iic_25} \end{figure} \voffset=-1.5cm \hoffset=-1cm \begin{figure}[!hc] \begin{center} \includegraphics{fig15.eps} \end{center} \caption{QGR solutions for the pre-flare Wh$_{\mathrm{pp}}^+$ reference field in half resolution for the more loops case using schemes II.b and II.c (QGR with ideal and realistic loop input, that is, reconstructed from 2D loop projections --- refer to Figure~\ref{input_data_types}). Panels \textit{(a)}-\textit{(i)}: field lines, line-of-sight integrated magnitude of current and horizontal slice of \als for $\br$ and $\bvec$ for II.b and II.c. Panel \textit{(j)}: field lines of $\bp$ (all field lines are traced from the same starting points).
Panels \textit{(h)}, \textit{(l)}: line-of-sight integrated volume constraints for II.b and II.c.} \label{wh_pp/1iic_25_ml} \end{figure} \begin{table} \centering \begin{tabular}{m{1.0cm}m{1.9cm}m{1.9cm}m{1.9cm}m{1.9cm}m{1.9cm}m{1.9cm}c} & & & & & & & \\ & $\mbox{C}_{\mathrm{vec}}$ & $\mbox{C}_{\mathrm{CS}}$ & $1-E_n$ & $1-E_m$ & CWsin & $E/E\pot$ & $H(\bvec|\bp)$ \\ & & & & & & & \\ \hline & & & & & & & \\ \multicolumn{8}{l}{\textbf{Half resolution}} \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{Reference field}}\\ & 1.00 & 1.00 & 1.00 & 1.00 & 0.13 & 1.16 & 1.00 \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{QGR with loop trajectories alone}}\\ II.b & 0.99 & 0.99 & 0.86 & 0.87 & 0.10 & 1.07 & 0.48 \\ II.c & 0.99 & 0.99 & 0.88 & 0.87 & 0.07 & 1.13 & 0.63 \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{Potential field}}\\ & 0.94 & 0.97 & 0.76 & 0.80 & --- & 1.00 & 0.00 \\ & & & & & & & \\ \hline & & & & & & & \\ \multicolumn{8}{l}{\textbf{Full resolution}} \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{Reference field}}\\ & 1.00 & 1.00 & 1.00 & 1.00 & 0.17 & 1.14 & 1.00 \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{QGR with loop trajectories alone}}\\ II.b & 0.99 & 0.99 & 0.89 & 0.88 & 0.13 & 1.09 & 0.50 \\ II.c & 0.99 & 0.99 & 0.89 & 0.88 & 0.10 & 1.14 & 0.69 \\ & & & & & & & \\ \multicolumn{8}{l}{\textit{Potential field}}\\ & 0.93 & 0.97 & 0.75 & 0.80 & --- & 1.00 & 0.00 \\ & & & & & & & \\ \hline \end{tabular} \caption{Metrics for the QGR results for the post-flare reference field. The numbers for II.c are reported for the $f=4.0$ solution. 
For notation, refer to Figure~\ref{input_data_types}.} \label{table_wh_pp_post-flare} \end{table} \voffset=-1.5cm \hoffset=-1cm \begin{figure}[!hc] \begin{center} \includegraphics{fig16.eps} \end{center} \caption{QGR solutions for the post-flare Wh$_{\mathrm{pp}}^+$ reference field in full resolution using schemes II.b and II.c (QGR with ideal and realistic loop input, that is, reconstructed from 2D loop projections --- refer to Figure~\ref{input_data_types}). Panels \textit{(a)}-\textit{(i)}: field lines, line-of-sight integrated magnitude of current and horizontal slice of \als for $\br$ and $\bvec$ for II.b and II.c. Panel \textit{(j)}: field lines of $\bp$ (all field lines are traced from the same starting points). Panels \textit{(h)}, \textit{(l)}: line-of-sight integrated volume constraints for II.b and II.c.} \label{wh_postflare/fullres_1iic_25} \end{figure} \voffset=-1.5cm \hoffset=-1cm \begin{figure}[!hc] \begin{center} \includegraphics{fig17.eps} \end{center} \caption{Energy $E/E\pot$ at each iteration as a demonstration of the convergence of the different schemes for the pre-flare and post-flare Wh$^{+}_{\mathrm{pp}}$ fields. These particular plots correspond to the downsampled datacube for the fewer loops case.} \label{wh_pp/energies} \end{figure} \section{Discussion and Conclusions}\label{sec_summ} In this study we demonstrate that coronal loops provide a useful source of information for determining the structure of the coronal magnetic field. While the observed loops do not cover all of the coronal volume, they provide information about the \textit{shape of the coronal field lines}, which boundary data alone lack. We demonstrate a method that constructs nonlinear force-free fields using line-of-sight magnetograms and coronal loops observed in the plane-of-sky projection. This may mitigate the problems NLFFF schemes encounter with currents determined from vector magnetograms \citep{Demoulin1997b}.
The loops are first approximated by lines of constant-\als fields, with different \als values for each loop. This is done using an existing scheme developed by \citet{Malanushenko2009b}, which we refer to as the MLM09 fit in this paper. The approximate \als values along the approximate loop trajectories are treated as volume constraints in a quasi Grad-Rubin algorithm, using the code modified from \citet{Wheatland2009}. The method, which we refer to as the Quasi Grad-Rubin method (or QGR), is tested on several nonlinear force-free fields and the results demonstrate good performance. While traditional extrapolations of coronal magnetic fields have been found to provide poor matches to coronal features observed in X-rays and EUV \citep{DeRosa2009}, the fields created by QGR are \textit{constructed} with the aim of matching observed coronal features and thus may provide a more realistic model of the actual coronal magnetic field. The problem of constructing a nonlinear force-free field is typically viewed as a boundary value problem requiring an extrapolation of the field from the boundaries to the volume of the corona. However, throughout this paper we purposely avoid referring to QGR as an extrapolation scheme, because it is not one. It is a mixture of extrapolation of the magnetic field and \textit{inter}polation of the electric currents. The reason we tend to view the step of filling the volume with \als values as an interpolation-like procedure is as follows. At each iteration, \als is averaged along lines of the field at the present iteration. If the solution has not converged yet, this field is different from the one at the previous iteration, so \als is averaged across lines of the field of the previous iteration (see Section~\ref{sec_method}). Substantial differences in \als values on such field lines result in extreme values getting ``spread'' through the volume and the values of \als being ``smoothed'' along field lines.
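This spreading-and-smoothing behavior can be illustrated with a toy numerical experiment. The sketch below is an assumed, simplified stand-in: instead of averaging along 3D field lines that shift between iterations, it averages over random regroupings of sample points. Since averaging is a convex combination, extremes can only shrink and the distribution of \als smooths out.

```python
import numpy as np

# Toy illustration (assumed setup, not the paper's code): averaging alpha over
# successive, slightly different groupings of points (standing in for field
# lines that change between iterations) smooths the distribution, and the
# averaged values can never exceed the imposed extremes in magnitude.
rng = np.random.default_rng(0)
alpha = rng.uniform(-0.8, 0.8, size=200)   # imposed alpha values (arcsec^-1)

for _ in range(20):                         # 20 mock "iterations"
    order = rng.permutation(200)            # regroup points into new "field lines"
    for line in np.array_split(order, 20):  # 20 lines of 10 points each
        alpha[line] = alpha[line].mean()    # step-3-style averaging
```

The averaged values stay within the imposed extremes ($\pm0.8$ arcsec$^{-1}$ here) and their spread collapses toward the global mean, consistent with the interpolation-like behavior described above.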
Smaller differences between the fields from two consecutive iterations should result in \als smoothing out over shorter distances. So at each consecutive iteration $n$ the \als values are smoothed across field lines over a distance which depends on the magnitude of the angle between $\bvec^{(n)}$ and $\bvec^{(n-1)}||\jvec^{(n)}$, and therefore on the Lorentz force at the $n$-th iteration. The process therefore results in a smooth distribution of \als in the volume, and decreasing the Lorentz forces implies smaller-scale changes in this already smooth distribution. The described scheme cannot produce \als larger in magnitude than the volume constraints and it tends to produce a smooth transition of \als between these constraints in such a way as to minimize Lorentz forces. This explains the interpolation-like nature of QGR with respect to \als. This scheme does not resolve fine structure of the ``interpolated'' variable (in this case, \als), as no interpolation scheme can, but it successfully approximates general trends, as expected from an interpolation scheme. We also develop a way to deal with the uncertainties in the input data and the numerical noise. The uncertainties in the observables produce inconsistency with a force-free solution. As such uncertainties are expected, this is an important feature of the method. QGR in the form described in Section~\ref{sec_karel} allows the volume constraints not to be re-imposed if the average \als on a field line which passes through a given constraint point is within a small prescribed amount of the constraint value $\alpha_i$. This means that a magnetic field which is force-free but imperfectly matches given volume constraints for \als would not be changed by the method. Like any numerical scheme, QGR is also prone to numerical noise. We are able to determine the range of this noise for a given problem. The method assumes that \als below this noise level is numerically indistinguishable from zero.
It also assumes that the average \als along a given field line may vary within this noise range. It therefore does not replace \als by the newly determined average along the field line if that average differs by less than the numerical noise threshold from the previously determined value. We note that fitting loops with lines of \citet{Chiu1977} constant-\als fields results in the underestimation of \als and verify that this is at least partly due to a difference in the size of the volumes which contain currents (finite in the reference cases and a half space in the fields used for fitting, see Section~\ref{sec_dipole}). We determine that the underestimation coefficient is roughly the same for most loops and that this coefficient may be determined from observables (projected loops). Applying this determined coefficient leads to a good match between the reference field and the model, as demonstrated in Section~\ref{sec_tests}. A rigorous proof of the nature of such a coefficient and its analytic evaluation are subjects of future studies. We do not find substantial differences when reconstructing a reference field on a full size domain or on a down-sampled domain. This might be due to the smoothness of fields created by QGR owing to its interpolation-like nature, for the reasons discussed above. This is an important result, as the test case in Section~\ref{sec_karel} has current structure finer than the grid size in the down-sampled test, which may also be the case when modeling coronal fields. While developed for currents approximated from loops in EUV and X-ray images, QGR yields better results when currents are measured exactly, e.g., from vector magnetograms (II.b inputs in Sections~\ref{sec_llf} and~\ref{sec_karel}). This gives hope that as vector magnetograms become more applicable for NLFFF modeling (at least in the cores of active regions), performance at the II.b level could be achieved.
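The estimate of the underestimation coefficient from per-loop pairs of fitted and reference \als values amounts to a least-squares slope through the origin. A minimal sketch with synthetic data (the sample values and scatter are our illustrative assumptions, not the paper's loops):

```python
import numpy as np

# Illustrative sketch with synthetic data (assumed noise level; not the
# paper's loops): the correction factor f is the least-squares slope through
# the origin of the per-loop (alpha_fit, alpha_ref) scatter.
rng = np.random.default_rng(1)
alpha_fit = rng.uniform(-0.2, 0.2, size=40)              # mock per-loop fits
alpha_ref = 4.0 * alpha_fit + rng.normal(0.0, 0.02, 40)  # mock reference values

# Slope through the origin: f = sum(x*y) / sum(x*x)
f = np.dot(alpha_fit, alpha_ref) / np.dot(alpha_fit, alpha_fit)
```

With the synthetic data the recovered slope is close to the true factor of 4.0; in the paper the analogous per-loop slope matches the best-matching $f$ found from the loop-distance minimization.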
This method could also benefit from exact knowledge of the 3D shapes of the loops, e.g., derived from combined STEREO and SDO observations. Overall we find that the method developed in this paper is able to recover over half of the free energy and over a quarter of the helicity for the solar-like test case fields, which is more than was reported for previously tested methods \citep{Metcalf2008, Schrijver2008, DeRosa2009}. The method recovers large-scale features of the field well, such as the structure of currents, the shapes of field lines, and the connectivity of the field, but it fails to resolve fine structure. We nonetheless find that the large-scale structure determines at least half of the free energy and a quarter of the relative helicity, and therefore QGR may be used to provide estimates of these quantities. This work was supported by AIA contract NNG04EA00C to the Lockheed Martin Advanced Technology Center through a grant to Montana State University, in collaboration with the University of Sydney. \textit{Hinode} is a Japanese mission developed and launched by ISAS/JAXA, with NAOJ as domestic partner and NASA and STFC (UK) as international partners. It is operated by these agencies in co-operation with ESA and NSC (Norway).
\section{Introduction}\label{sec:intro} A DC network is a power system with electrical power flows in the form of direct current~\cite{dorfler2018electrical}. DC networks have found promising applications in low- and medium-voltage power systems with high penetrations of DC loads and generators, such as distribution systems, microgrids, shipboard electrical networks, data centers, etc.~\cite{dragivcevic2016dc}. An operating point for a DC network satisfies the network's power flow equations. System operators seek the optimal operating point which maximizes economic efficiency while satisfying various physical and operational constraints~\cite{dragivcevic2016dc}. The optimal operating point can be found by solving an optimal power flow (OPF) problem~\cite{cain2012history}. For AC power systems, the OPF problem has recently been shown to be NP-hard~\cite{lehmann2016ac}. Many research efforts have attempted to improve the tractability of AC-OPF problems using various approximations and relaxations of the power flow equations~\cite{molzahn2018survey}. Recent research has studied OPF problems for DC networks (DN-OPF)~\cite{farasat2015ga,montoya2018linear, gan2014optimal,li2018optimal,montoya2018opf,garces2018newton, inam2016stability,7464840}. Note that DN-OPF problems are very different from OPF problems for AC systems that use the linear ``DC'' power flow approximation to obtain linear programming formulations (often termed \mbox{DC-OPF} problems~\cite{stott2009dc}). Rather, DN-OPF problems incorporate the nonlinear power flow equations associated with DC networks, resulting in non-convex optimization problems~\cite{garces2018newton}. A variety of methods have been applied to solve DN-OPF problems. In~\cite{farasat2015ga}, a genetic algorithm is applied to solve the OPF problem for a DC distribution system. In~\cite{montoya2018linear}, linearization techniques are used to simplify the problem.
Other methods~\cite{gan2014optimal, li2018optimal, montoya2018opf} employ second-order cone programming and quadratic convex programming to relax a DN-OPF problem into a convex formulation. This existing work demonstrates the capability to effectively solve various DN-OPF problems. Despite recent advances, existing work~\cite{gan2014optimal,li2018optimal,montoya2018opf,montoya2018linear} has two major limitations. First, previous results primarily focus on deterministic DN-OPF problems where the loading conditions are assumed to be fixed and known \textit{a priori}. Second, previous results do not consider the stability characteristics of DN-OPF solutions. Nevertheless, with high penetrations of intermittent generation and variable loads, uncertainty in the net loading conditions is a characteristic feature of DC networks~\cite{7762882}. Ensuring stability despite this uncertainty is a key concern for secure and reliable operation of DC networks~\cite{riccobono2014comprehensive,liu2018existence}. The lack of stability considerations when choosing an operating point may result in instability. Moreover, directly applying the OPF decisions computed using a specific scenario to an uncertain system can cause unpredictable deviations of the system operating point from the designated value~\cite{louca2018robust,molzahn2018towards}. This may lead to violations of operational constraints and possibly cause voltage collapse~\cite{simpson2016voltage,cui2018voltage}, where the power flow equations no longer admit a solution. We propose a robust stability-constrained DN-OPF algorithm to address these limitations. We focus on a generic DC network with nonlinear constant power loads (CPLs) whose demands have interval uncertainties~\cite{dragivcevic2016dc}.
We seek to select the generators' voltage set points in order to minimize operational costs while ensuring the existence of stable and secure operating points for all realizations of the uncertain loading conditions. In other words, solutions resulting from our algorithm guarantee: 1)~robust stability (local exponential stability of the operating point) and 2)~robust feasibility (existence of power flow solutions) and security (the satisfaction of all other operational constraints). To provide robust stability and feasibility guarantees, we formulate a \mbox{DN-OPF} problem that incorporates uncertainty and stability conditions. Solving this problem is difficult. First, existing stability conditions for DC networks are developed to study given operating points~\cite{liu2018robust}; hence, ensuring stability when operating points are decision variables is challenging. Additionally, to ensure robustness, the power flow equations along with the stability condition need to jointly hold for all uncertainty realizations. This results in a semi-infinite programming (SIP) problem~\cite{hettich1993semi} that is generally computationally intractable~\cite{mulvey1995robust}. There exist methods, like the minimax robust optimization approach~\cite{ben2002robust} and scenario methods~\cite{mulvey1995robust}, to solve a convex SIP problem by transforming it into a more tractable problem. Nevertheless, existing approaches either cannot guarantee the existence of a solution to the original problem or are computationally expensive~\cite{ben2009robust}. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{illu_idea.pdf} \caption{\label{fig:idea}Illustration of the proposed work.} \end{figure} The proposed algorithm converts the SIP problem into a tractable formulation that resembles a well-studied DN-OPF problem. Fig.~\ref{fig:idea} illustrates the key idea of the proposed work.
The main technical tasks are summarized as follows: \begin{itemize} \item[(1)] For a given polytopically constrained uncertainty set for the loads, we first derive a polytopic stability set in the voltage space such that any operating point therein is guaranteed to be stable. \item[(2)] We study the solvability of the DC network power flow equations to certify the unique existence of an operating point in a feasibility set, for a given load profile. This feasibility set depends on the generator voltage set points and the loads. \item[(3)] By solving a tractable problem reminiscent of a \mbox{DN-OPF} problem, we compute generator voltage set points which ensure that the feasibility sets associated with all uncertainty realizations in the given load uncertainty set are contained within the intersection of the stability set and an operational constraint set. This guarantees robustly stable and robustly feasible operation. \end{itemize} Using existing stability analysis results~\cite{liu2018robust}, we first certify whether a given polytope in the voltage space is a stability set using a linear matrix inequality (LMI) feasibility test. With an initial polytope set, we can find an optimal scaling of its size to determine the largest stability set with respect to the initial set. The scaling can be efficiently found by solving a generalized eigenvalue problem~\cite{boyd2004convex}. We then employ AC power flow feasibility results~\cite{wang2018explicit} to derive a condition which ensures that the DC network power flow equation always has a solution lying in a polytope whose center and radius depend on the generators' voltage set points. Lastly, we formulate a \mbox{DN-OPF} problem to optimally design the voltage set points so that the operating points for all uncertainty realizations are always within the intersection of the stability set and operational constraint set.
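As a greatly simplified numerical stand-in for the stability certification in task (1) (an illustrative assumption on our part; the actual test is an LMI feasibility problem over a voltage polytope), one can linearize a single source-line-CPL circuit at candidate load voltages and check whether the Jacobian is Hurwitz. The CPL contributes the destabilizing $p_\ell/v_\ell^2$ term.

```python
import numpy as np

# Greatly simplified stand-in (our assumption, not the paper's LMI test) for
# certifying a stability interval: linearize a single source-line-CPL circuit
# at candidate load voltages and require the Jacobian to be Hurwitz.
R_t, L_t = 0.1, 1e-3   # line resistance (ohm) and inductance (H), illustrative
R_s, C_s = 0.05, 1e-3  # source resistance and capacitance
R_l, C_l = 10.0, 1e-3  # load shunt resistance and capacitance
p_l = 100.0            # constant power load demand (W)

def jacobian(v_l):
    """Jacobian of the (i_t, v_s, v_l) dynamics at load voltage v_l.
    The CPL contributes the destabilizing +p_l/v_l**2 entry."""
    return np.array([
        [-R_t / L_t,  1.0 / L_t,         -1.0 / L_t],
        [-1.0 / C_s, -1.0 / (R_s * C_s),  0.0],
        [ 1.0 / C_l,  0.0,               (-1.0 / R_l + p_l / v_l**2) / C_l],
    ])

def is_hurwitz(A):
    """True if every eigenvalue of A has a negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0.0))

# Check the endpoints of a candidate voltage interval; the CPL term is worst
# at the low-voltage end.
stable = all(is_hurwitz(jacobian(v)) for v in (40.0, 60.0))
```

At sufficiently low voltages the $p_\ell/v_\ell^2$ term overwhelms the shunt conductance and the same test fails, reflecting the negative impedance instability associated with CPLs.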
The problem is tractable and resembles ordinary DN-OPF problems studied in the literature~\cite{gan2014optimal}. We prove that any solution of this tractable problem is a feasible point of the original intractable SIP problem. Therefore, the solution guarantees robust feasibility and stability. To the best of our knowledge, this work is among the first to solve an OPF problem with robust stability and feasibility guarantees that does not rely on simplifying assumptions such as special load models~\cite{louca2018robust} or power flow solution existence~\cite{molzahn2018towards}. The rest of the paper is organized as follows: Section~\ref{sec:prob} first introduces notation and the steady-state and dynamic models of a DC network and then formulates the problem considered in this paper. Section~\ref{sec:technical} discusses the development of the proposed algorithm. Section~\ref{sec:simu} demonstrates the efficacy of the proposed work using case study simulations. Section~\ref{sec:sum} concludes the paper and discusses future research directions. \section{System Modeling and Problem Statement}\label{sec:prob} \subsection{Notation} In this paper, we use $\mathbf{1}$ and $\mathbf{0}$ to represent vectors of all 1's and 0's of appropriate sizes, and use $I$ to represent the identity matrix of appropriate size. Recall that a square matrix $A$ is Hurwitz if all real parts of its eigenvalues are negative. In addition, we use $A_{j,k}$ to denote the element in the $j$-th row and $k$-th column. For a vector $v$, let $v_k$ represent its $k$-th element. Let the operator $\diag\{v\}$ yield a diagonal matrix with the vector's components as the diagonal entries. For a real square matrix $A$, $A^{-1}$ denotes its inverse and $A \succ 0$ means it is symmetric positive definite.
\begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{unitop_2.pdf} \caption{\label{fig:topo}Example DC power network.} \end{figure} \subsection{DC Power Systems}\label{sec:prob_model} In this paper, we focus on a DC network with $n_s$ generators, $n_\ell$ loads, and $n_t$ power lines. The total number of these components is $n=n_s+n_t+n_\ell$. Let the index sets of generators, loads, and power lines be $\mathcal{N}_s$, $\mathcal{N}_\ell$, and $\mathcal{E}_t$, respectively. Fig.~\ref{fig:topo} shows an example DC network consisting of lumped $\pi$-equivalent models~\cite{7798761} where generators and loads are interconnected via equivalent RLC circuits~\cite{dorfler2018electrical}. \subsubsection{Load and Generator Models} Fig.~\ref{fig:indi_gl} shows a zoomed-in image of one part of the circuit. Suppose the circuit contains the $k$-th generator, the $p$-th power line, and the $j$-th load. Let $i_{to}(t)$ and $i_{td}(t)$ represent the currents flowing into and out of the circuit, respectively. Loads are modeled as constant power loads (CPLs) that are connected in parallel with a lumped shunt resistor by Norton's Theorem. It is well known that a CPL is a nonlinear load and that its negative impedance effect is a major source of instability in a DC network~\cite{7182770,6415284}. It is modeled as a nonlinear current sink with current injection equal to the power demand divided by the terminal voltage. For the $j$-th load, let $p_{\ell j}$ represent its power demand, and let $v_{\ell j}$ represent the terminal voltage. At the nominal condition, $p_{\ell j}=p_{\ell j}^*$, where $p_{\ell j}^*$ is a given constant. Each $p_{\ell j}$ can be considered as a perturbation to $p_{\ell j}^*$ that is unknown and bounded within a given uncertainty interval $[\ubar{p}_{\ell j},\bar{p}_{\ell j}]$ where $\bar{p}_{\ell j}\geq \ubar{p}_{\ell j}$. The uncertainty interval may stem from probabilistic measures of demand fluctuations or from the physical capacity constraints of loads.
Let $\mathcal{P}_\ell$ be the polytopic uncertainty set for all loads, that is, $\mathcal{P}_\ell = \{p_\ell: p_{\ell k}\in [\ubar{p}_{\ell k},\bar{p}_{\ell k}], k \in \mathcal{N}_\ell \}$. Let $R_{\ell j}$ and $C_{\ell j}$ represent the load resistance and capacitance, respectively. \begin{figure}[h] \centering \includegraphics[width=0.52\textwidth]{indi_gl.pdf} \caption{\label{fig:indi_gl}Zoomed-in image of the dynamic circuit.} \end{figure} \begin{rem} Generators are modeled as voltage sources~% \cite{7798761} that are in series with equivalent resistors by Th\'evenin's Theorem. We assume that proper low-level controllers~% \cite{dragivcevic2016dc} have been employed to regulate the terminal voltage of a generator to track a reference set point. Consequently, the generator can automatically vary power outputs to respond to changing loading conditions. The generator internal dynamics, including those from low-level controllers and electromechanical transients, are not considered in this paper, and we mainly focus on the network dynamics contributed by electromagnetic transients in the stability analysis. Nevertheless, the main results of the paper can be extended to include various generator dynamics as well. For the $k$-th source, let $v^{\mathrm{ref}}_k$ be the controllable voltage set point, $v_{sk}$ be the external generator voltage, and $R_{sk}$, $C_{sk}$ represent the source resistance and capacitance, respectively. We impose operational constraints on $v^{\mathrm{ref}}$ such that vector $v^{\mathrm{ref}}$ which includes all voltage set points needs to lie in a given convex constraint set $\mathcal{V}^{\mathrm{ref}}$. The main results of this paper can be extended to DC networks with other generator and load models. For example, constant-current and constant-impedance loads are linear and can be easily incorporated in the model.
Additionally, for generators with V-I droop control~% \cite{dragivcevic2016dc}, the voltage set point can be considered as the droop reference and the droop gains can be modeled as virtual impedances that are included in the RLC circuits. \end{rem} \subsubsection{Dynamic Network Model} Sources and loads are connected to DC buses. The buses form a connected graph where a bus is a node and an edge is a $\pi$-equivalent power line. For the $p$-th power line, let $i_{tp}(t)$ be the current flow and let $R_{tp}$ and $L_{tp}$ represent line resistance and inductance. In this paper, the dynamics of the system are mainly associated with the RLC circuit that connects all the components~% \cite{dorfler2018electrical}. Note that some dynamic controllers have recently been developed for DC networks~% \cite{dragivcevic2016dc}. Our results can be easily extended to cover the additional dynamics introduced by these controllers. We exemplify the modeling approach using the circuit shown in Fig.~% \ref{fig:indi_gl}. The state variables of the example circuit are the voltages of the capacitors and the currents through the inductors, namely, $v_{sk}(t)$, $v_{\ell j}(t)$, and $i_{tp}(t)$. The design variable is the voltage set point of the source, $v^{\mathrm{ref}}_k$. The dynamics of the circuit are represented by the following model using Kirchhoff's current and voltage laws, \begin{subequations} \label{eq:mdl_circ} \begin{align} \dot{i}_{tp}(t) &= \frac{1}{L_{tp}} \left( v_{sk}(t)-R_{tp}i_{tp}(t)-v_{\ell j}(t) \right), \\ \dot{v}_{sk}(t) &= \frac{1}{C_{sk}} \left( \frac{v^{\mathrm{ref}}_k-v_{sk}(t)}{R_{sk}} + i_{to}(t) - i_{tp}(t) \right), \\ \dot{v}_{\ell j}(t) &= \frac{1}{C_{\ell j}} \left(-\frac{v_{\ell j}(t)}{R_{\ell j}} - i_{td}(t) + i_{tp}(t) - \frac{p_{\ell j}}{v_{\ell j}(t)} \right). \end{align} \end{subequations} The first two equations in~\eqref{eq:mdl_circ} characterize the behavior of the power line and the source.
They are linear in the state variables and the design variable. However, the last equation is nonlinear due to the term $p_{\ell j}/v_{\ell j}(t)$. Recall that $i_{to}(t)$ and $i_{td}(t)$ represent the aggregate currents of the other lines connected to the source bus and the load bus, respectively; these line currents have dynamics similar to those of $i_{tp}(t)$. The modeling approach can be applied to the entire system. By dropping the subscripts indicating variable indices, $p_\ell, v_\ell, v_s, i_t$, $v^{\mathrm{ref}}$ represent the vectors of load powers, load voltages, generator external voltages, power line currents, and controllable voltage set points, respectively. Let $x=[i_t^\top,v_{s}^\top,v_{\ell}^\top]^\top$ be the vector of state variables and $h(x,p_\ell)=[p_{\ell1}/v_{\ell1}, \cdots, p_{\ell n_\ell}/v_{\ell n_\ell}]^\top$, where $\left(\,\cdot\,\right)^\top$ is the transpose. With the above description and notation, the overall dynamics of the DC grid can be written as follows: \begin{equation} \dot{x}(t)=Ax(t)+Bv^{\mathrm{ref}}+Ch(x(t),p_\ell), \quad p_\ell\in \mathcal{P}_\ell, \label{eq:gencls} \end{equation} where the matrices $A\in \mathbb{R}^{n\times n}$, $B\in \mathbb{R}^{n\times n_s}$, and $C\in \mathbb{R}^{n\times n_\ell}$ are constant matrices that are determined by the network topology and RLC circuit parameters through methods similar to those in~% \cite{liu2018robust}. This is a well-accepted model for DC network stability studies that is applicable to a variety of DC power systems~\cite{dragivcevic2016dc,kalcon2012dctrans,1658410}, for instance, DC transmission systems~% \cite{kalcon2012dctrans}. Let $x^e = [(i_t^e)^\top, (v_s^e)^\top, (v_\ell^e)^\top]^\top \in \mathbb{R}^{n}$ be an equilibrium of~\eqref{eq:gencls}.
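As a sanity check of model~\eqref{eq:mdl_circ}, a single source--line--CPL circuit can be simulated directly. The sketch below is purely illustrative: the RLC values, set point, and load power are assumed, and the aggregate currents $i_{to}$ and $i_{td}$ are zero since there is only one line.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed parameters for a single source -> line -> CPL circuit
R_s, C_s = 0.05, 0.75e-3    # source resistance [Ohm] / capacitance [F]
R_t, L_t = 0.05, 3e-3       # line resistance [Ohm] / inductance [H]
R_l, C_l = 5.0, 0.9e-3      # load shunt resistance [Ohm] / capacitance [F]
v_ref, p_l = 500.0, 25e3    # voltage set point [V], CPL demand [W]

def dynamics(t, x):
    """Kirchhoff-law model: states are the line current and the two bus voltages."""
    i_t, v_s, v_l = x
    di = (v_s - R_t * i_t - v_l) / L_t
    dvs = ((v_ref - v_s) / R_s - i_t) / C_s       # i_to = 0 (single line)
    dvl = (-v_l / R_l + i_t - p_l / v_l) / C_l    # i_td = 0; nonlinear CPL term
    return [di, dvs, dvl]

# The circuit is stiff (the source branch is fast), so use an implicit solver;
# start near the anticipated operating point to avoid energization transients
sol = solve_ivp(dynamics, (0.0, 0.5), [150.0, 492.0, 485.0],
                method="Radau", rtol=1e-8, atol=1e-8)
i_e, v_s_e, v_l_e = sol.y[:, -1]   # steady state reached well before t = 0.5 s
```

For these values the trajectory settles at a load voltage of roughly $485$~V, and the steady state satisfies the power balance $i_t = v_\ell/R_\ell + p_\ell/v_\ell$, which is the scalar version of the power flow model introduced below.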
For $v^e_{\ell j} \neq 0, \forall j \in \mathcal{N}_\ell$, system~\eqref{eq:gencls} can be linearized around $x^e$ as \begin{equation} \dot{x}(t) = J(v^e_\ell ,p_\ell) x(t), \end{equation} where the corresponding Jacobian matrix $J(v^e_\ell ,p_\ell)$ depends only on $p_\ell$ and $v^e_\ell$. In other words, the stability of an equilibrium only depends on the CPL power and the steady-state CPL voltage. Specifically, the Jacobian matrix contains terms in the form of $p_{\ell j}/(v^e_{\ell j})^2$. We know from \cite[Sect. 4.3]{Khalil2002nonlinear} that the equilibrium is asymptotically stable if there exists a matrix $P = P^\top \succ 0$ that satisfies the following condition: \begin{equation} PJ(v^e_\ell, p_\ell)+J(v^e_\ell, p_\ell)^\top P \prec 0. \label{eq:con_stab} \end{equation} If $p_\ell$ and $v^e_\ell$ are given, this condition is a linear matrix inequality (LMI) constraint in $P$. However, in our problem, $p_\ell$ is uncertain, $v^e_\ell$ is unknown, and the coupling between $p_\ell$, $v^e_\ell$, and $P$ is non-polynomial. \subsubsection{Power Flow Model} The power flow model describes the steady-state behavior at an operating point of a DC network. The power flow model is obtained by setting the left-hand side of~\eqref{eq:gencls} to $\mathbf{0}$ and rearranging terms as \begin{align} p_\ell=\diag\{v_\ell^e\}\left( Y_{\ell \ell}v_\ell^e+Y_{\ell s}v^{\mathrm{ref}}\right), \label{eq:pfe_ori} \end{align} where the terms in parentheses represent current injections. The connectivity between CPL-source and CPL-CPL pairs is described by two admittance matrices $Y_{\ell s}\in \mathbb{R}^{n_\ell\times n_s}$ and $Y_{\ell\ell}\in \mathbb{R}^{n_\ell\times n_\ell}$~% \cite{liu2018existence}, which are submatrices of the system admittance matrix $Y$. Equation~\eqref{eq:pfe_ori} is quadratic in the state variables $v_\ell^e$ and bilinear in the design variables $v^{\mathrm{ref}}$ and state variables $v_\ell^e$.
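For one fixed pair $(v^e_\ell, p_\ell)$, condition~\eqref{eq:con_stab} can be checked without an LMI solver by solving a Lyapunov equation. The following sketch does this for a hypothetical three-state single-CPL circuit; all parameter values and the equilibrium voltage are assumed for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Assumed single-CPL circuit parameters and operating point
R_s, C_s, R_t, L_t = 0.05, 0.75e-3, 0.05, 3e-3
R_l, C_l = 5.0, 0.9e-3
p_l, v_e = 25e3, 485.0
delta = p_l / v_e**2          # the p_l/(v_l^e)^2 term entering the Jacobian

# Jacobian of the (i_t, v_s, v_l) circuit dynamics at the equilibrium
J = np.array([
    [-R_t / L_t,  1.0 / L_t,           -1.0 / L_t             ],
    [-1.0 / C_s, -1.0 / (R_s * C_s),    0.0                   ],
    [ 1.0 / C_l,  0.0,                 (delta - 1.0/R_l) / C_l],
])

# Solve J^T P + P J = -I; for a Hurwitz J the unique solution satisfies P > 0,
# so P is exactly a certificate of the kind required by the stability condition
P = solve_continuous_lyapunov(J.T, -np.eye(3))
P = (P + P.T) / 2.0           # symmetrize against round-off
lmi = P @ J + J.T @ P         # should be negative definite
```

The difficulty addressed in the remainder of the paper is that this check needs a known equilibrium and a known load; the robust problem must certify all $(v^e_\ell, p_\ell)$ in a set simultaneously.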
\vspace{0.1cm} \subsection{Problem Statement}\label{sec:prob_problem} Recall that the Jacobian $J(v^e_\ell, p_\ell)$ depends on system operating points and uncertain loading conditions. From power flow equation~\eqref{eq:pfe_ori}, an operating point depends on the generators' voltage set points. Therefore, a poorly designed $v^{\mathrm{ref}}$ may 1)~put the stability of system~\eqref{eq:gencls} at risk, and 2)~cause~\eqref{eq:pfe_ori} to admit no solution. The goal of this work is to choose the value of $v^{\mathrm{ref}}$ which results in the minimum operating cost at the nominal condition while guaranteeing that the system is robustly feasible and stable. We make the terms \emph{robustly feasible} and \emph{robustly stable} precise in Definition~\ref{def:feas+stab} below: \begin{definition} \label{def:feas+stab} Given generator voltage set point $v^{\mathrm{ref}}$, system~\eqref{eq:gencls} is said to be \emph{robustly feasible} if, for every $p_\ell \in \mathcal{P}_\ell$, it admits an equilibrium $x^e$ which satisfies all operational constraints. The system is said to be \textit{robustly stable} if, for every $p_\ell \in \mathcal{P}_\ell$, there exists a corresponding $v^e_\ell$ such that the Jacobian $J(v^e_\ell ,p_\ell)$ is Hurwitz. \end{definition} Desirable operating points for power systems are usually computed by solving optimal power flow (OPF) problems~% \cite{cain2012history}. Recently, OPF problems for DC networks (DN-OPF) have been studied as well~% \cite{gan2014optimal,li2018optimal,montoya2018opf,montoya2018linear}. The formulation of existing DN-OPF problems can be summarized as follows, \vspace{0.2cm} \begin{subequations}\label{eq:prob_ori} \begin{align} \hspace{-2.4cm} \textbf{DN-OPF$^*$: }\quad\quad\quad &\min_{v^{\mathrm{ref}}} f(v^{\mathrm{ref}},v^{e*}_\ell),\text{ subj.
to}\\ &~\eqref{eq:pfe_ori}, \quad p_\ell = p^*_\ell,\\ &v_\ell^e\in \mathcal{V}_\ell^{e},\quad v^{\mathrm{ref}}\in \mathcal{V}^{\mathrm{ref}},\label{eq:con_pfe_ori} \end{align} \end{subequations} where $p^*_\ell\in \mathbb{R}^{n_\ell}$ and $v^{e*}_\ell \in \mathbb{R}^{n_\ell}$ are the CPL power profile and voltage at the nominal condition, $f:\mathbb{R}^{n_s}\times \mathbb{R}^{n_\ell}\to \mathbb{R}$ is usually a convex cost function representing the operating cost (e.g., power loss or generation cost), and $\mathcal{V}^{e}_\ell \subset \mathbb{R}^{n_\ell}$ is the convex operational constraint set of $v^e_\ell$ representing system operational requirements such as bounds on load voltages.\footnote{To simplify the presentation, we only consider the state constraints related to the load voltages, which are directly relevant to the system stability. Other state variables are linear functions of the load voltages and the voltage set points of the generators. The proposed algorithm can be easily extended to incorporate constraints on these variables.} When the cost function is a quadratic function of the decision variables, problem~\eqref{eq:prob_ori} is a quadratically constrained quadratic program. Recently, effective methods have been developed to solve this problem~% \cite{gan2014optimal,li2018optimal,montoya2018opf,montoya2018linear} using approximation and convex relaxation techniques. However, problem~\eqref{eq:prob_ori} only considers a fixed loading condition and does not explicitly consider system stability. If the actual load differs from the nominal load, the system may settle at an unintended operating point and possibly even become unstable. To address these limitations, we focus on the following problem with explicit constraints guaranteeing robust feasibility and robust stability: \begin{subequations}\label{eq:prob1} \begin{align} \hspace{-1.1cm}\textbf{R.
DN-OPF SIP: }\quad\quad &\min_{v^{\mathrm{ref}},P\succ 0} f(v^{\mathrm{ref}},v^{e*}_\ell),\text{ subj. to}\\ &~\eqref{eq:con_stab},\,~\eqref{eq:pfe_ori},\,~\eqref{eq:con_pfe_ori},\quad \forall p_\ell\in \mathcal{P}_\ell.\label{eq:con_seminf} \end{align} \end{subequations} Compared to problem~\eqref{eq:prob_ori}, we add constraint~\eqref{eq:con_stab} which is sufficient to provide stability guarantees. We also require all of the constraints to hold for any $p_\ell\in \mathcal{P}_\ell$ in order to ensure robust feasibility and robust stability in the presence of uncertainty. Existing methods for~\eqref{eq:prob_ori} cannot be directly applied to~\eqref{eq:prob1}. Problem~\eqref{eq:prob1} is intractable in general due to the infinite number of constraints on the decision variables used to ensure robust feasibility and stability. This makes~\eqref{eq:prob1} a semi-infinite programming (SIP) problem~% \cite{ben2002robust}. Finding a tractable reformulation for the SIP problem \eqref{eq:prob1} is challenging due to the non-convexity of the stability condition \eqref{eq:con_stab} and the power flow equations \eqref{eq:pfe_ori}. \section{Tractable DN-OPF with Robust \\ Feasibility and Stability Guarantees}\label{sec:technical} This section presents our algorithm for transforming the computationally intractable problem~\eqref{eq:prob1} into a tractable formulation. As illustrated in Fig.~\ref{fig:idea}, the algorithm involves three major steps: First, we formulate a computationally efficient optimization problem to find a set of $v^e_\ell$ that satisfies \eqref{eq:con_stab} for all $p_\ell \in \mathcal{P}_\ell$. Second, we develop a sufficient condition for the existence of $v^e_\ell$ in a set depending on the voltage set points $v^{\text{ref}}$. Third, we develop a DN-OPF problem whose solution $v^{\text{ref}}$ steers $v^e_\ell$ into a desired set.
\subsection{Robust Stability Set}\label{sec:robstab} To compute a set of $v^e_\ell$ that can satisfy~\eqref{eq:con_stab}, it suffices to find a convex inner approximation of the feasibility region of~\eqref{eq:con_stab}. For every $j \in \mathcal{N}_\ell$, let $\bar{v}^e_{\ell j}$ and $\ubar{v}^e_{\ell j}$ be the upper and lower bounds of $v^e_{\ell j}$. In this paper, we assume that $\ubar{v}^e_{\ell j} > 0$ to allow only positive steady-state voltages~% \cite{de2016power}. Let $\mathcal{V}^s_\ell=\left\{ v^e_\ell :\; \ubar{v}^e_{\ell j} \leq v^e_{\ell j} \leq \bar{v}^e_{\ell j}, \forall j \in \mathcal{N}_\ell \right\}$ be a polytope of interest. \begin{definition} \label{def:robstab} A set $\mathcal{V}^s_\ell \subseteq \mathbb{R}^{n_\ell}$ is called a \emph{robust stability set} if there exists a positive definite matrix $P$ such that~\eqref{eq:con_stab} is satisfied for all $p_\ell\in \mathcal{P}_\ell$ and $v^e_\ell \in \mathcal{V}^s_\ell$. \end{definition} We seek a common Lyapunov function $Q=x^\top Px$ which certifies that the Jacobian matrix, $J(v^e_\ell,p_\ell)$, is Hurwitz for any $p_\ell \in \mathcal{P}_\ell$ and $v^e_\ell \in \mathcal{V}^s_\ell$. We first transform the non-polynomial constraint~\eqref{eq:con_stab} into bilinear matrix inequalities (BMIs). Then, we show that the infinitely many BMIs can be further reduced to a finite number of linear matrix inequalities (LMIs). Finally, we formulate a generalized eigenvalue problem (GEVP)~% \cite{boyd1994linear} to compute a convex robust stability set. \subsubsection{BMI} For notational brevity, define $\delta_j \triangleq p_{\ell j} / (v_{\ell j}^e)^2$ for $j \in \mathcal{N}_\ell$ and let $\delta \in \mathbb{R}^{n_\ell}$ be the corresponding vector. Let $\delta_j$ be bounded by box constraints with upper and lower bounds defined by $\bar{\delta}_j=\bar{p}_{\ell j}/(\ubar{v}^e_{\ell j})^2$ and $\ubar{\delta}_j = \ubar{p}_{\ell j}/(\bar{v}^e_{\ell j})^2$, respectively.
Hence, $\delta$ is contained in a polytopic set defined by $\Delta=\{\delta:\ubar{\delta}_j\leq \delta_j\leq \bar{\delta}_j\}$. Let $\delta^v_k$ be the vertices of the set~$\Delta$ for \mbox{$k\in \mathcal{M}^v \triangleq \{ 1, \ldots, 2^{n_\ell}\}$.} Thus, the Jacobian matrix of~\eqref{eq:gencls} can be equivalently transformed into a function of $\delta$, denoted by $J(\delta)$. It can be easily checked that $J(\delta)=A+D\diag\{\delta\}$, where $D$ is an $n\times n_\ell$ matrix with $D_{n_t+n_s+j,j}=1/C_{\ell j}$ for $j\in \mathcal{N}_\ell$, and all the other elements are zero \cite{liu2018robust}. Since the Jacobian $J(\delta)$ is affine in $\delta$, the stability condition~\eqref{eq:con_stab} is implied by the following conditions, which are BMIs in $P$ and $\delta$: \begin{equation} PJ(\delta) + J(\delta)^\top P\prec 0,\quad \forall \delta\in \Delta.\label{eq:BMIs} \end{equation} \subsubsection{Infinite BMIs to Finite LMIs} Since $\Delta$ is a polytope, the feasibility of BMIs~\eqref{eq:BMIs} is implied by the feasibility of the following finitely many constraints: \begin{align} P J(\delta^v_k) + J(\delta^v_k)^\top P \prec 0,\quad P = P^\top \succ 0, \quad k\in \mathcal{M}^v.\label{eq:LMItest} \end{align} Since each $\delta^v_k$ is a constant vector, \eqref{eq:LMItest} is a convex LMI feasibility testing problem \cite{boyd2004convex}. \subsubsection{Computational Complexity and Conservativeness} The number of constraints in the LMI feasibility test~\eqref{eq:LMItest} is exponential in the dimension of the uncertainty. Fortunately, there exist methods to reduce the number of constraints. For example, only one LMI needs to be checked to ensure the feasibility of~\eqref{eq:LMItest}. Let $\lambda>0$, $\hat{A}=A+D\diag\{(\bar{\delta}+\ubar{\delta})/2\}$, and let $\delta^{\max}=(\bar{\delta}-\ubar{\delta})/2$. In addition, let $\hat{C}=[\mathbf{0}, \diag\{\delta^{\max}\}]$ and $\hat{B}=[\mathbf{0};I]$.
The feasibility of the following single LMI is a sufficient condition for the feasibility of~\eqref{eq:LMItest}~% \cite{ben2001lectures}: \begin{align} \label{eq:singleLMItest} \left[ \begin{array}{cc} P\hat{A}+\hat{A}^\top P+\lambda \hat{C}^\top\hat{C} & P\hat{B}\\ \hat{B}^\top P & -\lambda I \end{array} \right]&\prec 0,\; P\succ 0, \; \lambda>0. \end{align} However, the improvement in computational tractability is accompanied by a rise in the conservativeness of the reformulated problem. One method to evaluate the conservativeness of various conditions is to compare the volumes of the largest sets that these conditions can certify as stability sets. Empirical studies show that the largest set that the single-LMI condition~\eqref{eq:singleLMItest} can certify is only about 40$\%$ of the volume of the set obtained from the LMIs~\eqref{eq:LMItest}. Still, there exist methods to reduce the number of LMIs to be polynomially dependent on the dimension of the uncertainty~% \cite{liu2018robust,ben2009robust}. Numerical tests show that the resulting conditions can certify a set with a volume above 90$\%$ of that obtained from~\eqref{eq:LMItest}. These conditions thus have promising applicability for practical usage. Nevertheless, the transformation of LMIs~\eqref{eq:LMItest} is beyond the scope of this paper. For the sake of brevity, we use~\eqref{eq:LMItest} to present the numerical results in Section~\ref{sec:simu}. \subsubsection{Largest Robust Stability Set} To determine the robust stability set, we simply need to find a choice for $\Delta$ that satisfies~\eqref{eq:BMIs}. Once $\Delta$ is determined, $\mathcal{V}^s_\ell$ can be conveniently derived based on the coupling between $\delta$ and $v^e_\ell$. An appropriate choice for $\Delta$ can be found by scaling an initial guess. The optimal scaling factor can be found by employing GEVP techniques.
Let the initial guess be $\Delta^0=\{\delta:-\delta_j^0 \leq \delta_j \leq \delta_j^0 \}$ where $ \delta_j^0>0$.\footnote{The choices for the upper and lower bounds on each $\delta_j$ do not need to be symmetric, in general. We choose symmetric bounds for the sake of simplicity.} Let $\alpha > 0$ be a scaling factor. We next find the largest $\alpha$ that makes~\eqref{eq:LMItest} feasible. Recall that $J(\delta)=A+D\diag\{\delta\}$. The largest $\alpha$ can be found by solving the following GEVP: \begin{align} \hspace{-1.55cm}\textbf{GEVP: }\quad\quad\quad&\min_{P\succ 0,\beta>0}\,\, \beta,\quad\text{subj. to}\label{eq:prob_gevp}\\ G_k+&\beta \left(PA+A^\top P\right)\prec 0,\quad k\in \mathcal{M}^v, \nonumber\\ P\!A+&A^\top P\prec 0,\nonumber \end{align} where $G_k=P\left(D\diag\{\delta^{v0}_k\}\right) + \left(D\diag\{\delta^{v0}_k\}\right)^\top P$, $\delta^{v0}_k$ is a vertex of $\Delta^0$, and $\beta = 1/\alpha$. Problem~\eqref{eq:prob_gevp} is quasi-convex and can be efficiently solved~% \cite{boyd2004convex}. Additionally, for a given $\Delta^0$,~\eqref{eq:prob_gevp} only requires knowledge of the system matrix $A$, which is based on the network topology and electrical parameters that are usually fixed. Hence,~\eqref{eq:prob_gevp} can be solved once (off-line) prior to solving multiple DN-OPF problems with different loading conditions. A proper initial guess can also be determined before solving the DN-OPF problem. One applicable choice is letting $\delta^0_j=\bar{p}_{\ell j}$. In many practical DC networks, the steady-state voltage levels of different buses are fairly close to each other~% \cite{john2015resistive}. This choice is effective when the voltage constraints are also fairly close to each other.
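Once a candidate $P$ is available, the vertex test~\eqref{eq:LMItest} is a direct eigenvalue computation. The sketch below is illustrative only: it uses the single-CPL circuit with assumed parameters and, instead of solving the GEVP, takes $P$ to be the stored-energy matrix $\frac{1}{2}\diag\{L_t, C_s, C_\ell\}$, for which $PJ(\delta)+J(\delta)^\top P$ turns out to be diagonal and the test reduces to $\bar{\delta} < 1/R_\ell$.

```python
import numpy as np
from itertools import product

# Assumed single-CPL circuit parameters (n_l = 1, so Delta has two vertices)
R_s, C_s, R_t, L_t, R_l, C_l = 0.05, 0.75e-3, 0.05, 3e-3, 5.0, 0.9e-3

def J(delta):
    """Jacobian J(delta) = A + D diag{delta} of the (i_t, v_s, v_l) circuit."""
    return np.array([
        [-R_t / L_t,  1.0 / L_t,           -1.0 / L_t             ],
        [-1.0 / C_s, -1.0 / (R_s * C_s),    0.0                   ],
        [ 1.0 / C_l,  0.0,                 (delta - 1.0/R_l) / C_l],
    ])

# Candidate Lyapunov matrix: the stored electromagnetic energy x^T P x
P = 0.5 * np.diag([L_t, C_s, C_l])

def vertex_test(lo, hi):
    """Check P J(d) + J(d)^T P < 0 at every vertex of the box [lo, hi]^{n_l}."""
    return all(
        np.max(np.linalg.eigvalsh(P @ J(d) + J(d).T @ P)) < 0
        for (d,) in product([lo, hi], repeat=1)
    )

certified = vertex_test(0.0, 0.15)      # 0.15 S < 1/R_l = 0.2 S -> certified
not_certified = vertex_test(0.0, 0.25)  # 0.25 S > 1/R_l        -> test fails
```

Translating the certified bound $\delta \le \bar{\delta}$ back through $\delta = p_\ell/(v^e_\ell)^2$ yields a lower bound $v^e_\ell \ge \sqrt{\bar{p}_\ell/\bar{\delta}}$, consistent with the half-plane form of the robust stability set derived next.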
Based on the discussion above, the following condition characterizes the robust stability set: \begin{prop}~\label{prop:stab} With given $\Delta^0$, if $\beta$ is a solution of~\eqref{eq:prob_gevp}, the set $\mathcal{V}^{s}_\ell$ defined in the following expression is a robust stability set: \begin{equation} \mathcal{V}^{s}_\ell = \left\{v^e_\ell:v^e_{\ell k}\geq \sqrt{\beta\cdot \bar{p}_{\ell k}/ \delta^0_k }\right\}.\nonumber \end{equation} \end{prop} Note that the robust stability set only imposes a deterministic lower bound on each $v_{\ell k}^e$, meaning that the feasibility region of~\eqref{eq:con_stab} can be approximated by half planes. This observation is consistent with the practical experience that higher voltage levels are more preferable for maintaining stability, and thus provides implications for the design and operation of DC networks. \subsection{Feasibility Condition}\label{sec:robfeas} Robust stability can only be guaranteed when there exists a power flow solution $v^e_\ell$ in $\mathcal{V}^s_\ell \cap \mathcal{V}^e_\ell$. We derive a condition to ensure such existence for any $p_\ell\in \mathcal{P}_\ell$ by leveraging a recent result on AC power flow solvability~% \cite{wang2018explicit}: with fixed generator voltage set points $v^{\mathrm{ref}}$, we characterize a compact set in $\mathbb{R}^{n_\ell}$ inside which a unique power flow solution $v_\ell^{e}$ is guaranteed to exist for every $p_\ell \in \mathcal{P}_\ell$.
Given load power $p_\ell^*$ and a generator voltage set point $v^{\mathrm{ref}}$, the nominal load voltage $v_\ell^{e*}$ is the solution of the following nominal power flow equation: \begin{equation} p^*_\ell\!=\!\diag\{v_\ell^{e*}\}\left( Y_{\ell \ell}v_\ell^{e*}+Y_{\ell s}v^{\mathrm{ref}}\right).\label{eq:pfe_nom} \end{equation} Furthermore, we define the open-circuit voltage vector \mbox{$w \in \mathbb{R}^{n_\ell}$} and a dimensionless matrix $\tilde{Z}_{\ell \ell}$ as follows: \begin{equation}\label{eq:zll} w \triangleq -Y_{\ell \ell}^{-1} Y_{\ell s} v^{\mathrm{ref}},\, \tilde{Z}_{\ell \ell} \triangleq \diag\{w\}^{-1}Y_{\ell \ell}^{-1}\diag\{w\}^{-1}. \end{equation} The vector $w$ can be viewed as the equivalent generator voltage seen by each load bus and $\tilde{Z}_{\ell \ell}$ is the impedance matrix normalized by the open-circuit voltages. Finally, denote the minimum normalized load voltage $u^{\min}$ as $u^{\min} \triangleq \min_{j \in \mathcal{N}_\ell} \frac{v_{\ell j}^{e^*}}{w_j}$. Using the setup above, we restate the AC power flow solvability result from~% \cite{wang2018explicit}: \begin{prop}[\hspace{1sp}{\cite[Thm. 1]{wang2018explicit}}] \label{prop:robfeas} Given load power $p_\ell^*$, generator set point $v^{\text{ref}}$, and the corresponding power flow solution $v_\ell^{e^*}$ satisfying $\|\tilde{Z}_{\ell \ell} p_\ell^* \|_\infty < (u^{\min})^2$, if the following inequality holds, \begin{equation} \label{eq:wang_condition} \Gamma_s\! \triangleq\!
\left( u^{\min} - \frac{\|\tilde{Z}_{\ell \ell} p_\ell^* \|_\infty}{u^{\min}} \right)^2 - 4\|\tilde{Z}_{\ell \ell} \left( p_\ell^* - p_\ell \right) \|_\infty > 0, \end{equation} the power flow equation \eqref{eq:pfe_ori} with load power $p_\ell$ admits a unique solution in \begin{equation} \label{eq:D} \mathcal{D}(p_\ell) = \left\{ v_\ell^e \in \mathbb{R}^{n_\ell} :\; \left| v_{\ell j}^e - v_{\ell j}^{e^*} \right| \le rw_j, \, j \in \mathcal{N}_\ell \right\}, \end{equation} where \begin{equation} \label{eq:r} r \triangleq \frac{ \left( u^{\min} - \frac{\|\tilde{Z}_{\ell \ell} p_\ell^* \|_\infty}{u^{\min}} \right) - \sqrt{\Gamma_s} }{2}. \end{equation} \end{prop} Proposition~\ref{prop:robfeas} provides a sufficient condition which ensures the unique existence of a power flow solution in a polytope. When the generator set points $v^{\mathrm{ref}}$, the nominal load $p_\ell^*$, and the power flow solution $v_\ell^{e^*}$ are all given, the polytope is always centered at $v^{e*}_{\ell}$ with a radius that depends only on $p_\ell$. \subsection{Tractable Robust DN-OPF Formulation}\label{sec:tra_opf} With the robust stability set derived in Section~\ref{sec:robstab} and the feasibility condition derived in Section~\ref{sec:robfeas}, our objective now is to drive all $\mathcal{D}(p_\ell)$ into $\mathcal{V}_\ell^s \cap \mathcal{V}_\ell^e$. With given $p^*_\ell$ and fixed $v^{\text{ref}}$, all sets $\mathcal{D}(p_\ell)$ are centered at the same $v^{e*}_\ell$. The radii, on the other hand, differ. Intuitively, we can find the largest radius among these sets, and correspondingly define a new set in the form of~\eqref{eq:D}. Every $\mathcal{D}(p_\ell)$ must lie inside this set. It is easy to see from~\eqref{eq:wang_condition} and~\eqref{eq:r} that the quantities $w$, $\tilde{Z}_{\ell \ell}$, and $u_{\min}$ only depend on $v^{\text{ref}}$ and $p^*_\ell$.
In the radius $rw_j$, the factor $r$ depends on $p_\ell$ and increases as $\|p_\ell^* - p_\ell\|_1$ increases. Therefore, to ensure robust feasibility, i.e., to ensure that for every $p_\ell \in \mathcal{P}_\ell$ there exists a power flow solution lying in $\mathcal{V}_\ell^s \cap \mathcal{V}_\ell^e$, we only need to ensure that $\mathcal{D}(p_\ell) \subseteq \mathcal{V}_\ell^s \cap \mathcal{V}_\ell^e$ for the value of $p_\ell$ that maximizes $\|p_\ell^* - p_\ell\|_1$. We denote such $p_\ell$ as $p_\ell^m$, that is, $p_\ell^m = \argmax_{p_\ell \in \mathcal{P}_\ell} \|p_\ell^* - p_\ell\|_1$. For a given $\mathcal{P}_\ell$ and $p^*_\ell$, $p^m_\ell$ is a constant vector that can be easily obtained element-wise, since the 1-norm and the box constraints are both separable: \begin{equation} p^m_{\ell j} = \argmax_{p_{\ell j}\in [\ubar{p}_{\ell j},\bar{p}_{\ell j}]} |p^*_{\ell j}-p_{\ell j}|, \quad j\in \mathcal{N}_\ell,\nonumber \end{equation} which is attained at the endpoint of the uncertainty interval farthest from the nominal value. Using $p^m_\ell$, we seek to bound the variation of $v^e_\ell$. Define \begin{subequations}\label{eq:def_r} \begin{align} &\bar{r} \triangleq \frac{ \left( u^{\min} - \frac{\|\tilde{Z}_{\ell \ell} p_\ell^* \|_\infty}{u^{\min}} \right) - \sqrt{\bar{\Gamma}_s} }{2}, \\ &\bar{\Gamma}_s\! \!\triangleq\! \!\left( u^{\min} \!\!- \frac{\|\tilde{Z}_{\ell \ell} p_\ell^* \|_\infty}{u^{\min}} \right)^2\! \!\!- \!4\|\tilde{Z}_{\ell \ell} \left( p_\ell^* - p_\ell^m \right) \|_\infty \!>\! 0.
\end{align} \end{subequations} It suffices to enforce the following constraints to ensure the existence of $v^e_\ell$ in $\mathcal{V}_\ell^s \cap \mathcal{V}_\ell^e$ for all $p_\ell$: \begin{align}\label{eq:robfeas} v_{\ell j}^{e^*} - \bar{r}w_j \ge \ubar{v}_j, \quad v_{\ell j}^{e^*} + \bar{r}w_j \le \bar{v}_j, \quad j \in \mathcal{N}_\ell, \end{align} where $\ubar{v}_j$ and $\bar{v}_j$ are element-wise load voltage bounds from $\mathcal{V}_\ell^s \cap \mathcal{V}_\ell^e$ for all $j \in \mathcal{N}_\ell$. Notice that $\bar{r}w_j$ represents the largest deviation of $v^e_{\ell j}$ from $v^{e*}_{\ell j}$. If~\eqref{eq:robfeas} holds, there must be a $v^e_{\ell}$ lying inside the desired set $\mathcal{V}_\ell^s \cap \mathcal{V}_\ell^e$. The left-hand sides of~\eqref{eq:robfeas} are functions of the nominal load voltage and generator set points. The right-hand sides are known constants derived from $\mathcal{V}_\ell^s \cap \mathcal{V}_\ell^e$. We can thus ensure satisfaction of~\eqref{eq:robfeas} by finding proper values for the nominal load voltage and generator set points. Such values for $v^{e*}_\ell$ and $v^{\text{ref}}$ are determined by solving the following DN-OPF problem: \begin{subequations}\label{eq:prob_final_1} \begin{align} \hspace{-0.3cm} \textbf{R. DN-OPF*: }\,\, &\min_{v^{\mathrm{ref}},u^{\text{min}}} f(v^{\mathrm{ref}},v^{e*}_\ell),\text{ subj. to} \\ &~\eqref{eq:pfe_nom},\,~\eqref{eq:zll},\,~\eqref{eq:def_r},\, ~\eqref{eq:robfeas}, \,~v^{\mathrm{ref}}\in \mathcal{V}^{\mathrm{ref}},\label{eq:con_pfe_tran}\\ &u^{\min}w_j \leq v_{\ell j}^{e^*},\, \, j\in \mathcal{N}_\ell. \label{eq:cons_umin} \end{align} \end{subequations} In problem~\eqref{eq:prob_final_1}, recall that~\eqref{eq:pfe_nom} denotes the power flow equations at the nominal condition. Constraints~\eqref{eq:zll}, \eqref{eq:def_r}, \eqref{eq:robfeas}, and~\eqref{eq:cons_umin} ensure satisfaction of the feasibility condition.
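The feasibility machinery above can be traced numerically on a hypothetical two-bus network (one source, one CPL; all values assumed for illustration). The sketch computes $w$, $\tilde{Z}_{\ell\ell}$, $u^{\min}$, $\bar{\Gamma}_s$, and $\bar{r}$, and then verifies that the power flow solutions over the whole uncertainty interval stay within the certified ball of radius $\bar{r}w$.

```python
import numpy as np

# Assumed two-bus network: one source, one CPL; line 0.1 Ohm, shunt 5 Ohm.
# The admittance blocks follow the paper's sign convention, so that
# p_l = v_l * (Y_ll v_l + Y_ls v_ref) holds with consumed power p_l > 0.
R_line, R_shunt = 0.1, 5.0
Y_ll = np.array([[-(1.0/R_line + 1.0/R_shunt)]])
Y_ls = np.array([[1.0/R_line]])
v_ref = np.array([500.0])
p_star, p_lo, p_hi = 25e3, 0.0, 50e3    # nominal load and uncertainty interval

def solve_pf(p):
    """High-voltage root of the scalar power flow p = v (Y_ll v + Y_ls v_ref)."""
    a, b, c = Y_ll[0, 0], (Y_ls @ v_ref)[0], -p
    return (-b - np.sqrt(b*b - 4*a*c)) / (2*a)

v_star = solve_pf(p_star)
w = -np.linalg.inv(Y_ll) @ Y_ls @ v_ref              # open-circuit voltage
Z = np.diag(1.0/w) @ np.linalg.inv(Y_ll) @ np.diag(1.0/w)
u_min = float(v_star / w[0])
zp = np.linalg.norm(Z @ np.array([p_star]), np.inf)
p_m = p_hi if p_hi - p_star >= p_star - p_lo else p_lo   # farthest endpoint
zdev = np.linalg.norm(Z @ np.array([p_star - p_m]), np.inf)

gamma_bar = (u_min - zp/u_min)**2 - 4.0*zdev         # must be positive
r_bar = ((u_min - zp/u_min) - np.sqrt(gamma_bar)) / 2.0
radius = r_bar * w[0]                                # certified deviation bound [V]
```

For these numbers $\bar{\Gamma}_s \approx 0.92$ and $\bar{r}w \approx 5.2$~V around $v^{e*}_\ell \approx 485$~V; in this scalar example the extreme load $p^m_\ell$ in fact attains the bound, so the certificate is essentially tight.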
\begin{rem} Problem~\eqref{eq:prob_final_1} can be transformed into a quadratically constrained quadratic program (if $f(v^{\text{ref}},v^{e*}_\ell)$ is a quadratic function) similar to~\eqref{eq:prob_ori} in order to exploit existing solution algorithms. We only need to show that~\eqref{eq:def_r} can be equivalently transformed into quadratic constraints. Let $s_{1j}$ be the $j$-th element of the vector $\tilde{Z}_{\ell \ell} p_\ell^*$ and let $s_{2j}$ be the $j$-th element of the vector $\tilde{Z}_{\ell \ell} \left( p_\ell^* - p_\ell^m \right)$. From~\eqref{eq:zll}, both $s_{1j}$ and $s_{2j}$ are quadratic functions in $v^{\text{ref}}$. Let $a=\| \tilde{Z}_{\ell \ell}p^*_\ell \|_\infty$, $b=\|\tilde{Z}_{\ell \ell} \left( p_\ell^* - p_\ell^m \right)\|_\infty$, $c = \sqrt{\bar{\Gamma}_s}$, and $d = a/u^{\text{min}}$. For~\eqref{eq:def_r}, we can apply the following transformation based on a change of variables: \begin{subequations}\label{eq:quadratic} \begin{align} &2\bar{r} = \left( u^{\min} - d \right) - c , \quad c^2 = \left( u^{\min} - d \right)^2 - 4b, \label{eq:expla_r}\\ &\left( u^{\min} - d \right)^2 > 4b ,\quad \,d \cdot u^{\text{min}}= a, \label{eq:expla_gam}\\ & a > s_{1j}, \, a > -s_{1j},\, b > s_{2j}, \, b > -s_{2j}, \, j\in \mathcal{N}_\ell.\label{eq:expla_change} \end{align} \end{subequations} It is straightforward to check that~\eqref{eq:quadratic} only contains linear and quadratic constraints in the new variables $a$, $b$, $c$, $d$, and the decision variables $v^{\mathrm{ref}}$ and $u^{\mathrm{min}}$. \end{rem} From the solution of~\eqref{eq:prob_final_1}, we can also find the value of each $\bar{r}w_j$. According to~\eqref{eq:robfeas}, this provides bounds for $v^e_\ell$ that are valid for any $p_\ell\in \mathcal{P}_\ell$.
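The change of variables in the Remark can be checked numerically: for fixed values of $a$, $b$, and $u^{\min}$ (assumed here for illustration), the quadratic relations~\eqref{eq:quadratic} recover the same $\bar{r}$ as the direct definition~\eqref{eq:def_r}.

```python
import math

# Assumed values of the infinity norms and the minimum normalized voltage
a, b, u_min = 0.0102, 0.0102, 0.9897   # a = ||Z p*||, b = ||Z (p* - p^m)||

# Direct evaluation of Gamma_bar and r_bar
gamma_bar = (u_min - a/u_min)**2 - 4.0*b
r_direct = ((u_min - a/u_min) - math.sqrt(gamma_bar)) / 2.0

# Quadratic-constraint form after the change of variables:
#   d u_min = a,   c^2 = (u_min - d)^2 - 4 b,   2 r = (u_min - d) - c
d = a / u_min
c = math.sqrt((u_min - d)**2 - 4.0*b)
r_quad = ((u_min - d) - c) / 2.0
```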
With given values of $p^m_\ell$ and the robust stability set $\mathcal{V}^s_\ell$, we state the main result of the paper as follows: \begin{thm}\label{thm:main} Any solution $v^{\mathrm{ref}}$ of problem~\eqref{eq:prob_final_1} is a feasible point of SIP~\eqref{eq:prob1}. \end{thm} \begin{proof} It suffices to show that for all $p_\ell\in \mathcal{P}_\ell$, stability constraint~\eqref{eq:con_stab} is satisfied, power flow equation~\eqref{eq:pfe_ori} is feasible, and~\eqref{eq:con_pfe_ori} holds true with the solution of~\eqref{eq:prob_final_1}, $v^{\text{ref}}$. For any $p_\ell\in \mathcal{P}_\ell$, since $v^{\text{ref}}$ satisfies~\eqref{eq:pfe_nom},~\eqref{eq:zll},~\eqref{eq:def_r}, and~\eqref{eq:cons_umin}, Proposition~\ref{prop:robfeas} shows that power flow equation~\eqref{eq:pfe_ori} must admit a unique solution in $\mathcal{D}(p_\ell)$. From the satisfaction of constraint~\eqref{eq:robfeas}, $\mathcal{D}(p_\ell)$ lies in $\mathcal{V}^s_\ell \cap \mathcal{V}^e_\ell$, which shows that the power flow solution exists in the robust stability set while satisfying the operational constraints. From Proposition~\ref{prop:stab} and Definition~\ref{def:robstab}, stability constraint~\eqref{eq:con_stab} is always satisfied. In addition, it is easy to check that~\eqref{eq:con_pfe_ori} holds as well. This completes the proof. \end{proof} Theorem~\ref{thm:main} certifies that the solution to the tractable optimization problem~\eqref{eq:prob_final_1} provides generator set points which guarantee robust feasibility and stability.
To summarize the results discussed in Sections~\ref{sec:prob}, \ref{sec:robstab}, and~\ref{sec:robfeas}, the proposed robust DN-OPF method can be summarized as follows: \begin{algorithm} \caption{Find a feasible point, $v^{\mathrm{ref}}$, for SIP~\eqref{eq:prob1}}\label{alg:main} \textbf{Input:} System matrices $A$, $B$, $C$, $D$, nominal load $p^*_\ell$, load set $\mathcal{P}_\ell$, constraint sets $\mathcal{V}^e_\ell$ and $\mathcal{V}^{\mathrm{ref}}$, and cost function $f(v^{\mathrm{ref}},v^{e^*}_\ell)$.\\ \textbf{Output:} A solution $v^{\mathrm{ref}}$. \begin{algorithmic}[1] \item[Step 1:] Select a $\Delta_0$ and solve GEVP~\eqref{eq:prob_gevp} to find $\alpha$. \item[Step 2:] Use $\alpha$, $\Delta_0$, and $\mathcal{P}_\ell$ to find robust stability set $\mathcal{V}^s_\ell$. \item[Step 3:] Find $p_\ell^m = \argmax_{p_\ell \in \mathcal{P}_\ell} \|p_\ell^* - p_\ell\|_1$. \item[Step 4:] Solve problem~\eqref{eq:prob_final_1} to find $v^{\text{ref}}$. \end{algorithmic} \end{algorithm} Each step in Algorithm~\ref{alg:main} is computationally tractable. Propositions~\ref{prop:stab} and~\ref{prop:robfeas} along with Theorem~\ref{thm:main} certify that the output of the algorithm solves SIP problem~\eqref{eq:prob1}. \section{Case Studies}\label{sec:simu} This section uses simulation case studies to demonstrate the effectiveness of the proposed algorithm. The simulations are conducted on a desktop computer with an Intel Core i7 processor and 32~GB of RAM. The optimization problems are solved using IPOPT~% \cite{Wachter2006}, and the simulations are performed in Matlab/Simulink. We first show the efficacy and the limited conservativeness of the proposed approach with a 14-bus system. We then demonstrate its computational efficiency. \subsection{Illustrative Case Study of a 14-Bus System} We first focus on an example DC network whose topology and bus types are the same as the IEEE 14-bus system.
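Step 3 of Algorithm~\ref{alg:main} is easy to carry out when the load set is a box: $\|p_\ell^*-p_\ell\|_1$ is convex and separable, so it is maximized componentwise at whichever box endpoint lies farther from the nominal load. A minimal sketch, assuming a hypothetical box uncertainty set (the bus count and load range mirror the 14-bus case study below):

```python
import numpy as np

# Hypothetical box uncertainty set [p_lo, p_hi] and nominal load p_star (kW):
# eleven CPLs, each free to vary in [0, 50] around a nominal value of 25.
p_lo = np.zeros(11)
p_hi = np.full(11, 50.0)
p_star = np.full(11, 25.0)

# Step 3: the l1-distance is separable, so the maximizer picks, for each
# component, the endpoint farther from the nominal load (ties -> upper end).
p_m = np.where(p_hi - p_star >= p_star - p_lo, p_hi, p_lo)

# Worst-case l1 deviation: 11 loads * 25 kW each.
assert np.isclose(np.sum(np.abs(p_star - p_m)), 11 * 25.0)
```

For non-box load sets, Step 3 is a maximization of a convex function and attains its optimum at an extreme point of $\mathcal{P}_\ell$; the componentwise rule above is the box special case.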
The parameters of the DC network given in Table~\ref{table:simu1_spec} are chosen according to existing DC network case studies~% \cite{liu2018existence,salo2007low}. The generators, loads, and power lines all have uniform parameters. We model all eleven loads as unknown CPLs that can vary arbitrarily within the range $[0, 50~\text{kW}]$. The nominal load for each is $25$~kW. All five generators are controlled as voltage sources. Our algorithm is used to compute the voltage set points for the generators. For this case study, we impose bounds of $[425~\text{V}, 575~\text{V}]$ on the generator and CPL voltages. The objective function minimizes the losses at the nominal operating point. \begin{table}[t] \caption{Parameters for the 14-bus DC network case study} \label{table:simu1_spec} \begin{center} \begin{tabular}{cccccc} \hline\hline $R_{sk}$ & 0.05 $\Omega$ & $R_{lj}$ & 5 $\Omega$ & $R_{tp}$ & 0.05 $\Omega$ \\ $L_{tp}$ & 3 mH & $C_{sk}$ & 0.75 mF & $C_{lj}$ & 0.9 mF \\ \hline\hline \end{tabular} \end{center} \end{table} Considering only the nominal condition (ignoring the range of possible uncertainty realizations), the solution to the DN-OPF problem~\eqref{eq:prob_ori} yields the set points of the five generators as $455.6$, $462.9$, $454.9$, $454.4$, and $460.0~\text{V}$. We apply these set points and consider a uniform increase in load demands of $2.5$~kW every $2.5$ seconds. As shown in Fig.~\ref{fig:case1_uns}, the system becomes unstable at approximately $35$ seconds, when the loads are around $40$~kW each. This shows the need to consider stability, especially in systems with significant uncertainties. In comparison, we formulate the optimization problem~\eqref{eq:prob_final_1} using our algorithm. Applying the stability analysis approach developed in Section~\ref{sec:robstab} shows that the system is always robustly stable if the steady-state CPL voltage is higher than $500$~V.
Problem~\eqref{eq:prob_final_1} yields the following set points: $544.5$, $553.4$, $543.8$, $543.2$, and $549.9~\text{V}$. Using these set points results in stability for the entire range of load demands when the loads increase at the same rate as in the previous test, as shown in Fig.~\ref{fig:case1_stb}. This shows the efficacy of Theorem~\ref{thm:main}. \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{case1_uns.pdf} \caption{\label{fig:case1_uns}DC network instability if only nominal DN-OPF is considered.} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{case1_stb.pdf} \caption{\label{fig:case1_stb}DC network operation is stable for all conditions with the proposed algorithm.} \end{figure} Moreover, Fig.~\ref{fig:case1_stb} demonstrates the limited conservativeness of the proposed algorithm. As discussed in Section~\ref{sec:tra_opf}, we can find the bounds on the variation range of the operating points. Using the optimization results, we can certify that the ratio of an operating point to the nominal operating point lies in the range $[0.978,1.022]$. As shown in Fig.~\ref{fig:case1_stb}, the certified region is a reasonably tight estimate of the variations of the system operating points. \subsection{Summary of Other Case Studies} We also tested the proposed algorithm on DC networks with the same topologies and bus types as the IEEE~\mbox{9-,} \mbox{30-,} \mbox{39-,} \mbox{69-,} and \mbox{118-bus} systems to study computational tractability. To summarize the results, Table~\ref{table:compu_compa} compares the average CPU time in IPOPT for solving problem~\eqref{eq:prob_final_1} and the traditional DN-OPF problem~\eqref{eq:prob_ori}, averaged over 10 tests for each system. Observe that the proposed optimization problem has similar computational complexity to the traditional DN-OPF problem. This verifies the tractability of our algorithm.
\begin{table}[!t] \caption{Comparison of computation times for solving~\eqref{eq:prob_ori} and~\eqref{eq:prob_final_1}} \label{table:compu_compa} \begin{center} \begin{tabular}{l|lllll} \hline & 9-bus & 30-bus & 39-bus & 69-bus & 118-bus\\ \hline DN-OPF~\eqref{eq:prob_ori} & $0.10$ s & $0.11$ s & $0.31$ s & $0.91$ s & $1.40$ s \\ Problem~\eqref{eq:prob_final_1} & $0.11$ s & $0.14$ s & $0.55$ s & $2.24$ s & $3.93$ s \\ \hline \end{tabular} \end{center} \end{table} \section{Conclusion}\label{sec:sum} This paper has developed a systematic algorithm to study stability-constrained OPF problems for DC networks under uncertainty. Such problems are usually intractable due to the involvement of infinitely many constraints. Our algorithm uses computationally efficient approaches to transform the problem into a tractable counterpart that resembles a traditional DN-OPF problem so that existing tools can be employed. We first derive a robust stability set within which any operating point is guaranteed to be robustly stable. We then use a sufficient condition which ensures the existence of feasible operating points in this set for all uncertainty realizations in the specified uncertainty set. The limited conservativeness and computational efficiency of the proposed algorithm are demonstrated using various test cases. In future research, we will investigate the application of the algorithm to DN-OPF problems with contingency constraints. \bibliographystyle{IEEEtran}
\section{\label{sec:level1}Introduction} Rhythm plays a crucial role in many aspects of physics, biology and chemistry \cite{strogatz2003sync,winfree2001geometry}. To study rhythmic phenomena quantitatively, the phase oscillator model has been widely used since the pioneering studies \cite{winfree1967biological,kuramoto1975self,ermentrout1991multiple}. The Kuramoto model undergoes a transition from a nonsynchronous to a synchronous state as the coupling strength increases. This model can also be solved exactly with the Lorentzian natural frequency distribution \cite{kuramoto1984chemical}. Although this discovery has triggered many subsequent works, many related models remain unsolved and are considered important in real-world situations \cite{acebron2005kuramoto}. One such important model is a system composed of multiple heterogeneous populations of phase oscillators. In fact, some real systems appear to have a hierarchical structure of multiple populations. For example, multiple neuronal modules in the brain seem to organize into structural networks \cite{bullmore2009complex,buzsaki2009rhythms}. To understand the dynamical properties of such systems, we need to theoretically investigate a model with multiple populations of oscillators. As a first step, we here investigate a two-population system of phase oscillators. It is plausible that the characteristics of two populations generally differ, and in several situations this property seems to play an important functional role. For example, the synchrony of different neuronal populations in the brain is positively correlated with the success of human tasking \cite{buzsaki2009rhythms}. In real systems such as electroencephalograms, the average frequencies often differ largely across populations.
Considering such a coupled system of two populations with different average frequency distributions, we can theoretically derive the correct form of the coupling function by the averaging method \cite{sanders2007averaging} within the phase reduction framework. This derivation is essential if the average frequencies of the two populations are related by an integer ratio such as $k:1$ (where $k$ is an integer). This condition is called the resonant condition \cite{luck2011dynamics,komarov2013dynamics}. However, resonance has not been considered in most previous studies. Therefore, we will examine multifrequency oscillator systems by applying a specific coupling function. Moreover, multiple populations of phase oscillators exhibit interesting properties \cite{okuda1991mutual,abrams2008solvable,komarov2011effects,komarov2013dynamics}. One of the most remarkable phenomena is the formation of chimera states, in which synchronous and asynchronous states coexist \cite{montbrio2004synchronization,abrams2008solvable,laing2009chimera,laing2012disorder,martens2010bistable,martens2010chimeras,pazo2014low,laing2010chimeras}. This phenomenon was theoretically discovered by Kuramoto \textit{et al.} \cite{kuramoto2002coexistence} and named by Abrams \textit{et al.} \cite{abrams2004chimera}. Later, the properties of chimera states were experimentally investigated \cite{tinsley2012chimera,hagerstrom2012experimental,nkomo2013chimera}. Recent theoretical works have explored chimera states in more general situations \cite{motter2010nonlinear,gu2013spiral,panaggio2013chimera,singh2011chimera,omelchenko2013nonlocal}. However, in most studies on chimera states in multiple phase oscillator populations, the natural frequencies of the oscillators are assumed to be evenly distributed across the populations. As the next stage, we should theoretically examine heterogeneous frequency distributions across the populations.
To this end, we treat a general situation in which the oscillator populations have different average frequencies. We focus on the resonant case with an integer frequency ratio. Here, we must derive the appropriate type of phase coupling function under the corresponding resonance condition. We show that such a system develops chimera states under some conditions and investigate the properties of these states. Recently, Ott and Antonsen proposed a remarkable ansatz that reduces an infinite system of coupled phase oscillators in the continuum limit to a low-dimensional system \cite{ott2008low,ott2009long,ott2011comment}. This ansatz has been applied to a wide range of problems and has yielded many fruitful results \cite{martens2009exact,lee2009large,montbrio2011shear,skardal2012hierarchical,kloumann2014phase,tanaka2014solvable,mirollo2012asymptotic}. Komarov and Pikovsky considered the resonant interactions among more than two oscillator communities and applied the Ott-Antonsen ansatz to a simple resonant case \cite{komarov2013dynamics}. In this study, we consider two populations of phase oscillators in the more general resonant case $k:1$, where $k$ is not $\pm1$. However, because the populations interact through the resonant-type coupling function, we cannot straightforwardly reduce the system to low-dimensional equations using the Ott-Antonsen ansatz. To proceed with the analysis, we augment the Ott-Antonsen ansatz with the additional assumptions of Skardal \textit{et al.} \cite{skardal2011cluster}. Consequently, our system reduces to a three-dimensional system of ordinary differential equations. The remainder of this paper is structured as follows. Section \ref{sec:model} describes a coupled system of two populations of phase oscillators. In Sec. \ref{sec:reduction}, we reduce this system to a low-dimensional system. The emergent dynamical states such as the clustered chimera states are investigated in Sec. \ref{sec:results}. In Sec.
\ref{sec:comparison}, we numerically confirm that our results hold without the additional solvability assumption. Our method is extended to more general resonant conditions in Sec. \ref{sec:general}. The paper concludes with a summary in Sec. \ref{sec:discussion}. \section{Phase reduction and Model equations}\label{sec:model} As a preliminary, we discuss two interacting oscillators. The oscillators evolve by the following equations: \begin{align} \frac{d\boldsymbol{x}_1}{dt} = \boldsymbol{f}_1\left(\boldsymbol{x}_1\right) + \epsilon \boldsymbol{g}_1\left(\boldsymbol{x}_1,\boldsymbol{x}_2\right), \label{eq: original_oscillators1}\\ \frac{d\boldsymbol{x}_2}{dt} = \boldsymbol{f}_2\left(\boldsymbol{x}_2\right) + \epsilon \boldsymbol{g}_2\left(\boldsymbol{x}_2,\boldsymbol{x}_1\right), \label{eq: original_oscillators2} \end{align} where $\boldsymbol{x}_1$ and $\boldsymbol{x}_2$ are $n$-dimensional state vectors, $\boldsymbol{f}_1$ and $\boldsymbol{f}_2$ represent the intrinsic dynamics of the oscillators, and $\boldsymbol{g}_1$ and $\boldsymbol{g}_2$ are the interaction terms between the oscillators. We further suppose that $\lvert\epsilon\rvert\ll1$ and that the oscillators have limit cycles in the unperturbed case ($\epsilon=0$). The periods of oscillators 1 and 2 are $2\pi/\omega_1$ and $2\pi/\omega_2$, respectively, where $\omega_1$ and $\omega_2$ are the respective natural frequencies of the oscillators. The frequencies almost satisfy the resonant relation $k:1$; that is, the natural frequency of the fast oscillator $\omega_1$ is approximately $k$ times that of the slow oscillator $\omega_2$: \begin{align} \omega_1\simeq k\omega_2.\label{eq:almost_resonant_relation} \end{align} If the frequencies satisfy Eq. (\ref{eq:almost_resonant_relation}), the resonant coupling function can be derived by phase reduction.
We first introduce the original phase variables $\theta_1$ and $\theta_2$ such that $d\theta_1/dt=\omega_1$ and $d\theta_2/dt=\omega_2$ near the limit cycle orbits $\boldsymbol{x}_{1,0}(t)$ and $\boldsymbol{x}_{2,0}(t)$ in the absence of the perturbations. Applying phase reduction and writing $\omega\equiv\omega_2$ (so that $\omega_1\simeq k\omega$), the dynamics of the phase variables $\theta_1$ and $\theta_2$ are determined from Eqs. (\ref{eq: original_oscillators1}) and (\ref{eq: original_oscillators2}) as \begin{align} \frac{d\theta_1}{dt} = k\omega+\epsilon\boldsymbol{Z}_1\left(\theta_1\right)\cdot\boldsymbol{g}_{12}\left(\theta_1,\theta_2\right), \label{eq: original_phase_dynamics1}\\ \frac{d\theta_2}{dt} = \omega+\epsilon\boldsymbol{Z}_2\left(\theta_2\right)\cdot\boldsymbol{g}_{21}\left(\theta_2,\theta_1\right),\label{eq: original_phase_dynamics2} \end{align} where \begin{align} \boldsymbol{Z}_1\left(\theta_1\right) &= \boldsymbol{\nabla}_{\boldsymbol{x}_1}\theta_1\left(\boldsymbol{x}_1\right)|_{\boldsymbol{x}_1=\boldsymbol{x}_{1,0}\left(\theta_1\right)},\notag\\ \boldsymbol{Z}_2\left(\theta_2\right) &= \boldsymbol{\nabla}_{\boldsymbol{x}_2}\theta_2\left(\boldsymbol{x}_2\right)|_{\boldsymbol{x}_2=\boldsymbol{x}_{2,0}\left(\theta_2\right)}.\notag \end{align} To separate the slow dynamics from Eqs. (\ref{eq: original_phase_dynamics1}) and (\ref{eq: original_phase_dynamics2}), we define slow phase variables $\psi_1$ and $\psi_2$ as $\theta_1=k\omega t+\psi_1$ and $\theta_2=\omega t+\psi_2$, respectively. The dynamics of the slow phase variables are described by \begin{align} \frac{d\psi_1}{dt} &= \epsilon\boldsymbol{Z}_1\left(\psi_1+k\omega t\right)\cdot\boldsymbol{g}_{12}\left(\psi_1+k\omega t,\psi_2+\omega t\right), \label{eq: slow_phase_dynamics1}\\ \frac{d\psi_2}{dt} &= \epsilon\boldsymbol{Z}_2\left(\psi_2+\omega t\right)\cdot\boldsymbol{g}_{21}\left(\psi_2+\omega t,\psi_1+k\omega t\right).\label{eq: slow_phase_dynamics2} \end{align} Averaging the RHS in Eq.
(\ref{eq: slow_phase_dynamics1}) over the period of the slow oscillator $2\pi/\omega$, we obtain \begin{align} \frac{d\psi_1}{dt} &= \epsilon\frac{\omega}{2\pi}\int^{2\pi/\omega}_{0}dt\boldsymbol{Z}_1\left(\psi_1+k\omega t\right)\cdot\boldsymbol{g}_{12}\left(\psi_1+k\omega t,\psi_2+\omega t\right) \notag\\ &= \frac{\epsilon}{2\pi}\int^{2\pi}_{0}d\Theta\boldsymbol{Z}_1\left(\psi_1+k\Theta\right)\cdot\boldsymbol{g}_{12}\left(\psi_1+k\Theta,\psi_2+\Theta\right)\notag\\ &= \frac{\epsilon}{2\pi}\int^{2\pi}_{0}d\Theta\boldsymbol{Z}_1\left(\psi_1-k\psi_2+k\Theta\right)\cdot\notag\\ &\qquad\qquad\qquad\qquad\boldsymbol{g}_{12}\left(\psi_1-k\psi_2+k\Theta,\Theta\right),\label{eq: slow_averaging_first} \end{align} where $\psi_1$ and $\psi_2$ are constants independent of $t$ during one period. Evaluating the integral, we can express the RHS of Eq. (\ref{eq: slow_averaging_first}) as a function of $\psi_1-k\psi_2$, namely $\Gamma_{12}\left(\psi_1-k\psi_2\right)$. Performing similar operations on the RHS of Eq. (\ref{eq: slow_phase_dynamics2}), we obtain the pair of averaged equations: \begin{align} \frac{d\psi_1}{dt} = \epsilon\Gamma_{12}\left(\psi_1-k\psi_2\right), \label{eq: averaged_slow_phase_dynamics1}\\ \frac{d\psi_2}{dt} = \epsilon\Gamma_{21}\left(k\psi_2-\psi_1\right),\label{eq: averaged_slow_phase_dynamics2} \end{align} where \begin{align} \Gamma_{12}\left(\psi_1-k\psi_2\right) &= \frac{\omega}{2\pi}\int^{2\pi/\omega}_{0}dt \boldsymbol{Z}_1\left(\psi_1+k\omega t\right)\cdot\notag\\ &\boldsymbol{g}_{12}\left(\psi_1+k\omega t,\psi_2+\omega t\right),\notag\\ \Gamma_{21}\left(k\psi_2-\psi_1\right) &= \frac{\omega}{2\pi}\int^{2\pi/\omega}_{0}dt \boldsymbol{Z}_2\left(\psi_2+\omega t\right)\cdot\notag\\ &\boldsymbol{g}_{21}\left(\psi_2+\omega t,\psi_1+k\omega t\right).\notag \end{align} This approximation is valid to order $\epsilon$.
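To see concretely why the averaged coupling depends only on the combination $\psi_1-k\psi_2$, the following sketch evaluates the average in Eq. (\ref{eq: slow_averaging_first}) for toy choices of $\boldsymbol{Z}_1$ and $\boldsymbol{g}_{12}$; both functions are hypothetical, chosen only so that the resonant average is nonzero.

```python
import numpy as np

# Toy phase sensitivity and coupling (illustrative assumptions, not the
# paper's): Z1(theta) = sin(theta), g12(theta1, theta2) = cos(k * theta2).
k = 2

def gamma12(psi1, psi2, n=4000):
    # Average the product over one slow period, Theta in [0, 2*pi).
    Theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    Z1 = np.sin(psi1 + k * Theta)
    g12 = np.cos(k * (psi2 + Theta))
    return float(np.mean(Z1 * g12))

# Shifting (psi1, psi2) -> (psi1 + k*s, psi2 + s) leaves psi1 - k*psi2
# unchanged, so the averaged coupling must be unchanged as well.
v1 = gamma12(0.3, 0.7)
v2 = gamma12(0.3 + k * 1.1, 0.7 + 1.1)
assert abs(v1 - v2) < 1e-9

# For these toy functions the average is (1/2) sin(psi1 - k psi2):
# only the resonant harmonic survives the averaging.
assert abs(v1 - 0.5 * np.sin(0.3 - k * 0.7)) < 1e-9
```

Any non-resonant harmonic in the product averages to zero over the slow period, which is exactly why $\Gamma_{12}$ can only depend on $\psi_1-k\psi_2$.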
Consequently, we obtain the evolutionary equations of the averaged phase variables $\theta_1$ and $\theta_2$: \begin{align} \frac{d\theta_1}{dt} = k\omega + \epsilon \Gamma_{12}\left(\theta_1-k\theta_2\right), \\ \frac{d\theta_2}{dt} = \omega + \epsilon \Gamma_{21}\left(k\theta_2-\theta_1\right). \end{align} We emphasize that the coupling functions $\Gamma_i$ ($i=1,2$) depend on $\theta_1-k\theta_2$ (or $k\theta_2-\theta_1$) when the natural frequencies of the oscillators satisfy the resonant relation $k:1$. Generalizing this to the $m:n$ case, we can state that $\Gamma_i$ ($i=1,2$) depend on $n\theta_1-m\theta_2$ (or $m\theta_2-n\theta_1$). Our discussion of resonant interactions is now extended to populations of phase oscillators. We consider two populations of phase oscillators with different average frequencies. We assume that both populations have inherent Lorentzian (Cauchy) distributions of natural frequencies. The mean frequency ratio between the two populations is $2:1$ (i.e., $k=2$; see Fig. \ref{fig freq_dist}): \begin{align} g_{\text{fast}}\left(\omega\right) = \frac{D}{\pi}\frac{1}{\left(\omega-2\Omega\right)^2+D^2},\label{eq:lorentz_2omega} \\ g_{\text{slow}}\left(\omega\right) = \frac{D}{\pi}\frac{1}{\left(\omega-\Omega\right)^2+D^2},\label{eq:lorentz_omega} \end{align} where $D$ is the common width of the distributions and $\Omega$ is the mean of the distribution in the slow population.
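In simulations, natural frequencies obeying Eqs. (\ref{eq:lorentz_2omega}) and (\ref{eq:lorentz_omega}) can be drawn directly, since the Lorentzian is the Cauchy distribution with location $2\Omega$ (or $\Omega$) and scale $D$. A minimal sketch with illustrative parameter values:

```python
import numpy as np

# Draw natural frequencies for the fast and slow populations from the
# Lorentzian (Cauchy) distributions; Omega, D, and N are illustrative.
rng = np.random.default_rng(0)
Omega, D, N = 1.0, 1e-3, 10_000

omega_fast = 2 * Omega + D * rng.standard_cauchy(N)   # location 2*Omega, scale D
omega_slow = Omega + D * rng.standard_cauchy(N)       # location Omega, scale D

# The Cauchy distribution has no finite mean, but the sample median is a
# consistent estimator of the location parameter.
assert abs(np.median(omega_fast) - 2 * Omega) < 0.01
assert abs(np.median(omega_slow) - Omega) < 0.01
```

The heavy Cauchy tails occasionally produce extreme frequencies; this is expected and is precisely what makes the contour-integral step of the Ott-Antonsen reduction below work out in closed form.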
\begin{figure} \includegraphics[scale=0.55]{pop_dist.eps} \caption{\label{fig freq_dist}Top: Natural frequency distributions of the phase oscillators in the two populations whose mean frequencies satisfy the resonant relation $k:1$.\\ Bottom: The coupling strength is $\mu=(1+A)/2$ within each population and $\nu=(1-A)/2$ between the two populations.} \end{figure} As usual, we assume dominance of the first term in the Fourier series of the coupling function, and take $H\left(\theta\right)=\sin\left(\theta+\alpha\right)$ as in the Kuramoto-Sakaguchi model \cite{sakaguchi1986soluble}, where $\alpha$ is the phase lag parameter. We thus consider the following model: \begin{align} \displaystyle\frac{d\theta^{\text{fast}}_i}{dt} &= \begin{aligned}[t] \omega^{\text{fast}}_i +& \displaystyle\frac{\mu}{N}\displaystyle\sum^{N}_{j=1}\sin\left(\theta^{\text{fast}}_j-\theta^{\text{fast}}_i-\alpha\right)\\ &+ \displaystyle\frac{\nu}{N}\displaystyle\sum^{N}_{j=1}\sin\left(2\theta^{\text{slow}}_j-\theta^{\text{fast}}_i-\alpha\right),\label{eq:original1} \\ \end{aligned}\\ \displaystyle\frac{d\theta^{\text{slow}}_i}{dt} &= \begin{aligned}[t] \omega^{\text{slow}}_i +& \displaystyle\frac{\mu}{N}\displaystyle\sum^{N}_{j=1}\sin\left(\theta^{\text{slow}}_j-\theta^{\text{slow}}_i-\alpha\right)\\ &+ \displaystyle\frac{\nu}{N}\displaystyle\sum^{N}_{j=1}\sin\left(\theta^{\text{fast}}_j-2\theta^{\text{slow}}_i-\alpha\right), \label{eq:original2} \end{aligned} \end{align} where $\theta^{\text{fast/slow}}_i$ is the phase of oscillator $i$ ($i=1,\cdots,N$) in the fast/slow population. Having derived the resonant form of the coupling by phase reduction, we now specify the interaction strengths between the oscillators. For this purpose, we introduce the parameter $A$ and define $\mu=\left(1+A\right)/2$ and $\nu=\left(1-A\right)/2$ as in \cite{abrams2008solvable}. In this setting, $\mu+\nu=1$. The parameter $A$ controls the ratio of the strengths of the interactions within each population and between the populations.
When $0<A<1$, the interactions are stronger within the populations than across the populations; conversely, when $-1<A<0$, the interactions across the populations dominate. In this study, we fix the phase lag parameter $\alpha=\pi/2-0.05$ as in \cite{laing2012disorder}. Under this condition, chimera states will appear, and a variety of dynamics with reasonably general coverage are expected \cite{abrams2008solvable}. The $k=2$ case will be straightforwardly extended to the general $k$ case in Sec. \ref{sec:general}. \section{Reduction to Low-dimensional dynamics}\label{sec:reduction} Using the Ott-Antonsen ansatz \cite{ott2008low,ott2009long,ott2011comment}, we will attempt to reduce the system represented by Eqs. \eqref{eq:lorentz_2omega}-\eqref{eq:original2} to a low-dimensional system in the limit $N\to\infty$. To handle the high-order coupling in the Kuramoto model, we use a modified version of the Ott-Antonsen ansatz developed by Skardal \textit{et al.} \cite{skardal2011cluster}. However, the modified ansatz alone does not reduce the original dynamics to a low-dimensional system. To overcome this difficulty, we employ one additional assumption: we replace $\sin\left(\theta^{\text{slow}}_j-\theta^{\text{slow}}_i-\alpha\right)$ with $\sin\left(2\theta^{\text{slow}}_j-2\theta^{\text{slow}}_i-\alpha\right)$ in Eq. \eqref{eq:original2}. Equation \eqref{eq:original2} becomes \begin{align} \displaystyle\frac{d\theta^{\text{slow}}_i}{dt} &= \begin{aligned}[t] \omega^{\text{slow}}_i +& \displaystyle\frac{\mu}{N}\displaystyle\sum^{N}_{j=1}\sin\left(2\theta^{\text{slow}}_j-2\theta^{\text{slow}}_i-\alpha\right)\\ &+ \displaystyle\frac{\nu}{N}\displaystyle\sum^{N}_{j=1}\sin\left(\theta^{\text{fast}}_j-2\theta^{\text{slow}}_i-\alpha\right).\label{eq:modified2} \end{aligned} \end{align} The dynamics can be reduced by applying the Ott-Antonsen ansatz to Eq. \eqref{eq:original1} and the modified model Eq. \eqref{eq:modified2}.
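At finite $N$, the modified model, Eqs. \eqref{eq:original1} and \eqref{eq:modified2}, can be simulated directly: each sum over $j$ collapses to a mean-field order parameter via $\frac{1}{N}\sum_j \sin(x_j-y-\alpha) = \mathrm{Im}\,[Z e^{-i(y+\alpha)}]$. A minimal Euler sketch with illustrative parameter values:

```python
import numpy as np

# Euler simulation of Eqs. (original1) and (modified2) in mean-field form.
# All numbers (N, dt, step count) are illustrative, not the paper's settings.
rng = np.random.default_rng(1)
N, Omega, D, A = 2000, 1.0, 1e-3, 0.1
mu, nu = (1 + A) / 2, (1 - A) / 2
alpha = np.pi / 2 - 0.05

w_f = 2 * Omega + D * rng.standard_cauchy(N)   # fast natural frequencies
w_s = Omega + D * rng.standard_cauchy(N)       # slow natural frequencies
th_f = rng.uniform(0, np.pi / 30, N)           # nearly synchronous start
th_s = rng.uniform(0, np.pi / 30, N)

dt = 0.05
for _ in range(2000):
    z_f = np.mean(np.exp(1j * th_f))           # fast order parameter
    z_s = np.mean(np.exp(2j * th_s))           # Daido order parameter (slow)
    th_f += dt * (w_f + np.imag((mu * z_f + nu * z_s) * np.exp(-1j * (th_f + alpha))))
    th_s += dt * (w_s + np.imag((mu * z_s + nu * z_f) * np.exp(-1j * (2 * th_s + alpha))))

r_fast = abs(np.mean(np.exp(1j * th_f)))
r_slow = abs(np.mean(np.exp(2j * th_s)))
assert 0.0 <= r_fast <= 1.0 and 0.0 <= r_slow <= 1.0
```

For $A$ close to $1$ both order parameters are expected to stay near unity (the coherent state), while smaller or negative $A$ allows chimera-like behavior, consistent with the results reported below.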
In general, however, it is unlikely that the interaction terms among the slow oscillators contain no first Fourier mode. In Sec. \ref{sec:comparison}, we will check the validity of the modified model under general conditions. We now consider the continuum limit $N\to\infty$ in our modified model. The probability density functions (PDFs) of the fast and slow populations in the continuum limit are denoted as $f_{\text{fast}}(\theta,\omega,t)$ and $f_{\text{slow}}(\theta,\omega,t)$, respectively, where $f_j(\theta,\omega,t)d\theta d\omega$ is the fraction of oscillators with phase between $\theta$ and $\theta+d\theta$ and natural frequency between $\omega$ and $\omega+d\omega$ at time $t$ in population $j$ ($j=\text{fast},\text{slow}$). The conventional order parameter for the fast population is given by \begin{align} \displaystyle z_{\text{fast}}(t) &=\lim_{N\to\infty}\displaystyle\frac{1}{N}\sum^{N}_{j=1}e^{i\theta^{\text{fast}}_j}\notag\\ &= \int^{\infty}_{-\infty}d\omega\int^{2\pi}_{0}d\theta f_{\text{fast}}\left(\theta,\omega,t\right)e^{i\theta}.\label{eq:fast_order_parameter} \end{align} For the slow population, we define the Daido order parameter \cite{daido1992order} as \begin{align} \displaystyle z_{\text{slow}}(t) &=\lim_{N\to\infty}\displaystyle\frac{1}{N}\sum^{N}_{j=1}e^{2i\theta^{\text{slow}}_j}\notag\\ &= \int^{\infty}_{-\infty}d\omega\int^{2\pi}_{0}d\theta f_{\text{slow}}\left(\theta,\omega,t\right)e^{2i\theta}.\label{eq:slow_order_parameter} \end{align} Following the Ott-Antonsen ansatz \cite{ott2008low,ott2009long,ott2011comment} and its variant \cite{skardal2011cluster}, we expand the PDFs as two Fourier series: \begin{align} f_{\text{fast}}(\theta,\omega,t) &= \frac{g_{\text{fast}}(\omega)}{2\pi}\left[1+\sum^{\infty}_{n=1}\left(a(\omega,t)^ne^{in\theta}+\text{c.c.}\right)\right], \label{eq: fast_density}\\ f_{\text{slow}}(\theta,\omega,t) &=
\frac{g_{\text{slow}}(\omega)}{2\pi}\left[1+\sum^{\infty}_{m=1}\left(b(\omega,t)^{m}e^{2im\theta}+\text{c.c.}\right)\right],\label{eq: slow_density} \end{align} where c.c. stands for the complex conjugate. The ansatz requires the conditions $\lvert a\left(\omega,t\right)\rvert<1$ and $\lvert b\left(\omega,t\right)\rvert<1$ for the convergence of Eqs. \eqref{eq: fast_density} and \eqref{eq: slow_density}. To conserve the total number of oscillators in each population, the following continuity equations should be satisfied: \begin{align} \frac{\partial f_{j}}{\partial t}+\frac{\partial}{\partial\theta_{j}}\left(f_{j}\dot{\theta}_{j}\right) = 0\,\,\,\left(j=\text{fast},\text{slow}\right).\notag \end{align} Substituting Eqs. \eqref{eq: fast_density} and \eqref{eq: slow_density} into these continuity equations, we obtain the differential equations for $a$ and $b$: \begin{align} \frac{\partial a}{\partial t}+i\omega a&+\frac{\mu}{2}\left(z_{\text{fast}}a^2e^{-i\alpha}-\bar{z}_{\text{fast}}e^{i\alpha}\right)\notag\\ &+\frac{\nu}{2}\left(z_{\text{slow}}a^2e^{-i\alpha}-\bar{z}_{\text{slow}}e^{i\alpha}\right) = 0,\label{eq:a}\\ \frac{1}{2}\frac{\partial b}{\partial t}+i\omega b&+\frac{\mu}{2}\left(z_{\text{slow}}b^2e^{-i\alpha}-\bar{z}_{\text{slow}}e^{i\alpha}\right)\notag\\ &+\frac{\nu}{2}\left(z_{\text{fast}}b^2e^{-i\alpha}-\bar{z}_{\text{fast}}e^{i\alpha}\right) = 0.\label{eq:b} \end{align} From Eqs. \eqref{eq:fast_order_parameter} and \eqref{eq: fast_density}, we immediately find that \begin{align} z_{\text{fast}}\left(t\right) = \int^{\infty}_{-\infty}d\omega g_{\text{fast}}\left(\omega\right)\bar{a}\left(\omega,t\right). \end{align} Given that the natural frequency distribution is the Lorentzian in Eq.
\eqref{eq:lorentz_2omega}, we can write \begin{align} z_{\text{fast}}\left(t\right) = \frac{1}{2\pi i}\int^{\infty}_{-\infty}d\omega &\left(\frac{1}{\omega-2\Omega-iD}-\frac{1}{\omega-2\Omega+iD}\right)\notag\\ &\times\bar{a}\left(\omega,t\right).\label{eq:z_int} \end{align} Following \cite{ott2008low}, we assume that $\bar{a}\left(\omega,t\right)$ is analytic in $\text{Im}\,\omega>0$. From the complex conjugate of Eq. \eqref{eq:a}, we know that $\partial\bar{a}/\partial t\sim -\left(\text{Im}\,\omega\right)\bar{a}$ as $\text{Im}\,\omega\to\infty$. As $\lvert\bar{a}\left(\omega,t\right)\rvert<1$, we also know that $\bar{a}\left(\omega,t\right)\to0$ as $\text{Im}\,\omega\to\infty$. Integrating the RHS of Eq. \eqref{eq:z_int} over the upper semicircular contour in the complex plane, we obtain \begin{align} z_{\text{fast}}\left(t\right) = \bar{a}\left(2\Omega+iD,t\right).\notag \end{align} A similar analysis gives \begin{align} z_{\text{slow}}\left(t\right) = \bar{b}\left(\Omega+iD,t\right).\notag \end{align} Substituting $\omega=2\Omega+iD$ and $\omega=\Omega+iD$ into the complex conjugates of Eqs. \eqref{eq:a} and \eqref{eq:b}, respectively, we obtain the dynamics of the complex order parameters of the two populations: \begin{align} \frac{dz_{\text{fast}}}{dt} &= \begin{aligned}[t] &\left(-D+2\Omega i\right)z_{\text{fast}} + \displaystyle\frac{e^{-i\alpha}}{2}\left(\mu z_{\text{fast}}+\nu z_{\text{slow}}\right) \\ &-\displaystyle\frac{e^{i\alpha}}{2}\left(\mu\bar{z}_{\text{fast}}+\nu\bar{z}_{\text{slow}}\right)z_{\text{fast}}^2,\label{eq:complex_dyn1} \end{aligned}\\ \frac{dz_{\text{slow}}}{dt} &= \begin{aligned}[t] &\left(-2D+2\Omega i\right)z_{\text{slow}}+e^{-i\alpha}\left(\mu z_{\text{slow}}+\nu z_{\text{fast}}\right) \\ &-e^{i\alpha}\left(\mu\bar{z}_{\text{slow}}+\nu\bar{z}_{\text{fast}}\right)z_{\text{slow}}^2.\label{eq:complex_dyn2} \end{aligned} \end{align} We now rewrite Eqs.
\eqref{eq:complex_dyn1} and \eqref{eq:complex_dyn2} in terms of the polar coordinates $z_{\text{fast}}=r_{\text{fast}}e^{-i\phi_{\text{fast}}}$ and $z_{\text{slow}}=r_{\text{slow}}e^{-i\phi_{\text{slow}}}$ and their phase difference $\phi=\phi_{\text{fast}}-\phi_{\text{slow}}$. The resulting system comprises three ordinary differential equations in the three degrees of freedom $(r_{\text{fast}},r_{\text{slow}},\phi)$: \begin{align} \frac{dr_{\text{fast}}}{dt} &= \begin{aligned}[t] &-Dr_{\text{fast}} \\ &+\displaystyle\frac{1-r_{\text{fast}}^2}{2}\left(\mu r_{\text{fast}}\cos\alpha+\displaystyle\nu r_{\text{slow}}\cos\left(\phi-\alpha\right)\right), \label{eq: three_dynamics1} \\ \end{aligned}\\ \frac{dr_{\text{slow}}}{dt} &= \begin{aligned}[t] &-2Dr_{\text{slow}} \\ &+\left(1-r_{\text{slow}}^2\right)\left(\mu r_{\text{slow}}\cos\alpha+\displaystyle\nu r_{\text{fast}}\cos\left(\phi+\alpha\right)\right), \label{eq: three_dynamics2} \\ \end{aligned}\\ \frac{d\phi}{dt} &= \begin{aligned}[t] &\frac{1+r_{\text{fast}}^2}{2}\left(\mu \sin\alpha-\nu \frac{r_{\text{slow}}}{r_{\text{fast}}}\sin\left(\phi-\alpha\right)\right) \\ &-\left(1+r_{\text{slow}}^2\right)\left(\mu \sin\alpha+\nu \frac{r_{\text{fast}}}{r_{\text{slow}}}\sin\left(\phi+\alpha\right)\right). \label{eq: three_dynamics3} \end{aligned} \end{align} Note that, as in Eq. (\ref{eq: three_dynamics3}), the slow-population terms carry no factor $1/2$, consistent with Eq. \eqref{eq:complex_dyn2}. From these equations, we find that the reduced system evolves independently of $\Omega$, the mean of the natural frequency distribution of the oscillators in the slow population. Later, we will show that this independence holds in the general $k$ case. \section{Results: Clustered Chimera State}\label{sec:results} In the previous section, we showed that, if we make a slight modification to Eq. \eqref{eq:original2} by replacing $\sin\left(\theta^{\text{slow}}_j-\theta^{\text{slow}}_i-\alpha\right)$ with
$\sin\left(2\theta^{\text{slow}}_j-2\theta^{\text{slow}}_i-\alpha\right)$, this modified system together with \eqref{eq:original1} in the continuum limit can be simplified to the reduced system (\ref{eq: three_dynamics1})-(\ref{eq: three_dynamics3}) by applying the Ott-Antonsen ansatz. In this section, we numerically simulate the detailed dynamics of the modified and reduced systems. We set $D=1.0\times10^{-3}$ and impose the initial condition $r_{\text{fast}}, r_{\text{slow}}\simeq1$ (each $\theta_i$ is chosen from a uniform distribution in $[0,\pi/30]$), and each population comprises $N=10^4$ oscillators. The initial order parameters in the reduced systems were set by substituting the initial conditions of the corresponding $N=10^4$ modified systems. \begin{figure}[h] \includegraphics[scale=0.35]{time_series_ansatz.eps} \caption{\label{fig: reduced_time_series}Time evolutions of the order parameters of the populations in the reduced system for (a) $A=0.9$, (b) $A=0.1$, (c) $A=-0.1$, and (d) $A=-0.9$.} \end{figure} Figure \ref{fig: reduced_time_series} plots the time evolutions of the order parameters of the reduced system for (a) $A=0.9$, (b) $A=0.1$, (c) $A=-0.1$, and (d) $A=-0.9$. In all cases, we can see that the order parameters of the populations reach a steady state. Panels (a)-(c) reveal three different types of dynamics: the coherent, breathing chimera, and stable chimera states. In the coherent state (Fig. \ref{fig: reduced_time_series} (a)), the order parameters of both populations approach almost $1$, indicating that the fast and slow oscillators form synchronous clusters. In the chimera state, the order parameter of one population becomes incoherent while that of the other population is coherent \cite{abrams2004chimera}. In this case, the slow oscillators are mutually synchronized while the fast oscillators are not. In the breathing chimera state (Fig. \ref{fig: reduced_time_series} (b)), the order parameters oscillate. Conversely, in Fig.
\ref{fig: reduced_time_series} (c), the order parameters finally converge to a fixed point; this is called the stable chimera state \cite{abrams2008solvable}. Panels (c) and (d) exhibit similar order parameter behaviors but different phase distributions. These differences will be thoroughly explored in later simulations of the modified system. \begin{figure}[h] \includegraphics[scale=0.35]{time_series_finite.eps} \caption{\label{fig: finite_time_series}Time evolutions of the order parameters of the populations in the modified systems; (a) $A=0.9$, (b) $A=0.1$, (c) $A=-0.1$, and (d) $A=-0.9$. Right panels show corresponding snapshots of the steady-state phase distributions (top: fast population; bottom: slow population).} \end{figure} Figure \ref{fig: finite_time_series} plots the evolving order parameters and instantaneous phase distributions of the modified system with $N=10^4$. In panels (a)-(d), the modified systems exhibit the same steady-state behavior as their counterpart reduced systems, but the asymptotic behavior in Fig. \ref{fig: finite_time_series} (d) differs from that in Fig. \ref{fig: reduced_time_series} (d). This difference appears when the coupling between the populations is dominant. Figure \ref{fig: finite_time_series} also shows snapshots of the steady-state phase distributions of the two populations in the modified system. For $A=0.1$, $-0.1$, and $-0.9$ (panels (b), (c), and (d), respectively), the fast population is incoherent, whereas the slow population splits into two clusters with a phase difference of $\pi$. Similar states, known as clustered chimera states, have been reported in a delay-coupled system \cite{sethia2008clustered}.
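The clustered chimera seen in the slow population is conveniently quantified by the generalized (Daido) order parameters $Z_k=\langle e^{ik\theta}\rangle$: a two-cluster distribution with phase difference $\pi$ has $|Z_1|\approx0$ but $|Z_2|\approx1$, whereas an incoherent population has both close to zero. A minimal sketch with idealized synthetic phase samples (not output of the model itself):

```python
import cmath
import math
import random

def daido_order_parameter(phases, k):
    # k-th generalized (Daido) order parameter Z_k = <e^{i k theta}>
    return sum(cmath.exp(1j * k * th) for th in phases) / len(phases)

random.seed(0)
N = 10_000
# Idealized two-cluster slow population: clusters at 0 and pi
slow = [math.pi * random.randint(0, 1) for _ in range(N)]
# Idealized incoherent fast population: phases uniform on [0, 2*pi)
fast = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]

z1_slow = abs(daido_order_parameter(slow, 1))  # ~0: the two clusters cancel
z2_slow = abs(daido_order_parameter(slow, 2))  # ~1: both clusters add up
z1_fast = abs(daido_order_parameter(fast, 1))  # ~1/sqrt(N): incoherence
```

This is why a second-mode order parameter is the natural description of the slow population in the reduction used here.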
\begin{figure}[h] \includegraphics[scale=0.7]{avr_order_ansatz_finite_D_0.001_pi_30.eps} \caption{\label{fig ansatz_finite}Comparisons between the average (a) and the standard deviation (b) of the steady-state order parameters in the modified system (with $N=10^4$) and the reduced system.} \end{figure} Let us check the correspondence between the results of the modified and reduced systems. Panels (a) and (b) of Fig. \ref{fig ansatz_finite} plot the averages and standard deviations, respectively, of the steady-state order parameters in the system reduced by the ansatz and in the modified system with $N=10^4$. When the intra-population coupling strengths are sufficiently strong, both populations settle into coherent states. Weakened coupling leads to stable or breathing chimera states. These results were obtained under the initial condition $r_{\text{fast}}, r_{\text{slow}}\simeq1$. The steady-state order parameters behaved similarly under other starting conditions (data not shown), suggesting that the clustered chimera states in our system are robust. However, the height ratio of the two clusters in the asymptotic phase distributions of the slow oscillators does depend on the initial conditions, because the height difference has no effect on the phase dynamics in Eq. \eqref{eq:modified2}. Moreover, the transient behaviors crucially depend on the initial conditions in some parameter ranges. \section{Validity of the specific assumption}\label{sec:comparison} For compatibility with the Ott-Antonsen ansatz, the above analysis imposed an unrealistic assumption on the interactions between the slow oscillators in Eq. \eqref{eq:modified2}. In real situations, the interaction term between the slow oscillators in Eq. \eqref{eq:original2} should contain the first mode of the Fourier series; consequently, we need to reassess this assumption.
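The reason the modified interaction admits two clusters separated by $\pi$ while the original one does not is a symmetry: $\sin(2\theta_j-2\theta_i-\alpha)$ is invariant under shifting either phase by $\pi$, whereas the first-mode term $\sin(\theta_j-\theta_i-\alpha)$ changes sign. A quick numerical check (the sample phases are arbitrary):

```python
import math

def original_coupling(th_j, th_i, alpha):
    # First-mode interaction term of Eq. (original2)
    return math.sin(th_j - th_i - alpha)

def modified_coupling(th_j, th_i, alpha):
    # Second-mode replacement used for the Ott-Antonsen reduction
    return math.sin(2 * th_j - 2 * th_i - alpha)

alpha = math.pi / 2 - 0.05
th_i, th_j = 0.3, 1.7   # arbitrary sample phases

# Shifting one phase by pi leaves the modified term unchanged ...
mod_shift_invariant = abs(modified_coupling(th_j, th_i + math.pi, alpha)
                          - modified_coupling(th_j, th_i, alpha))
# ... but flips the sign of the original first-mode term.
orig_shift = original_coupling(th_j, th_i + math.pi, alpha)
orig_noshift = original_coupling(th_j, th_i, alpha)
```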
Unfortunately, without replacing $\sin\left(\theta^{\text{slow}}_j-\theta^{\text{slow}}_i-\alpha\right)$ with $\sin\left(2\theta^{\text{slow}}_j-2\theta^{\text{slow}}_i-\alpha\right)$, we cannot reduce the original system to a low-dimensional system by using the Ott-Antonsen ansatz. In this section, we investigate whether imposing the above assumption affects the dynamics of the two oscillator populations. To this end, we numerically examine the dynamics of the original system \eqref{eq:original1} and \eqref{eq:original2}. The natural frequency distributions of the populations are those of the modified system in Secs. \ref{sec:model}-\ref{sec:results}. As the original system cannot be reduced to a low-dimensional system, we numerically simulate both systems with $N=10^4$. As in Sec. \ref{sec:results}, we set $D=1.0\times10^{-3}$ and $\alpha=\pi/2-0.05$. The initial conditions were set to $r_{\text{fast}}, r_{\text{slow}}\simeq1$ (each $\theta_i$ was chosen from a uniform distribution in $[0,\pi/30]$). \begin{figure}[h] \includegraphics[scale=0.37]{solvable_D_0.001_A_0.0_OMEGA_0.01_hikaku.eps} \caption{\label{fig:comparison_time_A_0.0}(a) Time evolutions of the order parameters in the modified and original systems with $N=10^4$ for $A=0$.\\Bottom panels are snapshots of the steady-state phase distributions in the modified (b) and original (c) systems for $A=0$.} \end{figure} The results for $A=0$ (homogeneous coupling strengths in the system) are plotted in Fig. \ref{fig:comparison_time_A_0.0}. Figure \ref{fig:comparison_time_A_0.0} (a) compares the time evolutions of the order parameters in the modified and original systems with $N=10^4$. Panels (b) and (c) of this figure are snapshots of the steady-state phase distributions in the modified and original systems, respectively. In Fig. \ref{fig:comparison_time_A_0.0} (a), the order parameters of both systems reach similar steady states.
However, closer inspection reveals that the asymptotic phase distributions of the slow oscillators differ between the two systems. Specifically, the slow population in the modified system splits into two clusters (Fig. \ref{fig:comparison_time_A_0.0} (b)) but is unimodal in the original system (Fig. \ref{fig:comparison_time_A_0.0} (c)). Therefore, the original system settles into a normal rather than a clustered chimera state. The same result emerged under all tested conditions. \begin{figure}[h] \includegraphics[scale=0.7]{avr_order_finite_D_0.001_order_hikaku_pi_30.eps} \caption{\label{fig:comparison}Comparison between averages (a) and standard deviations (b) of the steady-state order parameters in the reduced and original systems ($N=10^4$).} \end{figure} The averages and standard deviations of the steady-state order parameters in the reduced and original systems are compared in Fig. \ref{fig:comparison} (note that the results of the reduced system are replicated from Fig. \ref{fig ansatz_finite}). The behaviors of the reduced and original systems are qualitatively similar. We checked that this similarity persists if $D$ is sufficiently small. Finally, we remark on the different outcomes of the two systems. In Fig. \ref{fig:comparison}, the asymptotic behaviors of the order parameters differ in certain ranges of the parameter $A$. The two models differ when $A$ is small and positive; that is, when the intra-population interactions are slightly stronger than the inter-population interactions. \begin{figure}[h] \includegraphics[scale=0.37]{solvable_D_0.001_A_0.2_OMEGA_0.01_hikaku.eps} \caption{\label{fig:comparison_time_A_0.2}(a) Time evolutions of the order parameters in the modified and original models with $N=10^4$ for $A=0.2$.\\Bottom panels are snapshots of the steady-state phase distributions in (b) the modified system and (c) the original system for $A=0.2$.} \end{figure} To clarify these results, Fig.
\ref{fig:comparison_time_A_0.2} (a) plots the time evolutions of the order parameters of the modified and original systems with $N=10^4$ for $A=0.2$. Panels (b) and (c) of Fig. \ref{fig:comparison_time_A_0.2} show snapshots of the steady-state phase distributions. In this case, the specific assumption alters the steady-state distribution of the modified system in two ways. First, it amplifies the peak of the fast oscillators relative to the original system. Second, the slow oscillators separate into two clusters in the modified system, but they form a single cluster in the original system. \section{General resonant case}\label{sec:general} As mentioned above, our method is readily extensible from the resonant condition $2:1$ to the general integer resonant condition $k:1$. Thus, we consider a model of two populations of oscillators under the resonant condition $k:1$: \begin{align} \displaystyle\frac{d\theta^{\text{fast}}_i}{dt} &= \begin{aligned}[t] \omega^{\text{fast}}_i +& \displaystyle\frac{\mu}{N}\displaystyle\sum^{N}_{j=1}\sin\left(\theta^{\text{fast}}_j-\theta^{\text{fast}}_i-\alpha\right)\\ &+ \displaystyle\frac{\nu}{N}\displaystyle\sum^{N}_{j=1}\sin\left(k\theta^{\text{slow}}_j-\theta^{\text{fast}}_i-\alpha\right),\label{eq: general1} \\ \end{aligned}\\ \displaystyle\frac{d\theta^{\text{slow}}_i}{dt} &= \begin{aligned}[t] \omega^{\text{slow}}_i +& \displaystyle\frac{\mu}{N}\displaystyle\sum^{N}_{j=1}\sin\left(\theta^{\text{slow}}_j-\theta^{\text{slow}}_i-\alpha\right)\\ &+ \displaystyle\frac{\nu}{N}\displaystyle\sum^{N}_{j=1}\sin\left(\theta^{\text{fast}}_j-k\theta^{\text{slow}}_i-\alpha\right), \label{eq: general2} \end{aligned} \end{align} where the corresponding natural frequency distributions are assumed to obey the Lorentzian distributions of Fig. \ref{fig freq_dist}. Note that the strength parameter $A$ satisfies $\mu=(1+A)/2$ and $\nu=(1-A)/2$. As in the $2:1$ case, by replacing the interaction term $\sin\left(\theta^{\text{slow}}_j-\theta^{\text{slow}}_i-\alpha\right)$ in Eq.
\eqref{eq: general2} with $\sin\left(k\theta^{\text{slow}}_j-k\theta^{\text{slow}}_i-\alpha\right)$, we can reduce the modified version of Eqs. \eqref{eq: general1} and \eqref{eq: general2} in the continuum limit $N\to\infty$. After this modification, the PDF of the slow oscillator population can be expressed as a Fourier series: \begin{align} f_{\text{slow}}(\theta,\omega,t) &= \frac{g_{\text{slow}}(\omega)}{2\pi}\left[1+\sum^{\infty}_{m=1}\left(b(\omega,t)^{m}e^{ikm\theta}+\text{c.c.}\right)\right],\notag \end{align} where we assume $\left\lvert b\left(\omega,t\right)\right\rvert<1$ under the Ott-Antonsen ansatz. We also define the complex order parameter of the slow oscillator population, $z_{\text{slow}}$: \begin{align} \begin{aligned} \displaystyle z_{\text{slow}}(t) &= \int^{\infty}_{-\infty}d\omega\int^{2\pi}_{0}d\theta f_{\text{slow}}\left(\theta,\omega,t\right)e^{ik\theta}.\label{eq:general_slow_order_parameter} \end{aligned} \end{align} Similarly to Sec. \ref{sec:reduction}, the population dynamics are finally described by a set of ordinary differential equations with three degrees of freedom: \begin{align} \frac{dr_{\text{fast}}}{dt} &= \begin{aligned}[t] &-Dr_{\text{fast}} \\ &+\displaystyle\frac{1-r_{\text{fast}}^2}{2}\left(\mu r_{\text{fast}}\cos\alpha+\displaystyle\nu r_{\text{slow}}\cos\left(\phi-\alpha\right)\right),\label{eq:gen_three_dynamics1} \\ \end{aligned}\\ \frac{dr_{\text{slow}}}{dt} &= \begin{aligned}[t] &-kDr_{\text{slow}} \\ &+\displaystyle\frac{k\left(1-r_{\text{slow}}^2\right)}{2}\left(\mu r_{\text{slow}}\cos\alpha+\displaystyle\nu r_{\text{fast}}\cos\left(\phi+\alpha\right)\right),\label{eq:gen_three_dynamics2} \\ \end{aligned}\\ \frac{d\phi}{dt} &= \begin{aligned}[t] &\frac{1+r_{\text{fast}}^2}{2}\left(\mu \sin\alpha-\nu \frac{r_{\text{slow}}}{r_{\text{fast}}}\sin\left(\phi-\alpha\right)\right) \\ &-\frac{k\left(1+r_{\text{slow}}^2\right)}{2}\left(\mu \sin\alpha+\nu
\frac{r_{\text{fast}}}{r_{\text{slow}}}\sin\left(\phi+\alpha\right)\right).\label{eq:gen_three_dynamics3} \end{aligned} \end{align} Here, we used the polar coordinates $z_{\text{fast}}=r_{\text{fast}}e^{-i\phi_{\text{fast}}},z_{\text{slow}}=r_{\text{slow}}e^{-i\phi_{\text{slow}}}$ and denoted the phase difference $\phi=\phi_{\text{fast}}-\phi_{\text{slow}}$ as in Sec. \ref{sec:reduction}. We remark that the system dynamics depend only on the ratio of the means of the natural frequencies, $k$. In other words, the absolute value of the mean frequency $\Omega$ does not influence the collective behavior of the system. \begin{figure}[h] \includegraphics[scale=0.355]{general_k_D_0.001.eps} \caption{\label{fig:general_k}(Left) Averages and (Right) standard deviations of the steady-state order parameters in the reduced and original systems with $N=10^4$. Top to bottom: $k=3,5,7,11$.} \end{figure} Figure \ref{fig:general_k} shows the averages and standard deviations of the steady-state order parameters in the reduced and original systems for various values of $k$ ($3,5,7,11$). As in Sec. \ref{sec:comparison}, we set $N=10^4$ and the initial conditions $r_{\text{fast}}, r_{\text{slow}}\simeq1$. We also set $\alpha=\pi/2-0.05$ and $D=1.0\times10^{-3}$. Qualitatively, the system \eqref{eq:gen_three_dynamics1}-\eqref{eq:gen_three_dynamics3} exhibits the same dynamics as the system \eqref{eq: three_dynamics1}-\eqref{eq: three_dynamics3}, even at higher values of $k$. However, the asymptotic behaviors differ between the reduced and original systems as $k$ increases. The widening difference is especially apparent in the standard deviation. For large $k$, the order parameter of the slow oscillators tends to fluctuate with larger amplitudes in the original system, probably because of the imposed assumption: in a typical Fourier series, the amplitudes of the low-order modes are closer to that of the first mode than are those of the high-order modes, so replacing the first mode with the $k$-th mode is a stronger modification for large $k$.
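As a numerical sanity check, the reduced system (\ref{eq:gen_three_dynamics1})-(\ref{eq:gen_three_dynamics3}) can be integrated directly; note that $\Omega$ never enters the right-hand sides. A minimal fixed-step RK4 sketch with the parameter values quoted above (the step size, time horizon, and the choice $A=-0.1$, $k=2$ are ours, purely for illustration):

```python
import math

def reduced_rhs(state, k, A, alpha=math.pi / 2 - 0.05, D=1.0e-3):
    # Right-hand sides of eqs. (gen_three_dynamics1)-(gen_three_dynamics3)
    mu, nu = (1 + A) / 2.0, (1 - A) / 2.0
    rf, rs, phi = state
    drf = -D * rf + (1 - rf**2) / 2.0 * (
        mu * rf * math.cos(alpha) + nu * rs * math.cos(phi - alpha))
    drs = -k * D * rs + k * (1 - rs**2) / 2.0 * (
        mu * rs * math.cos(alpha) + nu * rf * math.cos(phi + alpha))
    dphi = (1 + rf**2) / 2.0 * (
        mu * math.sin(alpha) - nu * (rs / rf) * math.sin(phi - alpha)) \
        - k * (1 + rs**2) / 2.0 * (
        mu * math.sin(alpha) + nu * (rf / rs) * math.sin(phi + alpha))
    return (drf, drs, dphi)

def rk4_step(f, y, dt, *args):
    # One classical fourth-order Runge-Kutta step
    k1 = f(y, *args)
    k2 = f(tuple(yi + 0.5 * dt * ki for yi, ki in zip(y, k1)), *args)
    k3 = f(tuple(yi + 0.5 * dt * ki for yi, ki in zip(y, k2)), *args)
    k4 = f(tuple(yi + dt * ki for yi, ki in zip(y, k3)), *args)
    return tuple(yi + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

state = (0.998, 0.998, 0.01)   # r_fast, r_slow ~ 1, as in the text
for _ in range(2000):          # integrate to t = 20 with dt = 0.01
    state = rk4_step(reduced_rhs, state, 0.01, 2, -0.1)
```

The order parameters remain in $(0,1)$ throughout, as they must for an Ott-Antonsen-reduced system.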
Under the general condition $k:1$, the modified system develops $k$-clustered chimera states, with $k$ clusters of coherent slow oscillators. To our knowledge, these states have not been previously reported. The slow oscillators in the $k$-clustered chimera states form $k$ synchronous oscillator groups, and the phase difference between nearest clusters is approximately $2\pi/k$. The $k$-clustered chimera states can be considered as generalized versions of the $2$-cluster ones. Like the standard chimera state, the $k$-clustered chimera states are classifiable as stable or breathing. We numerically checked that the results of the reduced and modified systems correspond for large $N$ and general $k$; therefore, our proposed reduction is valid in the general $k:1$ case. \section{Conclusion}\label{sec:discussion} Referring to phase reduction theory, we investigated the dynamics of multiple populations of phase oscillators. First, we assumed that the mean of the natural frequency distribution in one population was twice that in the other population. Applying the Ott-Antonsen ansatz \cite{ott2008low,skardal2011cluster} and imposing an additional assumption, we reduced the original system to a low-dimensional system of ordinary differential equations describing the time evolution of the order parameters. The population of slow oscillators was treated with the generalized (Daido) order parameter \cite{daido1992order}. Clustered chimera states emerged when the inter-population coupling strength was relatively large. We also investigated the general resonant condition. Our analysis was extensible from the simple resonant case $2:1$ to the general case $k:1$, where $k$ is any integer. We confirmed that the results for the case $2:1$ were qualitatively replicated in the general case. However, for large $k$, our additional assumption significantly altered the dynamics of the original system. As future work, we will investigate multifrequency systems more thoroughly.
That is, we will consider the dynamics of systems under the more general resonant condition $m:n$. Our approach can be applied to the resonant case $m:n$, although it requires one additional assumption on the interactions within the population of fast oscillators. \begin{acknowledgments} We thank Takashi Imai and Kaiichiro Ota for fruitful discussions. This work was supported by Grants-in-Aid from the Ministry of Education, Science, Sports, and Culture of Japan: Grant numbers 21120002 and 25115719. \end{acknowledgments}
\section{Introduction} The possibility of the existence of extra dimensions is one of the most astonishing aspects of string theory and the formalism of $p$-branes. In spite of this possibility, extra dimensions have so far remained inaccessible to experiments. An alternative approach to the compactification of extra dimensions, provided by, e.g., Kaluza-Klein (KK) and string theories \cite{gr,zwi,zwi1,zwi2}, involves an extra dimension which is not compactified, as in the Randall-Sundrum (RS) model \cite{Randall1,Randall2}. This extra dimension implies deviations from Newton's law of gravity at scales below about 0.1 mm, where objects may indeed be gravitating in more dimensions. The electromagnetic, weak, and strong forces, as well as all the matter in the universe, would be trapped on a brane with three spatial dimensions, and only gravitons would be allowed to leave the surface and move into the full bulk, constituted by an AdS$_5$ spacetime, as prescribed by the RS model \cite{Randall1,Randall2}. At low energies, gravity is localized on the brane and general relativity is recovered, but at high energies, significant changes are introduced in gravitational dynamics, forcing general relativity to break down, to be superseded by a quantum gravity theory \cite{rov}. A plausible reason for the gravitational force appearing to be so weak relative to the other forces is its dilution in possibly existing extra dimensions related to a bulk, where $p$-branes \cite{gr,zwi, zwi1, zwi2, Townsend} are embedded. $p$-branes are good candidates for brane-worlds \cite{ken} because they possess gauge symmetries \cite{zwi, zwi1, zwi2} and automatically incorporate a quantum theory of gravity. The gauge symmetry arises from open strings, which can collide to form a closed string that can leak into the higher-dimensional bulk. The simplest excitation modes of these closed strings correspond precisely to gravitons.
An alternative scenario is provided by the Randall-Sundrum (RS) model \cite{Randall1,Randall2}, which induces a volcano-shaped effective potential barrier for gravitons around the brane \cite{Likken}. The corresponding spectrum of gravitational perturbations has a massless bound state on the brane, and a continuum of bulk modes with suppressed couplings to brane fields. These bulk modes introduce small corrections at short distances, and the introduction of more compact dimensions does not affect the localization of matter fields. However, true localization takes place only for massless fields \cite{Gregory}, and in the massive case the bound state becomes metastable, being able to leak into the extra space. This is exactly the case for massive astrophysical objects, where highly energetic stars and the process of gravitational collapse, which can originate black holes, lead to deviations from the $4D$ general relativity predictions. There are other interesting features of RS models, such as the AdS/CFT correspondence between an RS infinite AdS$_5$ brane-world without matter fields on the brane and 4-dimensional general relativity coupled to conformal fields \cite{Randall1,Randall2,Maartens}. We investigate precisely the consequences of the deviation from the Schwarzschild form in a 5$D$ spacetime metric, predicted by the RS1 model, for the correction of the Schwarzschild radius of a black hole (BH). We show that, for a fixed effective extra dimension size, supermassive BHs (SMBHs) give the upper limit of the variation in the luminosity of quasars; although the method used holds for any other kind of BH, such as mini-BHs and stellar-mass ones, we shall use SMBH parameters, for which the effects are more noticeable. We also analyze how the quasar luminosity variation behaves as a function of the AdS$_5$ bulk radius $\ell$, for various values of the BH mass, from $10$ to $10^6$ solar masses.
The search for observational evidence of higher-dimensional gravity is an important way to test the ideas that have come from string theory. This evidence could be observed in particle accelerators or gravitational wave detectors. The wave-form of gravitational waves produced by black holes, for example, could carry an observational signature of extra dimensions, because brane-world models introduce small corrections to the field equations at high energies. However, the observation of gravitational waves faces severe limitations in the technological precision required for detection. A possibly easier way of testing extra dimensions is via the observation of signatures in the luminous spectrum of quasars and microquasars. This is the goal of this paper, which is the first of a series. Here we show the possibility of detecting brane-world corrections for big quasars through the observation of their luminosity. In the next article we shall see that these corrections are more pronounced for mini-BHs, whose Schwarzschild radius in a brane-world scenario will be shown to be $10^4$ times bigger than the standard Schwarzschild radius associated with mini-BHs. Indeed, mini-BHs will be shown to be much more sensitive to brane-world effects. In the last article of this series we also present an alternative possibility of detecting electromagnetic KK modes due to perturbations in black strings \cite{ma,soda}. This article is organized as follows: in Section 2, after presenting the Einstein equations in the AdS$_5$ bulk and discussing the relationship between the electric part of the Weyl tensor and KK modes in the RS1 model, the deviation in Newton's 4$D$ gravitational potential is introduced in order to predict the deviation from the Schwarzschild form and its consequences for the variation in quasar luminosity.
For a static spherical metric on the brane, the propagating effect of 5$D$ gravity is shown to arise only at fourth order in the Taylor expansion in the normal coordinate out of the brane. In Section 3 the variation in quasar luminosity is carefully investigated by finding the correction in the Schwarzschild radius caused by brane-world effects. All results are illustrated by graphics and figures. \section{Black holes on the brane} In a brane-world scenario given by a 3-brane embedded in an AdS$_5$ bulk, the Einstein field equations read \begin{eqnarray}\label{123} &&G_{\mu\nu} = -\frac{1}{2}{\Lambda}_5g_{\mu\nu}\nonumber\\ &&+ \frac{1}{4}\kappa_5^4\left[TT_{\mu\nu} - T^\alpha _{ \nu}T_{\mu \alpha} + \frac{1}{2}g_{\mu\nu}(T^2 - T_{\alpha\beta}T^{\alpha\beta})\right] - E_{\mu\nu},\nonumber \end{eqnarray} \noindent where $T = T_\alpha^{\;\;\alpha}$ denotes the trace of the energy-momentum tensor $T_{\mu\nu}$, $\Lambda_5$ denotes the 5-dimensional cosmological AdS$_5$ bulk constant, and $E_{\mu\nu}$ denotes the `electric' components of the Weyl tensor, which can be expressed by means of the extrinsic curvature components $K_{\mu\nu} = -\displaystyle \frac{1}{2} \text{{\it\char'44}}_n g_{\mu\nu}$ as \cite{soda} \begin{equation} E_{\mu\nu} = \text{{\it\char'44}}_n K_{\mu\nu} + K_{\mu}^{\;\;\alpha}K_{\alpha\nu} - \frac{1}{\ell^2}g_{\mu\nu}, \end{equation}\noindent where $\ell$ denotes the AdS$_5$ bulk curvature radius. It corresponds to the effective size of the extra dimension probed by a 5$D$ graviton \cite{Likken, Randall1,Randall2,Maartens}. The constant $\kappa_5 = 8\pi G_5$, where $G_5$ denotes the 5-dimensional Newton gravitational constant, which can be related to the 4-dimensional gravitational constant $G$ by $G_5 = G\ell_{\rm Planck}$, where $\ell_{\rm Planck} = \sqrt{G\hbar/c^3}$ is the Planck length.
As indicated in \cite{Randall1,Maartens}, ``table-top tests of Newton's law currently find no deviations down to the order of 0.1 mm'', so that $\ell \lesssim $ 0.1 mm. Emparan et al. \cite{emparan} provide a more accurate limit on the AdS$_5$ curvature $\ell$ by analyzing the existence of stellar-mass BHs on long time scales and of BH X-ray binaries. In this paper we relax the more stringent bound $\ell \lesssim 0.01$ mm to the former table-top limit $\ell \lesssim $ 0.1 mm. The Weyl `electric' term $E_{\mu\nu}$ carries an imprint of high-energy effects sourcing KK modes. This means that highly energetic stars and the process of gravitational collapse, and naturally BHs, lead to deviations from the 4-dimensional general relativity predictions. This occurs basically because gravitational collapse unavoidably produces energies high enough to make these corrections significant. From the brane-observer viewpoint, the KK corrections in $E_{\mu\nu}$ are nonlocal, since they incorporate 5-dimensional gravity wave modes. These nonlocal corrections cannot be determined purely from data on the brane \cite{Maartens}. The component $E_{\mu\nu}$ also carries information about the collapse process of BHs. In the perturbative analysis of the RS positive-tension 3-brane, the KK modes consist of a continuous spectrum without any gap. They generate a correction from extra-dimensional effects to the 4$D$ gravitational potential $V(r) =\frac{GM}{c^2r}$ at low energies \cite{Maartens}, which is given by \cite{Randall1,Randall2} \begin{equation}\label{potential} V(r) = \frac{GM}{c^2r}\left[1 + \frac{2\ell^2}{3r^2} + \mathcal{O}\left(\frac{\ell}{r}\right)^4\right]. \end{equation} \noindent The KK modes that generate this correction are responsible for a nonzero $E_{\mu\nu}$. This term carries the modification to the weak-field equations, as we have already seen.
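The size of the leading KK term in Eq. (\ref{potential}) can be read off directly: the fractional deviation from the Newtonian potential is $2\ell^2/(3r^2)$, of order unity only for $r\lesssim\ell$, which is why table-top experiments at $\sim 0.1$ mm are the relevant probe. A short numerical illustration (the sample radii are our choices):

```python
def kk_fractional_correction(r, ell=1.0e-4):
    # Leading KK correction 2 ell^2 / (3 r^2) to V(r) in eq. (potential);
    # lengths in metres, ell = 0.1 mm (the table-top bound quoted above)
    return 2.0 * ell**2 / (3.0 * r**2)

at_ell = kk_fractional_correction(1.0e-4)   # r = ell: correction is 2/3
at_1mm = kk_fractional_correction(1.0e-3)   # one decade out: ~0.7%
```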
The Gaussian coordinate $y$ hereon denotes the direction normal out of the brane into the AdS$_5$ bulk, at each point of the 3-brane\footnote{In general, the normal vector field cannot be globally defined on the brane; this is possible only if the 3-brane is parallelizable.}. \begin{figure} \includegraphics[width=8.7cm]{ads5.eps} \caption{\small Schematic diagram of a slice of a 3-brane embedded in an AdS$_5$ bulk. The Gaussian coordinate $y$ is normal to the brane and $x$ denotes spacetime coordinates in the brane.} \label{fig:ads5} \end{figure} The RS metric is in general expressed as \begin{equation} ^{(5)}ds^2 = e^{-2k|y|}g_{\mu\nu}dx^{\mu}dx^{\nu} + dy^2, \end{equation} \noindent where $k^2 = 3/(2\ell^2)$, and the term $e^{-2k|y|}$ is called \emph{the warp factor} \cite{Randall1,Randall2,Maartens}, which reflects the confining role of the bulk cosmological constant $\Lambda_5$, preventing gravity from leaking into the extra dimension at low energies \cite{Maartens,Randall1,Randall2}. The term $|y|$ clearly provides the $\mathbb{Z}_2$ symmetry of the 3-brane at $y=0$. Concerning the anti-de Sitter (AdS$_5$) bulk, the cosmological constant can be written as $\Lambda_5 = -6/\ell^2$, and the brane is localized at $y = 0$, where the metric recovers its usual form. The contribution of the bulk to the brane can be shown to be due only to the Einstein tensor, and can be expressed as $\nabla_\nu G^{\mu\nu} = 0$, which implies that $\nabla_\nu (E^{\mu\nu} - S^{\mu\nu}) = 0$ \cite{Shiromizu}, where \begin{equation} S_{\mu\nu} := \frac{1}{4}\kappa_5^4\left[TT_{\mu\nu} - T^{\;\alpha} _{ \nu}T_{\mu \alpha} + \frac{1}{2}g_{\mu\nu}(T^2 - T_{\alpha\beta}T^{\alpha\beta})\right]. \end{equation} A vacuum on the brane, where $T_{\mu\nu} = 0$ outside a BH, implies that \begin{equation}\label{21} \nabla_\nu E^{\mu\nu} = 0. \end{equation} \noindent Eq. (\ref{21}) is referred to as the nonlocal conservation equation.
Other useful equations for the BH case are \begin{equation}\label{ricci2} G_{\mu\nu} = - \frac{1}{2}\Lambda_ 5g_{\mu\nu} - E_{\mu\nu}, \quad R = R^{\mu}_{\;\; \mu} = 0 = E^{\mu}_{\;\; \mu}. \end{equation} \noindent Therefore, a particular manner of expressing the vacuum field equations on the brane, given by eq.(\ref{ricci2}), is $E_{\mu\nu} = - R_{\mu\nu},$ where the bulk cosmological constant is incorporated into the warp factor in the metric. One can use a Taylor expansion in order to probe properties of a static BH on the brane \cite{Da}, and for a vacuum brane metric we have, up to terms of order ${\mathcal O}(y^5)$, the following: \begin{eqnarray}\label{metrica} &&\negthickspace g_{\mu\nu}(x,y) = g_{\mu\nu}(x,0) - E_{\mu\nu}(x,0)y^2 - \frac{2}{\ell}E_{\mu\nu}(x,0)|y|^3\nonumber\\ &&\negthickspace+\frac{1}{12}\left[\left({\Box} - \frac{32}{\ell^2}\right)E_{\mu\nu} + 2R_{\mu\alpha\nu\beta}E^{\alpha\beta} + 6E_{\mu}^{\; \alpha}E_{\alpha\nu}\right]_{y=0}\negthickspace y^4 \nonumber\end{eqnarray} \noindent where $\Box$ denotes the usual d'Alembertian. This shows in particular that the propagating effect of $5D$ gravity arises only at the fourth order of the expansion. For a static spherical metric on the brane given by \begin{equation}\label{124} g_{\mu\nu}dx^{\mu}dx^{\nu} = - F(r)dt^2 + \frac{dr^2}{H(r)} + r^2d\Omega^2, \end{equation} \noindent where $d\Omega^2$ denotes the line element of the unit 2-sphere related to the geometry of the 3-brane, the projected electric Weyl term on the brane is given by the expressions \begin{eqnarray} E_{00} &=& \frac{F}{r}\left(H' - \frac{1 - H}{r}\right),\; E_{rr} = -\frac{1}{rH}\left(\frac{F'}{F} - \frac{1 - H}{r}\right),\nonumber\\ E_{\theta\theta} &=& -1 + H +\frac{r}{2}H\left(\frac{F'}{F} + \frac{H'}{H}\right). \end{eqnarray} \noindent Note that the metric in eq.(\ref{124}) reduces to the Schwarzschild one if $F(r)$ equals $H(r)$.
The exact determination of these radial functions remains an open problem in BH theory on the brane \cite{Maartens, rs05,rs06,rs07,rs08,rs09}. These components allow one to evaluate the metric coefficients in eq.(\ref{metrica}). The area of the $5D$ horizon is determined by $g_{\theta\theta}$. Defining $\psi(r)$ as the deviation from the Schwarzschild form of $H(r)$ \cite{Maartens,rs05,rs06,rs07,rs01,rs02,rs03,Gian}, \begin{equation}\label{h} H(r) = 1 - \frac{2GM}{c^2r} + \psi(r), \end{equation} \noindent where $M$ is constant, yields \begin{eqnarray}\label{gtheta} g_{\theta\theta}(r,y) &=& r^2 - \psi'\left(1 + \frac{2}{\ell}|y|\right)y^2\nonumber\\ +\negthickspace \negthickspace &&\negthickspace\negthickspace\left[\psi' + \frac{1}{2}(1 + \psi')(r\psi' - \psi)'\right]\frac{y^4}{6r^2} + \cdots \end{eqnarray} \noindent It can be shown that $\psi$ and its derivatives determine the change in the area of the horizon along the extra dimension \cite{Maartens}. For a large BH, with horizon scale $r \gg \ell$, it follows from eq.(\ref{potential}) that \begin{equation}\label{psi} \psi(r) \approx -\frac{4GM\ell^2}{3c^2r^3}. \end{equation} \noindent \section{Variation in the luminosity of quasars and AdS curvature radius} The observation of quasars (QSOs) in the X-ray band can constrain the AdS$_5$ bulk curvature radius $\ell$, and indicate how the bulk is curled, from its geometrical and topological features. QSOs are astrophysical objects found at large astronomical distances (redshifts $z > 1$). For a \emph{gedanken} experiment involving a static BH being accreted, in a simple model, the accretion efficiency $\eta$ is given by \begin{equation}\label{eta} \eta = \frac{GM}{6c^2R_{{\rm Sbrane}}}, \end{equation} \noindent where $R_{{\rm Sbrane}}$ is the Schwarzschild radius corrected for brane-world effects.
The luminosity $L$ due to accretion onto a BH, which powers a quasar, is given by \begin{equation}\label{dl} L(\ell) = \eta(\ell) \dot{M}c^2, \end{equation} \noindent where $\dot{M}$ denotes the accretion rate and depends on the specific model of accretion. In order to estimate $R_{{\rm Sbrane}}$, set $H(r) = 0$ in eq.(\ref{h}), resulting in \begin{equation} 1 - \frac{2GM}{c^2R_{{\rm Sbrane}}} - \frac{4GM\ell^2}{3c^2R_{{\rm Sbrane}}^3} = 0. \end{equation} \noindent This equation can be rewritten as \begin{equation} R_{{\rm Sbrane}}^3 - \frac{2GM}{c^2}R_{{\rm Sbrane}}^2 - \frac{4GM\ell^2}{3c^2} = 0. \end{equation} \noindent Using Cardano's formula \cite{card}, it follows that \begin{equation}\label{111} R_{{\rm Sbrane}} = (a + \sqrt{b})^{1/3} + (a - \sqrt{b})^{1/3} + \frac{2GM}{3c^2},\end{equation} \noindent where \begin{eqnarray} a &=& \frac{2GM}{3c^2}\left(\ell^2 + \frac{4G^2M^2}{9c^4}\right),\\\label{rs3} b&=&\frac{4G^2M^2\ell^2}{9c^4}\left(\ell^2 + \frac{8G^2M^2}{9c^4}\right).\label{rs4} \end{eqnarray} \noindent Writing $a$ and $b$ explicitly in terms of the Schwarzschild radius $R_S$, it follows from eqs.(\ref{rs3},\ref{rs4}) that \begin{eqnarray}a&=& \frac{R_S}{3}\left(\ell^2 + \frac{R_S^2}{9}\right),\\\label{rs1} b&=& \frac{R_S^2\ell^2}{9}\left(\ell^2 + \frac{2R_S^2}{9}\right).\label{rs2} \end{eqnarray} Now, substituting the values of $G$ and $c$ in SI units, and adopting $\ell \sim 0.1\, {\rm mm}$ and $M \sim 10^9 M_\odot$ (where $M_\odot \approx 2 \times 10^{33}$ g denotes the solar mass), corresponding to the mass of a SMBH, it follows from eq.(\ref{111}) that the correction in the Schwarzschild radius of a SMBH by brane-world effects is given by \begin{equation}\label{coor} R_{{\rm Sbrane}} - R_S \sim 100\,{\rm m},\end{equation} \noindent and since the Schwarzschild radius is $R_S = \frac{2GM}{c^2} = 2.964444 \times 10^{12}\,{\rm m}$, the relative error concerning the brane-world corrections in the Schwarzschild radius of a SMBH is given by
\begin{equation}\label{razao} 1- \frac{R_S}{R_{{\rm Sbrane}}} \sim 10^{-10}.\end{equation} \noindent These calculations show that there exists a correction in the Schwarzschild radius of a SMBH caused by brane-world effects, although it is negligible. This tiny correction can be explained by the fact that the event horizon of the SMBH is $10^{15}$ times bigger than the AdS$_5$ bulk curvature radius $\ell$. As shall be seen in a sequel paper, these corrections become substantially larger in the case of mini-BHs, whose event horizons can be many orders of magnitude smaller than $\ell$. As proved in \cite{rcp}, the solution above for $R_{{\rm Sbrane}}$ can also be found in terms of the curvature radius $\ell$. It is then possible to find an expression for the luminosity $L$ in terms of the radius of curvature, through formula (\ref{dl}). Given observational values for the luminosity $L$, it is possible to estimate a value for $\ell$ within a given BH accretion model. Here we adopt a disc accretion model for the accretion rate \cite{shapiro}. For a typical supermassive BH of $10^9 M_{\odot}$ in a massive quasar, the accretion rate is given by \begin{equation} \dot{M} \approx 2.1 \times 10^{16} {\rm kg}\, {\rm s}^{-1}. \end{equation} Supposing the quasar radiates at the Eddington limit, given by (see, e.g., \cite{shapiro}) \begin{equation} L(\ell) = L_{{\rm Edd}} = 1.263 \times 10^{45}\left(\frac{M}{10^7 M_{\odot}}\right)\; {\rm erg\, s^{-1}}, \end{equation} \noindent for a quasar with a supermassive BH of $10^9 M_{\odot}$ the luminosity is given by $L \sim 10^{47}\, {\rm erg\, s^{-1}}$.
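As a sanity check on the closed form of eq.(\ref{111}), one can verify numerically that the Cardano expression with $a$ and $b$ as in eqs.(\ref{rs1},\ref{rs2}) satisfies the cubic, and recover the quoted value of $R_S$. A minimal Python sketch (our own check, not part of the original derivation; the constants are the ones used in the text):

```python
import math

G, c = 6.67e-11, 3.0e8          # SI values consistent with the quoted R_S
M_sun = 2.0e30                  # kg (2e33 g)

def cardano_root(Rs, ell):
    """Closed-form root of R^3 - Rs*R^2 - (2/3)*Rs*ell^2 = 0, eq. (111)."""
    a = (Rs / 3.0) * (ell**2 + Rs**2 / 9.0)
    b = (Rs**2 * ell**2 / 9.0) * (ell**2 + 2.0 * Rs**2 / 9.0)
    sb = math.sqrt(b)
    # a > sqrt(b) holds identically (a^2 - b = Rs^6/729 > 0),
    # so both cube roots are real
    return (a + sb) ** (1.0 / 3.0) + (a - sb) ** (1.0 / 3.0) + Rs / 3.0

def cubic(Rs, ell, R):
    return R**3 - Rs * R**2 - (2.0 / 3.0) * Rs * ell**2

# the Cardano root satisfies the cubic (well-conditioned test values)
assert abs(cubic(2.0, 1.0, cardano_root(2.0, 1.0))) < 1e-9
# the Schwarzschild limit ell -> 0 recovers R = R_S exactly
assert abs(cardano_root(2.0, 0.0) - 2.0) < 1e-12

# Schwarzschild radius of a 1e9 M_sun SMBH, quoted as 2.964444e12 m
R_S = 2 * G * (1.0e9 * M_sun) / c**2
assert abs(R_S - 2.964444e12) / 2.964444e12 < 1e-3
```

The test values in the first two assertions are of order unity on purpose: for astrophysical inputs the difference $a-\sqrt{b}$ is extremely small compared to $a$, and a direct floating-point evaluation of eq.(\ref{111}) suffers from cancellation.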
From eqs.(\ref{eta}) and (\ref{dl}), the variation in quasar luminosity of a SMBH is given by \begin{eqnarray}\label{dell} \Delta L &=& \frac{GM}{6c^2}\left(R_{{\rm Sbrane}}^{-1} - R_S^{-1}\right) \dot{M} c^2\nonumber\\ &=& \frac{1}{12}\left(\frac{R_S}{R_{\rm Sbrane}} -1\right)\dot{M}c^2. \end{eqnarray}\noindent For a typical SMBH, eq.(\ref{dell}) reads \begin{equation} \Delta L \sim 10^{28}\;{\rm erg\, s^{-1}}. \end{equation} In terms of solar luminosity units $L_\odot = 3.9 \times 10^{33} {\rm erg\,s^{-1}}$, it follows that the variation of luminosity of a (SMBH) quasar due to the correction of the Schwarzschild radius in a brane-world scenario is given by \begin{equation}\label{de1} \Delta L \sim 10^{-5}\;L_\odot. \end{equation} Naturally, this small correction in the Schwarzschild radius of SMBHs, given by eqs.(\ref{dell},\ref{de1}), implies a corresponding correction in quasar luminosity via the accretion mechanism. This correction turns out to be a hundred thousand times weaker than the solar luminosity. Given the huge distance between quasars and us, it is likely that these corrections will never be observed, although they do exist in a brane-world scenario. This correction refers to the luminosity integrated over all wavelengths. We look forward to the detection of these corrections in particular selected wavelengths, since quasars also emit radiation in the soft/hard X-ray band. In the figures below we illustrate the variation of luminosity $\Delta L$ of quasars as a function of the SMBH mass and $\ell$, and also, for a given BH mass, $\Delta L$ as a function of $\ell$.
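The order-of-magnitude arithmetic above can be reproduced in a few lines. The Python sketch below (our own check; it takes the $\sim 100$ m correction of eq.(\ref{coor}) and the quoted accretion rate at face value) recovers both the Eddington luminosity and the $\Delta L$ estimates:

```python
c     = 3.0e8        # m/s
R_S   = 2.9644e12    # m, Schwarzschild radius of a 1e9 Msun SMBH
dR    = 100.0        # m, brane-world correction, eq. (coor)
Mdot  = 2.1e16       # kg/s, quoted disc accretion rate
L_sun = 3.9e33       # erg/s

# Eddington scaling quoted in the text: L_Edd = 1.263e45 (M / 1e7 Msun) erg/s
L_edd = 1.263e45 * (1.0e9 / 1.0e7)
assert 1.0e47 <= L_edd < 1.0e48              # L ~ 10^47 erg/s

# eq. (dell): Delta L = (1/12) |R_S/R_Sbrane - 1| * Mdot * c^2
dL_erg = (abs(R_S / (R_S + dR) - 1.0) / 12.0) * Mdot * c**2 * 1.0e7  # erg/s
assert 1.0e28 < dL_erg < 1.0e29              # Delta L ~ 10^28 erg/s
assert 1.0e-5 < dL_erg / L_sun < 1.0e-4      # ~ 10^-5 L_sun, eq. (de1)
```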
\begin{figure} \includegraphics[width=8.5cm]{rs1.eps} \caption{\small 3D plot of $\frac{\Delta L}{\dot{M}c^2} \times \ell \times M$, where the SMBH mass $M$ varies from 10 to 10$^6$ $M_\odot$ and the radius $\ell$ of the AdS$_5$ bulk varies from $10^{-7}$ to $10^{-1}$ mm.} \label{fig:ads51} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{rs3.eps} \caption{\small Plot of $\displaystyle\frac{\Delta L}{\dot{M}c^2} \times \ell$ for $M = M_\odot$} \label{fig:ads52} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{rs2.eps} \caption{\small Plot of $\displaystyle\frac{\Delta L}{\dot{M}c^2} \times \ell$ for $M = 10M_\odot$} \label{fig:ads53} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{rs4.eps} \caption{\small Plot of $\displaystyle\frac{\Delta L}{\dot{M}c^2} \times \ell$ for $M = 100M_\odot$} \label{fig:ads54} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{rs5.eps} \caption{\small Plot of $\displaystyle\frac{\Delta L}{\dot{M}c^2} \times \ell$ for $M = 1000M_\odot$} \label{fig:ads55} \end{figure} \section{Concluding Remarks and Outlooks} In the present model the variation of quasar luminosity is regarded as an extra-dimension brane effect and can be immediately estimated by eq.(\ref{dell}), involving the Schwarzschild radius calculated in a brane-world scenario and the standard Schwarzschild radius of a BH on the 3-brane. It is also desirable to calculate the variation of quasar luminosity in a Kerr and in a Reissner-Nordstr\"om (RN) geometry, where the former is generated by an electrically neutral, rotating BH and the latter by a charged, static BH. This shall be done in a sequel paper, using a formula equivalent to eq.(\ref{h}) but now concerning the RN metric \cite{rs06,rs09,Da}. The 2-brane model, by contrast, for suitable choices of the extra-dimension length and of $\ell$, does predict tracks and/or signatures at the LHC \cite{Maartens,lhc}.
Black holes may be produced in particle collisions at energies possibly below the Planck scale. ADD brane-worlds \cite{ad1,ad2,ad3} also provide the possibility of observing black hole production signatures in next-generation colliders and cosmic ray detectors \cite{lhc1}. In the sequel article we will show that, since mini-BHs possess a Reissner-Nordstr\"om-like effective behavior under the gravitational potential, they feel $5D$ gravity and are more sensitive to extra-dimension brane effects. \section{Acknowledgements} The authors are grateful to Prof. Paul K. Townsend for his comments and suggestions, and to Prof. Roy Maartens for his patience and clarifying explanations concerning branes. The authors thank CAPES and CNPq for financial support.
\section{Introduction} A class of very accurate standard candles, the supernovae Ia (SNeIa), has been highly developed in the last two decades \citep{Branch}; however, these objects are hardly detectable at redshifts higher than $z = 1.7$, so the study of more distant regions of the Universe requires the implementation of more powerful standard candles. The problem becomes particularly crucial at intermediate redshift, $z$ = 6 - 7, where, up to now, no well-defined distance indicators are available. In the last years, several efforts have been made to implement gamma ray bursts (GRBs), the most powerful explosions in the Universe, as standard candles, and several interesting results have recently been achieved (e.g. Amati et al 2008; Basilakos $\&$ Perivolaropoulos 2008 and references therein). Considering the standard model of such objects, the GRB phenomenon should originate from black hole formation and release huge amounts of energy (up to $10^{54}$ erg). These events are observed at considerable distances, so there are several efforts to frame them into the cosmological distance ladder. In the literature, several more detailed models account for GRB formation, e.g. Meszaros 2006; Ruffini et al 2008, but, up to now, none of them is intrinsically capable of connecting all the observable quantities. For this reason, GRBs cannot be used as standard candles. Despite this shortcoming, there are several observational correlations among the photometric and spectral properties of GRBs. These features allow the use of GRBs as distance indicators \citep{Schaefer}, even when they cannot be fully ``enrolled'' in the class of standard candles.
In particular, it is possible to connect the peak energy of GRBs, $E_p$, with the isotropic energy released in the burst, $E_{iso}$, and with the rest-frame jet break-time of the afterglow optical light curve, measured in days, $t_b$ \citep{Liang2}: \begin{equation}\label{eq:no1} \log{E_{iso}}=a + b_1 \log{\frac{E_p (1+z)}{300keV}} + b_2 \log{\frac{t_b}{(1+z)1day}} \end{equation} where $a$ and $b_i$, with $i=1,2$, are calibration constants. Another interesting result is the relation given by Ghirlanda et al. \citep{Ghirlanda}. It connects the peak energy $E_p$ with the collimation-corrected energy, or the energy release of a GRB jet, $E_{\gamma}$, where \begin{equation} E_{\gamma} = ( 1 - \cos{\theta_{jet}} ) E_{iso}, \end{equation} with $\theta_{jet}$ the jet opening angle, given by \citep{Sari}: \begin{equation} \theta_{jet} = 0.163\left(\frac{t_b}{1 + z}\right)^{3/8}\left(\frac{n_0\eta_{\gamma}}{E_{iso,52}}\right)^{1/8}, \end{equation} where $E_{iso,52} = E_{iso}/10^{52}$ ergs, $n_0$ is the circumburst particle density in 1 cm$^{-3}$, and $\eta_{\gamma}$ is the radiative efficiency. The Ghirlanda et al. relation is \begin{equation}\label{eq:no2} \log{E_{\gamma}} = a + b \log{\frac{E_p}{300 keV}}, \end{equation} where $a$ and $b$ are two calibration constants. These two relations are the most used in constraining cosmology, owing to their relatively small scatter (remarkably tight for the Ghirlanda et al. relation) and to the sufficient number of data points available. In \citep{Schaefer}, an example of the discrepancy between data and theoretical curves is shown for these two relations. It is worth noticing that the calibration of the above relations is necessary to avoid the circularity problem: all the relations need to be calibrated for every set of cosmological parameters.
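Before moving on, the size of the collimation correction introduced above can be illustrated numerically; the short Python sketch below (jet-angle value purely illustrative) evaluates $E_{\gamma} = (1-\cos\theta_{jet})\,E_{iso}$:

```python
import math

def collimation_corrected(E_iso, theta_jet_deg):
    """E_gamma = (1 - cos(theta_jet)) * E_iso, with theta_jet in degrees."""
    return (1.0 - math.cos(math.radians(theta_jet_deg))) * E_iso

# For a typical jet opening angle of a few degrees the beaming factor
# suppresses the isotropic-equivalent energy by 2-3 orders of magnitude.
f = collimation_corrected(1.0, 5.0)   # beaming factor for theta_jet = 5 deg
assert 1e-3 < f < 1e-2
```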
Indeed, all GRB distances, obtained only in a photometric way, are strictly dependent on the cosmological parameters, since there is no low-redshift set of GRBs to achieve a cosmology-independent calibration. Recently, Liang et al. \citep{Liang} presented a calibration method (hereafter Liang) for several GRB relations, including the above relations (\ref{eq:no1}) and (\ref{eq:no2}), in a cosmology-independent way using the SNeIa. In fact, the SNeIa are very accurate standard candles, but their range is limited up to $z \approx 1.7$; hence, assuming that relations (\ref{eq:no1}) and (\ref{eq:no2}) work at any $z$ and that, at the same redshift, GRBs and SNeIa have the same luminosity distance, it becomes possible, in principle, to calibrate GRB relations at low redshifts. The calibration parameters are shown in Table \ref{table:no1}. \begin{table} \caption{Parameter values obtained by \citep{Liang}} \label{table:no1} \centering \begin{tabular}{c c c} \hline\hline Relation & a & b \\ \hline $E_{\gamma} - E_p$ & 52.26 $\pm$ 0.09 & 1.69 $\pm$ 0.11 \\ $E_{iso}-E_p - t_b$ & 52.83 $\pm$ 0.10 & 2.28 $\pm$ 0.30 \\ & & -1.07 $\pm$ 0.21 \\ \hline \end{tabular} \end{table} For the $E_{iso}-E_p-t_b$ relation, the $b$-value in the first line is $b_1$ and that in the second line is $b_2$. Once our working relations are calibrated with the Liang method, we can compute the luminosity distance $d_l$ from the well-known relation between $d_l$ and the energy-flux ratio of the distance indicators in consideration. Afterwards, we can use a formulation given by Visser \citep{Visser1}, where the luminosity distance $d_l$ is related to the cosmographic parameters \citep{Weinberg} by means of a Taylor series expansion of $d_l$ itself. Such an analysis works very well at low and intermediate redshifts, since very good classes of standard candles are available there. Besides, it is useful to constrain alternative theories of gravity, as shown in Capozziello et al. 2008.
Since we are calibrating GRBs with SNeIa (in the SNeIa redshift range, the $d_l$ Taylor series analysis works very well), the method could also be extended to the next step (intermediate-high redshifts), where GRBs are expected to be suitable distance indicators. This working hypothesis could be useful in order to link the low and high redshift ranges and then fully probe $d_l$. However, it is clear that such a Taylor expansion, derived for low redshifts, can be problematic for fitting GRBs at high redshifts. Here, we consider it a viable methodological approach to link GRBs to SNeIa. The aim of this work is to achieve the cosmographic parameters \citep{Weinberg} using the above GRB relations and then to test the cosmological density parameters in a $\Lambda$CDM model. The only assumption that we make here is that the Universe is described by a Friedmann-Robertson-Walker geometry and that the scale factor of the universe $a(t)$ can be expanded in a Taylor series (Sect.2). In Sect.3, after considering a sample of 27 GRBs, we use a best-fit analysis to derive the cosmographic parameters discussed in the previous section, adopting the so-called Chevallier-Polarski-Linder parameterization for the equation of state (EoS). Discussion and conclusions are given in Sect.4. \section{Cosmography} The calibration we want to achieve should be cosmologically model-independent; hence, applying the above relations to a GRB sample in a given $z$-range, we want to derive the related cosmography. In particular, we want to obtain the deceleration, jerk, and snap parameters \citep{Visser1} and compare them with the current values deduced by other methods and observations (see, for example, Basilakos $\&$ Perivolaropoulos 2008; Capozziello et al 2008 and references therein).
Being related only to the derivatives of the scale factor, the cosmographic parameters can be fitted against the distance-redshift relation without any \emph{a priori} assumption on the underlying cosmological model; however, this approach fails at very high redshifts, where the Taylor expansion no longer holds. To build a distance-redshift diagram, one has to calculate the luminosity distance for each GRB in a given sample. In our case the luminosity distance is \begin{equation} \label{lum1} d_l = \left(\frac{E_{iso}}{4\pi S_{bolo}'}\right)^{\frac{1}{2}}, \end{equation} where $S_{bolo}' = S_{bolo}/(1+z)$ is the bolometric fluence of gamma rays in the burst, corrected to the rest frame. The definition of $E_{iso}$ is different for each relation used; therefore, for the luminosity distance we have \begin{equation} \label{lum2} d_l = \left[\frac{10^a \left(\frac{E_p(1+z)}{300 keV}\right)^{b_1}\left(\frac{t_b}{(1+z)1 day}\right)^{b_2}}{4 \pi S'_{bolo}}\right]^{1/2}, \end{equation} adopting the Liang-Zhang relation, with $a$, $b_1$, and $b_2$ given in Table \ref{table:no1}, and \begin{equation} \label{lum3} d_l = 7.575\frac{{(1 + z) a^{2/3} [E_p(1 + z)/100\,{\rm keV}]^{2b/3} }}{{(S_{bolo} t_b )^{1/2} (n_0 \eta_\gamma )^{1/6} }}{\rm{ Mpc}}, \end{equation} for the Ghirlanda et al. relation, with $a$ and $b$ given in Table \ref{table:no1} \citep{xu}. Note that the former gives $d_l$ in centimeters, so the result must be divided by the value of 1 Mpc expressed in cm, while the latter gives $d_l$ directly in Mpc. The luminosity distance can be connected to the Hubble series \citep{Weinberg}.
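For concreteness, eq.(\ref{lum2}) can be evaluated for a single burst. The Python sketch below uses the Liang calibration of Table \ref{table:no1} and, as illustrative input, the entries for GRB 990510 from the data table of Sect. 3; it is only an order-of-magnitude check, since the intrinsic scatter of the relation is large:

```python
import math

MPC_CM = 3.086e24  # 1 Mpc in cm

def dl_LZ(z, Ep_keV, tb_days, S_bolo, a=52.83, b1=2.28, b2=-1.07):
    """Luminosity distance (Mpc) from the Liang-Zhang relation, eq. (lum2)."""
    E_iso = 10**a * (Ep_keV * (1 + z) / 300.0)**b1 * (tb_days / (1 + z))**b2
    S_prime = S_bolo / (1 + z)                # rest-frame bolometric fluence
    dl_cm = math.sqrt(E_iso / (4 * math.pi * S_prime))
    return dl_cm / MPC_CM

# GRB 990510: z = 1.62, Ep = 126 keV, t_b = 1.6 d, S_bolo = 2.85e-5 erg/cm^2
dl = dl_LZ(1.62, 126.0, 1.6, 2.85e-5)
assert 1.0e3 < dl < 1.0e5   # ~10^4 Mpc, the right scale for z ~ 1.6
```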
Expanding the Hubble law up to the fourth order in redshift and considering the related luminosity distance, we get \citep{Visser1} \begin{eqnarray} d_l(z) = d_H z \Bigg\{ 1 + {1\over2}\left[1-q_0\right] {z} -{1\over6}\left[1-q_0-3q_0^2+j_0+ \frac{k \; d_H^2}{a_0^2} \right] z^2 \nonumber \\ {+} {1\over24}[ 2-2q_0-15q_0^2-15q_0^3+5j_0(1+2q_0)+s_0 \nonumber \\ + \frac{2\; k \; d_H^2 \; (1+3q_0)}{a_0^2}]\; z^3 + \mathcal{O}(z^4) \Bigg\} \end{eqnarray} where $d_H = c/H_0$ is the Hubble radius and where the cosmographic parameters are defined as \begin{equation} H(t) = + {1\over a} \; {d a\over d t}\,, \end{equation} \begin{equation} q(t) = - {1\over a} \; {d^2 a\over d t^2} \;\left[ {1\over a} \; {d a \over d t}\right]^{-2}\,, \end{equation} \begin{equation} j(t) = + {1\over a} \; {d^3 a \over d t^3} \; \left[ {1\over a} \; {d a \over d t}\right]^{-3}\,, \end{equation} \begin{equation} s(t) = + {1\over a} \; {d^4 a \over d t^4} \; \left[ {1\over a} \; {d a \over d t}\right]^{-4}\,. \end{equation} They are usually referred to as the \emph{Hubble}, \emph{deceleration}, \emph{jerk}, and \emph{snap} parameters, respectively. Their present values, which we denote with a subscript $0$, may be used to characterize the evolutionary status of the Universe. For instance, $q_0 < 0$ denotes an accelerated expansion, while $j_0$ allows us to distinguish among different accelerating models; a positive value of $j_0$ indicates that, in the past, the acceleration reversed its sign. In this paper, according to the WMAP observations, we assume the value of the Hubble constant $H_0 \simeq 70 \pm 2$ km/sec/Mpc \citep{WMAP}. The cosmographic parameters can be expressed in terms of the dark energy density and the EoS. Following the prescriptions of the \emph{Dark Energy Task Force}, \citep{DETF}, we use the Chevallier-Polarski-Linder parameterization (CPL) for the EoS \citep{CPL1,CPL2} and assume a spatially flat Universe filled with dust matter and dark energy. 
The dimensionless Hubble parameter $E(z) = H/H_0$ reads as \begin{equation} E^2(z) = \Omega_M (1 + z)^3 + \Omega_{X} (1 + z)^{3(1 + w_0 + w_a)}e^{-\frac{3w_a z}{1+z}}, \end{equation} with $\Omega_{X} = 1 - \Omega_M$, and $w_0$ and $w_a$ the CPL parameters for the EoS (see Chevallier et al 2001; Linder 2003; Capozziello et al 2008 for details). We can have $\Omega_{X} \equiv \Omega_{\Lambda}$, with $\Lambda$ the cosmological constant. Such a relation can be used to evaluate the cosmographic parameters, obtaining \begin{equation}\label{eq:no10} q_0 = \frac{1}{2} + \frac{3}{2} (1 - \Omega_M) w_0 \ , \end{equation} \begin{equation}\label{eq:no11} j_0 = 1 + \frac{3}{2} (1 - \Omega_M) \left [ 3w_0 (1 + w_0) + w_a \right ] \ , \end{equation} \begin{eqnarray}\label{eq:no12} s_0 & = & -\frac{7}{2} - \frac{33}{4} (1 - \Omega_M) w_a \nonumber \\ ~ & - & \frac{9}{4} (1 - \Omega_M) \left [ 9 + (7 - \Omega_M) w_a \right ] w_0 \nonumber \\ ~ & - & \frac{9}{4} (1 - \Omega_M) (16 - 3\Omega_M) w_0^2 \nonumber \\ ~ & - & \frac{27}{4} (1 - \Omega_M) (3 - \Omega_M) w_0^3 \,. \end{eqnarray} For a $\Lambda$CDM universe, where $(w_0, w_a) = (-1, 0)$, this reduces to \begin{equation}\label{eq:no3} q_0 = -1 + \frac{3}{2}\Omega_M; \end{equation} \begin{equation}\label{eq:no4} j_0 = 1; \end{equation} \begin{equation}\label{eq:no5} s_0 = 1 - \frac{9}{2}\Omega_M\,, \end{equation} and these are the quantities that we are going to fit using a given GRB sample. \section{GRB data fitting} Let us now consider a GRB sample that satisfies the above relations. Unfortunately, only 27 GRBs in the Schaefer sample \citep{Schaefer} have observed jet breaks. The observational quantities of GRBs to take into account are listed in Table \ref{table:no6}. The luminosity distance for each of the relations is given by Eqs. (\ref{lum2}) and (\ref{lum3}), and then we obtain a data distribution in the luminosity distance-redshift diagram $d_l - z$.
The errors on the data are only of a photometric nature and, in a first analysis, we can exclude errors on the redshift. For each GRB, we assume $\eta_{\gamma} = 0.2$ and $\sigma_{\eta} = 0$, \citep{Frail}. \begin{table*} \centering \caption{GRBs Data Sample} \label{table:no6} \begin{tabular}{c c c c c c c} \hline\hline $GRB$ & $z$ & $E_p$ (keV) & $S_{bolo}$ (erg cm$^{-2}$) & $t_{jet}$ (days) & $\theta_{jet}$ (deg.) & $n_0$ $(cm^{-3})$ \\ (1) & (2) & (3) & (4) & (5) & (6) & (7)\\ \hline 970508 & 0.84 & 389 $\pm$ 40 & 8.09E-6 $\pm$ 8.1E-7 & 25 $\pm$ 5 & 23 $\pm$ 3 & 3.0 $\pm$ 2.4 \\ 970828 & 0.96 & 298 $\pm$ 30 & 1.23E-4 $\pm$ 1.2E-5 & 2.2 $\pm$ 0.4 & 5.91 $\pm$ 0.79 & 3.0 $\pm$ 2.4 \\ 980703 & 0.97 & 254 $\pm$ 25 & 2.83E-5 $\pm$ 2.9E-6 & 3.4 $\pm$ 0.5 & 11.02 $\pm$ 0.8 & 28.0 $\pm$ 10 \\ 990123 & 1.61 & 604 $\pm$ 60 & 3.11E-4 $\pm$ 3.1E-5 & 2.04 $\pm$ 0.46 & 3.98 $\pm$ 0.57 & 3.0 $\pm$ 2.4 \\ 990510 & 1.62 & 126 $\pm$ 10 & 2.85E-5 $\pm$ 2.9E-6 & 1.6 $\pm$ 0.2 & 3.74 $\pm$ 0.28 & 0.29 $\pm$ 0.14 \\ 990705 & 0.84 & 189 $\pm$ 15 & 1.34E-4 $\pm$ 1.5E-5 & 1 $\pm$ 0.2 & 4.78 $\pm$ 0.66 & 3.0 $\pm$ 2.4 \\ 990712 & 0.43 & 65 $\pm$ 10 & 1.19E-5 $\pm$ 6.2E-7 & 1.6 $\pm$ 0.2 & 9.47 $\pm$ 1.2 & 3.0 $\pm$ 2.4 \\ 991216 & 1.02 & 318 $\pm$ 30 & 2.48E-4 $\pm$ 2.5E-5 & 1.2 $\pm$ 0.4 & 4.44 $\pm$ 0.7 & 4.7 $\pm$ 2.8 \\ 010222 & 1.48 & 309 $\pm$ 12 & 2.45E-4 $\pm$ 9.1E-6 & 0.93 $\pm$ 0.1 & 3.03 $\pm$ 0.14 & 3.0 $\pm$ 2.4 \\ 011211 & 2.14 & 59 $\pm$ 8 & 9.20E-6 $\pm$ 9.5E-7 & 1.56 $\pm$ 0.16 & 5.38 $\pm$ 0.66 & 3.0 $\pm$ 2.4 \\ 020124 & 3.20 & 87 $\pm$ 18 & 1.14E-5 $\pm$ 1.1E-6 & 3 $\pm$ 0.4 & 5.07 $\pm$ 0.64 & 3.0 $\pm$ 2.4 \\ 020405 & 0.70 & 364 $\pm$ 90 & 1.10E-4 $\pm$ 2.1E-6 & 1.67 $\pm$ 0.52 & 6.27 $\pm$ 1.03 & 3.0 $\pm$ 2.4 \\ 020813 & 1.25 & 142 $\pm$ 14 & 1.59E-4 $\pm$ 2.9E-6 & 0.43 $\pm$ 0.06 & 2.8 $\pm$ 0.36 & 3.0 $\pm$ 2.4 \\ 021004 & 2.32 & 80 $\pm$ 53 & 3.61E-6 $\pm$ 8.6E-7 & 4.74 $\pm$ 0.5 & 8.47 $\pm$ 1.06 & 30.0 $\pm$ 27.0 \\ 030226 & 1.98 & 97 $\pm$ 27 & 8.33E-6 $\pm$ 
9.8E-7 & 1.04 $\pm$ 0.12 & 4.71 $\pm$ 0.58 & 3.0 $\pm$ 2.4 \\ 030328 & 1.52 & 126 $\pm$ 14 & 6.14E-5 $\pm$ 2.4E-6 & 0.8 $\pm$ 0.1 & 3.58 $\pm$ 0.45 & 3.0 $\pm$ 2.4 \\ 030329 & 0.17 & 67.9 $\pm$ 2.3 & 2.31E-4 $\pm$ 2.0E-6 & 0.5 $\pm$ 0.1 & 5.69 $\pm$ 0.5 & 1.0 $\pm$ 0.11 \\ 030429 & 2.66 & 35 $\pm$ 12 & 1.13E-6 $\pm$ 1.9E-7 & 1.77 $\pm$ 1.0 & 6.3 $\pm$ 1.52 & 3.0 $\pm$ 2.4 \\ 041006 & 0.71 & 63 $\pm$ 12 & 1.75E-5 $\pm$ 1.8E-6 & 0.16 $\pm$ 0.04 & 2.79 $\pm$ 0.41 & 3.0 $\pm$ 2.4 \\ 050318 & 1.44 & 47 $\pm$ 15 & 3.46E-6 $\pm$ 3.5E-7 & 0.21 $\pm$ 0.07 & 3.65 $\pm$ 0.5 & 3.0 $\pm$ 2.4 \\ 050505 & 4.27 & 70 $\pm$ 23 & 6.20E-6 $\pm$ 8.5E-7 & 0.21 $\pm$ 0.04 & 3.0 $\pm$ 0.8 & 3.0 $\pm$ 2.4 \\ 050525 & 0.61 & 81.2 $\pm$ 1.4 & 2.59E-5 $\pm$ 1.3E-6 & 0.28 $\pm$ 0.12 & 4.04 $\pm$ 0.8 & 3.0 $\pm$ 2.4 \\ 050904 & 6.29 & 436 $\pm$ 200 & 2.0E-5 $\pm$ 2E-6 & 2.6 $\pm$ 1 & 8 $\pm$ 1 & 3.0 $\pm$ 2.4 \\ 051022 & 0.80 & 510 $\pm$ 22 & 3.40E-4 $\pm$ 1.2E-5 & 2.9 $\pm$ 0.2 & 4.4 $\pm$ 0.1 & 3.0 $\pm$ 2.4 \\ 060124 & 2.30 & 237 $\pm$ 76 & 3.37E-5 $\pm$ 3.4E-6 & 1.2 $\pm$ & 3.72 $\pm$ 0.15 & 3.0 $\pm$ 2.4 \\ 060210 & 3.91 & 149 $\pm$ 35 & 1.94E-5 $\pm$ 1.2E-6 & 0.33 $\pm$ 0.08 & 1.9 $\pm$ 0.17 & 3.0 $\pm$ 2.4 \\ 060526 & 3.21 & 25 $\pm$ 5 & 1.17E-6 $\pm$ 1.7E-7 & 1.27 $\pm$ 0.35 & 4.7 $\pm$ 1 & 3.0 $\pm$ 2.4 \\ \hline \end{tabular} \\ \hspace{1mm} References: \citep{Jimenez}; \citep{Metzger}; \citep{Djorgovski}; \citep{Kulkarni}; \citep{Israel}; \citep{Bjornsson}; \citep{Li} \end{table*} Another version of the Hubble series can be used to improve the data fit. 
If we consider the equation for the distance modulus, \begin{equation} \mu = 25 + \frac{5}{\ln(10)}\ln[d_l/(1\; Mpc)], \end{equation} and substitute the equation for $d_l$, we obtain a logarithmic version of the Hubble series: \begin{eqnarray}\label{log} \ln[d_l / (z Mpc)] = \ln(d_H/Mpc) - \frac{1}{2}[-1+q_0] z \nonumber \\ + \frac{1}{24} [-3 + 10q_0 + 9q_0^2 - 4(j_0 + 1 + \frac{kd_H^2}{a_0^2})]z^2 \nonumber \\ + \frac{1}{24} [4q_0(j_0 + 1 + \frac{k d_H^2}{a_0^2}) + 5 - 9q_0 - 16q_0^2 - 10q_0^3 \nonumber \\ + j_0(7 + 4q_0) + s_0]z^3 + \mathcal{O}(z^4)\,. \end{eqnarray} This logarithmic version has the advantage that there is no need to transform the uncertainties on the distance modulus. With these considerations in mind, we perform, for each relation, a polynomial least-squares fit of the data, assuming Taylor series polynomials both in distance and in logarithmic distance. We stop at order $n = 3$ both for the polynomial fit and for the logarithmic fit. In the latter case, we obtain an estimate of the snap parameter. Note that we use least squares since, in the absence of any better data-fitting procedure, this is the standard procedure when assuming Gaussian-distributed uncertainties. The truncated polynomial used in the fits has the form \begin{equation} d(z) = \sum_{i=1}^3 a_i z^i, \end{equation} and \begin{equation} \ln[d(z)/(z Mpc)] = \sum_{i=1}^3 b_i z^i \end{equation} for the logarithmic fit. In the latter case, the Hubble constant enters as the $i=1$ component of the fit. As stated above, we use $H_0$ as a constraint (a \emph{prior}). The fits can be used to estimate the deceleration and the jerk parameters. The logarithmic fit is better for estimating the snap parameter through the values of the coefficients $a_i$ and $b_i$ and their statistical uncertainties.
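The truncated-polynomial fit can be sketched with a toy example. The following self-contained Python snippet is our own illustrative implementation via normal equations (not the MATLAB code actually used for the paper); it fits $d(z)=\sum_{i=1}^3 a_i z^i$ to synthetic, noise-free data and recovers the input coefficients:

```python
def fit_cubic_through_origin(zs, ds):
    """Least-squares fit of d(z) = a1*z + a2*z^2 + a3*z^3 (no constant term)."""
    # normal equations (X^T X) a = X^T d, with design-matrix columns z, z^2, z^3
    A = [[sum(z**(i + j) for z in zs) for j in range(1, 4)] for i in range(1, 4)]
    rhs = [sum(d * z**i for z, d in zip(zs, ds)) for i in range(1, 4)]
    # Gaussian elimination with partial pivoting on the 3x3 system
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for row in range(col + 1, 3):
            f = A[row][col] / A[col][col]
            for k in range(col, 3):
                A[row][k] -= f * A[col][k]
            rhs[row] -= f * rhs[col]
    a = [0.0, 0.0, 0.0]
    for row in (2, 1, 0):
        a[row] = (rhs[row] - sum(A[row][k] * a[k]
                                 for k in range(row + 1, 3))) / A[row][row]
    return a

# synthetic data from known coefficients (leading term ~ d_H z, values illustrative)
true_a = [4283.0, 1100.0, -250.0]
zs = [0.1 * i for i in range(1, 21)]
ds = [true_a[0] * z + true_a[1] * z**2 + true_a[2] * z**3 for z in zs]
fit = fit_cubic_through_origin(zs, ds)
assert all(abs(f - t) / abs(t) < 1e-6 for f, t in zip(fit, true_a))
```

With noise-free data the fit is exact up to floating-point conditioning, which is why the recovery tolerance can be set so tightly.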
The statistical uncertainties on $q_0$ are linearly related to the statistical uncertainties on the parameter $b_1$, while the statistical uncertainties on $j_0$ and $s_0$ depend non-linearly on $q_0$ and its statistical uncertainty. It is worth noticing the combination $j_0 + kd_H^2/a_0^2$, which is a well-known degeneracy in Eq.(\ref{log}) \citep{Weinberg}. It means that we cannot determine $j_0$ and $\Omega = 1 + kd_H^2/a_0^2$ separately; we need an independent determination of $\Omega$ to estimate the value of the jerk parameter. The results of the fits are presented in Table \ref{table:no3}, and all of them include the errors on the data. For the calculation of the uncertainties on $d_l$, we have followed the procedure discussed in Xu et al. \citep{xu}. For example, the fractional uncertainties on $d_l$ in the Ghirlanda et al. relation, without the small-angle approximation for $\theta_{jet}$ \citep{Sari}, are given by \begin{eqnarray} \left( {\frac{{\sigma _{d_L } }}{{d_L }}} \right)^2 & = & \frac{1}{4}\left[ \left( {\frac{{\sigma _{S_{bolo} } }}{{S_{bolo} }}} \right)^2 \right] +\frac{1}{4}\frac{{1 }}{{(1 - \sqrt {C_{\theta } } )^2 }}\left[ \left( {\frac{{\sigma _a }}{a}}\right)^2 \right. \nonumber \\ & & \left. + \left( {b\frac{{\sigma _{E_p } }}{{E_p }}} \right)^2 + \left( {b\frac{{\sigma _b }}{b}\ln \frac{E_p}{100}} \right)^2 \right]+ \frac{1}{4}\frac{C_{\theta }}{{(1 - \sqrt {C_{\theta } } )^2 }} \nonumber \\ & &\times \left[ {\left( {\frac{{3\sigma _{t_b } }}{{t_b }}} \right)^2 + \left( {\frac{{\sigma _{n_0 } }}{{n_0 }}} \right)^2}\right]\,, \end{eqnarray} where $C_{\theta} = [\theta \sin{\theta} / (8 - 8\cos{\theta})]^2$. This shows that the uncertainties on $d_l$ in the Ghirlanda et al. relation are very high, due to the dependence on several parameters. For this reason, in Fig. \ref{fig:no3}, the prediction bounds are plotted at a $68\%$ confidence level, instead of $95\%$ as in the LZ relation. \begin{table} \caption{Results of the fits.
LZ stands for the Liang-Zhang relation, GGL for the Ghirlanda et al. one} \label{table:no3} \centering \begin{tabular}{c c c c} \hline\hline Fit & $q_0$ & $j_0 + \Omega$ & $s_0$ \\ \hline $d_l(z)$ LZ & $-0.94 \pm 0.30$ & $2.71 \pm 1.1$ & \\ $d_l(z)$ GGL & $-0.39 \pm 0.11$ & $2.52 \pm 1.33$ & \\ $\ln [d_l/z]$ LZ & $-0.68 \pm 0.30$ & $0.021 \pm 1.07$ & $3.39 \pm 17.13$ \\ $\ln [d_l/z]$ GGL & $-0.78 \pm 0.20$ & $0.62 \pm 0.86$ & $8.32 \pm 12.16$ \\ \hline \end{tabular} \end{table} As said above, only statistical uncertainties have been considered, and other kinds of errors (systematics of cosmological inference, modelling errors and ``historical'' biases, Visser 2007b) have been neglected. If we do not assume $H_0$ as a constraint, the analysis gives $H_0 = 56$ km/s/Mpc, which means that the data sample needs to be improved with further GRBs to give more reliable results. A further step is to test the goodness of fit, which we did with the MATLAB statistics package. In particular, we used the R-square method: a value closer to 1 indicates a better fit. In Table \ref{table:no5}, the results of the R-square test are shown, and the plots of the residuals of the fits are shown in Figs. \ref{fig:no3}, \ref{fig:no4}. For the logarithmic fit, the bad value of the R-square is caused by the logarithm of the Hubble series, which spreads much of the data along the $\ln(d_l)$-axis. The values $\ll 1$ for the logarithmic fits are due to the scatter of the data. \begin{table} \caption{Goodness of the fits with the R-square.} \label{table:no5} \centering \begin{tabular}{c c c c} \hline\hline Fit & R-square \\ \hline $d_l(z)$ LZ & $0.9909$ \\ $d_l(z)$ GGL & $0.9977$ \\ $\ln [d_l/z Mpc]$ LZ & $0.4005$ \\ $\ln [d_l/z Mpc]$ GGL & $0.2929$ \\ \hline \end{tabular} \end{table} In summary, the results are in quite good agreement with the $\Lambda$CDM model, giving a Universe that accelerates in the present epoch and that underwent a decelerated phase in the past.
The signature of this past phase is related to the sign change of the parameter $q_0$ and to the positive value of the jerk parameter, unless a positive value of the spatial curvature constant $k$ is considered. However, this possibility is excluded by the latest observational results, which confirm a spatially flat Universe \citep{WMAP}. \subsection{The CPL parameterization to test the $\Lambda$CDM model} As said, the cosmographic parameters may also be expressed in terms of the dark energy density and EoS parameters. Starting from the Friedmann equation, we obtain the Hubble parameter: \begin{equation} H^2 = \left(\frac{8\pi G}{3}\right)\rho, \end{equation} where $\rho$ is the energy density. The continuity equation for each cosmological component is given by the Bianchi identity \citep{Weinberg}: \begin{equation} \frac{\dot{\rho}}{\rho} = -3H \left(1 + \frac{p}{\rho}\right) = -3H [ 1 + w(z)], \end{equation} where $p$ is the pressure of the component considered and $w(z) = p/\rho$ the redshift-dependent EoS of each component. The dark energy component responsible for the observed acceleration of the universe must have a negative EoS \citep{Riess, Allen}. To find $w_{DE}(z)$, we can adopt the CPL parameterization \citep{CPL1, CPL2}, where \begin{equation} w_{DE}(z) = w_0 + \frac{w_a z}{1 + z}, \end{equation} with $w_0$ and $w_a$ two parameters that enter directly into the equations for the cosmographic parameters, (\ref{eq:no10}-\ref{eq:no12}). To test the $\Lambda$CDM model, we have assumed $(w_0,w_a)=(-1,0)$. Conversely, such a case can be generalized by deducing the values of the parameters $(\Omega_M, w_0, w_a)$ from polynomial fits to the GRB data. Adopting our GRB sample, we obtained the following values \begin{equation} w_0 = -0.53 \pm 0.64 \qquad w_a = 0.59 \pm 0.77, \end{equation} which directly enter into the CPL parameterization.
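As a cross-check, the system (\ref{eq:no10}-\ref{eq:no12}) and its $\Lambda$CDM limit, eqs.(\ref{eq:no3})-(\ref{eq:no5}), can be verified numerically; a short Python sketch (the value of $\Omega_M$ is illustrative):

```python
def cosmographic(Om, w0, wa):
    """q0, j0, s0 from the CPL expressions, eqs. (no10)-(no12)."""
    q0 = 0.5 + 1.5 * (1 - Om) * w0
    j0 = 1.0 + 1.5 * (1 - Om) * (3 * w0 * (1 + w0) + wa)
    s0 = (-3.5
          - (33.0 / 4.0) * (1 - Om) * wa
          - (9.0 / 4.0) * (1 - Om) * (9 + (7 - Om) * wa) * w0
          - (9.0 / 4.0) * (1 - Om) * (16 - 3 * Om) * w0**2
          - (27.0 / 4.0) * (1 - Om) * (3 - Om) * w0**3)
    return q0, j0, s0

# LambdaCDM: (w0, wa) = (-1, 0) must reproduce eqs. (no3)-(no5)
Om = 0.3
q0, j0, s0 = cosmographic(Om, -1.0, 0.0)
assert abs(q0 - (-1 + 1.5 * Om)) < 1e-12
assert abs(j0 - 1.0) < 1e-12
assert abs(s0 - (1 - 4.5 * Om)) < 1e-12
```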
The errors on the CPL parameters are directly connected with the errors on the cosmographic parameters, as is easily seen from the system (\ref{eq:no10}-\ref{eq:no12}). The values of $w_0$ and $w_a$ agree, within the errors, with the $\Lambda$CDM model, without assuming {\it a priori} constraints on the cosmological model. \section{Discussion and conclusions} Starting from some relations connecting the observable quantities of GRBs, we have used a sample of 27 GRBs to derive the luminosity distance - redshift diagram of the Hubble law. The relations were conveniently calibrated using SNeIa in order to make them independent of any cosmological model. We have taken the Hubble law into account in the Taylor series form, assuming the luminosity distance $d_l$ as a redshift function whose coefficients are combinations of the cosmographic parameters $H_0$, $q_0$, $j_0$, and $s_0$. The aim was to evaluate such parameters starting from the GRB data. A direct analysis of the fits leads to the conclusion that, within the error range, the SNeIa results can also be extended to higher redshifts \citep{Visser3}. Besides, such results agree with the $\Lambda$CDM model according to Eqs. (\ref{eq:no3}),(\ref{eq:no4}),(\ref{eq:no5}). In particular, the value of the parameter $q_0$ that we found is in agreement with the observed $\Omega_M$ (see Table \ref{table:no4}). \begin{figure} \includegraphics[width=10.4cm, height=8cm]{adjdlz.eps} \caption{Luminosity distance - redshift diagram and the residuals of the $d_l(z)$ GGL fit. Note the discrepancy at high $z$. The dotted lines are the bounds predicted at the $68\%$ confidence level.} \label{fig:no3} \end{figure} \begin{figure} \includegraphics[width=10.4cm, height=8cm]{adjlndlz.eps} \caption{Logarithmic version of the luminosity distance relation versus redshift and the residuals of the $\ln[d_l/z\,Mpc]$ LZ fit. In this version of the Hubble series, the discrepancy is the same at every $z$. The dotted lines are the bounds predicted at the $95\%$ confidence level.
} \label{fig:no4} \end{figure} \begin{table} \caption{Cosmological density parameters} \label{table:no4} \centering \begin{tabular}{c c c} \hline\hline Fit & $\Omega_M$ & $\Omega_{\Lambda}$ \\ \hline $d_l(z)$ LZ & $0.04 \pm 0.03$ & $0.65 \pm 0.73$ \\ $d_l(z)$ GGL & $0.46 \pm 0.43$ & $0.54 \pm 2.82$ \\ $\ln [d_l/z Mpc]$ LZ & $0.37 \pm 0.31$ & $0.63 \pm 1.13$ \\ $\ln [d_l/z Mpc]$ GGL & $0.28 \pm 0.30$ & $0.72 \pm 1.09$ \\ \hline \end{tabular} \end{table} However, the sample we used is quite poor at high redshifts and, in some sense, this justifies the use of the Taylor series method, which works very well at low redshifts. In particular, at $z > 6$ we only have one GRB, GRB050904 (see Fig. \ref{fig:no3}). This GRB is very important in the fit results because it affects the trend of the fits. For this reason we need richer samples at medium-high redshifts to constrain the results better. However, if we had richer samples at high redshifts, the Taylor series analysis would fail to constrain cosmological models, since an exact, and not approximated, $d_l(z)$ expression is needed in that case. The best constraint, however, would be an absolute relation between several GRB observables, which would make GRBs powerful standard candles at intermediate-high redshift. Considering these preliminary results, it seems that cosmography with GRBs could be a useful tool in constraining self-consistent cosmological models even if, up to now, GRBs are not standard candles in the proper sense. \paragraph{Acknowledgements.} We thank the referee for the useful suggestions that improved the paper.
\section{Introduction} Most stars are believed to form in clusters (e.g. Lada \& Lada 2003) and high-mass stars may form exclusively in cluster-forming clouds. For a comprehensive understanding of clustered star formation, a good knowledge of the initial conditions and earliest phases of the process is crucial; in practice, such knowledge can only be inferred from detailed studies of deeply embedded protoclusters at (sub)millimeter wavelengths (e.g. Motte et al. 1998, Andr\'e 2002). Two main scenarios have been proposed to explain the formation of high-mass stars in clusters. In the first scenario, high-mass stars form essentially in the same way as low-mass stars, via an enhanced accretion-ejection phase. In the standard model of low-mass star formation, the mass accretion rate is governed by the thermal sound speed and does not exceed $\sim 10^{-5}$~M$_{\odot}$.yr$^{-1}$ (e.g. Shu 1977; Stahler et al. 2000) in cold cores. To form high-mass stars by accretion, a significantly higher accretion rate $\sim 10^{-3}$~M$_{\odot}$.yr$^{-1}$ is required to overcome the radiation pressure generated by the luminous central object (e.g. Wolfire \& Cassinelli 1987). In order to solve this problem, McKee \& Tan (2003) proposed a model in which high-mass star formation takes place in ultra-dense cores supported by turbulence within virialized cluster-forming clumps. This model produces high accretion rates such as those required to form high-mass stars by accretion. In the second scenario, high-mass stars form by coalescence of lower-mass stars in the dense inner core of a contracting protocluster (Bonnell et al. 1998). This scenario requires high stellar densities (i.e. $\sim 10^{8}$~stars.pc$^{-3}$) in order to render the probability of stellar collisions high enough and allow stellar mergers to take place. It avoids the accretion problem of high-mass star formation by directly combining the masses of lower- and intermediate-mass stars.
However, no detailed model exists yet to describe how this coalescence mechanism actually occurs. \begin{table*} \begin{minipage}[!ht!]{\textwidth} \caption{Measured source properties} \label{resume_C1} \renewcommand{\footnoterule}{} \begin{tabular}{l c c c c c c c c} \hline\hline Source & Coordinates & Undec.FWHM & P.A. & S$_{peak}^{1.2}$&S$_{peak}^{30m}$ & S$_{peak}^{exp:3.2}$ & S$_{peak}^{3.2}$& $S_{int}^{3.2}$ \\ & ($\alpha _{2000}$ $\,$ $\delta _{2000}$) & (arcsecond) & (deg) & (mJy/beam)& (mJy/beam) & (mJy/beam) & (mJy/beam) & (mJy) \\ \hspace{0.3cm}[1] & [2] & [3] & [4] & [5] & [6] & [7] & [8] & [9] \\ \hline C-MM1 & 06:41:17.95 $\,$ +09:29:03 & 5.8$\times$4.1& 63 & -- &255 & 6 & 18 & 21 \\ C-MM2 & 06:41:15.15 $\,$ +09:29:10 & 5.6$\times$4.3 & 50 &-- &183 & 4 & 7 & 8 \\ C-MM3 & 06:41:12.30 $\,$ +09:29:12 & 5.7$\times$4.4 & 58 &224& 573 & 14 & 37 & 45 \\ C-MM4 & 06:41:09.95 $\,$ +09:29:22 & 7.0$\times$5.0 & 87 &77& 426& 10 & 14 & 24 \\ C-MM5 & 06:41:10.15 $\,$ +09:29:36 & 6.2$\times$4.5 & 83 & --&261 & 6 & 5 & 7 \\ C-MM9 & 06:41:15.30 $\,$ +09:29:33 & 7.3$\times$4.3 & 51 &--& 94 & 2 & 6 & 9 \\ C-MM13 & 06:41:11.45 $\,$ +09:29:17 & 7.5$\times$6.1 & 87 & --&--&--& 5 & 11 \\ \hline \end{tabular} \flushleft [1]: The C-MM numbers are the same as in PAB06. The new source is labelled C-MM13. [2]: J2000 source coordinates, accurate to better than 1\hbox{$^{\prime\prime}$}, derived from a Gaussian fit to the PdBI 3.2mm dust continuum map. [3]: Undeconvolved FWHM sizes derived from fitting an elliptical Gaussian to the PdBI 3.2mm dust continuum map. [4]: Position angle (from North to East) of the major axis of the fitted Gaussian ellipse from the 3.2~mm dust continuum map. 
[5]: PdBI 1.2mm peak flux density at the best-fit source position (HPBW=3.1\hbox{$^{\prime\prime}$}$\times$1.5\hbox{$^{\prime\prime}$}) [6]: 30m 1.2mm peak flux density at the source position (HPBW=11\hbox{$^{\prime\prime}$} ; from PAB06) [7]: 3.2mm peak flux density expected at PdBI angular resolution estimated from col.[6] (HPBW=4.5\hbox{$^{\prime\prime}$}) [8]: PdBI 3.2mm peak flux density at the best-fit source position (HPBW=5.3\hbox{$^{\prime\prime}$}$\times$3.8\hbox{$^{\prime\prime}$}) [9]: PdBI 3.2mm integrated flux density inside the fitted Gaussian ellipse \end{minipage} \end{table*} On the observational side, studying the earliest stages of high-mass star formation is particularly difficult due to the tight packing, deeply embedded nature, and relatively large distances of massive protoclusters. Based on IRAM 30m observations of the massive cluster-forming clump NGC~2264-C (d$\sim 800$~pc), Peretto, Andr\'e, Belloche (2006 -- hereafter PAB06) recently proposed a picture of high-mass star formation combining features of the two above-mentioned scenarios. They showed that NGC~2264-C harbored a dozen Class~0-like objects (cf. Andr\'e, Ward-Thompson, Barsony 2000) and was characterized by large-scale collapse motions (see also Williams \& Garland 2002). They suggested that a massive, ultra-dense protostellar core was in the making in the central part of the NGC~2264-C clump as a result of the gravitational merger of two or more lower-mass Class~0 objects. The total mass inflow rate associated with the collapse of the clump toward the central protostellar core was estimated to be $3 \times 10^{-3}$~M$_{\odot}$.yr$^{-1}$. PAB06 argued that the combination of large-scale collapse and protostellar mergers may be the key to produce the conditions required for high-mass star formation in the center of NGC~2264-C. 
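The accretion rates quoted in the Introduction and the inflow rate derived by PAB06 translate directly into formation timescales. As a rough illustrative sketch (the 10~M$_{\odot}$ target mass is an arbitrary choice for illustration, not a value from the text):

```python
# Time needed to assemble a star of mass m_star by steady accretion, for the
# two rates quoted in the Introduction. The 10 Msun target is illustrative.
m_star = 10.0            # Msun (assumed target mass, for illustration only)
rate_standard = 1e-5     # Msun/yr, standard low-mass accretion rate (Shu 1977)
rate_required = 1e-3     # Msun/yr, rate needed to overcome radiation pressure

t_standard = m_star / rate_standard   # ~1e6 yr
t_required = m_star / rate_required   # ~1e4 yr

print(f"standard rate: {t_standard:.0e} yr, enhanced rate: {t_required:.0e} yr")
```

The two orders of magnitude between these timescales illustrate why an enhanced accretion phase (or large-scale inflow such as the $3 \times 10^{-3}$~M$_{\odot}$.yr$^{-1}$ estimated for NGC~2264-C) is required in the accretion scenario.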
In this paper, we follow up on the detailed single-dish study of NGC~2264-C by PAB06 and present higher-resolution observations of the same cluster-forming clump taken with the IRAM Plateau de Bure interferometer. We compare our observations with a set of SPH hydrodynamic numerical simulations which attempt to specifically model NGC~2264-C. As the kinematical and density patterns of NGC~2264-C appear to be relatively simple, comparison between millimeter observations of this region and numerical models offers a unique opportunity to make progress in our understanding of clustered star formation. Section~2 presents our PdBI observations. Section~3 describes the dedicated hydrodynamic SPH simulations that we performed to model NGC~2264-C. We compare the observations with the numerical simulations in Sect.~4 and draw several conclusions in Sect.~5. \section{Interferometer observations of NGC~2264-C} \subsection{Observations} We performed 3.2~mm and 1.2~mm observations of the central part of NGC~2264-C with the IRAM Plateau de Bure interferometer (PdBI) in December 2003 and April 2004. We used the C and D configurations with 6 antennas. We used both 1~mm and 3~mm receivers with 244.935620~GHz ($\lambda = 1.2$~mm) and 93.176258~GHz ($\lambda = 3.2$~mm) as central rest frequencies. We observed at four positions which were chosen so as to obtain a fully sampled mosaic at 3.2~mm (primary beam FWHM $\sim 54$\hbox{$^{\prime\prime}$}) and to encompass the millimeter sources C-MM1, C-MM2, C-MM3, C-MM4, C-MM5, and C-MM9 (see Fig.~2b of PAB06) identified by PAB06 with the IRAM 30m telescope. Because the corresponding 1.2~mm mosaic is undersampled (primary beam FWHM $\sim 20$\hbox{$^{\prime\prime}$}), only two of these sources (C-MM3 and C-MM4) were effectively imaged at 1.2~mm. We obtained a 3.2~mm dust continuum mosaic and two separate 1.2~mm continuum maps, as well as a N$_2$H$^+$(1-0) mosaic.
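The quoted primary beam sizes follow from the antenna diameter. The sketch below assumes the 15~m diameter of the PdBI dishes (a fact not stated in the text) and the common diffraction-limit approximation FWHM $\simeq 1.22\,\lambda/D$; the exact illumination taper modifies the 1.22 factor slightly:

```python
import math

def primary_beam_fwhm(wavelength_m, dish_m=15.0):
    """Primary beam FWHM in arcsec, using the ~1.22 lambda/D approximation."""
    rad_to_arcsec = 180.0 / math.pi * 3600.0
    return 1.22 * wavelength_m / dish_m * rad_to_arcsec

fwhm_3mm = primary_beam_fwhm(3.2e-3)   # ~54 arcsec, hence a full mosaic
fwhm_1mm = primary_beam_fwhm(1.2e-3)   # ~20 arcsec, hence the undersampling
print(f"{fwhm_3mm:.0f}'' at 3.2 mm, {fwhm_1mm:.0f}'' at 1.2 mm")
```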
The spectral resolution for the N$_2$H$^+$(1-0) data was 20~kHz, which corresponds to a velocity resolution of 0.06~km.s$^{-1}$ at 93.2~GHz. The sources used for the bandpass, amplitude and phase calibrations were 0420-014, 0528+134, 0736+017, CRL618, 0923+392, and 3C273 (only the first three were used for the second run in April). We calibrated the data and produced images using the CLIC and MAPPING software packages (Lucas 1999, Guilloteau et al. 2002), part of the GILDAS package\footnote{See \texttt{http://www.iram.fr/IRAMFR/GILDAS} for more information about the \textsc{gildas} software.} (Pety 2005). The deconvolution was performed using the natural weighting option of the Clark (1980) CLEAN algorithm (Guilloteau 2001). The final synthesized beam was 5.3\hbox{$^{\prime\prime}$}$\times$3.8\hbox{$^{\prime\prime}$} ~ (HPBW) with P.A.=+63\hbox{$^\circ$} ~ at 3.2~mm, and 3.1\hbox{$^{\prime\prime}$}$\times$1.5\hbox{$^{\prime\prime}$} ~(HPBW) with P.A.=+74\hbox{$^\circ$} ~ at 1.2~mm. We also combined our PdBI N$_2$H$^+$(1-0) observations with the single-dish N$_2$H$^+$ data cube of PAB06 in order to recover short-spacing information. The resulting synthesized beam of the combined N$_2$H$^+$(1-0) mosaic is 6.1\hbox{$^{\prime\prime}$}$\times$4.0\hbox{$^{\prime\prime}$} ~(HPBW) with P.A.=+65\hbox{$^\circ$}. \subsection{Dust continuum results: Evidence for disk emission} Our PdBI 3.2~mm dust continuum mosaic is shown in Fig.~\ref{n2264c_3mm}. It reveals only pointlike sources, since most of the extended emission seen in the single-dish 1.2~mm dust continuum map (cf Fig.~2 of PAB06) was filtered out by the interferometer. The final rms noise level was $\sigma \sim 0.8$~mJy/beam. We extracted millimeter sources from this map using the Gaussclump algorithm (Stutzki \& G\"usten 1990). We detected seven peaks lying above 5$\sigma$. All of these peaks were previously detected with the 30m telescope, except one object, here called C-MM13 (cf. Fig.~1), which is a new detection.
Several other peaks lie between 3$\sigma$ and 5$\sigma$, but, for lack of confidence in these marginal detections, we did not consider them. The present 3.2~mm continuum map confirms and improves the positions of the compact millimeter continuum sources detected by PAB06. Interestingly, our PdBI 1.2~mm continuum observations of C-MM3 and C-MM4 (not shown here) do not reveal any further sub-fragmentation. The source properties as derived from our PdBI and 30m observations are summarized in Table~\ref{resume_C1}. \begin{figure}[t] \hspace{-0.5cm} \includegraphics[height=10cm,angle=270]{5653fig1.eps} \caption{3.2~mm dust continuum mosaic of the central part of NGC~2264-C obtained with the PdBI. The (0\hbox{$^{\prime\prime}$}, 0\hbox{$^{\prime\prime}$}) position corresponds to ($\alpha = 06^h13^m00^s$; $\delta = 09^o29\arcmin10\hbox{$^{\prime\prime}$}$) in J2000 coordinates. The pre/protostellar sources detected in this map are marked with white crosses and labelled C-MM, keeping the same numbering scheme as in PAB06. The two black crosses mark the positions of C-MM10 and C-MM12 (cf. PAB06), which are located at the very edge of our mosaic. The black open star shows the position of the luminous $IRAS$ source IRS1. The rms of the mosaic is $\sigma \simeq 0.8$~mJy/beam. The contours are 2.4~mJy/beam (i.e. 3$\sigma$), 4~mJy/beam (i.e. 5$\sigma$), 5, 6, 8~mJy/beam and then go from 10 to 35~mJy/beam by 5~mJy/beam. \label{n2264c_3mm}} \end{figure} The millimeter dust continuum emission from a (Class~0 or Class~I) protostar a priori originates from two components: an extended envelope (a few thousand AU in size) and a disk (up to a few hundred AU). At a distance of 800~pc, the linear resolution of the 30m telescope is $\sim 9000$~AU (HPBW) at $\lambda = 1.2$~mm. On this spatial scale, the 1.2mm continuum emission observed toward a young protostar (Table \ref{resume_C1} col.[6]) is expected to be dominated by the envelope rather than by the disk (e.g.
Andr\'e, Ward-Thompson, Barsony 2000; Looney, Mundy, Welch 2000). Conversely, we expect the disk component to dominate on compact, interferometric scales (e.g. Terebey et al. 1993). Assuming an isothermal, centrally-condensed envelope, i.e. with a density $\rho \propto r^{-2}$, the flux density is expected to scale linearly with beam size: S$_{\nu} \propto \theta$. We thus expect the envelope contribution to the PdBI flux density at 1.2~mm to be given by: \begin{equation} S_{peak}^{exp:1.2} = S_{peak}^{30m} \times \left(\frac{HPBW_{Bure}}{HPBW_{30m}}\right) \label{flux_1} \end{equation} If we assume the Rayleigh-Jeans regime and adopt a dust opacity scaling as $\kappa_{\nu} \propto \nu^{\beta}$ (e.g. Hildebrand 1983), then we can also estimate the expected contribution of the envelope to the PdBI flux density at 3.2~mm: \begin{equation} S_{peak}^{exp:3.2} = S_{peak}^{exp:1.2} \times \left(\frac{1.2}{3.2}\right)^{\beta+2} \label{flux_2} \end{equation} In order to estimate a lower limit to the disk component, we choose $\beta=1$ which maximizes the contribution of the envelope. A value of $\beta=1.5$ is likely to be more representative of protostellar cores/envelopes (e.g. Ossenkopf \& Henning 1994) and would yield a lower estimate for the expected envelope contribution. The expected envelope flux densities are listed in Table \ref{resume_C1} col.[7] for each detected source. It can be seen that they are a factor of $\sim 2-3$ lower than the observed flux densities (Table \ref{resume_C1} col.[8]) for C-MM1, C-MM3 and C-MM9. The excess flux density observed on small spatial scales can be attributed to unresolved disk emission (e.g. Terebey et al. 1993). Our results thus suggest the presence of a disk in C-MM1, C-MM3, C-MM9 and confirm the protostellar nature of these candidate Class~0 sources. Given the uncertainties on the dust emissivity index $\beta$ (e.g. Dent et al. 1998), we cannot establish the presence or absence of a disk in C-MM2 and C-MM4.
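Equations (\ref{flux_1}) and (\ref{flux_2}) can be evaluated directly from the 30m peak flux densities of Table~\ref{resume_C1}. The sketch below uses the geometric-mean PdBI beam of 4.5\hbox{$^{\prime\prime}$}, the 30m beam of 11\hbox{$^{\prime\prime}$}, and $\beta = 1$; it reproduces col.~[7] of the table to within rounding for most sources:

```python
# Expected envelope contribution to the PdBI 3.2 mm peak flux (Eqs. 1-2):
# scale the 30m 1.2 mm peak flux to the PdBI beam (S ~ theta for rho ~ r^-2),
# then to 3.2 mm assuming S ~ nu^(beta+2) with beta = 1.
s_peak_30m = {"C-MM1": 255, "C-MM2": 183, "C-MM3": 573,
              "C-MM4": 426, "C-MM5": 261, "C-MM9": 94}   # mJy/beam, col. [6]
hpbw_bure, hpbw_30m = 4.5, 11.0   # arcsec
beta = 1.0                        # maximizes the envelope contribution

expected = {src: s * (hpbw_bure / hpbw_30m) * (1.2 / 3.2) ** (beta + 2)
            for src, s in s_peak_30m.items()}
for src, s in expected.items():
    print(f"{src}: expected envelope contribution ~ {s:.1f} mJy/beam")
```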
Finally, it is very likely that C-MM5 does not have a disk since its observed 3.2~mm flux density is consistent with pure envelope emission. For the three sources showing evidence of disk emission, i.e. C-MM1, C-MM3 and C-MM9, we can estimate both the disk and envelope masses as follows. First, we estimate the flux arising from the disk by subtracting the expected envelope peak flux density given in Table~\ref{resume_C1} col.[7] from the observed peak flux density of col.[8] (here, we assume that our 3.2~mm PdBI observations do not spatially resolve the disk given the distance of NGC~2264). Then, the flux arising from the envelope is considered to be given by the integrated flux of col.[9] minus the disk contribution. These flux estimates are listed in Table~\ref{flux_diskenv}. For the envelope mass estimates, we follow PAB06 and assume a dust temperature T$_d = 15$~K, $\beta=1.5$, as well as a dust opacity $\kappa_{1.2mm}= 0.005$~cm$^{2}$.g$^{-1}$ corresponding to $\kappa_{3.2mm}=1.3\times 10^{-3}$~cm$^{2}$.g$^{-1}$. Concerning the disk mass estimates, we assume a dust temperature of T$_d = 50$~K and a dust opacity $\kappa_{1.2mm}= 0.02$~cm$^{2}$.g$^{-1}$ (Beckwith et al. 1990) corresponding to $\kappa_{3.2mm}=5.2\times 10^{-3}$~cm$^{2}$.g$^{-1}$ (see Table~\ref{flux_diskenv}). The dust temperature adopted for the disk is higher because the disk, lying closer to the star, is expected to be warmer; the dust opacity is slightly different from the one adopted in PAB06 because of the enhanced dust emissivity expected in the dense central parts of protostellar disks (e.g. Ossenkopf \& Henning 1994). We caution that the disk masses calculated in this way are only rough estimates. A more detailed analysis of the density structure of these sources, especially C-MM3, through sub-arcsecond millimeter observations would be of great interest. By deconvolving the FWHM sizes of Table \ref{resume_C1} from the synthesized beam, we derive the geometrical mean diameter of each source.
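The mass estimates just described follow from the standard optically thin dust relation $M = S_\nu d^2 / (\kappa_\nu B_\nu(T_d))$. The sketch below applies it to C-MM3 with the temperatures and opacities adopted above (a minimal check; the factor $\ge 2$ opacity/temperature uncertainty quoted in the tables applies here as well):

```python
import math

# Optically thin dust mass: M = S_nu * d^2 / (kappa_nu * B_nu(T_d)),
# evaluated in cgs units for the C-MM3 flux densities quoted in the text.
h, k, c = 6.626e-27, 1.381e-16, 2.998e10   # Planck, Boltzmann, light speed
nu = 93.176258e9                           # Hz (3.2 mm)
d = 800 * 3.086e18                         # 800 pc in cm
msun = 1.989e33                            # g

def planck(nu_hz, t_k):
    """Planck function B_nu in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2 * h * nu_hz**3 / c**2 / math.expm1(h * nu_hz / (k * t_k))

def dust_mass(flux_mjy, t_k, kappa):
    """Mass in Msun from a 3.2 mm flux density (mJy), T_d (K), kappa (cm2/g)."""
    s_cgs = flux_mjy * 1e-26               # mJy -> erg s^-1 cm^-2 Hz^-1
    return s_cgs * d**2 / (kappa * planck(nu, t_k)) / msun

m_disk = dust_mass(23, t_k=50, kappa=5.2e-3)   # ~1.1 Msun, cf. Table 2
m_env = dust_mass(22, t_k=15, kappa=1.3e-3)    # ~15 Msun, cf. Table 3
print(f"C-MM3: M_disk ~ {m_disk:.1f} Msun, M_env ~ {m_env:.1f} Msun")
```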
For each object, we can then estimate the mean column density and mean volume density of the envelope component. All of these derived source parameters are listed in Table~\ref{resume_mass}. It can be seen that C-MM3 and C-MM4 have the most massive envelopes by far, with $M_{env} \ge 15\, M_{\odot}$ in both cases. The densest envelope/core is associated with the central source C-MM3, which reaches a mean volume density $\ge$ 1$\times$10$^8$~cm$^{-3}$ on a $\sim 3200$~AU (2$\times$FWHM) scale. \begin{figure}[t] \hspace{0.0cm} \includegraphics[height=8cm,angle=270]{5653fig2.eps} \vspace{-3cm} \caption{N$_2$H$^+$(1-0) integrated intensity map of the central part of NGC~2264-C obtained with the PdBI. The (0\hbox{$^{\prime\prime}$}, 0\hbox{$^{\prime\prime}$}) position corresponds to ($\alpha = 06^h13^m00^s$; $\delta =09^o29\arcmin10\hbox{$^{\prime\prime}$}$) in J2000 coordinates. The crosses with labels show the positions of the pre/protostellar cores detected in our PdBI dust continuum mosaic (Fig.~\ref{n2264c_3mm}), while the black cross without a label marks the position of C-MM10 (cf. PAB06). The white open star shows the position of the $IRAS$ source IRS1. The contours go from 1 to 5 Jy/beam.km/s by 1 Jy/beam.km/s. \label{n2264c_n2h+_pdbi}} \end{figure} The new source identified with the interferometer, C-MM13, could not be separated from C-MM3 at the angular resolution of the IRAM 30m telescope. The projected distance between C-MM3 and C-MM13 is only $\sim 10000$~AU. Surprisingly, C-MM13 is the only new compact millimeter continuum source detected with the PdBI above 5$\sigma$. The fact that there is almost no sub-fragmentation despite a factor 2-4 improvement in angular resolution between the partially resolved 30m sources and their PdBI counterparts suggests that most of the compact dust continuum sources detected at the 30m represent individual Class~0 objects rather than small groups of protostellar cores.
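The tabulated mean density of C-MM3 can similarly be recovered from its envelope mass and FWHM size. A minimal sketch follows; the mean molecular weight is not stated in the text, and $\mu = 2.33$ is an assumption that reproduces the tabulated value, while the common $\mu = 2.8$ (mass per H$_2$, including He) would give $\sim 1.1\times10^{8}$~cm$^{-3}$:

```python
import math

# Mean H2 number density of the C-MM3 envelope: n = M / (4/3 pi R^3 mu m_H),
# with M = 15.2 Msun within a radius equal to the FWHM (1600 AU).
# mu = 2.33 is an assumption chosen to reproduce the tabulated 1.4e8 cm^-3.
msun, au, m_h = 1.989e33, 1.496e13, 1.673e-24   # cgs
m_env = 15.2 * msun
radius = 1600 * au
mu = 2.33

volume = 4.0 / 3.0 * math.pi * radius**3
n_h2 = m_env / (volume * mu * m_h)
print(f"C-MM3 mean envelope density ~ {n_h2:.1e} cm^-3")
```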
However, we stress that our PdBI observations could only detect dust continuum sources with 3.2~mm peak flux densities larger than 4~mJy/beam (i.e. 5$\sigma$). Assuming the same temperature and dust properties as above for the envelope mass estimates, this corresponds to a mass detection threshold $\sim 3$~M$_{\odot}$. Therefore, we could not have detected low-mass pre/protostellar cores possibly lying in the vicinity of the main sources listed in Table~\ref{resume_C1}. This point is further discussed in the next section. \begin{table} \begin{minipage}[ht]{\columnwidth} \caption{Estimated PdBI 3.2~mm flux densities of the disk and envelope components for the three objects showing evidence of disk emission} \label{flux_diskenv} \centering \renewcommand{\footnoterule}{} \begin{tabular}{c c c c c c } \hline\hline Source & S$_{disk}$\footnote{3.2~mm flux density of the disk component estimated by subtracting the peak flux density expected for the envelope (col.[7] of Table~\ref{resume_C1}) from the observed peak flux density (col.[8] of Table~\ref{resume_C1}) }& S$_{peak}^{env}~$\footnote{Estimated 3.2~mm peak flux density of the envelope component at the PdBI resolution (cf. col.[7] of Table~\ref{resume_C1}) }& S$_{int}^{env}$\footnote{3.2~mm integrated flux density of the envelope component estimated by subtracting the disk contribution given in col.[2] from the total integrated flux density measured with PdBI (cf. col.[9] of Table~\ref{resume_C1}) } & M$_{disk}$\footnote{Disk mass seen with the PdBI and estimated from S$_{disk}$ (with T$_d = 50$~K and $\kappa=5.2\times10^{-3}$~cm$^2$.g$^{-1}$). 
Typical uncertainty is a factor $\ge 2$ (on either side) due to uncertain dust opacity and dust temperature.} \\ & (mJy) & (mJy/beam) &(mJy) & (M$_{\odot}$) \\ \hline C-MM1 & 12 & 6 & 9 & 0.6 \\ C-MM3 & 23 & 14 & 22 & 1.1 \\ C-MM9 & 4 & 2 & 5 & 0.2 \\ \hline \end{tabular} \end{minipage} \end{table} \begin{table*} \begin{minipage}[ht]{\textwidth} \caption{Derived source parameters} \label{resume_mass} \centering \renewcommand{\footnoterule}{} \begin{tabular}{ c c c c c c c} \hline\hline Source & FWHM\footnote{Geometrical mean of the deconvolved, major and minor FWHM diameters measured on the PdBI 3.2mm continuum mosaic } & N$_{H_2}$\footnote{Column density of the envelope seen with the PdBI and estimated from S$_{peak}^{3.2}$ for the sources without a disk (col.[8] of Table~\ref{resume_C1}) and from S$_{peak}^{env}$ for the sources with a disk (col.[3] of Table~\ref{flux_diskenv}). The adopted dust properties are T$_d = 15$~K and $\kappa=1.3\times10^{-3}$~cm$^2$.g$^{-1}$. Typical uncertainty is a factor $\ge 2$ (on either side) due to uncertain dust opacity and dust temperature.} & M$_{env}$\footnote{Envelope mass seen with the PdBI and estimated from S$_{int}^{env}$ (Table~\ref{flux_diskenv}; with T$_d = 15$~K and $\kappa=1.3\times10^{-3}$~cm$^2$.g$^{-1}$). The uncertainty is the same as for the column density.} & M$_{core}^{30m}$\footnote{Core mass seen with the 30m telescope (from PAB06). This mass is larger than M$_{env}$ because, contrary to the PdBI, the 30m does not filter out most of the large scale emission} &n$_{H_2}$\footnote{Envelope volume density estimated in a radius equal to the FWHM. Estimated within a radius twice as small, the density increases by a factor of 4.
The uncertainty is the same as for the column density} & V$_{LSR}$\footnote{LSR velocity estimated from the combined N$_2$H$^{+}$(1-0) spectra} \\ & (AU) & (10$^{23}$~cm$^{-2}$) & (M$_{\odot}$) & (M$_{\odot}$) & (cm$^{-3}$) &(km.s$^{-1}$) \\ \hline C-MM1 & 1400 & 7 & 6.2 & 13.1&8.3$\times$10$^7$&-- \\ C-MM2 & 1400 & 9 & 5.5 & 16.0 &7.3$\times$10$^7$& 6.2 \\ C-MM3 &1600 & 17 & 15.2 & 40.9&1.4$\times$10$^8$& 7.1 \\ C-MM4 & 3000& 17 & 16.5 & 35.1&2.2$\times$10$^7$ & 8.9 \\ C-MM5 & 2200 & 6 & 4.8 & 18.4&1.6$\times$10$^7$ &-- \\ C-MM9 & 2400 & 2 & 3.5 & 6.6&9.3$\times$10$^6$&-- \\ C-MM13 & 4000 & 6 & 7.6 & -- &4.3$\times$10$^6$ & 8.2 \\ \hline \end{tabular} \end{minipage} \end{table*} \subsection{N$_2$H$^+$(1-0) results} \subsubsection{PdBI observations only} Our PdBI N$_2$H$^+$(1-0) integrated intensity map is shown in Fig.~\ref{n2264c_n2h+_pdbi}. As for the dust continuum mosaic, most of the extended emission is filtered out. On this map, it can be seen that C-MM5 and possibly C-MM12 are not closely associated with a N$_2$H$^+$(1-0) peak. We also note that the strongest N$_2$H$^+$(1-0) peak is associated with the new C-MM13 object, which is one of the weakest dust continuum sources. The morphological differences between the dust continuum sources and their N$_2$H$^+$(1-0) counterparts may reflect differences in chemical evolutionary stage (cf. Aikawa et al. 2005). It is also noteworthy that several N$_2$H$^+$(1-0) peaks are not associated with any of the bona fide dust continuum sources listed in Table~\ref{resume_mass}. On the other hand, most of these N$_2$H$^+$(1-0) peaks have faint dust continuum counterparts with flux levels between 3$\sigma$ and 5$\sigma$ in the 3.2~mm continuum map (compare Fig.~\ref{n2264c_3mm} and Fig.~\ref{n2264c_n2h+_pdbi}). The N$_2$H$^+$(1-0) spectra observed at PdBI (prior to combination with 30m data) provide an estimate of the velocity dispersion within the sources on a $\sim 3500$~AU (FWHM) spatial scale.
The mean line-of-sight velocity dispersion is found to be $<\sigma_{los}>\simeq 0.34$~km.s$^{-1}$. Assuming a kinetic temperature T$_k = 15$~K, the non-thermal contribution to this velocity dispersion is $<\sigma_{los}^{NT}> \simeq 0.33$~km.s$^{-1}$. Comparing this value with the isothermal sound speed, c$_s \simeq 0.23$~km.s$^{-1}$, we conclude that the sources are still marginally dominated by non-thermal motions (due to, e.g., turbulence, collapse, or outflow) on scales of a few thousand AU. \subsubsection{Combined PdBI and 30m observations} \begin{figure}[!t!] \hspace{0.0cm} \includegraphics[height=8cm,angle=270]{5653fig3.eps} \vspace{-3cm} \caption{N$_2$H$^+$(1-0) integrated intensity map of the central part of NGC~2264-C resulting from the combination of our 30m and PdBI data. The (0\hbox{$^{\prime\prime}$}, 0\hbox{$^{\prime\prime}$}) position corresponds to ($\alpha = 06^h13^m00^s$; $\delta = 09^o29\arcmin10\hbox{$^{\prime\prime}$}$) in J2000 coordinates. The crosses mark the positions of the pre/protostellar 3.2~mm dust continuum sources detected in Fig.~\ref{n2264c_3mm}. For the sake of readability, only a few sources are labeled. The white open star symbol shows the position of the $IRAS$ source IRS1. The contours go from 3 to 13 Jy/beam.km/s by 2 Jy/beam.km/s. \label{n2264c_n2h+}} \end{figure} \begin{figure}[!t!] \hspace{0cm} \includegraphics[height=7cm,angle=270]{5653fig4.eps} \caption{Position-velocity diagram derived from the combined (PdBI~$+$~30m) N$_2$H$^+$(101-012) data cube by taking a cut along the East-West axis going through C-MM2, C-MM3, C-MM4 and C-MM13. The positions of these sources are plotted on the position axis. The white contours go from 0.5 to 1.1 Jy/beam by 0.2 Jy/beam.
\label{pv_n2h+}} \end{figure} As mentioned in \S~2.1, we added short-spacing 30m information to our interferometric data in order to obtain a N$_2$H$^+$ map sensitive to a wide range of angular scales from $\sim 4\hbox{$^{\prime\prime}$}$ up to $\sim 4\hbox{$^\prime$}$. This combination was performed using the MAPPING package developed by IRAM (Guilloteau et al. 2002). The combined PdBI/30m map of N$_2$H$^+$(1-0) integrated intensity is shown in Fig.~\ref{n2264c_n2h+}. As expected, more extended emission is present in the combined mosaic, but the compact sources detected in the PdBI map are still clearly visible. In order to constrain the kinematical pattern of these sources within the NGC~2264-C clump, we constructed a position-velocity (PV) diagram along an East-West axis going through the four central sources, C-MM2, C-MM3, C-MM4, and C-MM13 (see Fig.~\ref{pv_n2h+}). This PV diagram shows an overall velocity gradient of 8.4~km.s$^{-1}$.pc$^{-1}$ from East to West between C-MM2 and C-MM4. The LSR velocities of each of the four sources C-MM2, C-MM3, C-MM4, and C-MM13 are listed in Table~\ref{resume_mass}. Figure \ref{pv_n2h+} helps to clarify the origin of the velocity discontinuity identified by PAB06 with the 30m telescope in the center of NGC~2264-C (see Fig.~6 of PAB06). At the 30m resolution, the N$_2$H$^+$(101-012) spectrum observed toward the central source C-MM3 was double-peaked. The higher resolution of the PdBI interferometer now allows us to identify a distinct component, C-MM13, separated by 13\hbox{$^{\prime\prime}$} ~in position and $\sim 1.1$~km.s$^{-1}$ in velocity from C-MM3. With the 30m telescope, C-MM13 could not be separated from C-MM3. We also observe in the western part of the PV diagram (at an offset of $-70$\hbox{$^{\prime\prime}$}~ and velocity of 7.4~km.s$^{-1}$) a velocity feature which is associated with the strong N$_2$H$^+$(1-0) peak lying in this part of the clump (cf Fig.~\ref{n2264c_n2h+}).
This velocity feature clearly departs from the rest of the diagram. We attribute it to a peculiar velocity field around the luminous young star IRS1, whose wind has likely perturbed the ambient velocity field and triggered star formation in the immediate vicinity ($\sim 10$\hbox{$^{\prime\prime}$}~ in radius around IRS1), as suggested by the observations of Nakano et al. (2003) and Schreyer et al. (2003). To summarize, our interferometric observations confirm the (Class~0) protostellar nature of C-MM1, C-MM3, C-MM9, and set new constraints on the kinematics of NGC~2264-C. The PdBI observations can help us to confirm or refute the scenario proposed by PAB06 of an axial collapse of NGC~2264-C along its long axis leading to the merging of dense cores in the center. In the context of this scenario, the protostellar nature of the millimeter continuum cores sets strong timescale constraints: the individual collapse of the cores must occur on a significantly shorter timescale than the larger-scale collapse of the clump as a whole. The presence of two central sources, C-MM3 and C-MM13, adjacent to one another (i.e. separated by $\sim 10000$~AU) and with a velocity difference of $\sim 1.1$~km.s$^{-1}$, must also be accounted for. In the next two sections, we attempt to match these observational constraints with hydrodynamic simulations. \section{SPH numerical simulations} \subsection{Numerical method and initial conditions} \begin{figure*}[th!] \hspace{0cm} \includegraphics[height=18cm,angle=0]{5653fig5.eps} \vspace{-0cm} \caption{Time evolution of a simulation with an initial turbulent to gravitational energy ratio $\alpha_{turb}^0=5\%$. The first column displays the velocity field of the particles taken along the long axis (z-axis) of the model filament, as traced by individual SPH particles. The second column similarly displays the evolution of the density cut along the z-axis of the model filament. The third column shows synthetic column density maps in the (z,x) plane.
The reference time, labelled t$_{ff}$, is taken to be one global free-fall time of the initial clump after the start of the simulation, i.e., t$_{ff}=9.5\times10^5$~yr. The first row is taken at a time step t$_{ff}-4\times10^5$~yr, while the second, third, and fourth rows are for time steps t$_{ff}-2\times10^5$~yr, t$_{ff}-1.0\times10^5$~yr, and t$_{ff}-0.4\times10^5$~yr, respectively. \label{sim}} \end{figure*} In order to test the physical validity of the scenario proposed by PAB06 (see also \S ~1), we performed Smoothed Particle Hydrodynamics (SPH) simulations (Monaghan 1992, Bate et al. 2003) using the DRAGON SPH code from the Cardiff institute (see Goodwin et al. 2004). We simulated the time evolution of an isothermal (T$_k=20$~K), Jeans-unstable elongated clump of mass 1000~M$_{\odot}$, comparable to the estimated total mass of NGC~2264-C ($\sim 1600\, M_{\odot}$ -- PAB06). The model clump was initially ellipsoidal (finite boundary conditions) with an aspect ratio of 2. The initial density profile was given by: \begin{equation} n_{H_2} = \frac{n_c}{(1+(r/r_0)^2+(z/2 r_0)^2)} \label{density} \end{equation} corresponding to a flat inner ($r<r_0$) region and a $n_{H_2}\propto r^{-2}$ outer ($r>r_0$) region. The total mass of the flat inner region was $\sim 200$~M$_{\odot}$. Highly concentrated clouds (with, e.g., n$_{H_2} \propto r^{-2}$) are known to hardly fragment during collapse (cf. Myhill \& Kaula 1992; Whitworth et al. 1996), while uniform or moderately concentrated clouds (with n$_{H_2} \propto r^{-1}$ or flatter) typically fragment into as many Jeans-mass fragments as they contain initially (cf. Burkert et al. 1997). The expected number of fragments produced by the collapse of our model clump thus corresponds to the number of Jeans masses estimated in the flat inner core. At the mean density calculated within r$_0$ (i.e. n$_{H_2} \sim 1\times10^3$~cm$^{-3}$) and for T$_k = 20$~K, the Jeans mass is M$_J \simeq 20$~M$_{\odot}$ (see Bonnell et al.
1996 for a precise definition of M$_J$), which yields a Jeans mass number $N_J \sim 10$ for the flat inner core. In these simulations we also included turbulent fluctuations. Since the exact nature and properties of interstellar turbulence are not fully understood yet, we considered two types of energy spectrum: 1) a spectrum scaling as Kolmogorov turbulence, i.e., $E(k)\propto k^{-5/3}$, and 2) a white spectrum, i.e., $E(k)\propto k^{0}$. The phases of these turbulent fluctuations were chosen randomly. Initially, three energy components controlled the evolution of the model filament: the thermal energy, $\mathcal T_{th}$, which remained constant in time throughout the simulations (i.e., isothermal assumption); the gravitational energy, $\mathcal W$, whose initial value depended on the clump density profile and thus on $n_c$ and $r_0$ (larger values of $n_c$ and/or $r_0$ correspond to lower gravitational energy; cf. Eq.(\ref{density})); and the turbulent energy, $\mathcal T_{turb}$. In all the simulations shown in this article, $n_c$ and $r_0$ have the same value, namely $n_c(H_2)=2000$~cm$^{-3}$ and $r_0=0.7$~pc, corresponding to an initial thermal to gravitational energy ratio $\alpha_{th}^0 \sim 8 \%$. The initial value of the ratio of turbulent to gravitational energy, $\alpha_{turb}^0$, was varied from 0$\%$ to 50$\%$. We have also explored other initial conditions. When n$_c$ is too large ($n_c> 5000$~cm$^{-3}$), we find that too many fragments form; when n$_c$ is too small ($n_c< 500$~cm$^{-3}$), only one central fragment generally forms. We have also varied the initial aspect ratio and conclude that if it is too close to one, the cloud is not sufficiently filamentary, whereas it becomes too filamentary if the initial aspect ratio is too large. Finally, we have also explored the possibility that NGC~2264-C could be the result of a collision between two preexisting clouds rather than a collapsing elongated clump.
However, it was not possible to reproduce the various features of this cloud within the scope of this scenario. The most important disagreements are i) a collision quickly tends to create a sheet rather than a filamentary object; ii) we find it very difficult to produce a well defined third object like C-MM3 by interaction of two colliding clouds; iii) a simple collision is unable to create a series of young condensations spread over the long axis of the clump at, e.g., positions comparable to C-MM1 and C-MM5. All our simulations were performed with a total of 5 million SPH particles. When the local density exceeded n$_{H_2} = 1.3 \times 10^8$ cm$^{-3}$, standard SPH particles were replaced by sink particles. The radius, $r_{sink} = 500~AU$, of the sink particles defines the highest resolution reached by our simulation. All particles falling within $r_{sink}$ of a sink particle and being bound to it were removed from the simulations, and their mass, linear and angular momentum were added to the corresponding sink particle values. Using sink particles allowed us to avoid artificial fragmentation (Truelove et al. 1997, Bate \& Burkert 1997). The relatively low density threshold at which sink particles were introduced implies that we could not model advanced phases of star/cluster formation but only the first stages of clump fragmentation. Indeed, the limited numerical resolution of our simulations prevented us from describing small spatial scale processes such as disk formation. \subsection{General Pattern} Figure~\ref{sim} displays the density and velocity fields along the z-axis (i.e., long axis), as well as the column density maps in the (z,x) plane, at four time steps for a model filament with an initial level of turbulence, $\alpha_{turb}^0=5\%$. The reference time was chosen to be at one global free-fall time, t$_{ff}$, after the start of the simulation. Given the initial central density of the model clump, this corresponds to t$_{ff} = 9.5\times10^5$~yr. 
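The order-of-magnitude figures quoted above (M$_J \simeq 20$~M$_{\odot}$, $N_J \sim 10$, t$_{ff} = 9.5\times10^5$~yr) can be checked with a few lines of Python. This is only a sketch, not the exact Bonnell et al. (1996) definition: it adopts the common convention $M_J = (\pi/6)\,\rho\,\lambda_J^3$ with $\lambda_J = c_s\sqrt{\pi/(G\rho)}$, a mean molecular weight of 2.33 per particle, and 2.8~m$_H$ of gas mass per H$_2$ molecule (He included).

```python
import math

# cgs constants
G, k_B, m_H, M_sun = 6.674e-8, 1.381e-16, 1.673e-24, 1.989e33
yr = 3.156e7  # seconds per year

T = 20.0      # K, isothermal gas temperature
n_H2 = 1.0e3  # cm^-3, mean H2 density within r_0

rho = 2.8 * m_H * n_H2                   # mass density (H2 + He)
c_s = math.sqrt(k_B * T / (2.33 * m_H))  # isothermal sound speed, ~0.27 km/s

lam_J = c_s * math.sqrt(math.pi / (G * rho))    # Jeans length
M_J = (math.pi / 6.0) * rho * lam_J**3 / M_sun  # Jeans mass in solar masses
N_J = 200.0 / M_J                               # Jeans masses in the inner 200 M_sun
t_ff = math.sqrt(3.0 * math.pi / (32.0 * G * rho)) / yr  # free-fall time in yr

print(f"M_J ~ {M_J:.0f} M_sun, N_J ~ {N_J:.0f}, t_ff ~ {t_ff:.1e} yr")
```

This gives M$_J \approx 23$~M$_{\odot}$, $N_J \approx 9$, and t$_{ff} \approx 9.7\times10^5$~yr, consistent with the values quoted in the text; note that the quoted global free-fall time corresponds to the mean inner density ($\sim 10^3$~cm$^{-3}$) rather than to the central density $n_c$.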
The four time steps shown in Fig.~\ref{sim} were taken at t$_{ff} - 4\times10^5$~yr, t$_{ff}- 2\times10^5$~yr, t$_{ff}-1.0\times 10^{5}$~yr, and t$_{ff}-0.4\times 10^{5}$~yr, respectively. We can describe the evolution of the model clump as follows (see also Bonnell et al. 1996; Inutsuka \& Miyama 1997). Since the ellipsoidal clump initially contains several thermal Jeans masses (i.e. N$_J \sim 10$, cf. Sect.~3.1) and has a shorter dynamical timescale perpendicular to its major axis, it first collapses along its minor axis when seen projected onto the plane of the sky (see first and second panels of Fig.~\ref{sim}), amplifying the initial anisotropy and leading to the formation of a very elongated and filamentary structure (cf. Lin et al. 1965). This fast contraction proceeds until thermal and turbulent pressure gradients can stop the collapse along the minor axis, ensuring an approximate hydrostatic equilibrium in the (xy) plane (cf. Bonnell et al. 1996). Since the dynamical timescale is longer along the major axis, the clump keeps collapsing along its long axis, i.e. the z-direction, after a transverse equilibrium has been established. The velocity field is initially nearly homologous (i.e., $V_z \propto z$ -- see first and second panels of Fig.~\ref{sim}) but becomes more and more complex as the filament fragments into several cores, each of them collapsing individually as can be seen in the third and fourth panels of Fig.~\ref{sim}. The individual collapse of the cores leads to the formation of local protostellar accretion shocks and associated protostars. Moreover, due to the global collapse of the model clump toward its center, we can also see the formation of a central shock which separates the eastern ($z >0$) and western ($z <0$) sides of the clump. Altogether this dynamical evolution produces a complex density and velocity pattern. When $\alpha_{turb}^0=50\%$, the velocity field (not displayed here for conciseness) is much less organized.
More shocks develop, which leads to enhanced clump fragmentation through the process now widely referred to as ``turbulent fragmentation'' in the literature (e.g. Padoan \& Nordlund 2002, Klessen et al. 2005). The number of shocks is larger and the model clump becomes more substructured as the initial level of turbulence increases (see also Jappsen \& Klessen 2004). At the other extreme, the case with no initial turbulence at all, $\alpha_{turb}^0=0\%$, does not yield any fragmentation due to the lack of initial fragmentation seeds; such perfectly quiescent initial conditions are not realistic. Several general conclusions can therefore be drawn from our simulations. In particular, the initial level of turbulence appears to play a key role in shaping the global appearance of the model clump. The higher the turbulence, the more dispersed and less filamentary the model clump is. For low levels of initial turbulence, i.e., low values of $\alpha_{turb}^0$, the structure and kinematics of the clump are dominated by gravity, while for high levels of turbulence the clump is primarily structured by turbulence. \section{Detailed comparison between the observations and the SPH simulations} We performed a wide set of SPH simulations with different initial parameters, e.g., different values of the initial level of turbulence, $\alpha_{turb}^0$, and of the thermal to gravitational energy ratio, $\alpha_{th}^0$. We did not find it necessary to use different initial turbulent velocity fields since the time evolution of the model clump depends only weakly on this choice. When calculating synthetic observations, we varied the inclination angle of the long axis of the model clump with respect to the line of sight. An inclination angle of 45 degrees was adopted to produce the synthetic images and diagrams shown in Fig.~\ref{comp_coldens} and Fig.~\ref{pv_diagrams}.
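The projection onto the plane of the sky used for the synthetic observations amounts to a simple rotation. A minimal numpy sketch, under our own conventions (the clump long axis is the simulation z-axis, and the line of sight lies in the x--z plane at an angle $i$ from the long axis), is:

```python
import numpy as np

def project(pos, vel, incl_deg=45.0):
    """Sky offset along the projected long axis and line-of-sight velocity
    for a clump whose long (z) axis makes an angle incl_deg with the line
    of sight; pos and vel have shape (N, 3)."""
    i = np.radians(incl_deg)
    los = np.array([np.sin(i), 0.0, np.cos(i)])    # unit vector along the line of sight
    axis = np.array([-np.cos(i), 0.0, np.sin(i)])  # projected long axis on the sky
    z_sky = pos @ axis   # offset along the projected long axis
    v_los = vel @ los    # line-of-sight velocity
    return z_sky, v_los
```

For $i = 45$ degrees, offsets along the long axis and line-of-sight velocities are both scaled by the same factor $\sqrt{2}/2 \approx 0.71$ relative to their intrinsic values.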
For each set of simulations, we sought the particular time step at which the synthetic data, when convolved to the resolution of the observations, best matched the existing 30m and PdBI constraints. In the next subsection, we present our ``best-fit'' simulations and discuss the consequences of changing the best-fit parameters. \subsection{Overall morphology: A fragmented filament} \begin{figure*}[!ht!] \hspace{0.0cm} \includegraphics[height=18cm,angle=270]{5653fig6.eps} \vspace{-0cm} \caption{Observed column density distribution (first row) compared to synthetic column density maps (second to fifth row) convolved to the 30m angular resolution (first column) and to the PdBI angular resolution (second column) for four different initial levels of turbulence: $\alpha_{turb}^0=1\%$ (second row), $\alpha_{turb}^0=5\%$ (third row), $\alpha_{turb}^0=20\%$ (fourth row), $\alpha_{turb}^0=50\%$ (fifth row). The best-fit simulation corresponds to the third row, i.e., $\alpha_{turb}^0=5\%$ (for which the displayed time step is the ``best-fit'' time step). Note that the synthetic PdBI maps include the effect of interferometric filtering (see text). In each map, the contour levels go from 10 to 90$\%$ in steps of 10$\%$ of the peak emission. The observed 30m column density distribution corresponds to the 1.2mm dust continuum map of PAB06. \label{comp_coldens}} \end{figure*} The first important feature which must be reproduced by the simulations is the elongated shape of NGC~2264-C and the presence of several protostellar sources lining up along the long (East-West) axis of the clump. Figure \ref{comp_coldens} compares the observed column density maps (first row) with synthetic maps obtained from simulations with four different initial levels of turbulence, i.e., $\alpha_{turb}^0=1\%$ (second row), $\alpha_{turb}^0=5\%$ (third row), $\alpha_{turb}^0=20\%$ (fourth row), and $\alpha_{turb}^0=50\%$ (fifth row), all assumed to be ``observed'' with a viewing angle of 45 degrees.
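Convolution of a synthetic map to a single-dish resolution such as that of the 30m telescope can be sketched with a Gaussian beam applied in Fourier space (a minimal version with periodic boundaries; the beam FWHM in pixels depends on the assumed pixel scale and is a placeholder here):

```python
import numpy as np

def convolve_to_beam(image, fwhm_pix):
    """Convolve a synthetic map with a circular Gaussian beam of the
    given FWHM (in pixels), using FFTs (periodic boundary conditions)."""
    ny, nx = image.shape
    ky = np.fft.fftfreq(ny)[:, None]   # spatial frequencies, cycles/pixel
    kx = np.fft.fftfreq(nx)[None, :]
    sigma = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    beam_ft = np.exp(-2.0 * (np.pi * sigma) ** 2 * (kx**2 + ky**2))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * beam_ft))
```

Because the beam transfer function equals unity at zero spatial frequency, this operation conserves the total flux of the map.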
The first column of Fig.~\ref{comp_coldens} displays the synthetic data convolved to the 30m resolution while the second column displays the data convolved with the PdB interferometer beam. When convolving the simulated data to the PdBI resolution, we included the effect of interferometric filtering so as to allow more direct comparison with the observations. For this purpose, we used the UV\_MODEL task of the GILDAS package. This task generated a set of visibilities in the UV plane by calculating the values of the Fourier transform of the simulated input image at the observed UV baselines. The rest of the points in the UV plane was set to zero. This method had the consequence of filtering out all extended structures present in the numerical simulations. The four simulations shown in Fig.~\ref{comp_coldens} are compared when the synthetic column density maps convolved to the 30m resolution match the observed map best, except for the case with $\alpha_{turb}^{0} = 50 \%$ (cf. fifth row) which does not exhibit a filamentary shape at any time step in the simulation. The corresponding time steps all lie in the range between t$_{ff} - 5\times10^5$~yr and t$_{ff} - 1.5\times10^5$~yr. The total mass accreted onto sink particles at these time steps ranges from 0$\%$ to 0.2$\%$ of the initial clump mass. As already mentioned, we are thus looking at the very first stages of the formation of a protocluster, i.e., when the first pre-/proto-stellar cores with typical mean volume densities $\sim 10^5$~cm$^{-3}$ appear. It can also be seen in Fig.\ref{comp_coldens} that the filamentary, elongated appearance of the NGC~2264-C clump cannot be reproduced when the initial level of turbulence is too high in the model. Based on this argument, we conclude that $\alpha_{turb}^0$ has a maximum value of $20 \%$, although the $5\%$ model already provides a better match to the observations than the $20\%$ model. 
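The effect of the UV\_MODEL-style filtering described earlier in this section can be reproduced schematically by zeroing the unsampled cells of the Fourier plane. The mask below is a toy stand-in for the true PdBI baseline coverage:

```python
import numpy as np

def uv_filter(image, uv_sampled):
    """Keep only the Fourier components where uv_sampled is True (the
    'observed baselines'); zero everything else, mimicking the
    interferometric filtering described in the text."""
    vis = np.fft.fftshift(np.fft.fft2(image))
    vis[~uv_sampled] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(vis)))

# Toy example: an interferometer with no short spacings resolves out
# any smooth, extended emission.
n = 64
ky, kx = np.indices((n, n)) - n // 2
mask = np.hypot(kx, ky) > 3          # central (short-spacing) UV cells missing
smooth = np.ones((n, n))             # perfectly extended emission
filtered = uv_filter(smooth, mask)   # essentially zero everywhere
```

Because the short spacings are missing, the uniform "extended" map is filtered out almost entirely, which is why the synthetic PdBI maps retain only compact structures.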
Figure~\ref{comp_coldens} also shows that the large-scale morphology of the clump observed at the resolution of the 30m telescope provides the strongest discriminator between different values of $\alpha_{turb}^0$. Our ``best-fit'' simulation is shown in the third row of Fig.~\ref{comp_coldens}. It corresponds to a flat energy spectrum, $E(k)\propto k^{0}$, and an initial value of $\alpha_{turb}^0=5\%$, which is much lower than in other numerical SPH studies of cloud fragmentation (e.g. Bate et al. 2003). Note that the case of Kolmogorov-like turbulence leads to results which are broadly similar to those shown in Fig.~\ref{comp_coldens}, except that the shape of the filament is more irregular and too distorted to match the observations well. We therefore restrict our attention to the $E(k)\propto k^{0}$ case. Although this energy spectrum differs from the classical Kolmogorov one, we argue in Sect.~5 that it is not unrealistic on the parsec scale of the NGC~2264-C clump. Comparison between the observed dust continuum maps of NGC~2264-C and the synthetic column density maps of the ``best-fit'' simulation (see Fig.~\ref{comp_coldens}) shows that the number of fragments and their alignment are well reproduced. By analogy with the observations, we have labelled the three main fragments of the synthetic 30m column density map SIM2, SIM3, and SIM4. The corresponding synthetic PdBI map shows a strong central source, SIM3, surrounded by weaker sources, as observed. Moreover, an additional component, labelled SIM13, becomes visible next to SIM3 in the simulations when ``observed'' at the PdBI angular resolution, which is strongly reminiscent of the (C-MM3, C-MM13) system in the real interferometric map. As described in \S~3.2, the collapse of the filament proceeds in two main phases: first, a global contraction velocity field is established along the long axis; second, a strong shock is generated at the center by the two interacting sides of the model clump.
At the same time, the clump fragments to form protostars. Thus, there are at least two relevant dynamical timescales in the problem: a global dynamical timescale corresponding to the global evolution of the elongated clump, and a local dynamical timescale corresponding to the dynamical evolution of individual fragments. In other words, there is competition between local collapse (i.e. fragmentation) and global collapse. In our simulations, this is controlled by the ratio of thermal to gravitational energy, $\alpha_{th}^0$, and thus by $n_c$ and r$_0$ (whose values are 2000 cm$^{-3}$ and 0.7~pc, respectively) since the kinetic temperature and the mass of the model clump are fixed (cf. \S~3.1). If $n_c$ or r$_0$ are too small (i.e. the density structure approaches $n_{H_2} \propto r^{-2}$; cf. Eq.(\ref{density})), the individual fragments do not have enough time to collapse on their own before entering the central shock. Therefore only one, massive central core forms. Conversely, if $n_c$ and r$_0$ are too large, many protostars (and eventually stars) form before any significant large-scale velocity field is established along the clump long axis. Furthermore, in the latter case, the number of fragments produced in the simulations becomes larger than the observed number of fragments. Note that the collapse simulations shown in this paper were performed with model clumps of total mass $M_{tot} = 1000\, M_{\odot}$, while the total mass of NGC~2264-C is estimated to be somewhat larger, $\sim 1600$~M$_{\odot}$ (cf. PAB06). With more massive model clumps, we did not manage to reproduce the overall morphology of NGC~2264-C, in the sense that too much fragmentation occurred. This suggests that our models lack some source of support against gravity compared to the actual NGC~2264-C clump. This will be discussed further in \S~5. \subsection{The sharp central velocity discontinuity} \begin{figure*}[!th!]
\hspace{1cm} \includegraphics[height=15cm,angle=270]{5653fig7.eps} \caption{Observed position-velocity diagrams (first row) compared to synthetic position-velocity diagrams (second to fifth row) convolved to the 30m angular resolution (first column) and to the PdBI angular resolution (second column) at four different time steps of our best-fit SPH simulation ($\alpha_{turb}^0 = 5\%$): t$_{ff}-4\times10^5$~yr (second row); t$_{ff}-2\times10^5$~yr (third row); t$_{ff}-1\times10^5$~yr (fourth row); t$_{ff}-0.4\times10^5$~yr (fifth row) (with t$_{ff}=9.5\times 10^5$~yr). The best-fit time step corresponds to the fourth row. The value of the turbulent to gravitational energy ratio $\alpha_{turb}$ at each time step is given on the right-hand side. Note that $\alpha_{turb}$ increases as time proceeds in the simulation. \label{pv_diagrams}} \end{figure*} One of the most interesting features of NGC~2264-C is the central velocity discontinuity observed by PAB06 in optically thin tracers toward C-MM3 (see Fig.~\ref{pv_diagrams}). This velocity discontinuity is believed to trace the axial (i.e., 1D) collapse of NGC~2264-C along its long axis, as well as a possible dynamical interaction between protostellar sources at the center of the clump. Our new PdBI observations, which confirm the presence of a strong velocity gradient along the long axis of the clump (i.e. $\sim 8.4$~km.s$^{-1}$.pc$^{-1}$ -- see Fig.~\ref{pv_n2h+}), set additional constraints on the velocity field in the central part of NGC~2264-C. When the initial level of turbulence was lower than $20\%$, our SPH simulations convolved to the 30m resolution led to a central discontinuity resembling that observed. Furthermore, the shape of the PV diagram observed at the PdBI resolution along the long axis of the clump appears to be a key tracer of the time evolution, as can be seen in Fig.~\ref{pv_diagrams}.
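A synthetic position-velocity diagram of the kind shown in Fig.~\ref{pv_diagrams} is, in essence, a mass-weighted two-dimensional histogram of projected offset along the long axis versus line-of-sight velocity. A minimal sketch (array names and units are placeholders, not the actual analysis code):

```python
import numpy as np

def pv_diagram(offset, v_los, mass, x_edges, v_edges):
    """Mass-weighted position-velocity diagram: bin the SPH particles in
    (sky offset along the long axis, line-of-sight velocity)."""
    pv, _, _ = np.histogram2d(offset, v_los, bins=(x_edges, v_edges),
                              weights=mass)
    return pv  # shape (n_offset_bins, n_velocity_bins)
```

Convolving each velocity channel with the appropriate beam then yields the 30m and PdBI versions of the diagram.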
In addition to observed PV diagrams (first row), Fig.~\ref{pv_diagrams} shows the synthetic PV diagrams of our ``best-fit'' simulation ($\alpha_{turb}^0 = 5\%$) convolved to the 30m resolution (first column) and to the PdBI resolution (second column) at the same four time steps as in Fig.~\ref{sim} (rows two to five). Note that simulations adopting a Kolmogorov-like turbulent energy spectrum and low values of $\alpha_{turb}^0$ ($\le 20\%$) led to velocity discontinuities which are similar to the discontinuity shown here for the ``best-fit'' simulation. The reference time in Fig.~\ref{pv_diagrams} is the same as in Fig.~\ref{sim}, namely one global free-fall time (t$_{ff} = 9.5\times10^5$~yr) after the start of the simulation. At the first time step shown, t$_{ff}-4\times 10^5$~yr (second row), no clear kinematical signature is apparent in the synthetic PV diagrams, either at the 30m or at the PdBI resolution. At t$_{ff}-2\times 10^5$~yr (third row), the synthetic PV diagrams start to exhibit a velocity structure reminiscent of the observed velocity gradient and central discontinuity, but the amplitude of the velocity structure is not large enough to match the observations. At the best-fit time step, i.e., t$_{ff}-1\times10^5$~yr, the agreement between the simulated PV diagrams (fourth row) and the observed PV diagrams (first row) is quite remarkable. The central amplitude (i.e. $\sim2$~km.s$^{-1}$), shape, and position of the velocity discontinuity are well reproduced. Moreover, the synthetic PdBI PV diagram shows a $\sim 1$~km.s$^{-1}$ velocity gap between the two central fragments, SIM3 and SIM13, as observed between C-MM3 and C-MM13. The fifth row of Fig.~\ref{pv_diagrams} shows a later time, i.e., t$_{ff}-0.4\times 10^5$~yr, when the central shock is well developed. While the synthetic 30m PV diagram remains satisfactory, the synthetic PdBI PV diagram differs markedly from the observations.
Note, however, that this late phase of evolution may not be correctly described by our numerical model. Indeed, our simulations do not include feedback from protostars, which clearly influences the late dynamical evolution of cluster-forming clouds (cf. Li \& Nakamura 2006). Thus, it is not clear if, in reality, the central shock would have time to develop as much as in the simulations to produce a PV diagram such as the one shown at the bottom right of Fig.~\ref{pv_diagrams}. The time evolution of the $\alpha_{turb}$ ratio in the ``best-fit'' simulation is also given on the right-hand side of Fig.~\ref{pv_diagrams}. While the initial level of turbulence was only $\alpha_{turb}^0=5\%$ in this simulation, it can be seen that the ratio of nonthermal kinetic energy to gravitational energy quickly increases up to $\alpha_{turb}=27\%$ at the ``best-fit'' time step and $\alpha_{turb}=33\%$ at the last time step shown. This demonstrates that the bulk of the ``turbulent'' energy in our simulations does not come from the large-scale turbulent velocity field, but rather from the conversion of gravitational energy into kinetic energy through the global collapse of the clump. The increase of $\alpha_{turb}$ with time helps produce synthetic linewidths in reasonable agreement with observed linewidths (see \S~4.3 and Fig.~\ref{comp_spec} below), despite the low level of kinetic energy at the beginning of the simulation. We also note that the value of $\alpha_{turb}$ achieved at the ``best-fit'' time step (27\%) is within a factor of 2 of the kinetic to gravitational energy ratio expected in virial equilibrium (50\%), despite the fact that the model clump is globally collapsing and far from equilibrium at this stage. Clearly, the broad linewidths observed in NGC~2264-C result at least partly from systematic inward motions as opposed to random turbulence.
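The diagnostic $\alpha_{turb}$ used here, the ratio of nonthermal kinetic energy to gravitational energy, can be sketched as follows (our own minimal version: the centre-of-mass velocity is removed, and the total gravitational energy is assumed to be supplied by the SPH code):

```python
import numpy as np

def alpha_turb(mass, vel, e_grav):
    """Ratio of nonthermal (bulk) kinetic energy to |gravitational energy|
    for a set of particles; vel has shape (N, 3), e_grav is the (negative)
    total gravitational energy."""
    v_cm = (mass[:, None] * vel).sum(axis=0) / mass.sum()
    e_kin = 0.5 * (mass * ((vel - v_cm) ** 2).sum(axis=1)).sum()
    return e_kin / abs(e_grav)
```

Coherent infall contributes fully to the kinetic energy in this measure, which is why $\alpha_{turb}$ grows during the collapse even though the seeded turbulence was initially small.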
In our ``best-fit'' model, most of the motions are gravitationally focussed and do not exert any support against gravitational collapse. \subsection{Large-scale kinematical pattern in optically-thin line tracers} \begin{figure*}[t] \vspace{-0.0cm} \hspace{2cm} \includegraphics[height=11cm,angle=270]{5653fig8.eps} \vspace{0.5cm} \hspace{2cm} \includegraphics[height=11cm,angle=270]{5653fig9.eps} \vspace{-0cm} \caption{Comparison between the H$^{13}$CO$^{+}$(1-0) spectra observed at the 30m telescope (upper panel) and the synthetic optically thin spectra obtained in our best-fit simulation (lower panel). In the upper panel, the (0,0) position corresponds to C-MM3, while in the lower panel the (0,-10) position corresponds to SIM3. Overlaid in grey scale are the 1.2~mm dust continuum image of PAB06 (top) and the synthetic column density map of the best-fit simulation (bottom). The rows of spectra observed (top) and simulated (bottom) along the main axis of the clump are marked in boldface. \label{comp_spec}} \end{figure*} The H$^{13}$CO$^+$(1-0) spectra observed toward NGC~2264-C with the 30m telescope show a remarkable East-West axial symmetry over the whole clump on either side of the central source C-MM3 (see Fig.~\ref{comp_spec}). The low optical depth of the H$^{13}$CO$^+$(1-0) line ($\tau \sim 0.3$ for the peak velocity channel of the central spectrum) inferred from the Monte-Carlo radiative transfer calculations of PAB06 implies that the observed double-peaked line profiles (cf. Fig.~\ref{comp_spec}) result from the presence of two velocity components along the line of sight rather than from self-absorption.
Although our PdBI observations show that the central velocity discontinuity seen with the 30m telescope originates from two unresolved protostellar sources, the large spatial extent of the region over which double-peaked H$^{13}$CO$^+$(1-0) spectra are observed in the 30m map suggests a global kinematical origin for the double-peaked profiles. While the N$_2$H$^+$(101-012) PV diagram observed with PdBI (Fig.~\ref{pv_n2h+}) may be suggestive of rotation about an axis perpendicular to the long axis of the clump, PAB06 showed that rotation could not account for the shape of the observed 30m PV diagram based on a detailed comparison with radiative transfer models (see Fig.~12 of PAB06). By contrast, we now proceed to show that our scenario of large-scale, axial collapse does provide a good match to the symmetric pattern of double-peaked line profiles observed in low optical depth tracers. To this end, synthetic spectra were constructed from our SPH simulations assuming strictly optically thin line tracers: each SPH particle was given the same weight and the contributions of all particles falling within a given velocity channel were integrated. The synthetic data cube was then convolved to the 30m angular resolution and normalized in such a way that the peak intensity of the synthetic central spectrum matched the peak intensity of the observed central H$^{13}$CO$^{+}$(1-0) spectrum. Figure~\ref{comp_spec} compares the H$^{13}$CO$^{+}$(1-0) spectra observed in the central part of NGC~2264-C (top) with the resulting synthetic spectra for the best-fit simulation at the best-fit time step (bottom). (In this comparison, we use the observed H$^{13}$CO$^{+}$(1-0) spectra rather than the observed N$_2$H$^{+}$(1-0) spectra since the former have a better signal-to-noise ratio.) It can be seen that the overall agreement is very good.
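The construction of the strictly optically thin synthetic spectra described above reduces, for each beam, to a mass-weighted histogram of line-of-sight velocities. The sketch below (with arbitrary velocity channels and normalization) also illustrates why two converging velocity components naturally produce double-peaked profiles:

```python
import numpy as np

def thin_spectrum(v_los, mass, v_edges):
    """Optically thin spectrum: every SPH particle contributes its mass
    to the velocity channel containing its line-of-sight velocity."""
    spec, _ = np.histogram(v_los, bins=v_edges, weights=mass)
    return spec

# Two converging velocity components (the two ends of the clump moving
# toward each other) give a double-peaked line profile.
rng = np.random.default_rng(0)
v = np.concatenate([rng.normal(-1.0, 0.3, 5000),   # blueshifted end
                    rng.normal(+1.0, 0.3, 5000)])  # redshifted end
spec = thin_spectrum(v, np.ones(v.size), np.linspace(-3.0, 3.0, 61))
```

The resulting spectrum peaks near $\pm 1$ (in these arbitrary units) with a pronounced dip at the systemic velocity, qualitatively like the profiles in Fig.~\ref{comp_spec}.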
Since the synthetic line emission is optically thin, the double-peaked spectra exhibited by the model are clearly not due to radiative transfer effects but result from the presence of two velocity components along the line of sight, corresponding to the two ends of the elongated clump moving toward each other. Focusing on the central row of spectra (marked in boldface), it can be seen that the blue-shifted component of the double-peaked spectra dominates on the eastern side. Moving west, the red-shifted component becomes progressively stronger. It is nearly as intense as the blue-shifted component at the central position and eventually dominates on the western side of the filament. This remarkable reversal of blue/red spectral asymmetry as one moves from the eastern to the western side of the central C-MM3 position can be seen in both the observations and simulations. The synthetic spectra obtained from our SPH simulations are mass weighted and are thus more representative of the global kinematics of the clump than of the kinematics of compact individual fragments. We conclude that the remarkable pattern seen in the central row of spectra in Fig.~\ref{comp_spec} characterizes the collapse of the elongated clump along its long axis. We note, however, that the synthetic spectra are somewhat narrower than are the observed H$^{13}$CO$^{+}$(1-0) spectra. Part of the observed linewidths may result from outflowing gas generated by the protostars, an effect which we did not treat in our simulations. It may also be partly due to another source of support against gravity, not included in our simulations (see \S ~5). In the context of our interpretation of the double-peaked spectral pattern observed in H$^{13}$CO$^{+}$(1-0), the extent of the region over which the red-shifted peak is observed on the eastern side (and the blue-shifted peak observed on the western side) sets constraints on the diameter of the NGC~2264-C cylinder-like clump (see Fig.~11 of PAB06). 
Double-peaked H$^{13}$CO$^{+}$(1-0) spectra are observed up to 30\hbox{$^{\prime\prime}$} ~on either side of the central object C-MM3. Given the distance of 800~pc and assuming a viewing angle of 45 degrees between the line of sight and the long axis of the clump (as adopted in the radiative transfer model presented by PAB06), we estimate the diameter of the cylinder to be $\sim 0.65$~pc. This is in good agreement with the apparent width of the NGC~2264-C clump as measured in the plane of sky on our dust continuum and molecular line maps. \section{Concluding remarks} The good quantitative agreement obtained between our ``best-fit'' SPH simulations and our (30m and PdBI) millimeter observations confirms the physical plausibility of the scenario of large-scale axial collapse and fragmentation proposed by PAB06 for the NGC~2264-C clump. Observationally, such an axial collapse is traced by a central velocity discontinuity associated with double-peaked profiles in optically thin line tracers. The present study supports our earlier suggestion that an ultra-dense protostellar core of mass up to $\sim 90\, M_\odot $ is in the process of forming at the center of NGC~2264-C through the dynamical merging of lower-mass Class~0 cores (cf. PAB06). Our interferometric PdB detection of a new object, C-MM13, located only $\sim 10000$~AU away (in projection) from the central source, C-MM3, but with a line-of-sight velocity differing by $\sim 1.1$~km.s$^{-1}$ from that of C-MM3, provides an additional observational manifestation of the merging process. Given the relatively large mass of C-MM13 ($\sim 8\, M_\odot$), such a large velocity difference would be difficult to explain by dynamical fragmentation during the collapse of an individual protostellar core, even if low-mass objects can easily be ejected from dynamically unstable protostellar systems (e.g. Bate et al. 2003, Goodwin et al. 2004). 
In our proposed scenario for NGC~2264-C, the local collapse of individual protostellar cores is strongly influenced by the high dynamical pressure resulting from the global collapse of the clump, and proceeds in a manner that is qualitatively similar to the triggered protostellar collapse models discussed by Hennebelle et al. (2003, 2004). Our detailed comparison between observations and simulations has also allowed us to set constraints on the evolutionary state of the NGC~2264-C protocluster. It seems that the characteristic shape of the observed position-velocity diagrams survives for a relatively short period of time, i.e. $\le 1\times10^{5}$~yr, and occurs only very soon after the formation of the protocluster, when less than $\sim 1\%$ of the gas has been accreted onto sink particles. The low level of initial turbulent energy required to match the observations implies that NGC~2264-C is structured more by self-gravity than by turbulence. The main effect of turbulence is to create seeds for further gravitational fragmentation. Turbulent fragmentation does not appear to play a significant role in this clump. In our ``best-fit'' simulation, the initial turbulent to gravitational energy ratio is $\alpha_{turb}^0=5\%$, comparable to the ratio of thermal to gravitational energy $\alpha_{th}^0$. The level of ``turbulence'' increases as the simulation proceeds and gravitational energy is converted into kinetic energy. At the ``best-fit'' time step, the ratio of nonthermal kinetic energy to gravitational energy approaches $\sim 30\% $. Most of the corresponding ``turbulence'' is gravitationally generated, as in the recent cloud collapse simulations of Burkert \& Hartmann (2006). In other words, the cloud motions in our best-fit model are primarily due to collapse and gravitationally organized motions as opposed to purely random turbulence. Although we have not identified a specific trigger, we believe that the ``cold'' or ``subvirial'' initial conditions (cf.
Adams et al. 2006) required by our model reflect the fact that the NGC~2264-C clump was suddenly compressed and/or assembled as a result of a strong external perturbation. The fact that simulations starting from initial turbulent velocity fields with a Kolmogorov-like energy spectrum lead to model clumps that are much less organized than the observations should not be overinterpreted. Indeed, since the phases are chosen randomly and since in Kolmogorov-like turbulence most of the energy is on large scales, it is not surprising that the shape of the filament is strongly distorted in this case. In a real situation, the large-scale turbulent fluctuations should be much more coherent since they may be responsible, at least in part, for the formation of the filament in the first place (cf. Hartmann et al. 2001). Another point worth noting is that the total mass of gas with density above $10^{4}$~cm$^{-3}$ is $\sim 10$ times lower in our best-fit simulation than in the actual NGC~2264-C clump. Using higher densities by a factor of 10 in the numerical simulations would inevitably lead to fragmentation into a larger number of cores since the corresponding Jeans mass would be smaller by a factor $\sim 3$ compared to the Jeans mass in the present simulations. It seems therefore that some additional support against gravity, not included in the simulations presented here, plays a role in NGC~2264-C. This extra support could arise from protostellar feedback and/or magnetic fields. Finally, we speculate that the evolution inferred and simulated here for NGC~2264-C is not exceptional but representative of many massive cluster-forming clumps in the Galaxy. In particular, we note that evidence of large-scale, supersonic inward motions has been recently found in several deeply embedded regions of high-mass star formation (Motte et al. 2005 -- see also Wu \& Evans 2003 and Fuller et al. 2005). 
NGC~2264-C may just be caught at a particularly early stage of protocluster evolution and observed in a favorable configuration, leading to a remarkably simple kinematical pattern. Similar detailed modelling studies of other cluster-forming clumps will be needed to confirm this hypothesis. \begin{acknowledgements} We are grateful to the IRAM astronomers in Grenoble for their help with the Plateau de Bure interferometric observations. IRAM is supported by INSU/CNRS (France), MPG (Germany), and IGN (Spain). \end{acknowledgements}
\section{Introduction} The recognition of proper names constitutes one of the major problems for the wealth of tagging systems developed in the last few years. Most of these systems are statistically based and make use of statistical properties which are acquired from a large manually tagged training corpus. The formation of new proper names, especially personal names, is very productive, and it is not feasible to list them in a static lexicon. As Church \cite{Church1988} already discussed for English, it is difficult to decide whether a capitalized word is a proper name if it has a low frequency (\( < \) 20), and so they were removed from the lexicon. But because they are highly individual, this is the case for most proper names. Furthermore, the problem of proper name tagging for German is not restricted to the disambiguation of sentence--initial words, because proper names and generic terms (normal nouns) are capitalized both at the beginning and within a sentence. Church suggested labelling words as proper nouns if they are ``adjacent to'' other capitalized words. This also holds for German proper nouns, but it is difficult to decide which of the capitalized words belong to the proper name and which not, e.g. is it a first name (as in ``Helmut Kohl'') or is it an apposition (as in ``Bundeskanzler Kohl''), or is it a complex institutional name composed of several generic terms and a proper name (as in ``Vereinigte Staaten von Amerika''). In this procedure, I use Church's heuristic for the selection of proper name hypotheses, which are evaluated on the basis of their syntactic and textual context together with a morphological analysis. The starting point of the analysis is a small database of definite minimal contexts like titles (e.g. ``Prof.'', ``Dr.'') and forms of address (e.g. 
``Herr'', ``Frau''), which grows with the processing of texts in which proper names are identified, supplying new contexts which can be used to find new proper names and new contexts, and so on. This incremental method is applied to unrestricted texts of a small corpus (50,000 words) of German newspapers. \section{Proper Name Acquisition} From a psycholinguistic point of view it is possible that we memorize proper names better if we organize them in a hierarchy, in which each word would constitute a node whose subordinate nodes are its hyponyms \cite{Koss1990}. For example, in the semantic hierarchy in figure 1 we find SOCRATES as a hyponym of PHILOSOPHER and PHILOSOPHER as a hyponym of SCHOLAR, and each node may bear features describing properties of the node. \begin{figure}[htbp] \centerline{\psfig{figure=semnet.eps,width=7.5cm}} \captionsenglish{{\small Figure 1: SOCRATES in a semantic hierarchy}} \label{fig:semnet} \end{figure} One can observe that hyperonyms of names are used to identify or to introduce a proper name in texts. If knowledge of a name cannot be presupposed, then the name is often introduced by an appositional construction (1)--(2) \cite{Hackel1986} and can be used without additional information (3)--(4) \cite{Kalverkamper1978} later on. \begin{itemize} \item[(1)] der Vorsitzende des Verteidigungsausschusses, {\em Biehle} (CSU), hat Verteidigungsminister {\em W\"or\-ner} gebeten, ...\\ (the chair of the defence committee, {\em Biehle} (CSU), asked the Minister of Defence {\em W\"orner} to ...) \item[(2)] der SPD--Abgeordnete {\em Gerster} kritisierte, da{\ss} ...\\ (the SPD member of parliament {\em Gerster} criticized that ...) \item[(3)] In einem Fernschreiben an {\em W\"orner}, \"au{\ss}erte {\em Biehle} am Dienstag, ...\\ (in a telex to {\em W\"orner}, {\em Biehle} commented on Tuesday ...) 
\item[(4)] {\em Gerster} forderte eine Mindestflugh\"ohe von 300 Metern\\ ({\em Gerster} called for a minimal flying height of 300 metres) \end{itemize} The syntactic analysis (see section \ref{sec:entag}) operates on a small lexicon of definite minimal contexts of proper names (MC--lexicon) which are used in such appositional constructions and generates a lexicon of so--called potential minimal contexts (MCpot--lexicon). In addition there exist other methods \cite{Koss1987} for the acquisition of proper names, two of which can be directly observed in the texts. The first method (``Lernpsychologische Sinnverleihung'') tries to lend sense to the name in order to learn it, e.g. the name ``D\"usseldorf'' is given the meaning of `village'. Today it is a big city, but the compound part {\em -dorf} helps us to identify it as a proper name. The second method, the formation of name fields (``Namenfelder'') and name scenes (``Namenlandschaften''), helps us to recognize names describing places which belong to a certain district or scenery, e.g., cities in the Stuttgart area like ``T\"ubingen'', ``Reutlingen'', ``Esslingen'' have the common suffix {\em -ingen}. The morphological analysis (see section \ref{sec:entag}) operates with a list of so--called onomastic suffixes to identify place names. \section{Proper Name Tagging} \label{sec:entag} An overview of the tagging process is shown in figure 2. \begin{figure}[htbp] \centerline{\psfig{figure=entagfig.eps,width=7.5cm}} \captionsenglish{{\small Figure 2: proper name tagging}} \label{fig:entag} \end{figure} \subsection*{Preprocessing} The corpus has to be preprocessed first of all. This includes the tokenization of the corpus, in which all punctuation marks are separated from the words to allow the subsequent disambiguation of sentence--initial words. 
This disambiguation uses a heuristic derived from the one used in CLAWS \cite{Garside1987a}: if a sentence--initial word also occurs inside a sentence with a lower-case initial letter, then it is not a noun (normal noun or proper name) and is represented with lower-case letters. For this, I use a list of all words with a lower-case initial letter found in the corpus, which is stored in an AVL--tree \cite{Wirth1983} for efficient searching and inserting. After this, a first run through the corpus is done to identify definite proper names occurring in the contexts of the MC--lexicon. Apart from appositions as mentioned above, this lexicon contains speech--embedding (``redeeinbettende'') verbs like ``sagte'' and ``fragte'', which are frequently used in political newspaper texts, as in: \begin{itemize} \item[(5)] die Abgeordnete {\em Kelly} sagte, ...\\ (the member of parliament {\em Kelly} said, ...) \item[(6)] {\em Heinlein} f\"ugte hinzu, ...\\ ({\em Heinlein} added, ...) \item[(7)] so fragte {\em Apel}\\ ({\em Apel} asked) \end{itemize} The MC--lexicon also contains prepositions and preposition frames to identify place names, as in: \begin{itemize} \item[(8)] bei {\em Frankfurt}\\ (near {\em Frankfurt}) \item[(9)] aus {\em S\"ollingen} bei {\em Baden--Baden}\\ (from {\em S\"ollingen} near {\em Baden--Baden}) \item[(10)] im Raum {\em Landshut}\\ (in the {\em Landshut} area) \end{itemize} All proper names are stored in the PN--lexicon which is used during the entire processing. \subsection*{Syntactic and Morphological Analysis} \label{sec:syn} In the following analysis, the immediate syntactic and morphological context of all capitalized words is examined. If the capitalized word is already included in the PN--lexicon, then its immediately preceding context is stored as a potential minimal context in the MCpot--lexicon if it comprises one or more capitalized words. 
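The sentence--initial disambiguation heuristic used in preprocessing can be sketched as follows (a minimal Python illustration rather than the paper's C implementation; all function and variable names are hypothetical):

```python
def disambiguate_sentence_initial(tokens, lowercase_vocab):
    """Lowercase a sentence-initial capitalized word if the same word
    also occurs sentence-internally with a lower-case initial letter,
    i.e. it cannot be a noun (normal noun or proper name)."""
    result = list(tokens)
    if result and result[0][:1].isupper():
        if result[0].lower() in lowercase_vocab:
            result[0] = result[0].lower()
    return result

# lowercase_vocab collects all words seen with a lower-case initial
# letter anywhere in the corpus (the paper stores them in an AVL tree;
# a Python set gives the same membership test).
lowercase_vocab = {"der", "die", "das", "nach", "sagte"}
print(disambiguate_sentence_initial(["Nach", "der", "Sitzung"], lowercase_vocab))
# -> ['nach', 'der', 'Sitzung']
```

Words such as ``Kohl'' that never occur with a lower-case initial letter are left untouched and remain candidates for the proper-name analysis.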
Cases where the proper name is marked as genitive are not considered because this could lead to wrong MCs (e.g., {\em Aussage W\"orners, Besuch Lafontaines}). The collection of potential minimal contexts is also done in the hypotheses processing, which follows. For example, the proper name {\em W\"orner} supplies the MCs: {\em Bundesverteidigungsminister, Verteidigungsminister, Minister, Nato--Generalsekret\"ar}. For the recognition of place names, a suffix list is used containing onomastic suffixes like {\em --acker, --aich, --beuren, --hafen, --hausen, --stetten, --weiler} and a prefix list containing prefixes like {\em Mittel--, Ost--, West--, Zentral--}. In addition to this, the left word of two adjacent capitalized words is checked for the adjectival endings {\em --er, --aner}, as in: \begin{itemize} \item[(11)] Mainzer Landtag\\ (the state parliament of {\em Mainz}) \item[(12)] M\"unsteraner Parteitag\\ (the party conference of {\em M\"unster}) \end{itemize} If they also occur without this ending ({\em Mainz, M\"unster}), then these forms are proper nouns and are stored in the PN--lexicon. The adjectival forms in (11)--(12) are considered as adjectives (following \cite{Fleischer1989}, p. 265). \begin{sloppypar} Furthermore, loose appositional constructions (``lockere appositionelle Konstruktionen'', \cite{Hackel1986}) as in (13)--(14) are analyzed according to the patterns of noun phrases which occur before the proper name. \end{sloppypar} \begin{itemize} \item[(13)] der Staatssekret\"ar des Landesinnenministeriums, {\em Basten}, ...\\ (the under--secretary of the Department of the Interior, {\em Basten}, ...) \item[(14)] der Chef des Schweizer Wehrministeriums, Bundesrat {\em Koller}, ...\\ (the director of the Swiss Department of the Armed Forces, the minister of state {\em Koller}, ...) 
\end{itemize} During this run through the corpus, a second AVL--tree is constructed in which all capitalized words are stored together with some information that can be useful for the hypotheses processing. For each word (node) there is a counter for all occurrences of the word with an article and a list of all its immediately preceding words, if these are also capitalized or are prepositions (see table 1). \begin{table} \begin{tablefont} \begin{tabular}{|p{1.5cm}|p{4cm}|r|}\hline {\normalsize Node} & {\normalsize List} & {\normalsize Article}\\\hline\hline ADN & bei & 0\\ & Nachrichtenagentur & \\\hline Angaben & nach & 1\\ & Donnerstag & \\\hline Belgien & aus & 0\\ & in & \\\hline Baum & FDP-Politiker & 0\\ & FDP-Abgeordnete & \\\hline \end{tabular} \end{tablefont} \captionsenglish{{\small Table 1: contexts of capitalized words}} \end{table} \subsection*{Hypotheses Processing} In this section of the procedure, hypotheses are generated and evaluated. A hypothesis may consist of two adjacent capitalized words or a preposition with a capitalized word. These hypotheses are evaluated on the basis of all occurrences of the second word found in the corpus. A hypothesis of two capitalized words is rejected if \begin{enumerate} \item the left word is already in the PN--lexicon \item the right word is an inflected form which is not possible for proper names. \end{enumerate} All other hypotheses are analyzed in the following way. If the left word is a MCpot or a derived form of a MCpot, then the right word is a proper name. For example, ``Senatspr\"asident Spadolini'' is analyzed as the proper name ``Spadolini'' with the apposition ``Senatspr\"asident'', which is derived from the MCpot ``Pr\"asident''. The hypothesis is also accepted if the right word has a genitive ending and occurs without this ending in the corpus, because only proper names may occur in such constructions, as in (15). Normal nouns have to be accompanied by an article, as in (16). 
\begin{itemize} \item[(15)] die Strategie {\em Frankreichs}\\ (the strategy of {\em France}) \item[(16)] die Strategie des M\"orders\\ (the strategy of the murderer) \end{itemize} A hypothesis of a preposition and a capitalized word is rejected if the capitalized word \begin{enumerate} \item is a potential minimal context \item is followed by a genitive article \item is followed by a past participle. \end{enumerate} The latter two conditions exclude such constructions (``feste Syntagmen''), as in: \begin{itemize} \item[(17)] aus Anla{\ss} des\\ (on the occasion of) \item[(18)] in Kauf genommen\\ (accepted) \end{itemize} In addition, it is checked whether we have a construction like ``zu Olims Zeiten'', i.e., whether the capitalized word has a genitive ending and is followed by a capitalized word. For example, we found the following proper names: \begin{itemize} \item[(19)] in {\em Lafontaines} Worten\\ (in the words of {\em Lafontaine}) \item[(20)] in {\em Stoltenbergs} Bilanz\\ (in {\em Stoltenberg's} balance sheet) \item[(21)] gegen {\em Hitlers} Erm\"achtigungsgesetz\\ (against {\em Hitler's} Enabling Act) \end{itemize} All resulting hypotheses are evaluated by another procedure which takes into account the AVL--tree containing all capitalized words together with the distributional information described above. Because the corpus is very small and often there is only one occurrence of a word, this information is not very reliable and therefore error--prone. This could be improved by the application of the procedure to a very large corpus (several million words). At this point, it is only checked whether the right word occurs with an article (a clue for a normal noun) and whether it often occurs with other capitalized words or prepositions (a clue for a proper name). Proper names are normally not used with articles, with the exception of some -- mostly place names and institutional names -- which always occur with an article (e.g. 
``die T\"urkei'', ``die Vereinigten Staaten''). So, this method has to be used carefully. The processing of hypotheses is iterated until no more proper names can be found (pn\_new = 0), because new proper names supply new contexts and new contexts may supply new proper names. \subsection*{Tagging} In order to tag the proper names collected in the PN--lexicon, it is necessary to run through the corpus one last time. All words listed in the PN--lexicon are tagged as proper names.\\ The procedure of proper name tagging was implemented in C under UNIX. \section{Evaluation} The first half of the corpus was used to develop the procedure, the second half served for an evaluation. For the evaluation, all proper names in the second corpus half were manually tagged and (manually) compared to the result of the automatic tagging procedure applied to this corpus part, i.e., to a corpus of 25,000 words. Of the 1,300 proper name tokens, 461 were not recognized, and 30 text words were wrongly tagged as proper names. This corresponds to a recognition rate of about 65\% (counting errors not excluded). In order to provide background for this figure, some of the problems are discussed here in more detail. The preprocessing module could be improved by enlarging the MC--lexicon with a list of the most frequently used first names, for example. For the recognition of non--German proper names, non--German titles and forms of address could be added as well. The latter were also found in the corpus (e.g. {\em Captain Alan Stephenson, Lord Carrington}). At the moment, first names are collected in the MCpot--lexicon if they are used attributively to a surname already recognized. This is in contrast to the approaches of \cite{Fleischer1989} and others (\cite{Wimmer1973}, \cite{Kalverkamper1978}), who analyze first names and surnames as a unit. One reason for this is that only the surname can be inflected, as in (22). 
But as this also applies to titles, as in (23), the reason does not hold. \begin{itemize} \item[(22)] {\em Peter M\"uller\underline{s}} Auto\\ (the car of {\em Peter M\"uller}) \item[(23)] Minister {\em W\"orner\underline{s}} Rede\\ (the speech of minister {\em W\"orner}) \end{itemize} A better argument is that constructions of first name and surname cannot be expanded, e.g., as loose appositional constructions. The procedure of proper name tagging described here is not able to recognize multi--word proper names because only two adjacent capitalized words (apposition + proper name) are examined. Table 2 shows an excerpt of the unresolved hypotheses, among which are multi--word proper names consisting of first name and surname ({\em Albrecht M\"uller, Angelika Beer, Harry Ristock, Ruth Winkler, Josef Felder, Gabi Witt, Florian Gerster, Sepp Binder, Kurt Schumacher}), multi--word names consisting of normal nouns ({\em (das) Deutsche Rote Kreuz, Kleine Brogel, Ewige Lampe}), and some non--German proper names ({\em Alan Stephenson, (Canadian) Air Group, Central Enterprise, Frecce Tricolori, Standardisation Agreement, Acrobatic Full Scale}). 
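The generation of such two-word hypotheses from adjacent tokens can be sketched as follows (a minimal Python illustration, not the paper's C implementation; the function and lexicon names are hypothetical):

```python
def generate_hypotheses(tokens, prepositions, pn_lexicon):
    """Collect candidate pairs: two adjacent capitalized words
    (potential apposition + proper name), or a preposition followed
    by a capitalized word (potential place name). Pairs whose left
    word is already a known proper name are not hypothesized."""
    hypotheses = []
    for left, right in zip(tokens, tokens[1:]):
        if not right[:1].isupper():
            continue
        if left[:1].isupper() and left not in pn_lexicon:
            hypotheses.append((left, right))   # e.g. apposition + name
        elif left in prepositions:
            hypotheses.append((left, right))   # e.g. "bei Frankfurt"
    return hypotheses

tokens = ["Minister", "Woerner", "sprach", "bei", "Frankfurt"]
print(generate_hypotheses(tokens, {"bei", "aus", "im"}, set()))
# -> [('Minister', 'Woerner'), ('bei', 'Frankfurt')]
```

Each pair would then go through the rejection and evaluation steps described in the hypotheses-processing subsection; because only pairs are examined, longer multi-word names remain out of reach, as the unresolved hypotheses illustrate.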
\begin{table} \begin{tablefont} \begin{tabular}{|r|p{6.4cm}|}\hline {\normalsize Text} & {\normalsize Hypothesis}\\\hline\hline 1 & Militaerflughafen Rhein-Main\\ 2 & Dutzend Personenwagen\\ 2 & Captain Alan\\ 2 & Alan Stephenson\\ 6 & Mitte April\\ 7 & Metern Abstand\\ 11 & Fraktionskollege Albrecht\\ 11 & Albrecht Mueller\\ 12 & Kanadische Luftwaffendivision\\ 12 & Air Group\\ 12 & Hochleistungsflugzeug F-18\\ 13 & Central Enterprise\\ 13 & Central Enterprise\\ 14 & Central Enterprise\\ 22 & Frecce Tricolori\\ 22 & Deutsche Rote\\ 22 & Rote Kreuz\\ 22 & Dutzend Demonstranten\\ 22 & Autobahnzufahrt Frankfurt-Sued\\ 22 & Luftsportgruppe Breitscheid/Haiger\\ 23 & Kleine Brogel\\ 24 & Fraktionskollegin Angelika\\ 24 & Angelika Beer\\ 25 & Ende September\\ 27 & IG Metall\\ 27 & Harry Ristock\\ 27 & Lehrerin Ruth\\ 27 & Ruth Winkler\\ 28 & Regierung Kohl\\ 28 & Prozent Kandidatinnen\\ 30 & Leitende Oberstaatsanwalt\\ 30 & Oberstaatsanwalt Sattler\\ 32 & Frecce Tricolori\\ 34 & Geburtstag Bert\\ 34 & Josef Felder\\ 34 & Gabi Witt\\ 34 & Ewige Lampe\\ 34 & Museumsdorf Muehlendorf\\ 34 & Florian Gerster\\ 34 & Sepp Binder\\ 34 & Kurt Schumacher\\ 35 & Standardisation Agreement\\ 35 & Standardisation Agreement\\ 35 & Acrobatic Full\\ 35 & Full Scale\\ 36 & Frecce Tricolori\\ 36 & Frecce Tricolori\\ 36 & Frecce Tricolori\\ 36 & Demokratische Proletarier\\ 37 & IG Metall\\ 37 & IG Chemie\\ 37 & IG Bergbau\\ 39 & Kanzleramt Erwaegungen\\ 56 & Partei Ernst\\ 96 & Bundespartei Stellung\\\hline \end{tabular} \end{tablefont} \captionsenglish{{\small Table 2: unresolved hypotheses (excerpt)}} \end{table} The non--German proper names are often put in quotation marks, so this could be an additional criterion for the hypotheses evaluation, but cases in which quotation marks are used to emphasize or to cite one or more words must be excluded (24). 
\begin{itemize} \item[(24)] die FDP warnt vor ``Panikmache''\\ (the FDP warns of ``panic mongering'') \end{itemize} Multi--word proper names consisting of normal nouns, or mixed from normal nouns, adjectives, articles, prepositions, and proper names, constitute a major problem. Apart from the fact that adjectives and prepositions belonging to a proper name are capitalized, some of these proper names (25) behave like normal nouns, i.e., they inflect and take an article, but some do not (26)--(28). The latter are mostly used with an introductory apposition and are often put in quotation marks. For one thing, it is difficult to determine which constituents belong to the proper name and which do not, since the construction can be modified and reduced as well (e.g. {\em Vereinigte Staaten von Amerika, die Staaten, die Bundesrepublik, Deutschland}); with the distributional analysis described here, it is not possible to recognize them, and no easy solution is in sight. Secondly, it would be possible to recognize them if we knew the minimal context (here {\em Luftwaffenbasis, Gasthaus, Stra{\ss}e}), which may be acquired from a very large corpus, and if we considered more than one following word as well as existing quotation marks. \begin{itemize} \item[(25)] {\em die Vereinigten Staaten} und {\em die Bundesrepublik Deutschland}\\ ({\em the United States} and {\em the Federal Republic of Germany}) \item[(26)] auf der nordbelgischen Luftwaffenbasis {\em Kleine Brogel}\\ (at the North Belgian air force base {\em Kleine Brogel}) \item[(27)] ein Teil von ihnen geht [...] ins Gasthaus ``{\em Ewige Lampe}''\\ (some of them go to the inn ``{\em Ewige Lampe}'') \item[(28)] ich habe in der Stra{\ss}e ``{\em Am Mariahof}'' gewohnt\\ (I have lived in the street ``{\em Am Mariahof}'') \end{itemize} Some of the remaining hypotheses in Table 2 are noun pairs consisting of quantity terms and normal nouns (29)--(31) or constructions with month names (32). 
Quantity terms could be excluded by an exception list, and month names could be added to the PN--lexicon from the start. \begin{itemize} \item[(29)] ein Dutzend Personenwagen/Demonstranten\\ (a dozen automobiles/demonstrators) \item[(30)] mindestens vierzig Prozent Kandidatinnen\\ (at least 40 per cent female candidates) \item[(31)] nach Metern Abstand\\ (after a distance of some metres) \item[(32)] Mitte April/Ende September\\ (in the middle of April/at the end of September) \end{itemize} But some of the remaining hypotheses are the result of the relatively free German word order, often observed in sentences with support verb constructions (34: {\em Ernst machen mit} (to be serious about), 35: {\em Stellung beziehen gegen} (to take a stand against)). The hypothesis `Kanzleramt Erw\"agungen' in sentence (33) could be ruled out if the form `Erw\"agungen' were analyzed as an inflectional form that is not possible for a proper noun and therefore as a normal noun. This was not achieved by the morphological analysis\footnote{% The analysis is based on a very simple mechanism: inflectional endings which are not possible for proper names are removed from the word under consideration, and the remaining form is searched for in the corpus. If successful, the word cannot be a proper name and the hypothesis is rejected; if not, the hypothesis is kept. }, because there were no occurrences of `Erw\"agung' without a plural ending in the corpus. This could be improved by the use of a very large corpus or a powerful morphological analyzer (e.g. GERTWOL, \cite{Koskenniemi1994}). The support verb constructions could be excluded if we look for typical verbs used in such constructions ({\em machen, bringen, nehmen, ...}). \begin{itemize} \item[(33)] ... war bekanntgeworden, da{\ss} im Kanzleramt Erw\"agungen [...] stattf\"anden, wie ...\\ (... became known that the chancellorship takes into consideration ...) \item[(34)] ... wenn seine Partei Ernst macht mit ...\\ (... if his party gets serious about ...) 
\item[(35)] ... indem man [...] gegen die Bundespartei Stellung bezieht\\ (... while taking a stand against the federal party) \end{itemize} Most of the incorrectly tagged proper names are the result of the hypotheses processing, because the corpus is too small. For example, the evaluation of the hypothesis `ohne R\"ucksicht' (with no consideration) yields `R\"ucksicht' as a proper name, because it also occurs with the preposition `aus' (from), which is frequently used with place names, and never occurs with an article; but its frequency is only 4. This is not sufficient for a reliable conclusion, and it is hoped that a very large corpus would allow for a better analysis. \section{Conclusions and Future Perspectives} Most of the known statistically based tagging systems are confronted with the problem of proper name tagging. In German, the problem is not restricted to the disambiguation of sentence--initial words but also occurs with sentence--internal capitalized words. The procedure of proper name tagging described here makes use of a database of definite minimal contexts as a starting point for an analysis which takes into account both morphological and syntactic properties of proper names. Furthermore, this local analysis is supported by a global analysis regarding all occurrences of capitalized words in the corpus. This global analysis should be improved by a larger corpus than the one used, and by a more meaningful statistical measure, such as {\em mutual information} \cite{Church1990b}. However, the central idea of an incremental procedure for the collection of proper name contexts is encouraging. It is planned to include this proper name tagging in the German part-of-speech tagger {\sc Likely} \cite{Feldweg1993a} developed in T\"ubingen to disambiguate all the remaining cases where the tagger could not decide between proper name and normal noun.
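A pointwise mutual information score over (context, word) pairs, as suggested above, could be computed along the following lines (a minimal Python sketch with toy bigram counts; all names are hypothetical):

```python
import math
from collections import Counter

def pointwise_mi(pair_counts, word_counts, total):
    """PMI(x, y) = log( P(x, y) / (P(x) * P(y)) ); a high value for a
    (context, word) pair would support a proper-name hypothesis more
    robustly than the raw frequency checks used here."""
    pmi = {}
    for (x, y), n_xy in pair_counts.items():
        p_xy = n_xy / total
        p_x = word_counts[x] / total
        p_y = word_counts[y] / total
        pmi[(x, y)] = math.log(p_xy / (p_x * p_y))
    return pmi

# Toy corpus: bigram and unigram counts over a token sequence.
words = ["Minister", "Woerner", "sagte", "Minister", "Woerner"]
pairs = Counter(zip(words, words[1:]))
scores = pointwise_mi(pairs, Counter(words), len(words))
```

On a corpus of several million words, such scores would separate stable context--name collocations from accidental co-occurrences far better than the single-occurrence evidence available in a 50,000-word corpus.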
\section{Introduction} Studying the formation and evolution of potentially habitable Earth-like planets requires a good knowledge of the environment close to the habitable zone, and thus of the \emph{exozodiacal} dust residing in this region (similar to our zodiacal dust). The presence of exozodiacal dust around other stars may represent a major obstacle for future terrestrial planet-finding missions \citep{defrere10, defrere12, roberge12, stark14b}. Indeed, exozodiacal dust disks (``exozodis'') not only add a significant amount of photon noise to the observations, but may also result in confusion, where the structures of the exozodis mimic the expected signal of an Earth-like planet as seen by future coronagraphic or interferometric space-based observatories. Usually, when referring to exozodiacal dust, one considers primarily the dust in the habitable zone \citep[e.g.][]{defrere10, roberge12, stark08}. However, in our Solar system, zodiacal dust is much more extended than the habitable zone, and actually shows an increasing density down to the F-corona, with a possible dust-free zone within 0.1--0.2~au from the Sun \citep[e.g.][]{Dikarev2015,Howard2019}. Likewise, it is expected that exozodiacal dust can extend over a broad range of separations from its host star, much larger than just the habitable zone. The capability of near-infrared interferometry to probe the presence of hot dust in the innermost regions around nearby stars was first demonstrated by \citet{ciardi01} and by \citet{absil06}. The study of \citet{absil06} was then followed by a series of papers, which have extended the search to about 150 nearby stars, mostly using the CHARA/FLUOR and VLTI/PIONIER instruments \citep{absil06,difolco07,Absil08,Absil13,ertel14,nunez17}. These studies have shown that near-infrared excesses can be resolved around about 10\% to 30\% of nearby main sequence stars depending on the observing wavelength. 
\citet{ertel16} have also demonstrated the repeatability of the detections, showing that near-infrared excesses are neither spurious nor caused by poorly understood instrumental or astrophysical errors. Our current understanding is that near-infrared excesses around main sequence stars are related to the thermal emission from hot dust grains close to their sublimation temperature ($\sim$1500~K for silicate dust grains). The contribution of scattered light cannot be excluded in some cases \citep{difolco07,mennesson11,defrere12,ertel14}, although recent polarimetric, interferometric, and theoretical studies argue against scattered light as a prominent contributor to the detected excesses \citep{kennedy15,kennedy2015,marshall16,kirchschlager17,kirchschlager20}. These previous studies have highlighted a tentative correlation between spectral type and near-infrared excess detection rate, but could not formally identify any correlation between the presence of hot dust and that of cold, distant dust reservoirs detected by far-infrared and submillimeter photometry. The factors influencing the presence of hot exozodiacal dust around nearby main sequence stars are therefore still unclear, which calls for more observational constraints. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{hot-warm-cold3.pdf} \caption{Illustration of the typical orbital distances and temperatures for the hot, warm, and cold dust belts considered in this work, and of the corresponding wavelength ranges in terms of spectral energy distribution. Adapted from \citet{kirchschlager17}.} \label{fig:hotwarmcold} \end{figure} Here, we study the possible correlation of the hot dust phenomenon with the presence of warm asteroid belts around nearby main sequence stars. 
We define warm dust as dust producing a detectable excess in the mid-infrared but not in the near-infrared (typical temperatures in the range 100--500~K), while hot dust is defined as dust producing an excess in the near-infrared (see Fig.~\ref{fig:hotwarmcold}). Our main goal is to determine whether the presence of hot exozodiacal dust could be directly related to the presence of a large reservoir of planetesimals in an asteroid belt, in an attempt to improve our understanding of the origin, architecture, and evolution of bright exozodiacal dust disks, as well as of the factors influencing their detection rate. To this end, we build a sample of stars known to have a mid-infrared excess attributed to debris disks based on infrared space missions such as Spitzer and WISE (Sect.~\ref{sec:stelsamp}). After detailing the PIONIER observations and data reduction in Sect.~\ref{sec:obsandred}, we present in Sect.~\ref{sec:comp} the search for unknown companions in this sample -- a necessary step to remove possible contamination from it. In Sect.~\ref{sec:exozodi}, we present the search for hot exozodis in this sample, detailing the search method and the results. Finally, in Sect.~\ref{sec:discussion}, we discuss the connection between hot and warm dust. We also challenge the standard hypothesis of fully resolved exozodis in interferometric observations, and explore the consequences of partly resolved disks on the measured detection rates. \section{Stellar sample} \label{sec:stelsamp} \begin{figure*}[!t] \centering \includegraphics[width=1\textwidth]{Us-Yel_comp.png} \caption{Histograms of excess significance for PACS70 (left) and PACS100 (right) measurements. The purple dotted curve shows the noise distribution derived by \citet{yelverton19} for their PACS70 (left) and PACS100 (right) data sets, while the red dashed curve shows the same noise distribution derived from our PACS100 data set---our PACS70 data set was not large enough to robustly fit the noise distribution. 
The values noted as $\mu_{\rm Fit}$ and $\sigma_{\rm Fit}$ are respectively the mean and standard deviation of the significance in our PACS100 distribution, while $\mu_{\rm Yel}$ and $\sigma_{\rm Yel}$ are the ones from \citet{yelverton19}.} \label{fig:PacsYelComparison} \end{figure*} Searching for correlations between hot and warm dust populations first requires building a sufficiently large sample of nearby stars hosting warm dust. Three main space-based missions have been used to search for warm dust around nearby stars: Spitzer, AKARI, and WISE. We searched the literature for warm excesses around nearby stars, focusing mostly on these three missions \citep{carpenter09, chen06, hillenbrand08, ishihara10, morales12, ballering13, fujiwara13, chen14, vican14, patel14}. To identify warm dust, these missions rely on spectrophotometric observations at wavelengths shorter than 25~$\mu$m. Showing a mid-infrared excess is, however, not a sufficient condition to infer the presence of warm dust, as excesses in this wavelength range can sometimes correspond to the short-wavelength end of bright but cold circumstellar emission. We originally built our sample based on the dust temperature estimated in the literature, using a threshold of 130~K\footnote{This temperature threshold of 130~K was only used to select our target stars, based on the available literature in 2013. We will discuss later that a temperature of 100~K was finally chosen to distinguish between warm and cold dust populations. A significant fraction of the selected targets actually turned out not to show the presence of any warm dust after re-evaluation of their mid- to far-infrared excess, as described in Sect.~\ref{sub:warmcold}.} as a criterion to distinguish warm from cold populations, following \citet{ballering13}. In several cases, the warm excesses could only be detected at a single wavelength, making an accurate temperature determination impossible. 
In these cases, the authors generally quote the highest possible temperature compatible with their data set. Lacking more precise information, we decided to use these upper limits as a criterion to select stars with possible warm dust populations, where applicable. While our previous near-infrared interferometric surveys were targeting stars brighter than $H=5$, here, to build a sufficiently large sample, we include stars up to $H=7$, which remains comfortably within the magnitude limit of VLTI/PIONIER. Stars with visual companions within the interferometric field of view of PIONIER on the VLTI Auxiliary Telescopes ($\sim$400~mas full width at half maximum in H band) are not appropriate for detecting weak, extended circumstellar emission. Even light from companions outside the field of view may enter the optical path in case of bad seeing. Thus, as in \citet{ertel14}, all known binary systems with angular separation $<5\arcsec$ are removed from our sample. We identified a total of 62 stars meeting our criteria, which had not yet been observed with precision near-infrared interferometry. The main properties of these 62 targets to be observed with PIONIER are summarized in Table~\ref{tab:allsurv}. We collected PIONIER data of sufficient quality for only 52 of them, as described in Sect.~\ref{sec:obsandred}. Furthermore, four of these 52 stars turned out to be binaries, based on our PIONIER observations (see Sect.~\ref{sec:comp}). These binary stars are not amenable to a search for exozodiacal dust, and are therefore removed from our sample, so that we are left with 48 new stars to study the correlation between warm and hot dust. \longtab{ \begin{longtable}{cccccccc} \caption{Main properties of the 62 newly observed stars.} \label{tab:allsurv}\\ \hline\hline Star & Type & Dist. & $V$ & $H$ & $\theta_{LD}$ & Age & References\\ & & (pc) & (mag) & (mag) & (mas) & (Gyr) & \\ \hline \endfirsthead \caption{continued.}\\ \hline\hline Star & Type & Dist. 
& $V$ & $H$ & $\theta_{LD}$ & Age & References\\ & & (pc) & (mag) & (mag) & (mas) & (Gyr) & \\ \hline \endhead \hline \endfoot \object{HD 203} & F2IV & 39.0 & $6.181^{0.003}$ & $5.32^{0.05}$ & $0.347^{0.005}$& 0.021 & 1, 2, \textbf{3}, 31, 32\\ \object{HD 2834} & A0V & 53.0 & $4.751^{0.008}$ & $4.76^{0.07}$ & $0.381^{0.006}$ & 0.22 & \textbf{1}, 2, 3, 4\\ \object{HD 3126} & F5V & 41.0 & $6.907^{0.009}$ & $5.85^{0.05}$ & $0.284^{0.004}$& 1.59 & \textbf{1}, 2, 3, 4\\ \object{HD 4113} & G5V & 44.0 & $7.889^{0.009}$ & $6.34^{0.02}$ & $0.240^{0.003}$ & 5.8 & 5, \textbf{7} \\ \object{HD 4247} & F0V & 27.0 & $5.218^{0.003}$ & $4.46^{0.01}$ & $0.513^{0.006}$ & 1.7 & 6 \\ \object{HD 9672} & A1V & 59.0 & $5.611^{0.004}$ & $5.53^{0.02}$ & $0.273^{0.004}$ & 0.1 & \textbf{1}, 3, 4, 8 \\ \object{HD 10008} & K0/1V & 24.0 & $7.66^{0.01}$ & $5.90^{0.04}$ & $0.324^{0.005}$ & 4.2 & 1, \textbf{3}, 9\\ \object{HD 10269} & F5V & 48.0 & $7.078^{0.004}$ & $5.90^{0.04}$ & $0.252^{0.003}$ & 1.6 & \textbf{1}, 4\\ \object{HD 10939} & A1V & 62.0 & $5.033^{0.003}$ & $5.03^{0.02}$ & $0.339^{0.006}$ & 0.2 & \textbf{1}, 2, 3\\ \object{HD 15427} & A2/3V & 47.0 & $5.124^{0.002}$ & $5.03^{0.02}$ & $0.349^{0.005}$ & 0.24 & \textbf{1}, 2\\ \object{HD 17848} & A2V & 50.5 & $5.252^{0.004}$ & $5.16^{0.08}$ & $0.350^{0.005}$ & 0.28 & 1, 2, \textbf{3}, 4, 8\\ \object{HD 23484} & K1V & 16.0 & $6.982^{0.004}$ & $5.09^{0.02}$ & $0.484^{0.006}$ & 6.9 & 2, 3, 8, 10, 11\\ \object{HD 24649} & F6V & 41.0 & $7.217^{0.007}$ & $6.09^{0.03}$ & $0.261^{0.004}$ & 4.8 & \textbf{1}, 6 \\ \object{HD 28287} & K0V & 38.0 & $8.77^{0.01}$ & $6.87^{0.04}$ & $0.210^{0.003}$ & 0.1 & \textbf{1}, 3, 7 \\ \object{HD 29137} & G5V & 52.0 & $7.663^{0.008}$ & $6.16^{0.03}$ & $0.258^{0.004}$ & 6.29 & \textbf{1}, 12 \\ \object{HD 31203} & F0IV & 37.1 & $5.606^{0.004}$ & $4.88^{0.02}$ & $0.414^{0.006}$ & 0.70 & 4, 13 \\ \object{HD 31392} & G9V & 26.0 & $7.600^{0.008}$ & $5.89^{0.04}$ & $0.317^{0.005}$ & 3.70 & 3, \textbf{8} \\ \object{HD 
36187} & A0V & 87.8 & $5.557^{0.003}$ & $5.51^{0.02}$ & $0.264^{0.004}$ & 0.25 & 4, \textbf{14}\\ \object{HD 37306} & A2V & 63.0 & $6.087^{0.004}$ & $5.992^{0.02}$ & $0.215^{0.003}$ & 0.16 & 1, \textbf{2}, 3, 4, 8 \\ \object{HD 37484} & F3V & 57.0 & $7.249^{0.008}$ & $6.29^{0.02}$ & $0.217^{0.003}$ & 0.7 & \textbf{1}, 2, 3, 6, 8 \\ \object{HD 38949} & G1V & 43.3 & $7.808^{0.008}$ & $6.48^{0.04}$ & $0.215^{0.003}$ & 0.9 & 2, 3, \textbf{8} \\ \object{HD 41278} & F5V & 56.0 & $7.394^{0.008}$ & $6.36^{0.03}$ & $0.220^{0.003}$ & 2.3 & \textbf{1}, 6 \\ \object{HD 43879} & F5V & 64.1 & $7.494^{0.008}$ & $6.46^{0.04}$ & $0.216^{0.003}$ & 2.0 & 4, 15\\ \object{HD 44524} & F3V & 102.3 & $7.012^{0.01}$ & $6.46^{0.03}$ & $0.193^{0.003}$ & 1.6 & 6, \textbf{14}, 15 \\ \object{HD 59967} & G3V & 21.8 & $6.657^{0.004}$ & $5.25^{0.02}$ & $0.412^{0.006}$ & 0.63 & \textbf{1}, 2, 3, 16, 17 \\ \object{HD 60491} & K2V & 25.0 & $8.15^{0.01}$ & $6.14^{0.02}$ & $0.298^{0.005}$ & 0.08 & \textbf{1}, 3, 18 \\ \object{HD 61005} & G3/5V & 35.3 & $8.215^{0.008}$ & $6.58^{0.04}$ & $0.228^{0.004}$ & 0.1 & \textbf{1}, 2, 3, 19 \\ \object{HD 71722} & A0V & 71.7 & $6.05^{0.004}$ & $5.91^{0.02}$ & $0.225^{0.003}$ & 0.4 & \textbf{1}, 2, 3, 4, 8, 20 \\ \object{HD 76143} & F5IV & 52.0 & $5.328^{0.003}$ & $4.42^{0.02}$ & $0.536^{0.009}$ & 2.2 & \textbf{1}, 3, 4 \\ \object{HD 80133} & K1/2V & 68.5 & $7.76^{0.01}$ & $5.90^{0.03}$ & $0.337^{0.005}$ & 0.4 & \textbf{7}, 11 \\ \object{HD 80883} & K0V & 36.2 & $8.59^{0.01}$ & $6.63^{0.05}$ & $0.242^{0.004}$ & 6.3 & \textbf{ 7}, 21 \\ \object{HD 89886} & F7V & 167.0 & $7.44^{0.01}$ & $6.09^{0.05}$ & $0.273^{0.004}$ & 1.6 & 6, \textbf{7} \\ \object{HD 90781} & F3V & 77.0 & $7.448^{0.008}$ & $6.51^{0.03}$ & $0.198^{0.003}$ & 1.2 & 4, \textbf{14}, 15 \\ \object{HD 90874} & A2V & 68.0 & $5.991^{0.004}$ & $5.86^{0.04}$ & $0.237^{0.003}$ & 0.25 & \textbf{1}, 2, 3, 4, 8 \\ \object{HD 92945} & K1V & 21.4 & $7.708^{0.007}$ & $5.77^{0.05}$ & $0.347^{0.005}$ & 0.21 & 7, 8 \\ 
\object{HD 93453} & A4IV & 72.0 & $6.288^{0.004}$ & $5.91^{0.03}$ & $0.244^{0.003}$ & 0.4 & \textbf{1}, 4 \\ \object{HD 105850} & A1V & 56.1 & $5.447^{0.003}$ & $5.35^{0.04}$ & $0.290^{0.004}$ & 0.2 & \textbf{1}, 2, 3, 4, 8, 22 \\ \object{HD 105912} & F2/3V & 50.0 & $6.940^{0.007}$ & $5.96^{0.06}$ & $0.258^{0.004}$ & 2.7 & 1, 2, 3, \textbf{8}, 23\\ \object{HD 106906} & F5V & 59.0 & $7.798^{0.008}$ & $6.76^{0.04}$ & $0.184^{0.003}$ & 0.015 & 1, 2, 3\\ \object{HD 109573} & A0V & 67.1 & $5.777^{0.004}$ & $5.79^{0.04}$ & $0.231^{0.003}$ & 0.01 & 2, 3, 4, \textbf{8}, 22\\ \object{HD 109704} & A3V & 68.8 & $5.869^{0.003}$ & $5.77^{0.05}$ & $0.245^{0.003}$ & 0.4 & 1, 2, 3, 4, \textbf{8}, 22 \\ \object{HD 112603} & F2V & 61.0 & $6.952^{0.004}$ & $6.14^{0.05}$ & $0.232^{0.003}$ & 1.5 & \textbf{1}, 6 \\ \object{HD 117716} & A0/1V & 72.0 & $5.690^{0.004}$ & $5.67^{0.03}$ & $0.255^{0.003}$ & 0.3 & 1, 2, 3, \textbf{8}\\ \object{HD 118972} & K1V & 15.6 & $6.918^{0.004}$ & $5.14^{0.05}$ & $0.480^{0.006}$ & 0.3 & \textbf{1}, 2, 3, 7, 8, 11, 24 \\ \object{HD 136544} & F6V & 74.0 & $7.43^{0.01}$ & $6.35^{0.03}$ & $0.221^{0.004}$ & 2.0 & \textbf{1}, 9 \\ \object{HD 141378} & A5IV & 54.0 & $5.522^{0.003}$ & $5.27^{0.03}$ & $0.306^{0.004}$ & 0.3 & 1, 2, \textbf{3}, 8, 25 \\ \object{HD 141943} & G0/2V & 67.0 & $7.85^{0.01}$ & $6.41^{0.03}$ & $0.231^{0.004}$ & 0.03 & 3, \textbf{8}, 33 \\ \object{HD 142139} & A3V & 66.0 & $5.747^{0.003}$ & $5.66^{0.05}$ & $0.261^{0.004}$ & 0.2 & \textbf{1}, 2, 3, 4, 8 \\ \object{HD 161612} & G6/8V & 26.9 & $7.18^{0.01}$ & $5.6^{0.1}$ & $0.344^{0.005}$ & 0.8 & \textbf{7}, 11, 26 \\ \object{HD 174474} & A2V & 82.0 & $6.169^{0.004}$ & $5.89^{0.04}$ & $0.236^{0.004}$ & 0.6 & \textbf{3}, 4, 14, 15\\ \object{HD 175073} & K1V & 24.0 & $7.96^{0.01}$ & $5.95^{0.03}$ & $0.324^{0.005}$ & 4.1 & \textbf{1}, 27 \\ \object{HD 178606} & F5V & 53.0 & $6.520^{0.007}$ & $5.49^{0.02}$ & $0.323^{0.004}$ & 1.7 & \textbf{1}, 23 \\ \object{HD 179520} & F3V & 62.0 & 
$7.092^{0.007}$ & $6.24^{0.02}$ & $0.232^{0.003}$ & 0.6 & \textbf{1}, 3, 23 \\ \object{HD 181327} & F5/6V & 52.0 & $7.035^{0.008}$ & $5.98^{0.04}$ & $0.263^{0.004}$ & 0.021 & 1, 2, 3, 28, 31, 32 \\ \object{HD 184932} & F2V & 65.0 & $8.03^{0.01}$ & $6.95^{0.02}$ & $0.166^{0.003}$ & 2.1 & \textbf{1}, 4 \\ \object{HD 185615} & G6IV & 43.5 & $8.11^{0.01}$ & $6.54^{0.03}$ & $0.286^{0.004}$ & 9.2 &\textbf{7}, 9, 15 \\ \object{HD 191089} & F5V & 52.0 & $7.178^{0.007}$ & $6.09^{0.03}$ & $0.243^{0.004}$ & 0.021 & 1, 3, 8, 22, 31, 32 \\ \object{HD 192758} & F0V & 62.0 & $7.013^{0.008}$ & $6.30^{0.04}$ & $0.217^{0.004}$ & 0.04 & 3, \textbf{8}, 29\\ \object{HD 196141} & G3V & 37.0 & $8.09^{0.01}$ & $6.58^{0.03}$ & $0.213^{0.004}$ & 0.4 & \textbf{7} \\ \object{HD 205674} & F3/5IV & 52.0 & $7.178^{0.007}$ & $6.25^{0.03}$ & $0.228^{0.003}$ & 2.2 & \textbf{1}, 3, 4, 8 \\ \object{HD 220476} & G5V & 30.0 & $7.611^{0.009}$ & $6.11^{0.04}$ & $0.276^{0.004}$ & 0.4 & \textbf{7} \\ \object{HD 224228} & K3V & 22.0 & $8.237^{0.009}$ & $6.01^{0.03}$ & $0.325^{0.005}$ & 0.1-0.2 & \textbf{1}, 30 \\ \end{longtable} \tablefoot{$1\sigma$ error bars are given as superscripts. V and H magnitudes are from \citet{kharchenko09}. Limb-darkened stellar diameters ($\theta_{\rm LD}$) are computed from surface-brightness relationships based on the V and K magnitudes, following \citet{Kervella04}. 
References include previous searches for warm and cold dust around the target stars, with the reference in bold highlighting the origin of the warm dust classification that led to their addition to our sample, where applicable.} \tablebib{ (1) \citet{patel14}; (2) \citet{ballering13}; (3) \citet{cotten16}; (4) \citet{david15}; (5) \citet{bonfanti16}; (6) \citet{holmberg09}; (7) \citet{vican14}; (8) \citet{chen14}; (9) \citet{pace13}; (10) \citet{eiroa13}; (11) \citet{valenti05}; (12) \citet{delgado14}; (13) \citet{huensch98}; (14) \citet{wu13}; (15) \citet{mcdonald12}; (16) \citet{durkan16}; (17) \citet{tuccimaia16}; (18) \citet{maldonado12}; (19) \citet{desidera15}; (20) \citet{pawellek14}; (21) \citet{delgado15}; (22) \citet{mittal15}; (23) \citet{feltzing01}; (24) \citet{mamajek08}; (25) \citet{DeRosa14}; (26) \citet{tsantaki13}; (27) \citet{casagrande11}; (28) \citet{stark14}; (29) \citet{wahhaj13}; (30) \citet{maire14}; (31) \citet{zuckerman04}; (32) \citet{binks14}; (33) \citet{chauvin10} }} To assess a possible correlation between the presence of hot and warm dust, we also need to build a control sample. Our control sample is based on the VLTI/PIONIER survey for hot exozodiacal dust carried out by \citet{ertel14}. The reader is referred to that paper for a detailed description of the stellar parameters of this sample. Among the 85 single and non-evolved stars included in that sample, we expected, based on the literature, that the large majority do not host any warm dust population, given the absence of mid-infrared excesses. We noted, however, that the warm vs.\ cold dust classification derived from the dust temperatures described in the literature was inconsistent, because of the various assumptions made in the publications of these mid-infrared surveys. We therefore decided to re-assess the presence of warm dust around the 133 stars of the combined \citet{ertel14} sample (85 stars) and our new sample (48 stars).
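The limb-darkened diameters listed in Table~\ref{tab:allsurv} follow from the $V$- and $K$-band surface-brightness relation of \citet{Kervella04}. A minimal Python sketch of this computation is given below; the coefficients are those we quote from that paper for dwarfs and subgiants, and the $K$ magnitude used in the example is an approximate, illustrative value.

```python
def theta_ld_mas(v_mag, k_mag):
    """Limb-darkened angular diameter (mas) from the V- and K-band
    surface-brightness relation of Kervella et al. (2004) for dwarfs
    and subgiants: log10(theta_LD) = 0.0755 (V - K) + 0.5170 - 0.2 K.
    Coefficients quoted from that paper; treat as illustrative."""
    return 10.0 ** (0.0755 * (v_mag - k_mag) + 0.5170 - 0.2 * k_mag)

# HD 23484: V = 6.982 and an approximate K ~ 5.0 give ~0.46 mas,
# close to the 0.484 mas listed in Table 1.
theta = theta_ld_mas(6.982, 5.0)
```

The relation is purely photometric, so the quoted diameter uncertainties are dominated by the scatter of the calibration rather than by the input magnitudes.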
\subsection{Reassessing the presence of warm and cold dust} \label{sub:warmcold} \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{Chi_hist_crop.png} \caption{Histograms of excess significance for WISE12 (top left), WISE22 (top right), MIPS24 (bottom left), and MIPS70 (bottom right), together with their best-fit Gaussian noise distribution (red dashed curve). The mean ($\mu$) and standard deviation ($\sigma$) of the noise distributions are also plotted respectively as red vertical and horizontal lines. Significance values lower than $-4$ and higher than $8$ are not displayed for the sake of clarity.} \label{fig:ChiHist} \end{figure*} In order to reassess the presence of warm and/or cold dust around our combined sample of 133 stars, we collected photometry at all available optical, mid-, and far-infrared wavelengths for all targets. In most cases these are available in literature catalogues, for example optical $UBV$ and $uvby$ data \citep{1987A&AS...71..413M,2015A&A...580A..23P}, and 2MASS and WISE IR data \citep{2003tmc..book.....C,2010AJ....140.1868W}. However, far-IR photometry for our targets is either unpublished or spread across many different analyses. To maximize consistency, Spitzer/MIPS photometry at 24 and 70~$\mu$m uses our own updated PSF fitting for all targets, as described in \citet{2014ApJ...785...33S} and \citet{yelverton19}, except for four bright targets, for which we use 70~$\mu$m photometry from \citet{chen14}. Herschel/PACS photometry at 70, 100, and 160~$\mu$m, and SPIRE photometry at 250, 350, and 500~$\mu$m, also uses our PSF fitting, as described in \citet{2018MNRAS.475.3046S}. We also use Spitzer/IRS spectra from the CASSIS archive \citep{2011ApJS..196....8L}, where available. The data for each star are initially fit with a star + disk model using the \texttt{sdf} code, as described by \citet{yelverton19}.
The star is a BT-Settl photosphere model \citep{2012RSPTA.370.2765A} and the disk is a modified blackbody (a Planck function that is multiplied by $(\lambda/\lambda_0)^{-\beta}$ at wavelengths longer than $\lambda_0$), and the fitting is done using the \texttt{multinest} code \citep{2009MNRAS.398.1601F}. The fitting serves two purposes, firstly to provide an estimate of the stellar flux at all wavelengths to allow the presence of any IR excess to be quantified, and secondly to estimate the temperature and luminosity of any disk if the excess is deemed significant as described below (if no excess is present the disk component has negligible flux by definition, and is not used). In some cases the single-component disk provides a poor fit to the IR excess, in which case a second disk component is added. Whether a second component is needed is somewhat subjective, since the true dust spectrum is unknown and might mimic a two-component disk \citep{kennedy14}. Our assessment primarily considers whether two disk components are needed to fit all photometry and the IRS spectrum, but also considers whether the dust temperatures of a two-component fit are sufficiently different \citep{kennedy14}. To assess whether IR excesses are significant, we use the empirical method used by many previous studies \citep[e.g.][]{su06,ertel14,yelverton20}, and which we also use below for the PIONIER observations. Essentially, we assume that most stars do not have a significant excess, and therefore that the distribution of excess significance in a given band $i$
\begin{equation}
\chi_i = \frac{F_{\mathrm{obs}}- F_\star}{\sqrt{\sigma_{\mathrm{obs}}^2+\sigma_\star^2}}
\end{equation}
should be approximately Gaussian with zero mean and unity standard deviation. Any positive outliers with $\chi_i>3$ would therefore be considered to have a significant excess.
If the standard deviation is greater or smaller than unity, as is commonly the case, the uncertainties for that band are considered to be under- or overestimated respectively, and the threshold for an excess is adjusted accordingly. For example, \citet{yelverton20} found the standard deviation for PACS 100~$\mu$m to be 1.68, and therefore set the threshold for an excess to be $3 \times 1.68$. In general the IRS spectra were used as important input for deriving disk temperatures, but in a few cases (HD~40307, HD~90874, HD~91324) were also used to confirm an excess that was not significant photometrically (due to low signal-to-noise ratio and/or poor wavelength coverage). We decided to classify as dusty any stellar system whose excess significance was beyond four standard deviations from the average ($\chi>\mu+4\sigma$) in at least one band. To keep the average excess significance around zero in our noise distributions, the Gaussian fitting of the noise distribution was done excluding all stars with $\chi>3$. In the case of PACS70, there were too few data points to reliably fit a Gaussian distribution. We therefore explored the possibility of using the noise distributions from \citet{yelverton19} for that filter. In Fig.~\ref{fig:PacsYelComparison}, we compare our Gaussian fit to the PACS100 significance with those obtained by \citet{yelverton19} for their PACS70 and PACS100 data sets. The difference, in terms of outliers, was found to be marginal: only the excesses of two additional stars (namely HD~7570 for PACS100, and HD~174474 for PACS70) were significant with the use of the \citet{yelverton19} parameters but not with ours. We therefore decided to adopt the Gaussian noise distributions fitted by \citet{yelverton19} for our PACS70 and PACS100 data sets. The $\chi$ histograms for the remaining four WISE and MIPS filters are plotted in Fig.~\ref{fig:ChiHist} with their respective noise distributions.
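As an illustration, the excess-significance test and the calibration of the detection threshold can be sketched as follows; this is a minimal Python sketch on a synthetic sample, in which a simple moment fit on the clipped distribution stands in for the Gaussian histogram fit used in our analysis, and all numbers are illustrative.

```python
import numpy as np

def excess_significance(f_obs, sig_obs, f_star, sig_star):
    """chi_i = (F_obs - F_star) / sqrt(sig_obs^2 + sig_star^2)."""
    return (f_obs - f_star) / np.sqrt(sig_obs**2 + sig_star**2)

def calibrate_threshold(chi, n_sigma=4.0, clip=3.0):
    """Estimate the noise distribution from stars with chi <= clip
    (assumed excess-free) and return (mu, sigma, mu + n_sigma*sigma)."""
    noise = chi[chi <= clip]
    mu, sigma = noise.mean(), noise.std(ddof=1)
    return mu, sigma, mu + n_sigma * sigma

# Synthetic sample: 100 excess-free stars with slightly underestimated
# uncertainties (sigma = 1.2), plus 5 stars with genuine excesses.
rng = np.random.default_rng(0)
chi = np.concatenate([rng.normal(0.0, 1.2, 100), rng.normal(8.0, 1.0, 5)])
mu, sigma, thresh = calibrate_threshold(chi)
dusty = chi > thresh  # stars classified as hosting dust
```

The recovered standard deviation exceeds unity, so the effective threshold $\mu+4\sigma$ is correspondingly higher than a naive $\chi>4$ cut, as in the procedure described above.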
\begin{figure*}[t] \centering \includegraphics[width=0.45\textwidth]{Rbb_Lstar_v5.png} \hspace*{5mm} \includegraphics[width=0.45\textwidth]{CorrRbb_Lstar_v5_line.png} \caption{(Left). Equivalent black body radius of the dust disks around the 66 stars in our combined sample showing a significant mid- to far-infrared excess, for which a dust temperature could be derived. Disks classified as warm ($>100$~K) and as cold ($<100$~K) are respectively colored red and blue. (Right). Same as left, for the corrected black body radius, using the 50\% astrosilicate + 50\% ice composition of \citet{pawellek15}. A purple dashed line at 50~au shows a rough separation between warm and cold dust populations. The vertical green dotted lines link the warm and cold dust populations found in the same stellar system.} \label{fig:diskradius} \end{figure*} Among the 133 stars in our combined sample, 68 show the presence of circumstellar dust in their mid- to far-infrared SED based on our analysis (see Appendix~\ref{appendix} for an illustration of all 133 SEDs). To separate these 68 stars into two categories, we set an arbitrary limit of 100~K to distinguish between the warm and cold dust populations. The new warm/cold dust classification for these 68 stars is given in Table~\ref{tab:class}. We note that two stars (HD~36187 and HD~89886) do not have sufficient mid- to far-infrared data to constrain the temperature of the detected excess. These two stars are therefore removed from our warm/cold dust samples, leaving us with a total of 131 stars, among which 66 show the presence of circumstellar dust. \longtab{ \begin{longtable}{cccccccccccc} \caption{Warm vs.\ cold dust classification for the 68 stars showing the presence of a debris disk in our SED modeling.
Symbols X and O represent respectively a significant excess and the absence of an excess in the considered filter, while a dash denotes a filter where no data (or data of insufficient quality) was available for the considered target. For the IRS data, symbol X is used when the $\pm 1 \sigma$ error interval around the measured spectrum is located above the photospheric model for a significant, contiguous part of the wavelength range spanned by the IRS spectrum. The last four columns give the inferred temperature and luminosity of the detected disks (``nc'' stands for non-constrained). Asterisks denote the newly observed stars from this study.} \label{tab:class}\\ \hline\hline name & W12 & W22 & M24 & IRS & M70 & P70 & P100 & $T_{\rm warm}$ & $T_{\rm cold}$ & $L_{\rm warm}$/$L_{\ast}$ & $L_{\rm cold}$/$L_{\ast}$ \\ & & & & & & & & (K) & (K) & ($\times 10^{-4}$) & ($\times 10^{-4}$) \\ \hline \endfirsthead \caption{continued.}\\ \hline\hline name & W12 & W22 & M24 & IRS & M70 & P70 & P100 & $T_{\rm warm}$ & $T_{\rm cold}$ & $L_{\rm warm}$/$L_{\ast}$ & $L_{\rm cold}$/$L_{\ast}$ \\ & & & & & & & & (K) & (K) & ($\times 10^{-4}$) & ($\times 10^{-4}$) \\ \hline \endhead \hline \endfoot $^{\ast}$HD 203 & O & X & X & X & X & X & X & 115 & $-$ & 1.3 & $-$\\ HD 2262 & & O & X & X & X &$-$& X & 122 & $-$ & 0.095 & $-$ \\ $^{\ast}$HD 2834 & O & O & X & X & X &$-$&$-$& $-$ & 96 & $-$ & 0.12 \\ $^{\ast}$HD 3126 & O & O & O & O & X &$-$&$-$& $-$ & 53 & $-$ & 1.3 \\ HD 7570 & O & O & X & X & O &$-$& X & 100 & $-$ & 0.066 & $-$\\ $^{\ast}$HD 9672 & X & X &$-$& X & X & X & X & 153 & 57 & 1.6 & 7.0\\ $^{\ast}$HD 10269 & O & X &$-$&$-$&$-$& X &$-$& 120 & $-$ & 1.3 & $-$\\ HD 10647 & O & O & X & X & X & X & X & 100 & 40 & 0.62 & 2.5 \\ $^{\ast}$HD 10939 & O & X & X & X & X & X & X & 190 & 58 & 0.15 & 0.73 \\ HD 11171 & O & O & O &$-$& X & X & X & $-$ & 100 & $-$ & 0.11\\ $^{\ast}$HD 15427 & O & X & X & X & X &$-$&$-$& 101 & $-$ & 0.33 & $-$\\ $^{\ast}$HD 17848 & O & O & X & X & X & X & X 
& $-$ & 61 & $-$ & 0.42\\ HD 17925 & & O & & X & X & X & X & $-$ & 80 & $-$ & 0.33\\ HD 20794$^{a}$ & &$-$& O &$-$& O & O & O & $-$ & 80 & $-$ & 0.022\\ $^{\ast}$HD 23484 & O & O & O & O & X & X & X & $-$ & 55 & $-$ & 0.93\\ $^{\ast}$HD 24649 & O & X & X &$-$&$-$& X &$-$& $-$ & 76 & $-$ & 1.5 \\ HD 25457 & & X & X & X & X &$-$&$-$& 180 & 60 & 0.40 & 0.97\\ $^{\ast}$HD 28287 & O & O &$-$&$-$&$-$& X &$-$& 140 & $-$ & 1.0 & $-$\\ HD 28355 & O & X & X & X & X &$-$& X & $-$ & 88 & $-$ & 0.41\\ HD 30495 & O & O & O & O & X & X & X & $-$ & 59 & $-$ & 0.32\\ HD 31295 & O & X & X & X & X &$-$& X & 165 & 61 & 0.19 & 0.54\\ HD 33262 & & O & O &$-$& X &$-$& X & 120 & $-$ & 0.16 & $-$\\ $^{\ast}$HD 36187$^{b}$ & O & X &$-$&$-$&$-$&$-$&$-$& nc & nc & nc & nc \\ $^{\ast}$HD 37306 & O & X & X & X & X & X & X & 120 & $-$ & 0.70 & $-$\\ $^{\ast}$HD 37484 & O & X & O & X & X &$-$&$-$& 150 & 67 & 1.4 & 2.2\\ HD 38858 & O & O & O & X & X & X &$-$& $-$ & 52 & $-$ & 22 \\ HD 39060 & X & X &$-$&$-$& X & X & X & 290 & 86 & 8.1 & 0.76\\ HD 40307 & O & O & O & X & O & O & O & $-$ & 60 & $-$ & 0.071 \\ HD 45184 & O & O & O & X & X & X &$-$& $-$ & 58 & $-$ & 0.77\\ $^{\ast}$HD 60491 & O & O & X &$-$& X &$-$&$-$& $-$ & 76 & $-$ & 2.1 \\ $^{\ast}$HD 61005 & O & X & X & X & X & X & X & 120 & 53 & 2.1 & 24 \\ HD 69830 & X & X & X & X & O &$-$& X & 373 & $-$ & 2.0 & $-$\\ HD 71155 & X & X & X &$-$& X & X & X & 109 & $-$ & 0.28 & $-$\\ $^{\ast}$HD 71722 & O & X & X & X & X &$-$& X & 210 & 73 & 0.28 & 0.76 \\ $^{\ast}$HD 76143$^{c}$ & O & X &$-$&$-$&$-$&$-$&$-$& >100 & $-$ & $\sim$2 & $-$\\ HD 76151 & O & O & O & X & X &$-$& X & 103 & $-$ & 0.14 & $-$\\ $^{\ast}$HD 89886$^{b}$ & O & X &$-$&$-$&$-$&$-$& & nc & nc & nc & nc\\ $^{\ast}$HD 90874 & O & O & O & X & O &$-$&$-$& 148 & $-$ & 0.12 & $-$\\ HD 91324 & & O & O & X & O &$-$&$-$& $-$ & 80 & $-$ & 0.053\\ $^{\ast}$HD 92945 & O & O & O & O & X & X &$-$& $-$ & 34 & $-$ & 6.8 \\ $^{\ast}$HD 105850 & X & X & X & X & O &$-$&$-$& 161 & $-$ & 0.28 & $-$\\ 
$^{\ast}$HD 105912 & O & X & X & X & X &$-$&$-$& 111 & $-$ & 0.84 & $-$\\ $^{\ast}$HD 109573 & X & X &$-$& X & O & X & X & $-$ & 97 & $-$ & 44 \\ HD 109704 & O & X & X & X & O &$-$&$-$& 140 & $-$ & 0.40 & $-$\\ HD 115617 & O & X & O & X & X & X & X & $-$ & 59 & $-$ & 0.23 \\ $^{\ast}$HD 117716 & X & O & X & X & O &$-$&$-$& 150 & $-$ & 0.085 & $-$\\ $^{\ast}$HD 118972 & O & O & O & X & X &$-$& X & $-$ & 95 & $-$ & 0.41\\ HD 135379 & & X & X & X & O &$-$&$-$& 173 & $-$ & 0.44 & $-$\\ HD 139664 & O & O &$-$&$-$& X & X &$-$& $-$ & 74 & $-$ & 1.26\\ $^{\ast}$HD 141943 & O & X & X & X & O &$-$&$-$& 102 & $-$ & 0.83 & $-$ \\ HD 160032 & & O & O & X & O &$-$& X & $-$ & 78 & $-$ & 0.053\\ HD 172555 & X & X & X & X & X & X & X & 190 & $-$ & 5.14 & $-$\\ $^{\ast}$HD 174474 & O & X &$-$&$-$&$-$& X &$-$& 280 & $-$ & 0.92 & $-$ \\ HD 178253 & O & X & X & X & X &$-$&$-$& 164 & $-$ & 0.17 & $-$ \\ $^{\ast}$HD 179520 & O & X &$-$&$-$&$-$& X &$-$& 160 & $-$ & 1.48 & $-$ \\ $^{\ast}$HD 181327 & O & X & X & X & X &$-$& X & $-$ & 80 & $-$ & 27 \\ HD 188228 & O & O & O & X & X & X & X & $-$ & 80 & $-$ & 0.041 \\ $^{\ast}$HD 191089 & O & X & X &$-$& X &$-$& X & $-$ & 94 & $-$ & 15 \\ HD 192425 & O & X & X & X & X &$-$& X & 210 & 57 & 0.26 & 0.36 \\ $^{\ast}$HD 192758 & O & X & X & X & X &$-$& X & $-$ & 64 & $-$ & 5.0 \\ HD 195627 & O & O & X & X & X & X & X & 140 & 45 & 0.15 & 0.90\\ $^{\ast}$HD 205674 & O & O & X & X & X &$-$& X & $-$ & 55 & $-$ & 3.4 \\ HD 206860 & O & O & O & X & O &$-$& X & $-$ & 85 & $-$ & 0.082 \\ HD 207129 & O & O & O & X & X & X & X & $-$ & 45 & $-$ & 0.91\\ HD 213845$^{a}$ & O & O & O & O & O &$-$& O & $-$ & 80 & $-$ & 0.032\\ HD 216435 & O & O &$-$& O & X & X &$-$& $-$ & 50 & $-$ & 0.18 \\ HD 219482 & O & O & X & X & X & X & X & $-$ & 86 & $-$ & 0.27 \\ $^{\ast}$HD 224228 & O & O & X &$-$& O &$-$&$-$& 130 & $-$ & 0.61 & $-$ \\ \end{longtable} \tablefoot{($^{a}$) no formal excess detected, but the combination of marginal (close to significant) excesses at 70
and 100~$\mu$m is clear evidence for a cold dust population (see also SEDs in Fig.~\ref{seds0} to \ref{seds-1}); ($^{b}$) no data available beyond 22~$\mu$m, preventing us from determining the temperature and luminosity of the disk; ($^{c}$) temperature constrained to be higher than 100~K by a combination of WISE12, WISE22, and archival IRAS60 \& IRAS100 photometry, also giving a rough estimate of disk luminosity (with a large error bar $\sim 10^{-4}$ due to the temperature vs.\ luminosity degeneracy).} } \subsection{Statistics of our combined sample} Based on our SED modeling, our final sample of 131 stars contains a total of 35 stars that show the presence of warm dust populations ($>100$~K), among which 11 also show the presence of a cold dust reservoir ($<100$~K). Another 31 stars show the presence of cold dust only, while 65 stars show no sign of circumstellar dust, at a sensitivity level that depends on the stellar brightness and spectral type, and on the quality of the mid- to far-infrared observations available for each star. For the 66 stars that show the presence of circumstellar dust, we plot in Fig.~\ref{fig:diskradius} the equivalent black-body radius of the dust disk, as well as the corrected black-body radius following \citet{pawellek15}, using a 50\% astrosilicate + 50\% ice composition. The correction takes into account the blow-out of the smallest dust grains, and is meant to provide a more representative estimate of the true radius of a spatially unresolved dust disk. The right-hand plot in Fig.~\ref{fig:diskradius} shows that our 100~K criterion roughly corresponds to a 50~au limit in corrected black-body radius between the warm and cold dust disks, although some A-type stars show warm dust up to about 60~au, while late-type stars show cold dust down to about 40~au.
This relatively good agreement between our temperature classification and a classification based on the corrected black body distance to the host star provides an independent justification for our classification strategy. While a 50~au limit between warm and cold dust populations may seem large, it ensures that the two populations are balanced in size, and we argue in Sect.~\ref{sub:temp_corr} that choosing a higher temperature threshold would not change the conclusions of our study. While the dusty and non-dusty samples are spread relatively evenly among A-type, F-type, and G/K-type stars, with $33\% \pm 8\%$ of stars in each of the three spectral type categories, it must be noted that the warm dust sample is largely biased towards A-type stars (with 18 A-type stars out of 35), while the cold dust sample is biased towards G/K-type stars (14 G/K-type stars out of 31). This imbalance likely arises because disks tend to be warmer around earlier type stars \citep[e.g.][]{kennedy14}. This could also be partly due to the fact that warm dust appears up to larger orbital distances around A-type stars, even after taking into account the black body correction of \citet{pawellek15}.
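The relation between dust temperature and disk radius used above can be sketched numerically. The equivalent black-body radius follows the standard equilibrium-temperature scaling; the correction factor $\Gamma = a\,(L_\ast/L_\odot)^{b}$ below uses placeholder coefficients for illustration only, the actual values for the 50\% astrosilicate + 50\% ice composition being those fitted by \citet{pawellek15}.

```python
def blackbody_radius_au(t_dust_K, lstar_lsun):
    """Equivalent black-body radius (au) of dust in thermal equilibrium:
    R_bb = (278.3 K / T_dust)^2 * sqrt(L*/Lsun)."""
    return (278.3 / t_dust_K) ** 2 * lstar_lsun ** 0.5

def corrected_radius_au(t_dust_K, lstar_lsun, a=6.0, b=-0.4):
    """'True' radius estimate R = Gamma * R_bb, Gamma = a*(L*/Lsun)**b.
    The coefficients a and b are ILLUSTRATIVE placeholders, not the
    published fit of Pawellek & Krivov (2015)."""
    gamma = a * lstar_lsun ** b
    return gamma * blackbody_radius_au(t_dust_K, lstar_lsun)

# A 100 K disk around a solar twin sits at ~7.7 au in black-body terms,
# but the blow-out correction pushes the inferred radius outward.
r_bb = blackbody_radius_au(100.0, 1.0)
```

Since $\Gamma$ decreases with stellar luminosity, the same 100~K temperature maps to a larger corrected radius around late-type stars, which is the effect visible in the right-hand panel of Fig.~\ref{fig:diskradius}.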
\begin{table*}[t] \caption{Summary of the new VLTI/PIONIER observations.} \label{tab:log} \centering \begin{tabular}{cccccc} \hline \hline Run & Night & \# stars & Seeing (\arcsec) & $\tau_0$ (ms) & Notes\\ \hline 093.C-0712(A) & 07-04-2014 & 7 & 1.6 (0.8 - 2.5) & 1.4 (1.1 - 1.7) & Strong seeing at the end of the night\\ 093.C-0712(A) & 08-04-2014 & 9 & 1.7 (0.6 - 2.9) & 2.2 (0.8 - 3.7) & Strong seeing in the middle of the night\\ 093.C-0712(A) & 09-04-2014 & 9 & 1.3 (0.6 - 2.0) & 2.5 (1.3 - 3.8) & Some clouds\\ 093.C-0712(B) & 30-08-2014 & 8 & 1.2 (0.6 - 1.9) & 1.7 (1.1 - 2.3) & Good conditions\\ 093.C-0712(B) & 31-08-2014 & 9 & 1.2 (0.6 - 1.9) & 2.1 (1.0 - 3.2) & Some clouds\\ 093.C-0712(B) & 01-09-2014 & 4 & 1.1 (0.7 - 1.6) & 3.2 (1.9 - 4.5) & Thin clouds, dome closed for a part of the night\\ 094.C-0325(A) & 22-12-2014 & 7 & 1.0 (0.5 - 1.5) & 2.3 (1.4 - 3.2) & Good conditions\\ 094.C-0325(A) & 23-12-2014 & 11& 1.4 (0.5 - 2.4) & 2.3 (1.1 - 3.6) & Strong seeing at the beginning of the night\\ 094.C-0325(A) & 24-12-2014 & 8 & 1.5 (0.4 - 2.5) & 3.1 (0.9 - 5.3) & Strong seeing at the beginning of the night\\ \hline \end{tabular} \end{table*} Our target stars mostly consist of old main-sequence field stars, generally not younger than a few hundred million years. A handful of stars in our sample are somewhat younger: HD~141943 \citep[field star, 30~Myr,][]{chen14}; HD~203, HD~39060, HD~172555, HD~181327 and HD~191089 \citep[part of the $\beta$~Pic moving group, 21~Myr,][]{binks14}; HD~192758 \citep[part of IC~2391, 40~Myr,][]{wahhaj13}; HD~109573 \citep[part of the TW Hya association, 10~Myr,][]{mittal15}; HD~106906 \citep[part of the Lower Centaurus Crux association, 15~Myr,][]{Pecaut16}; and HD~188228 \citep[part of the Argus association, 38~Myr,][]{booth13}.
We note that the average stellar age of the warm dust sample (0.83 Gyr) is significantly lower than for the control sample (2.7 Gyr), which is not unexpected as the presence of warm dust is known to be correlated with the system age \citep[e.g.,][]{su06,vican14}. Any correlation between hot and warm dust will therefore also be tested for a possible age bias, although previous studies \citep[e.g.][]{Absil13,ertel14} do not suggest a significant age dependence in the hot exozodi phenomenon. \section{Observations and data reduction} \label{sec:obsandred} Observations were carried out with VLTI/PIONIER \citep{LeBouquin11} in the H band in April, August and December 2014, each run consisting of three consecutive observing nights. An observing log of all nights can be found in Table~\ref{tab:log}. We used the four 1.8-m ATs to obtain six visibility measurements simultaneously. For all observing runs, we used an array configuration (D0-H0-G1-I1) with baselines ranging from 41~m to 82~m. This configuration is larger than the one used for the \citet{ertel14} survey, because the stars in the present sample are on average more distant and therefore require a higher angular resolution to resolve their sublimation radius (see Sect.~\ref{subsec:sublimation} for more details). After the August run, the detector of PIONIER was changed, which implied a change in the read-out mode. The read-out mode was set to FOWLER with SMALL dispersion (three spectral channels) for the observations of April and August 2014, and to HIGH SENS with a GRISM dispersion (six spectral channels) for the observations of December 2014. Four calibrator stars were selected from \citet{merand2005} for each science target, typically within 10\degree \ on sky to minimize the effects of pupil rotation or instrumental polarization \citep[see][]{LeBouquin12}. Additional selection criteria were an H-band magnitude similar to the science target, and a small angular diameter.
Most of the targets were observed in a CAL1-SCI-CAL2-SCI-CAL3-SCI-CAL4 sequence, where two non-consecutive calibrators can be the same. Out of the 62 stars in our observing list, a total of ten stars had to be removed from our final sample for various reasons. Four could not be appropriately observed: HD~141378 and HD~43879 due to incomplete observing sequences (not enough data), HD~59967 because of inappropriate calibrators, and HD~93453 because two out of the three SCI observations were obtained during a burst of bad seeing ($>2\arcsec$). The other six stars were removed after the data reduction and calibration procedure, as detailed below. The data reduction converts the raw observations into calibrated interferometric observables. We use the exact same method as in \citet{ertel14}. The first step is to calibrate the instrumental visibility within the CAL-SCI-$\dots$-CAL sequence. To do so, we calibrate each SCI individually by pairing it with either the preceding or the following CAL. During this process, we also make sure to discard all calibrators with low S/N or with a clear closure phase signal \citep[see][for details]{ertel14}. After calibration, six stars had to be rejected from our sample (HD~4247, HD~10008, HD~31392, HD~142139, HD~178606, and HD~184932), because of large discontinuities in the interferometric transfer function\footnote{The interferometric transfer function monitors the instrumental visibility, or instrumental closure phase, as a function of time.} due to poor seeing conditions, low coherence time, or clouds. The actual number of new stars added through this observing program therefore amounts to 52. The last step in the data analysis procedure is to assess the systematic polarization effects of PIONIER. These effects are automatically corrected by a dedicated option in the standard PIONIER pipeline \citep[\texttt{pndrs},][]{LeBouquin11} used for the reduction.
A detailed explanation of the polarization effects can be found in \citet{ertel14}. \section{Searching for companions} \label{sec:comp} Before searching for hot exozodiacal disks based on our interferometric observations, we first need to identify possible unknown (sub-)stellar companions that could also produce an infrared excess. This will ensure that all our target stars are single. \subsection{Principle of the search} \label{subsec:princip} Following the same approach as \citet{absil11} and \citet{marion14}, we use the full information delivered by PIONIER (squared visibilities, $V^2$, and closure phases, CP) to systematically search for companions around all the stars. In doing so, we are able to discriminate whether the small near-infrared excess detected around some of the target stars is due to an extended, mostly symmetric source, or to a point-like companion. The method used to detect companions is fully described in \citet{marion14}. We provide here a brief summary. First, we define the search region, taking into account three main factors. The first one is the Gaussian profile of the single-mode fiber used in PIONIER (FWHM $\approx$ 400 mas). The second one is the finite scan length of the optical path delay. With the medium configuration used here and for a typical scan length of 100~$\mu$m, the maximum separation that can be probed is $\Delta\theta_{\rm max} \simeq 200$~mas, although we recognize that companions with separations larger than 70~mas may not be simultaneously visible on all baselines. The third one is the sufficient sampling of the closure phase signal as a function of wavelength, which depends on the baseline and on the spectral resolution. Following \citet{absil11}, the well-sampled field-of-view is about 50~mas in radius. Taking all of this into account, we consider a search region 50~mas in radius in this study.
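The first two field-of-view limits above can be sketched numerically. The fiber profile uses the 400~mas FWHM quoted above, while the OPD limit relies on the simple first-order estimate $\Delta\theta_{\rm max} \approx L_{\rm scan}/(2B)$ (a source offset by $\theta$ shifts its fringe packet by $B\theta$ in OPD); the 50~m baseline below is an illustrative value for the D0-H0-G1-I1 configuration, not a quantity taken from our observations.

```python
import numpy as np

MAS = np.radians(1.0) / 3.6e6  # 1 mas in radians

def fiber_throughput(sep_mas, fwhm_mas=400.0):
    """Relative injection of an off-axis source into the single-mode
    fiber, modeled as a Gaussian of 400 mas FWHM."""
    sigma = fwhm_mas / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * (sep_mas / sigma) ** 2)

def opd_limit_mas(scan_length_m=100e-6, baseline_m=50.0):
    """Rough maximum separation probed by a finite OPD scan, using the
    first-order estimate theta_max ~ L_scan / (2 B); the baseline value
    is an illustrative assumption."""
    return scan_length_m / (2.0 * baseline_m) / MAS
```

With these assumed numbers the OPD limit comes out close to the $\simeq 200$~mas quoted above, and an off-axis source at 200~mas is injected at half the on-axis efficiency, consistent with the fiber FWHM.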
We note however that our search is actually sensitive to companions out to about 200~mas, with point-like sources between 50 and 200~mas creating aliasing within our 50~mas search region. As explained in Sect.~\ref{sec:stelsamp}, companions beyond 200~mas may bias our observations, but we expect that such companions (with contrasts of 1\% or more) would have already been identified in the literature. \begin{table*}[t] \caption{Summary of the stars showing a significance level higher than $5\sigma$ based on the analysis of the combined $\chi^2$ (CP+$V^2$). The significance of the detection based on the separate analysis of the CP and the $V^2$ is also given. The nature of the detection is either a disk or a point-like source, in which case its main properties are given in the last three columns.} \label{tab:detcomp} \centering \begin{tabular}{cccccccc} \hline \hline & & & & & \multicolumn{3}{c}{Point-like source} \\ \cline{6-8} Name & \multicolumn{3}{c}{Significance} & Nature & Separation & P.A. & Contrast \\ &(CP+$V^2$) & (CP) & ($V^2$)& & (mas) & (deg) & (\%)\\ \hline HD~203 & 5.4 & 3.7 & 7.3 & disk & -- & -- & --\\ HD~31203 &14.5 & 20.2 & 25.6 &point-like& $64.6 \pm 1.3$ & $-50.2\pm 0.4$ & $4.3 \pm 0.6$\\ HD~36187 & 9.3 & 2.2 & 23.4 & disk & -- & -- & --\\ HD~61005 & 6.7 & 2.5 & 7.7 & disk & -- & -- & --\\ HD~76143 & 6.4 & 5.3 & 4.9 & disk& -- & -- & -- \\ HD~80133 &4642.7 & 318.5 & 14616.4 & point-like&$6.0\pm 0.2$ & $13.2 \pm 0.7$ & $85.2 \pm 2.6$\\ HD~106906 &145.0 & 12.0 & 509.1 & point-like& $1.4 \pm 0.1$ & $95.2 \pm 5.1$ & $95.0 \pm 7.4$ \\ HD~175073 &64.7 & 59.6 & 391.3 & point-like& $31.2 \pm 0.6$ & $-84.7 \pm 0.4$ & $13.0 \pm 0.9$\\ \hline \end{tabular} \end{table*} To detect the presence of a companion, we use the closure phases and the squared visibilities in a combined way. 
As in \citet{marion14}, we compute a binary model with the primary star at the center of the search region and an off-axis companion of varying contrast $c$ at each point $(x,y)$ within it. In the present case, we can safely assume that both the primary and the secondary stars are unresolved, since all of our targets have an angular diameter $\lesssim 0.5$~mas. Then, we compute the $V^2$ and CP for each model and derive a combined goodness of fit that we normalize and collapse along the contrast axis to keep only the best-fitting companion contrast (i.e., minimum $\chi^2$ value) at each position in the search region. The resulting $\chi^2$ map can then be used to derive the probability for the single-star model to adequately represent the data, based on the $\chi^2$ distribution under a Gaussian noise assumption. If this probability is below a predefined threshold, the single-star model can be rejected and the best-fit binary solution is then considered statistically significant. The detection criterion is defined as a threshold on the significance of the detection, which can be translated into a confidence level if the underlying probability distribution function is known. \begin{figure}[t] \begin{center} \includegraphics[scale=0.45]{histosignleveltotneg_v2.pdf} \caption{Statistics of the (signed) significance level for the 52 stars based on the combined $\chi^2$, taking into account the CP and $V^2$. Four stars with a significance level higher than $10\sigma$ are not represented here for the sake of clarity.} \label{fig:histosignif} \end{center} \end{figure} To determine the significance level to use as a detection threshold, we study the noise properties of the data set by including negative contrasts in our model for the off-axis companions. While non-physical, negative companions can be used to represent positive fluctuations in the $V^2$ (i.e., situations where the measured $V^2$ is higher than the expected $V^2$ from the photosphere).
Negative companions can also be attributed to noise fluctuations in the CP, which can take both positive and negative values. In the following, we associate negative significance levels with negative companions. The histogram of the significance levels for our complete sample is illustrated in Fig.~\ref{fig:histosignif}, where the range of the plot has been limited to $[-10,10]$ for the sake of clarity. The negative significance levels in the histogram are purely due to noise fluctuations, and can therefore be used as a reference to study the noise properties of our sample. The absence of significance levels close to 0 in the histogram can be explained by the fact that, in the presence of noise and due to the limited number of observations, it is generally possible to obtain a better fit to our data sets by inserting a companion somewhere in the field-of-view than by using a single-star model. Out of the 52 stars in our observed sample, ten show a negative significance level, but none are below $-5\sigma$. We therefore decide to use $5 \sigma$ as our empirical companion detection threshold based on the combined analysis of $V^2$ and CP. We note that a $4\sigma$ threshold would also have been appropriate for this analysis, given the distribution of the negative significance levels. However, after a more detailed inspection of the closure phases, all the stars with a significance level between $4\sigma$ and $5\sigma$ actually turned out to be surrounded by extended emission, and not by point-like sources (see below for details on how this inspection was performed).
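The core of this grid search can be sketched in a few lines of Python. This is a minimal illustration with a single closed baseline triangle and a single spectral channel, using the standard visibility of an unresolved binary; the actual pipeline of \citet{marion14} (implemented in IDL) handles all baselines, triangles, and spectral channels, and all function and variable names below are ours:

```python
import numpy as np

MAS = np.pi / (180.0 * 3600e3)  # one milliarcsecond in radians

def binary_v2_cp(u, v, wl, x_mas, y_mas, c):
    """Squared visibilities and closure phase (deg) of an unresolved
    binary: primary at the center, companion of contrast c at (x, y).
    u, v are the (3,) baseline coordinates [m] of a closed triangle,
    with u[2] = u[0] + u[1] and v[2] = v[0] + v[1]."""
    phase = -2j * np.pi * (u * x_mas + v * y_mas) * MAS / wl
    vis = (1.0 + c * np.exp(phase)) / (1.0 + c)
    v2 = np.abs(vis) ** 2
    # closure phase = argument of the bispectrum over the triangle
    cp = np.angle(vis[0] * vis[1] * np.conj(vis[2]), deg=True)
    return v2, cp

def chi2_map(u, v, wl, v2_obs, dv2, cp_obs, dcp, grid_mas, contrasts):
    """Combined CP+V^2 chi^2 map, collapsed along the contrast axis
    by keeping the best-fitting contrast at each (x, y) position."""
    best = np.full((len(grid_mas), len(grid_mas)), np.inf)
    for i, x in enumerate(grid_mas):
        for j, y in enumerate(grid_mas):
            for c in contrasts:
                v2m, cpm = binary_v2_cp(u, v, wl, x, y, c)
                chi2 = np.sum(((v2_obs - v2m) / dv2) ** 2) \
                    + ((cp_obs - cpm) / dcp) ** 2
                best[i, j] = min(best[i, j], chi2)
    return best
```

By construction, injecting a synthetic companion and feeding the resulting observables back to \texttt{chi2\_map} places the $\chi^2$ minimum at the injected position and contrast.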
\subsection{Results of the search} \label{subsec:rescomp} \begin{figure*}[t] \centering \includegraphics[scale=0.5]{hd3120320141224chi2total.pdf} \includegraphics[scale=0.5]{hd8013320140410chi2total.pdf}\\ \includegraphics[scale=0.5]{hd10690620141225chi2total.pdf} \includegraphics[scale=0.5]{hd17507320140409chi2total.pdf}\\ \caption{Normalized $\chi^2$ maps related to the combined CP+$V^2$ analysis for the four stars showing clear signs of an off-axis companion: HD~31203, HD~80133, HD~106906, and HD~175073 (from left to right and top to bottom). The black circles indicate the positions of the minima in the maps.} \label{fig:chi2maps} \end{figure*} Table~\ref{tab:detcomp} lists the stars that have a significance level higher than $5\sigma$ for the combined $\chi^2$ analysis. HD~31203, HD~80133, HD~106906, and HD~175073 have strong detections, not only in the combined analysis but also in the individual analysis of $V^2$ and CP. They are therefore identified as bona fide binary stars. The $\chi^2$ maps illustrating the detection of a point-like source in these four data sets are shown in Fig.~\ref{fig:chi2maps}. HD~36187 and HD~61005 have a low significance for the CP-only analysis, so the detected excess is attributed to extended emission (no evidence for the presence of a point source). For HD~203 and HD~76143, the situation is not as clear, as the detection is at best marginal in the combined and individual analyses. This requires a more careful inspection of the data to decide on the nature of the excess. For HD~203, we note that the detection in the $V^2$ is about twice as significant as in the closure phases. This is a sign that the excess identified in the combined analysis is most probably due to the presence of a disk, which creates a strong signal in the $V^2$ but not in the CP. For HD~76143, looking at the closure phase signal reveals a global offset from 0, which is the sign of a poor calibration.
This poor calibration of the CP is suspected to be at the origin of the (marginal) detection of an excess emission in the CP, and we propose that the most likely explanation is rather the presence of a circumstellar disk (although more observations would be needed to firmly confirm this statement). This leaves four stars in our sample with previously unknown companions, which will be removed from our search for hot exozodiacal disks. The four new binary stars are described in more detail in the following paragraphs. \subsection{Notes on newly identified companions} \paragraph{HD~31203 (iot Pic A).} This F0V-type star is located at 37.1~pc, and is known to be a member of a multiple star system \citep{tokovinin15}. The first companion (HD~31204, F4V) is located at $12\farcs3$, and the second one (HD~31261, K2V) at $289\arcsec$. These companions are well outside the PIONIER field-of-view, and even outside the AT field-of-view, so that they do not affect our observations. Besides being a visual multiple system, iot Pic A is known to have variable radial velocities \citep{nordstrom85}, with a variability larger than 30~km\,s$^{-1}$ on a timescale of a few days, based on four measurements. The nature and orbital parameters of the potential close companion are however not constrained, although the amplitude of the radial velocity (RV) variation points toward a solar-type companion. Based on the measured contrast $c = 0.04$ and the distance, we estimate the companion found by our interferometric observations to have an absolute magnitude $M_H = 5.49$, which corresponds roughly to a K7V spectral type according to \citet{allen00}. Assuming a face-on, circular orbit with a semi-major axis of 2.39~au, and a mass of 1.52 $M_{\odot}$ for iot Pic A \citep{david15}, the orbital period would be around 3~years.
Determining whether this companion corresponds (at least partly) to the source of the RV variability found by \citet{nordstrom85} would require more RV and interferometric observations. \paragraph{HD~80133.} This K1V-type star is located at 32.8~pc. Based on the measured contrast $c = 0.85$ and the distance, we estimate the companion to have an absolute magnitude $M_H = 4.16$, which corresponds roughly to a G8V spectral type according to \citet{allen00}. In practice, the measured contrast would rather point to a pair of K1-2V stars, or to a slightly evolved K1IV-V primary with a less evolved G8V secondary, owing to their estimated age of about 13~Gyr \citep{takeda07}. Assuming a face-on, circular orbit with a semi-major axis of 0.19~au, and a mass of about $1 M_{\odot}$ for HD~80133 \citep{takeda07}, the period would be around 1~month. Surprisingly, this star has not been identified as a binary star based on RV measurements, even though it was included in the California/Carnegie Planet Search programs \citep{valenti05,takeda07}. This might be explained either by a (very) poor time coverage in the RV survey, or by a quasi-perfectly face-on orbit. We note that the warm dust disk detected around HD~80133 by \citet{vican14} is located well outside the estimated semi-major axis of the companion (beyond 1~au), and should therefore be in a stable circumbinary configuration. \paragraph{HD~106906.} This F5V-type star is located at 92.1~pc, and is identified as a short-period binary in \citet{lagrange21} based on RV measurements. The interferometric observations presented here confirm the binary nature of the star, which turns out to be a quasi-equal flux binary. Our interferometric observations have been included in the analysis of \citet{lagrange21} to better constrain the orbital parameters of the system. We refer to that paper for a full discussion of this system. \paragraph{HD~175073.} This K1V-type star is located at 24~pc.
Based on the measured contrast $c = 0.13$ and the distance, we estimate the companion to have an absolute magnitude $M_H = 6.36$, which corresponds roughly to an M2V spectral type according to \citet{allen00}. Assuming a face-on, circular orbit with a semi-major axis of 0.76~au, and a mass of 0.8 $M_{\odot}$ for HD~175073 \citep{casagrande11}, the period would be around 9~months. Surprisingly, this star has not been identified as a binary star based on RV measurements, even though it was included in previous RV planet surveys according to \citet{grether06}. This might be explained either by a (very) poor time coverage in the RV survey, or by a quasi-perfectly face-on orbit. We note that this newly discovered companion cannot be at the origin of the W4 WISE excess detected by \citet{patel14} as their analysis was purely based on mid-infrared colors, which are similar for the host star and its companion. We also note that the warm dust disk detected around HD~175073 by \citet{patel14} is located well outside the estimated semi-major axis of the companion (beyond 2~au), and should therefore be in a stable circumbinary configuration. \subsection{On the PIONIER sensitivity to faint companions} \label{subsec:sens} In the cases where neither a companion nor a hot exozodiacal disk is detected around the target stars (35 stars out of the 52 in our sample, see Sect.~\ref{sec:exozodi} for a discussion of the hot exozodi detections), we can compute an upper limit on the contrast of faint companions around the target stars, as a function of the position in the field-of-view. These sensitivity maps are derived from the $\chi^2$ analysis, as explained in \citet{absil11}, with the difference that here we use both the $V^2$ and the CP in our $\chi^2$ analysis. From the sensitivity maps, we can derive the median sensitivity at a given radial distance by computing the median upper limit along an annulus.
The result is illustrated in Fig.~\ref{fig:senscont}, where the median sensitivity is plotted as a function of the angular separation for the 35 stars. The typical $5\sigma$ sensitivity of PIONIER in ``survey mode'' (3 OBs per target), illustrated by the red curve in Fig.~\ref{fig:senscont}, is a flux ratio of 0.7\% (i.e., $\Delta H=5.4$) for angular separations larger than 2~mas in the medium-sized AT configuration. This sensitivity corresponds typically to companions with spectral types ranging from M1V to M6V around main sequence stars with spectral types ranging from A0V to K0V. \begin{figure}[t] \centering \includegraphics[scale=0.45]{radialproftot.pdf} \caption{Sensitivity of PIONIER to point-like companions as a function of the radial distance to the central star for the 35 stars showing no H-band excess in our observations. The sensitivity is expressed as the azimuthal median of the $5\sigma$ upper limit, based on an analysis of the combined $\chi^2$ for the CP and $V^2$. The red curve is the median sensitivity on the 35 stars.} \label{fig:senscont} \end{figure} \section{Search for exozodis} \label{sec:exozodi} After removing the four stars identified as binaries in Sect.~\ref{sec:comp}, we are left with a combined sample of 133 stars, as already described in Sect.~\ref{sec:stelsamp}. Of these 133 stars, 48 are new observations from the observing program described in this paper. In this section, we briefly summarize the principle of the search for hot exozodis, and detail the new exozodis found around our 48 new targets. When it comes to the detection of faint, circumstellar excess emission, the strength of infrared interferometry is the ability to spatially resolve this emission and thus spatially disentangle it from the much brighter stellar emission. 
When observing at small baselines of up to a few tens of meters, the host star is nearly unresolved (minimizing the effects of its uncertain diameter on the prediction of the system's $V^2$), while an extended circumstellar emission is ideally fully resolved \citep[see][]{difolco07}. This results in a drop in $V^2$ compared to the purely stellar $V^2$, because the resolved circumstellar emission adds incoherent flux. This represents the core of our detection strategy. \subsection{Fitting strategy} \label{subsec:strat} As shown by previous studies \citep{absil09,defrere11}, the $V^2$ drop induced by a circumstellar disk does not depend significantly on the assumed geometry of the disk, provided that the disk is resolved at the considered baselines. As in previous studies, we therefore consider a model consisting of a limb-darkened photosphere surrounded by a uniform circumstellar emission filling the entire field of view of PIONIER ($\sim 400$~mas). The visibility expected from a limb-darkened photosphere is estimated according to \citet{hanbury74} using the linear H-band limb-darkening coefficients of \citet{claret95}. We estimate the visibility for the whole bandwidth of each spectral channel, considering the actual spectrum of the star using tabulated H-band spectra from \citet{pickles98} and the spectral transmission of the PIONIER instrument. The estimated $V^2$ are then compared with the measurements, and the flux ratio for each data set is derived. The computation is performed by a set of IDL routines initially developed for CHARA observations by \citet{absil06}, and later adapted to other interferometers by \citet{defrere11}. To derive the value and uncertainty of the flux ratio for each target, we use a bootstrapping algorithm, where each individual fit to the data is performed using a Levenberg-Marquardt least-squares minimization \citep{Markwardt09}.
This means that the uncertainty on the flux ratio is estimated from the scatter of the data points, rather than directly from their individual uncertainties. In addition, a systematic uncertainty of $5\times 10^{-4}$ due to chromaticism is added to the uncertainty on the estimated flux ratio \citep{ertel14}. For the bootstrapping, we consider that simultaneous spectral channels are fully correlated while the six baselines are fully uncorrelated. \subsection{Results of the search} \label{subsec:resdisk} \begin{table}[!t] \caption{Summary of the results for the 48 stars used for our new hot exozodiacal disk survey (excluding all binaries, and the data sets removed in Sect.~\ref{sec:obsandred}). Stars showing a significant level of excess emission (significance higher than 3$\sigma$) are highlighted in gray. The reduced $\chi^2$ of the star+disk model fit to the data is given in the last column.} \label{tab:all} \centering \begin{tabular}{cccc} \hline\hline Star & Contrast & Signif. & $\chi^2_r$ \\ & (\%) & ($\sigma$) & \\ \hline \rowcolor[gray]{0.9} \object{HD 203} & $0.96 \pm 0.23$ & 4.25 & 0.84 \\ \object{HD 2834} & $2.29 \pm 0.85$ & 2.69 & 5.69 \\ \object{HD 3126} & $-0.01 \pm 0.24$ & $-0.04$ & 0.63 \\ \rowcolor[gray]{0.9} \object{HD 4113} & $0.75 \pm 0.20$ & 3.82 & 0.63 \\ \object{HD 9672} & $0.30 \pm 0.31$ & 0.96 & 1.85 \\ \object{HD 10269} & $-0.16 \pm 0.14$ & $-1.15$ & 0.31 \\ \object{HD 10939} & $0.35 \pm 0.33$ & 1.05 & 1.11 \\ \object{HD 15427} & $0.05 \pm 0.18$ & 0.28 & 0.46 \\ \rowcolor[gray]{0.9} \object{HD 17848} & $1.21 \pm 0.14$ & 8.69 & 0.37 \\ \object{HD 23484} & $0.69 \pm 0.25$ & 2.71 & 0.74 \\ \rowcolor[gray]{0.9} \object{HD 24649} & $1.14 \pm 0.23$ & 5.05 & 0.80 \\ \object{HD 28287} & $0.06 \pm 0.32$ & 0.19 & 1.36 \\ \object{HD 29137} & $0.23 \pm 0.16$ & 1.45 & 0.66 \\ \rowcolor[gray]{0.9} \object{HD 36187$^{a}$} & $1.95 \pm 0.13$ & 15.00 & 0.40 \\ \object{HD 37306} & $0.26 \pm 0.15$ & 1.75 & 0.49 \\ \object{HD 37484} & $0.28 \pm 0.21$ & 1.36 & 0.84 \\
\object{HD 38949} & $-0.09 \pm 0.18$ & $-0.51$ & 0.71 \\ \object{HD 41278} & $0.40 \pm 0.22$ & 1.85 & 0.68 \\ \object{HD 44524} & $0.01 \pm 0.20$ & 0.05 & 0.66 \\ \object{HD 60491} & $0.44 \pm 0.16$ & 2.78 & 0.61 \\ \rowcolor[gray]{0.9} \object{HD 61005} & $0.81 \pm 0.12$ & 6.70 & 0.36 \\ \object{HD 71722} & $0.37 \pm 0.17$ & 2.21 & 0.57 \\ \rowcolor[gray]{0.9} \object{HD 76143} & $0.60 \pm 0.18$ & 3.33 & 0.47 \\ \rowcolor[gray]{0.9} \object{HD 80883} & $1.43 \pm 0.18$ & 8.07 & 0.65 \\ \rowcolor[gray]{0.9} \object{HD 89886$^{a}$} & $0.92 \pm 0.26$ & 3.47 & 1.02 \\ \rowcolor[gray]{0.9} \object{HD 90781} & $0.63 \pm 0.16$ & 3.98 & 0.41 \\ \object{HD 90874} & $0.34 \pm 0.13$ & 2.62 & 0.36 \\ \object{HD 92945} & $0.00 \pm 0.18$ & 0.00 & 0.51 \\ \object{HD 105850} & $-0.06 \pm 0.18$ & $-0.34$ & 0.56 \\ \object{HD 105912} & $0.31 \pm 0.15$ & 2.08 & 0.45 \\ \object{HD 109573} & $0.35 \pm 0.15$ & 2.35 & 0.39 \\ \rowcolor[gray]{0.9} \object{HD 109704} & $0.85 \pm 0.12$ & 7.03 & 0.34 \\ \object{HD 112603} & $0.41 \pm 0.25$ & 1.61 & 0.84 \\ \object{HD 117716} & $0.40 \pm 0.18$ & 2.26 & 0.44 \\ \object{HD 118972} & $0.16 \pm 0.09$ & 1.70 & 0.16 \\ \rowcolor[gray]{0.9} \object{HD 136544} & $1.43 \pm 0.35$ & 4.04 & 1.16 \\ \object{HD 141943} & $-0.15 \pm 0.19$ & $-0.76$ & 0.51 \\ \object{HD 161612} & $-0.30 \pm 0.12$ & $-2.48$ & 0.30 \\ \object{HD 174474} & $-0.12 \pm 0.20$ & $-0.61$ & 0.75 \\ \object{HD 179520} & $0.44 \pm 0.24$ & 1.79 & 0.76 \\ \rowcolor[gray]{0.9} \object{HD 181327} & $0.48 \pm 0.16$ & 3.03 & 0.45 \\ \object{HD 185615} & $-0.61 \pm 0.31$ & $-1.94$ & 0.92 \\ \object{HD 191089} & $0.33 \pm 0.48$ & 0.68 & 2.29 \\ \object{HD 192758} & $0.12 \pm 0.28$ & 0.42 & 0.89 \\ \object{HD 196141} & $0.39 \pm 0.20$ & 1.99 & 0.58 \\ \object{HD 205674} & $0.43 \pm 0.50$ & 0.86 & 2.52 \\ \object{HD 220476} & $0.04 \pm 0.18$ & 0.23 & 0.49 \\ \object{HD 224228} & $0.27 \pm 0.39$ & 0.69 & 0.98 \\ \hline \end{tabular} \tablefoot{($^{a}$) not used in the statistical analysis due to 
insufficient mid- to far-infrared data.} \end{table} \begin{figure*}[t] \begin{center} \includegraphics[scale=0.45]{histogaussfit2.pdf} \includegraphics[scale=0.45]{histoerrtot.pdf}\\ \caption{Distribution of the excess significance level (left) and of the uncertainties on the disk-to-star flux ratio (right) for the observed sample. The Gaussian fit to the negative part of the significance distribution is represented by a dotted line.} \label{fig:histoexo} \end{center} \end{figure*} Table~\ref{tab:all} presents the results of the fit in terms of disk/star flux ratio, for the 48 new targets observed here. The measured flux ratio is averaged over the three or six spectral channels in our PIONIER observations. To define an appropriate detection threshold, we study the distribution of the significance level $\chi_f$, defined as the ratio between the measured disk/star flux ratio and the uncertainty on this quantity. Figure~\ref{fig:histoexo} shows the histogram of the significance level for our sample of 48 stars. We decided to use a $3\sigma$ detection threshold, as in the study of \citet{ertel14}. Since we used the same methods for observation and data reduction on the same instrument, we can assume that the distribution of the uncertainties will also be comparable. As an additional argument, we study the negative part of the distribution of the significance level. The standard deviation of the negative part, after mirroring it onto the positive side, is found to be 1.2. In Fig.~\ref{fig:histoexo}, a Gaussian distribution with this standard deviation is over-plotted on the data to guide the eye. The good match in both shape and width confirms that a 3$\sigma$ criterion is appropriate; it corresponds to a false alarm probability of 0.27\%, and should therefore avoid spurious detections in our sample.
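As a concrete illustration, the two numbers discussed here (the empirical width of the noise distribution, obtained by mirroring the negative significance levels, and the false alarm probability of a 3$\sigma$ threshold) can be reproduced with a short sketch. The function names are ours; the actual analysis was performed with the IDL tools mentioned above:

```python
import numpy as np
from math import erf, sqrt

def noise_width_from_negatives(signif):
    """Estimate the noise width of the significance-level distribution
    by mirroring its negative part onto the positive side (zero-mean
    Gaussian assumption), as described in the text."""
    neg = signif[signif < 0]
    mirrored = np.concatenate([neg, -neg])  # symmetric, zero-mean sample
    return mirrored.std()

def false_alarm_probability(threshold):
    """Two-sided Gaussian false-alarm probability for a detection
    threshold expressed in sigma units."""
    return 1.0 - erf(threshold / sqrt(2.0))
```

For a threshold of 3, \texttt{false\_alarm\_probability} returns $\approx 0.0027$, the 0.27\% false alarm probability quoted above.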
The 13 stars highlighted in gray in Table~\ref{tab:all} have a significance level above $3\sigma$, and are therefore classified as showing a near-infrared excess attributed to circumstellar emission. In Fig.~\ref{fig:v2fits1}, we show the wavelength dependence of the measured flux ratio for the 13 stars showing a significant near-infrared excess. The large number of stars with significance levels in the 1$\sigma$-3$\sigma$ range suggests that there may be a population of excesses just below the detection threshold, which remain undetected in our study. Comparing the negative and positive parts of the histogram, we estimate that an additional dozen stars could have an undetected H-band excess in the 1$\sigma$-3$\sigma$ range. \begin{figure*}[p] \centering \includegraphics[scale=0.80]{multiplotctr.pdf} \caption{Disk/star flux ratio as a function of wavelength for the 13 targets showing a significant H-band excess in our observations, as well as for HD~109573 (HR~4796), which shows a significant excess only in the reddest spectral channel. The blue, red, and green curves show the best fit to these measured flux ratios using blackbodies at 1000~K, 2000~K, and at the star's temperature (constant flux ratio), respectively.} \label{fig:v2fits1} \end{figure*} \subsection{Notes on specific hot-exozodi targets} \paragraph{HD~4113.} This old G5V star is known to have a planetary companion, discovered by RV measurements \citep{Tamuz08}, as well as a directly imaged brown dwarf companion at a projected separation of 22~au \citep{Cheetham18}. The properties of the inner, planetary companion were revisited by \citet{Cheetham18}: $M \sin i=1.602$~M$_{\rm Jup}$, $a=1.298$~au, $e=0.8999$. While this star was originally classified as surrounded by a warm dust disk by \citet{vican14}, this classification was based on a single WISE photometric data point at 22~$\mu$m.
Our revised SED analysis does not show the presence of any significant dust population around this star, based on WISE and AKARI photometry. The presence of a hot dust population therefore does not seem to be connected to a massive outer reservoir of larger bodies. It is interesting to assess whether its inner, eccentric giant planet may have a direct influence on the architecture of the hot dust population. We assume that most of the hot dust is located at the sublimation distance of silicate grains (sublimation temperature of 1500~K), i.e., a distance of 0.04~au. Based on the orbital elements of the planet, the periastron is at 0.12~au, while the apastron is at 2.42~au. This orbital configuration suggests that the planet could have a direct influence on the architecture of the dust disk. To our knowledge, this is only the second hot dust system with a well-characterized inner giant planet (the first one being $\beta$~Pictoris). This configuration could be used to constrain the origin of the hot dust. Due to the presence of the planet, Poynting-Robertson (P-R) drag acting on dust grains from a hypothetical (warm or cold) outer dust belt that would remain below the detection threshold is unlikely to efficiently replenish the hot disk, as already suggested in a more general case by \citet{vanLieshout2014} and by \citet{Bonsor18} based on numerical simulations. A scenario where planetesimals belonging to an outer reservoir would be destabilized by the RV planet and sent towards the inner, hot regions where they would sublimate seems like a more plausible hot dust production scenario in this case, in a process somewhat akin to the falling evaporating bodies scenario proposed for the $\beta$~Pictoris system \citep{Beust90} -- although such a massive planet would be better at ejecting planetesimals than scattering them inwards \citep[e.g.,][]{Wyatt17}.
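The back-of-the-envelope estimates used above (the orbital periods of the newly identified companions, the periastron and apastron of the HD~4113 planet, and the silicate sublimation distance) follow from Kepler's third law and a blackbody grain temperature profile. The sketch below reproduces them; the values quoted in the text come from the full orbital solutions and the measured stellar luminosities, so small differences in the last digit are expected:

```python
import numpy as np

def orbital_period_yr(a_au, m_star_msun):
    """Kepler's third law, neglecting the companion mass:
    P [yr] = sqrt(a^3 / M) with a in au and M in solar masses."""
    return np.sqrt(a_au ** 3 / m_star_msun)

def peri_apo_au(a_au, e):
    """Periastron and apastron distances of an eccentric orbit."""
    return a_au * (1.0 - e), a_au * (1.0 + e)

def sublimation_radius_au(l_star_lsun, t_sub=1500.0):
    """Distance where a blackbody grain reaches t_sub, using
    T = 278.3 K (L/Lsun)^(1/4) (r/au)^(-1/2)."""
    return (278.3 / t_sub) ** 2 * np.sqrt(l_star_lsun)

# iot Pic A:  a = 2.39 au, M = 1.52 Msun -> P ~ 3 yr
# HD 80133:   a = 0.19 au, M = 1.0  Msun -> P ~ 1 month
# HD 175073:  a = 0.76 au, M = 0.8  Msun -> P ~ 9 months
# HD 4113:    sublimation_radius_au(1.0) ~ 0.035 au for a Sun-like star
```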
\paragraph{HD~20794.} The near-infrared excess detected around this nearby Solar-type star was already reported in \citet{ertel14}. This star is known to be the host of at least three (maybe four) super-Earth planets, orbiting between 0.1 and 1 au from the star \citep{Feng2017}, and may also host a massive giant planet at a separation between 2 and 10 au based on Gaia proper motion analysis \citep{Kervella19}. Based on our SED analysis, and on the detailed model described in \citet{kennedy2015}, we do not confirm the presence of a warm dust population as suggested by \citet{cotten16}, and classify this target as a cold disk system with a temperature of 80~K. It is another case where a planetary system is located between the hot, inner disk and the outer debris disk, as discussed in detail by \citet{kennedy2015}. Even considering only the RV planets orbiting HD~20794, which are much less massive than the Jupiter-sized companion of HD~4113, the planetary system is still expected to largely prevent dust from replenishing the inner disk through P-R drag. This is another indication that P-R drag is probably not at the origin of -- or at least not the only contributor to -- the detected near-infrared excess. \paragraph{HD~61005 and HD~181327.} These two stars are known to be surrounded by copious amounts of dust, and show asymmetries in their outer debris disks, which might be due to collisions of Pluto-like objects. Our SED analysis also finds that HD~61005 hosts a warm dust population, at a black-body temperature of $\sim$120~K. Based on near-infrared scattered light observations, \citet{olofsson16} and \citet{esposito16} show that the eastern side of the HD~61005 disk is brighter than the western side.
\citet{olofsson16} argue that an observed peak of density at the pericenter of the disk may be the signpost of a recent impact, since the material released by the impact would pass again through the initial collision point, creating more collisions and thus enhancing the density. HD~181327, a member of the $\beta$~Pic moving group ($\sim$20~Myr), also shows an asymmetry in its outer disk, which may be caused by a recent massive collisional event or by interactions with the interstellar medium \citep{stark14}. The possible collisional activity in the outer part of these two debris disks could be related to a major dynamical instability akin to the Late Heavy Bombardment in our Solar system. In such an event, we would expect planetesimals to be injected into the inner parts of the planetary system, where they may create the hot dust detected in our observations. \paragraph{HD~109573 (HR~4796).} This A0-type member of the TW Hya association ($\sim$10~Myr) does not show a significant H-band excess when considering the three PIONIER spectral channels together. However, looking at the spectral channels separately shows a strong slope of the excess emission, increasing with wavelength to a level such that the longest-wavelength channel has an excess of $0.51\% \pm 0.17\%$, significant at the $3\sigma$ level (see Fig.~\ref{fig:v2fits1}). This may correspond to the onset of thermal emission of a hot exozodiacal disk at a temperature around 1000~K. Although tentative, this possible H-band excess is interesting to put in perspective with the global debris disk architecture. According to \citet{chen14}, the debris disk can be best represented by a two-temperature black body model, with the innermost ring at a temperature of 231~K (i.e., at about 5.7~au from the star). We note however that the presence of warm dust in this system is disputed.
Indeed, \citet{wahhaj05} found evidence of warm dust based on mid-infrared imaging, but \citet{kennedy14} proposed that the emission of HR~4796 is compatible with a single black body. A single black body is also suggested by our SED analysis, with a temperature of 97~K, which results in a cold dust classification in our statistical sample. More recently, \citet{Lisse2017} suggested the presence of a tenuous thermal emission component from close-in, $\sim$850~K circumstellar material based on near- to mid-infrared spectroscopy, which might be directly connected to the small H-band excess detected in our PIONIER data. Near-infrared high-contrast imaging shows that the outer belt around HR~4796 consists of a sharp, offset ring of dust \citep[e.g.,][]{Milli17,Chen20}, with an angular separation from the star as small as $\sim 200$~mas along the semi-minor axis due to projection effects. The detected H-band excess may therefore also be (partly) due to the contribution of scattered light from the outer debris disk. Based on the surface brightness of the disk extracted by \citet{Milli17}, and considering the off-axis transmission of the PIONIER single-mode fibers, we estimate that the outer disk could contribute up to 0.1\% in terms of integrated H-band excess. The outer disk alone would therefore most probably not explain the measured excess of $\sim0.5\%$. Further evidence comes from the tentative slope of the measured excess, which would not be consistent with scattered light. The morphology of the HR~4796 outer disk can be best explained through the influence of an eccentric planetary companion that would clear the interior region of the cold dust belt \citep{Lagrange12}. Both \citet{Perrin15} and \citet{Milli17} suggest that the main contribution to scattered light in the outer dust ring comes from rather large, porous grains.
This points towards a low dynamical excitation in the outer disk \citep{Lisse2017}, which seems at odds with the main scenarios proposed to explain the presence of a hot exozodiacal disk. Once again, the hot dust population seems disconnected from the cold dust reservoir, but before further investigating the global disk architecture, follow-up observations with near-infrared interferometry will be needed to confirm the tentative H-band excess. \section{Discussion} \label{sec:discussion} \begin{table*}[t] \caption{Hot exozodiacal disk statistics from the combined PIONIER surveys of \citet{ertel14} and this work. Columns ``\#S'' and ``\#E'' represent the number of target stars and of hot exozodi detections, respectively.} \label{tab:statdata} \centering \begin{tabular}{ccccccccccccccc} \hline \hline & \multicolumn{3}{c}{A-type stars} & \multicolumn{3}{c}{F-type stars} & \multicolumn{3}{c}{G/K-type stars} & \multicolumn{3}{c}{Total}\\ & \#S & \#E & detect.~rate & \#S & \#E & detect.~rate & \#S & \#E & detect.~rate & \#S & \#E & detect.~rate \\ [+4pt] \hline All$^{a}$ & 40 & 7 & $17.5^{+7.5}_{-4.4} \%$ & 51 & 10 &$19.6^{+6.7}_{-4.4} \%$ & 42 & 5 & $11.9^{+6.8}_{-3.3} \%$ & 133 & 22 &$16.5^{+3.7}_{-2.7} \%$\\ [+4pt] Warm dust & 18 & 3 & $16.7^{+12.1}_{-5.3} \%$ & 11 & 2 & $18.2^{+16.3}_{-6.4} \%$ & 6 & 1 & $16.7^{+23.2}_{-6.3} \%$ & 35 & 6 & $17.1^{+8.1}_{-4.6} \%$\\ [+4pt] Warm dust only & 12 & 2 & $16.7^{+15.5}_{-5.8} \%$ & 7 & 2 & $28.6^{+20.3}_{-10.6} \%$ & 5 & 0 & $0.0^{+26.3}_{-0.0} \%$ & 24 & 4 & $16.7^{+10.1}_{-5.0} \%$\\ [+4pt] Cold dust only & 5 & 2 & $40.0^{+21.4}_{-15.6} \%$ & 12 & 2 & $16.6^{+15.5}_{-5.8} \%$ & 14 & 1 & $7.1^{+13.2}_{-2.3} \%$ & 31 & 5 & $16.1^{+8.6}_{-4.5} \%$\\ [+4pt] No warm dust & 21 & 3 & $14.3^{+10.8}_{-4.6} \%$ & 39 & 7 & $17.9^{+7.7}_{-4.5} \%$ & 36 & 4 & $11.1^{+7.4}_{-3.2} \%$ & 96 & 14 & $14.6^{+4.3}_{-2.8} \%$\\ [+4pt] No dust & 16 & 1 & $6.3^{+11.8}_{-2.0} \%$ & 27 & 5 & $18.5^{+9.6}_{-5.2} \%$ & 22 & 3 & $13.6^{+10.4}_{-4.3} 
\%$ & 65 & 9 & $13.8^{+5.4}_{-3.2} \%$ \\ [+4pt] \hline \end{tabular} \tablefoot{($^{a}$) also includes two dusty stars that have no warm/cold classification: HD~36187 (A0V) and HD~89886 (F7V).} \end{table*} In Table~\ref{tab:statdata}, we summarize the results of the PIONIER surveys for hot exozodis presented here and in \citet{ertel14}, in terms of number of detections and detection rates. The results are separated as a function of spectral type, and as a function of the presence of detectable amounts of warm and/or cold dust populations. A graphical representation of the most important information in this table is shown in Fig.~\ref{fig:histodust}, and forms the basis of the discussion in the next paragraphs. \subsection{Correlation between hot and warm dust} \label{sub:temp_corr} \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{histo_stat_tot.pdf} \caption{Detection rate of hot exozodiacal dust as a function of spectral type, and as a function of the presence of a known warm dust reservoir. No significant difference in detection rate is found between the various populations.} \label{fig:histodust} \end{figure} In our warm dust sample, we measure a detection rate of $17.1^{+8.1}_{-4.6}$\% for H-band excesses, while the control sample with no warm dust shows a detection rate of $14.6^{+4.3}_{-2.8} \%$. These two occurrence rates agree well within their error bars, and we note that choosing any temperature threshold in the 100--200~K range to classify warm against cold dust populations would not change this conclusion. To confirm that this result is compatible with the two samples being drawn from the same population, we perform a two-sample Anderson-Darling test, which tests the null hypothesis that two samples are drawn from the same parent population, without having to specify its distribution function.
Here, the two samples are defined as the collection of the H-band excess levels in the warm dust and control samples, regardless of their spectral type. The two-sample test returns a p-value of 0.13, which confirms that the null hypothesis cannot be rejected. Performing the Anderson-Darling test on the significance of the H-band excess instead of the excess level itself, to account for the specific sensitivity level reached on each star, does not change the conclusion, with a p-value of 0.22. This is consistent with the study of \citet{mennesson14}, who used the mid-infrared Keck Interferometer Nuller (KIN) to search for warm dust around 11 stars already known to host hot excesses from near-infrared interferometric observations (among a total KIN sample of 40 stars), and did not find a significant correlation between the presence of hot and warm dust. The same conclusion was reached by the recent analysis of the Large Binocular Telescope Interferometer (LBTI) HOSTS survey \citep{ertel18,ertel20}, based on a sample of 38 stars. This lack of correlation is understood as the telltale sign of a disconnection between hot and warm dust populations, which would then not be created by the same parent bodies. Here, with our much larger sample (131 stars with a warm/cold classification), we confirm that the detection rate of hot dust is not significantly enhanced by the presence of a warm asteroid belt. While it cannot be excluded that warm asteroid belts act as prominent suppliers of material to replenish the short-lived hot dust population, the presence of large amounts of warm dust does not seem to be a prerequisite for the presence of hot exozodiacal dust, and we confirm that hot dust should not be considered as the bright, near-infrared counterpart of warm belts (or at least not in a directly connected way).
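The asymmetric uncertainties quoted on our detection rates (e.g., $17.1^{+8.1}_{-4.6}\,\%$) follow from binomial statistics. A minimal sketch, assuming a flat prior on the underlying rate and a 68.3\% equal-tailed interval (the exact prescription used for the tabulated values may differ slightly):

```python
from scipy import stats

def detection_rate(n_det, n_star, conf=0.6827):
    """Observed rate plus equal-tailed credible bounds from the
    Beta(n_det + 1, n_star - n_det + 1) posterior (flat prior)."""
    lo, hi = stats.beta.interval(conf, n_det + 1, n_star - n_det + 1)
    return n_det / n_star, lo, hi

# Warm dust sample: 6 detections out of 35 stars.
rate, lo, hi = detection_rate(6, 35)
```

For 6 detections out of 35 stars, this yields a rate of 17.1\% with bounds close to 13\% and 25\%, within a percent of the interval quoted in Table~\ref{tab:statdata}.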
This lack of correlation between hot and warm dust is of course only valid at the sensitivity level of the instruments used in this study (both regarding near-infrared interferometry and mid- to far-infrared spectrophotometry), and may be challenged by future, more sensitive observations. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{histo_stat_temp.pdf} \caption{Occurrence rate of H-band excesses for the 66 stars hosting a known dust reservoir, as a function of the estimated temperature of the dust.} \label{fig:tmpstat} \end{figure} To refine our analysis, we also investigate the possible correlation between the temperature of the outer dust reservoirs and the detection of an H-band excess. If there is a direct connection between inner and outer dust disks, we may expect that the warmer the outer disk, the higher the chances of detecting an H-band excess. However, Fig.~\ref{fig:tmpstat} indicates a lack of correlation between the temperature of the outer dust reservoir and the occurrence rate of H-band excesses -- the apparent drop in occurrence rate for the warmest dust reservoirs being non-significant. A more relevant way of making this analysis may be to use the expected warm belt location rather than its temperature. However, since the dust temperature is a good proxy for its location once the black-body correction of \citet{pawellek15} is taken into account, as discussed in Sect.~\ref{sub:warmcold}, this does not change the conclusion. A final correlation that we investigated is between the luminosity of the warm debris disk (as a proxy for its mass) and the H-band excess. The inward flux of dust due to P-R drag is indeed expected to scale (albeit weakly) with the mass of the warm dust disk \citep{kennedy15}.
No correlation was found here either, which seems to concur with the conclusions of \citet{Sezestre19} that P-R drag is unlikely to be at the origin of the hot exozodi phenomenon, although we recognize that fractional luminosities may not be directly proportional to the dust mass. Finally, our new results also make it possible to revisit the conclusion of \citet{ertel14} that the presence of hot dust does not correlate with the presence of cold dust. We confirm this conclusion by comparing the detection rate of H-band excesses around the ``cold dust only'' (no warm dust) and ``no dust'' samples (see Table~\ref{tab:statdata}), and find them to be fully compatible within the statistical uncertainties. \subsection{Occurrence rate vs.\ stellar parameters} Previous studies suggested that hot exozodiacal dust is more frequent around early-type stars than solar-type stars, although no firm conclusion could be drawn due to the limited sample \citep{Absil13,ertel14}. Here, this correlation no longer appears obvious, with A-type stars showing a detection rate ($17.5^{+7.5}_{-4.4} \%$) similar to that of FGK-type stars ($16.1^{+4.5}_{-3.1} \%$) in the combined sample. This result may seem to contradict the prediction of the magnetic trapping model, which is shown by \citet{rieke16} to be more efficient around rapidly rotating stars. Although a measurement of $v \sin i$ is not available for all of the stars of our sample, we consider that stars with spectral type earlier than F5 have a much higher chance of showing high rotational velocities, due to the absence of a strong convective layer to brake their initial rotation. A two-sample Anderson-Darling test comparing the H-band excess levels of hot exozodis around stars earlier and later than F5 shows a 2.5\% probability for them to be drawn from the same population, which suggests, at a $2.2\sigma$ level, that the two samples are drawn from different populations.
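The Gaussian $\sigma$ levels quoted alongside these p-values follow the usual two-sided convention; a minimal sketch of the conversion:

```python
from scipy.stats import norm

def p_to_sigma(p):
    """Gaussian sigma equivalent of a two-sided p-value."""
    return norm.isf(p / 2.0)

sigma_spt = p_to_sigma(0.025)   # spectral-type comparison above
```

A p-value of 2.5\% indeed corresponds to $2.2\sigma$ under this convention.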
The same Anderson-Darling test, performed on the excess significance instead of the excess levels, shows a 5.3\% probability for them to be drawn from the same population, marginal evidence at best. We found in the previous section that there is a lack of correlation between the presence of inner (hot) dust and outer (warm/cold) dust in our combined sample of 133 stars. It is interesting to investigate whether this lack of correlation holds when looking separately at different spectral types. To do so, we perform the same two-sample Anderson-Darling test as before, considering separately early-type stars and solar-type stars. We choose to set the boundary between early-type and solar-type stars at F5, which corresponds to the spectral type where strong convective envelopes start to appear. The probability of the null hypothesis turns out to be above 0.3 in both cases, which suggests once again that the distribution of H-band excesses does not behave differently around stars with and without warm dust. \begin{figure}[t] \centering \includegraphics[scale=0.35]{histoage.pdf} \caption{Occurrence rate of H-band excesses as a function of stellar age in the combined sample of 133 stars.} \label{fig:agestat} \end{figure} Based on 85 single stars in their PIONIER survey, \citet{ertel14} investigated the possible relation between stellar age and hot exozodi detection rate, finding no correlation, although a possible trend was observed whereby FGK stars could have more frequent hot exozodis at old ages. We revisit their analysis including the 48 new single stars observed here (see Fig.~\ref{fig:agestat}). With this larger sample of young main sequence stars, we note an increase in detection rate at very young ages, with, in particular, three stars out of five within the $\beta$ Pic moving group ($\sim$20~Myr) showing a hot exozodi in the combined sample.
In fact, one of the two stars from the $\beta$ Pic moving group showing no hot exozodi in the combined sample, HD~172555, was identified as a marginal H-band detection by \citet{ertel14} based on the longest wavelength channel, and this detection was later confirmed to be significant through follow-up observations \citep{ertel16}. This leads to an actual detection rate of $80^{+8}_{-25}\%$ for hot exozodis at 20~Myr of age. To determine whether the population of young stars is significantly different from the older stars, we perform an Anderson-Darling test on two samples: the first one composed of young stars (younger than 30~Myr) and the second one composed of the other stars. We find a probability of 6.8\% that the two samples are drawn from the same population, i.e., marginal evidence, at a $1.8\sigma$ level, that they are drawn from different populations. Although this trend is based on a very small sample, it fits well within the picture that young main sequence stars might still be in the process of forming terrestrial planets, which may lead to strong dust production rates even in the innermost parts of the planetary systems. It is somewhat puzzling, though, that the youngest star in our sample (HD~109573, also known as HR~4796, a member of the TW Hya association) only shows a marginal H-band excess ($0.35 \pm 0.15 \%$). \subsection{Influence of partly resolved exozodis} \label{subsec:sublimation} An important aspect of the hot exozodi detection statistics, which was not explored in previous works, is the influence of the location of the dust on its detectability with infrared interferometry. The most critical case in terms of angular resolution is for the most compact disks, which corresponds to the case where the circumstellar emission comes mostly from a region close to the sublimation distance of the dust grains.
This situation actually corresponds to our current picture of hot exozodis detected with near-infrared interferometry, for which the measured excesses are understood to originate from the thermal emission of hot grains at a temperature close to sublimation \citep[e.g.,][]{mennesson11,lebreton13}. The emission could be even more confined by physical mechanisms such as grain pile-up \citep{kobayashi09}, magnetic trapping \citep{rieke16}, or gas drag \citep{pearce20}. In this case, the circumstellar emission might only be partly resolved by the interferometer, which would decrease the strength of the visibility drop, especially at the shortest baselines. Partly resolving the hot exozodiacal disk would therefore lead to a decreased sensitivity, as only part of the disk emission would affect the measured $V^2$. Here, we explore how the uneven sensitivity to compact hot exozodis around our target stars could bias the results of our survey. So far, our working hypothesis has always been that circumstellar disks are fully resolved, and we have modeled them as a uniform emission filling the whole field of view. To test the impact of this working hypothesis on the measured detection rates, we computed the sublimation radius of the grains for each of the 133 stars in our combined sample, assuming silicates with a sublimation temperature of 1500~K. Our estimation of the sublimation radius is based on a simple black-body assumption, which we validated by running specific simulations with the GRaTeR radiative transfer package \citep{augereau99,lebreton12} to explore the dependence of the sublimation radius on grain size and composition. The resulting sublimation radii are given in Table~\ref{tab:subdiam}.
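Under the black-body assumption, the sublimation radius depends only on the stellar luminosity and the adopted sublimation temperature. A minimal sketch of the conversion to physical and angular units, where the 278.3~K constant is the black-body equilibrium temperature at 1~au from the Sun (grain size and composition effects, handled with GRaTeR in our validation, are ignored here):

```python
def sublimation_radius(L_star, dist_pc, T_sub=1500.0):
    """Black-body sublimation radius around a star of luminosity
    L_star (solar units) located at dist_pc parsecs, for grains
    sublimating at T_sub kelvin.  Returns (radius in au, in mas)."""
    r_au = (278.3 / T_sub) ** 2 * L_star ** 0.5   # T = 278.3 L^1/4 r^-1/2
    r_mas = 1000.0 * r_au / dist_pc               # small-angle relation
    return r_au, r_mas

# Sun-like star at 10 pc:
r_au, r_mas = sublimation_radius(1.0, 10.0)
```

A solar twin at 10~pc gives $\sim$0.034~au, or $\sim$3.4~mas, in line with the range of values listed in Table~\ref{tab:subdiam}.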
\longtab{ \begin{longtable}{ccccc} \caption{Sublimation radius, sensitivity reduction factor ($\phi$), and effective sensitivity ($\sigma_{\rm eff}$) for the combined sample of 133 stars, computed for a sublimation temperature of 1500~K under black-body assumption. Asterisks denote the newly observed stars on the medium-sized AT configuration, while the other stars were observed on the compact AT configuration by \citet{ertel14}.} \label{tab:subdiam}\\ \hline \hline name & subl.\ rad. & subl.\ rad. & $\phi$ & $\sigma_{\rm eff}$ \\ & (au) & (mas) & &(\%)\\ \hline \endfirsthead \caption{continued.}\\ \hline\hline name & subl.\ rad. & subl.\ rad. & $\phi$ & $\sigma_{\rm eff}$ \\ & (au) & (mas) & &(\%)\\ \hline \endhead \hline \endfoot HD~142 & 0.06 & 2.29 & 0.25 & 1.06 \\ $^{\ast}$HD~203 & 0.08 & 1.94 & 0.90 & 0.25 \\ HD~1581 & 0.04 & 4.60 & 0.70 & 0.44 \\ HD~2262 & 0.12 & 5.02 & 0.77 & 0.23 \\ $^{\ast}$HD~2834 & 0.22 & 4.18 & 1.16 & 0.73 \\ $^{\ast}$HD~3126 & 0.06 & 1.43 & 0.59 & 0.40 \\ HD~3302 & 0.09 & 2.52 & 0.29 & 0.89 \\ HD~3823 & 0.05 & 2.20 & 0.23 & 0.97 \\ $^{\ast}$HD~4113 & 0.04 & 0.90 & 0.29 & 0.69 \\ HD~7570 & 0.05 & 3.15 & 0.42 & 0.62 \\ HD~7788 & 0.07 & 3.42 & 0.48 & 0.36 \\ $^{\ast}$HD~9672 & 0.17 & 2.82 & 1.23 & 0.26 \\ $^{\ast}$HD~10269 & 0.06 & 1.27 & 0.49 & 0.28 \\ HD~10647 & 0.04 & 2.36 & 0.26 & 1.01 \\ $^{\ast}$HD~10939 & 0.22 & 3.49 & 1.26 & 0.26 \\ HD~11171 & 0.06 & 2.43 & 0.27 & 1.55 \\ HD~14412 & 0.02 & 1.83 & 0.16 & 1.30 \\ HD~15008 & 0.21 & 5.00 & 0.77 & 0.42 \\ $^{\ast}$HD~15427 & 0.14 & 2.95 & 1.25 & 0.14 \\ HD~17051 & 0.05 & 2.72 & 0.33 & 0.70 \\ $^{\ast}$HD~17848 & 0.16 & 3.24 & 1.26 & 0.11 \\ HD~17925 & 0.02 & 2.23 & 0.23 & 0.99 \\ HD~19107 & 0.11 & 2.75 & 0.34 & 0.63 \\ HD~20766 & 0.03 & 2.49 & 0.28 & 0.92 \\ HD~20794 & 0.03 & 4.36 & 0.66 & 0.56 \\ HD~20807 & 0.03 & 2.89 & 0.36 & 1.46 \\ HD~22001 & 0.08 & 3.64 & 0.52 & 0.39 \\ $^{\ast}$HD~23484 & 0.02 & 1.27 & 0.49 & 0.52 \\ $^{\ast}$HD~24649 & 0.05 & 1.26 & 0.49 & 0.46 \\ HD~25457 & 
0.05 & 2.65 & 0.31 & 0.45 \\ $^{\ast}$HD~28287 & 0.02 & 0.64 & 0.16 & 1.97 \\ HD~28355 & 0.15 & 3.08 & 0.40 & 0.22 \\ $^{\ast}$HD~29137 & 0.05 & 0.91 & 0.30 & 0.53 \\ HD~30495 & 0.03 & 2.62 & 0.31 & 0.67 \\ HD~31295 & 0.14 & 4.05 & 0.60 & 0.25 \\ HD~31925 & 0.10 & 2.48 & 0.28 & 0.79 \\ HD~33111 & 0.24 & 8.62 & 0.96 & 0.43 \\ HD~33262 & 0.04 & 3.79 & 0.55 & 0.38 \\ HD~34721 & 0.05 & 2.03 & 0.20 & 1.06 \\ $^{\ast}$HD~36187 & 0.25 & 2.89 & 1.24 & 0.11 \\ $^{\ast}$HD~37306 & 0.13 & 1.99 & 0.92 & 0.16 \\ $^{\ast}$HD~37484 & 0.07 & 1.18 & 0.44 & 0.47 \\ HD~38858 & 0.03 & 2.16 & 0.22 & 1.31 \\ $^{\ast}$HD~38949 & 0.04 & 0.85 & 0.26 & 0.68 \\ HD~39060 & 0.11 & 5.66 & 0.82 & 0.28 \\ HD~40307 & 0.02 & 1.48 & 0.11 & 2.18 \\ $^{\ast}$HD~41278 & 0.06 & 1.11 & 0.40 & 0.54 \\ HD~43162 & 0.03 & 1.76 & 0.15 & 1.40 \\ $^{\ast}$HD~44524 & 0.11 & 1.05 & 0.37 & 0.54 \\ HD~45184 & 0.04 & 1.75 & 0.15 & 1.00 \\ HD~53705 & 0.04 & 2.17 & 0.22 & 1.04 \\ HD~56537 & 0.21 & 6.72 & 0.90 & 0.28 \\ $^{\ast}$HD~60491 & 0.02 & 0.79 & 0.23 & 0.69 \\ $^{\ast}$HD~61005 & 0.03 & 0.83 & 0.25 & 0.48 \\ HD~69830 & 0.03 & 2.20 & 0.23 & 1.15 \\ HD~71155 & 0.22 & 5.90 & 0.84 & 0.30 \\ $^{\ast}$HD~71722 & 0.18 & 2.46 & 1.13 & 0.15 \\ HD~72673 & 0.02 & 1.88 & 0.17 & 1.90 \\ $^{\ast}$HD~76143 & 0.14 & 2.70 & 1.20 & 0.15 \\ HD~76151 & 0.04 & 2.04 & 0.20 & 1.42 \\ HD~76932 & 0.05 & 2.19 & 0.23 & 1.85 \\ $^{\ast}$HD~80883 & 0.05 & 0.73 & 0.20 & 0.88 \\ HD~82434 & 0.11 & 5.73 & 0.83 & 0.70 \\ HD~88955 & 0.17 & 5.46 & 0.80 & 0.31 \\ $^{\ast}$HD~89886 & 0.18 & 1.10 & 0.39 & 0.68 \\ HD~90132 & 0.11 & 2.67 & 0.32 & 1.28 \\ HD~90781 & 0.08 & 1.08 & 0.38 & 0.42 \\ $^{\ast}$HD~90874 & 0.15 & 2.19 & 1.01 & 0.13 \\ HD~91324 & 0.07 & 3.36 & 0.46 & 0.37 \\ $^{\ast}$HD~92945 & 0.02 & 0.91 & 0.30 & 0.60 \\ HD~99211 & 0.12 & 4.67 & 0.71 & 0.31 \\ HD~102365 & 0.03 & 3.34 & 0.46 & 0.50 \\ HD~104731 & 0.07 & 2.81 & 0.35 & 0.40 \\ $^{\ast}$HD~105850& 0.17 & 2.99 & 1.25 & 0.14 \\ $^{\ast}$HD~105912 & 0.06 & 1.28 & 0.50 & 0.30 \\ 
HD~108767 & 0.26 & 9.72 & 0.96 & 0.16 \\ $^{\ast}$HD~109573 & 0.17 & 2.53 & 1.15 & 0.13 \\ HD~109704 & 0.14 & 2.07 & 0.96 & 0.13 \\ HD~109787 & 0.20 & 5.05 & 0.77 & 0.26 \\ $^{\ast}$HD~112603 & 0.08 & 1.30 & 0.52 & 0.49 \\ HD~115617 & 0.03 & 4.04 & 0.60 & 0.38 \\ $^{\ast}$HD~117716 & 0.19 & 2.63 & 1.18 & 0.15 \\ $^{\ast}$HD~118972 & 0.02 & 1.26 & 0.49 & 0.19 \\ HD~120136 & 0.06 & 4.06 & 0.60 & 0.37 \\ HD~128898 & 0.10 & 6.32 & 0.87 & 0.25 \\ HD~129502 & 0.09 & 5.17 & 0.78 & 0.18 \\ HD~130109 & 0.27 & 6.56 & 0.89 & 0.48 \\ HD~134083 & 0.06 & 3.25 & 0.44 & 1.07 \\ HD~135379 & 0.15 & 4.92 & 0.75 & 0.49 \\ HD~136202 & 0.07 & 2.67 & 0.32 & 2.00 \\ $^{\ast}$HD~136544 & 0.08 & 1.07 & 0.38 & 0.93 \\ HD~139664 & 0.06 & 3.68 & 0.52 & 0.36 \\ HD~141891 & 0.10 & 8.43 & 0.96 & 0.21 \\ $^{\ast}$HD~141943 & 0.05 & 0.90 & 0.29 & 0.69 \\ HD~149661 & 0.02 & 2.44 & 0.27 & 0.81 \\ HD~152391 & 0.03 & 1.66 & 0.13 & 1.34 \\ HD~160032 & 0.07 & 3.27 & 0.44 & 0.25 \\ HD~160915 & 0.05 & 2.89 & 0.36 & 0.77 \\ $^{\ast}$HD~161612 & 0.03 & 1.10 & 0.40 & 0.30 \\ HD~164259 & 0.09 & 3.68 & 0.52 & 0.36 \\ HD~165777 & 0.15 & 5.58 & 0.81 & 0.34 \\ HD~172555 & 0.10 & 3.44 & 0.48 & 0.53 \\ $^{\ast}$HD~174474 & 0.18 & 2.18 & 1.01 & 0.19 \\ HD~178253 & 0.18 & 4.67 & 0.71 & 0.50 \\ $^{\ast}$HD~179520 & 0.08 & 1.26 & 0.49 & 0.50 \\ $^{\ast}$HD~181327 & 0.07 & 1.27 & 0.49 & 0.32 \\ HD~182572 & 0.05 & 3.06 & 0.40 & 0.33 \\ $^{\ast}$HD~185615 & 0.03 & 0.80 & 0.23 & 1.37 \\ HD~188228 & 0.23 & 7.27 & 0.94 & 0.29 \\ $^{\ast}$HD~191089 & 0.06 & 1.23 & 0.46 & 1.04 \\ HD~192425 & 0.16 & 3.49 & 0.49 & 0.51 \\ $^{\ast}$HD~192758 & 0.08 & 1.24 & 0.48 & 0.60 \\ HD~195627 & 0.10 & 3.43 & 0.48 & 1.09 \\ $^{\ast}$HD~196141 & 0.03 & 0.79 & 0.23 & 0.86 \\ HD~197157 & 0.10 & 4.04 & 0.60 & 0.50 \\ HD~197692 & 0.07 & 4.72 & 0.72 & 0.28 \\ HD~203608 & 0.04 & 4.47 & 0.68 & 0.50 \\ $^{\ast}$HD~205674 & 0.06 & 1.20 & 0.45 & 1.11 \\ HD~206860 & 0.04 & 2.06 & 0.20 & 1.48 \\ HD~207129 & 0.04 & 2.51 & 0.28 & 0.63 \\ HD~210049 & 0.17 & 
4.15 & 0.62 & 0.61 \\ HD~210277 & 0.04 & 1.71 & 0.14 & 2.14 \\ HD~210302 & 0.06 & 3.24 & 0.44 & 0.57 \\ HD~210418 & 0.18 & 6.35 & 0.87 & 0.33 \\ HD~213845 & 0.06 & 2.56 & 0.30 & 0.80 \\ HD~214953 & 0.05 & 2.18 & 0.22 & 1.00 \\ HD~215648 & 0.07 & 4.59 & 0.70 & 0.31 \\ HD~215789 & 0.30 & 7.63 & 0.96 & 0.27 \\ HD~216435 & 0.06 & 1.89 & 0.17 & 1.56 \\ HD~219482 & 0.05 & 2.40 & 0.26 & 0.64 \\ HD~219571 & 0.11 & 4.89 & 0.75 & 0.36 \\ $^{\ast}$HD~220476 & 0.03 & 0.97 & 0.33 & 0.54 \\ $^{\ast}$HD~224228 & 0.02 & 0.82 & 0.24 & 1.64 \\ \hline \end{longtable} } \begin{table*}[t] \caption{Hot exozodiacal disk statistics from the combined PIONIER surveys \citep[][and this work]{ertel14}, after removing all stars showing an effective sensitivity larger than $0.5\%$, taking into account partial resolution effects. Columns ``\#S'' and ``\#E'' represent the number of target stars and of hot exozodi detections, respectively.} \label{tab:statdatacorr} \centering \begin{tabular}{ccccccccccccccc} \hline \hline & \multicolumn{3}{c}{A-type stars} & \multicolumn{3}{c}{F-type stars} & \multicolumn{3}{c}{G/K-type stars} & \multicolumn{3}{c}{Total}\\ & \#S & \#E & detect.~rate & \#S & \#E & detect.~rate & \#S & \#E & detect.~rate & \#S & \#E & detect.~rate \\ [+4pt] \hline All$^{a}$ & 34 & 7 & $20.6^{+8.5}_{-5.2} \%$ & 28 & 7 &$25.0^{+9.7}_{-6.3} \%$ & 6 & 1 & $16.7^{+23.2}_{-6.3} \%$ & 68 & 15 &$22.1^{+5.8}_{-4.2} \%$\\ [+4pt] Warm disk & 16 & 3 & $18.8^{+13.1}_{-6.1} \%$ & 8 & 2 & $25.0^{+19.3}_{-9.1} \%$ & 1 & 1 & $100^{+0.0}_{-60.0} \%$ & 25 & 6 & $24.0^{+10.3}_{-6.4} \%$\\ [+4pt] Warm disk only & 11 & 2 & $18.2^{+16.3}_{-6.4} \%$ & 6 & 2 & $33.3^{+21.1}_{-12.7} \%$ & 0 & 0 & N.A. 
& 17 & 4 & $23.5^{+12.7}_{-7.1} \%$\\ [+4pt] Cold disk only & 4 & 2 & $50.0^{+20.2}_{-20.2} \%$ & 6 & 2 & $33.3^{+21.1}_{-12.7} \%$ & 2 & 0 & $0.0^{+45.7}_{-0.0} \%$ & 12 & 4 & $33.3^{+15.1}_{-10.3} \%$\\ [+4pt] No warm disk & 17 & 3 & $17.6^{+12.6}_{-5.7} \%$ & 20 & 5 & $25.0^{+11.7}_{-7.1} \%$ & 5 & 0 & $0.0^{+26.3}_{-0.0} \%$ & 42 & 8 & $19.0^{+7.4}_{-4.6} \%$\\ [+4pt] No disk & 13 & 1 & $7.7^{+14.0}_{-2.6} \%$ & 14 & 3 & $21.4^{+14.2}_{-7.0} \%$ & 3 & 0 & $0.0^{+36.8}_{-0.0} \%$ & 30 & 4 & $13.3^{+8.6}_{-4.0} \%$ \\ [+4pt] \hline \end{tabular} \tablefoot{($^{a}$) also includes one dusty star with no warm/cold classification: HD~36187 (A0V).} \end{table*} The estimated sublimation radii must then be compared with the angular resolution of the interferometric array to determine whether the disks are fully or only partly resolved. To do so, we considered infinitesimally thin rings with diameters ranging between 1.2 and 8.5~mas, corresponding to twice the minimum and maximum sublimation distance for the newly observed stars on the medium-sized AT configuration (see Table~\ref{tab:subdiam}). We injected these thin rings around a typical star of our survey to produce the expected $V^2$ for the star-disk system, using the medium-sized AT configuration at the VLTI (D0-H0-G1-I1) and a typical observing setup in terms of target elevation and hour angle coverage. This expected $V^2$ was then passed to our exozodi detection routine, which is based on the assumption of a fully resolved circumstellar disk filling the entire field-of-view, and we extracted the measured disk/star flux ratio using our standard fitting method. This measured flux ratio was then compared to the actual flux ratio injected in the model (chosen to be $3\%$ in this case), to produce a ``sensitivity reduction factor'' ($\phi$), defined as the ratio of measured to injected disk/star flux ratio. 
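The essence of this exercise can be reproduced with a strongly simplified model: a point-like star surrounded by a thin face-on ring, observed at a handful of baseline lengths (the values below are illustrative, not the actual D0-H0-G1-I1 geometry). Fitting the resulting $V^2$ with the fully resolved disk model and comparing the recovered flux ratio to the injected one yields a sketch of $\phi$:

```python
import numpy as np
from scipy.special import j0

MAS = np.pi / (180.0 * 3600.0 * 1000.0)   # one milliarcsecond in radians

def phi_ring(radius_mas, baselines_m, lam=1.65e-6, c=0.03):
    """Sensitivity reduction factor for a thin face-on ring of angular
    radius radius_mas and disk/star flux ratio c.  The star is taken
    as point-like, and the ring visibility is the Bessel function J0."""
    x = 2.0 * np.pi * np.asarray(baselines_m) * radius_mas * MAS / lam
    v2 = ((1.0 + c * j0(x)) / (1.0 + c)) ** 2     # star + ring V^2
    # Fit with the fully resolved disk model, V^2 = 1 / (1 + c')^2:
    c_fit = np.mean(1.0 / np.sqrt(v2) - 1.0)
    return c_fit / c

bases = [40.0, 60.0, 90.0, 130.0]     # illustrative baselines in meters
phi_compact = phi_ring(0.3, bases)    # barely resolved ring: phi << 1
phi_wide = phi_ring(3.0, bases)       # well resolved ring: phi ~ 1
```

Values of $\phi$ slightly above unity are possible when the ring visibility becomes negative, consistent with the entries above 1 in Table~\ref{tab:subdiam}.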
The result of this exercise is illustrated in Fig.~\ref{fig:contmodel}, where we plot the sensitivity reduction factor for thin annular disks of increasing diameters. As expected, smaller disk diameters lead to a stronger loss of sensitivity, with only about 15\% of the flux detected for the most compact disks. Half of the flux is missed for a disk diameter of about 2.5~mas (i.e., a disk radius of 1.25~mas). The same exercise was carried out on the compact AT configuration (A1-B2-C1-D0) for the sample observed by \citet{ertel14}. The resulting sensitivity reduction factor $\phi$ is given for all of the stars in our combined sample in Table~\ref{tab:subdiam}, where asterisks denote the stars observed on the medium-sized AT configuration within the observing program presented in this paper. \begin{figure}[t] \centering \includegraphics[scale=0.36]{ctrmodel2014.pdf} \caption{Sensitivity reduction factor (i.e., ratio between the measured and injected flux ratio) as a function of the diameter of the circumstellar ring, for the medium-sized AT configuration (D0-H0-G1-I1).} \label{fig:contmodel} \end{figure} Knowing the sensitivity reduction factor for all the stars in the PIONIER surveys, we can compute the effective sensitivity ($\sigma_{\rm eff}$) of our observations under the new working hypothesis that all the disks are confined to the sublimation radius of silicates. The effective sensitivity, defined as the $1\sigma$ error bar on the disk/star flux ratio divided by the sensitivity reduction factor, is given in Table~\ref{tab:subdiam}. Based on these revised sensitivities, we define a homogeneous sample in terms of effective sensitivity, by rejecting all the stars that have an effective sensitivity larger than $0.5\%$ (for which the chances of detecting a hot exozodi are much lower, owing to the typical brightness of hot exozodis). This gives us a new sample of 68 stars, among which 25 show the presence of warm dust.
The hot exozodi detection rate can then be recomputed on this new, more homogeneous sample, which is however strongly biased towards early-type stars because of the larger star/disk angular separation in those systems. The new detection rates are summarized in Table~\ref{tab:statdatacorr}. They are still compatible with each other within error bars, although the occurrence rate for the ``no-dust'' sample ($13.3^{+8.6}_{-4.0} \%$) seems to be systematically lower than for the rest of the sample (stars hosting warm and/or cold dust), which shows an occurrence rate of $28.9^{+8.2}_{-6.1} \%$ (11 out of 38 stars\footnote{This includes one dusty star for which the dust temperature could not be determined, so that it does not show up in either of the ``warm'' or ``cold'' dust categories in Table~\ref{tab:statdatacorr}.}). To further explore this possible correlation, we used a two-sample Anderson-Darling test to compare the H-band excess distribution within the dusty and non-dusty samples, containing respectively 38 and 30 stars. The Anderson-Darling test on the H-band excess levels shows that the null hypothesis that the two samples are drawn from the same population can be rejected at a significance level $p=0.0028$, which corresponds to a $3.0\sigma$ level. Using the excess significance instead of the excess levels in the Anderson-Darling test would increase to $3.4\sigma$ the significance at which the two samples can be claimed to be drawn from different populations. Since the $1\sigma$ sensitivity threshold of 0.5\% used to define our sensitivity-corrected sample is somewhat arbitrary, we also examine a case where the threshold is set to 0.33\%. This new threshold boosts to a $3.7\sigma$ level the significance at which a common underlying population can be rejected, for a sample of 40 stars.
Tentative evidence for a correlation between the presence of hot dust and an outer reservoir had already been found based on K-band observations at the CHARA array, but only for solar-type stars (FGK types), and based on a smaller sample \citep{Absil13,nunez17}. This tentative correlation was not confirmed at H band on a larger sample of stars in the study of \citet{ertel14}. The analysis presented here seems to finally reconcile the trends observed at H and K bands. It provides the first evidence at a $>3\sigma$ level that the presence of an outer dust reservoir may have a significant influence on the appearance of a near-infrared excess across all spectral types, although we underline that this conclusion is based on the assumption that the dust is arranged in a thin ring close to its sublimation radius, and that the samples are still relatively small. It must also be kept in mind that the absence of observable amounts of warm/cold dust populations does not mean the complete absence of outer dust reservoirs, which could artificially increase or decrease the significance of this tentative correlation. \begin{figure*}[t] \centering \includegraphics[width=0.48\textwidth]{hd80883base_v2.pdf} \hspace*{3mm} \includegraphics[width=0.48\textwidth]{hd80883basedisk_v2.pdf} \caption{Measured and modeled squared visibilities for HD~80883 using a limb-darkened star surrounded by uniform circumstellar emission (left) or by a ring of dust at the sublimation radius (right). The different colors of the data points represent the different spectral channels (one color per channel). The solid blue line shows the expected visibility for the stellar photosphere alone, and the dotted blue line the best fit for the star+disk model. Both disk models provide a reasonable fit to the measured visibilities, with $\chi_r^2 \leq 1$.
} \label{fig:comparslope} \end{figure*} \subsection{Location and temperature of the hot dust} In order to further investigate the robustness of the tentative conclusion from the previous section, an interesting question is whether we could discriminate between a fully resolved (uniform) circumstellar emission, and a thin annulus model. This type of morphological study has already been attempted on Fomalhaut by \citet{absil09} using VLTI/VINCI, and on $\beta$~Pictoris by \citet{defrere12} using VLTI/PIONIER. In both cases, a very large number of observations were available, but no constraint could be derived on the disk morphology. We do not expect this situation to change in the present case, where we only collected three OBs on each of our targets. Nevertheless, we search for possible signs of partly resolved disks in our whole sample of detected hot exozodis, by looking for a slope in the $V^2$ drop as a function of baseline. Indeed, partly resolved disks should lead to smaller $V^2$ drop at shorter baselines, as they become less and less resolved. This exercise is illustrated in Fig.~\ref{fig:comparslope} for the case of HD~80883, for which the expected sublimation radius is particularly small. The data set shows no significant slope, suggesting that the excess is more probably caused by an extended disk than by a thin ring at the sublimation radius, although both models are consistent with the available data set. Another possible way to constrain the location of the dust grains would be to infer their temperature. This can potentially be done by exploring the wavelength dependence of the measured disk/star flux ratio. To test this, we have fitted blackbodies of various temperatures, including the host star temperature (flat contrast) to simulate the effect of scattered light, to the measured disk/star flux ratio as a function of wavelength (Fig.~\ref{fig:v2fits1}). 
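The fitting procedure can be sketched on synthetic data (not the actual PIONIER measurements): generate a disk/star flux ratio across a few H-band spectral channels from a blackbody-ratio model, then compare the $\chi^2$ of best-scaled models at different temperatures.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(lam, T):
    """Planck spectral radiance B_lambda, up to a constant factor."""
    return 1.0 / (lam ** 5 * np.expm1(H * C / (lam * KB * T)))

def chi2_best_scale(lam, ratio, err, T_dust, T_star):
    """Chi^2 of the best-scaled model s * B(lam, T_dust) / B(lam, T_star)."""
    m = planck(lam, T_dust) / planck(lam, T_star)
    s = np.sum(ratio * m / err ** 2) / np.sum(m ** 2 / err ** 2)
    return np.sum(((ratio - s * m) / err) ** 2)

# Synthetic 1% excess from 1500 K dust around a 6600 K star, sampled
# over a few H-band channels (sampling and values are illustrative):
lam = np.linspace(1.55e-6, 1.80e-6, 6)
truth = planck(lam, 1500.0) / planck(lam, 6600.0)
ratio = 0.01 * truth / truth.mean()
err = np.full_like(lam, 0.0025)            # 0.25% per-channel uncertainty

chi2_1500 = chi2_best_scale(lam, ratio, err, 1500.0, 6600.0)
chi2_1000 = chi2_best_scale(lam, ratio, err, 1000.0, 6600.0)
```

Even on such noiseless data, the narrow H band only weakly discriminates between dust temperatures, which illustrates why only temperatures well below the sublimation temperature can be rejected in practice.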
Blackbody models with temperatures between 1500~K and the host star temperature fit the data almost equally well, with a reduced $\chi^2$ around 1. Using a blackbody temperature of 1000~K increases the median reduced $\chi^2$ to about 2 for our sample of 13 hot exozodis. The accuracy that can currently be reached with precision infrared interferometers such as VLTI/PIONIER is therefore not high enough to draw firm conclusions on the dust temperature. At best, we could reject the hypothesis that the detected excesses are due to the thermal emission of grains at temperatures below 1000~K, which is not expected anyway, as such grains would not produce significant H-band emission. A possible way to circumvent this limitation would be to follow up our detections at other wavelengths, for instance using the second-generation interferometric instruments of the VLTI \citep[GRAVITY in the near-infrared and MATISSE in the mid-infrared, see e.g.,][]{kirchschlager20}. \subsection{Origin of the hot dust} The lack of a strong correlation between the hot exozodi phenomenon and the amount of warm and/or cold dust in outer reservoirs remains puzzling. Our understanding is that the hot dust is likely supplied from an outer reservoir for most stars, because parent bodies cannot survive on Myr timescales close to the dust sublimation radius due to collisional activity \citep[e.g.,][]{absil06}. However, the determining factor of whether detectable amounts of hot dust are present seems not to be the mass or location of this reservoir, but rather a different condition triggering the phenomenon. Trapping mechanisms have been proposed to sustain the observed, high dust masses that would otherwise require extreme replenishment rates due to the efficient removal of the hot dust \citep[e.g.,][]{pearce20}.
These mechanisms include the pile-up of sublimating dust \citep{kobayashi09}, the effect of the stellar magnetic field \citep{czechowksi10,rieke16}, or the effect of gas \citep{lebreton13,pearce20}, possibly originating in the sublimation of the dust grains themselves. Trapping mechanisms could explain why even faint, undetected outer belts can supply sufficient material to produce detectable hot dust. Alternatively, or in addition, a specific configuration of the dust reservoir and a planetary system could be required to supply sufficient material to the inner regions \citep{Bonsor12a,Bonsor12b,Bonsor14,Faramaz17}. Such a mechanism could also be efficient enough that systems with detectable and undetectable cold reservoirs would appear indistinguishable to us. Irrespective of the delivery mechanism and of the presence or absence of a trapping mechanism, a larger reservoir of cold material provides more material to be delivered to the inner region. Thus, one would naively still expect a correlation between the presence of massive debris disks and near-infrared excesses, at least statistically if not for individual targets. There is only weak evidence for this in our sample, with a tentative correlation appearing only after correcting the sample for sensitivity under the assumption that all the hot dust is located close to the sublimation distance of silicates (Sect.~\ref{subsec:sublimation}). The lack of a more prominent correlation between hot dust and outer reservoirs might imply that there is an upper limit on the amount of hot dust that may be present or be supplied, and that this limit is reached in systems where the conditions for delivery and trapping are met, even for relatively small outer reservoirs.
In the case of P-R drag, the amount that reaches the innermost regions is limited by collisions between the migrating dust grains \citep{Wyatt05}: while massive, collision-dominated outer belts produce more dust that can be dragged inward, most of this dust is destroyed before it reaches the inner regions. In contrast, most of the dust created in more tenuous, transport-dominated outer belts may reach the inner regions. This naturally decouples to some degree the amount of dust supplied to the hot exozodi region from the mass of the outer reservoir. For other scenarios, like comet delivery of the hot dust, an upper limit on the amount of hot dust could be set by the collisional evolution of the dust, which happens on a shorter timescale for more massive disks, so that an equilibrium between dust influx and removal could typically be reached around similar dust levels. It is also possible that the amount of material transported inwards is dominated by the efficiency of the transport process (e.g., scattering) rather than the supply of material, which would also result in a (partial) decoupling of the hot dust quantity from the brightness of the cold reservoir. Finally, some trapping mechanisms may have intrinsic upper limits on the amount of dust they can trap. An alternative explanation for the similar flux ratios observed around all stars with near-infrared excess would be that the hot dust is optically thick. In that case, the observed flux would be driven by the surface area of the disk, and would be independent of the dust mass to first order. This scenario would then require the surface area of the hot disk to increase for earlier spectral types, which could at least partly be explained by the larger dust sublimation radius around earlier spectral types.
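The collision-limited P-R drag inflow discussed above can be illustrated with the analytic surface density profile of \citet{Wyatt05}. The sketch below uses the functional form of that solution, with all stellar and belt parameters absorbed into a single illustrative constant $k$ (the actual prefactor depends on stellar mass, belt radius, and grain properties):

```python
import numpy as np

def tau_inward(r, r0, tau0, k=2.7e4):
    """Effective face-on optical depth at radius r (au) for dust
    dragged inward by P-R drag from a parent belt at r0 (au) with
    optical depth tau0, following the functional form of Wyatt (2005):
        tau(r) = tau0 / (1 + 4 eta0 (1 - sqrt(r/r0))),
    where eta0 = k * tau0 compares the collisional and P-R drag
    timescales (k is illustrative here, not the published prefactor)."""
    eta0 = k * tau0
    return tau0 / (1.0 + 4.0 * eta0 * (1.0 - np.sqrt(r / r0)))

# A belt 100 times more massive delivers almost the same optical
# depth to the innermost regions:
tau_faint = tau_inward(0.1, 30.0, 1e-4)
tau_bright = tau_inward(0.1, 30.0, 1e-2)
```

In the collision-dominated regime, the inner optical depth saturates near $1/(4k)$, essentially independent of $\tau_0$: this is the decoupling between hot dust levels and outer belt mass invoked above.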
Among the potential origins for the hot dust population, the possibility that hot dust is primordial (i.e., a remnant of the initial protoplanetary disk) cannot be completely ruled out without a detailed analysis of the possible trapping mechanisms. This would, however, require the trapping mechanism at play to be efficient enough on Gyr timescales, and to have been ongoing since the late stages of the primordial disk dispersal, which seems rather unlikely. Under this scenario, one would also expect the hot exozodi phenomenon to show a more prominent age dependence. The apparent lack of correlation between hot and warm dust populations could be both an asset and a drawback in the preparation of space-based missions dedicated to Earth-like planet imaging. On one hand, the lack of correlation means that stars hosting hot dust populations should not necessarily be removed from the potential target lists of such missions, because they are not necessarily associated with large amounts of warm dust in the habitable zone. The influence on the mission performance of hot dust populations located at much smaller angular separations from the star than the habitable zone however remains to be investigated -- a task that we defer to a future, dedicated work. On the other hand, the lack of correlation also means that the detection of significant near-infrared emission with precision interferometry is not a prime criterion to build the target lists, so that more mid-infrared interferometric observations with existing \citep[LBTI/NOMIC,][]{ertel20} or upcoming \citep[VLTI/Hi-5,][]{defrere18} instruments will be needed.
\section{Conclusions} \label{sec:concl} In this paper, we used the VLTI/PIONIER interferometric instrument to search for resolved near-infrared circumstellar emission around a sample of main sequence stars known to harbor a warm dust disk from previous mid-infrared spectrophotometric observations, in an attempt to identify a possible connection between warm and hot dust populations. For that, we built a target list of 62 stars that showed signs of warm dust in the literature. Among the 52 stars for which we obtained data of sufficient quality, we identified 17 new H-band excesses, among which four are shown to be due to the presence of a previously unknown close stellar companion. The remaining 13 excesses are thought to originate from hot dust populations, adding to the nine hot exozodi systems already detected with PIONIER by \citet{ertel14}. Combining these two samples, resulting in a total of 133 stars, we find an overall detection rate of $16.5^{+3.7}_{-2.7}\%$ for H-band excesses around nearby main sequence stars, with a possible hint of a larger underlying population of excesses below our sensitivity limit. Taking into account the fact that some of the hot exozodiacal disks may only be partly resolved by our interferometric baseline lengths, we estimate that the true occurrence rate could actually be as high as $22.1^{+5.8}_{-4.2}\%$, if we only include stars that have a corrected $1\sigma$ sensitivity of 0.5\% or better on the disk/star flux ratio. Our data sets do not, however, allow us to discriminate between a fully resolved disk and a thin annulus at the sublimation radius as the most appropriate model to reproduce our observations, so that the true occurrence rate at our sensitivity level could be anywhere between 16.5\% and 22.1\%. We then searched for a possible correlation between the presence of a known warm dust population around the target stars and the detection of a near-infrared excess in our interferometric observations.
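The raw detection rate quoted above (13 new excesses plus nine from \citet{ertel14}, out of 133 stars) can be reproduced with a simple binomial estimate. The sketch below is illustrative only, not the statistical treatment used in the paper (which yields the asymmetric error bars quoted above); it uses a Wilson score interval as a rough $1\sigma$ range:

```python
import math

def wilson_interval(k, n, z=1.0):
    """Wilson score interval for a binomial proportion.

    z = 1.0 gives an approximately 68% (1-sigma) interval.
    """
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# 13 new H-band excesses here plus 9 from Ertel et al. (2014), out of 133 stars
k, n = 22, 133
rate = k / n
lo, hi = wilson_interval(k, n)
print(f"detection rate = {rate:.1%}, ~1-sigma range {lo:.1%} - {hi:.1%}")
```

The resulting rate of 16.5\% with an interval of roughly 14--20\% is consistent with the $16.5^{+3.7}_{-2.7}\%$ quoted above.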
For that, we re-evaluated the presence of warm and/or cold dust around all of the 133 stars in our combined sample through SED modeling, and defined two samples containing respectively the stars showing warm dust emission or not. We found that the distribution of near-infrared excesses around the warm dust sample is fully compatible with that of the control sample, suggesting the absence of a direct connection between warm and hot dust populations. This conclusion does not depend on the considered spectral type. No correlation was found either between the detection rate of near-infrared excesses and the stellar age, although there is a marginal trend for young stars ($\leq 30$~Myr) to show more frequent H-band excesses. After correcting the sensitivity of our observations for the fact that the hot dust could be arranged in a thin ring around its sublimation radius, and subsequently limiting our sample to the stars for which the corrected $1\sigma$ error bar is smaller than 0.5\%, we find tentative evidence at the $3\sigma$ level that the distribution of near-infrared excesses around stars showing any kind of outer dust reservoir (warm or cold) is statistically different from the distribution of near-infrared excesses around stars showing no outer dust reservoir, with larger near-infrared excesses around the dusty stars. This conclusion pertains mostly to early-type (A and F) stars, which make up most of our sensitivity-corrected sample, and only holds if the dust is arranged in a thin annulus close to its sublimation radius, a hypothesis that we can neither confirm nor refute based on our PIONIER data. A possible caveat to these conclusions is that some of the near-infrared excesses might be variable, as suggested by \citet{ertel16} and \citet{nunez17}, so that a non-detection does not necessarily mean that there is never any detectable excess around a particular star.
It must also be kept in mind that we can only probe correlations down to the sensitivity level of both the near-infrared interferometric observations and the mid- to far-infrared photometry used in this study, and that underlying correlations may exist at lower sensitivity levels. Although the present work brings to light a tentative, previously unknown correlation between hot and warm/cold dust populations, it does not settle the question of the origin of hot exozodiacal dust. Our current understanding is that at least one transport mechanism is at play to inject material into the innermost parts of planetary systems, and that the material additionally needs to be confined close to its sublimation radius by a trapping mechanism. The nature of these transport and trapping mechanisms is however still unclear, and will probably require new diagnostic tools to be properly constrained, although it is worth noting that some specific hot-dust systems in our sample look incompatible with P-R drag dust production. High-contrast interferometric observations in the thermal infrared (L, M, and N bands) would be a powerful way to derive useful new constraints on these dust populations. \begin{acknowledgements} The authors thank the French National Research Agency (ANR, contract ANR-2010 BLAN-0505-01, EXOZODI) for financial support. L.\,M.\ acknowledges the F.R.S.-FNRS for financial support through a FRIA PhD fellowship. G.\,M.\,K.\ is supported by the Royal Society as a Royal Society University Research Fellow. J.\,O.\ acknowledges support from ANID -- Millennium Science Initiative Program -- NCN19\_171, from the Universidad de Valpara\'iso, and from Fondecyt (grant 1180395). We thank the Belgian GTO on VISA for the generous allocation of observing time. This work made use of the Smithsonian/NASA Astrophysics Data System (ADS) and of the Centre de Donn\'ees astronomiques de Strasbourg (CDS). \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \label{Intr} The transverse force on a vortex in superfluids (neutral and charged) has been debated for many decades and has been a topic of reviews and books \cite{RMP,Kop,Magn,VolB}. In a continuous superfluid at $T=0$, which is identical to a perfect fluid in classical hydrodynamics, the balance of forces on a vortex is \begin{equation} \bm F_M + \bm F_L= m n \left[\left(\bm v_L-\bm v_s \right)\times \bm \kappa\right] = \bm F_{ext}, \ee{B} where the Magnus force $\bm F_M$ is proportional to the vortex velocity $\bm v_L$, the Lorentz force $\bm F_L$ is proportional to the superfluid velocity $\bm v_s ={\hbar \over m} \bm \nabla \varphi$ determined by the phase $\varphi$ of the order parameter wave function, and the external force $ \bm F_{ext}$ combines all other forces on the vortex, e.g., pinning and friction forces. Here $n$ is the density of particles with mass $m$, and $\bm \kappa$ is the vector parallel to the vortex axis with its modulus equal to the circulation quantum $\kappa=h/m$. The combined transverse force $\bm F_M + \bm F_L$ depends only on the relative velocity $\bm v_L-\bm v_s$, as required by Galilean invariance. In lattice models of superfluids the Galilean invariance is absent, and the value of the Magnus force has been under scrutiny. The best-known lattice model of a superfluid is the Josephson junction array. Vortex dynamics in the array was usually studied in the continuous approximation. These studies have not revealed any Magnus force normal to the vortex velocity \cite{Eck}. Moreover, there was experimental evidence of ballistic vortex motion in the Josephson junction array \cite{2}, which is possible only in the absence of the Magnus force. In the classical theory of the Josephson junction array the particle-hole symmetry was usually assumed, which forbids the Magnus force in the model (see \cite{PRB7} and references therein).
However, this symmetry is not exact, and there have been many theoretical works aiming to find a finite Magnus force, mostly invoking quantum effects. In superconductors the Magnus force determines the Hall effect, and the presence or absence of this force means the presence or absence of the Hall effect. Intensive investigations of Bose-condensed cold atoms have attracted interest in another lattice model of a superfluid: the Bose--Hubbard model \cite{FishH}. The periodic structure of potential wells for bosons, which leads to the Bose--Hubbard model in the tight-binding limit, is realized for cold-atom BEC in experiments with optical lattices \cite{Ued}. Recently Lindner {\em et al.} \cite{Auer} and Huber and Lindner \cite{Lind} calculated the Magnus force in the Bose--Hubbard model and revealed that close to the superfluid-insulator transition the force changes its sign, as happens in Fermi superfluids when the sign of the carrier charge changes. The present paper analyzes the transverse forces on the vortex in a lattice, which is approximated by a continuous model. The forces are determined from the momentum balance. The absence of Galilean invariance makes it necessary to analyze two momentum balances: for the true momentum and for the quasimomentum known from the Bloch band theory for particles in a periodic potential. This yields a general expression for the Magnus force in the absence of Galilean invariance, which is then used to calculate the Magnus force in the Bose--Hubbard model close to the superfluid-insulator transition.
\section{Vortex dynamics in the continuous approximation for the lattice superfluid} \label{CML} The continuous approximation for lattice superfluids quite generally leads to a phenomenological theory described by the Lagrangian \begin{equation} L=- \hbar n \dot \varphi - {\hbar^2 \tilde n \over 2m}(\bm \nabla \varphi )^2 -E_c(n), \label{contL} \end{equation} where $E_c(n)$ is the energy of a resting liquid, which depends only on $n$. For simplicity we consider the 2D problem, where $n$ is the particle number per unit area. The Hamiltonian (energy) for this Lagrangian is \begin{equation} H= {\partial L\over \partial \dot \varphi} \dot \varphi -L={\hbar^2 \tilde n \over 2m}(\bm \nabla \varphi )^2+E_c(n) . \label{contHh} \end{equation} Despite the similarity of the model to the hydrodynamics of the perfect fluid, there is an essential difference. The continuous approximation for the lattice model restores translational invariance but not Galilean invariance. The latter is absent since the effective density $\tilde n$, which characterizes the stiffness of the phase field, differs from the true particle density $n$. This difference is an attribute of any lattice model, and the effective density $\tilde n$ is much less than $n$ if the lattice nodes are weakly connected. Let us discuss the conservation laws, which follow from Noether's theorem. The gauge invariance provides the conservation law for charge (particle number): \begin{equation} {\partial \over \partial t}{\partial L\over \partial \dot \varphi} +\nabla_k\left({\partial L\over \partial \nabla_k \varphi} \right)=0. \ee{} This is the continuity equation (the first Hamilton equation) for the fluid: \begin{equation} m{\partial n \over \partial t}=-\bm \nabla \cdot {\bm j} , \ee{nt} where $n$ is the particle density and \begin{equation} \bm j=-{m\over \hbar}{\partial L\over \partial \bm\nabla \varphi} =\hbar \tilde n\bm \nabla \varphi \ee{Gcur} is the mass current.
The mass current, which differs by the factor $m/q$ from the charge current of particles with charge $q$, is at the same time the momentum density. The second Hamilton equation for the phase $\varphi$ canonically conjugate to $n$ is \begin{equation} \hbar {\partial \varphi \over \partial t}= -{\partial H\over \partial n }=-\mu- {\hbar^2\over 2m }{d\tilde n\over dn} (\bm \nabla \varphi)^2, \ee{nph} where $\mu =\partial E_c(n)/\partial n$ is the chemical potential of the liquid at rest. The translational invariance provides the conservation law \begin{eqnarray} {\partial g _k \over \partial t} +\nabla_l\Pi_{kl}=0, \eem{NT} for the momentum with the density (current) \begin{equation} \bm g=-{\partial L\over \partial \dot \varphi}\bm\nabla \varphi =\hbar n\bm \nabla \varphi. \ee{} Here the momentum-flux tensor is \begin{eqnarray} \Pi_{kl}={\partial L\over \partial \nabla_k\varphi} \nabla_l\varphi-L\delta_{kl}={\hbar^2\over m } \tilde n\nabla_k\varphi \nabla_l\varphi \nonumber \\ +\left[P +{\hbar^2\over 2m }\left({d \tilde n\over dn} n-\tilde n\right)(\bm \nabla \varphi)^2 \right] \delta_{kl}, \eem{MF} and the pressure $P$ is connected with the chemical potential $\mu$ by the $T=0$ thermodynamic Gibbs--Duhem relation $d P = n d\mu$. In a Galilean invariant system the current $\bm g$, which appears in the Noether conservation law following from the translational invariance, coincides with $\bm j$. But in our case with broken Galilean invariance ($\tilde n \neq n$) the currents $\bm g$ and $\bm j$ differ. The true mass current (true momentum density) is $\bm j$ but not $\bm g$.
This follows from the fact that the density $n$ and the current $\bm j$ in the continuity equation (\ref{nt}) are nothing else than averages of the relevant quantum mechanical operators $\hat n ={\hat \psi}^\dagger \ {\hat \psi}$ and \begin{equation} \hat {\bm j}=-{i\hbar \over 2}({\hat \psi}^\dagger \bm \nabla {\hat \psi} - \bm \nabla{\hat \psi}^\dagger {\hat \psi}) , \ee{qmCur} where ${\hat \psi}$ and ${\hat \psi}^\dagger$ are the annihilation and creation operators normalized to the density. The continuity equation (\ref{nt}) is universal and valid for {\em any} gauge-invariant system, including lattice superfluids, independently of what forces are applied to the system or how particles interact. So Noether's theorem for a translationally invariant but not Galilean invariant system does not provide the conservation law for the true momentum. In the next section we shall see that for particles in a periodic potential the current $\bm g$ coincides with the quasimomentum density. Although Noether's theorem does not lead to the conservation law for the true momentum, the true momentum conservation law nevertheless holds approximately, as can be checked using the Hamilton equations (\ref{nt}) and (\ref{nph}) and neglecting terms of higher than second order in the phase gradients: \begin{eqnarray} {\partial j _k \over \partial t} +\nabla_l \tilde\Pi_{kl}=0, \eem{jk} where the momentum-flux tensor is \begin{eqnarray} \tilde \Pi_{kl}={\hbar^2\over m } {d \tilde n\over dn}\tilde n\nabla_k\varphi \nabla_l\varphi +\tilde P \delta_{kl}, \eem{MFg} and the partial pressure $\tilde P $ is determined by the relation $d\tilde P =\tilde n d\mu$. The most reliable method to derive the equation of vortex motion is to consider the momentum balance. The momentum balance requires that any external force on a vortex is compensated by the momentum flux through a cylindrical surface surrounding the vortex line \cite{RMP,PRB7}.
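The quadratic-order check of \eq{jk} can be made explicit. A sketch of the calculation, keeping only terms up to second order in the phase gradients (density gradients induced by the flow are themselves of second order):

```latex
% Differentiate j_k = \hbar \tilde n(n) \nabla_k\varphi in time and
% substitute the Hamilton equations (nt) and (nph):
\begin{aligned}
{\partial j_k \over \partial t}
 &= \hbar {d\tilde n\over dn}{\partial n\over \partial t}\nabla_k\varphi
  + \hbar \tilde n \nabla_k {\partial \varphi\over \partial t} \\
 &= -{\hbar^2\over m}{d\tilde n\over dn}\nabla_k\varphi\,
     \nabla_l(\tilde n \nabla_l\varphi)
  - \tilde n \nabla_k \mu
  - {\hbar^2\over 2m}\tilde n\,
     \nabla_k\!\left[{d\tilde n\over dn}(\bm\nabla\varphi)^2\right].
\end{aligned}
```

Using $d\tilde P=\tilde n\, d\mu$ and treating $d\tilde n/dn$ as constant to the required order, the first and last terms combine into $-({\hbar^2/m})({d\tilde n/dn})\nabla_l(\tilde n\nabla_k\varphi \nabla_l\varphi)$, which reproduces \eq{jk} with the flux tensor \eq{MFg}.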
The problem with superfluids on lattices is that there is a momentum exchange between the superfluid and the system that provides the periodic lattice potential. The continuous approximation, which restores translational invariance, in fact takes this momentum exchange into account, since translational invariance leads to the conservation law for the Noether momentum (quasimomentum) but not for the true momentum of particles. We argue that the Lorentz and the Magnus force must be derived from the balance of different momenta: the quasimomentum for the former and the true momentum for the latter. Deriving the Lorentz force, one can assume that the vortex is at rest in the laboratory coordinate frame connected with the lattice. Solving \eq{nph} for the time-independent phase $\varphi$ one obtains the correction to the chemical potential quadratic in $\bm \nabla \varphi$ (Bernoulli's effect): \begin{equation} \mu' =- {\hbar^2\over 2m }{d\tilde n\over dn} (\bm \nabla \varphi)^2. \ee{nph0} Then the momentum-flux tensor (\ref{MF}) becomes \begin{eqnarray} \Pi_{kl}={\hbar^2\over m } \tilde n\nabla_k\varphi \nabla_l\varphi +\left[P_0-{\hbar^2\over 2 m }\tilde n(\bm\nabla \varphi)^2 \right] \delta_{kl}, \eem{MF1} where $P_0$ is a constant pressure in the absence of any velocity field. The components of the Lorentz force are given by the integral over a cylinder around the vortex: $F_{Li} = \oint \Pi_{il}\, dS_l$. The phase gradient $\bm\nabla \varphi= \bm \nabla\varphi_v + \bm \nabla\varphi_t $ consists of the gradient $\bm \nabla\varphi_v =[\hat z \times \bm r]/ r^2$ induced by the vortex line and the gradient $\bm \nabla\varphi_t =\bm j /\hbar \tilde n$ produced by the transport current. The force arises from the cross terms $\bm \nabla\varphi_v \cdot \bm \nabla\varphi_t $ in the momentum-flux tensor. Their integration yields \begin{equation} \bm F_L=- [\bm j \times \bm \kappa]=- m\tilde n[\bm v_s \times \bm \kappa].
\ee{MagnL} The Lorentz force follows from the quasimomentum balance because it is a momentum exchange between the transport velocity field and the vortex. But any variation of the transport velocity must be accompanied by a momentum transfer to or from the lattice, as follows, e.g., from the Bloch band theory for particles in a periodic potential (see the next section). Deriving the Magnus force proportional to the vortex velocity, we can consider the case when the superfluid does not move with respect to the lattice. Then it is natural to expect that there is no momentum exchange between the superfluid and the lattice. Therefore one may conclude that the derivation of the Magnus force requires the balance of the true momentum of particles. It is more convenient to consider this balance in the coordinate frame moving with the vortex, since only in this frame is the state stationary, at least on average. The law of the Galilean transformation (see the next section) requires that the expressions for the energy, \eq{contHh}, and the momentum-flux tensor, \eq{MFg}, remain valid in the coordinate frame moving with the velocity $\bm w$ if the phase gradient $\bm \nabla \varphi$ is replaced by the phase gradient $\bm \nabla \varphi'=\bm \nabla \varphi - m \bm w /\hbar$ in the moving frame. Following the same steps as in the derivation of the Lorentz force in the laboratory frame, one obtains that the momentum transferred to the liquid, which is the Magnus force in our case, is proportional to the phase gradient $\bm \nabla \varphi'_t$ connected with the transport supercurrent in the moving frame. Since in the laboratory frame the transport supercurrent is absent ($\bm \nabla \varphi_t=0$) and $\bm w=\bm v_L$, calculating the Magnus force components $F_{Mi} = \oint \tilde \Pi_{il}\, dS_l$ one obtains the Magnus force \begin{equation} \bm F_M={d\tilde n\over dn} \tilde n m[\bm v_L \times \bm \kappa].
\ee{Mf} The force appears due to convection of the vortex-related momentum $\hbar \tilde n \bm \nabla \varphi_v$ into the area of the momentum balance by the supercurrent. Since the circular velocity field ${\hbar\over m}\bm\nabla \varphi_v$ is fixed, an arrival of a particle into the balance area may change the vortex-related momentum only via a variation of $\tilde n$ and does not require an accompanying momentum transfer to the lattice. This is another argument for why the Magnus force is determined from the balance of the true momentum and is proportional to $d\tilde n/ dn$. In the Josephson junction array the current between two nodes of the lattice is determined by the Josephson energy $E_J\cos(\varphi_1-\varphi_2)$, where $\varphi_1$ and $\varphi_2$ are the phases at the two nodes. In the continuous limit this yields $ \tilde n =m E_J/\hbar^2$. The particle-hole symmetry requires that $E_J$ and $ \tilde n $ do not depend on the average density $n$, and the Magnus force vanishes in agreement with the symmetry of this model. So our approach yields correct values of the Magnus force at least in the two limits where the force is known exactly: the Galilean invariant liquid and the Josephson junction array with particle-hole symmetry. Without external forces $\bm F_L +\bm F_M=0$, and the vortex moves with a velocity differing from the superfluid velocity $\bm v_s$ by the factor $dn/d\tilde n$. This shows that Helmholtz's theorem (the vortex moves with the fluid velocity) is not valid without Galilean invariance. \section{Vortex dynamics from the Bloch band theory} \label{Bloch} The meaning of our approach becomes more transparent if one applies it to particles in a periodic potential $U(\bm r)=U(\bm r+\bm a)$. Here $\bm a$ can be any of the translation vectors of the periodic structure. The density of the single-particle energy is \begin{equation} E_s= {\hbar^2 \over 2m}|\bm \nabla \psi|^2+U(\bm r) | \psi|^2.
\ee{} The eigenstates are described by Bloch functions: \begin{equation} \psi(\bm r,t) =u_n(\bm r, \bm k)e^{i \bm k\cdot \bm r-iE(\bm k) t/\hbar}, \ee{BF} where $u_n(\bm r, \bm k)$ is a periodic function. The quasimomentum $\hbar \bm k$ differs from the true momentum of the quantum state. The latter can be calculated by averaging, i.e., by integrating the quantum mechanical expression (\ref{qmCur}) for the momentum operator over the crystal unit cell: \begin{equation} \bm p= -i\hbar \int \psi(\bm r)^*\bm \nabla \psi(\bm r)d\bm r = \hbar \bm k -i\hbar \int u(\bm r)^*\bm \nabla u(\bm r)d\bm r . \ee{} Here, in contrast to \eq{qmCur}, the wave function is normalized to the unit cell: $\int |\psi|^2 d\bm r=1$. Calculating the band energy $E(\bm k)$ in the $\bm k \cdot \bm p$ approximation for small $k$, i.e., at the band bottom (the energy minimum at $k=0$), one obtains \begin{equation} E(\bm k)={\hbar^2 k^2\over 2m^*},~~\bm p =m \bm v_g,~~\bm v_g= {d^2 E(\bm k )\over \hbar d\bm k^2 } \bm k = {\hbar \bm k\over m^*}, \ee{} where $\bm v_g $ is the group velocity and $m^*$ is the effective mass. Suppose that the particles are bosons, which condense in a single Bloch state with density $n$. The wave vector $\bm k =\bm \nabla \varphi$ is a gradient of the phase $\varphi$. Then the true momentum density (mass current) $\bm j=n \bm p$ coincides with that given by \eq{Gcur} if $\tilde n =n m/m^*$. On the other hand, the current $\bm g = n\hbar \bm k$ is the quasimomentum density. So the Bloch band theory for bosons clearly connects the currents $\bm j$ and $\bm g$ derived in Sec.~\ref{CML} from the phenomenological Lagrangian with the densities of the true momentum and the quasimomentum, respectively. It is well known from solid state physics that an external force on a particle in an energy band determines the time variation of the quasimomentum but not of the true momentum: \begin{equation} \hbar {d\bm k\over dt}= m^*{d \bm v_g\over dt} = \bm f.
\ee{New} In the absence of Umklapp processes the total quasimomentum is also a conserved quantity, and the conservation law for the quasimomentum of the Bose-condensate in a single Bloch state is given by \eq{NT}. Only the part $d\bm p/dt$ of the whole momentum variation $\hbar d\bm k/dt$ brought to the system by the external force is transferred to the particles. The rest is transferred to the lattice supporting the periodic potential. It is worth noticing that in the Bloch theory for particles in a periodic potential the true momentum differs from the quasimomentum by the constant factor $m/m^*$. Therefore the true momentum conservation law is exact and directly follows from the quasimomentum (Noether's) conservation law after multiplying the latter by $m/m^*$. But in the general case considered in the previous section only the quasimomentum conservation law was exact. Under the Galilean transformation to the coordinate frame moving with the velocity $\bm w$ ($\bm r=\bm r'+\bm w t$, $t=t'$) the Hamiltonian and the Schr\"odinger equation retain their form, but the wave function must transform as $ \psi=\psi'e^{im\bm w\cdot \bm r' /\hbar +im w^2 t/2\hbar}$. Correspondingly, in the moving coordinate frame the Bloch function (\ref{BF}) becomes \begin{equation} \psi'(\bm r',t') =u_n(\bm r'+\bm w t', \bm k)e^{i \bm k'\cdot \bm r'-iE_f(\bm k)t'/\hbar}, \ee{BFt} where the wave vector $\bm k'$ and the energy $E_f (\bm k)$ are connected with those in the laboratory frame by the relations \begin{eqnarray} \bm k'=\bm k-{m\over \hbar}\bm w,~~E_f=E(\bm k) -\hbar \bm k \cdot \bm w +{mw^2\over 2} \nonumber \\ \approx { (\hbar \bm k-m^*\bm w)^2\over 2m^*}+{(m-m^*)w^2\over 2} . \eem{} In the moving frame the Schr\"odinger equation contains a time-dependent periodic potential, and its solution is not an eigenstate of the quantum mechanical energy operator $i\hbar \partial /\partial t$.
\Eq{BFt} is a solution of this equation following from the Floquet theorem, the energy $E_f$ being the Floquet quasienergy. The true energy of the state is the average value (expectation value) of the energy operator, independently of whether the state is an eigenstate of the energy operator or not. It is different from the Floquet quasienergy and is given by \begin{eqnarray} E' = \int \psi'^* i\hbar{\partial \psi' \over \partial t'}d\bm r'= E_f + \int u_n ^* i\hbar(\bm w \cdot \bm \nabla) u_n\,d\bm r=E(\bm k) \nonumber \\ -\bm p \cdot \bm w +{mw^2\over 2}\approx {(\hbar \bm k-m\bm w)^2\over 2m^*}+{mw^2\over 2}\left(1-{m\over m^*}\right). \nonumber \\ \eem{enM} The Galilean transformation demonstrates the difference between the quasimomentum and the quasienergy on one side, and the true momentum and the true energy on the other. If one ignores the difference and treats the quasiparticle as a real particle with the mass $m^*$, the particle current $\bm j'/m=n (\hbar \bm k/m^*- \bm w) $ in the ground state in the moving frame vanishes. But at the minimum of the true energy given by \eq{enM} the particle current $\bm j'/m$ does not vanish. This is the effect of dragging of particles by a moving periodic potential, which is especially pronounced in the limit of infinite effective mass, when the particles are totally trapped by the periodic potential and cannot move relative to it. The effect was observed for a potential produced by a running acoustic wave, which drags electrons (acoustoelectric effect) \cite{AEE} or excitons \cite{AcusEx}. The whole analysis of this section addressed only single-particle states and the ideal Bose--Einstein condensation in a single-particle state. But a real vortex with a well-defined core is impossible without interaction. However, adding a weak particle-particle interaction one can develop a Gross--Pitaevskii theory similar to the theory for uniform, translationally invariant liquids.
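The completing-the-square forms of the Floquet quasienergy $E_f$ and of the true energy $E'$ in the moving frame are exact algebraic identities, which can be verified numerically. In the sketch below the values of $m$, $m^*$, $k$, and $w$ are arbitrary test values (in units with $\hbar=1$):

```python
hbar = 1.0
m, mstar = 1.3, 2.7     # bare and effective mass (arbitrary test values)
k, w = 0.9, 0.4         # wave number and frame velocity

E = (hbar * k) ** 2 / (2 * mstar)   # band energy near the band bottom
p = m * hbar * k / mstar            # true momentum p = m v_g

# Floquet quasienergy: E_f = E(k) - hbar k w + m w^2 / 2
Ef = E - hbar * k * w + m * w**2 / 2
Ef_square = (hbar * k - mstar * w) ** 2 / (2 * mstar) + (m - mstar) * w**2 / 2
assert abs(Ef - Ef_square) < 1e-12

# True energy: E' = E(k) - p w + m w^2 / 2
Ep = E - p * w + m * w**2 / 2
Ep_square = (hbar * k - m * w) ** 2 / (2 * mstar) + (m * w**2 / 2) * (1 - m / mstar)
assert abs(Ep - Ep_square) < 1e-12
print("both identities hold")
```

For $m \neq m^*$ the two completed squares are minimized at different $k$, which is the algebraic content of the dragging effect discussed above.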
This approach is valid as long as the interaction is not too strong and the vortex-core radius essentially exceeds the lattice period. \section{Vortex dynamics in the Bose--Hubbard model} The Hamiltonian of the Bose--Hubbard model \cite{FishH} for a lattice with distance $a$ between nodes is \begin{eqnarray} {\cal H}= -J\sum _{i,j} \hat b_i^\dagger \hat b_j +{U\over 2}\sum_i \hat N_i(\hat N_i-1)- \mu \sum_i \hat N_i. \eem{BH} Here $\mu$ is the chemical potential, the operators $\hat b_i$ and $\hat b_i^\dagger$ are the annihilation and creation operators of a boson at the $i$th lattice node, and $\hat N_i=\hat b_i^\dagger\hat b_i$ is the particle number operator at the same node. The first sum is over neighboring lattice nodes $i$ and $j$. In the superfluid phase with large numbers of particles $N_i$ all operator fields can be replaced by classical fields in the spirit of the Bogolyubov theory: \begin{equation} \hat b_i~\to~\sqrt{N_i}e^{i\varphi_i},~~~\hat b_i^\dagger~\to~\sqrt{N_i}e^{-i\varphi_i}, \ee{qaN} where $\varphi_i$ is the phase at the $i$th node. After the transition to the continuous approximation one obtains the Hamiltonian (\ref{contHh}), where\footnote{We assume that there is the same number of particles at all nodes and write $N_i$ without the subscript $i$.} \begin{equation} n={N\over a^2},~~\tilde n= {mz_0Ja^2 \over \hbar^2}n,~~E_c(n)={Ua^2\over 2}n^2- \mu n. \ee{} Here $z_0$ is the number of nearest neighbors, equal to 4 in the square lattice. This is the tight-binding limit of the Bose condensate of particles in a Bloch state (see the previous section) with the effective mass \begin{equation} m^*= {\hbar^2\over z_0Ja^2}. \ee{mass} \begin{figure} \includegraphics[width=.5\textwidth]{f12-00.eps} \caption[]{ The phase diagram of the Bose--Hubbard model. The Mott insulator phase occupies lobes corresponding to fixed integer numbers $N$ of bosons.
The shaded beaks of the superfluid phase, which penetrate between the insulator lobes, are analyzed in the text. The dashed line separates the region with the inverse Magnus force from the rest of the superfluid phase. The line is schematic, since it was really calculated only in the limit $J\to 0$, where it is horizontal. The region of the inverse Magnus force exists under any lobe but is shown only for the beak between the $N=1$ and $N=2$ lobes. } \label{f12-0} \end{figure} When the energy $J$ of the internode hopping decreases, a phase transition from superfluid to Mott insulator must occur \cite{FishH}. In the limit $z_0J/U \to 0$, when the hopping term $\propto J$ can be ignored, the eigenstates are given by Fock states $|\Psi_N\rangle= |N\rangle$ with a fixed number $N$ of particles at any node. At growing $J$ the transition line can be found in the mean-field approximation \cite{Ued}. One takes the hopping term into account by introducing the mean field equal to the average value of the annihilation operator (and its complex conjugate, the creation operator): \begin{equation} \langle \hat b_i\rangle=\psi_i =|\psi| e^{i\varphi_i},~~~ \langle \hat b_i^\dagger\rangle=\psi_i^* =|\psi| e^{-i\varphi_i}. \ee{qa} It is assumed that only the phase but not the modulus of the order parameter $\psi$ varies from node to node. In general $|\psi|^2 $ is not equal to $N$, as \eq{qaN} would assume, and must be determined from the condition of self-consistency (see below). Introducing the mean field one reduces the problem to the single-node problem with the Hamiltonian \begin{eqnarray} {\cal H}_s= -zJ(\hat b^\dagger \psi +\psi^* \hat b)+{U\over 2} \hat N(\hat N-1)- \mu \hat N. \eem{BHs} Here \begin{equation} z=\sum_j e^{i(\varphi_j-\varphi_i)} \ee{} reduces to the number $z_0$ of nearest neighbors in the uniform state with a constant phase at all nodes. The multi-node wave function is a product of the single-node wave functions.
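The self-consistency condition mentioned above leads, for the two-state treatment of the beaks developed below, to a closed-form solution for $|\psi|^2$. The following numerical sanity check uses arbitrary test values of the parameters, with the detuning $\mu'=\mu-UN$ kept small near the degeneracy point:

```python
import math

# arbitrary test parameters in the beak between the N and N+1 lobes
N = 1                 # lower lobe filling
z, J = 4.0, 0.1       # coordination number and hopping energy
mu_p = 0.05 * z * J   # mu' = mu - U N, small detuning from the degeneracy point

# non-trivial solution of the self-consistency equation
psi2 = (N + 1) / 4 - mu_p**2 / (4 * z**2 * J**2 * (N + 1))
assert psi2 > 0  # we are inside the superfluid beak

# <b>/psi recomputed from the two-state ground state must equal 1
R = math.sqrt(mu_p**2 / 4 + z**2 * J**2 * (N + 1) * psi2)
ratio = z * J * (N + 1) / (2 * R)
print("psi^2 =", psi2, " <b>/psi =", ratio)
```

Plugging the closed-form $|\psi|^2$ back into the mean-field condition reproduces $\langle \hat b\rangle = \psi$ to machine precision, confirming the algebra of the two-state ansatz.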
Calculating the energy of the original Hamiltonian (\ref{BH}) for this wave function and minimizing it with respect to $\psi$, one obtains the self-consistency equation, which determines $\psi$. Following this approach \cite{Ued} one obtains the phase diagram shown in Fig.~\ref{f12-0}. The Mott-insulator phases with fixed numbers $N$ of particles per node occupy the interiors of the lobes at small $z_0J/U$. We address vortex dynamics close to the phase transition at minimal values of $J$, i.e., at the beaks of the superfluid phase between the lobes, which are shaded in Fig.~\ref{f12-0}. Here the mean-field approximation is simplified by the fact that only the two states with $N$ and $N+1$ particles come into play in the beak between the lobes $N$ and $N+1$. This is because at $\mu=NU$ these two states have the same energy, whereas all other states are separated by a gap on the order of the high energy $U$. So for a beak between the lobes corresponding to Mott insulators with the number of bosons per node $N$ and $N+1$ we look for a solution in the form of a superposition of two Fock states: \begin{equation} |\Psi_N\rangle= f_N|N\rangle +f_{N+1}|N+1\rangle. \ee{} This wave function is an eigenfunction of the single-node Hamiltonian (\ref{BHs}) if \begin{eqnarray} f_{N,N+1}=\sqrt{ \sqrt{{\mu'^2\over 4}+z^2J^2(N+1) |\psi|^2}\mp {\mu' \over 2}\over {2 \sqrt{{\mu'^2\over 4}+z^2J^2(N+1) |\psi|^2}}}, \eem{} where the upper sign corresponds to $N$ and the lower one to $N+1$. The energy of the ground state is \begin{eqnarray} \epsilon_N =-\mu' \left(N+{1\over 2}\right) - \sqrt{{\mu'^2\over 4}+z^2J^2(N+1) |\psi|^2}, \eem{} where $\mu'=\mu -UN$. The average number of particles per node is a function of $\mu'$: \begin{eqnarray} \langle \hat N \rangle=N+{1\over 2} + \frac{\mu'}{ 2 \sqrt{\mu'^2+4z^2J^2(N+1) |\psi|^2} }.
\eem{} The self-consistency equation follows either from the minimization of the total energy with respect to $\psi$ or from the condition that $\psi$ is the average value of the operator $\hat b$: \begin{eqnarray} \psi=\langle \hat b \rangle= \frac{zJ(N+1)}{2 \sqrt{{\mu'^2\over 4}+z^2J^2(N+1) |\psi|^2} }\psi . \eem{} A non-trivial (i.e., non-zero) solution of this equation is \begin{equation} |\psi|^2 ={N+1\over 4}- {\mu'^2\over 4z^2J^2(N+1)}. \ee{} The eigenvalue $\epsilon_N$ of the Hamiltonian (\ref{BHs}) determines the Gibbs thermodynamic potential $G_N= zJ|\psi|^2 +\epsilon_N$ of the grand canonical ensemble per node. It is useful to go from the grand canonical ensemble, with the Gibbs potential being a function of $\mu$, to the canonical ensemble, where the energy density is a function of the particle number density $n$. Then the energy per node is \begin{eqnarray} E_N=G_N+\mu N={UN^2\over 2}+UNN_e \nonumber \\ +zJ\left(|\psi|^2 - 2\sqrt{N+1}\sqrt{\frac{1}{ 4 }-N_e^2} |\psi|\right), \eem{En} where $N_e= \langle \hat N \rangle -N-{1\over 2}$. The energy has a minimum at \begin{equation} |\psi|^2=(N+1)\left(\frac{1}{ 4 }-N_e^2\right). \ee{} As in any second-order phase transition, $\psi$ vanishes at the phase transition lines, where $N_e=\pm {1\over 2}$ and the number of particles reaches $N$ at the lower border and $N+1$ at the upper one. But in contrast to the Landau--Lifshitz theory of second-order transitions, there is no analytic expansion in $\psi$ near the critical point because of the term linear in $|\psi|$. For the transition to the continuous model, let us consider the effect of slow phase variation from node to node. Assuming that $\psi_i =|\psi|e^{i \bm k \cdot \bm r_i}$, where $\bm r_i$ is the position vector of the $i$th node, one obtains for the square lattice with the number $z_0=4$ of nearest neighbors: \begin{equation} z=2\cos(k_x a)+2\cos(k_y a) \approx 4- k^2 a^2.
\ee{} Since the wave vector $\bm k$ corresponds to the gradient operator $\bm \nabla$ in the configurational space, one obtains in the continuum limit for small $\bm k$ the Hamiltonian (\ref{contHh}) with \begin{eqnarray} \tilde n ={2m\over \hbar^2} J (N+1)\left(\frac{1}{ 4 }-n_e^2a^4\right),~~E_c(n)={2\hbar ^2 \tilde n\over m}, \eem{} where $n_e=N_e/a^2=n-\left(N+{1\over 2}\right)/a^2$ is the effective density, and constant terms in the energy were ignored. This allows one to find the density-dependent factor in the expression (\ref{Mf}) for the Magnus force: \begin{equation} {d\tilde n\over dn} \tilde n=-{8m^2\over \hbar^4} J^2a^4 (N+1)^2n_e\left(\frac{1}{ 4 }-n_e^2a^4\right). \ee{MA} A remarkable feature of the Magnus force in the beaks of the superfluid phase is that its sign can be opposite to that dictated by the sign of the velocity circulation around the vortex. This happens in the upper halves of the beaks, as shown in Fig.~\ref{f12-0}. The regions of the inverse Magnus force border every insulator lobe from below, where $n_e$ is positive. Since at the upper borders of the lobes $n_e$ is negative, the line $n_e=0$, where the Magnus force changes its sign, must end somewhere at the border of the lobe. In Fig.~\ref{f12-0} it is shown by a dashed line for the beak between the $N=1$ and $N=2$ lobes. \section{Conclusions and discussion} We derived the transverse (Magnus and Lorentz) forces on the vortex from momentum balances in the continuous approximation for lattice models of superfluids. The two forces are obtained from two different conservation laws, one for the true momentum of particles in the lattice (Magnus force), the other for the quasimomentum (Lorentz force) known from the Bloch band theory. The calculated Magnus force vanishes for the Josephson junction array, where particle-hole symmetry forbids any Magnus force. The theory was applied to study vortex dynamics in the Bose-Hubbard model.
In some areas of the phase diagram close to the superfluid--Mott insulator transition the calculated Magnus force has an inverse sign with respect to the sign dictated by the velocity circulation around the vortex, as was already revealed earlier \cite{Auer,Lind}. Our approach was based on (i) the continuous approximation for the lattice model and on (ii) the assumption that there is no momentum exchange between the liquid and the lattice if the superfluid is at rest with respect to the lattice. One cannot take the validity of these two assumptions for granted. Among the most important effects beyond the continuous approximation is intrinsic pinning, which impedes free motion of vortices in the lattice. Therefore one can use our theory if the forces on the vortex are not too weak: they must be higher than the depinning threshold. This may be in conflict with another restriction on the theory, namely that the superfluid velocities must be much lower than their critical value. This issue needs further analysis. In any case, the theory provides correct results in the two opposite limits: (i) the Galilean invariant liquid with the maximum Magnus force, and (ii) the Josephson junction array with particle-hole symmetry and zero Magnus force. Therefore, despite possible inaccuracy of the assumptions made in its derivation, the theory can serve at least as a reasonable interpolation between these two extreme cases. The Magnus force leads to the Hall effect if particles have a charge $q$. The electric field is determined by the vortex velocity: ${\cal \bm E}={1\over c} [\bm B\times \bm v_L]$. The value of the Hall conductivity $\sigma_H= j_q /{\cal E}$, where $j_q ={q\over m}j$ is the charge current, depends on the amplitude of the Magnus force.
According to our analysis within the Bloch band theory (Sec.~\ref{Bloch}), the Hall conductivity $\sigma_H =(m/m^*)^2qnc /B$ is less by the factor $(m/m^*)^2$ than the Hall conductivity $qnc /B$ known for normal electrons in solids and derived from the same Bloch band theory as used by us. There is no conflict between these two results. In the normal electron liquid the magnetic force lines (counterparts of our vortex lines) move with the same velocity as charges, i.e., with the group velocity $\bm v_g=\hbar \bm k/m^*$ in the Bloch band, in accordance with Helmholtz's theorem of classical hydrodynamics. In fact, it is the only relevant velocity, since there is no coherent phase, which determines the superfluid velocity $\bm v_s={\hbar \over m}\bm \nabla \varphi $. In the superfluid case Helmholtz's theorem is not valid in general, and the velocity $\bm v_L$ is the velocity of a phase singularity, which is smaller by the factor $m^*/m$ than the velocity $\bm v_g$. The uniform magnetic field is crucial for the dynamics of normal electrons, making them move along cycloidal trajectories. In the dynamics of superconducting vortices the nonuniform magnetic field is localized in fluxons and is commonly neglected, as being weak compared to the effect of the phase gradient around the phase singularity \cite{Kop}. In the light of this connection between the Magnus force and the Hall conductivity, let us compare our theory with that of Lindner {\em et al.} \cite{Auer} and Huber and Lindner \cite{Lind}, who have already noticed that in the Bose--Hubbard model for charged particles the Hall conductivity changes its sign together with the sign of $n_e$.\footnote{Note that Huber and Lindner \cite{Lind} used the name ``Magnus force'' for the force proportional to the superfluid velocity $\bm v_s$ but not to the vortex velocity $\bm v_L$. This disagrees with the nomenclature usually used in the theory of superconductivity.
According to this nomenclature, used in our paper, the force $\propto \bm v_s$ is the Lorentz force and the force $\propto \bm v_L$ is the Magnus force (see Sec.~\ref{Intr}).} However, in their theory the Hall conductivity $\sigma_H$ remains constant at the line $n_e=0$. So the change of the $\sigma_H$ sign is accompanied by a jump of $\sigma_H$, whereas our analysis shows that $\sigma_H $, which is proportional to the Magnus force amplitude, must be continuous at $n_e=0$ [see \eq{MA}]. Moreover, our theory predicts a Hall conductivity which differs from that in Refs.~\onlinecite{Auer,Lind} by the factor $(m/m^*)^2$ proportional to $J^2$ [see \eq{mass}]. The factor can be very small in the tight-binding limit. This is the same factor that differentiates our Hall conductivity from that of the normal liquid (see the previous paragraph). The Hall conductivity of Refs.~\onlinecite{Auer,Lind} far from the superfluid--insulator transition would be obtained if one considered the effective mass as the true mass of particles in a superfluid and the quasimomentum as the true momentum. In fact this means that in the theory of Refs.~\onlinecite{Auer,Lind} the broken Galilean invariance does not suppress the Magnus force. A possible source of the disagreement is that the theory of these papers used topological arguments without directly addressing the momentum balance. In the past there were other attempts to derive the Magnus force in lattice superfluids from topology. In particular, the topological origin of the first term $-\hbar n \dot \varphi$ in the Lagrangian (\ref{contL}), which is called the Wess--Zumino term, was widely discussed in the literature \cite{VolB}. The arguments were about whether the total liquid density $n$ must be replaced by some other density. It is evident that adding any constant $C$ to the density $n$ in the Wess--Zumino term does not affect the Hamilton equations (\ref{nt}) and (\ref{nph}).
However, the role of the Wess--Zumino term changes after the transition from the continuous model in terms of fields to the reduced description in terms of the vortex coordinates $\bm r_L(x_L,y_L)$. This leads to substitution of the phase field $\varphi_v(\bm r-\bm r_L)$ for a vortex moving with the velocity $\bm v_L=d\bm r_L/dt$ into the Wess--Zumino term and its integration over the whole space. Bearing in mind that $\dot \varphi_v=-(\bm v_L \cdot \bm \nabla )\varphi_v$, the Wess--Zumino term becomes \begin{eqnarray} L_{WZ} =-\hbar (n+C) \bm v_L \cdot [\hat z\times \bm r_L].~~~~ \eem{} Varying the total Lagrangian of the vortex with respect to $\bm r_L(t)$, one obtains the equation of vortex motion with the effective Magnus force $\propto (n+C)$. So the constant $C$ does matter for the value of the Magnus force. It was argued that the contribution $\propto C$ is of topological origin and can be found from a topological analysis \cite{VolB}. We think that there is no general principle that dictates the constant $C$ in the Wess--Zumino term. It is not the a priori undefined Wess--Zumino term that determines whether there is a transverse force and what it is, but vice versa: one must derive the transverse force from the dynamical equations, and only after this does one know which Wess--Zumino term should appear in the vortex Lagrangian. In a Galilean and translationally invariant liquid one obtains from the momentum-conservation law that the amplitude of the Magnus force is proportional to the total density. Then only the latter enters the Wess--Zumino term and $C=0$. On the other hand, if the Magnus force vanishes, then the Wess--Zumino term vanishes also ($C=-n$). The suppression of the Magnus force and the Hall effect because of broken Galilean invariance in periodic potentials is in some sense similar to the suppression of the Magnus force by the Kopnin--Kravtsov force when Galilean invariance is broken by impurities \cite{Kop}.
The existence of the Kopnin--Kravtsov force was also rejected in the past on the basis of a topological analysis connecting the Magnus force with the Berry phase.\cite{AT} Later it was realized that although this connection definitely exists and is very important, a proper calculation of the Berry phase itself in fact requires knowledge of the Magnus force, which one can obtain only after the dynamical analysis based on the momentum balance. This was demonstrated with the example of the Iordanskii force (the transverse force on the vortex produced by normal quasiparticles) \cite{Th,Magn}, which was also rejected in the original Berry-phase analysis.\cite{AT} Despite the analogy between suppression of the Magnus force by a periodic potential discussed in the present paper and suppression of the Magnus force by a random potential from impurities in the Kopnin--Kravtsov theory, one should not ignore an important difference between the two cases. The Kopnin--Kravtsov force originates from bound states in vortex cores in Fermi superfluids. In the cases considered in the present paper there were no core bound states, since the Bose--Hubbard model is for Bose superfluids, where vortices have no bound states, whereas in the Josephson junction array, which consists of islands of the Fermi superfluid, vortices have no singular cores. This provides grounds for thinking that the existence of core bound states is not critical for suppression of the Magnus force. Therefore, although our analysis addressed ideal, strictly periodic lattices, its conclusion about suppression of the Magnus force can also be relevant both for Fermi and Bose superfluids in random potentials (e.g., superfluids in porous media or on disordered substrates), independently of whether core bound states exist or not. \begin{acknowledgments} I thank Ehud Altman, Assa Auerbach, and Netanel Lindner for interesting discussions.
The work was supported by the grant of the Israel Academy of Sciences and Humanities and by the FP7 program Microkelvin of the European Union. \end{acknowledgments}
\section{Introduction} Solutions of integrable hydrodynamic chains and 2+1 quasilinear systems can be found by the method of hydrodynamic reductions (see, for instance, \textbf{\cite{Fer+Kar}}, \textbf{\cite{Gib+Tsar}}). Suppose some hydrodynamic chain is given. Then one can seek hydrodynamic reductions for this hydrodynamic chain by the aforementioned method of hydrodynamic reductions or by the $\bar{\partial}$ approach (see \textbf{\cite{Bogdan}}), for instance. Thus, hydrodynamic reductions of a dispersionless limit of the 2+1 Harry Dym equation can be found directly by the Hamiltonian approach applied to the Kupershmidt hydrodynamic chain (see \textbf{\cite{Maks+Kuper}} and \textbf{\cite{Maks+Hamch}}) or by the $\bar{\partial}$ approach (see \textbf{\cite{Wu}}). However, in this paper we concentrate on the re-calculation of hydrodynamic reductions of hydrodynamic chains related by Miura type and reciprocal transformations. This means that we use the already known hydrodynamic reductions of the Benney hydrodynamic chain (see \textbf{\cite{Bogdan}}, \textbf{\cite{Krich}}, \textbf{\cite{Zakh}}) for the reconstruction of hydrodynamic reductions of a dispersionless limit of the 2+1 Harry Dym equation. Thus, this paper is devoted to the construction of hydrodynamic reductions common for 2+1 quasilinear systems (dKP, dmKP, 2+1 Harry Dym) related by the Miura type and reciprocal transformations (see \textbf{\cite{Chang}}, \textbf{\cite{Chang+Tu}}, \textbf{\cite{Chen+Tu}}, \textbf{\cite{Rogers}}, \textbf{\cite{Shaw+Tu}}). Let us recall (see \textbf{\cite{Gibbons}}) that the Benney hydrodynamic chain (see \textbf{\cite{Benney}}) \begin{equation} A_{t}^{k}=A_{x}^{k+1}+kA^{k-1}A_{x}^{0}\text{, \ \ \ \ \ \ \ }k=0,1,2,...
\label{1} \end{equation} satisfies the Gibbons equation \begin{equation} \lambda _{t}-\mu \lambda _{x}=\frac{\partial \lambda }{\partial \mu }\left[ \mu _{t}-\partial _{x}\left( \frac{\mu ^{2}}{2}+A^{0}\right) \right] , \label{2} \end{equation} where the equation of the Riemann mapping is given by the asymptotic series \begin{equation} \lambda =\mu +\frac{A^{0}}{\mu }+\frac{A^{1}}{\mu ^{2}}+\frac{A^{2}}{\mu ^{3}}+... \label{3} \end{equation} The inverse asymptotic series \begin{equation*} \mu =\lambda -\frac{\mathbf{H}_{0}}{\lambda }-\frac{\mathbf{H}_{1}}{\lambda ^{2}}-\frac{\mathbf{H}_{2}}{\lambda ^{3}}-... \end{equation*} yields an infinite series of polynomial conservation law densities $\mathbf{H}_{k}(A^{0},A^{1},...,A^{k})$, where the generating function of conservation laws is given by \begin{equation} \mu _{t}=\partial _{x}\left( \frac{\mu ^{2}}{2}+A^{0}\right) . \label{zak} \end{equation} Since the transformation $\mathbf{H}_{k}=\mathbf{H}_{k}(A^{0},A^{1},...,A^{k})$ is invertible, the Benney hydrodynamic chain can be written in the conservative form \begin{equation} \partial _{t}\mathbf{H}_{0}=\partial _{x}\mathbf{H}_{1}\text{, \ \ \ \ }\partial _{t}\mathbf{H}_{n}=\partial _{x}\left( \mathbf{H}_{n+1}-\frac{1}{2}\overset{n-1}{\underset{k=0}{\sum }}\mathbf{H}_{k}\mathbf{H}_{n-1-k}\right) \text{, \ \ \ }n=1,2,... \label{cons} \end{equation} Let us apply the generating function of the Miura type transformations (see \textbf{\cite{Kuper}}) \begin{equation*} \mu =p+B^{0} \end{equation*} to the above series (see also \textbf{\cite{Chang+Tu}}, \textbf{\cite{Shaw+Tu}}). A deformation of the Riemann mapping determined by the asymptotic series \begin{equation} \lambda =p+B^{0}+\frac{B^{1}}{p}+\frac{B^{2}}{p^{2}}+\frac{B^{3}}{p^{3}}+...
\label{5} \end{equation} satisfies the modified Gibbons equation \begin{equation} \lambda _{t}-(p+B^{0})\lambda _{x}=\frac{\partial \lambda }{\partial p}\left[ p_{t}-\partial _{x}\left( \frac{p^{2}}{2}+B^{0}p\right) \right] , \label{6} \end{equation} where the dynamics of the coefficients $B^{k}$ is given by the modified Benney hydrodynamic chain \begin{equation} B_{t}^{k}=B_{x}^{k+1}+B^{0}B_{x}^{k}+kB^{k}B_{x}^{0}\text{, \ \ \ \ \ \ }k=0,1,2,... \label{7} \end{equation} \textbf{Remark}: The comparison of the two asymptotic series (\textbf{\ref{3}}) and (\textbf{\ref{5}}) \begin{equation*} p+B^{0}+\frac{B^{1}}{p}+\frac{B^{2}}{p^{2}}+\frac{B^{3}}{p^{3}}+...=p+B^{0}+\frac{A^{0}}{p+B^{0}}+\frac{A^{1}}{(p+B^{0})^{2}}+\frac{A^{2}}{(p+B^{0})^{3}}+... \end{equation*} yields explicit polynomial Miura type transformations $A^{k}(B^{0},B^{1},...,B^{k+1})$, $k=0,1,2,...$ The relationship between the 2+1 Harry Dym equation and the KP equation was established in \textbf{\cite{Rogers}} (see also \textbf{\cite{Chang}}). Let us recall this link between the corresponding hydrodynamic chains (see \textbf{\cite{Chen+Tu}}). The hydrodynamic chain connected with a dispersionless limit of the 2+1 Harry Dym equation is (see, for instance, \textbf{\cite{Chang}}) \begin{equation} C_{y}^{k}=(C^{-1})^{2}C_{z}^{k+1}+(k+1)C^{k+1}C^{-1}C_{z}^{-1}\text{, \ \ \ \ \ \ }k=-1,0,1,2,... \label{g} \end{equation} A deformation of the Riemann mapping \begin{equation} \lambda =C^{-1}q+C^{0}+\frac{C^{1}}{q}+\frac{C^{2}}{q^{2}}+\frac{C^{3}}{q^{3}}+... \label{8} \end{equation} is given by the Gibbons equation \begin{equation} \lambda _{y}-q(C^{-1})^{2}\lambda _{z}=\frac{\partial \lambda }{\partial q}\left[ q_{y}-\partial _{z}\left( \frac{q^{2}(C^{-1})^{2}}{2}\right) \right] . \label{9} \end{equation} This hydrodynamic chain has the first conservation law \begin{equation*} \partial _{y}\frac{1}{C^{-1}}=\partial _{z}(-C^{0}).
\end{equation*} Under the reciprocal transformation \begin{equation} dx=\frac{1}{C^{-1}}dz-C^{0}dy\text{, \ \ \ \ \ }dt=dy \label{soh} \end{equation} the hydrodynamic chain (\textbf{\ref{g}}) reduces to the hydrodynamic chain (\textbf{\ref{7}}); the Gibbons equation (\textbf{\ref{9}}) reduces to the Gibbons equation (\textbf{\ref{6}}); the equation of the Riemann mapping (\textbf{\ref{8}}) reduces to the equation of the Riemann mapping (\textbf{\ref{5}}), where the generating function of the Miura type transformations is \begin{equation*} p=C^{-1}q \end{equation*} and the Miura type transformations are (see \textbf{\cite{Chen+Tu}}) \begin{equation*} B^{k}=C^{k}(C^{-1})^{k}\text{, \ \ \ \ }k=0,1,2,... \end{equation*} The Benney hydrodynamic chain is well known. Plenty of hydrodynamic reductions (see, for instance, \textbf{\cite{Bogdan}}, \textbf{\cite{Krich}}, \textbf{\cite{Zakh}}) were found many years ago. Below we show links with hydrodynamic reductions of the hydrodynamic chains connected with the Benney hydrodynamic chain by the Miura type and reciprocal transformations. \section{Modified Benney hydrodynamic chain} For instance, the Zakharov hydrodynamic reductions (see \textbf{\cite{Zakh}}) \begin{equation} a_{t}^{i}=\partial _{x}\left[ \frac{(a^{i})^{2}}{2}+A^{0}\right] \text{, \ \ \ \ \ }b_{t}^{i}=\partial _{x}(a^{i}b^{i})\text{, \ \ \ \ \ \ \ }A^{0}=\overset{N}{\underset{n=1}{\sum }}b^{n} \label{neg} \end{equation} of the Benney moment chain (\textbf{\ref{1}}) are connected with the equation of the Riemann surface \begin{equation*} \lambda =\mu +\overset{N}{\underset{n=1}{\sum }}\frac{b^{n}}{\mu -a^{n}}. \end{equation*} Let us rewrite the above formula in the form \begin{equation} \lambda =\mu +\frac{b^{i}}{\mu -a^{i}}+\underset{k\neq i}{\sum }\frac{b^{k}}{\mu -a^{k}}, \label{rim} \end{equation} where the index $i$ is \textbf{fixed}.
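Returning for a moment to the Remark above, the explicit polynomial Miura type transformations $A^{k}(B^{0},B^{1},...,B^{k+1})$ can be generated order by order with a computer algebra system, by re-expanding the series (\textbf{\ref{3}}) in powers of $1/p$ with $\mu =p+B^{0}$ and inverting the resulting triangular relations. A minimal sympy sketch (the truncation order is illustrative):

```python
import sympy as sp

p, x = sp.symbols('p x')
A = sp.symbols('A0:3')          # A^0, A^1, A^2
B = sp.symbols('B0:4')          # B^0, ..., B^3

# Truncated series (3) rewritten through mu = p + B^0
mu = p + B[0]
lam = mu + A[0]/mu + A[1]/mu**2 + A[2]/mu**3

# Re-expand in powers of 1/p (substituting x = 1/p); the coefficients of
# x, x^2, x^3 must reproduce B^1, B^2, B^3 of series (5)
ser = sp.expand(sp.series(lam.subs(p, 1/x), x, 0, 4).removeO())
eqs = [sp.Eq(B[k + 1], ser.coeff(x, k + 1)) for k in range(3)]

# Invert the triangular system: A^k as polynomials in B^0, ..., B^{k+1}
miura = sp.solve(eqs, A, dict=True)[0]
for a in A:
    print(a, '=', sp.expand(miura[a]))
```

The first few relations obtained this way are $A^{0}=B^{1}$, $A^{1}=B^{2}+B^{0}B^{1}$, and $A^{2}=B^{3}+2B^{0}B^{2}+(B^{0})^{2}B^{1}$.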
Substituting the Taylor series \begin{equation*} \mu ^{(i)}=a^{i}+\frac{b^{i}}{\lambda }+\frac{c^{i}(\mathbf{a},\mathbf{b})}{\lambda ^{2}}+\frac{d^{i}(\mathbf{a},\mathbf{b})}{\lambda ^{3}}+... \end{equation*} in (\textbf{\ref{zak}}), one can obtain an infinite series of conservation laws in the vicinity of each puncture $\mu ^{(i)}=a^{i}$. The Benney hydrodynamic chain (\textbf{\ref{cons}}) is determined for $k\geqslant 0$. Let us extend this hydrodynamic chain in the \textit{negative} direction. Then the negative part of the Benney hydrodynamic chain is given by \begin{equation*} \partial _{t}\mathbf{H}_{-1}=\partial _{x}\left[ \frac{(\mathbf{H}_{-1})^{2}}{2}+\mathbf{H}_{0}\right] \text{, \ \ \ \ \ \ }\partial _{t}\mathbf{H}_{-k-1}=\frac{1}{2}\partial _{x}\left[ \overset{k}{\underset{m=0}{\sum }}\mathbf{H}_{-m-1}\mathbf{H}_{m-k-1}\right] \text{, \ \ \ \ }k=1,2,..., \end{equation*} where $\mathbf{H}_{-1}\equiv a^{i}$, $\mathbf{H}_{-2}\equiv b^{i}$, and the above hydrodynamic type system \begin{eqnarray*} a_{t}^{k} &=&\partial _{x}\left( \frac{(a^{k})^{2}}{2}+\mathbf{H}_{-2}+\underset{n\neq i}{\sum }b^{n}\right) \text{, \ \ \ \ \ }b_{t}^{k}=\partial _{x}(a^{k}b^{k})\text{, \ \ \ \ \ }k=1\text{, }2\text{, ... , }N\text{, \ \ \ \ \ }k\neq i, \\ && \\ \partial _{t}\mathbf{H}_{-1} &=&\partial _{x}\left( \frac{(\mathbf{H}_{-1})^{2}}{2}+\mathbf{H}_{-2}+\underset{n\neq i}{\sum }b^{n}\right) \text{, \ \ \ \ \ \ }\partial _{t}\mathbf{H}_{-2}=\partial _{x}(\mathbf{H}_{-1}\mathbf{H}_{-2}) \end{eqnarray*} is connected with the equation of the Riemann surface (see (\textbf{\ref{rim}})) \begin{equation} \tilde{\lambda}\equiv \lambda ^{(i)}=\frac{\mathbf{H}_{-2}}{\mu -\mathbf{H}_{-1}}+\mu +\underset{k\neq i}{\sum }\frac{b^{k}}{\mu -a^{k}}.
\label{tot} \end{equation} Since the first moment of the modified Benney chain $B^{0}\equiv \mathbf{H}_{-1}$, the equation of the Riemann surface for the corresponding hydrodynamic reduction \begin{eqnarray*} u_{t}^{k} &=&\partial _{x}\left( \frac{(u^{k})^{2}}{2}+B^{0}u^{k}\right) \text{,\ \ \ \ \ \ \ }b_{t}^{k}=\partial _{x}[(B^{0}+u^{k})b^{k}]\text{, \ \ \ \ \ }k=1\text{, }2\text{, ... , }N\text{, \ \ \ \ \ }k\neq i, \\ && \\ B_{t}^{0} &=&\partial _{x}\left( \frac{(B^{0})^{2}}{2}+\mathbf{H}_{-2}+\underset{n\neq i}{\sum }b^{n}\right) \text{, \ \ \ \ }\partial _{t}\mathbf{H}_{-2}=\partial _{x}(B^{0}\mathbf{H}_{-2}) \end{eqnarray*} of the modified Benney hydrodynamic chain is given by \begin{equation*} \lambda =\frac{\mathbf{H}_{-2}}{p}+B^{0}+p+\underset{k\neq i}{\sum }\frac{b^{k}}{p-u^{k}}, \end{equation*} where $u^{k}=a^{k}-B^{0}$; the generating function of the Miura type transformations is $p=\mu -B^{0}$. \section{Dispersionless 2+1 Harry Dym equation} Finally, let us apply the reciprocal transformation (see (\textbf{\ref{soh}})) \begin{equation} dz=\mathbf{H}_{-2}dx+B^{0}\mathbf{H}_{-2}dt\text{, \ \ \ \ \ \ }dy=dt \label{ro} \end{equation} to the above hydrodynamic type system. Then the equation of the Riemann surface for the corresponding hydrodynamic reduction \begin{eqnarray*} \bar{u}_{y}^{k} &=&\partial _{z}\left[ \frac{(\bar{u}^{k})^{2}}{2}(C^{-1})^{2}\right] \text{,\ \ \ \ \ \ \ }\bar{b}_{y}^{k}=\partial _{z}\left( C^{-1}\bar{u}^{k}\bar{b}^{k}\right) \text{, \ \ \ \ \ }k=1\text{, }2\text{, ...
, }N\text{, \ \ \ \ \ }k\neq i, \\ && \\ C_{y}^{0} &=&\left( 1+\underset{n\neq i}{\sum }\bar{b}^{n}\right) C^{-1}C_{z}^{-1}+(C^{-1})^{2}\underset{n\neq i}{\sum }\bar{b}_{z}^{n}\text{, \ \ \ \ \ }C_{y}^{-1}=(C^{-1})^{2}C_{z}^{0} \end{eqnarray*} of a dispersionless limit of the 2+1 Harry Dym equation is given by \begin{equation*} \lambda =C^{-1}q+C^{0}+\frac{1}{q}+\underset{k\neq i}{\sum }\frac{\bar{b}^{k}}{q-\bar{u}^{k}}, \end{equation*} where $C^{0}\equiv B^{0}$, $C^{-1}\equiv \mathbf{H}_{-2}$, $\bar{b}^{k}=b^{k}/\mathbf{H}_{-2}$, $\bar{u}^{k}=u^{k}/\mathbf{H}_{-2}$; and the generating function of the Miura type transformations is $p=qC^{-1}$. Let us rewrite the equation of the Riemann surface (\textbf{\ref{rim}}) for the Zakharov hydrodynamic reduction (\textbf{\ref{neg}}) in the form (see \textbf{\cite{Maks93}}) \begin{equation*} \lambda =p\underset{k=1}{\overset{N}{\prod }}\frac{p-c^{k}}{p-u^{k}}, \end{equation*} where $B^{0}\equiv \mathbf{H}_{-1}=\sum (u^{n}-c^{n})$. Application of the reciprocal transformation (\textbf{\ref{ro}}) yields \textit{another} hydrodynamic reduction of a dispersionless limit of the 2+1 Harry Dym equation \begin{equation*} \bar{u}_{y}^{k}=\partial _{z}\left[ \frac{(\bar{u}^{k})^{2}}{2}(C^{-1})^{2}\right] \text{, \ \ \ \ \ }\bar{c}_{y}^{k}=\partial _{z}\left[ \frac{(\bar{c}^{k})^{2}}{2}(C^{-1})^{2}\right] \text{, \ \ \ \ }k=1\text{, }2\text{, ..., }N, \end{equation*} where $C^{-1}\equiv \mathbf{H}_{-2}=\prod (u^{n}/c^{n})\equiv \prod (\bar{u}^{n}/\bar{c}^{n})$. The corresponding equation of the Riemann surface is given by \begin{equation*} \lambda =q\underset{k=1}{\overset{N}{\prod }}\frac{1-q/\bar{c}^{k}}{1-q/\bar{u}^{k}}.
\end{equation*} Let us start from the waterbag reduction (see \textbf{\cite{Gib+Yu}} and \textbf{\cite{Kodama}}) \begin{equation*} a_{t}^{k}=\partial _{x}\left[ \frac{(a^{k})^{2}}{2}+\underset{n=1}{\overset{N}{\sum }}\varepsilon _{n}a^{n}\right] \text{, \ \ \ \ \ }k=1\text{, }2\text{, ..., }N \end{equation*} of the Benney moment chain (\textbf{\ref{1}}), connected with the equation of the Riemann surface \begin{equation*} \lambda =\mu -\varepsilon _{i}\ln (\mu -a^{i})-\underset{k\neq i}{\sum }\varepsilon _{k}\ln (\mu -a^{k})\text{, \ \ \ \ \ \ \ \ \ }\sum \varepsilon _{n}=0. \end{equation*} Let us identify $\mathbf{H}_{-1}\equiv a^{i}$, where the index $i$ is \textbf{fixed}. Then the above hydrodynamic type system \begin{eqnarray*} a_{t}^{k} &=&\partial _{x}\left[ \frac{(a^{k})^{2}}{2}+\varepsilon _{i}\mathbf{H}_{-1}+\underset{n\neq i}{\sum }\varepsilon _{n}a^{n}\right] \text{, \ \ \ \ \ }k=1\text{, }2\text{, ... , }N\text{, \ \ \ \ \ }k\neq i, \\ && \\ \partial _{t}\mathbf{H}_{-1} &=&\partial _{x}\left[ \frac{(\mathbf{H}_{-1})^{2}}{2}+\varepsilon _{i}\mathbf{H}_{-1}+\underset{n\neq i}{\sum }\varepsilon _{n}a^{n}\right] \end{eqnarray*} is connected with the equation of the Riemann surface (cf. (\textbf{\ref{tot}})) \begin{equation*} \tilde{\lambda}\equiv \lambda ^{(i)}=\frac{1}{\mu -\mathbf{H}_{-1}}\underset{k\neq i}{\prod }(\mu -a^{k})^{-\varepsilon _{k}/\varepsilon _{i}}\exp \frac{\mu }{\varepsilon _{i}}. \end{equation*} Since the first moment of the modified Benney chain $B^{0}\equiv \mathbf{H}_{-1}$, the equation of the Riemann surface for the corresponding hydrodynamic reduction \begin{eqnarray*} u_{t}^{k} &=&\partial _{x}\left( \frac{(u^{k})^{2}}{2}+B^{0}u^{k}\right) \text{, \ \ \ \ \ }k=1\text{, }2\text{, ...
, }N\text{, \ \ \ \ \ }k\neq i, \\ && \\ B_{t}^{0} &=&\partial _{x}\left( \frac{(B^{0})^{2}}{2}+\underset{n\neq i}{\sum }\varepsilon _{n}u^{n}\right) \text{,} \end{eqnarray*} of the modified Benney chain is given by \begin{equation*} \lambda =\frac{\mathbf{H}_{-2}}{p}e^{p/\varepsilon _{i}}\underset{k\neq i}{\prod }\left( 1-\frac{p}{u^{k}}\right) ^{-\varepsilon _{k}/\varepsilon _{i}}. \end{equation*} Finally, let us apply the reciprocal transformation (\textbf{\ref{ro}}), where \begin{equation*} \mathbf{H}_{-2}=e^{B^{0}/\varepsilon _{i}}\underset{k\neq i}{\prod }(u^{k})^{-\varepsilon _{k}/\varepsilon _{i}}. \end{equation*} Then the equation of the Riemann surface for the corresponding hydrodynamic reduction \begin{eqnarray*} \bar{u}_{y}^{k} &=&\partial _{z}\left[ \frac{(\bar{u}^{k})^{2}}{2}(C^{-1})^{2}\right] \text{, \ \ \ \ \ }k=1\text{, }2\text{, ... , }N\text{, \ \ \ \ \ }k\neq i, \\ && \\ C_{y}^{-1} &=&(C^{-1})^{2}\partial _{z}\left( \underset{n\neq i}{\sum }\varepsilon _{n}\ln \bar{u}^{n}\right) \text{,} \end{eqnarray*} of the dispersionless limit of the 2+1 Harry Dym equation is given by \begin{equation*} \lambda =C^{-1}q-\underset{n\neq i}{\sum }\varepsilon _{n}\ln \frac{1-\bar{u}^{n}/q}{\bar{u}^{n}}. \end{equation*} All other hydrodynamic reductions of the Benney moment chain can be re-calculated into hydrodynamic reductions of (\textbf{\ref{g}}) in the same way. \section*{Acknowledgement} I am grateful to the Institute of Mathematics in Taipei (Taiwan), where part of this work was done, and especially to Jen-Hsu Chang and Derchyi Wu for fruitful and stimulating discussions. \addcontentsline{toc}{section}{References}
\section{Introduction} \label{sec:intro} In the past decade it has become clear from observations that star formation in the Central Molecular Zone (CMZ; \citealt{morris96a}) of the Milky Way, and in the centres of other nearby galaxies \citep[e.g.,][]{barth95a,jogee02a}, deviates from the patterns of star formation and gas distribution that are observed at larger galactic radii. In the bulk of galactic discs, including that of the Milky Way, the molecular gas that fuels star formation is organised into clouds that are arranged in spiral patterns, either flocculent or grand design. In contrast, in the Milky Way's CMZ much of the gas is collected into a partially filled, ring-like stream of material $\sim 100$ pc from the Galactic Centre, which appears to be a persistent structure \citep{sofue95a, molinari11a, kruijssen15a, henshaw16a}. Clouds exist within the ring, but appear to form a well-defined time sequence in terms of their level of star formation activity \citep{longmore13b}. While rings such as this are occasionally seen at larger galactocentric radii (e.g., Andromeda), they are far from the typical arrangement of gas. Second, the molecular gas in the bulk of spiral galaxies appears to form stars with a fairly constant depletion time (defined as the ratio of the gas surface density to the star formation surface density), with either no dependence \citep[e.g.,][]{bigiel08a, leroy08a, leroy13a} or only a weak dependence \citep[e.g.,][]{meidt13a, suwannajak14a} on the large scale rate of shear or other galactic-scale dynamics. In contrast, galactic centres exhibit a much wider range of depletion times than do the outer parts of discs \citep[e.g.,][]{saintonge12a, leroy13a, longmore13a}. Furthermore, in outer discs there is no obvious evidence for dynamical effects at all if one considers gas much denser than the $\sim 100$ cm$^{-3}$ traced by CO emission \citep[e.g.,][]{gao04b, garcia-burillo12a, usero15a}. 
In contrast, given its budget of dense gas, the Milky Way's CMZ appears to be forming significantly fewer stars than one would expect if it had the same depletion time observed elsewhere. The present-day star formation rate of the CMZ is $\sim 0.05$ $M_\odot$ yr$^{-1}$ \citep{crocker12a, longmore13a, koepferl15a}, whereas the expected rate if the dense gas in the CMZ formed stars on a timescale similar to that found elsewhere in galaxies would be at least an order of magnitude larger. At the extreme end of this variation are CMZ objects such as ``The Brick" \citep{longmore12a, kauffmann13a, rathborne14a, rathborne15a, mills15a}, large clouds of extremely dense molecular gas that, if found in the outer Galaxy, would be expected to be intensely star-forming, yet in fact display almost no star formation activity. A third potentially odd feature of Galactic Centre star formation is its burstiness. While star formation is always bursty when measured on sufficiently small scales simply as a result of finite molecular cloud masses and lifetimes \citep[e.g.][]{da-silva14b, kruijssen14c}, there is substantial evidence that the Milky Way's CMZ is significantly more episodic than the rest of the disc. Lines of evidence for episodic star formation in the CMZ include both direct star counts \citep{yusef-zadeh09a} that reveal more stars than would be expected given the present day production rate, and the presence of large off-plane bubbles \citep{sofue84a, bland-hawthorn03a, su10a} that would appear to require $\sim 0.1$ $M_\odot$ yr$^{-1}$ to drive, somewhat higher than the present-day star formation rate, but not higher than the time-averaged star formation rate that would be inferred from the present-day mass of the stellar bulge \citep{crocker12a, kruijssen14b, crocker15a}. 
In \citet[hereafter \citetalias{krumholz15d}]{krumholz15d}, we introduced a model to explain some of the major observed features of the Milky Way CMZ and, by extension, the analogous regions of other barred spiral galaxies. The central idea of this model was to note that the Galactic Bar will transport a relatively continuous supply of gas from the inner Lindblad resonance (ILR; $r\sim 1$ kpc) to the outskirts of the CMZ disc ($r<500$ pc; \citealt{binney91a, kormendy04a, sormani15a}). Once deposited there, gas in the CMZ will be subject to periodic perturbations from the bar, which inside the ILR can drive acoustic instabilities that will simultaneously transport mass inward and pump up the gas velocity dispersion \citep{bertin89a, montenegro99a}, thereby preventing it from becoming self-gravitating and forming stars. This process will continue until the gas reaches $\sim 100$ pc, where the observed rotation curve of the Milky Way begins to turn over from flat to solid body, and the rate of shear drops. The loss of shear suppresses acoustic instabilities (which only occur when shear is present) and causes gas to accumulate until it becomes self-gravitating and star formation begins. (Indeed, the idea that low-shear regions tend to accumulate gas and produce rings goes back considerably before our model, e.g., \citealt{icke79b} and \citealt{fukunaga83a}.) We showed that this mechanism naturally produces the observed ring-like structure and explains its location, and that it naturally explains the long depletion times observed in the CMZ. We further conjectured that, once star formation begins, stellar feedback would then expel much of the gas, leading to quenching until the bar replenished the gas supply, and explaining why star formation occurs in bursts, though we modify this picture in this paper.
While this model has a number of attractive features, the last portion of it necessarily remained conjectural, because we did not model the process of star formation feedback and gas ejection directly in \citetalias{krumholz15d}. We could not directly estimate the time scale of the bursts, for example, nor could we compute their magnitude, the partition of inflowing material between star formation and loss in a wind, and the level of variation we expect in the gas mass as a result of starbursts. In this paper we seek to remedy this situation by extending the model presented in \citetalias{krumholz15d} with a treatment of star formation feedback and wind ejection. As in \citetalias{krumholz15d}, we focus first on the Milky Way's CMZ, because that is the region for which we have by far the best dynamical information, but we then extend the model to other galaxies. The plan for the remainder of this paper is as follows. In \autoref{sec:model} we present our basic model, and highlight the new treatment of star formation and feedback that we have added in comparison to \citetalias{krumholz15d}. In \autoref{sec:results} we present simulation results. We discuss the implications of these results in \autoref{sec:discussion}, and summarise and discuss prospects for future work in \autoref{sec:conclusion}. \section{Model} \label{sec:model} The model we build for the Milky Way's Central Molecular Zone (CMZ) is a generalisation of the one presented in \citetalias{krumholz15d}. Here we summarise the most salient aspects of that model, referring readers to \citetalias{krumholz15d} for full details, before moving on to the new aspects of the model included here. Unless otherwise noted, all parameter choices made in this paper are identical to the fiducial ones made in \citetalias{krumholz15d}. All the simulation code used for this project is publicly available from \url{https://bitbucket.org/krumholz/cmzsf}.
\subsection{Dynamical Evolution} We approximate the gas in the CMZ as an axisymmetric thin disc characterised by a surface density $\Sigma$ and velocity dispersion $\sigma$, both as a function of radius $r$ from the Galactic Centre. The gas orbits in a potential derived from the measurements of \citet{launhardt02a}. We use these measurements to produce a smooth, interpolated rotation curve $v_\phi(r)$ from which we can derive the dimensionless index $\beta = d\ln v_\phi/d\ln r$ that describes the rate of shear; formally, the dimensionless shear rate is $1-\beta$. We treat the rotation curve as constant in time.\footnote{Our approximation that the rotation curve is constant limits the total time for which we can run our simulations to be such that the mass of stars formed during the simulation is small compared to the dynamical mass responsible for producing the rotation curve. For the run duration of 500 Myr that we adopt below, this condition is satisfied for all our runs; in our fiducial case the mass added to the domain is below 10\% of the dynamical mass interior to the radius where stars form, and for all runs it is below 20\%.} We evolve the gas using the \texttt{VADER}\ code of \citet{krumholz15a}, which solves the equations of mass, energy, and angular momentum conservation for the disc in conservative form. As in \citetalias{krumholz15d}, we place the inner and outer edges of the region to be simulated at $r = 10$ and $450$ pc, respectively, and use 512 computational zones uniformly spaced in $\log r$. Our model here differs from that in \citetalias{krumholz15d} only in that we include source terms in the equations to represent the effects of star formation and winds. 
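For illustration, the shear index can be evaluated numerically from a tabulated rotation curve. The following Python sketch is our own illustrative helper (it is not taken from the public \texttt{cmzsf} repository); it simply applies finite differences in logarithmic variables:

```python
import numpy as np

def shear_index(r, v_phi):
    """beta = d ln v_phi / d ln r, evaluated by finite differences.

    The dimensionless shear rate used in the text is 1 - beta:
    beta = 0 for a flat rotation curve, beta = 1 for solid-body rotation.
    """
    return np.gradient(np.log(v_phi), np.log(r))
```

For a flat rotation curve this returns $\beta \approx 0$ (dimensionless shear rate $1-\beta \approx 1$), while for solid-body rotation $v_\phi \propto r$ it returns $\beta \approx 1$, i.e.\ vanishing shear, matching the limits discussed in the text.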
Formally, the equations we solve are \begin{eqnarray} \frac{\partial}{\partial t}\Sigma + \frac{1}{r} \frac{\partial}{\partial r} \left(r v_r \Sigma\right) & = & -\dot{\Sigma}_* - \dot{\Sigma}_{\rm wind} \\ \lefteqn{\frac{\partial}{\partial t} E + \frac{1}{r} \frac{\partial}{\partial r} \left[rv_r(E+P)\right] - \frac{1}{r}\frac{\partial}{\partial r} \left(r \frac{v_\phi \mathcal{T}}{2\pi r^2}\right)} \qquad\qquad\qquad\qquad \nonumber \\ & = & \dot{E}_{\rm SF,turb} - \dot{E}_{\rm rad}, \end{eqnarray} where the source terms on the right-hand side of the first equation represent the rates of change of gas surface density due to star formation and loss by winds, while those in the second equation represent the rate of change of turbulent energy due to star formation feedback and due to radiative losses from shocks. We discuss the values of these terms below. In these equations $P = \Sigma \sigma^2$ is the vertically-integrated pressure, $v_r$ is the radial velocity, and $\mathcal{T}$ is the turbulent torque, which is related to $v_r$ via angular momentum conservation: \begin{equation} v_r = \frac{\partial\mathcal{T}/\partial r}{2\pi r \Sigma v_\phi (1+\beta)}. \end{equation} A key parameter of this model is the dimensionless rate of angular momentum transport $\alpha$ produced by instabilities, which determines $\mathcal{T}$ via \begin{equation} \mathcal{T} = -2\pi r^2 \alpha P(1-\beta). \end{equation} As in \citetalias{krumholz15d}, we consider two sources of transport: gravitational and acoustic instability. The former is parameterised by the usual \citet{toomre64a} $Q$ parameter. The latter instability can occur when gas orbits inside the inner Lindblad resonance of a periodic perturber, in this case the Galactic Bar. It arises when pressure waves within the disc driven by the bar cause the perturbed gas orbits to align, leading to a growing mode. The instability grows most strongly in regions of weak self-gravity and high shear. 
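The torque and radial velocity relations above can be discretised straightforwardly. The sketch below is a minimal finite-difference illustration in an assumed unit system (pc, km s$^{-1}$, $M_\odot$); it is not the \texttt{VADER} implementation, and the function name is our own:

```python
import numpy as np

def radial_velocity(r, Sigma, P, v_phi, beta, alpha):
    """Illustrative discretisation of the transport relations:
    T = -2 pi r^2 alpha P (1 - beta), then
    v_r = (dT/dr) / (2 pi r Sigma v_phi (1 + beta)).
    All inputs are 1D arrays over radius.
    """
    T = -2.0 * np.pi * r**2 * alpha * P * (1.0 - beta)   # turbulent torque
    dT_dr = np.gradient(T, r)                            # finite-difference derivative
    return dT_dr / (2.0 * np.pi * r * Sigma * v_phi * (1.0 + beta))
```

With $\alpha > 0$, $\beta < 1$, and a torque whose magnitude grows with radius, the sign convention gives $v_r < 0$, i.e.\ inward mass flow, as expected for an accreting disc.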
Both gravitational and acoustic modes can be combined into a single dispersion relation, derived by \citet{montenegro99a}. In our simulations, we obtain numerical solutions to this dispersion relation at each radius. When an unstable mode is present, we compute the growth timescale $t_{\rm growth}$ of the fastest growing mode. In unstable regions, we take \begin{equation} \alpha = \min(\alpha_0 e^{1-t_{\rm growth}/t_{\rm orb}}, 1), \end{equation} where $t_{\rm orb}$ is the local orbital period in the disc. We adopt the same fiducial value $\alpha_0 = 1$ as in \citetalias{krumholz15d}, so that, in regions of the disc where an unstable mode has a growth timescale equal to or smaller than the orbital period, the rate of transport corresponds to a large value $\alpha \approx 1$. We argue in \citetalias{krumholz15d} that, given the nature of the instabilities we are considering, this is the most plausible value. A second key parameter in our models is the rate of radiative losses from the disc, $\dot{E}_{\rm rad}$. These losses occur due to radiative shocks produced by the turbulence in the disc, and they radiate away the full turbulent energy of the disc roughly once per dynamical time. We compute $\dot{E}_{\rm rad}$ exactly as in \citetalias{krumholz15d}. We pause here to note an important implication of the value of $\dot{E}_{\rm rad}$: the loss of turbulent energy on a flow crossing timescale tends to push galactic discs toward $\alpha \approx 1$ in non-star-forming regions. The reason is that, in the absence of star formation as an energy source, maintaining energy balance in a galactic disc requires that the rate of energy release by inward transport of mass balance the rate of energy dissipation. If the timescale for energy dissipation is a dynamical time, then the rate of inward mass flow required for balance corresponds to $\alpha\approx 1$, with the exact value depending on the exact energy dissipation rate, the gas fraction, and the rotation curve \citep{krumholz10c}.
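A minimal sketch of this transport prescription (our own illustrative helper; $t_{\rm growth}$ and $t_{\rm orb}$ may be in any common time unit):

```python
import math

def alpha_transport(t_growth, t_orb, alpha_0=1.0):
    """alpha = min(alpha_0 * exp(1 - t_growth / t_orb), 1).

    Modes that grow within an orbital period drive transport at alpha ~ 1;
    slower-growing modes are suppressed exponentially.
    """
    return min(alpha_0 * math.exp(1.0 - t_growth / t_orb), 1.0)
```

Note that the cap at unity means any mode growing faster than the orbital period saturates at $\alpha = \alpha_0 = 1$, the fiducial value adopted above.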
The disc simulation requires boundary conditions at the inner and outer edges. For the inner boundary, we set the mass flux to be zero; in practice we find that no significant amount of mass approaches the inner boundary, so this choice has no practical effect. At the outer boundary, we impose a fixed inward mass flux $\dot{M}_{\rm in}$, for which we consider a range of possible values. This mass flux is provided by material that is removed from its circular orbit by the Galactic bar and transported inwards to settle into the CMZ \citep{binney91a, kormendy04a, crocker12a, sormani15a}. The mass transport rate is uncertain, but observations suggest it lies in the range $\dot{M}_{\rm in} = 0.1 - 1.0$ $M_\odot$ yr$^{-1}$, so we consider this range in our work. We set the velocity dispersion of this inward-flowing material to $\sigma_{\rm in} = 40$ km s$^{-1}$, following \citetalias{krumholz15d}. We initialise all our simulations by placing a uniform surface density of $0.01$ $M_\odot$ pc$^{-2}$ with a velocity dispersion of $40$ km s$^{-1}$ in all zones, thereby beginning the simulations in a nearly gas-free state. \subsection{Star Formation} Where our model differs from that of \citetalias{krumholz15d} is that we have added models for star formation and feedback, which were absent from that paper. To determine where star formation will occur, we must answer the question of where the gas becomes self-gravitating. Let $H_g$ be the gas scale height, which we compute from the gas surface density, velocity dispersion, and stellar density as in \citetalias{krumholz15d}.
Formally we can write the rate of star formation per unit area in the disc as \begin{equation} \dot{\Sigma}_* = \epsilon_{\rm ff} \frac{\Sigma}{t_{\rm ff}} \end{equation} where \begin{equation} t_{\rm ff} = \sqrt{\frac{3\pi H_g}{16 G \Sigma}} \end{equation} is the free-fall time at the mid-plane (using $\Sigma/2H_g$ as the gas density), and $\epsilon_{\rm ff}$ is the dimensionless star formation rate per free-fall time \citep{krumholz14c, padoan14a}. The value of $\epsilon_{\rm ff}$ depends on the degree of gravitational boundedness as characterised by the virial ratio $\alpha_{\rm vir}$, and also on the Mach number, plasma $\beta$, and compressive-to-solenoidal ratio of the turbulence \citep[e.g.][]{krumholz05c, padoan11a, federrath12a, federrath13a}. However, $\alpha_{\rm vir}$ is by far the most important parameter, and is the only one we can easily calculate given our simple model. To determine its value, we note that the midplane pressure in our disc is \begin{equation} p_{\rm mp} = \frac{\Sigma\sigma^2}{H_g}. \end{equation} For a disc supported by pressure against self-gravity, we have \citep[e.g.,][]{krumholz05c} \begin{equation} \label{eq:pmp_eq} p_{\rm mp,eq} = \frac{\pi}{2} G \Sigma^2. \end{equation} Note that we have $\Sigma^2$ rather than $\Sigma (\Sigma + \rho_* H_g)$ here because we are interested in the support of the gas against its own self-gravity, discounting the contribution from the gravity of the stars. From these two expressions, we can express the virial parameter of the gas as \begin{equation} \alpha_{\rm vir} = \frac{p_{\rm mp}}{p_{\rm mp,eq}}, \end{equation} so that gas becomes self-gravitating as $\alpha_{\rm vir} \rightarrow 1$ from above, and is non-self-gravitating if $\alpha_{\rm vir} \gg 1$.
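These expressions combine into a short numerical sketch. We assume a pc, km s$^{-1}$, $M_\odot$ unit system, in which $G \approx 4.30\times 10^{-3}$ and times come out in units of pc/(km s$^{-1}$) $\approx 0.98$ Myr; the function names are our own illustrative choices:

```python
import math

G = 4.30e-3  # gravitational constant in pc (km/s)^2 / Msun (assumed unit system)

def virial_ratio(Sigma, sigma, H_g):
    """alpha_vir = p_mp / p_mp,eq, with p_mp = Sigma sigma^2 / H_g
    and p_mp,eq = (pi/2) G Sigma^2."""
    return (Sigma * sigma**2 / H_g) / (0.5 * math.pi * G * Sigma**2)

def freefall_time(Sigma, H_g):
    """t_ff = sqrt(3 pi H_g / (16 G Sigma)), the mid-plane free-fall time."""
    return math.sqrt(3.0 * math.pi * H_g / (16.0 * G * Sigma))
```

The expected scalings follow immediately: $\alpha_{\rm vir} \propto \sigma^2$ at fixed $\Sigma$ and $H_g$, and $t_{\rm ff} \propto \Sigma^{-1/2}$ at fixed $H_g$.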
Note that, because we calculate the scale height under the assumption of hydrostatic equilibrium (see \citetalias{krumholz15d}), our model does not permit $\alpha_{\rm vir} < 1$, since $\alpha_{\rm vir} < 1$ can be achieved only under non-equilibrium conditions. Given a value $\alpha_{\rm vir}$, we determine $\epsilon_{\rm ff}$ using an approximation suggested by \citet{padoan11a}, which is that $\epsilon_{\rm ff}$ declines approximately exponentially with $\alpha_{\rm vir}$. Both observations and simulations suggest that $\epsilon_{\rm ff} \sim 0.01$ for $\alpha_{\rm vir}\approx 1$ (see the reviews by \citealt{krumholz14c} and \citealt{padoan14a}, and references therein), and we expect that $\epsilon_{\rm ff} \rightarrow 1$ as $\alpha_{\rm vir} \rightarrow 0$. Thus we adopt the relationship \begin{equation} \epsilon_{\rm ff} = \exp\left[\alpha_{\rm vir} \log(\epsilon_{\rm ff,0})\right], \end{equation} with $\epsilon_{\rm ff,0} = 0.01$ as a fiducial choice. This expression has all the properties we desire: $\epsilon_{\rm ff} \rightarrow 1$ as $\alpha_{\rm vir} \rightarrow 0$, $\epsilon_{\rm ff} = 0.01$ at $\alpha_{\rm vir} = 1$, and $\epsilon_{\rm ff}$ declines exponentially as $\alpha_{\rm vir}$ rises. While the value of $\epsilon_{\rm ff}$ is tightly constrained by observations to lie near our fiducial choice \citep[e.g.,][]{krumholz07e, krumholz12a, federrath13c, krumholz14c, evans14a, salim15a, heyer16a}, we also consider the effects of varying $\epsilon_{\rm ff,0}$. Before moving on we note that, although we have phrased our star formation rate as a function of $\alpha_{\rm vir}$, the virial ratio in our models is closely related to the Toomre $Q$ of the gas. One can show that $\alpha_{\rm vir} \approx 1$ is equivalent to $Q\approx 1$, and thus one may view the dependence of $\epsilon_{\rm ff}$ on $\alpha_{\rm vir}$ in our model as qualitatively equivalent to the condition that star formation starts up as $Q$ approaches 1. 
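The adopted relationship between $\epsilon_{\rm ff}$ and $\alpha_{\rm vir}$ is a one-liner; the sketch below (our own illustrative helper) makes the limiting behaviour explicit:

```python
import math

def eps_ff(alpha_vir, eps_ff0=0.01):
    """eps_ff = exp[alpha_vir * ln(eps_ff0)], i.e. eps_ff0 ** alpha_vir.

    Limits: eps_ff -> 1 as alpha_vir -> 0, eps_ff = eps_ff0 at alpha_vir = 1,
    and eps_ff declines exponentially as alpha_vir rises.
    """
    return math.exp(alpha_vir * math.log(eps_ff0))
```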
\subsection{Stellar Feedback} \label{ssec:feedback} Feedback from stars in our model takes two forms: injection of energy and ejection of mass in the form of winds. Both processes are governed by the momentum input of massive stars, since stellar winds and supernova ejecta that interact with the dense gas in the CMZ will become radiative very quickly, a point to which we will return in \autoref{ssec:windprop}. The first step in our model of feedback is therefore to compute the momentum injection rate from star formation. To do so, we use \texttt{starburst99} \citep{leitherer99a, vazquez05a} to compute the type II supernova rate per unit mass $\Gamma_{\rm SN}(t)$, the bolometric luminosity per unit mass $\mathcal{L}(t)$, and the wind momentum injection rate per unit mass $\mathcal{P}_{\rm wind}(t)$ for simple stellar populations of age $t$ with a \citet{kroupa02c} IMF. The starlight carries a momentum per unit stellar mass per unit time $\mathcal{L}(t)/c$. For the supernovae, we adopt a momentum injection per supernova of $p_{\rm SN} = 3\times 10^5$ $M_\odot$ km s$^{-1}$ based on recent simulations \citep[e.g.,][]{martizzi15a, kim15a, walch15b, gentry16a}, giving a supernova momentum injection rate $\Gamma_{\rm SN}(t) p_{\rm SN}$.\footnote{One might worry that the momentum budget would be smaller at the $n\sim 10^4$ cm$^{-3}$ densities found in the CMZ than for the $n\sim 1-100$ cm$^{-3}$ densities found at larger radii, because supernova remnants would become radiative more quickly. However, the simulations show that the supernova momentum budget is not very sensitive to density, with fits to the simulation results giving scalings that vary from $p_{\rm SN} \propto n^{-0.06}$ \citep{gentry16a} to $p_{\rm SN} \propto n^{-0.19}$ \citep{martizzi15a}. Moreover, the clustering expected in high density regions can also enhance the momentum budget by a factor of several, pushing in the other direction \citep{gentry16a}.
Thus our fiducial estimate should be reasonable even in the CMZ.} The total momentum injection rate per unit time per unit area in our simulations is then simply the sum of these three quantities, convolved with the star formation history, i.e., \begin{eqnarray} \lefteqn{\frac{d\dot{p}}{dA}(t) = } \nonumber \\ & & \int_0^t \dot{\Sigma}_*(t - t') \left[p_{\rm SN} \Gamma_{\rm SN}(t') + \frac{\mathcal{L}(t')}{c} + \mathcal{P}_{\rm wind}(t')\right]\, dt'. \end{eqnarray} Since we know the star formation history from the prescription above, this quantity is straightforward to evaluate. We pause here for three brief comments on the model. First, although we have included winds, radiation pressure, and supernovae, our choice of $p_{\rm SN}$ implies that supernovae are by far the most important form of feedback; winds and radiation pressure are small perturbations on this. Second, we consider only star formation feedback and gravity as sources of turbulence, which means that we are omitting a potential contribution to turbulence from a galactic fountain or from accretion directly onto the CMZ from above (rather than through the disc). These effects could conceivably increase the turbulent velocity dispersion from what we find, but are very poorly constrained either observationally or theoretically. Third, note that we have not included a contribution from trapped infrared radiation pressure. The significance of such an effect has been subject to extensive discussion in the literature in the past few years \citep[e.g.,][]{krumholz09d, murray10a, krumholz12c, krumholz13a, davis14a, rosdahl15a, tsang15a}.
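A discretised form of this convolution might look like the following sketch, where the tabulated feedback tracks and all names are illustrative assumptions rather than the actual implementation:

```python
import numpy as np

def momentum_injection_rate(t_grid, sfr_history, gamma_sn, lum, p_wind,
                            p_sn=3.0e5, c_kms=2.998e5):
    """Discretised convolution giving d p-dot / dA at the final time of t_grid.

    sfr_history[i] tabulates Sigma-dot_* at time t_grid[i]; gamma_sn, lum,
    and p_wind tabulate the per-unit-mass feedback tracks at population
    age t' = t_grid (all hypothetical input arrays).
    """
    dt = np.gradient(t_grid)
    inject = p_sn * gamma_sn + lum / c_kms + p_wind  # momentum / mass / time
    # population age t' pairs with the SFR at time t - t', i.e. the
    # time-reversed star formation history
    return np.sum(sfr_history[::-1] * inject * dt)
```

For a constant star formation rate and a constant supernova rate alone, this reduces to $\dot{\Sigma}_* p_{\rm SN} \Gamma_{\rm SN} T$ over a total history of duration $T$, as expected.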
We will not rehash that discussion here, but we note that, even in the simulations where trapped infrared radiation pressure is found to be most effective, it becomes significant only when the gas column density and luminosity are so high that the gas disc is optically thick even for radiation whose colour temperature is equal to that of the dust photosphere; \citet{krumholz13a} show that the condition is met only when the gas surface density exceeds $\sim 5000$ $M_\odot$ pc$^{-2}$ and the star formation surface density exceeds $\sim 1000$ $M_\odot$ pc$^{-2}$ Myr$^{-1}$. While such extreme combinations of gas and star formation surface density may exist on $\lesssim 1$ pc scales in Galactic Centre star-forming regions such as Sgr B2 \citep[e.g.,][]{schmiedeke16a}, they are never realised over the larger scales with which we are concerned, either in the real Galactic Centre or in our models. The second step is to consider where the momentum will be deposited. The simplest assumption would be to inject momentum where the stars form, but this ignores the fact that the stars will form with some velocity dispersion relative to the gas out of which they are born. Thereafter they are not constrained to move on exactly the same orbits as the gas. Since supernovae occur over a timescale of $\sim 10$ Myr after star formation, and the orbital period at 100 pc from the Milky Way's centre is only $\sim 3$ Myr, stars that are on slightly different orbits than the gas from which they form will have time to drift some distance from their birth sites before exploding, and this will blur out the location where they deposit their momentum. We do not attempt to model this evolution in detail, and instead resort to parameterising it.
Specifically, rather than compute the momentum injection rate using the true star formation rate $\dot{\Sigma}_*(r,t)$ in our simulation, we use the convolution of the star formation rate with a Gaussian blur, \begin{equation} \dot{\Sigma}_{*,\rm eff}(r,t) = N^{-1} \int \exp\left[-\frac{(r - r')^2}{2(\epsilon_r r')^2}\right] \dot{\Sigma}_*(r',t)\, dr', \end{equation} where the normalisation factor $N$ is set by the requirement that $\int\dot{\Sigma}_*\, dA = \int\dot{\Sigma}_{*,\rm eff}\, dA$, i.e., that the total amount of momentum injected, integrated over the area of the disc, remain unchanged. The dimensionless quantity $\epsilon_r$ parameterises the amount by which the stars spread out relative to the gas from which they form. Thus the rate of momentum injection in our simulations becomes \begin{eqnarray} \frac{d\dot{p}}{dA}(r,t) & = & \int_0^\infty \dot{\Sigma}_{*, \rm eff}(r, t - t') \cdot {}\nonumber \\ & & \; \left[p_{\rm SN} \Gamma_{\rm SN}(t') + \frac{\mathcal{L}(t')}{c} + \mathcal{P}_{\rm wind}(t')\right]\, dt'. \end{eqnarray} To decide on a fiducial value of $\epsilon_r$, note that if a population of stars begins on a circular orbit with radius $r$ and a velocity $v_\phi$, and their orbits are perturbed by a random velocity $v_*$, the resulting elliptical orbits will be confined to a range of radii \citep{binney87a} \begin{equation} r_* = r \left(1 \pm \frac{4}{3} \frac{v_*}{v_\phi}\right). \end{equation} Under the conditions observed in the CMZ, only some $\sim50$ per cent of all stars are expected to form in bound clusters \citep[e.g.][]{kruijssen12a,adamo15a}, with the rest forming in unbound associations. The unbound stars will drift apart at the internal velocity dispersions of the gas clouds from which they form \citep{efremov98a}, while the bound clusters will move together, dispersing from their birth sites at the overall centre of mass velocity of the cluster. 
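As an illustration of this blurring step, the following sketch evaluates the kernel on a radial grid and enforces the global normalisation condition numerically (our own discretisation, not the production code):

```python
import numpy as np

def blur_sfr(r, sfr, eps_r=0.05):
    """Spread the SFR with the radius-dependent Gaussian kernel,
    renormalised so the area-integrated rate is unchanged:
    int sfr_eff dA = int sfr dA.
    """
    dr = np.gradient(r)
    kernel = np.exp(-(r[:, None] - r[None, :])**2
                    / (2.0 * (eps_r * r[None, :])**2))
    sfr_eff = kernel @ (sfr * dr)               # unnormalised radial convolution
    dA = 2.0 * np.pi * r * dr                   # annulus areas
    sfr_eff *= np.sum(sfr * dA) / np.sum(sfr_eff * dA)  # enforce conservation
    return sfr_eff
```

By construction the blur lowers and widens any sharp peak in $\dot{\Sigma}_*$ while conserving the total momentum budget injected over the disc.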
We do not have direct measurements of either the velocity dispersions of associations or the bulk velocities of clusters, but we note that, at larger galactic radii, bound clusters and unbound associations appear to have roughly the same velocity dispersions, so we can use the measured velocity dispersions within CMZ clusters as a rough proxy for the typical velocity dispersion $v_*$. Observed one-dimensional velocity dispersions in Galactic centre star clusters such as the Arches \citep{clarkson12a} and Quintuplet \citep{stolte14a} are typically $\approx 5-6$ km s$^{-1}$, and certainly no more than 10 km s$^{-1}$, and these clusters are formed at $r\approx 90$ pc from the Galactic centre, where the circular velocity $v_\phi \approx 150$ km s$^{-1}$. This suggests that $(4/3)(v_*/v_\phi)\sim 0.05$, and so we adopt $\epsilon_r = 0.05$ as a fiducial value. We also explore variations around this choice. With the rate of momentum injection in hand, we can now proceed to compute the rate at which star formation feedback both drives turbulence and launches winds. Following \citet{krumholz06d}, \citet{matzner07a}, \citet{goldbaum11a}, and \citet{faucher-giguere13a}, we approximate that supernova remnants and similar bubbles merge with the background turbulence and add their energy to it once their expansion velocity decreases to the turbulent velocity, in which case the rate of energy injection into turbulence produced by a momentum injection rate per unit area $d\dot{p}/dA$ is approximately \begin{equation} \left(\frac{d\dot{E}}{dA}\right)_{\rm SF,turb} = \sigma\left(\frac{d\dot{p}}{dA}\right). \end{equation} To avoid producing unphysically-large velocity dispersions in very low surface density cells, we suppress energy injection in cells with surface densities below a minimum value of $1$ $M_\odot$ pc$^{-2}$. 
Thus our final expression for the rate of energy injection by star formation is \begin{equation} \label{eq:edotturb} \left(\frac{d\dot{E}}{dA}\right)_{\rm SF,turb} = \sigma\left(\frac{d\dot{p}}{dA}\right) e^{-\Sigma_{\rm lim}/\Sigma}, \end{equation} with $\Sigma_{\rm lim} = 1$ $M_\odot$ pc$^{-2}$. The final term we must compute is the rate at which momentum injection drives winds off the disc. We compute this rate following the formalism of \citet{thompson16a}. The essential idea of this model is that turbulence will produce a lognormal distribution of column densities in the disc. For a fixed rate of momentum injection per unit area, one can compute a critical column density below which the inertia of the gas is small enough that the upwards momentum injection produces a force that exceeds the force of gravity, leading material to be ejected. The rate of mass ejection depends on the ratio of the momentum injection rate to the mean Eddington injection rate, and on the Mach number of the turbulence, which determines the dispersion of column densities. The Mach number is simply $\mathcal{M} = \sigma/\sigma_{\rm th}$ where $\sigma_{\rm th}$ is the thermal velocity dispersion, which we take to be $0.5$ km s$^{-1}$ as in \citetalias{krumholz15d}. To compute the Eddington injection rate, we must know the depth of the potential from which the gas must escape, including both the gaseous and stellar\footnote{``Stellar" here should be understood to include any contribution from collisionless dark matter as well.} contributions. The former is easy to compute: for gas of surface density $\Sigma$ in a thin disc, the gravitational acceleration is simply $g_{\rm gas} = 2\pi G \Sigma$, independent of height. 
The corresponding acceleration from the stellar potential is somewhat trickier to estimate, because the stars have a much larger scale height than the gas, and thus the gravitational acceleration experienced by a parcel of gas will increase as it rises above the midplane in a wind. To escape from the CMZ and not simply be puffed above the disc to fall back, the gas must have enough momentum to overcome the gravitational acceleration well above the disc. Computing this properly would require knowledge of the full three-dimensional stellar potential, which is only poorly constrained, but we can make a rough estimate. Following Paper I, we note that, in spherical symmetry, the stellar mass density at radius $r$ that is required to produce a rotation curve with velocity $v_\phi$ is given by \begin{equation} \rho_{*,\rm sphere} = (1+2\beta) \frac{v_\phi^2}{4\pi G r^2}, \end{equation} and for such a spherical distribution the characteristic scale height is $\sim r$. We therefore approximate the stellar acceleration as $g_* \approx 2\pi G \rho_{*,\rm sphere} r$. A more flattened distribution would raise $\rho_*$ but decrease the scale height of the stellar distribution by the same factor, and thus produce about the same net result for the acceleration. Combining the gaseous and stellar contributions, the Eddington momentum injection rate in a region with gas surface density $\Sigma$ is \begin{equation} \label{eq:pdotedd} \left(\frac{d\dot{p}}{dA}\right)_{\rm Edd} = \Sigma (g_{\rm gas} + g_*), \end{equation} and, following \citet{thompson16a}, we define the parameter $x_{\rm crit}$ as \begin{equation} \label{eq:xcrit} x_{\rm crit} = \ln \left[\frac{d\dot{p}/dA}{\left(d\dot{p}/dA\right)_{\rm Edd}}\right]. 
\end{equation} Note that we use $(g_{\rm gas} + g_*)$ rather than simply $g_{\rm gas}$ when computing the Eddington rate, as opposed to our approach in computing the virial ratio (cf.~\autoref{eq:pmp_eq}), because for the latter we are concerned with whether self-gravity can induce the gas to collapse, while for the former we are concerned with the question of whether supernovae inject enough momentum into the gas to unbind it from both itself and from the stellar potential. Given $\mathcal{M}$ and $x_{\rm crit}$, \citet{thompson16a} show that the mass ejection rate is given by \begin{equation} \label{eq:sigmawind} \dot{\Sigma}_{\rm wind} = \zeta \Sigma \frac{\sigma}{H_g}, \end{equation} where \begin{eqnarray} \zeta & = & \frac{1}{2}\left[1 - \textrm{erf}\left(\frac{-2 x_{\rm crit} + \sigma_{\ln\Sigma}^2}{2\sqrt{2}\sigma_{\ln \Sigma}}\right)\right] \\ \sigma_{\ln\Sigma}^2 & = & \ln\left(1 + R\frac{\mathcal{M}^2}{4}\right) \\ R & = & 0.5 \left(\frac{\mathcal{M}^{-1.0}-1}{1-\mathcal{M}^{1.0}}\right). \end{eqnarray} Physically, \autoref{eq:sigmawind} simply asserts that gas with little enough inertia to be accelerated to the escape speed in a disc crossing time will be removed on that same timescale, while material of higher inertia, as implied by higher surface density, will not. Note that the dispersion in column densities $\sigma_{\ln\Sigma}$ is smaller than the corresponding dispersion in volume density for the same Mach number as a result of line of sight averaging. All the above expressions are valid in the limit $\mathcal{M}\gg 1$. As with energy injection, we exponentially suppress this effect once the surface density has been driven too low, in order to avoid generating unphysically low surface density cells that produce numerical problems. Thus in our code we modify \autoref{eq:sigmawind} to \begin{equation} \dot{\Sigma}_{\rm wind} = \zeta \Sigma \frac{\sigma}{H_g} e^{-\Sigma_{\rm lim}/\Sigma}.
\end{equation} \subsection{Numerical Limits} One final modification we make to the code is to impose a floor on the column density and a corresponding ceiling on the temperature. We do this because, after very long run times, cells near the inner edge of our grid can reach very low column densities and very high velocity dispersions not as a result of winds, but simply as a result of advection converting gravitational potential energy to velocity dispersion. This does not affect the results or the ability of the code to run, but it does result in time steps that are inconveniently small. We therefore add the following purely numerical source terms in all cells: \begin{eqnarray} \dot{\Sigma}_{\rm num} & = & \frac{\Sigma_{\rm floor}}{r/v_\phi} \left[\frac{e^{\Sigma_{\rm floor}/\Sigma}}{1 + e^{(\Sigma/\Sigma_{\rm floor})^2}}\right] \\ \dot{E}_{\rm num} & = & -\frac{\Sigma \sigma_{\rm NT}^2}{r/v_\phi}\left[\frac{e^{\sigma_{\rm NT}/\sigma_{\rm ceil}}}{1 + e^{(\sigma_{\rm ceil}/\sigma_{\rm NT})^2}}\right] \end{eqnarray} where $\sigma_{\rm NT} = \sqrt{\sigma^2 - c_s^2}$ is the non-thermal velocity dispersion, $c_s = 0.5$ km s$^{-1}$ is our adopted thermal sound speed, $\Sigma_{\rm floor} = 10^{-4}$ $M_\odot$ pc$^{-2}$, and $\sigma_{\rm ceil} = 400$ km s$^{-1}$. Thus these terms artificially add mass and remove energy to keep the surface density from falling below $10^{-4}$ $M_\odot$ pc$^{-2}$ and the velocity dispersion from increasing above 400 km s$^{-1}$; both source terms are suppressed as $e^{-x^2}$ in cells not near these limits. We have verified that both of these source terms change the total mass or energy in the computational domain by only a tiny amount over the full course of the simulations, while increasing the mean time step by a factor of $\sim 100$. \section{Results} \label{sec:results} \begin{table*} \centering \caption{List of simulations. 
\label{tab:sims}} \begin{tabular}{@{}l@{\qquad}ccc@{\qquad}ccc@{}} \hline & \multicolumn{3}{c}{Input Parameters} & \multicolumn{3}{c}{Results}\\ Run Name & $\dot{M}_{\rm in}$ & $\epsilon_r$ & $\epsilon_{\rm ff, 0}$ & SFE & $\nu_{\rm max}^{-1}$ (observed) & $\nu_{\rm min}^{-1}$ (observed) \\ & [$M_\odot$ yr$^{-1}$] & & & & [Myr] & [Myr] \\ \hline m01r050f10 & 0.1 & 0.050 & 0.010 & 0.92 & 23 (23) & 10 (10) \\ m03r025f10 & 0.3 & 0.025 & 0.010 & 0.70 & 42 (42) & 6 (8) \\ m03r050f05 & 0.3 & 0.050 & 0.005 & 0.72 & 23 (23) & 10 (10) \\ m03r050f10 & 0.3 & 0.050 & 0.010 & 0.72 & 21 (21) & 5 (8) \\ m03r050f20 & 0.3 & 0.050 & 0.020 & 0.67 & 42 (42) & 4 (8) \\ m03r100f10 & 0.3 & 0.100 & 0.010 & 0.59 & 15 (15) & 7 (7) \\ m10r050f10 & 1.0 & 0.050 & 0.010 & 0.48 & 42 (42) & 7 (8) \\ \hline \end{tabular} \begin{tablenotes} \item Here SFE is the star formation efficiency, defined as the time-averaged ratio of mass converted to stars to mass converted into stars plus lost to the wind (see \autoref{eq:SFE}). Note that this is distinct from both the instantaneous star formation efficiency of a single cloud and the star formation rate per free-fall time $\epsilon_{\rm ff}$. For the timescales $\nu_{\rm max}^{-1}$ and $\nu_{\rm min}^{-1}$, the first figure is the value computed using the true star formation rate, while the second (in parentheses) is the figure using the observationally-inferred star formation rate. \end{tablenotes} \end{table*} In \autoref{tab:sims} we summarise the full set of simulations that we have run, and collect various quantitative results for them.
Simulations vary only in the value of the accretion rate $\dot{M}_{\rm in}$ into the CMZ, the value of the parameter $\epsilon_r$ that determines the radial extent over which stellar feedback is spread, and the parameter $\epsilon_{\rm ff,0}$ that defines the rate of star formation per free-fall time at a virial ratio of unity; simulation names follow the convention m\textit{XX}r\textit{YYY}f\textit{ZZ}, where $XX=10\dot{M}_{\rm in}/(M_\odot\,\mathrm{yr}^{-1})$, $YYY=1000\epsilon_r$, and $ZZ=1000\epsilon_{\rm ff}$. All other parameters are as described in \autoref{sec:model}, or in \citetalias{krumholz15d}. We run all simulations for 500 Myr. \subsection{Qualitative Behaviour} \label{ssec:example} \begin{figure*} \includegraphics[width=\textwidth]{summary_fiducial} \caption{ \label{fig:summary_fiducial} Summary of the outcome of the fiducial simulation m03r050f10. In each panel, radial position is indicated on the $x$ axis and evolution time on the $y$ axis. Coloured panels indicate the values of the quantities indicated in the colour bars: gas surface density $\Sigma$, velocity dispersion $\sigma$, scale height $H$, virial ratio $\alpha_{\rm vir}$, depletion time $t_{\rm dep} = \Sigma/\dot{\Sigma}_*$, star formation rate $\dot{\Sigma}_*$, wind mass launching rate $\dot{\Sigma}_{\rm wind}$, and momentum injection rate $d\dot{p}/dA$. In the two centre-right panels, the line plots show the rotation velocity $v_\phi$ (top) and the dimensionless rate of shear $1-\beta$ (bottom) as a function of radius. Note that we plot only a portion of the simulation domain in order to emphasise interesting features. } \end{figure*} We first focus on run m03r050f10 ($\dot{M}_{\rm in} = 0.3$ $M_\odot$ yr$^{-1}$, $\epsilon_r = 0.05$, $\epsilon_{\rm ff} = 0.01$), since it was run with our fiducial parameter choices, and many of the qualitative features we find in this run are common to all the simulations. \autoref{fig:summary_fiducial} summarises the outcome of this simulation. 
Gas enters from the outer edge of the computational domain and flows inward toward the origin as a result of acoustic instability.\footnote{We caution that this acoustic instability-dominated region is at the edge of applicability for our thin disc model. The transport equations that \texttt{VADER} solves are valid to order $(H_g/r)^2$ \citep{krumholz10c, krumholz15a}, and in the acoustic region $H_g/r$ is in the range $0.3-0.5$. Thus in this region we are dropping terms that are smaller than the ones we have retained by only $\approx 10-25\%$. That said, since there are no interesting dynamics in this region, and the gas simply flows through, the impact of such errors is likely to be minimal in any event.} Just inside $100$ pc, where the rotation curve turns from near-flat to near-solid body, this instability shuts off due to the loss of shear. The ``dead zone" where the shear is too small to drive acoustic instability is most easily visible in the plot of gas velocity dispersion, where it manifests as a region where the dispersion falls to low values until star formation begins and pumps it back up. In this dead zone, gas accumulates and, as this happens, the velocity dispersion, scale height, and virial ratio all drop. Immediately outside the dead zone the scale height remains fairly constant at tens of pc and the velocity dispersion at tens of km s$^{-1}$, but inside the dead zone, the velocity dispersion drops as low as $\sim 1$ km s$^{-1}$ and the scale height reaches $\sim 1$ pc. This first occurs at $\sim 15-20$ Myr of evolution and, at this point, star formation begins. Momentum injection from star formation begins in earnest a few Myr later, and this in turn drives a wind with a mass flux comparable to the star formation rate, while also pumping up the turbulent velocity dispersion, scale height, and virial ratio, all of which lower the star formation rate. 
By $\sim 100$ Myr of evolution, the system has settled into a quasi-steady cycle, which we illustrate further in \autoref{fig:cycle_fiducial}. Star formation thereafter proceeds in bursts, always centred on a ring located at the shear minimum. To be quantitative, the time-averaged star formation rate peaks at $r_{\rm peak}=100$ pc, whereas the minimum of shear is at $r=81$ pc. Averaged over time, material at $r = 100 \pm 10$ pc accounts for 35\% of the mass and 48\% of the star formation in the computational domain. The velocity dispersion, virial ratio, and scale height in this region undergo cycles of increase and decay, oscillating between $\sigma \approx 1-10$ km s$^{-1}$, $H \approx 0.1 - 10$ pc, $\alpha_{\rm vir} \approx 1 - 2$. These in turn drive corresponding cycles in the depletion time, star formation rate, and momentum injection rate. Examining the final panel in \autoref{fig:cycle_fiducial}, one can see a clear phase shift between momentum injection and star formation: at the start of the time interval shown (blue points), the momentum injection rate is high and the star formation rate is low. After $\sim 20$ Myr the momentum injection rate declines, and after $\sim 30$ Myr the star formation rate rises while the momentum injection rate remains low. Finally, at $\sim 40$ Myr, the momentum injection rate rises again, returning to a value similar to that at the start of the cycle. \begin{figure} \includegraphics[width=\columnwidth]{cycle_fiducial} \caption{ \label{fig:cycle_fiducial} The cycle of star formation in the star-forming ring in run m03r050f10. The quantity shown on the horizontal axis is the total star formation rate within the star-forming region at $r = 100\pm 10$ pc. The quantities plotted on the vertical axes are the scale height $H$, virial ratio $\alpha_{\rm vir}$, velocity dispersion $\sigma$, and total momentum injection rate $\dot{p}$ in this region. 
We compute the first three of these quantities as averages over the ring, weighted by the star formation rate in each annulus; total momentum injection rate $\dot{p}$ and the star formation rate $\dot{M}_*$ are integrated over the ring. Each circle represents a snapshot in time separated by $0.4$ Myr, and the points plotted cover a time interval from $460-500$ Myr of evolution. Points are coloured by time offset from 460 Myr. } \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{sfr_fiducial} \caption{ \label{fig:sfr_fiducial} Area-integrated rates of star formation (solid blue line), mass loss via winds (solid red line), and mass inflow from the outer boundary (dashed black line) in run m03r050f10. } \end{figure} We can see the bursts more clearly by integrating over the entire disc. In \autoref{fig:sfr_fiducial} we show the integrated rates of star formation, mass inflow, and mass outflow (via the wind) in the entire disc. It is clear that, once the system reaches quasi-equilibrium, star formation is an episodic phenomenon with a rough period of tens of Myr. The wind mass loss rate is also periodic, but with smaller oscillations than the star formation rate. Wind mass launching has a slight phase lag relative to star formation, as one might expect: winds are launched a few Myr after a peak in the star formation rate, since this is when supernova momentum injection peaks. Before proceeding further, it is useful to distinguish between the true star formation rate and what an observer would infer using a star formation tracer.
The most common tracers for the Galactic CMZ are based on ionising photon production, and we therefore compute the total ionising luminosity produced in our simulations via \begin{equation} Q = \int_0^\infty \dot{M}_*(t-t')\, q(t')\, dt', \end{equation} where $q(t)$ is the ionising luminosity per unit mass for a simple stellar population of age $t$; we derive this quantity from the same \texttt{starburst99} computations described in \autoref{ssec:feedback}. We then convert this to a star formation rate via \begin{equation} \dot{M}_{*,\rm obs} = \frac{Q}{1.57\times 10^{53}\,(\mathrm{photons}\;\mathrm{s}^{-1})/(M_\odot\;\mathrm{yr}^{-1})}, \end{equation} where the conversion factor is derived from a \texttt{starburst99} computation for a population with a constant star formation rate at an age of 50 Myr. Because $\dot{M}_{*,\rm obs}$ is derived from an integral over the stellar population, it slightly lags and smooths the true star formation rate. \begin{figure} \includegraphics[width=\columnwidth]{tdep_pdf_fiducial} \caption{ \label{fig:tdep_pdf_fiducial} Probability distribution function $dp/d\log t_{\rm dep}$ for the logarithm of the instantaneous depletion time $t_{\rm dep}$, computed using both the true (solid line) and observed (dashed line) area-integrated star formation rates in run m03r050f10. The dotted vertical line shows a depletion time of $2$ Gyr. } \end{figure} With this quantity in hand, in \autoref{fig:tdep_pdf_fiducial} we plot the corresponding probability distribution function (PDF) for the true and observationally-inferred log depletion times, $dp/d\log t_{\rm dep}$. This quantity is simply the probability density that the system would show a log depletion time between $\log t_{\rm dep}$ and $\log t_{\rm dep} + d\log t_{\rm dep}$ if it were observed at a random time; in computing this statistic we only consider times $t>200$ Myr, to exclude the phase when the system is still settling into steady state.
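As a concrete illustration, the conversion from the true to the ionisation-inferred star formation rate described above amounts to a causal convolution of the star formation history with the ionising-photon kernel $q(t)$, followed by division by the calibration factor. The sketch below is a minimal stand-in for our actual procedure, assuming a uniformly sampled history; the function \texttt{q\_of\_age} is a hypothetical placeholder for an interpolation of the \texttt{starburst99} tables.

```python
import numpy as np

def observed_sfr(t, sfr_true, q_of_age, conv=1.57e53):
    """Ionisation-inferred SFR from a true SFR history.

    t        : uniformly spaced times [yr]
    sfr_true : true SFR at each time [Msun/yr]
    q_of_age : ionising photon rate per unit mass for a simple stellar
               population of age tau [photons/s/Msun]; hypothetical
               stand-in for an interpolation of the starburst99 tables
    conv     : (photons/s) per (Msun/yr) for continuous star formation
               at an age of 50 Myr
    """
    dt = t[1] - t[0]
    kernel = q_of_age(t - t[0])            # q(t') sampled on the same grid
    # Q(t) = int_0^t Mdot_*(t - t') q(t') dt', a causal convolution
    Q = np.convolve(sfr_true, kernel)[:len(t)] * dt
    return Q / conv                        # Mdot_{*,obs} [Msun/yr]
```

Because the kernel extends over tens of Myr, the inferred rate lags and smooths the true one, which is the origin of the difference between the solid and dashed curves in \autoref{fig:tdep_pdf_fiducial}.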
We see that the system spends $\approx 40\%$ of its time with a depletion time $>1$ Gyr, a typical value for outer discs, and the other $\approx 60\%$ with a substantially smaller depletion time, which we might characterise as an outburst state. If we use the observationally-inferred star formation rate instead, these figures shift to 30\% in ``normal" star-forming mode and 70\% in ``starburst" mode; the difference arises because the use of the ionisation-based star formation rate smears out the bursts, making them appear to last longer. \begin{figure} \includegraphics[width=\columnwidth]{periodogram_fiducial} \caption{ \label{fig:periodogram_fiducial} Periodogram of the true (solid line) and observed (dashed line, almost completely hidden by the solid line) area-integrated star formation rates in run m03r050f10. The $x$ axis shows the period, and the $y$ axis shows power normalised by the power in the highest power bin. We compute this periodogram using a Hann window function. } \end{figure} To further characterise the burst behaviour, in \autoref{fig:periodogram_fiducial} we show a periodogram of the true and observed star formation rates; modulo issues of numerical aliasing due to the finite number of samples and the non-periodicity of the data, this quantity is simply the power spectrum of the star formation history, plotted as a function of inverse frequency. From the periodogram, we see that there are primary power spikes at tens of Myr, with secondary spikes at $\sim 5-10$ Myr. To make this quantitative, we define two timescales, $\nu^{-1}_{\rm min}$ and $\nu^{-1}_{\rm max}$, as the minimum and maximum inverse frequencies for which the power spectral density $P(\nu)$ is equal to 10\% of its peak value.
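For concreteness, the extraction of $\nu^{-1}_{\rm min}$ and $\nu^{-1}_{\rm max}$ can be sketched as below. This is a minimal pure-\texttt{numpy} analogue of the analysis, assuming a uniformly sampled star formation history; the exact windowing and normalisation details of our pipeline may differ.

```python
import numpy as np

def burst_timescales(sfr, dt, thresh=0.1):
    """Shortest and longest periods at which the power spectral density
    of an SFR history reaches `thresh` times its peak value.

    sfr : SFR samples at uniform spacing dt [Myr]
    Returns (nu_min^-1, nu_max^-1) in the same time units as dt.
    """
    n = len(sfr)
    # Hann-windowed periodogram of the mean-subtracted history
    spec = np.fft.rfft((sfr - sfr.mean()) * np.hanning(n))
    power = np.abs(spec) ** 2
    freq = np.fft.rfftfreq(n, d=dt)
    freq, power = freq[1:], power[1:]      # drop the zero-frequency bin
    periods = 1.0 / freq[power >= thresh * power.max()]
    return periods.min(), periods.max()    # (nu_min^-1, nu_max^-1)
```

Applied to a toy history containing oscillations with 6 and 20 Myr periods, this returns timescales close to those two values, up to the frequency resolution set by the record length.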
The choice of 10\% is somewhat arbitrary, but results are not very sensitive to the exact threshold we choose, and visual examination of the periodograms and time series shows that this choice does a good job of reproducing what one would pick out by eye. Intuitively, we may think of $\nu^{-1}_{\rm min}$ and $\nu^{-1}_{\rm max}$ as characterising the shortest and longest timescale on which the star formation rate varies, with the former describing the short duration of individual bursts, and the latter describing the longer periodicity between bursts. For run m03r050f10, we find $\nu^{-1}_{\rm min} = 5$ Myr and $\nu^{-1}_{\rm max} = 21$ Myr; if we consider the observationally-inferred star formation rate instead, the longest period remains roughly the same, while the shortest one increases to about 8 Myr. \begin{figure} \includegraphics[width=\columnwidth]{tdep_mgas_fiducial} \caption{ \label{fig:tdep_mgas_fiducial} Depletion time (top panel) and gas mass (bottom panel) in run m03r050f10. We show results both for the entire computational domain (blue line) and for the ring of material at $r_{\rm peak}\pm 10$ pc, where $r_{\rm peak}=100$ pc is the radius at which the time-averaged star formation rate reaches its maximum. } \end{figure} During an outburst the star formation rate rises by a factor of $\sim 100$, from a few percent of the gas inflow rate to several times the inflow rate. As noted above, the wind mass loss rate varies less than the star formation rate, showing only factor of $\sim 10$ changes from peak to trough. Consequently, there is wide variation in the ratio of the wind outflow rate to the star formation rate, known as the mass loading factor. This can be as high as $\sim 10$ immediately after an outburst, and as small as $\sim 0.3$ immediately after a burst begins, before supernovae begin to occur. 
Averaging over all times after $200$ Myr, once the gas mass in the system reaches steady state, we find that the star formation efficiency is \begin{equation} \label{eq:SFE} \mathrm{SFE} = \frac{\left\langle\dot{M}_*\right\rangle}{\left\langle\dot{M}_* + \dot{M}_{\rm wind}\right\rangle} = 0.72, \end{equation} where the angle brackets indicate an average over times $>200$ Myr. Note that, since the total gas mass in the CMZ in our model is in steady state, the star formation efficiency is simply the mean fraction of the gas that enters the CMZ that is ultimately converted to stars. In our fiducial model, we find that 72\% of the inflowing mass goes into stars, while the remaining 28\% is lost in the form of winds. Phrased in terms of a mass loading factor $\eta = \langle\dot{M}_{\rm wind}\rangle/\langle\dot{M}_*\rangle = (1-\mathrm{SFE})/\mathrm{SFE}$, this corresponds to a time-averaged value of $(1-0.72)/0.72 \approx 0.39$. What causes the bursts? As limiting cases, we could imagine that changes in the star formation rate are driven by changes in the gas mass while the gas depletion time remains fairly constant, by changes in the depletion time while the gas mass remains fairly constant, or by some combination of the two. To address this question, in \autoref{fig:tdep_mgas_fiducial} we show the total gas mass and the gas depletion time measured from the simulations. We show these quantities both for the entire computational domain, and for the ring of material at $r_{\rm peak}\pm 10$ pc. The Figure clearly shows that, while there is some periodic variation in the gas mass, it is far smaller than the variation in the depletion time. Thus bursts are not caused by wholesale ejection of mass from the ring, though \autoref{fig:summary_fiducial} shows that there clearly are local evacuations. Instead, they are caused when the gas is driven to higher velocity dispersions by the effects of stellar feedback. This in turn lowers the star formation rate, both by raising the virial ratio and by increasing the gas scale height and thus lowering the density.
After several tens of Myr, the momentum injection rate drops and is no longer able to sustain the high level of turbulence. The velocity dispersion decreases and another outburst cycle begins. We note that this form of the feedback cycle is contrary to what we proposed in \citetalias{krumholz15d}, where we conjectured that there would be wholesale ejection. \subsection{Effects of Varying $\epsilon_{\rm ff,0}$} \begin{figure*} \includegraphics[width=\textwidth]{compare_epsff} \caption{ \label{fig:compare_epsff} Results from runs m03r050f05, m03r050f10, and m03r050f20 (left, centre, and right columns), which all have the same gas inflow rate and feedback prescription, but differ in the parameter $\epsilon_{\rm ff,0}$ that describes the star formation rate per dynamical time. The top three rows show, as a function of time, from top to bottom: mass per unit time $\dot{M}$ converted to stars (blue), lost in winds (red), and entering the outer edge of the disc (black dashed); gas depletion time $t_{\rm dep}$ computed over the entire disc (black) and over the ring of peak star formation at $r = 100\pm 10$ pc (green); gas mass $M_{\rm gas}$ in the entire disc (black) and the star-forming ring (green). The bottom two rows show the periodogram of the star formation rate and the PDF of depletion times, for both the true (solid black) and observed (dashed black) star formation rates. All quantities are computed in exactly the same manner as those shown in \autoref{fig:sfr_fiducial} - \autoref{fig:tdep_mgas_fiducial}, and the data in the central column here are identical to the data shown in those figures. } \end{figure*} The parameter $\epsilon_{\rm ff,0}$, which controls the star formation rate per free-fall time, is significantly constrained by observations. However, uncertainties remain nonetheless, not least because the virial ratios of observed objects are generally only determinable to within a factor of $\sim 2-3$.
For this reason we compare runs m03r050f05, m03r050f10, and m03r050f20, in which we fix all parameters but $\epsilon_{\rm ff,0}$, which we vary from $0.005$ to $0.02$. We compare the star formation histories, depletion times, gas masses, periodograms, and depletion time PDFs in \autoref{fig:compare_epsff}. Examining the different columns in the Figure, we see that all runs again show very similar qualitative behaviour. The main effect of varying $\epsilon_{\rm ff,0}$ is to change the amplitude and timescale of the periodic variation. Using $\epsilon_{\rm ff,0}=0.005$, a factor of two smaller than our fiducial case, leads to star formation and wind outflow rates that vary somewhat more slowly but with somewhat larger amplitude than in our fiducial case, while $\epsilon_{\rm ff,0}=0.02$ produces more frequent oscillations of smaller magnitude. This is reflected in the spread of the depletion time PDF as well, with higher values of $\epsilon_{\rm ff,0}$ leading to shorter depletion times especially during quiescence, thereby compressing the overall range of depletion times achieved. With $\epsilon_{\rm ff,0} = 0.02$, the depletion time never rises above 1 Gyr, which is incompatible with observations of galactic centres. However, the time variation of the star formation rate, as measured by the periodogram, is very similar in the three runs, as is the time-averaged star formation efficiency. The exact values of $\nu_{\rm min}^{-1}$, $\nu_{\rm max}^{-1}$, and SFE are given in \autoref{tab:sims}. Clearly, the periodic behaviour we observe is insensitive to the exact value of $\epsilon_{\rm ff,0}$. \subsection{Effects of Varying $\epsilon_r$} The most uncertain value in our model is $\epsilon_r$, the radius over which stellar feedback is spread due to the fact that newly-formed stars are not on orbits that are identical to those of the gas.
We have argued that it should be approximately $0.05$ based on the spreads in orbits we expect given the observed velocity dispersions of CMZ star clusters, but these velocity dispersions are purely empirical inputs, and could conceivably have been different at different times or in different galaxies. To test how uncertainty in $\epsilon_r$ might affect our conclusions, in runs m03r025f10, m03r050f10, and m03r100f10, we hold all parameters except $\epsilon_r$ fixed, and vary the value of $\epsilon_r$ from $0.025$ to $0.1$ (see \autoref{tab:sims}). \begin{figure*} \includegraphics[width=\textwidth]{compare_er} \caption{ \label{fig:compare_er} Same as \autoref{fig:compare_epsff}, but now comparing runs m03r025f10, m03r050f10, and m03r100f10, which use identical values for the parameters describing star formation and inflow, but differ in the radial extent over which feedback is injected. } \end{figure*} We show the results of the three simulations with varying $\epsilon_r$ in \autoref{fig:compare_er}. Qualitatively, runs m03r025f10 and m03r050f10, corresponding to $\epsilon_r = 0.025$ and $0.05$, are nearly identical. Run m03r100f10, corresponding to $\epsilon_r = 0.1$, also shows bursty behaviour, but its periodicity is much more regular than in the other two runs. While the PDF of depletion times is much the same, the periodogram for this run shows a single dominant peak at about 20 Myr, rather than several peaks as shown in our other runs. Examining the spatial distribution of gas and star formation for this run (not shown), we see that, rather than the patchy and irregular pattern of star formation found in the other runs, it has a more regular morphology, with star formation always occurring at the same radial location. Nonetheless, we find that burstiness on $\sim 20$ Myr timescales is again a generic outcome of the simulations.
We can understand these results, and in particular the difference between the $\epsilon_r = 0.1$ and smaller $\epsilon_r$ cases, by thinking about the spread in feedback compared to the width of the star-forming ring. As noted above, in our fiducial case close to 50\% of the star formation takes place within a ring of $\pm 10$ pc width about $r=100$ pc, so the fractional width of the star-forming region is roughly 10\%. For $\epsilon_r < 0.1$, the feedback is localised within the star-forming ring, and causes disruption of patches of it. This leads to the chaotic, bursty behaviour we observe for the $\epsilon_r = 0.025$ and $0.05$ cases. On the other hand, for $\epsilon_r \gtrsim 0.1$, the feedback becomes close to uniform across the star-forming ring. This reduces its effectiveness somewhat, since some of the momentum is delivered to the non-star-forming gas outside the ring, and also means that the star-forming region reacts coherently rather than locally to the feedback. This coherent response explains the regular pattern we observe in star formation rates for the $\epsilon_r = 0.1$ case. \subsection{Effects of Varying $\dot{M}_{\rm in}$} \begin{figure*} \includegraphics[width=\textwidth]{compare_mdot} \caption{ \label{fig:compare_mdot} Results from runs m01r050f10, m03r050f10, and m10r050f10 (left, centre, and right columns). These all have the same parameters for star formation and feedback, but differ in the mass inflow rate into the CMZ that we assume, with values of $\dot{M}_{\rm in} = 0.1$, $0.3$, and $1.0$ $M_\odot$ yr$^{-1}$, respectively. Panels are the same as in \autoref{fig:compare_er}. } \end{figure*} The final parameter of our model that we consider varying is $\dot{M}_{\rm in}$, the mass accretion rate onto the CMZ from outside. 
We explore how this parameter affects the behaviour of the CMZ via models m01r050f10, m03r050f10, and m10r050f10, where we use values of $\dot{M}_{\rm in}=0.1$, 0.3, and $1.0$ $M_\odot$ yr$^{-1}$, while holding all other parameters fixed. We show the results of this experiment in \autoref{fig:compare_mdot}. Examining the Figure, it is clear that the primary quantities influenced by the inflow rate are, not surprisingly, the star formation and wind mass ejection rates and the steady-state gas mass, both of the CMZ as a whole and of the 10 pc ring. All these quantities appear to scale nearly linearly with the inflow rate. The temporal pattern of star formation is qualitatively the same in all the runs. The main systematic difference we see is in the partition of the inflow between star formation and winds. At the lowest inflow rate, $0.1$ $M_\odot$ yr$^{-1}$, the star formation rate exceeds the mass outflow rate at essentially all times, leading to a comparatively high star formation efficiency of $\approx 90\%$. In contrast, at an inflow rate of $1.0$ $M_\odot$ yr$^{-1}$ the wind outflow rate and star formation rate are nearly the same, leading to a star formation efficiency closer to 50\%. \section{Discussion} \label{sec:discussion} \subsection{A Dynamical Model of the CMZ} We are now in a position to make some general statements about how star formation in the Milky Way's CMZ, and the analogous regions of other galaxies, should behave. Gas enters the CMZ as a result of transfer by the Galactic Bar, and the bar further drives instabilities in the region of high shear that transport mass and keep the gas too turbulent to form stars. The instability-driven transport ends where the rotation curve switches from flat to near solid-body, and gas accumulates in this region, forming a persistent ring-like structure. Within the ring star formation occurs in bursts.
The driving feature of the bursts is an alternating cycle whereby turbulence decays, leading to high densities and low virial parameters, both of which boost the star formation rate. This leads to the formation of a large stellar population, which begins producing supernovae a few Myr later, raising the level of turbulence and driving the star formation rate back down. The low rate of star formation continues for a while, but over time the supernovae fade and the turbulence decays, causing the cycle to repeat. At the same time, the supernova feedback drives a wind off the CMZ, which carries away a portion of the mass that enters. A key requirement for this cycle to take place is that the timescale for turbulent decay and the onset of star formation is shorter than the time required for the onset of supernova feedback, which prevents the system from reaching an equilibrium in which injection of energy by supernovae balances dissipation. This condition is satisfied in the CMZ, because in the low-shear region at 100 pc, the orbital period is only $\approx 3$ Myr. Based on our simulations, we can make the following quantitative predictions about this cycle, which are robust against variations in any of our uncertain parameters. \begin{itemize} \item We predict that the duration of outbursts should be $\sim 5-10$ Myr, while the overall cycle of burst and quiescence should have a period of $\sim 15-40$ Myr. The former number comes mostly from the delay between the onset of star formation and the first supernovae, while the latter comes from the time required for supernovae to cease and for turbulence to decay, allowing gas to become gravitationally unstable again. \item Throughout this cycle there is a persistent, dense gas structure at $\sim 100$ pc from the Galactic Centre, where the Galactic rotation curve begins to turn toward solid body and the shear reaches a minimum. 
The mass in this structure varies periodically, and local patches of it may be evacuated by feedback, but the overall variation in mass in this structure is far smaller than the variation in the star formation rate. Instead, changes in the star formation rate are driven primarily by changes in the mean density and velocity dispersion of this structure, which combine to produce a short depletion time during outbursts and a long one during quiescence. \item During quiescence, the gas depletion time of the CMZ is of order 1 Gyr. During outburst this drops by a factor of $\sim 10-100$, reaching $\lesssim 100$ Myr. The true depletion time is $>1$ Gyr (i.e., comparable to what is seen in outer galaxies) for roughly $40\%$ of the time, and is shorter, indicating a starburst, about 60\% of the time. However, because of the short durations of the bursts, and because the true time-averaged depletion time is only a few hundred Myr, an observationally-determined fraction of the time spent in starburst will depend on the effective integration time of the star formation rate tracer used. If one measures with an ionisation-based star formation tracer, the CMZ spends $\sim 30\%$ of its time with what appears to be a ``normal" depletion time $>1$ Gyr, and $\sim 70\%$ with a shorter depletion time. The longer the integration time, the more time will appear to be spent in outburst. \item At an inflow rate of $0.3$ $M_\odot$ yr$^{-1}$, a slight majority of the gas entering the CMZ is converted to stars, while the rest is ejected in a wind driven primarily by supernova feedback. The balance between star formation and wind loss depends mildly on the inflow rate, with lower inflow rates producing higher star formation efficiencies and higher inflow rates producing lower ones. This wind is launched primarily from the same dense structure where star formation occurs, and carries away a time-averaged mass flux that is slightly smaller than the flux of mass going into stars. 
However, the ratio of wind mass flux to star formation rate undergoes extreme variations, ranging from $\sim 10$ to $\sim 0.03$ depending on where the system is in the outburst cycle. \end{itemize} \subsection{Comparison to the Observed Milky Way CMZ} \begin{figure} \includegraphics[width=\columnwidth]{obs_cycle} \caption{ \label{fig:obs_cycle} Observable cycle of properties of the star-forming ring at $100\pm 10$ pc in run m10r050f10 from $480-500$ Myr (coloured points) as compared to the observed Milky Way ring (gray ellipses). Points are coloured by time since 480 Myr, as in \autoref{fig:cycle_fiducial}, with one point per $0.2$ Myr of time. The properties shown are, from top to bottom: area-weighted mean scale height $H$, mass-weighted mean velocity dispersion $\sigma$, gas depletion time $t_{\rm dep}$, total gas mass $M_{\rm gas}$, and mass-weighted mean Toomre $Q$ parameter for the gas, where at each radius we have $Q = \kappa \sigma/(\pi G \Sigma)$, with $\kappa$ the epicyclic frequency. All quantities are shown as a function of the star formation rate as observed with an ionisation-based tracer. The observational constraints shown in gray are taken from the compilations in \citet{kruijssen14b}, \citet{longmore12a,longmore13a}, and \citet{henshaw16a}; in cases where the authors did not state an uncertainty, we have adopted an uncertainty of a factor of 2. } \end{figure} How do our models compare to what we actually observe in the Milky Way? To address this, we focus on model m10r050f10, which by a variety of metrics appears to be the closest match to the Milky Way's CMZ. We illustrate this in \autoref{fig:obs_cycle}, which compares various observable properties of the star-forming ring at $r=100\pm 10$ pc in the simulation to the same properties observed in the Milky Way's star-forming ring, as summarised in Table 1 of \citet{kruijssen14b} and Table 2 of \citet{longmore13a}.
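For reference, the Toomre $Q$ plotted in \autoref{fig:obs_cycle} can be evaluated from the rotation curve and gas profile as in the sketch below, where the epicyclic frequency is computed as $\kappa = \sqrt{2(1+\beta)}\,\Omega$ with rotation-curve index $\beta = d\ln v_\phi/d\ln r$. The function and unit choices here are illustrative, not taken from our code.

```python
import numpy as np

G = 4.301e-3  # gravitational constant in pc Msun^-1 (km/s)^2

def toomre_q(r, vphi, sigma, Sigma):
    """Toomre Q = kappa*sigma/(pi*G*Sigma) for a gas disc.

    r     : radii [pc]
    vphi  : rotation speed [km/s]
    sigma : velocity dispersion [km/s]
    Sigma : gas surface density [Msun/pc^2]
    """
    Omega = vphi / r                               # angular speed [km/s/pc]
    beta = np.gradient(np.log(vphi), np.log(r))    # d ln v_phi / d ln r
    kappa = np.sqrt(2.0 * (1.0 + beta)) * Omega    # epicyclic frequency
    return kappa * sigma / (np.pi * G * Sigma)     # dimensionless
```

For a flat rotation curve ($\beta = 0$) this reduces to $Q = \sqrt{2}\,\Omega\sigma/(\pi G \Sigma)$, the familiar result for a disc with constant $v_\phi$.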
This figure is analogous to \autoref{fig:cycle_fiducial}, except that since we are interested in observable rather than intrinsic properties, we slightly modify the quantities plotted; for example, we use the observable rather than the true star formation rate, and we plot an area-weighted rather than a star formation-weighted scale height. \autoref{fig:obs_cycle} clearly shows that this run spends a significant amount of time with properties that closely resemble those of the observed star-forming ring in the Milky Way's CMZ.\footnote{While this discussion focuses on the star-forming ring, we note that the macroscopic properties of the models are also consistent with the observed, large-scale spatial distribution of the gas in the CMZ. \citet{longmore13a} find that a dust-inferred gas mass of $1.8\times10^7~{\rm M}_\odot$ resides within $|\ell|<1^\circ$ (or $r<140$~pc), with $2.3\times10^7~{\rm M}_\odot$ residing at $|\ell|>1^\circ$. The masses shown in \autoref{fig:obs_cycle} reproduce the observed gas mass in the inner CMZ to within the uncertainties and our predicted gas mass outside of the star-forming ring of $\sim1.4\times10^7~{\rm M}_\odot$ also provides a good match to the observed mass. Due to the high virial ratio of the gas in the outer CMZ, its scale height is predicted to be substantially larger than that of the star-forming ring ($H\sim70$~pc rather than $H\sim10$~pc). Again, this increase is qualitatively consistent with observations. At $|\ell|>1^\circ$, the total vertical extent of the observed molecular gas emission (traced by $^{12}$CO, see Figure~3 of \citealt{bally10a}) covers more than a degree in latitude (i.e.~more than 140~pc). This increase of the scale height with radius is of the same order as predicted by our models (see \autoref{fig:summary_fiducial}, as well as Figure 13 of \citetalias{krumholz15d}).} At face value, two evolutionary stages in the modelled cycle seem to match best. 
At a time of $t=(482.0,486.9)$ Myr, the star-forming ring in this model has $\dot{M}_{*,\rm obs} = (0.050,0.061)$~$M_\odot$ yr$^{-1}$, $H = (7.1,3.1)$~pc, $\sigma = (11.8,7.4)$~km~s$^{-1}$, $t_{\rm dep} = (0.53,0.42)$~Gyr, $M_{\rm gas} = (2.65,2.58)\times 10^7$~$M_\odot$, and Toomre $Q$ parameter $Q_{\rm gas} = (1.55,0.82)$; all of these properties match the properties of {\it some part of} the observed star-forming ring in the Milky Way within the observational uncertainties. Both of the above two model snapshots are close to the star formation minimum in the cycle, but their evolutionary states do differ. At $t=482.0$~Myr (Case A), the star formation rate is decreasing, as the modelled star-forming ring has experienced a starburst some 5 Myr earlier (at $t=477$ Myr) and will evolve through the star formation minimum in another 3 Myr (at $t=485$ Myr), with the next star formation peak expected in 7 Myr (at $t=489$ Myr). By contrast, in the model snapshot at $t=487$ Myr (Case B), the star formation rate is rapidly increasing, as it is exactly midway between the star formation minimum at $t=485$ Myr and the maximum at $t=489$ Myr, with the most recent starburst 10 Myr earlier. If Case A best describes the star-forming ring in the Milky Way's CMZ, then the formation of the Arches and Quintuplet clusters (with ages of 3.5 and 4.8 Myr, respectively, see \citealt{schneider14a}) has taken place at the height of the most recent starburst. However, the highest-density clouds in the star-forming ring (all situated on the `dust ridge' between Sgr A$^*$ and Sgr B2, which has enough mass to form several Arches-like clusters, \citealt{longmore13b}) have such low velocity dispersions ($<10$~km~s$^{-1}$) and small scaleheights (few pc) that they best match the conditions of Case B \citep[cf.][]{henshaw16a}. In other words, our model predicts that these clouds are unlikely to remain quiescent for another 7 Myr (as would be required in Case A). 
These points suggest that the star-forming ring in the Milky Way's CMZ has a non-zero spread in evolutionary times in the cycle of \autoref{fig:obs_cycle}. This is not surprising; gas is continuously spiralling onto the star-forming ring, implying that a natural time interval for an evolutionary spread is the orbital time of the gas streams within the ring. This time-scale is $\sim4$~Myr \citep{kruijssen15a} and provides a good match to the time difference between the two best-fitting model snapshots. Such a spread does, however, point to a limitation of our axisymmetric assumption. The above scenario has the following implications: \begin{enumerate} \item The star-forming ring spans the entire interval $t=482$--$487$~Myr and {\it on average} resides at the star formation minimum ($t=485$ Myr). \item The previous starburst took place at $t=477$ Myr, $\sim8$ Myr ago. \item The Arches and Quintuplet clusters represent the last clusters that formed during this previous starburst ($\sim5$~Myr ago). \item The dust ridge contains the clouds that will collapse and form stars first during the onset of the upcoming starburst (in 1--2 Myr). \item The non-star-forming gas in the ring, with its high velocity dispersion and large scale height, has recently been accreted. \end{enumerate} \begin{figure} \includegraphics[width=\columnwidth]{galmap} \caption{ \label{fig:galmap} Top panel: column density $N_{\rm H}$ for model m10r050f10 at time $t=485$ Myr as seen from Earth, with positions indicated in Galactic longitude $\ell$ and Galactic latitude $b$. Bottom panel: the image shows the column density map of \citet{molinari11a}, derived from \textit{Herschel} observations.
We have superimposed on it the column density map shown in the top panel, with the colour scale adjusted to match that used in the \textit{Herschel} map; only pixels with $N_{\rm H}>4\times 10^{22}~{\rm cm}^{-2}$, the minimum column in the \textit{Herschel} map, are shown, and those that we do show have been left partially transparent to allow comparison with the underlying image. The coloured circles in the lower panel mark the locations of the star-forming molecular clouds Sgr B2, Sgr B1, and Sgr C, as indicated; coordinates for these structures are taken from Table 3 of \citet{henshaw16a}. The coloured squares mark the positions of the Arches and Quintuplet clusters, and the black star indicates the position of Sgr A$^*$. } \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{pv_plot} \caption{ \label{fig:pv_plot} Position-velocity diagram for the same snapshot as shown in \autoref{fig:galmap}. The top coloured panel, labelled ``Midplane'', shows the column density per unit velocity along a cut through the Galactic midplane. The line plot above it shows the corresponding velocity-integrated column density. The bottom coloured panel, labelled ``z-integrated'', shows the total mass per unit velocity per degree of Galactic longitude, integrating over Galactic latitude. The line plot below it shows the mass per degree of Galactic longitude, integrating over the velocity and Galactic latitude. The coloured circles and squares in the coloured panels indicate the positions and velocities of Sgr A$^*$, Sgr B2, Sgr B1, Sgr C, Arches, and Quintuplet, as indicated. The black line in the coloured panels shows the best-fitting orbital model describing the observed kinematics of the gas stream from \citet{kruijssen15a}, which highlights the difference in kinematics expected when the orbit is eccentric (with $e=0.3$). Data are from the same sources as in \autoref{fig:galmap}.
} \end{figure} We further analyse the observable properties of the snapshot at $t=485$~Myr (corresponding to the average evolutionary phase of the star-forming ring, i.e., at a star formation minimum). To do so, we generate synthetic column density and position-velocity maps for the model, placing them in Galactic coordinates for comparison with the observed CMZ. For this calculation we assume a Galactic Centre distance of $8.5$ kpc \citep{ghez08a}, and following \citet{kruijssen15a} and \citet{henshaw16a} we place the centre of our simulated disc at the position of Sgr A$^*$, which in position-position-velocity space is $(\ell, b, v) = (-0.\!\!^\circ056, -0.\!\!^\circ047, -14.0\mbox{ km s}^{-1})$. We show the column density map for our model, overlaid with the observed column density distribution in the CMZ from \citet{molinari11a}, in \autoref{fig:galmap}. The figure displays an impressive degree of agreement. The predicted vertical gas distribution quantitatively matches the observations, and the extent of the ring is nearly correct as well. The locations of the most active sites of ongoing star formation -- Sgr B2, Sgr B1, and Sgr C -- are exactly where the model predicts that such sites should be found. We show the synthetic position-velocity diagram in \autoref{fig:pv_plot}. This can be compared to observations such as those presented by \citet[their Figures 6--9]{henshaw16a}. In comparison, we find that the positional extent of our model is in good agreement with the data, as one would have expected based on \autoref{fig:galmap}, but the model velocities are somewhat higher than the observed ones. This is also apparent in the offset between the observed velocities of the Sgr B2, B1, and C molecular clouds and those predicted by our model. The discrepancy between model and data reflects the limitations of our assumption of axisymmetry, which permits only circular orbits.
In reality, \citet{kruijssen15a} show that the star-forming ring is only partially filled with dense gas, which orbits in an open stream with non-zero eccentricity $e\approx 0.3$. This orbit extends over $\approx 60$--$120$~pc, in excellent agreement with our model, but in the unstable region the gas does not fill circular orbits; instead, it partially fills elliptical ones. The orientation is such that the Sgr molecular clouds, and the bulk of the dense gas, lie near the apocentres of the orbit, producing line-of-sight velocities substantially smaller than the circular velocities at their projected positions. In addition, the eccentric nature of the orbit leads to orbital precession, which manifests itself as a vertical drift in the position-velocity space of \autoref{fig:pv_plot}. A comparison to the orbital model of \citet{kruijssen15a} shows that the Sgr clouds reside on exactly those parts of the orbit where the line-of-sight components of the velocities are suppressed even further relative to the orbital motion. These effects explain why our model does a very good job of reproducing the position of the star-forming ring, but is less successful at reproducing the line-of-sight velocities. This also highlights the limitations of our axisymmetric assumption: our model is capable of predicting at which galactocentric radii the star formation should occur, and how it should be regulated, but is not adequate for reproducing the detailed kinematics, which likely vary substantially in time in any event. \begin{figure} \includegraphics[width=\columnwidth]{crit_dens} \caption{ \label{fig:critdens} Density structure of the cold interstellar medium for the same snapshot as shown in \autoref{fig:galmap}. Top panel: midplane volume density (black) and critical volume density threshold for star formation (red) as a function of Galactic longitude $\ell$.
Bottom panel: mass fraction of all gas along the line of sight above a volume density $n_{\rm ref}$ as a function of Galactic longitude. The (solid, dashed, dotted) black lines represent $n_{\rm ref}=(10^4, 10^5, 10^6)$~cm$^{-3}$ and the solid red line indicates the gas mass fraction eligible for star formation, i.e., $n_{\rm ref}=n_{\rm crit}$, where $n_{\rm crit}$ is the critical density from the top panel. Throughout most of the Galactic Centre, the star-forming gas fraction is minor ($\ll1$ per cent), but it is predicted to be as high as several per cent in the star-forming ring. } \end{figure} To facilitate future observational tests, we now present a number of simple predictions for the volume density structure of the cold interstellar medium in the Milky Way's CMZ. As demonstrated in \citetalias{krumholz15d}, our model predicts a strong increase towards the Galactic Centre of the midplane volume density ($\rho=\Sigma/2H_g$), the dense gas fraction, and the critical density threshold above which gas decouples from the turbulent flow and can collapse to form stars ($\rho_{\rm crit}=A\alpha_{\rm vir}{\cal M}^2\rho$, with $A$ a constant of order unity, see \citealt{krumholz05c,hennebelle08b, padoan11a, federrath12a}). We quantify this prediction by considering the gas in the midplane and assuming that it follows a lognormal volume density probability distribution function (PDF) as expected for an isothermal, supersonically turbulent medium \citep[e.g.][]{vazquez-semadeni94a,padoan97a,krumholz05c}. For simplicity, we adopt a mixture of compressive and solenoidal turbulence driving \citep[cf.][]{federrath12a} and assume that the magnetic field is not dynamically important. At each galactocentric radius, we calculate the density PDF from the midplane volume density and Mach number provided by our model, and also derive the critical density for star formation as in \citet{krumholz05c}. 
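As an illustrative sketch of this procedure, the snippet below builds the mass-weighted lognormal density PDF from a Mach number, evaluates the critical density $n_{\rm crit}=A\alpha_{\rm vir}\mathcal{M}^2 n$, and integrates the PDF above it. The input values ($n$, $\mathcal{M}$) and the turbulence-driving parameter $b$ are order-of-magnitude assumptions chosen for illustration, not actual model output:

```python
import math

def dense_fraction(n_bar, mach, n_ref, b=0.4):
    """Mass fraction of gas above n_ref for a mass-weighted lognormal
    density PDF whose width is set by the Mach number. b ~ 0.4 is an
    assumed value for a mixture of compressive and solenoidal driving,
    with no dynamically important magnetic field."""
    sigma_s2 = math.log(1.0 + b**2 * mach**2)   # variance of s = ln(n/n_bar)
    s_ref = math.log(n_ref / n_bar)
    # the mass-weighted lognormal has mean +sigma_s2/2
    return 0.5 * math.erfc((s_ref - 0.5 * sigma_s2) /
                           math.sqrt(2.0 * sigma_s2))

# Assumed, order-of-magnitude inputs for the star-forming ring:
n_bar, mach = 5e3, 30.0                 # midplane density [cm^-3], Mach number
A, alpha_vir = 1.0, 1.0                 # order-unity constants
n_crit = A * alpha_vir * mach**2 * n_bar  # n_crit = A * alpha_vir * M^2 * n
f_sf = dense_fraction(n_bar, mach, n_crit)
print(f"n_crit = {n_crit:.1e} cm^-3, f(n > n_crit) = {f_sf:.3f}")
```

With these inputs the sketch returns $n_{\rm crit}$ of a few $\times10^6~{\rm cm}^{-3}$ and a star-forming mass fraction of a few per cent, of the same order as the values quoted for the ring.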
The PDFs are then used to determine the gas mass fractions above several different volume density thresholds as a function of Galactic longitude, where the integration along the line of sight is carried out by weighting each element by its local surface density. The results of the above calculation are shown in \autoref{fig:critdens} for the same model snapshot as in \autoref{fig:galmap} (at $t=485$~Myr).\footnote{Across the star formation cycle in the star-forming ring (\autoref{fig:obs_cycle}), the dense gas fractions only vary by a factor of $\sim3$, whereas the critical density and midplane densities vary by factors of $\sim2$ and $\sim10$, respectively.} The top panel demonstrates that the midplane density in the gravitationally unstable, star-forming ring ($|\ell|\la1^\circ$) is much higher than elsewhere in the CMZ. The midplane densities of $n=10^3$--$10^4~{\rm cm}^{-3}$ provide a good match to the mean densities observed in the Galactic star-forming ring \citep[e.g.][]{bally10a,longmore12a,rathborne14b}. In addition, the critical density for star formation (which manifests itself in observed density PDFs as a power-law deviation from the lognormal shape at high densities) is predicted to range from $n_{\rm crit}\sim10^5$--$3\times10^6~{\rm cm}^{-3}$ throughout the CMZ, with the highest values being reached in the star-forming ring. Such critical densities are orders of magnitude higher than those predicted for solar neighbourhood clouds, but are expected at the high pressures and densities of the CMZ gas \citep{kruijssen14b}. In gravitationally unstable gas, we predict $n_{\rm crit}\sim3\times10^6~{\rm cm}^{-3}$. This is remarkably consistent with the ALMA observations of the CMZ cloud G0.253+0.016 by \citet{rathborne14a}, who identify a power-law deviation from the lognormal {\it column} density PDF that corresponds to a volume density of $n>10^6~{\rm cm}^{-3}$ when assuming spherical symmetry.
The location of the high-density gas coincides with the only known site of star formation within the cloud (as traced by a water maser, see \citealt{lis94a}), providing further support for the interpretation that cloud-scale star formation in the CMZ proceeds in accordance with the model presented here. The bottom panel of \autoref{fig:critdens} quantifies the increase of the dense gas fraction towards the Galactic Centre. For different definitions of ``dense'' ($n_{\rm ref}$, see the legend), the figure shows that the star-forming ring holds the highest dense gas fractions in the CMZ, ranging from several per cent (for $n_{\rm ref}=10^6~{\rm cm}^{-3}$) to nearly 100 per cent (for $n_{\rm ref}\sim10^4~{\rm cm}^{-3}$). The gas fraction eligible for star formation, i.e., $f(n>n_{\rm crit})$, is shown by the red line, and explains why star formation in the Milky Way's CMZ is mostly confined to $|\ell|<1^\circ$. Only at those longitudes does a non-negligible fraction of the gas reside at densities high enough to decouple from the turbulent flow and collapse to form stars. Currently available observations confirm the prediction of our model that the majority of the cold interstellar medium in the CMZ resides at densities $n>10^4~{\rm cm}^{-3}$ \citep{longmore13a}, but observational tests of our predicted gas fraction above higher densities cannot be carried out yet, because no high-spatial-resolution survey of the entire CMZ has been published. The few cases for which high-resolution observations are available match the predictions of \autoref{fig:critdens} \citep{kauffmann13a,rathborne14a}. However, a definitive test of our model requires a wide-field survey at arcsecond ($\sim0.08$ pc) resolution to enable the systematic mapping of the high-density gas and protostellar core population in the CMZ.
This will be one of the main goals of the ongoing CMZoom Survey with the Submillimeter Array (SMA, PIs Keto \& Battersby), which is expected to reach densities of several $\times10^5~{\rm cm}^{-3}$. Future surveys with the Atacama Large Millimeter/submillimeter Array (ALMA) would grant access to even higher densities. \subsection{Galactic Centre Star Formation Beyond the Milky Way} Thus far we have focused on the Milky Way's CMZ, since that is the region for which we have the best measurement of the rotation curve and of the properties of the bar. However, there is every reason to believe that the Milky Way's centre is similar to that of other barred spiral galaxies, and thus that the phenomena we have investigated here should be generic in such systems. Inflows and bursts should occur in any CMZ where there is a bar to drive transport, a low-shear region to trap the gas, and where the dynamical time in the low-shear region is shorter than the lifetimes of massive stars, preventing supernovae from establishing a time-steady equilibrium between driving and dissipation. What will other galaxies' CMZs look like if we observe them? To answer this question, we imagine observing the centre of an external galaxy and placing it on a Kennicutt-Schmidt (KS) plot, in which we place the star formation rate per unit area on the $y$ axis and either the gas surface density $\Sigma$ or the gas surface density normalised by the orbital period $\Sigma/t_{\rm orb}$ on the $x$ axis. Because such an exercise is necessarily resolution-dependent, we perform it in two ways. First, we consider an aperture of 750 pc centred on the (generic) galactic centre. Our motivation for this choice of size is that it is typical for large-scale nearby galaxy surveys such as THINGS \citep{walter08a, bigiel08a, leroy08a} and HERACLES \citep{leroy09a, leroy13a}.
We compute the total gas mass and total star formation rate for all radial bins whose centres lie within the aperture, and obtain the area-normalised quantities by dividing both the gas mass and star formation rate by the total area of this region. For the orbital period, we use the value for the outermost bin within the 750 pc aperture. Second, we consider a much higher-resolution observation focused on the star-forming ring. For this case we identify the radius $r_{\rm peak}$ that has the highest time-averaged star formation rate, and we consider the ring of material at $r_{\rm peak}\pm 10$ pc. We use the gas mass, star formation rate, and area only of this region, and the orbital period at its outer edge. This produces an observation that is narrowly focused on the region of maximum star formation. In both cases we use the observationally-inferred rather than true star formation rate in our computation. Strictly speaking, the timescales for our ``observationally-inferred'' star formation rate are appropriate only for an observation based on an ionised gas tracer, whereas many galactic centre observations use other tracers such as infrared. However, the timescales for ionisation and bolometric luminosity (which is the closest proxy for the infrared) are not so disparate that we worry about this detail. We perform this exercise for runs m01r050f10, m03r050f10, and m10r050f10, which use our fiducial parameters for star formation and feedback, and vary in their mass accretion rates, as we would expect for a realistic population of galaxies with different bar strengths and gas contents. \begin{figure*} \includegraphics[width=\textwidth]{ksplot} \caption{ \label{fig:ksplot} The observable properties of simulations m01r050f10 (red), m03r050f10 (green), and m10r050f10 (blue), corresponding to gas accretion rates $\dot{M}_{\rm in} = 0.1$, $0.3$, and $1.0$ $M_\odot$ yr$^{-1}$, in a Kennicutt-Schmidt plot.
The left panel shows star formation rate per unit area $\dot{\Sigma}_*$ versus gas surface density $\Sigma$, while the right shows $\dot{\Sigma}_*$ versus surface density divided by orbital period, $\Sigma/t_{\rm orb}$. In each panel, colours indicate the log of the probability that the system would fall into the indicated pixel if observed at a random time $>200$ Myr of evolution in the simulation. We show the results both for an observation of the whole CMZ, and for one focusing on the ring of peak star formation, as indicated by arrows. For comparison, in the left panel the two black dashed lines show constant depletion times of 2 Gyr (bottom) and 200 Myr (top), respectively, while in the right panel they show gas depletion times of 100, 10, and 1 times the orbital period (bottom to top). See main text for details on how all quantities are computed. } \end{figure*} We show the results in \autoref{fig:ksplot}. Examining the left panel, it is clear that the simulations with different accretion rates form a sequence that slides from the bottom left to the upper right of the KS plot. The lower extent of the locus of points occupied by a galaxy with a particular accretion rate, observed at 750 pc resolution, moves along a line of constant, $\sim 2$ Gyr depletion time, while the upper extent rises $\sim 1.5$ orders of magnitude higher in star formation rate above this.\footnote{We caution at this point that the extent of the vertical rise may be overestimated somewhat, because our axisymmetric model forces star formation events to be perfectly synchronised in azimuth, whereas in reality they are not. The main effect of this will be to compress the range of the points along the $\dot{\Sigma}_*$ axis somewhat, though probably more at the high end than the low end.} The points corresponding to a high resolution observation show similar qualitative behaviour to the low resolution ones, but with much larger scatter. 
However, in both cases galaxies spend roughly half their time near the line of 2 Gyr depletion time that characterises star formation in spiral galaxies at larger galactocentric radii, and about half their time scattered above this line, with a slight bias to being found at lower depletion time. These statistics are in very good agreement with the observed sample of \citet{leroy13a}. The right panel of \autoref{fig:ksplot} tells a somewhat similar story. Measured at low resolution, galaxies spend about half their lives looking like their centres deplete on timescales of $\sim 100$ orbits, with this number dropping as low as $\sim 10$ orbits for $\sim 50\%$ of the time. Focusing on the ring where gas accumulates, star formation actually looks significantly \textit{less} efficient than on larger scales when measured in terms of the orbital period, with star formation rates rising to push the depletion time below 100 orbital periods only during outbursts. This is simply a reflection of the fact that, during the quiescent period, $\epsilon_{\rm ff}$ is somewhat less than 1\% because the gas is supervirial, and the free-fall time is somewhat longer than the orbital period because the gas is not quite self-gravitating. Only when the gas becomes roughly virial and an outburst begins do the depletion times approach $\sim 10$ orbital periods. This effect is not seen in the larger-scale observations because, although the gas and star formation are all concentrated in a ring at $\sim 100$ pc, the orbital period being used is that measured at much larger galactocentric radii. \subsection{Properties of the Cool Wind} \label{ssec:windprop} We next examine in more detail the properties of the galactic centre winds that are launched in our simulations. In particular, we are interested in the kinematics of the cold gas launched in these winds, as well as the properties of any hot, escaping gas and non-thermal particles.
Observations constrain all of these quantities in the Milky Way. First consider the cold gas driven upward by momentum injection. To obtain its velocity distribution, we again turn to the \citet{thompson16a} momentum-driven wind model. The central idea in this model is to consider a region of a galactic disc with mean surface density $\Sigma$, but in which there is a wide range of local surface densities $\Sigma'$ as a result of turbulence. We then consider the vertical equation of motion for a particular fluid element near the top of the disc (i.e., at $z\sim H$) in a region with local surface density $\Sigma'$. This is \begin{equation} \frac{dv}{dt} \approx -g_{\rm gas} - g_* + \frac{d\dot{p}/dA}{\Sigma'}. \end{equation} Here $v$ is the vertical velocity, the first and second terms represent the gravitational force per unit mass exerted by the gas and the stars, respectively, and the final term represents the force per unit mass on the fluid element due to momentum injection from stellar feedback, which is provided at a rate per unit area $d\dot{p}/dA$. Note that, because gravity is a long-range force that is produced by material over a large area, the gravitational acceleration depends on the mean surface density $\Sigma$, while the acceleration due to feedback depends on the local one $\Sigma'$. Using our definitions of the Eddington injection rate, \autoref{eq:pdotedd}, and its non-dimensionalisation $x_{\rm crit}$, \autoref{eq:xcrit}, and defining $x=\ln(\Sigma'/\Sigma)$ for convenience, we can rewrite the equation of motion for a local fluid parcel as \begin{equation} \frac{dv}{dt} = (g_{\rm gas} + g_*) \left(e^{x_{\rm crit}-x}-1\right). \end{equation} Gas is ejected in regions where the local surface density is low enough that $x<x_{\rm crit}$, and thus the right-hand side is positive, indicating an upward acceleration.
If the gas is accelerated over a distance $\sim r$, and $x$ remains constant as this happens, then its final speed will be \begin{equation} \label{eq:vx} v \approx v_{\rm esc} \sqrt{\left(e^{x_{\rm crit}-x}-1\right)}, \end{equation} where we may think of $v_{\rm esc} = \sqrt{2r(g_{\rm gas}+g_*)}$ as the characteristic escape speed for gas flowing out in the wind. Since this provides a mapping between the local surface density $\Sigma'$ and the outflow velocity, we can obtain the distribution of outflow velocity by combining this mapping with the distribution of surface densities $dm/dx$ produced by turbulence. Specifically, if we let $u = v/v_{\rm esc}$, then for any given $u$ we can invert \autoref{eq:vx} to obtain $x(u)$, and we can write \begin{equation} \frac{dm}{du} \propto \left|\frac{dx}{du}\right| \left(\frac{dm}{dx}\right)_{x=x(u)} \propto \frac{2u}{u^2+1}\left(\frac{dm}{dx}\right)_{x=x(u)}. \end{equation} Following \citet{thompson16a}, the mass distribution $dm/dx$ is a lognormal given by \begin{equation} \frac{dm}{dx} = \frac{1}{\sqrt{2\pi\sigma_x^2}} \exp\left[-\frac{(x-\sigma_x^2/2)^2}{2\sigma_x^2}\right], \end{equation} where we compute $\sigma_x$ from the Mach number as outlined in \citeauthor{thompson16a}. Armed with these relationships, we can compute the velocity distribution of outflowing gas from each computational zone, and by summing over zones we can obtain the full velocity distribution at every instant. \begin{figure} \includegraphics[width=\columnwidth]{wind} \caption{ \label{fig:wind} Wind velocity distribution $d\dot{M}_{\rm wind}/dv$ in simulations m01r050f10, m03r050f10, and m10r050f10, corresponding to accretion rates of $\dot{M} = 0.1$, 0.3, and 1.0 $M_\odot$ yr$^{-1}$, as indicated. Solid lines indicate the mean over all times $>200$ Myr, while shaded regions indicate the range from 10th to 90th percentile in time. 
} \end{figure} We perform this computation for runs m01r050f10, m03r050f10, and m10r050f10 and plot the resulting wind velocity distribution in \autoref{fig:wind}. We see that the wind velocity distribution strongly peaks at $\approx 350$ km s$^{-1}$, which is the escape speed from the star-forming ring. There is a tail to higher velocities, which becomes increasingly prominent at higher star formation rates, but the great majority of the mass emerges close to the escape speed. At any given velocity, the wind launch rate varies in time by a factor of $\sim 3$--$5$. \subsection{The Hot Wind and Non-Thermal Particles} Only a small fraction of the total energy injected by supernovae goes into driving either turbulent motions or the cool wind. Indeed, one can see this immediately from a simple argument. In a region with a steady star formation rate per unit area $\dot{\Sigma}_*$, if we have one supernova per mass $M_{\rm SN}$ of stars formed, then the supernova rate per unit area is $\dot{\Sigma}_*/M_{\rm SN}$. The momentum and energy injected per supernova are $p_{\rm SN}$ and $E_{\rm SN}$, respectively, giving momentum and energy injection rates per unit area $\dot{p}_{\rm SN} = p_{\rm SN} \dot{\Sigma}_*/M_{\rm SN}$ and $\dot{E}_{\rm SN} = E_{\rm SN} \dot{\Sigma}_*/M_{\rm SN}$. The energy injection rate into turbulent motions is (\autoref{eq:edotturb}) \begin{equation} \left(\frac{d\dot{E}}{dA}\right)_{\rm SF,turb} = \sigma \frac{p_{\rm SN}}{M_{\rm SN}}\dot{\Sigma}_*, \end{equation} and the ratio of this to the total supernova energy budget is \begin{equation} \frac{(d\dot{E}/dA)_{\rm SF,turb}}{(d\dot{E}/dA)_{\rm SN}} \approx \sigma \frac{p_{\rm SN}}{E_{\rm SN}} \approx \frac{\sigma}{170\,\mathrm{km\, s}^{-1}}, \end{equation} where the numerical evaluation is for our canonical values $p_{\rm SN} = 3\times 10^5$ $M_\odot$ km s$^{-1}$ and $E_{\rm SN} = 10^{51}$ erg.
Thus for the values of $\sigma$ found in our simulation, only a small portion of the supernova energy budget is consumed by driving turbulence. Similarly, the wind kinetic luminosity per unit area is of order \begin{equation} \left(\frac{d\dot{E}}{dA}\right)_{\rm wind,kin} \approx \frac{1}{2}\dot{\Sigma}_{\rm wind} v_{\rm esc}^2, \end{equation} and the ratio of this to the supernova energy injection rate is \begin{equation} \frac{(d\dot{E}/dA)_{\rm wind,kin}}{(d\dot{E}/dA)_{\rm SN}} = \frac{\dot{\Sigma}_{\rm wind}}{\dot{\Sigma}_*} \left(\frac{M_{\rm SN} v_{\rm esc}^2}{2E_{\rm SN}}\right) \approx 0.09 \eta, \end{equation} where the numerical evaluation is for $M_{\rm SN}=100$ $M_\odot$ and $v_{\rm esc} = 300$ km s$^{-1}$, and $\eta = \dot{\Sigma}_{\rm wind}/\dot{\Sigma}_*$ is the mass loading factor, which for our simulations is $\lesssim 1$. Thus we conclude that neither launching the wind nor driving the turbulence consumes an appreciable fraction of the total supernova energy available. Instead, the energy released by supernovae must either be lost to radiation, or must go into a hot wind that carries it out. Unfortunately our model does not allow computation of the partition between these two forms of energy loss, and the simulations that have been published to date are not helpful in addressing this question -- answering it correctly requires simulating with enough resolution to resolve the Sedov-Taylor phase of supernova remnant expansion, without using any artificial methods to lower the density in the vicinity of the supernovae. As we discuss in \autoref{ssec:comparison}, no published simulations meet these criteria. Observations of superbubbles away from galactic centres suggest that radiation cannot be the primary loss mechanism \citep{rosen14a}, but it is unclear whether we can generalise these conclusions to the very different environment of the CMZ. 
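For reference, both energy-budget ratios derived above can be checked with a few lines of unit arithmetic. The sketch below uses only the canonical values quoted in the text, and leaves the mass-loading factor $\eta$ symbolic as a prefactor:

```python
# Numerical check of the two supernova energy-budget ratios, using the
# canonical values from the text: p_SN = 3e5 M_sun km/s, E_SN = 1e51 erg,
# M_SN = 100 M_sun, v_esc = 300 km/s.
M_SUN = 1.989e33   # g
KM_S = 1.0e5       # cm/s

p_SN = 3.0e5 * M_SUN * KM_S   # momentum per SN [g cm/s]
E_SN = 1.0e51                 # energy per SN [erg]
M_SN = 100.0 * M_SUN          # stellar mass formed per SN [g]
v_esc = 300.0 * KM_S          # escape speed [cm/s]

# Turbulence-driving ratio: sigma * p_SN / E_SN = sigma / (~170 km/s)
sigma_ref = E_SN / p_SN / KM_S   # reference velocity in km/s
print(f"turbulence ratio = sigma / ({sigma_ref:.0f} km/s)")

# Wind kinetic ratio per unit mass loading: M_SN * v_esc^2 / (2 E_SN)
wind_ratio = M_SN * v_esc**2 / (2.0 * E_SN)
print(f"wind kinetic ratio = {wind_ratio:.3f} * eta")
```

This recovers the $\sim170$~km~s$^{-1}$ reference velocity and the $\approx0.09\eta$ wind kinetic ratio quoted above.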
If radiative losses do not dominate, then the hot wind must carry an energy flux of order $\dot{M}_* E_{\rm SN}/M_{\rm SN}$, and given our result that $\sim 50\%$ of the incoming gas is converted to stars, this implies an energy flux $\dot{E}_{\rm hot} \sim \dot{M}_{\rm in} E_{\rm SN}/2 M_{\rm SN} \approx 10^{41} (\dot{M}_{\rm in}/M_\odot\,\mbox{yr}^{-1})$ erg s$^{-1}$. This material will mostly be launched from the star-forming ring at 100 pc. The non-thermal particle energy injection rate should be $\sim 10\%$ of the hot gas energy budget, implying $\dot{E}_{\rm non-therm} \sim 10^{40} (\dot{M}_{\rm in}/M_\odot\,\mbox{yr}^{-1})$ erg s$^{-1}$. Because non-thermal particle acceleration happens even if the hot gas does not vent, and because the non-thermal particles have long mean-free paths and thus should be able to escape the disc, this estimate should hold even if the primary loss mechanism is radiation rather than a hot wind. Cosmic ray escape is confirmed observationally \citep[][and references therein]{crocker12a}: the diffuse non-thermal emission from the CMZ in radio continuum and $\gamma$-ray bands implies that only a small fraction ($\lesssim 10$\%) of the power injected into non-thermal particles accelerated in the CMZ is lost {\it in situ} radiatively; most of this power is carried off by escaping cosmic rays (and is claimed by \citet{crocker15a} to be radiated on much larger size scales in the Fermi Bubbles). This is also consistent with the upper limit on the dense gas ionisation rate by cosmic rays ($\zeta_{\rm CR} < 10^{-14}$ s$^{-1}$) that is implied by the observed temperature distribution of formaldehyde, which extends down to gas temperatures as low as $T \sim 40$ K and exhibits substantial cloud-to-cloud variation, showing that cosmic rays do not set the gas temperatures of CMZ clouds \citep{ginsburg16a}. 
These findings, in concert with the measured, hard spectrum of the diffuse, non-thermal radio continuum and $\gamma$-ray radiation from the CMZ, support the notion that the region's cosmic ray population is advected away with the putative hot outflow. \subsection{Relationship to Other Models} \label{ssec:comparison} A number of other authors have proposed models of the CMZ, and it is worth commenting on the ways in which the model we propose here compares to theirs. \citet{kim11b} conduct 3D SPH simulations of the flow of gas in a barred spiral potential chosen to represent the Milky Way, including star formation and feedback, and find that the gas forms a nuclear ring $\sim 200$ pc from the Galactic Centre, somewhat further out than the ring observed in the Milky Way. The different location of the ring in their simulations is likely a result of the potential they adopt, which is a simplified model that, aside from the bar perturbation, possesses uniform shear. In contrast, we use the measured potential, which does possess a shear minimum at the observed location of the gas ring. Once the simulation reaches equilibrium, \citeauthor{kim11b} find that the gas mass in the CMZ is about constant at $\sim 10^7$ $M_\odot$, and the star formation rate is relatively steady at $\sim 0.05$ $M_\odot$ yr$^{-1}$. While these figures are quite similar to the averages of our fiducial case, \citeauthor{kim11b}'s simulation does not show the bursty behaviour we observe in our model. The lack of burstiness in their star formation rate likely stems from their feedback implementation (which is described in \citealt{saitoh08a}), and differs from what some other simulations find. We discuss this topic further below.
\citet{kim12b}, \citet{kim12c}, and \citet{li15b} perform high-resolution 2D simulations of gas flows in the presence of bars with a wide range of parameters, and find that, for sufficiently slowly-rotating bars, the typical outcome is a ring in an $x_2$ orbit at distances of hundreds of pc from the centres of their simulated galaxies; the inner regions of their rings do correspond roughly to where the rotation curve turns over to solid-body rotation, consistent with the mechanism for ring formation that we have proposed. They do not include self-gravity, star formation, or feedback in their simulations, and thus do not make any predictions regarding these phenomena. \citet{crocker12a} and \citet{crocker15a} provide a one-zone model for the CMZ, focused on reproducing the properties of the outflow and non-thermal emission found there. Because the model is steady-state and one-zone, it does not address the questions of spatial and temporal variation on which we focus. Conversely, however, our model does not address the properties of the outflow or the non-thermal emission, and it would therefore be extremely interesting to extend our model using \citeauthor{crocker15a}'s machinery for the treatment of non-thermal emission. We plan to do so in future work. Most recently, \citet{torrey16a} presented a model and a set of 3D simulations for the behaviour of star formation in the central regions of galaxies. Their main finding is that star formation in these regions is bursty, with burst timescales of $\sim 50$ Myr. The mechanism they identify is similar to what we find in our simulations, namely that the dynamical time is comparable to the time for which supernovae go off after a starburst, so star formation feedback tends to ``overshoot'', leading to an alternating cycle of starbursts and quenching rather than a steady state.
The $\sim 50$ Myr variability timescale they find is a bit longer than ours, likely because they do not include a non-axisymmetric stellar potential that is capable of driving mass inflows. As a result, only the non-axisymmetric self-gravity of the gas and the galactic fountain are available as mechanisms to refill the gas in the CMZ once it has been expelled. In contrast, the outer parts of our simulated disc continue to move mass inward efficiently as a result of bar-driven instabilities regardless of what is happening in the star-forming region. This difference probably causes the longer delay in restarting star formation in their simulations compared to ours. A further difference between \citet{torrey16a}'s model and ours is that, in their simulations, the mechanism responsible for causing bursts is gas expulsion rather than changes in the depletion time of the gas, in contrast to our model where the opposite holds. It is unclear which result is more realistic. While their simulations are of course 3D rather than 1D, their results are in strong contrast to those obtained by \citet{kim11b}, who also use 3D simulations, and much of the result appears to depend on the sub-grid models used for feedback. Neither \citet{kim11b} nor \citet{torrey16a} resolve the Sedov-Taylor phase of supernova blast waves, and as a result they are forced to rely on approximate models for supernovae to avoid the ``overcooling'' problem \citep{katz92a}, whereby simulations that do not resolve blast waves overestimate the rate of radiative losses from supernova remnants. \citet{kim11b} handle this problem using decoupled wind particles, following the prescription of \citet{okamoto08a}, while \citet{torrey16a} directly add radial momentum to the gas in cases where they do not resolve the blast wave.
Although our model is also based on momentum injection, it differs from \citeauthor{torrey16a}'s approach in that we explicitly model the interaction of this momentum with the density structure of the turbulent medium, and determine the wind mass flux based on this model. In contrast, \citeauthor{torrey16a}'s simulations do not resolve turbulence at the momentum injection scale (since momentum is injected at the resolution scale), and thus their approach cannot capture this interaction explicitly. None of these approaches is perfect, and to the extent that the nature of the starbursts depends on them, the results of any model are somewhat suspect. \section{Conclusion} \label{sec:conclusion} In this paper we present a simple dynamical model for star formation in the nuclear regions of galaxies. We focus on the Milky Way CMZ, since this is the only nuclear region for which we have available a very high resolution measurement of the rotation curve, but we argue that the phenomena we find there should be generic in barred spiral galaxies. This model captures several essential elements that combine to produce the distinctive behaviour of star formation in these regions; some of these elements have been explored before, but the model we present here is the first to combine them all. These elements are as follows. \textit{Mass Transport by Acoustic Instability.} The nuclear regions of galaxies have gas depletion times far smaller than the Hubble time, so for them to continue star formation at the present epoch requires a constant resupply of mass. At large galactocentric radii, the required transport is likely provided by gravitational instability \citep{krumholz10c, forbes12a, forbes14a, goldbaum15a, goldbaum16a, schmidt16a}, but this mechanism is suppressed in nuclear regions by strong shear.
However, inside the inner Lindblad resonance of the bar, another transport mechanism becomes available: acoustic instability driven by the bar, which thrives in regions of high shear (\citealt{montenegro99a}, \citetalias{krumholz15d}). This instability both moves gas inward and drives turbulence, keeping the gas gravitationally stable and suppressing star formation as it is transported. This explains the paucity of star formation found in the Milky Way at radii of $\sim 150$--$500$ pc. \textit{The Effects of the Rotation Curve.} Because the mechanism for mass transport and turbulence driving is sensitive to the amount of shear, it must cease where the rotation curve switches from flat to (near-)solid body, which is a common feature of galactic centres. This causes gas to accumulate and become gravitationally unstable in a particular region. Thus nuclear star formation is characterised by the presence of persistent, long-lived, ring-like structures, rather than by transient molecular clouds arranged in either grand design or flocculent spiral patterns. In the Milky Way, this structure is found at $\approx 100$ pc from the Galactic Centre, and manifests as a partially-filled ring, within which the bulk of the CMZ's dense gas and young star clusters reside. \textit{Evolutionary state of the Milky Way's CMZ.} Based on a detailed comparison of our model to the observed properties of the CMZ, we predict that the star-forming ring currently resides at a star formation minimum, with the previous starburst having taken place 8 Myr ago. In the context of our model, the Arches and Quintuplet clusters represent the final clusters to have formed during this latest starburst ($\sim5$~Myr ago). By contrast, the CMZ ``dust ridge'' (spanning in projection from Sgr A$^*$ to Sgr B2 and containing the most massive and densest molecular clouds in the CMZ) will collapse and form stars first during the onset of the upcoming starburst (expected in 1--2 Myr).
We also provide quantitative predictions for the dense gas fraction and critical density for star formation as a function of Galactic longitude, finding that dense ($n>10^{(4,5,6)}~{\rm cm}^{-3}$) gas and star formation are mostly confined to $|\ell|<1^\circ$ (or $R<150$ pc). This matches the position of the 100-pc stream in the CMZ \citep{molinari11a,kruijssen15a}, as well as the major known sites of recent star formation, such as Sgr B2, Sgr C, and the Arches and Quintuplet clusters. \textit{Supernova Feedback-Regulated Star Formation.} Within the ring-like structure, acoustic instability is unable to drive turbulence or transport mass, and thus the gas is liable to become gravitationally unstable and begin vigorous star formation. When a starburst begins, there is initially little feedback, because supernovae, which provide the most important feedback mechanism, are delayed by $4$--$40$ Myr after the onset of star formation. This leads to an overshoot, so that, when supernovae do begin to occur, the system does not settle into forming stars at a steady state. Instead, the supernovae raise the velocity dispersion, scale height, and virial parameter in the star-forming ring so that the star formation rate falls dramatically. Star formation remains suppressed until there is time for supernova feedback to taper off and for turbulence to decay, leading to the resumption of star formation. Because this cycle occurs within a coherent star-forming structure whose location is fixed by the galactic rotation curve, the overall nuclear star formation rate and depletion time undergo large oscillations. In the Kennicutt-Schmidt diagram, which measures the gas depletion time, this results in nuclear regions undergoing large excursions, with some appearing similar to ``normal'' galaxies and others resembling starburst galaxies. \textit{Supernova-Driven Winds.} The supernovae that regulate star formation also drive a two-phase wind off the star-forming ring.
The cool phase of the wind dominates the mass flux, and carries off mass at a rate that is comparable to or slightly smaller than the mass flux going into stars. However, it carries away relatively little of the supernova energy budget. The energy is most likely carried by a hot phase that accompanies the cool, momentum-driven wind, though it could conceivably also be lost to radiation. Some of this energy likely goes into the production of non-thermal particles as well. Wind launching is bursty like star formation, but the magnitude of the variation is somewhat smaller than that of the star formation rate. Taken together, these elements are able to explain the observed properties of nuclear star formation in Milky Way-like galaxies in general, and in the Central Molecular Zone of the Milky Way in particular. \section*{Acknowledgements} We thank the referee for helpful comments. MRK acknowledges support from an Australian Research Council Discovery Project (DP160100695). JMDK gratefully acknowledges financial support in the form of a Gliese Fellowship and an Emmy Noether Research Group from the Deutsche Forschungsgemeinschaft (DFG), grant number KR4801/1-1. RMC is the recipient of an Australian Research Council Future Fellowship (FT110100108). MRK and JMDK thank the Aspen Center for Physics, which is supported by NSF Grant PHY-1066293, for its hospitality during the early phases of this work. \small \bibliographystyle{mn2e}
\section{Introduction} Fashion accounts for about 2\% of the world's GDP and is a significant sector of the retail industry. Whenever a new fashion item like apparel or footwear is launched, the retailer needs to prepare and show rich information about the product, including pictures, text descriptions, and detailed attribute tags. The attributes of the fashion products, including color, pattern, texture, material, occasion-to-use, etc., require domain experts to label them piece by piece. This labeling process is time-consuming, costly, subjective, error-prone, and fundamentally imprecise due to the interdependency of the attributes. To address these issues, we introduce a multi-task multimodal machine learning model to automatically, consistently, and precisely infer the visual attributes of the fashion items. Each item is typically labeled with multiple tags that describe different attributes of the item. For example, an item can be labeled with ``shirt'', ``red'', ``solid pattern'', ``blue collar'' and ``short sleeve''. An intuitive way of learning such information is to train a multi-label classifier, which outputs the probability of multiple labels of each input sample. However, such a model cannot encode the relationship between different attributes. For example, ``short sleeve'' is a suitable attribute for ``shirt'', but not for ``jeans'', and ``red'' only describes the body part of the shirt, but not the collar. The model needs to learn attribute and object relationships and adjust its output accordingly. We propose designing a Visual Question Answering (VQA) framework for fashion items, in which the model is trained to answer complex natural-language questions, such as ``is the person wearing a red shirt with solid pattern and blue collar?'', given the input image.
The VQA task is more challenging than a simple attribute classifier since it requires a thorough understanding of both the question and the structure and relationship between various visual attributes in the image. By training such a model, we convert the manual process of tagging new products with visual attributes into automated answering of a series of questions with visual intents (auto-labeling). The model also generates question-attended multimodal embeddings of the product images for downstream dialogue, search, and recommendation systems. Prior to our work, the large-scale VQA v2 dataset \cite{vqa2}, which includes 0.6 million \textit{question-answer-image} triplets, has been widely used as the benchmark in recent research on VQA tasks. However, this general dataset only contains a small number of \textit{question-answer-image} triplets related to fashion. In this work, we build a fashion VQA dataset from a diverse apparel product database. The questions, including both binary and non-binary, are automatically composed by filling question templates with the given attribute information. The dataset contains 207 thousand images and 168 million \textit{question-answer-image} triplets. The automatic generation of the VQA dataset from a limited number of images and attributes allows us to achieve the scale required for training a multimodal domain expert model. We leverage a cross-modality fusion model that maps representations from the visual and text spaces to the same latent feature space and performs answer prediction with classifier modules. Given an image that contains a fashion item and the corresponding questions regarding its different attributes, the model predicts the answers to the given questions. We can then use the model to generate the missing or alternative attribute information based on its answers.
Additionally, given different but similar text descriptions of the same item, we can generate consistent feature embeddings that enable us to build better online search services. The existing search engines cannot attend to the relevant visual parts of a fashion item given the query and do not adapt the attention mask according to the chained adjectives. With this work, we can map the input query to the learned embedding space and perform a robust and fuzzy search in that multimodal space. We can also provide a visual dialogue service, in which the customers can ask consecutive questions to narrow down the item list according to their apparel preferences. We can also build a fashion recommendation system in the multimodal embedding space. The customer-item interaction history is mapped to this space, and the neighboring items are recommended. \section{Related work} \textbf{Visual feature learning in VQA:} The visual feature vector is often extracted from the input image using a Convolutional Neural Network (CNN) as the visual encoder, e.g., VGG \cite{vgg}, ResNet \cite{resnet}, or ResNeXt \cite{resnext} models. In the early VQA frameworks, grid-based visual features extracted by ImageNet-pretrained \cite{imagenet} VGG or ResNet models were widely adopted. Since \cite{bottom-up-attention}, region-based visual features extracted using a Faster R-CNN \cite{faster-rcnn}-based object detection model, typically fine-tuned on the Visual Genome \cite{visualgenome} dataset, have been dominant \cite{dfaf}\cite{mcan}\cite{oscar}\cite{lxmert}. In \cite{grid-feature}, the authors propose extracting the grid features from the same layer as the pretrained detector, achieving comparable performance to the region-based features with higher efficiency. We benchmark these two types of visual feature extraction methods on our dataset across different VQA models. \textbf{Cross-modality fusion models:} The cross-modality fusion model is a core component of the VQA framework.
It aligns the features from the visual and language modalities. Early VQA models capture the high-level cross-modal interactions with bilinear fusion \cite{bilinear}. MCB \cite{mcb}, MLB \cite{mlb} and MUTAN \cite{mutan} were later introduced to achieve better fusion performance at a much lower computational cost with fewer parameters. Motivated by the remarkable performance of the attention mechanism in language and vision models \cite{bert}\cite{attention}, the attention module has become the fundamental block in designing cross-modality fusion models. DFAF \cite{dfaf} uses self-attention and co-attention modules to learn the inter- and intra-connections between the two modalities. MCAN \cite{mcan} builds the model with blocks of self-attention and guided attention. LXMERT \cite{lxmert} and ViLBERT \cite{vilbert} adopt a similar strategy and build two-stream co-attention-based models. VisualBERT \cite{visualbert}, UNITER \cite{uniter}, OSCAR \cite{oscar}, and OSCAR+ \cite{oscar+} learn the alignment between image and language by pretraining on multiple image caption datasets with BERT-style \cite{bert} visual language models (VLMs). \textbf{Fashion datasets:} In recent years, many valuable fashion datasets \cite{deepfashion}\cite{deepfashion2}\cite{modanet}\cite{fashionai}\cite{fashioniq}\cite{imaterialist} \cite{fashionpedia}\cite{fashion1}\cite{fashion2}\cite{runway}\cite{fashion-mnist} have greatly contributed to clothing item recognition and apparel attribute understanding. However, most of them suffer from limitations when considered for training versatile VQA models. In \cite{runway}\cite{modanet}\cite{fashion-mnist}\cite{fashion2}, only primary categories in the dataset are labeled. Additional garment parts and attributes are annotated in \cite{imaterialist}\cite{deepfashion2}\cite{fashionai}. Segmentation masks over each piece of garment are drawn for the semantic segmentation task in \cite{modanet}\cite{runway}\cite{deepfashion2}\cite{fashionpedia}.
Generally, the localization of the garment pieces and parts in these types of datasets takes considerable human annotation labor, and few of the datasets are suitable for conversion to a new dataset for vision-language tasks. \section{Methods} In this section, we describe how we designed and generated a novel VQA dataset for fashion, which we name the FashionVQA dataset. \subsection{Terminologies} \textbf{Category:} Each clothing item can be labeled with one super-category and several primary categories or sub-categories. \begin{itemize} \item Super-categories: ``apparel top'', ``apparel bottom'', ``one-piece clothing'', ``shoes'', and ``accessories''. \item Primary categories: ``shirt'', ``sweater'', ``jacket'', ``pants'', ``skirt'', ``dress'', ``jumpsuit'', ``boots'', ``sneaker'', ``gloves'', etc. \item Sub-categories: ``t-shirt'', ``cardigan'', ``blazer jacket'', ``pencil skirt'', ``sweatpants'', ``overall jumpsuit'', ``hiking boots'', etc. \end{itemize} \textbf{Attributes:} ``Color'', ``pattern'', ``fit type'', ``closure type'', and ``material/fabric'' are the general attributes for all the fashion items. Each apparel type also has its unique attributes. The unique attributes for ``apparel top'', ``apparel bottom'', ``one-piece clothing'', and ``shoes'' are listed below. \begin{itemize} \item Apparel top: ``torso length type'', ``sleeve length type'', ``pocket type'', ``neckline type'', ``sleeve style'', ``collar type'', ``lapel type'', and ``cuff type''. \item Apparel bottom: ``pant leg type'', ``skirt length type'', ``pant leg style'', and ``pleat type''. \item One-piece clothing: ``neckline type'', ``sleeve length type'', ``sleeve style'', ``pant leg type'', ``skirt length type'', and ``pleat type''. \item Shoes: ``height'', ``width'', ``toe openness'', and ``shape of toe''. \end{itemize} \textbf{Attribute values:} Each attribute is composed of a set of \textit{attribute values}.
For example, the set of \textit{attribute values} of the color attribute includes ``red'', ``black'', ``green'', ``blue'', ``yellow'', etc. \textbf{Parts:} The \textit{parts} mentioned in our dataset are typically attached to the fashion item, such as ``patches'' and ``pockets''. \textbf{Location:} In our dataset, there exist numerous images with a person wearing multiple fashion items. Therefore, we use \textit{Location} to specify the relative location of the primary fashion item in the image, such as ``on the top'', ``on the bottom'', ``on the feet'', ``over the neck'', or ``on the head''. \subsection{Data Collection pipeline} Our data collection pipeline involves four steps: [1] querying fashion items' unique identity numbers (image IDs), [2] querying and parsing meta-information, [3] downloading images, and [4] filling question templates and forming \textit{question-answer-image} triplets. Each fashion item comes with a unique identity (ID) number. First, we query all fashion items and retrieve their IDs. Then, we predefine a data structure that is eligible to query the meta-information of fashion items from the item database. Feeding the data structure to an open-source data query API, ``GraphQL'', we can obtain the meta-information attached to each ID, which contains the primary image (front-view) URL and the description of the primary fashion item. We can directly download the primary image from the URL with Python. The description of the fashion item is not a ready-to-use dictionary that maps each unique targeted attribute to its corresponding set of \textit{attribute values}. For example, ``Color'' could be described with different phrases such as ``Product Color'' or ``Color Name''. Parsing the description is a process that collects \textit{attribute values} from various sources and reduces similar attribute terminologies into the same group.
Also, the meta-information comes in a very raw form, with many \textit{attribute values} across \textit{attributes} entangled, e.g., ``black/stripes'', or in a vague expression. At this stage, we also need to clean these \textit{attribute values} and map them into common terminologies, e.g., map ``black/stripes'' to ``black'' for color and ``stripes'' for pattern, or ``olive night'' color to ``olive green''. \subsection{Question templates} We adopt a templating mechanism to automatically create \textit{question-answer} pairs. The question templates are designed based on a set of fixed rules that follow English grammar and result in human-readable sentences. By filling the question templates with a specific item's \textit{attribute}, \textit{attribute value}, \textit{category}, and \textit{location}, we can generate a variety of questions for each image. The answer to each question can be \textit{``Yes/No''} for binary questions and multiple choices from the relevant \textit{attribute values} for non-binary questions. Since the images from the FashionVQA dataset are all photoshoot images with a solid background, the question templates ask only attribute-related questions about the fashion items in the image. For example, ``what is the sleeve length of this shirt on the top?'' or ``is this a white v-neck sweater?''. The basic template is structured as ``\{\textit{question type}\} \{this/these\} \{a/an/\} \{pair of/pairs of/\} \{\textit{object}\} \{\textit{location}\}?''. When filling the template to expand into a full sentence, the choices between ``is/are'', ``this/these'', ``a/an'', ``a pair of/pairs of'', and the singular or plural format of \textit{category} are required to follow English grammar and be aligned with the number of targeted fashion items in the image. For example, if the \textit{number of pieces} in the image is more than one, we choose ``are'', ``these'', `` /pairs of'', and the plural format of \textit{category}.
If the fashion item includes pant legs or two pieces like eyeglasses, we add ``pair of / pairs of'' in the question templates. If a person is in the image and the primary fashion item is not from the super-category of ``one-piece clothing'', we assume there are multiple fashion items in the image. We use ``\{\textit{location}\}'' to specify the relative location of the primary fashion item. We use ``on the top'' for ``apparel top'', ``on the bottom'' for ``apparel bottom'', ``on the feet'' for ``shoes'', ``on the head'' for ``hat'', and ``over the neck'' for ``scarf''. The question templates fall into two primary categories based on the answer types: binary and non-binary templates. \textbf{Binary question templates:} Binary question templates typically start with ``is this/are these'', ``can you see'', or ``is there any \{\textit{part}\} on this/these'', followed by the description of the targeted item in the format of ``\{\textit{location}\} \{a\}/\{a pair of/\}/\{\} \{\textit{attribute value 1}\} \{\textit{attribute value 2}\} \{\textit{category}\} '', where \textit{attribute value 1} and \textit{attribute value 2} are two \textit{attribute values} from different \textit{attributes}. Permuting \textit{attribute value 1}, \textit{attribute value 2}, and \textit{category} in different orders yields different question templates. Conjunction words like ``with'', ``and'', or ``in'' can be used in templates when \textit{attribute value 1} or \textit{attribute value 2}, or both, are located after \textit{category}. The most common question types used in binary questions are ``is/are'' and ``can''. \textbf{Non-binary question templates:} Non-binary question templates typically start with question words like ``what'' / ``why'' / ``when'' / ``how'' followed by an attribute term. The formats of the question type vary from attribute to attribute.
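The grammar-agreement rules described above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical helper names and a simplified vowel heuristic for ``a/an''; it is not the code used to build the dataset:

```python
# Sketch of binary-question template filling with grammar agreement
# (hypothetical function and parameter names, not the authors' code).

def fill_binary_template(category, attr1, attr2, location,
                         plural=False, paired=False):
    """Build a question like 'is this a red shirt with long sleeves on the top?'."""
    verb = "are" if plural else "is"          # is/are agreement
    dem = "these" if plural else "this"       # this/these agreement
    if paired:
        unit = "pairs of" if plural else "a pair of"
    else:
        # simplified article choice by the first letter of the phrase
        first = (attr1 or category)[0]
        unit = "an" if first in "aeiou" else "a"
    noun = category + "s" if plural else category  # naive pluralisation
    return f"{verb} {dem} {unit} {attr1} {noun} with {attr2} {location}?"

print(fill_binary_template("shirt", "red", "long sleeves", "on the top"))
# is this a red shirt with long sleeves on the top?
print(fill_binary_template("sneaker", "white", "mesh upper", "on the feet",
                           plural=True, paired=True))
# are these pairs of white sneakers with mesh upper on the feet?
```

A production version would also need irregular plurals and the conjunction-structure variants (``with'', ``and'', ``in'') discussed above.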
For example, the question type can be ``what color is'' or ``what is the color of'' for the attribute ``color'', ``what pattern is on'' or ``what print is on'' for the attribute ``pattern'', and ``how many'' or ``what number of'' for the attribute ``number of pockets''. Unlike for binary question templates, in our current dataset we do not leverage other \textit{attribute values} unrelated to the targeted attribute when filling the non-binary question templates; even the \textit{category} of the targeted fashion item is not necessary. Therefore, it is possible to increase the diversity of the non-binary question templates with additional \textit{attribute values} or \textit{categories}. For example, we can come up with a color question template like ``what color is on the top?'' or ``what color is this shirt the person wearing on the top?''. \textbf{Diversification:} The primary question templates are those preserving all the \textit{demonstratives}, \textit{subject pronouns}, and \textit{prepositional phrases}. By randomly either removing parts of those phrases or replacing them with alternatives, we can create assorted variant question templates. In non-binary question templates, the question types for a given attribute come in different fashions, contributing to diverse non-binary question templates. Additionally, it is reasonable to replace the specific \textit{category} information of the targeted item with the combination of \textit{pronoun} and \textit{location} to expand the diversity of question templates. Adding non-relevant \textit{attribute values} to the description of the fashion item is also an approach to creating new question sentences. To further increase the robustness of the question templates, we also introduce a small portion of noise into the question templates, switching between ``this/these'', ``is/are'', and ``singular/plural''.
In binary question templates, even the elimination of question-type phrases like ``is this a'', ``are these'', or ``is there'' still leaves the remaining phrase human-readable. Therefore, we truncate a small fraction of the full question sentences by removing phrases of question type to increase the diversity of the binary questions. When \textit{attribute values} are placed after the \textit{category}, we randomly pick one from different \textit{conjunction structures} to form different phrases, which considerably increases the diversity of the binary question sentences. For example, for ``a shirt with stripe pattern'', an alternative expression can be ``a shirt designed with stripe pattern'' or ``a shirt featured in stripe design''. Table~\ref{qt_example} demonstrates some examples of question sentences generated from question templates. \begin{table} \centering \scalebox{0.86}{ \begin{tabular}{|p{0.42\linewidth}|p{0.11\linewidth}|p{0.12\linewidth}|p{0.38\linewidth}|} \hline Question templates & Answer types & Question types & Questions \\ \hline ``is this a \{\textit{attr1}\} \{\textit{category}\} with \{\textit{attr2}\}?'' & ``yes/no'' & ``is/are'' & ``is this a white shirt with long sleeves?'' \\ \hline ``on the top a \{\textit{category}\} with \{\textit{attr1}\} and in \{\textit{attr2}\} design?'' & ``yes/no'' & ``is/are'' & ``on the top a sweater with floral print and in v neck design?'' \\ \hline ``what \{\textit{attribute}\} is this \{\textit{category}\} the person wearing \{\textit{location}\}?'' &``others'' & ``what \newline\{\textit{attribute}\}'' & ``what color is this a-line dress the person wearing on the top?'' \\ \hline ``what \{\textit{attribute}\} is the one \{\textit{location}\}?'' &``others'' & ``what \newline\{\textit{attribute}\}'' & ``what color is the one on the top?'' \\ \hline ``when is a good time to wear this \{\textit{attr1}\} \{\textit{category}\}?'' &``others'' & ``when'' & ``when is a good time to wear this
yellow dress?'' \\ \hline \end{tabular}} \caption{Question templates and examples in the FashionVQA dataset} \label{qt_example} \end{table} \subsubsection{Balance positive and negative samples for each binary question} Given binary and non-binary question templates and \textit{attribute values} for a specific image, we can easily generate non-binary \textit{question-(multiple answers)-image} triplets and binary \textit{question-(positive answer)-image} triplets. For a balanced VQA dataset, we expect each binary question to come with the same number of positive and negative samples, i.e., balanced \textit{(question, ``Yes'', image ID)} triplets and \textit{(question, ``No'', image ID)} triplets. Here, we consider two different strategies for generating the negative samples of each binary question. One strategy keeps the image fixed and changes the \textit{attribute values} in the question; the other keeps the \textit{attribute values} fixed and changes the image. Here we further explain these strategies in detail: \textbf{Image-based:} For each image, by filling the binary question templates with the specific \textit{attribute values} and \textit{category} information provided for this image, we make a positive binary sample. When an \textit{attribute value} or \textit{category} in an existing binary question is changed, and the alteration is not in the list of \textit{attribute values} or \textit{categories} corresponding to the image, we treat the result as a negative sample for the binary question.
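The image-based strategy can be sketched as follows. This is a simplified illustration with hypothetical data structures (a toy two-attribute vocabulary), not the actual pipeline: for a given image, one \textit{attribute value} in a positive question is swapped for a value the item does not carry, yielding a negative sample.

```python
import random

# Image-based negative sampling sketch (hypothetical toy data,
# not the paper's pipeline). Each item maps an attribute to the
# set of values that are true for that image.
item_attrs = {"color": {"red"}, "pattern": {"solid"}}
vocab = {"color": {"red", "blue", "green"}, "pattern": {"solid", "striped"}}

def negative_value(attribute, rng=random):
    """Pick a value of `attribute` that does NOT describe the item."""
    candidates = sorted(vocab[attribute] - item_attrs[attribute])
    return rng.choice(candidates)

def make_pair(attribute, template="is this a {} shirt?"):
    """Return one positive and one negative question-answer pair."""
    pos_q = template.format(next(iter(item_attrs[attribute])))
    neg_q = template.format(negative_value(attribute))
    return (pos_q, "yes"), (neg_q, "no")

pos, neg = make_pair("color")
# pos is ("is this a red shirt?", "yes"); neg uses "blue" or "green"
```

Emitting one such pair per positive question keeps the yes/no counts balanced by construction.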
\begin{algorithm}[h] \caption{Attribute-based balancing of the positive and negative samples for binary questions} \label{alg:balanced} \hspace*{0.02in} {\bf Input:} $\operatorname{S}$: \{$s_i, \ldots$\} list of all fashion items \\ \hspace*{0.02in} Each fashion item $s_i$: \{image ID: $u_i$, category: $c_i$, attributes: \{$a_k, \ldots$\}, attribute values: \{$v_{a_k}, \ldots$\}\} \\ \hspace*{0.02in} $\operatorname{Q_T}$: list of binary question templates of all attributes \\ \hspace*{0.02in} {\bf Output:} $\operatorname{B}$: list of binary \textit{question-answer-image} triplets \\ \hspace*{0.02in} {\bf Initialization:} \begin{algorithmic} \For{each specific attribute $a_k$} \State $U_{a_k} \gets \{\}$: empty set of all image IDs with attribute $a_k$ \State $V_{a_k} \gets \{\}$: empty set of unique attribute values with attribute $a_k$ \EndFor \State $C \gets \{\}$: empty set of unique categories \end{algorithmic} \hspace*{0.02in} {\bf Build \textit{attribute-value-to-images} dictionary}: \begin{algorithmic} \For{each fashion item $s_i \in \operatorname{S}$} \State $C \gets c_i$ \For{each attribute $a_k \in s_i(\textrm{attributes})$} \State $U_{a_k} \gets u_i$ \State $P_{c_i}$(image ID set of positive answer of category $c_i$) $\gets u_i$ \For{each attribute value $v_{a_k} \in s_i(\textrm{attribute values})$} \State $V_{a_k} \gets v_{a_k}$; \State $P_{v_{a_k}}$(image ID set of positive answer with attribute value $v_{a_k}$) $\gets u_i $ \EndFor \EndFor \EndFor \end{algorithmic} \hspace*{0.02in} {\bf Build \textit{attribute-value-to-(positive/negative answer)-images} dictionary}: \begin{algorithmic} \For{ each attribute value $v_{a_k} \in V_{a_k}$} \State $V^\prime = \textrm{Synonyms}(v_{a_k})$ \For{ each $v_{+} \in (V^\prime \cap V_{a_k})$} \State $P_{v_{a_k}} = P_{v_{a_k}} \cup P_{v_{+}}$ \EndFor \EndFor \For{ each attribute value $v_{a_k} \in V_{a_k}$} \State $N_{v_{a_k}} = U_{a_k} - P_{v_{a_k}}$ \EndFor \For{each category $c_i \in C$} \State Follow the same
strategy to update positive answer image ID set $P_{c_i}$ and build negative set $N_{c_i}$ \EndFor \end{algorithmic} \hspace*{0.02in} {\bf Expand \textit{attribute-value-to-(positive/negative answer)-images} dictionary with attributes and category combinations}: \begin{algorithmic} \For{ each attribute value $v_{a_k} \in V_{a_k}$} \For{ each category $c_i \in C $} \State $P_{(v_{a_k}, c_i)}$ : positive answer image ID set of the combination of $(v_{a_k}, c_i)$ \State $N_{(v_{a_k}, c_i)}$ : negative answer image ID set of the combination of $(v_{a_k}, c_i)$ \State $P_{(v_{a_k}, c_i)} = P_{v_{a_k}}\cap P_{c_i}$ \State $N_{(v_{a_k}, c_i)} = (P_{v_{a_k}}\cap N_{c_i}) \cup (N_{v_{a_k}}\cap P_{c_i}) \cup (N_{v_{a_k}}\cap N_{c_i})$ \EndFor \EndFor \end{algorithmic} \hspace*{0.02in} {\bf Create balanced \textit{question-(positive/negative answer)-images} triplets}: \begin{algorithmic} \For{each category $c_i \in C$} \For{each specific attribute $a_k$ of category $c_i$} \State $\operatorname{Q_{T_{(a_k, c_i)}}} = \operatorname{Q_T}(\textrm{binary question templates of attribute} \, a_k \, \textrm{and category} \, c_i)$ \For{ each combination of attribute value $v_{a_k} \in V_{a_k}$ and category $c_i \in C$ } \State $Q_{(v_{a_k}, c_i)} =$ Fill $\operatorname{Q_{T_{(a_k, c_i)}}}$ templates with $v_{a_k}$ and $c_i$ to generate binary questions \For{ each binary question $q_{(v_{a_k}, c_i)} \in Q_{(v_{a_k}, c_i)}$} \State Pick the same number of image IDs from $P_{(v_{a_k}, c_i)}$ and $N_{(v_{a_k}, c_i)}$: \State $B \gets (q_{(v_{a_k}, c_i)}, \textrm{yes}, u_p \in P_{(v_{a_k}, c_i)}) \cup (q_{(v_{a_k}, c_i)}, \textrm{no}, u_n \in N_{(v_{a_k}, c_i)})$ \EndFor \EndFor \EndFor \EndFor \end{algorithmic} \end{algorithm} \textbf{Attribute-based:} First, we build an \textit{attribute-value-to-images} dictionary to map each distinct \textit{attribute value} or \textit{category} to a set of eligible image IDs.
Given a specific \textit{attribute value}, we collect a set of positive answer image IDs directly from this \textit{attribute-value-to-images} dictionary using the given \textit{attribute value} and its synonyms. The negative answer image IDs are collected from all image IDs of the same \textit{attribute} excluding the positive image IDs. More concretely, to maximally reduce the noise in the positive/negative answer image IDs, we need to verify the relationship among \textit{attribute values} as alternative, hierarchical, or exclusive terms. Examples of alternative terminologies are ``sweatpants'', ``jogger pants'', and ``lounge pants''; examples of hierarchical terminologies are ``blue'', ``light blue'', and ``sky blue''; and examples of exclusive terminologies are ``light blue'' and ``dark blue''. We expect \textit{attribute values} with similar terminologies (alternatives and parents of hierarchical terms) to contain the same set of positive samples, so they are considered synonyms. In this manner, we can build an \textit{attribute-value-to-(positive/negative answer)-images} dictionary (see Algorithm \ref{alg:balanced}). Then, we consider all the combinations of assorted \textit{attributes} with \textit{category}. For example, \textit{$\langle$ color, pattern, category $\rangle$, $\langle$ color, category $\rangle$, $\langle$ material, neckline type, category $\rangle$}, etc. For each combination, we further expand the \textit{attribute-value-to-(positive/negative answer)-images} dictionary by mapping the combination of one specific \textit{attribute value} and one specific \textit{category} (e.g. $\langle$red, shirt$\rangle$) to its positive/negative answer image ID set.
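The synonym-merging step admits a compact sketch (a hypothetical helper; in practice the synonym sets come from the verified alternative and hierarchical relations described above):

```python
def merge_synonyms(positives, universe, attr, value, synonyms):
    """Union the positive image IDs of an attribute value with those of its
    synonyms (alternative terms and hierarchical parents); negatives are the
    remaining images annotated for the same attribute."""
    pos = set(positives.get((attr, value), set()))
    for syn in synonyms:
        pos |= positives.get((attr, syn), set())
    neg = universe[attr] - pos
    return pos, neg

positives = {
    ("color", "light blue"): {1},
    ("color", "sky blue"): {2},   # alternative term for "light blue"
    ("color", "dark blue"): {3},  # exclusive term: stays negative
}
universe = {"color": {1, 2, 3}}
pos, neg = merge_synonyms(positives, universe, "color", "light blue", ["sky blue"])
```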
We collect the positive answer image ID set of the combinations following the formula in Equation \ref{eqn:pos} and the negative answer image ID set following the formula in Equation \ref{eqn:neg}: \begin{align} \label{eqn:pos} Pos(\langle attr1, category \rangle) = & Pos(attr1)\cap Pos(category)\\ \begin{split} \label{eqn:neg} Neg(\langle attr1, category \rangle) = & (Pos(attr1)\cap Neg(category))\cup \\ & (Neg(attr1)\cap Pos(category)) \cup \\ & (Neg(attr1)\cap Neg(category)) \end{split} \end{align} where \textit{Pos()} is the positive answer image ID set and \textit{Neg()} is the negative answer image ID set. With the \textit{attribute-value-to-(positive/negative answer)-images} dictionary, we can easily generate different binary questions by filling the question templates with each combination of \textit{attribute value} and \textit{category} in the dictionary. We can pick a fixed number of positive and negative answer image IDs to guarantee the sample balance for each question. Following the same formula, we can easily expand the combinations to multiple attribute values and one category. \begin{figure} \centering \includegraphics [width=\textwidth]{dataset_snapshot.pdf} \caption{Four randomly picked \textit{question-answer-image} triplets from the FashionVQA dataset.} \label{fig:dataset_snapshot} \end{figure} \subsection{Dataset description} Figure~\ref{fig:dataset_snapshot} shows four randomly picked \textit{question-answer-image} triplet examples in our dataset. There are 42 \textit{attributes} in our dataset, including \textit{category, color, pattern, occasion, material, number of}, 29 type-related \textit{attributes}, 5 style-related \textit{attributes}, and 2 shape-related \textit{attributes}.
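Equations \ref{eqn:pos} and \ref{eqn:neg} are plain set algebra; a minimal sketch:

```python
def combine(pos_attr, neg_attr, pos_cat, neg_cat):
    """Positive/negative image ID sets for an (attribute value, category)
    combination: a positive sample must match both terms, and every other
    image annotated for both the attribute and the category is a negative."""
    pos = pos_attr & pos_cat
    neg = (pos_attr & neg_cat) | (neg_attr & pos_cat) | (neg_attr & neg_cat)
    return pos, neg

# Toy example: "red" positives {1, 2}, "shirt" positives {1, 3}.
pos, neg = combine({1, 2}, {3, 4}, {1, 3}, {2, 4})
```

Note that the three union terms in the negative set are disjoint by construction, so the same pattern extends directly to combinations of multiple attribute values and one category.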
The binary questions in our dataset are composed of three major types: \textit{category}, \textit{category + one attribute}, and \textit{category + two attributes}, with 1, 2, and 6 permutations between \textit{category} and \textit{attribute}, respectively; the three types have ascending difficulty levels for learning the alignment between a given binary question and an input image. \textbf{FashionVQA:} The FashionVQA dataset includes 207,654 unique photoshoot images with resolution $600 \times 600$. We use 169,406 images in the train split for training and 38,248 images in the validation split for evaluation. The train split is composed of 163M \textit{question-answer-image} triplets and the validation split includes 5.2M \textit{question-answer-image} triplets. Since the information in binary questions is much more complicated than that in the non-binary questions, there are more binary triplets than non-binary ones in the dataset. In the train split, we have 22M non-binary \textit{question-answer-image} triplets covering 33 different question types, and approximately 141M binary \textit{question-answer-image} triplets, among which 134M have questions with one \textit{category} and two \textit{attribute values}, 6M have questions with one \textit{category} and one \textit{attribute value}, and 1M have questions with only one \textit{category} or one \textit{attribute value}. In the validation split, we have 1.2M non-binary \textit{question-answer-image} triplets and 4M binary \textit{question-answer-image} triplets. The answer vocabulary contains 1,545 different classes in total. \textbf{mini-FashionVQA:} We also create a smaller dataset, named mini-FashionVQA, derived from the FashionVQA dataset. The mini-FashionVQA dataset includes 20M \textit{question-answer-image} triplets in the train split (11M from non-binary triplets and 9M from binary triplets) and 2.2M triplets in the validation split (0.7M from non-binary triplets and 1.5M from binary triplets).
\section{Benchmarks} \begin{figure} \centering \includegraphics [width=\textwidth]{FigPipeline.pdf} \caption{Pipeline of the fashion VQA task.} \label{fig:pipeline} \end{figure} Every benchmark reported on our datasets is implemented via PyTorch\cite{pytorch}-v1.10 on servers with 8 Nvidia 80GB A-100 GPUs, 2 AMD 2.25GHz 7742 CPUs, and 4TB system memory. In the training stage, we adopt data-parallel multi-GPU training with a batch size of 2048 and train for 40 epochs. The Adam\cite{adam} optimizer is used across all the models. The learning rate is set to 0.0001 and reduced by half at the milestone epochs of 20, 30, and 35. We benchmark the FashionVQA dataset by training several VQA models to learn the interaction between images and questions. Figure~\ref{fig:pipeline} shows the VQA pipeline adopted in our experiments. Given the visual embedding of the input image and the text embedding of the input question sentence, we train the model to output the given answer to the question. The dataset is used to train two variants of the MCAN \cite{mcan} model and a MUTAN \cite{mutan} model. One MCAN variant, named MCAN$\sp{\ast}$-v1, is a modification of the MCAN-small, which includes only two encoder-decoder modules. The other variant is named MCAN$\sp{\ast}$-VLM, which has a similar structure to MCAN$\sp{\ast}$-v1, but instead of an answer classifier, it has a token classifier covering all of the question and answer tokens. For MCAN$\sp{\ast}$-VLM, the answer to each question is tokenized as one token and concatenated with the question tokens as the language input. The special token `SEP' is inserted between the question and the answer. Also, an `EOS' token is used at the end of the answer. During the training of MCAN$\sp{\ast}$-VLM, we randomly mask one token and predict the masked token as in masked language modeling, similar to BERT \cite{bert}.
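The learning-rate schedule above (0.0001, halved at epochs 20, 30, and 35) is a standard milestone step schedule; in PyTorch it would correspond to `MultiStepLR` with `gamma=0.5`, but the rule itself is simple enough to state framework-free:

```python
def learning_rate(epoch, base_lr=1e-4, milestones=(20, 30, 35), gamma=0.5):
    """Return the learning rate for a given epoch: the base rate multiplied
    by gamma once for every milestone epoch already reached."""
    passed = sum(1 for m in milestones if epoch >= m)
    return base_lr * gamma ** passed
```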
Unlike MCAN$\sp{\ast}$-v1, whose answer vocabulary is independent of the word vocabulary of the questions, MCAN$\sp{\ast}$-VLM maps each answer to one token and expands the original word vocabulary to a larger one with the answer tokens. Thus, the tokens in the answers and questions share the same word vocabulary. This allows the MCAN$\sp{\ast}$-VLM to work as a visual language model, which directly benefits from the overlap between the question tokens of the binary questions and the answer tokens of the non-binary questions. In the training stage, for all models except MCAN$\sp{\ast}$-VLM, we treat the binary-question prediction and the non-binary-question prediction as two different tasks and output the predicted answers from two different classifiers. We report top-1 accuracies for both binary and non-binary samples. \begin{table}[h] \caption{Benchmarks of MCAN$\sp{\ast}$-v1, MCAN$\sp{\ast}$-VLM, and MUTAN trained on the FashionVQA dataset} \label{full} \begin{center} \begin{tabular}{c|ccc} \hline \multirow{2}{*}{Model} &\multicolumn{3}{|c}{Top-1 Acc} \\ \cline{2-4} & All & Non-binary & Binary \\ \hline MUTAN & 81.38\%&61.62\%&87.43\% \\ MCAN$\sp{\ast}$-v1 & 84.42\%&64.32\%&90.58\% \\ MCAN$\sp{\ast}$-VLM & \bf{84.69\%}& \bf{64.65\%}& \bf{90.84\%} \\ \hline \end{tabular} \end{center} \end{table} Table~\ref{full} lists the benchmark results of the three aforementioned models on the validation split of our FashionVQA dataset. The results show that MCAN$\sp{\ast}$-VLM works better than MCAN$\sp{\ast}$-v1 and MUTAN, indicating that a decoder-only visual language model (VLM) performs better than the dedicated VQA architectures. By visualizing the image attention maps generated from an intermediate layer of the model, we can validate whether the model focuses its attention on the regions mentioned in a question. Figure~\ref{fig:att_v} visualizes the attention map from two validation samples for a series of binary and non-binary questions.
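The token-sequence construction for MCAN$\sp{\ast}$-VLM can be sketched as follows. This is a simplified, hypothetical version: the actual tokenizer is not specified here, and whether the `SEP'/`EOS' tokens may themselves be masked is a detail left open in the text (they are excluded below):

```python
import random

def build_vlm_sequence(question_tokens, answer_token, mask_index=None, seed=0):
    """Concatenate question tokens, a SEP separator, the answer as a single
    token, and a closing EOS token; then mask one position (excluding the
    special tokens) as the masked-language-modeling target."""
    seq = question_tokens + ["SEP", answer_token, "EOS"]
    maskable = [i for i, tok in enumerate(seq) if tok not in ("SEP", "EOS")]
    if mask_index is None:
        mask_index = random.Random(seed).choice(maskable)
    target = seq[mask_index]
    seq[mask_index] = "MASK"
    return seq, target

# Masking the answer position turns the task into answer prediction.
seq, target = build_vlm_sequence(["is", "the", "dress", "red"], "yes", mask_index=5)
```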
The three columns of images on each side are the input images, attention maps, and images overlaid by the attention maps, respectively, followed by the corresponding input questions, ground truth answers, and predicted answers. When the questions focus on different fashion items of the same image, the attention map shifts to the targeted region as expected. \begin{figure} \centering \includegraphics [width=\textwidth]{visualization_new.pdf} \caption{Visualization of attention maps generated by the model trained with the FashionVQA dataset.} \label{fig:att_v} \end{figure} \subsection{Benchmarks with different VQA models} We also use the mini-FashionVQA to benchmark a larger variety of VQA models including Bottom-up-top-down (BUTD) \cite{bottom-up-attention}, MUTAN \cite{mutan}, DFAF \cite{dfaf}, MCAN$\sp{\ast}$-v1, MCAN$\sp{\ast}$-v2, MCAN$\sp{\ast}$-VLM, and OSCAR \cite{oscar}. MCAN$\sp{\ast}$-v2 has the same model structure as MCAN$\sp{\ast}$-v1 except for its intermediate feed-forward layer with only half the number of channels of MCAN$\sp{\ast}$-v1. We apply similar visual embedding, text embedding, and loss function in these models and train them from scratch. Table~\ref{comp-models-reg} lists the results (average of top-1 accuracies for three runs) from different VQA models with the same region-based visual features as input. The visual features are extracted from Faster-RCNN with a ResNet-101 backbone fine-tuned on VisualGenome. We set the maximum number of objects extracted from the object-detection model to 25. The feature dimension of each object is 2048. A combination of GloVe \cite{glove} + GRU\cite{GRU}/LSTM\cite{LSTM} is used for the text embedding in DFAF, MCAN$\sp{\ast}$-v1, MCAN$\sp{\ast}$-v2, MCAN$\sp{\ast}$-VLM, and BUTD. MUTAN adopts GRU for the text embedding, of which the parameters are initialized with SkipThoughts \cite{skipthoughts}.
The number of parameters, FLOPs, and activation counts in all our experiments are calculated only from the cross-modality models, excluding text embedding and visual embedding components. On the mini-FashionVQA dataset, MCAN$\sp{\ast}$-VLM achieves the best accuracy for both non-binary question and binary question samples, with fewer parameters and FLOPs than OSCAR. Also, MCAN$\sp{\ast}$-VLM works better than MCAN$\sp{\ast}$-v1 on both non-binary questions and binary questions. \begin{table}[ht] \caption{Performance on different VQA models trained on the mini-FashionVQA dataset with same region-based visual features} \label{comp-models-reg} \begin{center} \begin{tabular}{ccccccc} \hline \multirow{2}{*}{Model}&\multirow{2}{*}{Parameters}& \multirow{2}{*}{FLOPs}& \multirow{2}{*}{Act.Count}& \multicolumn{3}{c}{Top-1 Acc} \\ \cline{5-7} & & & & All & Non-binary& Binary \\ \hline MUTAN &9.8M &38.5M &156.9K& 75.08\%&59.14\%&81.50\% \\ BUTD &11.5M &61.5M &30.7K&79.26\%&63.61\%&85.56\% \\ DFAF & 9M& 280M& 114.8K&80.55\%&62.52\%&87.81\% \\ OSCAR &86.7M &6475M &2832.3K &81.21\%& 64.20\%& 88.05\% \\ \hline MCAN$\sp{\ast}$-v1 & 19M &427M &238.9K & 81.69\% & 64.47\% & 88.61\% \\ MCAN$\sp{\ast}$-v2 & 14.5M &320M &134.5K &81.08\%&64.33\%&87.83\% \\ MCAN$\sp{\ast}$-VLM & 19M &464M &282.0K &\bf{81.80\%}&\bf{64.63\%}&\bf{88.71\%} \\ \hline \end{tabular} \end{center} \end{table} \subsection{Ablation study} \textbf{Impact of visual embedding extraction schemes:} We also benchmark different visual embedding extraction schemes and see their impact on the performance of VQA tasks for the mini-FashionVQA dataset. We replace the region-based feature with the grid feature with the same dimension. The grid-feature (refer to \cite{grid-feature}) model is built with ResNext-101 backbone and fine-tuned with object detection task on the VisualGenome dataset. The visual feature is extracted from the same layer as the object detection and pooled into different sizes. 
To be aligned with the visual input dimension size from the region-based feature, the spatial dimension of the grid feature is set to 5\-$\times$\-5 with the feature dimension set to 2048. Other than the visual embedding, all of the settings remain the same. \begin{table}[ht] \caption{Performance with region-based and grid visual features across different VQA models} \label{re-grid} \begin{center} \scalebox{0.9}{ \begin{tabular}{c|ccc|ccc} \hline \multirow{2}{*}{Model}&\multicolumn{3}{|c|}{Region-based features (ResNet-101)} & \multicolumn{3}{c}{Grid features (ResNext-101)} \\ \cline{2-7} & All&Non-binary &Binary & All & Non-binary& Binary \\ \hline MUTAN &75.08\%&59.14\%&81.50\%& 79.77\%\small{(+4.69\%)}&62.54\%&86.70\% \\ BUTD & 79.26\%&63.61\%&85.56\%&80.54\%\small{(+1.28\%)}&64.30\%&87.08\% \\ DFAF &80.55\%&62.52\%&87.81\%&82.01\%\small{(+1.46\%)}&64.70\%&88.97\% \\ \hline MCAN$\sp{\ast}$-v1 & 81.69\%& 64.47\%&88.61\% & 83.29\%\small{(+1.60\%)}&65.38\%&90.49\% \\ MCAN$\sp{\ast}$-v2 & 81.08\%&64.33\%&87.83\%& 82.98\%\small{(+1.90\%)}&65.17\%&90.14\% \\ MCAN$\sp{\ast}$-VLM & 81.80\%&64.63\%&88.71\%& 83.41\%\small{(+1.61\%)}& 65.52\%& 90.62\% \\ \hline \end{tabular} } \end{center} \end{table} Table~\ref{re-grid} shows that the grid-feature-based visual embedding extraction method consistently works better than the region-based method across all different VQA models by more than 1\% when trained on our dataset. In the other experiments, unless mentioned otherwise, we use grid-feature-based visual embedding for all the models. \textbf{Impact of different backbones for visual embedding:} Generally, a better visual backbone will contribute to better visual embedding. We benchmark three different visual backbones (ResNet-50, ResNext-101, ResNext-152) for the grid-feature extraction on our dataset for MCAN$\sp{\ast}$-v1, BUTD, and DFAF. All the visual backbones are pre-trained on VisualGenome \cite{visualgenome} dataset for the grid-feature extraction. 
\begin{table}[ht] \caption{Performances with different visual backbones for grid-feature} \label{comp-backbones} \begin{center} \scalebox{0.8}{ \begin{tabular}{c|ccc|ccc|ccc} \hline \multirow{2}{*}{Model}&\multicolumn{3}{|c|}{MCAN$\sp{\ast}$-v1} &\multicolumn{3}{|c|}{BUTD} &\multicolumn{3}{|c}{DFAF}\\ \cline{2-10} & All&Non-binary &Binary & All & Non-binary& Binary & All & Non-binary& Binary \\ \hline ResNet-50&82.99\%& 65.25\%& 90.13\%& 80.47\%& 64.35\%& 86.96\%& 80.81\%& 63.39\%& 87.81\% \\ ResNext-101 & 83.29\%& 65.38\%& 90.49\%& 80.54\%& 64.30\%& 87.08\%& 82.01\%& 64.70\%& 88.97\% \\ ResNext-152 & 83.15\%& 65.41\%& 90.29\%& 80.50\%& 64.62\%& 86.89\%& 82.39\%& 65.17\%& 89.32\% \\ \hline \end{tabular}} \end{center} \end{table} Table~\ref{comp-backbones} shows that ResNext-101 consistently works better than ResNet-50 on three different models for the performance of both non-binary and binary questions; however, the performance improvement from ResNext-101 to ResNext-152 is inconsistent. Overall, grid-feature with ResNext-101 as the backbone is the best choice for extracting visual features on our dataset. \textbf{Impact of different spatial dimension sizes for grid feature:} A larger spatial dimension size after the pooling operation will typically preserve more visual information. We benchmark MCAN$\sp{\ast}$-v1 with three different spatial dimension sizes (5\-$\times$\-5, 7\-$\times$\-7, and 9\-$\times$\-9) for the grid feature visual embeddings in Table~\ref{mcan-sp}. ResNext-101 is the selected visual backbone. The results in Table~\ref{mcan-sp} show that the best performance among the three is from the smallest spatial dimension size, 5\-$\times$\-5, rather than the largest one. One possible reason is that the background of the photoshoot images from our dataset includes some trivial information, and the larger spatial dimension sizes do not add useful information.
\begin{table}[ht] \caption{Performances with different spatial dimension sizes for grid-feature} \label{mcan-sp} \begin{center} \begin{tabular}{ccccc} \hline \multirow{2}{*}{Model} &\multirow{2}{*}{Spatial dimension size}&\multicolumn{3}{c}{Top-1 Acc} \\ \cline{3-5} & & All & Non-binary & Binary \\ \hline \multirow{3}{*}{MCAN$\sp{\ast}$-v1} & 5\-$\times$\-5 & 83.29\%&65.38\%&90.49\% \\ & 7\-$\times$\-7 &82.94\%&64.19\%&90.48\%\\ & 9\-$\times$\-9 &82.60\%&65.23\%&89.59\% \\ \hline \end{tabular} \end{center} \end{table} \textbf{Impact of single-task versus multi-task training:} Due to the large difference in the answer distribution of non-binary questions and binary questions, we consider using different classifiers for answer predictions and treating the problem as a multi-task classification. Namely, predicting answers for two types of questions with either a single classifier or two separate classifiers. This applies to all models, except the MCAN$\sp{\ast}$-VLM model, where the outputs are generated by a single token classifier, including both answer and question tokens. \begin{table}[ht] \caption{Performance with different number of classifiers for non-binary and binary questions} \label{comp-classifiers} \begin{center} \begin{tabular}{c|ccc|ccc} \hline \multirow{2}{*}{Model}&\multicolumn{3}{|c|}{Single-task} & \multicolumn{3}{|c}{Multi-tasks} \\ \cline{2-7} & All&Non-binary &Binary & All & Non-binary& Binary \\ \hline MUTAN & 79.40\%& 62.12\%& 86.36\%& 79.77\%\small(+0.37\%)& 62.54\%& 86.70\%\\ BUTD & 80.32\%& 63.55\%& 87.07\%& 80.54\%\small{(+0.22\%)}& 64.30\%& 87.08\% \\ DFAF &81.6\%& 63.85\%& 88.74\%& 82.01\%\small{(+0.41\%)}& 64.70\%& 88.97\% \\ MCAN$\sp{\ast}$-v1 & 83.24\% & 65.36\% & 90.44\%& 83.29\%\small{(+0.05\%)}&65.38\%&90.49\% \\ \hline \end{tabular} \end{center} \end{table} Table~\ref{comp-classifiers} demonstrates that the proposed multi-task classification is superior to a single-task classification in predicting the answers for the VQA models. 
\section{Comparison to human performance} \textbf{Human accuracy for FashionVQA dataset:} To see how well humans can answer the questions in our dataset, we implemented a user interface that shows one \textit{question-image} pair from the validation set at a time. The user interface allows the human annotators to select one of the acceptable answers among 1,545 answer classes, e.g., ``yes'', ``no'', ``purple'', ``unicorn print'', ``tailored'', ``fly hook and loop fastener'', ``three quarter length'', etc. We asked the annotators to answer each question to the best of their knowledge without looking up the terms. We have two types of annotators: experts and non-experts. We trained our expert annotators with at least ten examples per fashion term in our word vocabulary. Both expert and non-expert annotators are trained on the VQA task of our dataset. Table~\ref{human-on-dataset} shows the accuracies of nine human annotators compared to the MCAN$\sp{\ast}$-VLM model trained on the FashionVQA dataset.
\begin{table}[ht] \caption{Performances of different human annotators on samples from FashionVQA validation set} \label{human-on-dataset} \begin{center} \scalebox{0.8}{ \begin{tabular}{l|ccc|ccc|ccc} \toprule \multirow{2}{*}{Annotator}&\multicolumn{3}{|c|}{Number of samples}&\multicolumn{3}{|c|}{Accuracy} &\multicolumn{3}{c}{Accuracy \textit{p}-value} \\ \cline{2-10} & All & Non-binary & Binary & All & Non-binary & Binary & All & Non-binary& Binary \\ \hline Expert 1 & 728 & 216 & 512 & 63.6\% & 43.5\% & 72.1\% & 8.5e-30 & 1.1e-09 & 7.5e-20 \\ Non-expert 1 & 106 & 29 & 77 & 58.5\% & 24.1\% & 71.4\% & 1.8e-07 & 1.4e-05 & 0.00018 \\ Non-expert 2 & 70 & 18 & 52 & 52.9\% & 22.2\% & 63.5\% & 6.9e-07 & 0.0003 & 8.7e-05 \\ Non-expert 3 & 61 & 17 & 44 & 63.9\% & 29.4\% & 77.3\% & 0.00072 & 0.0035 & 0.02 \\ Non-expert 4 & 51 & 14 & 37 & 47.1\% & 14.3\% & 59.5\% & 1.2e-06 & 8.7e-05 & 0.00025 \\ Non-expert 5 & 150 & 44 & 106 & 50.7\% & 22.7\% & 62.3\% & 3e-14 & 2.8e-08 & 1.3e-08 \\ Non-expert 6 & 211 & 62 & 149 & 52.6\% & 22.6\% & 65.1\% & 9.5e-18 & 3.9e-11 & 4.4e-10 \\ Non-expert 7 & 103 & 27 & 76 & 48.5\% & 25.9\% & 56.6\% & 3.3e-11 & 6.2e-05 & 3.6e-08 \\ Non-expert 8 & 50 & 14 & 36 & 52.0\% & 14.3\% & 66.7\% & 1.6e-05 & 8.7e-05 & 0.0023 \\ \bottomrule \end{tabular} } \end{center} \end{table} To analyze the statistical significance of the results, we calculated the \textit{p}-values of the human accuracies with respect to the validation accuracy of the model using the one-sided t-test. The validation accuracies of the MCAN$\sp{\ast}$-VLM model are 84.69\%, 64.65\%, and 90.84\% for all, non-binary, and binary questions, respectively. The model outperforms all of the human annotators, and at a 95\% confidence level, the differences between the model validation accuracy and human accuracies are statistically significant. \textbf{Accuracies for human-generated questions:} We also stress-tested the model by measuring its performance on human-generated questions. 
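As a rough sanity check on the magnitude of these \textit{p}-values, the one-sided comparison can be approximated by a normal test on the annotator's Bernoulli outcomes. This is a sketch only; the exact t-test convention (e.g. the variance estimate) may differ slightly from the one used for the table:

```python
import math

def one_sided_pvalue(correct, total, model_acc):
    """Normal approximation to a one-sided test of H0: the annotator's true
    accuracy is at least `model_acc`; small values favor the model."""
    p_hat = correct / total
    se = math.sqrt(p_hat * (1 - p_hat) / total)
    z = (p_hat - model_acc) / se
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # lower-tail Phi(z)

# Expert 1: roughly 463 of 728 answers correct (63.6%) vs. the model's 84.69%.
p = one_sided_pvalue(463, 728, 0.8469)
```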
We asked an expert annotator, Expert 2, to paraphrase the questions of 300 random samples (218 binary and 82 non-binary samples) from the validation set. We used these questions instead of the original questions in the validation set to measure the accuracies of the MCAN$\sp{\ast}$-VLM model and a human annotator, Expert 1, as shown in Table~\ref{paraphrased}. \begin{figure}[htp] \centering \includegraphics [width=0.95\textwidth]{paraphrased-examples.pdf} \caption{FashionVQA paraphrased and answered by humans and model} \label{fig:paraphrased-examples} \end{figure} \begin{table}[ht] \caption{Performances of the MCAN$\sp{\ast}$-VLM model and a human expert on human-generated questions} \label{paraphrased} \begin{center} \scalebox{1.0}{ \begin{tabular}{lrrrrrrrrr} \toprule \multirow{2}{*}{}&\multicolumn{3}{c}{Accuracy} \\ \cline{2-4} & All & Non-binary & Binary \\ \midrule Human Expert 1 & 62.3\% & 30.5\% & 74.3\% \\ MCAN$\sp{\ast}$-VLM & 77.7\% & 47.6\% & 89.0\% \\ \midrule \textit{p}-value & 1.9e-05 & 0.0125 & 3.4e-05 \\ \bottomrule \end{tabular} } \end{center} \end{table} \begin{figure} \centering \includegraphics [width=\textwidth]{AB_test.pdf} \caption{An example of the side-by-side comparison of the search results with and without reranking. Given a random search query, e.g. ``green crew neck dress'', the annotator picks her/his preferred search results between the left (A) and right (B) result pages.} \label{fig:ab_test} \end{figure} We performed a one-sided t-test to analyze the statistical significance of the difference between the human and the model accuracies. At a significance level of 0.05 ($\alpha=0.05$), the \textit{p}-values reject the null hypothesis of the human accuracy being greater than or equal to the model. Figure~\ref{fig:paraphrased-examples} provides several examples from this experiment. \textbf{Impact on downstream tasks:} We performed a side-by-side comparison of the apparel search with and without FashionVQA. 
A baseline search engine returns the top 24 items for an apparel search query. Another variant of the search results is formed by reranking these 24 items with FashionVQA: we generate a set of binary questions from the search query and use the MCAN$\sp{\ast}$-VLM model trained with FashionVQA to answer these questions for each of the 24 items. The average confidence scores of the yes and no answers are used as additional features to rerank the top 24 items. For a number of randomly selected search queries with two \textit{attribute values} and one \textit{category}, e.g., ``green crew neck dress'', a human annotator is presented with the original and reranked search result pages and gets to choose her/his preferred result page. The result pages are randomly located on the left and right sides of the screen without the annotator knowing which of the two pages presents the reranked results. Figure~\ref{fig:ab_test} shows an example of our side-by-side A/B test for the given random query. Out of 150 search queries, the human annotator preferred 117 search pages reranked based on FashionVQA. A binomial test yields a \textit{p}-value of 3.2e-12, showing that the human annotator significantly prefers the search result pages reranked using FashionVQA. \section*{Conclusion} In this work, we design a fashion VQA dataset and generate non-binary and binary questions via diverse templates. The templates allow us to flexibly scale the dataset to the size and complexity required for training a domain-specific multimodal model. We benchmark this large-scale dataset on different VQA models and discuss several factors impacting the performance of the VQA task. The best model is a visual language model trained on the FashionVQA dataset. The model generates cross-modality embeddings of the vision and language domains that are applicable to the downstream tasks of fashion dialogue, search, and recommendation. \bibliographystyle{unsrtnat}
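The binomial test on the 117-out-of-150 preference count is easy to reproduce exactly as an upper-tail sum over the binomial distribution (standard library only):

```python
from math import comb

def binomial_pvalue(successes, trials, p=0.5):
    """One-sided exact binomial test: probability of observing at least
    `successes` preferences out of `trials` under the null of no preference."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

p_value = binomial_pvalue(117, 150)  # preference count for the reranked pages
```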
\section{Introduction} We consider two models of Markov jump processes with mean-field interaction. In both cases, we have $n$ particles or spins that evolve as a pure jump process, where the jump rates of the individual particles depend on the empirical distribution of all $n$ particles. We prove the large deviation principle (LDP) for the trajectory of these empirical quantities and show that the rate function is in Lagrangian form. The first set of models that we consider are conservative models that generalize the Ehrenfest model. In the one-dimensional setting, this model is also known as the Moran model without mutation or selection. For these models, the empirical quantity of interest for large $n$ is the empirical magnetisation. The second class of models are jump processes of Glauber type such as Curie-Weiss spin-flip dynamics. In this case, the empirical measure is given by \begin{equation*} \mu_n(t) := \frac{1}{n}\sum_{i \leq n} \delta_{\sigma_i(t)}, \end{equation*} where $\sigma_i(t) \in \{1,\dots,d\}$ is the state of the $i$-th spin at time $t$. Under some appropriate conditions, the trajectory $\mu_n(t)$ converges as $n \rightarrow \infty$ to $\mu(t)$, the solution of a McKean-Vlasov equation, which is a generalization of the linear Kolmogorov forward equation that would appear in the case of independent particles. For the second class of models, we obtain a large deviation principle for the trajectory of these empirical measures on the space $D_{\cP(\{1,\dots,d\})}(\bR^+)$ of c\`{a}dl\`{a}g paths on $E := \cP(\{1,\dots,d\})$ of the form \begin{equation*} \PR\left[\{\mu_n(t)\}_{t \geq 0} \approx \gamma\right] \approx e^{-nI(\gamma)} \end{equation*} where \begin{equation*} I(\gamma) = I(\gamma(0)) + \int_0^\infty \cL(\gamma(s),\dot{\gamma}(s)) \dd s \end{equation*} for trajectories $\gamma$ that are absolutely continuous and $I(\gamma) = \infty$ otherwise. In particular, $I(\gamma) = 0$ for the solution $\gamma$ of the limiting McKean-Vlasov equation.
The \textit{Lagrangian} $\cL : E \times \bR^d \rightarrow \bR^+$ is defined as the Legendre transform of a \textit{Hamiltonian} $H : E \times \bR^d \rightarrow \bR$ that can be obtained via a limiting procedure \begin{equation} \label{eqn:convergence_of_H_intro} H(x,\nabla f(x)) = Hf(x) = \lim_n \frac{1}{n} e^{-nf} A_n e^{nf}. \end{equation} Here $A_n$ is the generator of the Markov process $\{\mu_n(t)\}_{t \geq 0}$. More details on the models and definitions follow shortly in Section \ref{section:main_results}. Recent applications of the path-space large deviation principle are found in the study of mean-field Gibbs-non-Gibbs transitions, see e.g. \cite{EK10,EFHR10}, or the microscopic origin of gradient flow structures, see e.g. \cite{AdDiPeZi13,MPR13}. Other authors have considered the path-space LDP in various contexts before, see for example \cite{FW98,Co89,Le95,DPdH96,Fe94b,BuDuMa11,BoSu12}. A comparison with these results follows in Section \ref{section:comparison_to_results_in_literature}. The novel aspect of this paper with respect to large deviations for jump processes is an approach via a class of \textit{Hamilton-Jacobi} equations. In \cite{FK06}, a general strategy is proposed for the study of large deviations of trajectories, which is based on the convergence of non-linear semigroups. As in the theory of weak convergence of Markov processes, this program is carried out in two steps: first, one proves convergence of the generators, i.e. \eqref{eqn:convergence_of_H_intro}, and second, one shows that $H$ is indeed the generator of a semigroup. The latter issue is non-trivial and follows, for example, by showing that the Hamilton-Jacobi equation \begin{equation} \label{eqn:Hamilton_Jacobi_equation_intro} f(x) - \lambda H(x,\nabla f(x)) - h(x) = 0 \end{equation} has a unique solution $f$ for all $h \in C(E)$ and $\lambda >0$ in the viscosity sense. It is exactly this problem that is the main focus of the paper.
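Explicitly, the Legendre transform defining $\cL$ reads

```latex
\begin{equation*}
\cL(x,v) = \sup_{p \in \bR^d} \left[ \ip{p}{v} - H(x,p) \right].
\end{equation*}
```

Since $A_n$ vanishes on constants, $H(x,0) = 0$, so that $\cL(x,v) \geq \ip{0}{v} - H(x,0) = 0$; in particular the integrand in the rate function is non-negative, as it should be.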
An extra bonus of this approach is that the conditions on the Markov processes for finite $n$ are weaker than in previous studies, and allow for singular behaviour in the jump rate for a particle to move from $a$ to $b$ in boundary regions where the empirical average $\mu(a)$ is close to $0$. This approach via the Hamilton-Jacobi equation has been carried out in \cite{FK06} for L\'{e}vy processes on $\bR^d$, systems with multiple time scales and for stochastic equations in infinite dimensions. In \cite{DFL11}, the LDP for a diffusion process on $(0,\infty)$ is treated with singular behaviour close to $0$. \smallskip As a direct consequence of our large deviation principle, we obtain a straightforward method to find Lyapunov functions for the limiting McKean-Vlasov equation. If $A_n$ is the linear generator of the empirical quantity of interest of the $n$-particle process, the operator $A$ obtained by $Af = \lim_n A_n f$ can be represented by $Af(\mu) = \ip{\nabla f(\mu)}{\mathbf{F}(\mu)}$ for some vector field $\mathbf{F}$. If solutions to \begin{equation} \label{eqn:vector_field_equation_intro} \dot{\mu}(t) = \mathbf{F}(\mu(t)) \end{equation} are unique for a given starting point and if the empirical measures $\mu_n(0)$ converge to $\mu(0)$, the empirical measures $\{\mu_n(t)\}_{t \geq 0}$ converge almost surely to a solution $\{\mu(t)\}_{t \geq 0}$ of \eqref{eqn:vector_field_equation_intro}. In Section \ref{subsection:Htheorems}, we will show that if the stationary measures of $A_n$ satisfy a large deviation principle on $\cP(\{1,\dots,d\})$ with rate function $I_0$, then $I_0$ is a Lyapunov function for \eqref{eqn:vector_field_equation_intro}. \smallskip The paper is organised as follows. In Section \ref{section:main_results}, we introduce the models and state our results. Additionally, we give some examples to show how to apply the theorems.
In Section \ref{section:LDP_via_HJequation}, we recall the main results from \cite{FK06} that relate the Hamilton-Jacobi equations \eqref{eqn:Hamilton_Jacobi_equation_intro} to the large deviation problem. Additionally, we verify conditions from \cite{FK06} that are necessary to obtain our large deviation result with a rate function in Lagrangian form, in the case that we have uniqueness of solutions to the Hamilton-Jacobi equations. Finally, in Section \ref{section:viscosity_solutions} we prove uniqueness of viscosity solutions to \eqref{eqn:Hamilton_Jacobi_equation_intro}. \section{Main results} \label{section:main_results} \subsection{Two models of interacting jump processes} \label{section:two_models} We do a large deviation analysis of the trajectory of the empirical magnetisation or distribution for two models of interacting spin-flip systems. The first setting is a $d$-dimensional Ehrenfest model. \smallskip \textbf{Generalized Ehrenfest model in $d$-dimensions.} Consider $d$-dimensional spins $\sigma = (\sigma(1),\dots,\sigma(n)) \in (\{-1,1\}^d)^n$. For example, we can interpret this as $n$ individuals with $d$ types, each type being either $-1$ or $1$. For $k \leq n$, we denote the $i$-th coordinate of $\sigma(k)$ by $\sigma_i(k)$. Set $x_n = (x_{n,1},\dots,x_{n,d}) \in E_1 := [-1,1]^d$, where $x_{n,i} = x_{n,i}(\sigma) = \frac{1}{n} \sum_{j=1}^n \sigma_i(j)$ is the empirical magnetisation in the $i$-th spin. For later convenience, denote by $E_{1,n}$ the discrete subspace of $E_1$ which is the image of $(\{-1,1\}^d)^n$ under the map $\sigma \mapsto x_n(\sigma)$. The spins evolve according to mean-field Markovian dynamics with generator $\cA_n$: \begin{multline*} \cA_n f(\sigma) = \sum_{i = 1}^d \sum_{j = 1}^n \bONE_{\{\sigma_i(j) = -1\}} r_{n,+}^i(x_n(\sigma)) \left[f(\sigma^{i,j}) - f(\sigma)\right] \\ + \sum_{i = 1}^d \sum_{j = 1}^n \bONE_{\{\sigma_i(j) = 1\}} r_{n,-}^i(x_n(\sigma)) \left[f(\sigma^{i,j}) - f(\sigma)\right].
\end{multline*} The configuration $\sigma^{i,j}$ is obtained by flipping the $i$-th coordinate of the $j$-th spin. The functions $r_{n,+}^i,r_{n,-}^i$ are non-negative and represent the rates at which the $i$-th coordinate of a spin flips from $-1$ to $1$ or vice versa. The empirical magnetisation $x_n$ is itself Markovian, with generator \begin{multline*} A_n f(x) = \sum_{i = 1}^d \Bigg\{ n \frac{1-x_i}{2}r_{n,+}^i(x) \left[f\left(x + \frac{2}{n}e_i\right) - f(x) \right] \\ + n \frac{1+x_i}{2}r_{n,-}^i(x)\left[f\left(x - \frac{2}{n}e_i\right) - f(x) \right] \Bigg\}, \end{multline*} where $e_i$ is the vector with a $1$ in the $i$-th component and $0$'s elsewhere. Under suitable conditions on the rates $r_{n,+}^i$ and $r_{n,-}^i$, we will derive a large deviation principle for the trajectory $\{x_n(t)\}_{t \geq 0}$ in the Skorokhod space $D_{E_1}(\bR^+)$ of right-continuous $E_1$-valued paths that have left limits. \smallskip \textbf{Systems of Glauber type with $d$ states.} We will also study the large deviation behaviour of $n$ copies of a Markov process on $\{1,\dots,d\}$ that evolve under the influence of some mean-field interaction. Here $\sigma = (\sigma(1),\dots,\sigma(n)) \in \{1,\dots,d\}^n$ and the empirical distribution $\mu$ is given by $\mu_n(\sigma) = \frac{1}{n} \sum_{i \leq n} \delta_{\sigma(i)}$, which takes values in \begin{equation*} E_{2,n} := \left\{\mu \in \cP(E) \, \middle| \, \mu = \frac{1}{n} \sum_{i=1}^n \delta_{x_i}, \text{ for some } x_i \in \{1,\dots,d\} \right\}. \end{equation*} Of course, this set can be seen as a discrete subset of $E_2 := \cP(\{1,\dots,d\}) = \{\mu \in \bR^d \, | \, \mu_i \geq 0, \sum_i \mu_i = 1\}$.
We take some $n$-dependent family of jump kernels $r_n : \{1,\dots, d\}\times\{1,\dots, d\}\times E_{2,n} \rightarrow \bR^+$ and define Markovian evolutions for $\sigma$ by \begin{equation*} \cA_nf(\sigma(1),\dots,\sigma(n)) = \sum_{i=1}^n \sum_{b = 1}^d r_n\left(\sigma(i),b, \frac{1}{n} \sum_{j=1}^n \delta_{\sigma(j)}\right) \left[f(\sigma^{i,b}) - f(\sigma) \right], \end{equation*} where $\sigma^{i,b}$ is the configuration obtained from $\sigma$ by changing the $i$-th coordinate to $b$. Again, we have an effective evolution for $\mu_n$, which is governed by the generator \begin{equation*} A_n f(\mu) = n \sum_{a,b} \mu(a)r_n(a,b,\mu)\left[f\left(\mu - n^{-1}\delta_a + n^{-1} \delta_b\right) - f(\mu)\right]. \end{equation*} As in the first model, we will prove, under suitable conditions on the jump kernels $r_n$, a large deviation principle in $n$ for $\{\mu_n(t)\}_{t \geq 0}$ in the Skorokhod space $D_{E_2}(\bR^+)$. \subsection{Large deviation principles} \label{section:main_ldp} The main results in this paper are the two large deviation principles for the two sets of models introduced above. To be precise, we say that the sequence $x_n \in D_{E_1}(\bR^+)$, or in the second case $\mu_n \in D_{E_2}(\bR^+)$, satisfies the large deviation principle with rate function $I : D_{E_1}(\bR^+) \rightarrow [0,\infty]$ if $I$ is lower semi-continuous and the following two inequalities hold: \begin{enumerate}[(a)] \item For all closed sets $G \subseteq D_{E_1}(\bR^+)$, we have \begin{equation*} \limsup_{n \rightarrow \infty} \frac{1}{n} \log \PR[\{x_n(t)\}_{t \geq 0} \in G] \leq - \inf_{\gamma \in G} I(\gamma). \end{equation*} \item For all open sets $U \subseteq D_{E_1}(\bR^+)$, we have \begin{equation*} \liminf_{n \rightarrow \infty} \frac{1}{n} \log \PR[\{x_n(t)\}_{t \geq 0} \in U] \geq - \inf_{\gamma \in U} I(\gamma). \end{equation*} \end{enumerate} For the definition of the Skorokhod topology on $D_{E_1}(\bR^+)$, see for example \cite{EK86}.
We say that $I$ is \textit{good} if the level sets $I^{-1}[0,a]$ are compact for all $a \geq 0$. \smallskip For a trajectory $\gamma \in D_{E_1}(\bR^+)$, we say that $\gamma \in \cA\cC$ if the trajectory is absolutely continuous. For the $d$-dimensional Ehrenfest model, we have the following result. \begin{theorem} \label{theorem:ldp_ehrenfest_dd} Suppose that there exists a family of continuous functions $v_+^i, v_-^i : E_1 \rightarrow \bR^+$, $1 \leq i \leq d$, such that \begin{equation} \label{eqn:main_convergence_condition_for_ldp_ehrenfest} \lim_{n \rightarrow \infty} \sup_{x \in E_{1,n}} \sum_{i=1}^d \left|\frac{1-x_i}{2}r_{n,+}^i(x) - v_+^i(x)\right| + \left|\frac{1+x_i}{2}r_{n,-}^i(x) - v_-^i(x)\right|= 0. \end{equation} Suppose that for every $i$, the functions $v_+^i$ and $v_-^i$ satisfy the following. The rate $v_+^i$ is either identically zero or satisfies the following set of conditions. \begin{enumerate}[(a)] \item $v_+^i(x) > 0$ if $x_i \neq 1$. \item For $z \in [-1,1]^d$ such that $z_i = 1$, we have $v_+^i(z) = 0$, and for every such $z$ there exists a neighbourhood $U_z$ of $z$ on which there exists a decomposition $v_+^i(x) = v_{+,z,\dagger}^i(x_i) v_{+,z,\ddagger}^i(x)$, where $v_{+,z,\dagger}^i$ is decreasing and where $v_{+,z,\ddagger}^i$ is continuous and satisfies $v_{+,z,\ddagger}^i(z) \neq 0$. \end{enumerate} The rate $v_-^i$ is either identically zero or satisfies the following set of conditions. \begin{enumerate}[(a)] \item $v_-^i(x) > 0$ if $x_i \neq -1$. \item For $z \in [-1,1]^d$ such that $z_i = -1$, we have $v_-^i(z) = 0$, and for every such $z$ there exists a neighbourhood $U_z$ of $z$ on which there exists a decomposition $v_-^i(x) = v_{-,z,\dagger}^i(x_i) v_{-,z,\ddagger}^i(x)$, where $v_{-,z,\dagger}^i$ is increasing and where $v_{-,z,\ddagger}^i$ is continuous and satisfies $v_{-,z,\ddagger}^i(z) \neq 0$. \end{enumerate} Furthermore, suppose that $\{x_n(0)\}_{n \geq 1}$ satisfies the large deviation principle on $E_1$ with good rate function $I_0$.
Then, $\{x_n\}_{n \geq 1}$ satisfies the large deviation principle on $D_{E_1}(\bR^+)$ with good rate function $I$ given by \begin{equation*} I(\gamma) = \begin{cases} I_0(\gamma(0)) + \int_0^\infty \cL(\gamma(s),\dot{\gamma}(s)) \dd s & \text{if } \gamma \in \cA\cC, \\ \infty & \text{otherwise,} \end{cases} \end{equation*} where the \textit{Lagrangian} $\cL : E_1 \times \bR^d \rightarrow \bR$ is given by the Legendre transform $\cL(x,v) = \sup_{p \in \bR^d} \left\{\ip{p}{v} - H(x,p)\right\}$ of the \textit{Hamiltonian} $H: E_1 \times \bR^d \rightarrow \bR$, defined by \begin{equation} \label{eqn:def_Hamiltionian_Ehrenfest} H(x,p) = \sum_{i = 1}^d v_+^i(x) \left[e^{2p_i} - 1 \right] + v_-^i(x) \left[e^{-2p_i} - 1 \right]. \end{equation} \end{theorem} \begin{remark} \label{remark:singular_rates} Note that the functions $v_+^i$ and $v_-^i$ do not have to be of the form $v_+^i(x) = \frac{1-x_i}{2} r_+^i(x)$ for some bounded function $r_+^i$. This we call singular behaviour, as such a rate cannot be obtained via the large deviation principle for independent particles, Varadhan's lemma and the contraction principle as in \cite{Le95} or \cite{DPdH96}. \end{remark} \begin{theorem} \label{theorem:ldp_mean_field_jump_process} Suppose there exists a continuous function $v : \{1,\dots,d\}\times\{1,\dots,d\} \times E_2 \rightarrow \bR^+$ such that for all $a,b \in \{1,\dots, d\}$, we have \begin{equation} \label{eqn:property_of_jump_kernels} \lim_{n \rightarrow \infty} \sup_{\mu \in E_{2,n}} \left|\mu(a) r_n(a,b,\mu) - v(a,b,\mu)\right| = 0. \end{equation} Suppose that for each $a,b$, the map $\mu \mapsto v(a,b,\mu)$ is either identically equal to zero or satisfies the following two properties. \begin{enumerate}[(a)] \item $v(a,b,\mu) > 0$ for all $\mu$ such that $\mu(a) > 0$.
\item For $\nu$ such that $\nu(a) = 0$, there exists a neighbourhood $U_\nu$ of $\nu$ on which there exists a decomposition $v(a,b,\mu) = v_{\nu,\dagger}(a,b,\mu(a)) v_{\nu,\ddagger}(a,b,\mu)$ such that $v_{\nu,\dagger}$ is increasing in the third coordinate and such that $v_{\nu,\ddagger}(a,b,\cdot)$ is continuous and satisfies $v_{\nu,\ddagger}(a,b,\nu) \neq 0$. \end{enumerate} Additionally, suppose that $\{\mu_n(0)\}_{n \geq 1}$ satisfies the large deviation principle on $E_2$ with good rate function $I_0$. Then, $\{\mu_n\}_{n \geq 1}$ satisfies the large deviation principle on $D_{E_2}(\bR^+)$ with good rate function $I$ given by \begin{equation*} I(\gamma) = \begin{cases} I_0(\gamma(0)) + \int_0^\infty \cL(\gamma(s),\dot{\gamma}(s)) \dd s & \text{if } \gamma \in \cA\cC, \\ \infty & \text{otherwise,} \end{cases} \end{equation*} where $\cL : E_2 \times \bR^d \rightarrow \bR^+$ is the Legendre transform of $H : E_2 \times \bR^d \rightarrow \bR$ given by \begin{equation} \label{eqn:def_Hamiltionian_jump} H(\mu,p) = \sum_{a,b} v(a,b,\mu)\left[e^{p_b - p_a} -1\right]. \end{equation} \end{theorem} \subsection{The comparison principle} \label{section:comparison_priniple_main_results} The main results in this paper are the two large deviation principles stated above. However, the key step in the proof of these principles is the verification of the comparison principle for a set of Hamilton-Jacobi equations. As these results are of independent interest, we state them here as well, and postpone the explanation of why these equations are relevant for the large deviation principles. We start with some definitions. For $E$ equal to $E_1$ or $E_2$, let $H : E \times \bR^d \rightarrow \bR$ be some continuous map. For $\lambda > 0$ and $h \in C(E)$, set $F_{\lambda,h} : E \times \bR \times \bR^d \rightarrow \bR$ by \begin{equation*} F_{\lambda,h}(x,a,p) = a - \lambda H(x,p) - h(x).
\end{equation*} We will solve the \textit{Hamilton-Jacobi} equation \begin{equation} \label{eqn:differential_equation_intro} F_{\lambda,h}(x,f(x),\nabla f(x)) = f(x) - \lambda H(x, \nabla f(x)) - h(x) = 0 \qquad x \in E, \end{equation} in the \textit{viscosity} sense. \begin{definition} \label{definition:viscosity} We say that $u$ is a \textit{(viscosity) subsolution} of equation \eqref{eqn:differential_equation_intro} if $u$ is bounded, upper semi-continuous and if for every $f \in C^{1}(E)$ and $x \in E$ such that $u - f$ has a maximum at $x$, we have \begin{equation*} F_{\lambda,h}(x,u(x),\nabla f(x)) \leq 0. \end{equation*} We say that $u$ is a \textit{(viscosity) supersolution} of equation \eqref{eqn:differential_equation_intro} if $u$ is bounded, lower semi-continuous and if for every $f \in C^{1}(E)$ and $x \in E$ such that $u - f$ has a minimum at $x$, we have \begin{equation*} F_{\lambda,h}(x,u(x),\nabla f(x)) \geq 0. \end{equation*} We say that $u$ is a \textit{(viscosity) solution} of equation \eqref{eqn:differential_equation_intro} if it is both a sub- and a supersolution. \end{definition} \begin{definition} We say that equation \eqref{eqn:differential_equation_intro} satisfies the \textit{comparison principle} if for every subsolution $u$ and supersolution $v$ we have $u \leq v$. \end{definition} Note that if the comparison principle is satisfied, then a viscosity solution is unique. \begin{theorem} \label{theorem:comparison_ehrenfest_ddim} Suppose that $H : E_1 \times \bR^d \rightarrow \bR$ is given by \eqref{eqn:def_Hamiltionian_Ehrenfest} and that the family of functions $v_+^i, v_-^i : E_1 \rightarrow \bR^+$, $1 \leq i \leq d$, satisfies the conditions of Theorem \ref{theorem:ldp_ehrenfest_dd}. Then, for every $\lambda > 0$ and $h \in C(E_1)$, the comparison principle holds for $f(x) - \lambda H(x,\nabla f(x)) - h(x) = 0$.
\end{theorem} \begin{theorem}\label{theorem:comparison_mean_field_jump} Suppose that $H : E_2 \times \bR^d \rightarrow \bR$ is given by \eqref{eqn:def_Hamiltionian_jump} and that the function $v : \{1,\dots,d\}\times\{1,\dots,d\} \times E_2 \rightarrow \bR^+$ satisfies the conditions of Theorem \ref{theorem:ldp_mean_field_jump_process}. Then, for every $\lambda > 0$ and $h \in C(E_2)$, the comparison principle holds for $f(\mu) - \lambda H(\mu,\nabla f(\mu)) - h(\mu) = 0$. \end{theorem} The main consequence of the comparison principle for the Hamilton-Jacobi equations stems from the fact, as we will see below, that the operator $H$ generates a strongly continuous contraction semigroup on $C(E)$. \smallskip The proof of the large deviation principle is, in a sense, a problem of semigroup convergence. At least for linear semigroups, it is well known that semigroup convergence can be proven via the convergence of their generators. The main issue in this approach is to prove that the limiting generator $H$ generates a semigroup. It is exactly this issue that the comparison principle takes care of. Hence, the independent interest of the comparison principle comes from the fact that we have semigroup convergence whatever the approximating semigroups are, as long as their generators converge to $H$, i.e.\ this holds not just for the specifically chosen approximating semigroups that we consider in Section \ref{section:LDP_via_HJequation}. \subsection{A Lyapunov function for the limiting dynamics} \label{subsection:Htheorems} As a corollary to the large deviation results, we show how to obtain a Lyapunov function for the solutions of \begin{equation} \label{eqn:limiting_differential_equation} \dot{x}(t) = \mathbf{F}(x(t)), \end{equation} where $\mathbf{F}(x) := H_p(x,0)$ for a Hamiltonian as in \eqref{eqn:def_Hamiltionian_jump} or \eqref{eqn:def_Hamiltionian_Ehrenfest}. Here $H_p(x,p)$ is interpreted as the vector of partial derivatives of $H$ in the second coordinate.
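To make $\mathbf{F}$ concrete, consider the Ehrenfest Hamiltonian \eqref{eqn:def_Hamiltionian_Ehrenfest} with $d = 1$ and write $v_\pm := v_\pm^1$. The following computation is an illustrative sketch, assuming $v_+(x) > 0$ and $v_-(x) > 0$ at the point under consideration; it is not needed for the results.

```latex
% Sketch: explicit vector field and Lagrangian for d = 1.
% From H(x,p) = v_+(x)(e^{2p} - 1) + v_-(x)(e^{-2p} - 1) we get
\begin{equation*}
\mathbf{F}(x) = H_p(x,0) = 2\left[v_+(x) - v_-(x)\right].
\end{equation*}
% The Legendre transform can also be made explicit: with
% u(x,v) := \frac{v + \sqrt{v^2 + 16 v_+(x) v_-(x)}}{4 v_+(x)},
% the supremum in \cL(x,v) = \sup_p \{pv - H(x,p)\} is attained at
% p = \frac{1}{2} \log u(x,v), which gives
\begin{equation*}
\cL(x,v) = \frac{v}{2} \log u(x,v) + v_+(x) + v_-(x)
  - \frac{1}{2}\sqrt{v^2 + 16 \, v_+(x) v_-(x)}.
\end{equation*}
% Consistency check: for v = \mathbf{F}(x) one finds u(x,v) = 1 and hence
% \cL(x,\mathbf{F}(x)) = 0, so the flow of \mathbf{F} has zero cost.
```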
We will see in Example \ref{example:ehrenfest_non_unique_solution} that the trajectories that solve this differential equation are the trajectories with $0$ cost. Additionally, the limiting operator $(A,C^1(E))$ obtained by \begin{equation*} \sup_{x \in E_n \cap K} |A_nf(x) - Af(x)| \rightarrow 0 \end{equation*} for all $f \in C^1(E)$ and compact sets $K \subseteq E$ has the form $Af(x) = \ip{\nabla f(x)}{\mathbf{F}(x)}$ for the same vector field $\mathbf{F}$. This implies that the $0$-cost trajectories are solutions to the McKean-Vlasov equation \eqref{eqn:limiting_differential_equation}. Solutions to \eqref{eqn:limiting_differential_equation} are not necessarily unique, see Example \ref{example:ehrenfest_non_unique_solution}. Uniqueness holds for example under a one-sided Lipschitz condition: if there exists $M > 0$ such that $\ip{\mathbf{F}(x) - \mathbf{F}(y)}{x-y} \leq M |x-y|^2$ for all $x,y \in E$. \smallskip For non-interacting systems, it is well known that the relative entropy with respect to the stationary measure is a Lyapunov function for solutions of \eqref{eqn:limiting_differential_equation}. The large deviation principle explains this fact and gives a method to obtain a suitable Lyapunov function, also for interacting dynamics. \begin{proposition} \label{proposition:Htheorem} Suppose the conditions for Theorem \ref{theorem:ldp_ehrenfest_dd} or Theorem \ref{theorem:ldp_mean_field_jump_process} are satisfied. Suppose there exist measures $\nu_n \in \cP(E_n) \subseteq \cP(E)$ that are invariant for the dynamics generated by $A_n$. Furthermore, suppose that the measures $\nu_n$ satisfy the large deviation principle on $E$ with good rate function $I_0$. Then $I_0$ is non-increasing along any solution of $\dot{x}(t) = \mathbf{F}(x(t))$. \end{proposition} Note that we do not assume that \eqref{eqn:limiting_differential_equation} has a unique solution for a given starting point.
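The mechanism behind Proposition \ref{proposition:Htheorem} can be outlined in two lines; the following is a heuristic sketch under the assumptions of the proposition, not the actual proof.

```latex
% Heuristic sketch. By invariance of \nu_n and the contraction principle
% applied to the path-space LDP, the rate function of the time-t marginal of
% the stationary process is again I_0, which yields the inequality
\begin{equation*}
I_0(y) \leq \inf \left\{ I_0(\gamma(0))
  + \int_0^t \cL(\gamma(s),\dot{\gamma}(s)) \dd s
  \, \middle| \, \gamma \in \cA\cC, \, \gamma(t) = y \right\}.
\end{equation*}
% A solution x(\cdot) of \dot{x} = \mathbf{F}(x) satisfies
% \cL(x(s),\dot{x}(s)) = 0, since \dot{x}(s) = H_p(x(s),0). Taking
% \gamma = x restricted to [0,t] therefore gives
\begin{equation*}
I_0(x(t)) \leq I_0(x(0)) + \int_0^t \cL(x(s),\dot{x}(s)) \dd s = I_0(x(0)).
\end{equation*}
```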
\subsection{Examples} \label{section:examples_main_results} We give a series of examples to show the extent of Theorems \ref{theorem:ldp_ehrenfest_dd} and \ref{theorem:ldp_mean_field_jump_process}. For the Ehrenfest model, we start with the basic case of spins flipping under the influence of a mean-field potential. \begin{example} \label{example:Ehrenfest_with_potential} To be precise, fix some continuously differentiable $V : [-1,1]^d \rightarrow \bR$ and set for every $n \geq 1$ and $i \in \{1,\dots,d\}$ the rates \begin{align*} r_{n,+}^i(x) & = \exp\left\{- n 2^{-1} \left(V\left(x + \frac{2}{n} e_i\right) - V(x) \right)\right\}, \\ r_{n,-}^i(x) & = \exp\left\{- n 2^{-1} \left(V\left(x - \frac{2}{n} e_i\right) - V(x) \right)\right\}. \end{align*} The limiting objects $v_+^i$ and $v_-^i$ are given by \begin{equation*} v_+^i(x) = \frac{1-x_i}{2} e^{-\nabla_i V(x)}, \qquad v_-^i(x) = \frac{1+x_i}{2} e^{\nabla_i V(x)}, \end{equation*} which already have the decomposition required in the conditions of Theorem \ref{theorem:ldp_ehrenfest_dd}. For example, condition (b) for $v_+^i$ is satisfied by \begin{equation*} v_{+,z,\dagger}^i(x_i) := \frac{1-x_i}{2}, \qquad v_{+,z,\ddagger}^i(x) := e^{-\nabla_i V(x)}. \end{equation*} \end{example} For $d = 1$, we give two extra notable examples. The first one exhibits unbounded jump rates for the individual spins if the empirical magnetisation is close to one of the boundary points. The second example shows a case where we have multiple trajectories $\gamma$ with $I(\gamma) = 0$ that start from $x_0 = 0$. As $d = 1$, we drop all sub- and super-scripts $i \in \{1,\dots,d\}$ for these two examples. \begin{example} \label{example:ehrenfest_diverging_jump_rates} Consider the one-dimensional Ehrenfest model with \begin{equation*} r_{n,+}(x) = \frac{2}{\sqrt{1-x}} \wedge n , \qquad r_{n,-}(x) = \frac{2}{\sqrt{1+x}} \wedge n. \end{equation*} Set $v_+(x) = \sqrt{1-x}$, $v_-(x) = \sqrt{1+x}$.
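The monotonicity needed to apply Dini's theorem below can be checked directly; the following side computation is a sketch and not part of the original example.

```latex
% Sketch: pointwise monotone convergence needed for Dini's theorem.
% For fixed x \in [-1,1],
\begin{equation*}
\frac{1-x}{2} r_{n,+}(x)
  = \frac{1-x}{2}\left(\frac{2}{\sqrt{1-x}} \wedge n\right)
  = \sqrt{1-x} \wedge \frac{n(1-x)}{2},
\end{equation*}
% which is non-decreasing in n and converges pointwise to the continuous
% limit \sqrt{1-x} on the compact set [-1,1]. Dini's theorem upgrades this
% to uniform convergence; the term with r_{n,-} is handled symmetrically.
```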
By Dini's theorem, we have \begin{equation*} \lim_{n \rightarrow \infty} \sup_{x \in [-1,1]} \left|\frac{1-x}{2} r_{n,+}(x) - v_+(x)\right| = 0, \qquad \lim_{n \rightarrow \infty} \sup_{x \in [-1,1]} \left|\frac{1+x}{2} r_{n,-}(x) - v_-(x)\right| = 0. \end{equation*} Additionally, conditions (a) and (b) of Theorem \ref{theorem:ldp_ehrenfest_dd} are satisfied, e.g.\ take $v_{+,1,\dagger}(x) = \sqrt{1-x}$, $v_{+,1,\ddagger}(x) = 1$. \end{example} \begin{example} \label{example:ehrenfest_non_unique_solution} Consider the one-dimensional Ehrenfest model with some rates $r_{n,+}$, $r_{n,-}$ and functions $v_+(x) > 0, v_-(x) > 0$ such that $\frac{1}{2}(1-x)r_{n,+}(x) \rightarrow v_+(x)$ and $\frac{1}{2}(1+x)r_{n,-}(x) \rightarrow v_-(x)$ uniformly in $x \in [-1,1]$. Now suppose that there is a neighbourhood $U$ of $0$ on which $v_+,v_-$ have the form \begin{equation*} v_+(x) = \begin{cases} 1+ \sqrt{x} & x \geq 0, \\ 1 & x < 0, \end{cases} \qquad \qquad v_-(x) = 1. \end{equation*} Consider the family of trajectories $t \mapsto \gamma_a(t)$, $a \geq 0$, defined by \begin{equation*} \gamma_a(t) := \begin{cases} 0 & \text{for } t \leq a, \\ (t-a)^2 & \text{for } t \geq a. \end{cases} \end{equation*} Let $T > 0$ be small enough such that $\gamma_0(t) \in U$, and hence $\gamma_a(t) \in U$, for all $t \leq T$. A straightforward calculation yields $\int_0^T \cL(\gamma_a(t),\dot{\gamma}_a(t)) \dd t = 0$ for all $a \geq 0$. So we find multiple trajectories starting at $0$ that have zero Lagrangian cost. Indeed, note that $\cL(x,v) = 0$ is equivalent to $v = H_p(x,0) = 2\left[v_+(x) - v_-(x) \right] = 2\sqrt{x}$. This yields that, at least in $U$, the trajectories with zero Lagrangian cost are exactly those that solve \begin{equation*} \dot{\gamma}(t) = 2 \sqrt{\gamma(t)}, \end{equation*} which is the well-known example of a differential equation that allows for multiple solutions.
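The straightforward calculation mentioned above can be sketched as follows; the computation assumes the form of $v_\pm$ on $U$ given in the example.

```latex
% Sketch: each \gamma_a solves \dot{\gamma} = 2\sqrt{\gamma} on [0,T].
% For t < a we have \gamma_a(t) = 0 and \dot{\gamma}_a(t) = 0 = 2\sqrt{0}.
% For t > a we have \gamma_a(t) = (t-a)^2, so
\begin{equation*}
\dot{\gamma}_a(t) = 2(t-a) = 2\sqrt{(t-a)^2} = 2\sqrt{\gamma_a(t)}.
\end{equation*}
% Hence \dot{\gamma}_a(t) = H_p(\gamma_a(t),0) for almost every t \leq T,
% which is equivalent to \cL(\gamma_a(t),\dot{\gamma}_a(t)) = 0, and thus
% \int_0^T \cL(\gamma_a(t),\dot{\gamma}_a(t)) \dd t = 0.
```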
\end{example} We end with an example for Theorem \ref{theorem:ldp_mean_field_jump_process} and Proposition \ref{proposition:Htheorem} in the spirit of Example \ref{example:Ehrenfest_with_potential}. \begin{example}[Glauber dynamics for the Potts-model] \label{example:Gibbs_dynamics} Fix some continuously differentiable function $V : \bR^d \rightarrow \bR$. Define the Gibbs measures \begin{equation*} \nu_n(\dd \sigma) := \frac{e^{-V(\mu_n(\sigma))}}{Z_n} P^{\otimes,n}(\dd \sigma) \end{equation*} on $\{1,\dots,d\}^n$, where $P^{\otimes,n}$ is the $n$-fold product measure of the uniform measure $P$ on $\{1,\dots,d\}$ and where $Z_n$ are normalizing constants. Let $S(\mu \, | \, P)$ denote the relative entropy of $\mu \in \cP(\{1,\dots,d\})$ with respect to $P$: \begin{equation*} S(\mu \, | \, P) = \sum_a \mu(a) \log (d \mu(a)). \end{equation*} By Sanov's theorem and Varadhan's lemma, the empirical measures under the laws $\nu_n$ satisfy a large deviation principle with rate function $I_0(\mu) = S(\mu \, | \, P) + V(\mu)$. Now fix some function $r : \{1,\dots,d\}\times\{1,\dots,d\} \rightarrow \bR^+$. Set \begin{equation*} r_n(a,b,\mu) = r(a,b)\exp\left\{- n 2^{-1}\left(V\left(\mu - n^{-1}\delta_a + n^{-1} \delta_b\right) - V(\mu)\right) \right\}. \end{equation*} As $n$ goes to infinity, we have uniform convergence of $\mu(a)r_n(a,b,\mu)$ to \begin{equation*} v(a,b,\mu) := \mu(a) r(a,b) \exp\left\{\frac{1}{2} \nabla_a V(\mu) - \frac{1}{2} \nabla_b V(\mu) \right\}, \end{equation*} where $\nabla_a V(\mu)$ is the derivative of $V$ in the $a$-th coordinate. As in Example \ref{example:Ehrenfest_with_potential}, condition (b) of Theorem \ref{theorem:ldp_mean_field_jump_process} is satisfied by using the obvious decomposition. \smallskip By Proposition \ref{proposition:Htheorem}, we obtain that $S(\mu \, | \, P) + V(\mu)$ is a Lyapunov function for \begin{equation*} \dot{\mu}(a) = \sum_b \left[v(b,a,\mu) - v(a,b,\mu)\right] \qquad a \in \{1,\dots,d\}.
\end{equation*} \end{example} \subsection{Discussion and comparison to the existing literature} \label{section:comparison_to_results_in_literature} We discuss our results in the context of the existing literature that covers our situation. Additionally, we consider a few cases where the large deviation principle (LDP) is proven for diffusion processes, because the proof techniques could possibly be applied in this setting. \textbf{LDP: Approach via non-interacting systems, Varadhan's lemma and the contraction principle.} In \cite{Le95,DPdH96,BoSu12}, the first step towards the LDP of the trajectory of some mean-field statistic of $n$ interacting particles is the LDP for non-interacting particles on some large product space, obtained via Sanov's theorem. Varadhan's lemma then gives the LDP in this product space for interacting particles, after which the contraction principle gives the LDP on the desired trajectory space. In \cite{Le95,DPdH96}, the set-up is more general than ours in the sense that in \cite{Le95} the behaviour of the particles depends on their spatial location, and in \cite{DPdH96} the behaviour of a particle depends on some external random variable. On the other hand, systems as in Example \ref{example:ehrenfest_diverging_jump_rates} fall outside the conditions imposed in these three papers, even if we disregard spatial dependence or external randomness. The approach via Varadhan's lemma, which needs control over the size of the perturbation, does not work, at least naively, in the situation where the jump rate for individual particles diverges to $\infty$, or converges to $0$, when the mean is close to the boundary, see Remark \ref{remark:singular_rates}. \smallskip \textbf{LDP: Explicit control on the probabilities.} For another approach considering interacting spins that have a spatial location, see \cite{Com87}.
The jump rates are taken to be explicit and the large deviation principle is proven via explicit control on the Radon-Nikodym derivatives. This method should in principle also work in the case of singular $v$. The approach in this paper via the generators $H_n$ avoids arguments based on explicit control. This is an advantage for processes where the functions $r_n$ and $v$ are not very regular. Also in the classical Freidlin-Wentzell approach \cite{FW98} for dynamical systems with Gaussian noise, the explicit form of the Radon-Nikodym derivatives is used to prove the LDP. \smallskip \textbf{LDP: Direct comparison to a process of independent particles.} The main reference concerning large deviations for the trajectory of the empirical measure for interacting diffusion processes on $\bR^d$ is \cite{DG87}. In this paper, the large deviation principle is also first established for non-interacting particles. An explicit rate function is obtained by showing that the desired rate function lies between the rate function obtained via Sanov's theorem and the contraction principle and the one obtained via the projective limit approach. The large deviation principle for interacting particles is then obtained by comparing the interacting process with a non-interacting process that has a suitably chosen drift. For related approaches, see \cite{Fe94b} for large deviations of interacting jump processes on $\bN$, where the interaction is unbounded and depends on the average location of the particles. See \cite{DjKa95} for mean-field jump processes on $\bR^d$. Again, the comparison with non-interacting processes would fail in our setting due to the singular interaction terms. \smallskip \textbf{LDP: Stochastic control.} A more recent approach using stochastic control and weak convergence methods has been proposed in the context of both jump and diffusion processes in \cite{BuDuMa11,BDF12}. A direct application of the results in \cite{BuDuMa11} fails for jump processes in the setting of singular behaviour at the boundary.
\smallskip \textbf{LDP: Proof via operator convergence and the comparison principle.} Regarding our approach based on the comparison principle, see \cite[Section 13.3]{FK06} for an approach based on the comparison principle in the setting of \cite{DG87} and \cite{BDF12}. See \cite{DFL11} for an example of large deviations of a diffusion process on $(0,\infty)$ with vanishing diffusion term and singular behaviour at the boundary. The methods to prove the comparison principle in Sections 9.2 and 9.3 of \cite{FK06} do not apply in our setting due to the different nature of our Hamiltonians. \smallskip \textbf{LDP: Comparison of the approaches.} The method of obtaining exponential tightness in \cite{FK06}, and thus the one employed in this paper, is via density of the domain of the limiting generator $(H,\cD(H))$. As in the theory of weak convergence, functions $f \in \cD(H)$ in the domain of the generator, and functions $f_n \in \cD(H_n)$ that converge to $f$ uniformly, can be used to bound the fluctuations in the Skorokhod space. This method is similar to the approaches taken in \cite{Co89,FW98,DG87}. The approach using operator convergence is based on a projective limit theorem for the Skorokhod space. As we have exponential tightness on the Skorokhod space, it suffices to prove the large deviation principle for all finite-dimensional distributions. This is done via the convergence of the logarithmic moment generating functions for the finite-dimensional distributions. The Markov property reduces this to the convergence of the logarithmic moment generating function at time $0$ and convergence of the conditional moment generating functions, which form a semigroup $V_n(t)f(x) = \frac{1}{n} \log \bE[e^{nf(X_n(t))} \, | \, X_n(0) = x]$. Thus, the problem is reduced to proving convergence of semigroups $V_n(t)f \rightarrow V(t)f$. As in the theory of linear semigroups, this comes down to two steps. First, one proves convergence of the generators $H_n \rightarrow H$.
Second, one shows that the limiting operator generates a semigroup. The verification of the comparison principle implies that the domain of the limiting operator is sufficiently large to pin down a limiting semigroup. This can be compared to the same problem for linear semigroups and the martingale problem. If the domain of a limiting linear generator is too small, multiple solutions to the martingale problem can be found, giving rise to multiple semigroups, see Chapter 12 in \cite{SV79} or Section 4.5 in \cite{EK86}. The convergence of $V_n(t)f(x) \rightarrow V(t)f(x)$ uniformly in $x$ corresponds to having sufficient control on the Doob $h$-transforms corresponding to the changes of measure \begin{equation*} \frac{\dd \PR_{n,x}^{f,t}}{\dd \PR_{n,x}}(X_n) = \exp\left\{nf(X_n(t))\right\}, \end{equation*} where $\PR_{n,x}$ is the measure corresponding to the process $X_n$ started in $x$ at time $0$. An argument based on the projective limit theorem and control on the Doob $h$-transforms for independent particles is also used in \cite{DG87}, whereas the methods in \cite{Co89,FW98} are based on direct calculation of the probability of being close to a target trajectory. \smallskip \textbf{Large deviations for large excursions in large time.} A notable second area of comparison is the study of large excursions in large time in the context of queuing systems, see e.g.\ \cite{DuIiSo90,DuEl95,DuAt99} and references therein. Here, it is shown that the rate functions themselves, varying in space and time, are solutions to a Hamilton-Jacobi equation. As in our setting, one of the main problems is the verification of the comparison principle. The notable difficulty in these papers is a discontinuity of the Hamiltonian at the boundary, but in the interior the rates are uniformly bounded away from infinity and zero.
\smallskip \textbf{Lyapunov functions.} In \cite{BuDuFiRa15a,BuDuFiRa15b}, Lyapunov functions are obtained for the McKean-Vlasov equation corresponding to interacting Markov processes in a setting similar to that of Theorem \ref{theorem:ldp_mean_field_jump_process}. Their discussion goes well beyond Proposition \ref{proposition:Htheorem}, which is perhaps best compared to Theorem 4.3 in \cite{BuDuFiRa15b}. However, the proof of Proposition \ref{proposition:Htheorem} is interesting in its own right, as it gives an intuitive explanation for finding a relative entropy as a Lyapunov functional and is not based on explicit calculations. In particular, the proof of Proposition \ref{proposition:Htheorem} in principle works in any setting where the path-space large deviation principle holds. \section{Large deviation principle via an associated Hamilton-Jacobi equation} \label{section:LDP_via_HJequation} In this section, we summarize the main results of \cite{FK06}. Additionally, we verify the main conditions of their results, except for the comparison principle for an associated Hamilton-Jacobi equation. This verification needs to be done for each individual model separately, and this is the main contribution of this paper. We verify the comparison principle for our two models in Section \ref{section:viscosity_solutions}. \subsection{Operator convergence} We start by recalling some results from \cite{FK06}. Let $E_n$ and $E$ denote either of the pairs of spaces $E_{1,n}, E_1$ or $E_{2,n}, E_2$. Furthermore, denote by $C(E)$ the continuous functions on $E$ and by $C^1(E)$ the functions that are continuously differentiable on a neighbourhood of $E$ in $\bR^d$. \smallskip Assume that for each $n \in \bN$, we have a jump process $X_n$ on $E_n$, generated by a bounded infinitesimal generator $A_n$. For the two models, this process is either $x_n$ or $\mu_n$.
We denote by $\{S_n(t)\}_{t \geq 0}$ the transition semigroups $S_n(t)f(y) = \bE\left[f(X_n(t)) \, \middle| \, X_n(0) = y \right]$ on $C(E_n)$. Define for each $n$ the exponential semigroup \begin{equation*} V_n(t) f(y) := \frac{1}{n} \log S_n(t) e^{nf}(y) = \frac{1}{n} \log \bE\left[e^{nf(X_n(t))} \, \middle| \, X_n(0) = y \right]. \end{equation*} Feng and Kurtz \cite{FK06} show that the existence of a strongly continuous limiting semigroup $\{V(t)\}_{t \geq 0}$ on $C(E)$, in the sense that for all $f \in C(E)$ and $T \geq 0$ we have \begin{equation} \label{eqn:convergence_for_semigroups} \lim_{n\rightarrow \infty} \sup_{t \leq T} \sup_{x \in E_n} \left| V(t)f(x) - V_n(t)f(x) \right| = 0, \end{equation} allows us to study the large deviation behaviour of the process $X_n$. We will consider this question from the point of view of the generators $H_n$ of $\{V_n(t)\}_{t \geq 0}$, where $H_n f$ is defined as the norm limit of $t^{-1} (V_n(t)f - f)$ as $t \downarrow 0$. Note that $H_n f = n^{-1} e^{-nf} A_n e^{nf}$, which for our first model yields \begin{multline*} H_n f(x) = \sum_{i = 1}^d \Bigg\{ \frac{1-x_i}{2}r_{n,+}^i(x) \left[\exp\left\{n \left(f\left(x + \frac{2}{n}e_i\right) - f(x)\right)\right\} - 1 \right] \\ + \frac{1+x_i}{2}r_{n,-}^i(x)\left[\exp\left\{ n\left(f\left(x - \frac{2}{n}e_i\right) - f(x)\right)\right\} - 1 \right] \Bigg\}. \end{multline*} For our second model, we have \begin{equation*} H_n f(\mu) = \sum_{a,b = 1}^d \mu(a)r_n(a,b,\mu)\left[\exp\left\{n\left(f\left(\mu - n^{-1}\delta_a + n^{-1} \delta_b\right) - f(\mu)\right)\right\} - 1\right].
\end{equation*} In particular, Feng and Kurtz show that, as in the theory of weak convergence of Markov processes, if there exists a limiting operator $(H,\cD(H))$ such that for all $f \in \cD(H)$ \begin{equation} \label{eqn:convergence_condition_Hamiltonians} \lim_{n\rightarrow \infty} \sup_{x \in E_n} \left| Hf(x) - H_n f(x) \right| = 0, \end{equation} and if one can show that $(H,\cD(H))$ generates a semigroup $\{V(t)\}_{t \geq 0}$ on $C(E)$ via the Crandall-Liggett theorem \cite{CL71}, then \eqref{eqn:convergence_for_semigroups} holds. \begin{lemma} \label{lemma:operator_convergence} For either of our two models, assuming \eqref{eqn:main_convergence_condition_for_ldp_ehrenfest} or \eqref{eqn:property_of_jump_kernels}, we find that $H_n f \rightarrow Hf$, in the sense that \eqref{eqn:convergence_condition_Hamiltonians} holds for $f \in C^1(E)$, where $Hf$ is given by $Hf(x) := H(x,\nabla f(x))$ and where $H(x,p)$ is defined in \eqref{eqn:def_Hamiltionian_Ehrenfest} or \eqref{eqn:def_Hamiltionian_jump}. \end{lemma} The proof of the lemma is straightforward using the assumptions and the fact that $f$ is continuously differentiable. \smallskip Thus, the problem is reduced to proving that $(H,C^1(E))$ generates a semigroup. The verification of the conditions of the Crandall-Liggett theorem is in general very hard, or even impossible. Two conditions need to be verified. The first is the \textit{dissipativity} of $H$, which can be checked via the positive maximum principle. The second condition is the \textit{range condition}: one needs to show that for $\lambda > 0$, the range of $(\bONE - \lambda H)$ is dense in $C(E)$. In other words, for $\lambda > 0$ and sufficiently many fixed $h \in C(E)$, we need to solve $f - \lambda H f = h$ with $f \in C^1(E)$. An alternative is to solve this equation in the \textit{viscosity sense}. If a viscosity solution exists and is unique, we denote it by $\tilde{R}(\lambda)h$.
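For completeness, we recall these notions in the form in which they are used below; this is a standard formulation, see e.g. \cite{CIL92}. A bounded upper semi-continuous function $u$ is a \textit{(viscosity) subsolution} of $f - \lambda H f - h = 0$ if, for every $\varphi \in C^1(E)$ and every $x_0 \in E$ at which $u - \varphi$ attains its maximum, we have
\begin{equation*}
u(x_0) - \lambda H(x_0,\nabla \varphi(x_0)) - h(x_0) \leq 0.
\end{equation*}
A bounded lower semi-continuous function $v$ is a \textit{supersolution} if the reverse inequality holds at every minimum $y_0$ of $v - \varphi$, and the \textit{comparison principle} is the statement that $u \leq v$ for every such pair; in particular, it implies uniqueness of viscosity solutions.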
Using these solutions, we can extend the domain of the operator $(H,C^1(E))$ by adding all pairs of the form $(\tilde{R}(\lambda)h, \lambda^{-1}(\tilde{R}(\lambda)h - h))$ to the graph of $H$ to obtain an operator $\hat{H}$ that satisfies the conditions for the Crandall-Liggett theorem. This is part of the content of Theorem \ref{theorem_LDP_viscosity_FK} stated below. As a remark, note that any concept of weak solutions could be used to extend the operator. However, viscosity solutions are special in the sense that the extended operator remains dissipative. The next result is a direct corollary of Theorem 6.14 in \cite{FK06}. \begin{theorem} \label{theorem_LDP_viscosity_FK} For either of our two models, assume that \eqref{eqn:main_convergence_condition_for_ldp_ehrenfest} or \eqref{eqn:property_of_jump_kernels} holds. Additionally, assume that the comparison principle is satisfied for \eqref{eqn:differential_equation_intro} for all $\lambda > 0$ and $h \in C(E)$. Then, the operator \begin{equation*} \hat{H} := \bigcup_{\lambda > 0} \left\{ \left(\tilde{R}(\lambda)h, \lambda^{-1}(\tilde{R}(\lambda)h - h)\right) \, \middle| \, h \in C(E) \right\} \end{equation*} generates a semigroup $\{V(t)\}_{t \geq 0}$ as in the Crandall-Liggett theorem and we have \eqref{eqn:convergence_for_semigroups}. Additionally, suppose that $\{X_n(0)\}$ satisfies the large deviation principle on $E$ with good rate function $I_0$. Then $X_n$ satisfies the large deviation principle on $D_E(\bR^+)$ with good rate function $I$ given by \begin{equation*} I(\gamma) = I_0(\gamma(0)) + \sup_{m} \sup_{0 = t_0 < t_1 < \dots < t_m} \sum_{k=1}^m I_{t_k - t_{k-1}}(\gamma(t_k) \, | \, \gamma(t_{k-1})), \end{equation*} where $I_s(y \, | \, x) := \sup_{f \in C(E)} f(y) - V(s)f(x)$. \end{theorem} Note that to prove Theorem 6.14 in \cite{FK06}, one needs to check that viscosity sub- and super-solutions to \eqref{eqn:differential_equation_intro} exist. 
Feng and Kurtz construct these sub- and super-solutions explicitly, using the approximating operators $H_n$, see the proof of Lemma 6.9 in \cite{FK06}. \begin{proof} We check the conditions for Theorem 6.14 in \cite{FK06}. In our models, the maps $\eta_n : E_n \rightarrow E$ are simply the embedding maps. Condition (a) is satisfied as all our generators $A_n$ are bounded. The conditions for convergence of the generators follow by Lemma \ref{lemma:operator_convergence}. \end{proof} The additional assumptions in Theorems \ref{theorem:ldp_ehrenfest_dd} and \ref{theorem:ldp_mean_field_jump_process} are there to make sure we are able to verify the comparison principle. This is the major contribution of the paper and will be carried out in Section \ref{section:viscosity_solutions}. The final step towards Theorems \ref{theorem:ldp_ehrenfest_dd} and \ref{theorem:ldp_mean_field_jump_process} is to express the rate function as the integral of a Lagrangian. This, too, is based on results in Chapter 8 of \cite{FK06}. \subsection{Variational semigroups} In this section, we introduce the \textit{Nisio semigroup} $\mathbf{V}(t)$, which we will show to equal $V(t)$ on $C(E)$. This semigroup is given as a variational problem where one optimises a payoff $f(\gamma(t))$ that depends on the state $\gamma(t) \in E$, but where a cost is paid that depends on the whole trajectory $\{\gamma(s)\}_{0 \leq s \leq t}$. The cost is accumulated over time and is given by a `Lagrangian'. Given the continuous and convex operator $Hf(x) = H(x,\nabla f(x))$, we define this Lagrangian by taking the Legendre-Fenchel transform: \begin{equation*} \cL(x,u) := \sup_{p \in \bR^d} \left\{\ip{p}{u} - H(x,p) \right\}. \end{equation*} As $p \mapsto H(x,p)$ is convex and continuous, it follows by the Fenchel-Moreau theorem that also \begin{equation*} Hf(x) = H(x,\nabla f(x)) = \sup_{u \in \bR^d} \left\{\ip{\nabla f(x)}{u} - \cL(x,u) \right\}.
\end{equation*} Using $\cL$, we define the Nisio semigroup for measurable functions $f$ on $E$: \begin{equation} \label{eqn:def_Nisio_semigroup} \mathbf{V}(t)f(x) = \sup_{\substack{\gamma \in \cA\cC \\ \gamma(0) = x}} f(\gamma(t)) - \int_0^t \cL(\gamma(s),\dot{\gamma}(s)) \dd s. \end{equation} To be able to apply the results from Chapter 8 in \cite{FK06}, we need to verify Conditions 8.9 and 8.11 of \cite{FK06}. \smallskip For the semigroup to be well behaved, we need to verify Condition 8.9 in \cite{FK06}. In particular, this condition implies Proposition 8.19 in \cite{FK06}, which ensures that the Nisio semigroup is in fact a semigroup on the upper semi-continuous functions that are bounded above. Additionally, it implies that the absolutely continuous trajectories up to time $T$ with uniformly bounded Lagrangian cost form a compact set in $D_E([0,T])$. \begin{lemma} \label{lemma:condition_8_9} For the Hamiltonians in \eqref{eqn:def_Hamiltionian_Ehrenfest} and \eqref{eqn:def_Hamiltionian_jump}, Condition 8.9 in \cite{FK06} is satisfied. \end{lemma} \begin{proof} For (1), take $U = \bR^d$ and set $Af(x,v) = \ip{\nabla f(x)}{v}$. Considering Definition 8.1 in \cite{FK06}, if $\gamma \in \cA\cC$, then \begin{equation*} f(\gamma(t)) - f(\gamma(0)) = \int_0^t Af(\gamma(s),\dot{\gamma}(s)) \dd s \end{equation*} by definition of $A$. In Definition 8.1, however, relaxed controls are considered, i.e. instead of a fixed speed $\dot{\gamma}(s)$, one considers a measure $\lambda \in \cM(\bR^d \times \bR^+)$ such that $\lambda(\bR^d \times [0,t]) = t$ for all $ t \geq 0$ and \begin{equation*} f(\gamma(t)) - f(\gamma(0)) = \int_0^t Af(\gamma(s),v) \lambda(\dd v, \dd s). \end{equation*} These relaxed controls are then used to define the Nisio semigroup in equation (8.10). Note, however, that by convexity of $H$ in the second coordinate, also $\cL$ is convex in the second coordinate.
It follows by Jensen's inequality that a deterministic control $\lambda(\dd v, \dd t) = \delta_{v(t)}(\dd v) \dd t$ is always the control with the smallest cost. We conclude that we can restrict the definition (8.10) to curves in $\cA\cC$. This motivates our changed definition in equation \eqref{eqn:def_Nisio_semigroup}. For this paper, it suffices to set $\Gamma = E \times \bR^d$, so that (2) is satisfied. By compactness of $E$, (4) is clear. \smallskip We are left to prove (3) and (5). For (3), note that $\cL$ is lower semi-continuous by construction. We also have to prove compactness of the level sets. By lower semi-continuity, it suffices to show that the level sets $\{\cL \leq c\}$ are contained in a compact set. Set $\cN := \cap_{x \in E} \left\{p \in \bR^d \, \middle| \, H(x,p) \leq 1 \right\}$. First, we show that $\cN$ has non-empty interior, i.e. there is some $\varepsilon > 0$ such that the open ball $B(0,\varepsilon)$ of radius $\varepsilon$ around $0$ is contained in $\cN$. Suppose not; then there exist $x_n$ and $p_n$ such that $p_n \rightarrow 0$ and $H(x_n,p_n) > 1$ for all $n$. By compactness of $E$ and continuity of $H$, we find a limit point $(x,0)$ with $H(x,0) \geq 1$, which contradicts our definition of $H$, where $H(y,0) = 0$ for all $y \in E$. Let $(x,v) \in \{\cL\leq c\}$; then \begin{equation*} \ip{p}{v} \leq \cL(x,v) + H(x,p) \leq c + 1 \end{equation*} for all $p \in B(0,\varepsilon) \subseteq \cN$. It follows that $v$ is contained in some bounded ball in $\bR^d$, and hence that $\{\cL \leq c\}$ is contained in a compact set by the Heine-Borel theorem. \smallskip Finally, (5) can be proven as Lemma 10.21 in \cite{FK06} or Lemma 5.19 in \cite{Kr14a}. \end{proof} The last property necessary for the equality of $V(t)f$ and $\mathbf{V}(t)f$ on $C(E)$ is the verification of Condition 8.11 in \cite{FK06}.
This condition is key to proving that a variational resolvent, see equation (8.22), is a viscosity super-solution to \eqref{eqn:differential_equation_intro}. As the variational resolvent is also a sub-solution to \eqref{eqn:differential_equation_intro} by Young's inequality, the variational resolvent is a viscosity solution to this equation. If viscosity solutions are unique, this yields, after an approximation argument, that $V(t) = \mathbf{V}(t)$. \begin{lemma} \label{lemma:condition_8_11} Condition 8.11 in \cite{FK06} is satisfied. In other words, for all $g \in C^{1}(E)$ and $x_0 \in E$, there exists a trajectory $\gamma \in \cA\cC$ such that $\gamma(0) = x_0$ and for all $T \geq 0$: \begin{equation} \label{eqn:lemma_optimising_trajectories} \int_0^T Hg(\gamma(t)) \dd t = \int_0^T \ip{\nabla g(\gamma(t))}{\dot{\gamma}(t)} - \cL(\gamma(t),\dot{\gamma}(t)) \dd t. \end{equation} \end{lemma} \begin{proof} Fix $T > 0$, $g \in C^{1}(E)$ and $x_0 \in E$. We introduce a vector field $\mathbf{F}^g : E \rightarrow \bR^d$ by \begin{align*} \mathbf{F}^g(x) := H_p(x,\nabla g(x)), \end{align*} where $H_p(x,p)$ is the vector of partial derivatives of $H$ in the second coordinate. Note that in our examples, $H$ is continuously differentiable in the $p$-coordinates. For example, for the $d=1$ case of Theorem \ref{theorem:ldp_ehrenfest_dd}, we obtain \begin{equation*} \mathbf{F}^g(x) := 2v_+(x)e^{2\nabla g(x)} - 2v_-(x)e^{-2\nabla g(x)}. \end{equation*} As $\mathbf{F}^g$ is a continuous vector field, we can find a local solution $\gamma^g(t)$ in $E$ to the differential equation \begin{equation*} \begin{cases} \dot{\gamma}(t) = \mathbf{F}^g(\gamma(t)), \\ \gamma(0) = x_0, \end{cases} \end{equation*} by an extended version of Peano's theorem \cite{Cr72}. The result in \cite{Cr72} is local; however, the length of the interval on which the solution is constructed depends inversely on the norm of the vector field, see his equation (2).
As our vector fields are globally bounded in size, we can iterate the construction in \cite{Cr72} to obtain a global existence result, such that $\dot{\gamma}^g(t) = \mathbf{F}^g(\gamma^g(t))$ for almost all times in $[0,\infty)$. We conclude that on a subset of $[0,T]$ of full measure, \begin{align*} \cL(\gamma^g(t),\dot{\gamma}^g(t)) & = \cL(\gamma^g(t),\mathbf{F}^g(\gamma^g(t))) \\ & = \sup_{p \in \bR^d} \ip{p}{\mathbf{F}^g(\gamma^g(t))} - H(\gamma^g(t),p) \\ & = \sup_{p \in \bR^d} \ip{p}{H_p(\gamma^g(t), \nabla g(\gamma^g(t)))} - H(\gamma^g(t),p). \end{align*} By differentiating the final expression with respect to $p$, we find that the supremum is attained at $p = \nabla g(\gamma^g(t))$. In other words, we find \begin{align*} \cL(\gamma^g(t),\dot{\gamma}^g(t)) & = \ip{\nabla g(\gamma^g(t))}{H_p(\gamma^g(t), \nabla g(\gamma^g(t)))} - H(\gamma^g(t),\nabla g(\gamma^g(t))) \\ & = \ip{\nabla g(\gamma^g(t))}{\dot{\gamma}^g(t)} - Hg(\gamma^g(t)). \end{align*} Integrating over time, and noting that the exceptional null set does not contribute to the integral, we find \eqref{eqn:lemma_optimising_trajectories}. \end{proof} The following result follows from Corollary 8.29 in \cite{FK06}. \begin{theorem} \label{theorem:general_ldp} For either of our two models, assume that \eqref{eqn:main_convergence_condition_for_ldp_ehrenfest} or \eqref{eqn:property_of_jump_kernels} holds. Assume that the comparison principle is satisfied for \eqref{eqn:differential_equation_intro} for all $\lambda > 0$ and $h \in C(E)$. Finally, suppose that $\{X_n(0)\}$ satisfies the large deviation principle on $E$ with good rate function $I_0$. Then, we have $V(t)f = \mathbf{V}(t)f$ for all $f \in C(E)$ and $t \geq 0$. Also, $X_n$ satisfies the large deviation principle on $D_E(\bR^+)$ with good rate function $I$ given by \begin{equation*} I(\gamma) := \begin{cases} I_0(\gamma(0)) + \int_0^\infty \cL(\gamma(s),\dot{\gamma}(s)) \dd s & \text{if } \gamma \in \cA\cC, \\ \infty & \text{if } \gamma \notin \cA\cC.
\end{cases} \end{equation*} \end{theorem} \begin{proof} We check the conditions for Corollary 8.29 in \cite{FK06}. Note that in our setting $H = \mathbf{H}$. Therefore, condition (a) of Corollary 8.29 is trivially satisfied. Furthermore, we have to check the conditions for Theorems 6.14 and 8.27. For the first theorem, these conditions were already checked in the proof of our Theorem \ref{theorem_LDP_viscosity_FK}. For Theorem 8.27, we need to check Conditions 8.9, 8.10 and 8.11 in \cite{FK06}. As $H1 = 0$, Condition 8.10 follows from Condition 8.11. Conditions 8.9 and 8.11 have been verified in Lemmas \ref{lemma:condition_8_9} and \ref{lemma:condition_8_11}. \end{proof} The last theorem shows us that we have Theorems \ref{theorem:ldp_ehrenfest_dd} and \ref{theorem:ldp_mean_field_jump_process} if we can verify the comparison principle, i.e. Theorems \ref{theorem:comparison_ehrenfest_ddim} and \ref{theorem:comparison_mean_field_jump}. This will be done in the section below. \begin{proof}[Proof of Theorems \ref{theorem:ldp_ehrenfest_dd} and \ref{theorem:ldp_mean_field_jump_process}] The comparison principles for equation \eqref{eqn:differential_equation_intro} are verified in Theorems \ref{theorem:comparison_ehrenfest_ddim} and \ref{theorem:comparison_mean_field_jump}. The two theorems now follow from Theorem \ref{theorem:general_ldp}. \end{proof} \begin{proof}[Proof of Proposition \ref{proposition:Htheorem}] We give the proof for the system considered in Theorem \ref{theorem:ldp_ehrenfest_dd}. Fix $t \geq 0$ and some starting point $x_0$. Let $x(t)$ be any solution of $\dot{x}(t) = \mathbf{F}(x(t))$ with $x(0) = x_0$. We show that $I_0(x(t)) \leq I_0(x_0)$. \smallskip Let $X_n(0)$ be distributed as $\nu_n$. Then it follows by Theorem \ref{theorem:ldp_ehrenfest_dd} that the large deviation principle holds for $\{X_n\}_{n \geq 0}$ on $D_E(\bR^+)$.
\smallskip As $\nu_n$ is invariant for the Markov process generated by $A_n$, also the sequence $\{X_n(t)\}_{n \geq 0}$ satisfies the large deviation principle on $E$ with good rate function $I_0$. Combining these two facts, the contraction principle \cite[Theorem 4.2.1]{DZ98} yields \begin{multline*} I_0(x(t)) = \inf_{\gamma \in \cA\cC: \gamma(t) = x(t)} I_0(\gamma(0)) + \int_0^t \cL(\gamma(s),\dot{\gamma}(s)) \dd s \\ \leq I_0(x(0)) + \int_0^t \cL(x(s),\dot{x}(s)) \dd s = I_0(x(0)). \end{multline*} Note that $\cL(x(s),\dot{x}(s)) = 0$ for all $s$, as was shown in Example \ref{example:ehrenfest_non_unique_solution}. \end{proof} \section{The comparison principle} \label{section:viscosity_solutions} We proceed with checking the comparison principle for equations of the type $f(x) - \lambda B(x,\nabla f(x)) - h(x) = 0$. In other words, for subsolutions $u$ and supersolutions $v$ we need to check that $u \leq v$. We start with some known results. First of all, we give the main tool to construct sequences $x_\alpha$ and $y_\alpha$ that converge to a maximising point $z \in E$ such that $u(z) - v(z) = \sup_{z'\in E} u(z') - v(z')$. This result can be found, for example, as Proposition 3.7 in \cite{CIL92}. \begin{lemma}\label{lemma:doubling_lemma} Let $E$ be a compact subset of $\bR^d$, let $u$ be upper semi-continuous, $v$ lower semi-continuous and let $\Psi : E^2 \rightarrow \bR^+$ be a lower semi-continuous function such that $\Psi(x,y) = 0$ implies $x = y$. For $\alpha > 0$, let $x_\alpha,y_\alpha \in E$ be such that \begin{equation*} u(x_\alpha) - v(y_\alpha) - \alpha \Psi(x_\alpha,y_\alpha) = \sup_{x,y \in E} \left\{u(x) - v(y) - \alpha \Psi(x,y) \right\}. \end{equation*} Then the following hold: \begin{enumerate}[(i)] \item $\lim_{\alpha \rightarrow \infty} \alpha \Psi(x_\alpha,y_\alpha) = 0$. \item All limit points of $(x_\alpha,y_\alpha)$ are of the form $(z,z)$ and for these limit points we have $u(z) - v(z) = \sup_{x \in E} \left\{u(x) - v(x) \right\}$.
\end{enumerate} \end{lemma} We say that $\Psi : E^2 \rightarrow \bR^+$ is a \textit{good distance function} if $\Psi(x,y) = 0$ implies $x = y$, if it is continuously differentiable in both components, and if $(\nabla \Psi(\cdot,y))(x) = - (\nabla \Psi(x,\cdot))(y)$ for all $x,y \in E$. The next two results can be found as Lemma 9.3 in \cite{FK06}. We will give the proofs of these results for completeness. \begin{proposition} \label{proposition:comparison_conditions_on_H} Let $(B,\cD(B))$ be an operator with domain $\cD(B) = C^{1}(E)$, of the form $Bf(x) = B(x,\nabla f(x))$. Let $u$ be a subsolution and $v$ a supersolution to $f(x) - \lambda B(x,\nabla f(x)) - h(x) = 0$, for some $\lambda > 0$ and $h \in C(E)$. Let $\Psi$ be a good distance function and let $x_\alpha,y_\alpha$ satisfy \begin{equation*} u(x_\alpha) - v(y_\alpha) - \alpha \Psi(x_\alpha,y_\alpha) = \sup_{x,y \in E} \left\{u(x) - v(y) - \alpha \Psi(x,y) \right\}. \end{equation*} Suppose that \begin{equation*} \liminf_{\alpha \rightarrow \infty} B\left(x_\alpha,\alpha (\nabla \Psi(\cdot,y_\alpha))(x_\alpha)\right) - B\left(y_\alpha,\alpha (\nabla \Psi(\cdot,y_\alpha))(x_\alpha)\right) \leq 0, \end{equation*} then $u \leq v$. In other words, $f(x) - \lambda B(x,\nabla f(x)) - h(x) = 0$ satisfies the comparison principle. \end{proposition} \begin{proof} Fix $\lambda >0$ and $h \in C(E)$. Let $u$ be a subsolution and $v$ a supersolution to \begin{equation} \label{eqn:proof_comp_equation} f(x) - \lambda B(x,\nabla f(x)) - h(x) = 0. \end{equation} We argue by contradiction and assume that $\delta := \sup_{x \in E} u(x) - v(x) > 0$. For $\alpha > 0$, let $x_\alpha,y_\alpha$ be such that \begin{equation*} u(x_\alpha) - v(y_\alpha) - \alpha \Psi(x_\alpha,y_\alpha) = \sup_{x,y \in E} \left\{u(x) - v(y) - \alpha \Psi(x,y) \right\}.
\end{equation*} Thus Lemma \ref{lemma:doubling_lemma} yields $\alpha \Psi(x_\alpha,y_\alpha) \rightarrow 0$ and for any limit point $z$ of the sequence $x_\alpha$, we have $u(z) - v(z) = \sup_{x \in E} u(x) - v(x) = \delta > 0$. It follows that for $\alpha$ large enough, $u(x_\alpha) - v(y_\alpha) \geq \frac{1}{2}\delta$. \smallskip For every $\alpha > 0$, the map $\Phi^1_\alpha(x) := v(y_\alpha) + \alpha\Psi(x,y_\alpha)$ is in $C^{1}(E)$ and $u(x) - \Phi^1_\alpha(x)$ has a maximum at $x_\alpha$. On the other hand, $\Phi^2_\alpha(y) := u(x_\alpha) - \alpha \Psi(x_\alpha,y)$ is also in $C^{1}(E)$ and $v(y) - \Phi^2_\alpha(y)$ has a minimum at $y_\alpha$. As $u$ is a sub- and $v$ a super-solution to \eqref{eqn:proof_comp_equation}, we have \begin{align*} \frac{u(x_\alpha) - h(x_\alpha)}{\lambda} & \leq B(x_\alpha,\alpha(\nabla\Psi(\cdot,y_\alpha))(x_\alpha)) \\ \frac{v(y_\alpha) - h(y_\alpha)}{\lambda} & \geq B(y_\alpha,-\alpha(\nabla\Psi(x_\alpha,\cdot))(y_\alpha)) \\ & = B(y_\alpha,\alpha(\nabla\Psi(\cdot,y_\alpha))(x_\alpha)), \end{align*} where the last equality follows as $\Psi$ is a good distance function. It follows that for $\alpha$ large enough, we have \begin{align} 0 & < \frac{\delta}{2\lambda} \leq \frac{u(x_\alpha) - v(y_\alpha)}{\lambda} \label{eqn:comp_proof_contradicting_assumption} \\ & = \frac{u(x_\alpha) - h(x_\alpha)}{\lambda} - \frac{v(y_\alpha) - h(y_\alpha)}{\lambda} + \frac{1}{\lambda}\left(h(x_\alpha) - h(y_\alpha)\right) \notag \\ & \leq B(x_\alpha,\alpha(\nabla\Psi(\cdot,y_\alpha))(x_\alpha)) - B(y_\alpha,\alpha(\nabla\Psi(\cdot,y_\alpha))(x_\alpha)) + \frac{1}{\lambda}\left(h(x_\alpha) - h(y_\alpha)\right). \notag \end{align} As $h$ is continuous, we obtain $\lim_{\alpha \rightarrow \infty} h(x_\alpha) - h(y_\alpha) = 0$. Together with the assumption of the proposition, we find that the $\liminf$ as $\alpha \rightarrow \infty$ of the right-hand side of \eqref{eqn:comp_proof_contradicting_assumption} is non-positive, which contradicts the strict positivity of $\frac{\delta}{2\lambda}$.
\end{proof} The next lemma gives additional control on the sequences $x_\alpha,y_\alpha$. \begin{lemma} \label{lemma:control_on_H} Let $(B,\cD(B))$ be an operator with domain $\cD(B) = C^{1}(E)$, of the form $Bf(x) = B(x,\nabla f(x))$. Let $u$ be a subsolution and $v$ a supersolution to $f(x) - \lambda B(x,\nabla f(x)) - h(x) = 0$, for some $\lambda > 0$ and $h \in C(E)$. Let $\Psi$ be a good distance function and let $x_\alpha,y_\alpha$ satisfy \begin{equation*} u(x_\alpha) - v(y_\alpha) - \alpha \Psi(x_\alpha,y_\alpha) = \sup_{x,y \in E} \left\{u(x) - v(y) - \alpha \Psi(x,y) \right\}. \end{equation*} Then we have \begin{equation} \label{eqn:control_on_H} \sup_\alpha B\left(y_\alpha,\alpha (\nabla \Psi(\cdot,y_\alpha))(x_\alpha)\right) < \infty. \end{equation} \end{lemma} \begin{proof} Fix $\lambda > 0$, $h \in C(E)$ and let $u$ and $v$ be sub- and super-solutions to $f(x) - \lambda B(x,\nabla f(x)) - h(x) = 0$. Let $\Psi$ be a good distance function and let $x_\alpha,y_\alpha$ satisfy \begin{equation*} u(x_\alpha) - v(y_\alpha) - \alpha \Psi(x_\alpha,y_\alpha) = \sup_{x,y \in E} \left\{u(x) - v(y) - \alpha \Psi(x,y) \right\}. \end{equation*} As $y_\alpha$ is such that \begin{equation*} v(y_\alpha) - \left(u(x_\alpha) - \alpha\Psi(x_\alpha,y_\alpha)\right) = \inf_y v(y) - \left(u(x_\alpha) - \alpha\Psi(x_\alpha,y)\right), \end{equation*} and $v$ is a super-solution, we obtain \begin{equation*} B\left(y_\alpha,-\alpha (\nabla \Psi(x_\alpha,\cdot))(y_\alpha)\right) \leq \frac{v(y_\alpha) - h(y_\alpha)}{\lambda}. \end{equation*} As $\Psi$ is a good distance function, we have $- (\nabla \Psi(x_\alpha,\cdot))(y_\alpha) = (\nabla \Psi(\cdot,y_\alpha))(x_\alpha)$. The boundedness of $v$ and $h$ now implies \begin{equation*} \sup_\alpha B\left(y_\alpha,\alpha (\nabla \Psi(\cdot,y_\alpha))(x_\alpha)\right) \leq \sup_\alpha \frac{1}{\lambda} \left(v(y_\alpha) - h(y_\alpha) \right) \leq \frac{1}{\lambda} \vn{v - h} < \infty.
\end{equation*} \end{proof} \subsection{One-dimensional Ehrenfest model} \label{subsection:Ehrenfest_model_1d} To single out the important aspects of the proof of the comparison principle for equation \eqref{eqn:differential_equation_intro}, we start by proving it for the $d=1$ case of Theorem \ref{theorem:ldp_ehrenfest_dd}. \begin{proposition} \label{proposition:comparison_ehrenfest_1d} Let $E = [-1,1]$ and let \begin{equation*} H(x,p) = v_+(x) \left[e^{2p} -1\right] + v_-(x) \left[e^{-2p} - 1\right], \end{equation*} where $v_+, v_-$ are continuous and satisfy the following properties: \begin{enumerate}[(a)] \item $v_+(x) = 0$ for all $x$ or $v_+$ satisfies the following properties: \begin{enumerate}[(i)] \item $v_+(x) > 0$ for $x \neq 1$. \item $v_+(1) = 0$ and there exists a neighbourhood $U_{1}$ of $1$ on which there exists a decomposition $v_+(x) = v_{+,\dagger}(x)v_{+,\ddagger}(x)$ such that $v_{+,\dagger}$ is decreasing and where $v_{+,\ddagger}$ is continuous and satisfies $v_{+,\ddagger}(1) \neq 0$. \end{enumerate} \item $v_-(x) = 0$ for all $x$ or $v_-$ satisfies the following properties: \begin{enumerate}[(i)] \item $v_-(x) > 0$ for $x \neq -1$. \item $v_-(-1) = 0$ and there exists a neighbourhood $U_{-1}$ of $-1$ on which there exists a decomposition $v_-(x) = v_{-,\dagger}(x)v_{-,\ddagger}(x)$ such that $v_{-,\dagger}$ is increasing and where $v_{-,\ddagger}$ is continuous and satisfies $v_{-,\ddagger}(-1) \neq 0$. \end{enumerate} \end{enumerate} Let $\lambda > 0$ and $h \in C(E)$. Then the comparison principle holds for $f(x) - \lambda H(x,\nabla f(x)) - h(x) = 0$. \end{proposition} \begin{proof} Fix $\lambda > 0$, $h \in C(E)$ and pick a subsolution $u$ and a supersolution $v$ to $f(x) - \lambda H(x,\nabla f(x)) - h(x) = 0$. We check the condition for Proposition \ref{proposition:comparison_conditions_on_H}.
We take the good distance function $\Psi(x,y) = 2^{-1} (x-y)^2$ and let $x_\alpha,y_\alpha$ satisfy \begin{equation*} u(x_\alpha) - v(y_\alpha) - \frac{\alpha}{2} |x_\alpha-y_\alpha|^2 = \sup_{x,y \in E} \left\{u(x) - v(y) - \frac{\alpha}{2} |x-y|^2 \right\}. \end{equation*} We need to prove that \begin{equation} \label{eqn:comp_proof_1d_basic_inequality} \liminf_{\alpha \rightarrow \infty} H(x_\alpha,\alpha(x_\alpha-y_\alpha)) - H(y_\alpha,\alpha(x_\alpha-y_\alpha)) \leq 0. \end{equation} By Lemma \ref{lemma:doubling_lemma}, we know that $\alpha|x_\alpha - y_\alpha|^2 \rightarrow 0$ as $\alpha \rightarrow \infty$ and any limit point of $x_\alpha, y_\alpha$ is of the form $(z,z)$ for some $z$ such that $u(z) - v(z) = \max_{z' \in E} u(z') - v(z')$. Restrict $\alpha$ to the sequence $\alpha \in \bN$ and extract a subsequence, which we will also denote by $\alpha$, such that, as $\alpha \rightarrow \infty$, both $x_\alpha$ and $y_\alpha$ converge to some $z$. The rest of the proof depends on whether $z = -1$, $z = 1$ or $z \in (-1,1)$. \smallskip First suppose that $z \in (-1,1)$. By Lemma \ref{lemma:control_on_H}, we have \begin{equation*} \sup_\alpha v_+(y_\alpha) \left[e^{2\alpha(x_\alpha-y_\alpha)} - 1 \right] + v_-(y_\alpha) \left[e^{-2\alpha(x_\alpha-y_\alpha)} - 1 \right] < \infty. \end{equation*} As $e^c - 1 > -1$, we see that the $\limsup$ of each of the two terms of the sum is bounded as well. Using that $y_\alpha \rightarrow z \in (-1,1)$, and the fact that $v_+,v_-$ are bounded away from $0$ on a closed interval around $z$, we obtain from the first term that $\sup_\alpha \alpha(x_\alpha - y_\alpha) < \infty$ and from the second that $\sup_\alpha \alpha(y_\alpha - x_\alpha) < \infty$. We conclude that $\alpha(x_\alpha - y_\alpha)$ is a bounded sequence. Therefore, there exists a subsequence $\alpha(k)$ such that $\alpha(k)(x_{\alpha(k)} - y_{\alpha(k)})$ converges to some $p_0$.
We find that \begin{align*} & \liminf_{\alpha \rightarrow \infty} H(x_\alpha,\alpha(x_\alpha-y_\alpha)) - H(y_\alpha,\alpha(x_\alpha-y_\alpha)) \\ & \leq \lim_{k \rightarrow \infty} H(x_{\alpha(k)},\alpha(k)(x_{\alpha(k)}-y_{\alpha(k)})) - H(y_{\alpha(k)},\alpha(k)(x_{\alpha(k)}-y_{\alpha(k)})) \\ & = H(z,p_0) - H(z,p_0) = 0. \end{align*} We proceed with the proof in the case that $x_\alpha,y_\alpha \rightarrow z = -1$. The case where $z = 1$ is proven similarly. Again by Lemma \ref{lemma:control_on_H}, we obtain the bounds \begin{equation} \sup_\alpha v_+(y_\alpha) \left[e^{2\alpha(x_\alpha-y_\alpha)} - 1 \right] < \infty, \qquad \sup_\alpha v_-(y_\alpha) \left[e^{-2\alpha(x_\alpha-y_\alpha)} - 1 \right] < \infty. \label{eqn:comp_proof_1d_second_sup_bound} \end{equation} As $v_+$ is bounded away from $0$ near $-1$, we obtain from the left-hand bound that $\sup_\alpha \alpha(x_\alpha - y_\alpha) < \infty$. As in the proof above, it follows that if $\alpha|x_\alpha - y_\alpha|$ is bounded, we are done. This leaves the case where there exists a subsequence of $\alpha$, denoted by $\alpha(k)$, such that $\alpha(k)(y_{\alpha(k)} - x_{\alpha(k)}) \rightarrow \infty$. Then the sequence $e^{2\alpha(k)(x_{\alpha(k)} - y_{\alpha(k)})}- 1$ is bounded and contains a converging subsequence. We obtain as in the proof where $z \in (-1,1)$ that \begin{align*} & \liminf_{\alpha \rightarrow \infty} H(x_\alpha,\alpha(x_\alpha-y_\alpha)) - H(y_\alpha,\alpha(x_\alpha-y_\alpha)) \\ & \quad = \liminf_{\alpha \rightarrow \infty} \left[v_+(x_\alpha) - v_+(y_\alpha) \right]\left[e^{2\alpha(x_\alpha-y_\alpha)} - 1 \right] \\ & \qquad \qquad \qquad + \left[v_-(x_\alpha) - v_-(y_\alpha) \right]\left[e^{2\alpha(y_\alpha-x_\alpha)} - 1 \right] \\ & \quad \leq \liminf_{k \rightarrow \infty} \left[v_-(x_{\alpha(k)}) - v_-(y_{\alpha(k)}) \right]\left[e^{2\alpha(k)(y_{\alpha(k)}-x_{\alpha(k)})} - 1 \right].
\end{align*} Note that as $\alpha(k)(y_{\alpha(k)} - x_{\alpha(k)}) \rightarrow \infty$, we have $y_{\alpha(k)} > x_{\alpha(k)}$ for $k$ sufficiently large. Also, for $k$ sufficiently large, $y_{\alpha(k)}, x_{\alpha(k)} \in U_{-1}$. It follows that $v_-(y_{\alpha(k)}) > 0$, which allows us to write \begin{align*} & \left[v_-(x_{\alpha(k)}) - v_-(y_{\alpha(k)}) \right]\left[e^{2\alpha(k)(y_{\alpha(k)}-x_{\alpha(k)})} - 1 \right] \\ & = \left[\frac{v_{-,\dagger}(x_{\alpha(k)})}{v_{-,\dagger}(y_{\alpha(k)})}\frac{v_{-,\ddagger}(x_{\alpha(k)})}{v_{-,\ddagger}(y_{\alpha(k)})} - 1 \right]v_-(y_{\alpha(k)}) \left[e^{2\alpha(k)(y_{\alpha(k)}-x_{\alpha(k)})} - 1 \right]. \end{align*} By the bound in \eqref{eqn:comp_proof_1d_second_sup_bound}, and the obvious lower bound, we see that the non-negative sequence \begin{equation*} u_k := v_-(y_{\alpha(k)}) \left[e^{2\alpha(k)(y_{\alpha(k)}-x_{\alpha(k)})} - 1 \right] \end{equation*} contains a converging subsequence $u_{k'} \rightarrow c$. As $y_{\alpha(k)} > x_{\alpha(k)}$ and $v_{-,\dagger}$ is increasing: \begin{multline*} \limsup_k \frac{v_{-,\dagger}(x_{\alpha(k)})}{v_{-,\dagger}(y_{\alpha(k)})}\frac{v_{-,\ddagger}(x_{\alpha(k)})}{v_{-,\ddagger}(y_{\alpha(k)})} \\ \leq \left(\limsup_k \frac{v_{-,\dagger}(x_{\alpha(k)})}{v_{-,\dagger}(y_{\alpha(k)})}\right)\left(\lim_k \frac{v_{-,\ddagger}(x_{\alpha(k)})}{v_{-,\ddagger}(y_{\alpha(k)})} \right) \leq \frac{v_{-,\ddagger}(-1)}{v_{-,\ddagger}(-1)} = 1. \end{multline*} As a consequence, we obtain \begin{multline*} \liminf_{k} \left[\frac{v_{-}(x_{\alpha(k)})}{v_{-}(y_{\alpha(k)})} - 1 \right]v_-(y_{\alpha(k)}) \left[e^{2\alpha(k)(y_{\alpha(k)}-x_{\alpha(k)})} - 1 \right] \\ \leq \left(\limsup_k\left[\frac{v_{-,\dagger}(x_{\alpha(k)})}{v_{-,\dagger}(y_{\alpha(k)})}\frac{v_{-,\ddagger}(x_{\alpha(k)})}{v_{-,\ddagger}(y_{\alpha(k)})} - 1 \right] \right) \left(\liminf_{k'} u_{k'} \right) \leq 0.
\end{multline*} This concludes the proof of \eqref{eqn:comp_proof_1d_basic_inequality} for the case that $z = -1$. \end{proof} \subsection{Multi-dimensional Ehrenfest model} \begin{comment} \begin{proposition} \label{proposition:comparison_ehrenfest_ddim} Consider \begin{equation*} H(x, p) = \sum_i v_+^i(x)\left[e^{2p_i} - 1 \right] + v_-^i(x) \left[e^{-2p_i} - 1 \right]. \end{equation*} for $x \in [-1,1]^d$ Let $v_+^i, v_-^i$ satisfy the properties stated in Theorem \ref{theorem:ldp_ehrenfest_dd}. Then, for any $\lambda > 0$ and $h \in C(E)$ the comparison principle holds for $f(x) - \lambda H(x,\nabla f(x)) - h(x) = 0$. \end{proposition} \end{comment} \begin{proof}[Proof of Theorem \ref{theorem:comparison_ehrenfest_ddim}] Let $u$ be a subsolution and $v$ a supersolution to $f(x) - \lambda H(x,\nabla f(x)) - h(x) = 0$. As in the proof of Proposition \ref{proposition:comparison_ehrenfest_1d}, we check the condition for Proposition \ref{proposition:comparison_conditions_on_H}. Again, for $\alpha \in \bN$ let $x_\alpha,y_\alpha$ satisfy \begin{equation*} u(x_\alpha) - v(y_\alpha) - \frac{\alpha}{2} |x_\alpha-y_\alpha|^2 = \sup_{x,y \in E} \left\{u(x) - v(y) - \frac{\alpha}{2} |x-y|^2 \right\}. \end{equation*} and without loss of generality let $z$ be such that $x_\alpha,y_\alpha \rightarrow z$. \smallskip Denote by $x_{\alpha,i}$ and $y_{\alpha,i}$ the $i$-th coordinate of $x_\alpha$ respectively $y_\alpha$. We prove \begin{multline*} \liminf_{\alpha \rightarrow \infty} H(x_\alpha,\alpha(x_\alpha-y_\alpha)) - H(y_\alpha,\alpha(x_\alpha-y_\alpha)) \\ = \liminf_{\alpha \rightarrow \infty} \sum_i \Bigg\{ \left[v_+^i(x_\alpha) - v_+^i(y_\alpha)\right]\left[e^{2\alpha(x_{\alpha,i} - y_{\alpha,i})} - 1 \right] \\ + \left[v_-^i(x_\alpha) - v_-^i(y_\alpha)\right]\left[e^{2\alpha(y_{\alpha,i} - x_{\alpha,i})} - 1 \right] \Bigg\} \leq 0, \end{multline*} by constructing a subsequence $\alpha(n) \rightarrow \infty$ such that the first term in the sum converges to $0$.
From this sequence, we find a subsequence such that the second term converges to zero, and so on. \smallskip Therefore, we will assume that we have a sequence $\alpha(n) \rightarrow \infty$ for which the first $i-1$ terms of the difference of the two Hamiltonians vanish and prove that we can find a subsequence for which the $i$-th term \begin{multline} \label{eqn:Hamiltonian_Ehrenfest_comparioson_ith_term} \left[v_+^i(x_\alpha) - v_+^i(y_\alpha)\right]\left[e^{\alpha(x_{\alpha,i} - y_{\alpha,i})} - 1 \right] \\ + \left[v_-^i(x_\alpha) - v_-^i(y_\alpha)\right]\left[e^{\alpha(y_{\alpha,i} - x_{\alpha,i})} - 1 \right] \end{multline} vanishes. This follows directly as in the proof of Proposition \ref{proposition:comparison_ehrenfest_1d}, arguing depending on whether $z_i \in (-1,1)$, $z_i = -1$ or $z_i = 1$. \end{proof} \subsection{Mean field Markov jump processes} \begin{comment} \begin{proposition} \label{proposition:comparison_mean_field_jump} Consider on $E = E_2$, the Hamiltonian \begin{equation*} H(x,p) = \sum_{a,b} v(a,b,\mu)\left[e^{p_b - p_a} -1\right]. \end{equation*} Suppose that for all $a,b$ the continuous map $\mu \mapsto v(a,b,\mu)$ satisfies the properties stated in Theorem \ref{theorem:ldp_mean_field_jump_process}. Let $\lambda > 0$ and $h \in C(E)$, then the comparison principle holds for $f(\mu) - \lambda H(\mu,\nabla f(\mu)) - h(\mu) = 0$. \end{proposition} \end{comment} The proof of Theorem \ref{theorem:comparison_mean_field_jump} follows along the lines of the proofs of Proposition \ref{proposition:comparison_ehrenfest_1d} and Theorem \ref{theorem:comparison_ehrenfest_ddim}. The proof however needs one important adaptation because of the appearance of the difference $p_b - p_a$ in the exponents of the Hamiltonian.
Naively copying the proofs using the distance function $\Psi(\mu,\nu) = \frac{1}{2} \sum_{a} (\mu(a) - \nu(a))^2$ one obtains by Lemma \ref{lemma:control_on_H}, for suitable sequences $\mu_\alpha$ and $\nu_\alpha$, that \begin{equation*} \sup_\alpha v(a,b,\nu_\alpha)\left[e^{\alpha\left(\left(\mu_\alpha(b) - \nu_\alpha(b)\right) - \left(\mu_\alpha(a) - \nu_\alpha(a)\right)\right)} - 1 \right] < \infty. \end{equation*} One sees that the control on the sequences $\alpha(\nu_\alpha(a) - \mu_\alpha(a))$ obtained from this bound is not very good, due to the compensating term $\alpha(\mu_\alpha(b) - \nu_\alpha(b))$. \smallskip The proof can be suitably adapted using a different distance function. For $x \in \mathbb{R}$, let $x^- := x \wedge 0$ and $x^+ := x \vee 0$. Define $\Psi(\mu,\nu) = \frac{1}{2} \sum_{a} ((\mu(a) - \nu(a))^-)^2 = \frac{1}{2} \sum_{a} ((\nu(a) - \mu(a))^+)^2$. Clearly, $\Psi$ is differentiable in both components and satisfies $(\nabla \Psi(\cdot,\nu))(\mu) = - (\nabla \Psi(\mu,\cdot))(\nu)$. Finally, using the fact that $\sum_i \mu(i) = \sum_i \nu(i) = 1$, we find that $\Psi(\mu,\nu) = 0$ implies that $\mu = \nu$. We conclude that $\Psi$ is a good distance function. The bound obtained from Lemma \ref{lemma:control_on_H} using this $\Psi$ yields \begin{equation*} \sup_\alpha v(a,b,\nu_\alpha)\left[e^{\alpha\left(\left(\mu_\alpha(b) - \nu_\alpha(b)\right)^- - \left(\mu_\alpha(a) - \nu_\alpha(a)\right)^-\right)} - 1 \right] < \infty. \end{equation*} We see that if $\alpha\left(\left(\mu_\alpha(b) - \nu_\alpha(b)\right)^- - \left(\mu_\alpha(a) - \nu_\alpha(a)\right)^-\right) \rightarrow \infty$ it must be because $\alpha(\nu_\alpha(a) - \mu_\alpha(a)) \rightarrow \infty$. This puts us in the position to use the techniques from the previous proofs. \begin{proof}[Proof of Theorem \ref{theorem:comparison_mean_field_jump}] Set $\Psi(\mu,\nu) = \frac{1}{2} \sum_{a} ((\mu(a) - \nu(a))^-)^2$, as above. We already noted that $\Psi$ is a good distance function.
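The stated properties of $\Psi$ are straightforward to check numerically; the sketch below verifies the gradient antisymmetry and the equality of the two representations on random elements of the simplex (purely illustrative):

```python
import numpy as np

# Psi(mu, nu) = (1/2) sum_a ((mu(a) - nu(a))^-)^2, with x^- = min(x, 0)
def psi(mu, nu):
    return 0.5 * np.sum(np.minimum(mu - nu, 0.0) ** 2)

def grad_mu(mu, nu):
    # (nabla Psi(., nu))(mu): coordinate-wise (mu(a) - nu(a))^-
    return np.minimum(mu - nu, 0.0)

def grad_nu(mu, nu):
    # (nabla Psi(mu, .))(nu): coordinate-wise -(mu(a) - nu(a))^-
    return -np.minimum(mu - nu, 0.0)

rng = np.random.default_rng(0)
mu, nu = rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(4))

# antisymmetry: (nabla Psi(., nu))(mu) = -(nabla Psi(mu, .))(nu)
antisymmetric = np.allclose(grad_mu(mu, nu), -grad_nu(mu, nu))

# the x^- and x^+ representations of Psi coincide
same_value = np.isclose(psi(mu, nu),
                        0.5 * np.sum(np.maximum(nu - mu, 0.0) ** 2))
```

On the simplex, $\Psi(\mu,\nu) = 0$ forces $\mu = \nu$: the coordinates of $\mu - \nu$ sum to zero, so if none is negative, all vanish.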
\smallskip Let $u$ be a subsolution and $v$ be a supersolution to $f(\mu) - \lambda H(\mu,\nabla f(\mu)) - h(\mu) = 0$. For $\alpha \in \mathbb{N}$, pick $\mu_\alpha$ and $\nu_\alpha$ such that \begin{equation*} u(\mu_\alpha) - v(\nu_\alpha) - \alpha\Psi(\mu_\alpha,\nu_\alpha) = \sup_{\mu,\nu \in E} \left\{u(\mu) - v(\nu) - \alpha\Psi(\mu,\nu) \right\}. \end{equation*} Furthermore, assume without loss of generality that $\mu_\alpha,\nu_\alpha \rightarrow z$ for some $z$ such that $u(z) - v(z) = \sup_{z'\in E} u(z') - v(z')$. By Proposition \ref{proposition:comparison_conditions_on_H}, we need to bound \begin{align} & H(\mu_\alpha,\alpha (\nabla\Psi(\cdot,\nu_\alpha))(\mu_\alpha)) - H(\nu_\alpha,-\alpha (\nabla\Psi(\mu_\alpha,\cdot))(\nu_\alpha)) \notag \\ & = \sum_{a,b} \left[v(a,b,\mu_\alpha) - v(a,b,\nu_\alpha) \right]\left[e^{\alpha\left(\left(\mu_\alpha(b) - \nu_\alpha(b)\right)^- - \left(\mu_\alpha(a) - \nu_\alpha(a)\right)^-\right)} - 1\right]. \label{eqn:proof_comparison_Potts_sum} \end{align} As in the proof of Theorem \ref{theorem:comparison_ehrenfest_ddim}, we will show that each term in the sum above can be bounded above by $0$ separately. So pick some ordering of the ordered pairs $(i,j)$, $i,j \in \{1,\dots,n\}$ and assume that we have some sequence $\alpha$ such that the $\liminf_{\alpha \rightarrow \infty}$ of the first $k$ terms in equation \eqref{eqn:proof_comparison_Potts_sum} is bounded above by $0$. Suppose that $(i,j)$ is the pair corresponding to the $(k+1)$-th term of the sum in \eqref{eqn:proof_comparison_Potts_sum}. \smallskip Clearly, if $v(i,j,\pi) = 0$ for all $\pi$ then we are done. Therefore, we assume that $v(i,j,\pi) \neq 0$ for all $\pi$ such that $\pi(i) > 0$.
In the case that $\mu_\alpha, \nu_\alpha \rightarrow \pi^*$, where $\pi^*(i) > 0$, we know by Lemma \ref{lemma:control_on_H}, using that $v(i,j,\cdot)$ is bounded away from $0$ on a neighbourhood of $\pi^*$, that \begin{equation*} \sup_\alpha e^{\alpha\left(\left(\mu_\alpha(j) - \nu_\alpha(j)\right)^- - \left(\mu_\alpha(i) - \nu_\alpha(i)\right)^-\right)} - 1 < \infty. \end{equation*} Picking a subsequence $\alpha(n)$ such that the term above converges and using that $\pi \mapsto v(i,j,\pi)$ is uniformly continuous, we see \begin{align*} & \liminf_{\alpha \rightarrow \infty} \left[v(i,j,\mu_\alpha) - v(i,j,\nu_\alpha) \right]\left[e^{\alpha\left(\left(\mu_\alpha(j) - \nu_\alpha(j)\right)^- - \left(\mu_\alpha(i) - \nu_\alpha(i)\right)^-\right)} - 1\right] \\ & \quad = \lim_{n \rightarrow \infty} \left[v(i,j,\mu_{\alpha(n)}) - v(i,j,\nu_{\alpha(n)}) \right] \times \\ & \qquad \qquad \qquad \qquad \qquad \left[e^{{\alpha(n)}\left(\left(\mu_{\alpha(n)}(j) - \nu_{\alpha(n)}(j)\right)^- - \left(\mu_{\alpha(n)}(i) - \nu_{\alpha(n)}(i)\right)^-\right)} - 1\right] \\ & \quad = 0. \end{align*} For the second case, suppose that $\mu_\alpha(i),\nu_\alpha(i) \rightarrow 0$. By Lemma \ref{lemma:control_on_H}, we get \begin{equation} \label{eqn:proof_comparison_jump_sup_bound_on_exp} \sup_\alpha v(i,j,\nu_\alpha) \left[e^{\alpha\left(\left(\mu_\alpha(j) - \nu_\alpha(j)\right)^- - \left(\mu_\alpha(i) - \nu_\alpha(i)\right)^-\right)} - 1\right] < \infty. \end{equation} First of all, if $\sup_\alpha \alpha\left(\left(\mu_\alpha(j) - \nu_\alpha(j)\right)^- - \left(\mu_\alpha(a) - \nu_\alpha(a)\right)^-\right) < \infty$, then the argument given above also takes care of this situation. So suppose that this supremum is infinite. Clearly, the contribution $\left(\mu_\alpha(j) - \nu_\alpha(j)\right)^-$ is non-positive, which implies that $\sup_\alpha \alpha\left(\nu_\alpha(i) - \mu_\alpha(i)\right)^+ = \infty$.
This means that we can assume without loss of generality that \begin{equation} \label{eqn:comparison_jump_assumption_measures} \alpha\left(\nu_\alpha(i) - \mu_\alpha(i)\right) \rightarrow \infty, \qquad \nu_\alpha(i) > \mu_\alpha(i). \end{equation} We rewrite the term $a = i$, $b = j$ in equation \eqref{eqn:proof_comparison_Potts_sum} as \begin{equation*} \left[\frac{v(i,j,\mu_\alpha)}{v(i,j,\nu_\alpha)} - 1 \right]v(i,j,\nu_\alpha) \left[e^{\alpha\left(\left(\mu_\alpha(j) - \nu_\alpha(j)\right)^- - \left(\mu_\alpha(i) - \nu_\alpha(i)\right)^-\right)} - 1\right]. \end{equation*} The right hand side is bounded above by \eqref{eqn:proof_comparison_jump_sup_bound_on_exp} and bounded below by $-1$, so we take a subsequence of $\alpha$, also denoted by $\alpha$, such that the right hand side converges. Also note that for $\alpha$ large enough the right hand side is non-negative. Therefore, it suffices to show that \begin{equation*} \liminf_{\alpha \rightarrow \infty} \frac{v(i,j,\mu_\alpha)}{v(i,j,\nu_\alpha)} \leq 1, \end{equation*} which follows as in the proof of Proposition \ref{proposition:comparison_ehrenfest_1d}. \end{proof} \smallskip \textbf{Acknowledgement} The author thanks Frank Redig and Christian Maes for helpful discussions. Additionally, the author thanks anonymous referees for suggestions that improved the text. The author is supported by The Netherlands Organisation for Scientific Research (NWO), grant number 600.065.130.12N109. \bibliographystyle{plain}
\section{INTRODUCTION} In recent years the X-ray properties of Seyfert galaxies have been extensively studied (for a review see Mushotzky, Done \& Pounds 1993). It was soon realised that Seyfert 2 galaxies exhibit much lower X-ray luminosities, at least in the soft X-ray band below $\sim 3$ keV, than those typical of Seyfert 1 galaxies, a result which can now be explained in terms of the standard AGN unification model (Antonucci \& Miller 1985). According to this paradigm, the nucleus (supermassive black hole, accretion disc and broad line region) has basically the same structure in both types of object but, depending on the circumstances, can be hidden from view by a thick molecular torus (Krolik \& Begelman 1986). Specifically, if the source is observed at a sufficiently high inclination angle, and thus the line of sight intersects the torus, it would be classified as a Seyfert 2, whereas for all other orientations it would be deemed to be a Seyfert 1. As well as obscuring the nucleus in the optical, the molecular torus can strongly suppress soft X-ray emission through the process of photoelectric absorption in cool atomic and molecular gas. In the limit of very high column densities, Thomson scattering will also diminish the more penetrating hard X-ray emission. The X-ray spectra of Seyfert 2 galaxies as observed by {\it Ginga}~, {\it ASCA}~ and recently {\it BeppoSAX}~ have proved to be very complex (e.g. Awaki et al. 1991; Smith \& Done 1996; Turner et al. 1997a; Griffiths et al. 1998). In broad terms most Seyfert 2 X-ray spectra can be well fitted by a power-law continuum (typically $\Gamma \sim 1.8$), plus an Fe-K emission line at 6.4 keV and a reflection component (e.g. Lightman \& White 1988, George \& Fabian 1991). This latter component, which may be produced at the surface of the putative molecular torus, flattens the observed continuum and can dominate the spectrum above $\sim 10$ keV.
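For orientation, the relative importance of Thomson scattering follows from the optical depth $\tau_T \simeq N_H \sigma_T$. A back-of-the-envelope sketch with round illustrative columns (not values fitted anywhere in this paper) shows that electron scattering only matters for $N_H \gtrsim 10^{24}$ $\rm cm^{-2}$:

```python
SIGMA_T = 6.652e-25  # Thomson cross-section [cm^2]

def thomson_depth(n_h):
    """Thomson scattering optical depth of a hydrogen column n_h [cm^-2],
    assuming roughly one electron per hydrogen atom."""
    return n_h * SIGMA_T

tau_thin = thomson_depth(1e23)   # ~0.07: scattering negligible
tau_thick = thomson_depth(2e24)  # ~1.3: Compton-thick, hard X-rays suppressed
```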
In most Seyfert 2s the above emission components are viewed through a large absorbing column density, typically $N_H>10^{23}$ $\rm cm^{-2}$. In some sources additional emission in the form of a soft X-ray excess is observed below $\sim 3$ keV probably as a result of scattering of the intrinsic power-law continuum by a strongly photoionised medium. In order to observe such soft X-ray emission it is clearly a requirement that the scattering medium should extend, in projection on the sky, well beyond the bounds of the obscuration of the molecular torus. In contrast to the recent progress in understanding the X-ray spectral characteristics of Seyfert 2 galaxies, our knowledge of their X-ray variability properties remains very limited. According to the standard unification scenario, the hard X-ray continuum should vary with large amplitude in a similar way to that observed in Seyfert 1 galaxies (Mushotzky, Done \& Pounds 1993). However, in type 2 objects the accretion disk, if present, is probably viewed at an acute angle and also soft X-rays emanating directly from the nucleus are suppressed. Since the remaining reprocessed spectral components, namely the Fe-K line, the reflection signal and the soft excess, are likely to originate from regions of parsec scale-size, it follows that significantly less variability might be expected in Seyfert 2 objects, at least in those parts of the spectrum where the reprocessing makes a substantial contribution to the overall flux. In this paper, we focus on the properties of the Seyfert 2 galaxy Markarian 3 (hereafter Mrk 3) which, at a redshift $z = 0.0137$, is one of the brightest, and consequently most well-studied, members of its class. {\it Ginga}~ observations (Awaki et al. 1990; Awaki et al. 1991; Smith \& Done 1996) first revealed an abnormally flat power-law continuum emerging through a high obscuring column ($N_H\sim 6\times 10^{23}$ $\rm cm^{-2}$).
A strong Fe line was also detected with a high equivalent width ($\approx 1.3$ keV). Mrk 3 has the hardest spectrum of all 16 Seyfert 2s studied by Smith \& Done (1996), significantly harder than the spectrum of Seyfert 1 galaxies, thus challenging the standard unification models if the observed continuum actually corresponds to the underlying power-law in this source. However, several other Seyfert 2 galaxies have been found to possess flat spectra and strong Fe lines (e.g. Reynolds et al. 1994; Maiolino et al. 1998; Iwasawa \& Comastri 1998) with spectra generally indicating very heavy obscuration along the line of sight and also the presence of a strong Compton reflection component. Observations of Mrk 3 with the high spectral resolution afforded by the {\it ASCA}~ SIS have resolved the Fe-K line into at least two components (Iwasawa et al. 1994). The dominant component at 6.4 keV has an equivalent width of 0.9 keV and a FWHM of $\sim 10^4$ $\rm km~s^{-1}$, while the second component at 7 keV has an equivalent width of 0.2 keV and appears to be narrower than the first. The same {\it ASCA}~ observations (Iwasawa et al. 1994) require a spectral index of $\Gamma \approx 1.8$ but unfortunately the limited spectral bandpass (0.6--10 keV) of {\it ASCA}~ provides only weak constraints on the properties of the intrinsic continuum. A re-analysis of the Mrk 3 spectrum using non-simultaneous {\it Ginga}~, {\it ROSAT}~ and {\it ASCA}~ observations (Griffiths et al. 1998), covering a wide spectral band (0.1--30 keV), yielded a near canonical value for the power-law, $\Gamma \approx 1.7$, when either an additional absorption edge at 8 keV (perhaps originating in a warm absorber) or reflection was included in the spectral model. Recent observations with {\it BeppoSAX} (Cappi et al. 1999), which extend the spectral coverage to 150 keV, indeed confirm the presence of a steep ($\Gamma\sim 1.8$) intrinsic power-law. Turner et al.
(1997) have also re-analysed the {\it ASCA}~ data and propose an alternative model in which the intrinsic continuum is viewed through a very large absorbing column ($N_H>10^{24}$ $\rm cm^{-2}$) while the reflection component is unobscured (in contrast to the standard reflection scenario in which the direct power-law and reflection components are observed through the same $N_H$). The {\it BeppoSAX} observations (Cappi et al. 1999) again support the above picture. Such a model would be applicable, for example, if we have a direct, unobscured view of the illuminated (far) inner walls of the torus. Time variability studies can provide additional constraints on the geometry of the nucleus and the surrounding region. Comparison of the {\it Ginga}~, {\it BBXRT} and the {\it ASCA}~ measurements shows a decrease in the 2--10 keV continuum flux by almost a factor of 3 (Iwasawa et al. 1994). During the same period the Fe line flux decreased by a factor of 1.8 (Griffiths et al. 1998), substantially less than the continuum variation. However, Turner et al. (1997) examined the short-term variability using {\it ASCA}~ observations but found no significant ($<$90 per cent confidence) variability on timescales as short as one day. Here, we present the results of an X-ray monitoring campaign carried out on Mrk 3 by the Rossi X-ray Timing Explorer ({\sl RXTE}) mission, spanning a period of $\sim 200$ days. Our objective is to use the variability exhibited in the 4--20 keV band to place constraints on the geometry of the Mrk 3 nucleus and any surrounding gaseous media. The extended energy range of the {\sl RXTE} detectors also provides the opportunity to explore further the spectral composition of the X-ray emission emanating from Mrk 3. \section{THE OBSERVATIONS} Mrk 3 was observed with the {\sl RXTE} between 25 December 1996 and 6 July 1997. In total 12 observations were obtained, with a duration of about 5 ksec each.
We have both Proportional Counter Array (PCA) and High Energy X-ray Timing Experiment (HEXTE) data but here we present the PCA analysis only. The PCA consists of five collimated (1$^{\circ}$ FWHM) Xenon proportional counter units (PCU). The PCUs are sensitive to energies between 2 and 60 keV. However, the effective area drops very rapidly below 4 and above 20 keV. The energy resolution is 18 per cent at 6.6 keV (Glasser, Odell \& Seufert 1994). The collecting area of each PCU is 1300 $\rm cm^2$. We extract PCU light curves and spectra from only the top Xenon layer in order to maximize the signal-to-noise ratio. We use only 3 PCUs (0 to 2); the data from the other two PCUs were discarded as these detectors were turned off on some occasions. The data were selected using standard screening criteria: we exclude data taken at an Earth elevation angle of less than 10$^{\circ}$, at a pointing offset greater than 0.01$^{\circ}$, and during, and for 30 minutes after, satellite passages through the South Atlantic Anomaly (SAA). The resulting total integration time is 59 ksec. In both the spectral and the timing analysis, we use only data between 4 and 20 keV where the effective area is the highest. We use the {\small PCABACKEST v2} routine of {\small FTOOLS v 4.1.1} to generate the background models which take into account both the cosmic and the internal background. The internal background is estimated by matching the conditions of the observations with those in various model files. Most of the internal background is correlated with the L7 rate, the sum of 7 rates from pairs of adjacent anode wires. However, there is a residual background component correlated with recent passages through the SAA. Therefore, the use of a second, activation-related component is also necessary. The level of the residual internal background errors after background subtraction with {\small PCABACKEST} is about 20 per cent of the cosmic X-ray background $1\sigma$ fluctuations in the 2--10 keV band.
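The screening described above reduces to simple boolean cuts on housekeeping quantities. A schematic sketch (the function and argument names are illustrative; this is not the actual {\small FTOOLS} interface):

```python
import numpy as np

def good_time_mask(elevation_deg, offset_deg, time_since_saa_s):
    """Boolean mask implementing the screening criteria in the text:
    Earth elevation above 10 deg, pointing offset below 0.01 deg,
    and more than 30 minutes elapsed since the last SAA passage."""
    return (np.asarray(elevation_deg) > 10.0) \
        & (np.asarray(offset_deg) < 0.01) \
        & (np.asarray(time_since_saa_s) > 30.0 * 60.0)

# three toy time samples: only the first survives all three cuts
mask = good_time_mask([15.0, 5.0, 20.0], [0.005, 0.005, 0.02],
                      [3600.0, 3600.0, 3600.0])
```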
The observation date for each dataset together with the observed background-subtracted count rate in the full 2-60 keV PCA energy band are given in Table 1. \begin{table} \begin{center} \caption{Log of the 12 {\sl RXTE} observations} \begin{tabular}{ccc} Obs.No. & Date & Count Rate \\ & & $\rm ct~s^{-1}$ \\ \hline 1 & 25/12/96 & 2.5 \\ 2 & 17/02/97 & 3.2 \\ 3 & 16/03/97 & 3.5 \\ 4 & 21/03/97 & 3.8 \\ 5 & 31/03/97 & 4.3 \\ 6 & 04/04/97 & 4.0 \\ 7 & 14/04/97 & 5.2 \\ 8 & 15/04/97 & 5.0 \\ 9 & 16/04/97 & 4.3 \\ 10 & 17/04/97 & 5.0 \\ 11 & 30/05/97 & 2.5 \\ 12 & 06/07/97 & 2.3 \\ \hline \end{tabular} \end{center} \end{table} \section{Time Variability} Here, we address the issue of flux and spectral variability in a model independent way, using the background subtracted light curves. We divide the 4 to 20 keV range into 4 bands, namely 4--6 keV and 7--10 keV, where the underlying power-law probably dominates the flux, 6--7 keV where a significant fraction of the flux should originate from the Fe line at 6.4 keV and finally 10--20 keV where the flux is quite possibly dominated by a reflection bump. The light curves for these bands are shown in Fig. \ref{lc} with $\pm1\sigma$ error bars. It is evident that there is variability in all four bands, with a minimum-to-maximum amplitude of at least a factor of two. This result is statistically significant at a high level of confidence; a constant value for the count rate gives $\chi^2$ values of 33, 134, 307 and 850 (11 degrees of freedom) for the 4--6, 6--7, 7--10 and 10--20 keV bands respectively. The rms variation in the respective light curves, after correcting for the variance due to the photon statistics, is 15, 17, 22 and 28 per cent. 
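The noise-corrected rms quoted above is the usual ``excess variance'' estimate: the mean squared measurement error is subtracted from the sample variance before taking the square root. A sketch using the Table 1 count rates (note these are full-band rates rather than the sub-band light curves quoted above, and the $\pm 0.1$ $\rm ct~s^{-1}$ error bars are invented for illustration):

```python
import numpy as np

def fractional_rms(rates, errors):
    """Fractional rms variability after subtracting the variance
    expected from photon statistics (the 'excess variance')."""
    rates = np.asarray(rates, dtype=float)
    errors = np.asarray(errors, dtype=float)
    excess_var = rates.var(ddof=1) - np.mean(errors ** 2)
    if excess_var <= 0.0:
        return 0.0  # variability not detected above the noise level
    return np.sqrt(excess_var) / rates.mean()

rates = [2.5, 3.2, 3.5, 3.8, 4.3, 4.0, 5.2, 5.0, 4.3, 5.0, 2.5, 2.3]
errors = [0.1] * 12  # invented error bars, for illustration only
frac = fractional_rms(rates, errors)  # ~0.27, i.e. ~27 per cent
```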
The apparent increase in the fractional variability amplitude towards higher energy might arise, for example, if a less variable or constant spectral component contributes preferentially to the softer bands; however, spectral analysis does not substantiate this simple picture (see the next section). \begin{figure*} \rotatebox{0}{\includegraphics[height=13.0cm]{lc.ps}} \caption{The background subtracted light curves in four different energy bands (upper panels). Plots of the 10--20 keV/7--10 keV (HR1) and 7--10 keV/4--6 keV (HR2) hardness ratios as a function of time are also shown (lower two panels).} \label{lc} \end{figure*} We investigated the possibility of spectral variability by plotting the ratios of the 10--20 keV/7--10 keV and 7--10 keV/4--6 keV count rates, denoted as HR1 and HR2 respectively, as a function of time (see Fig. \ref{lc}). Both the HR1 and HR2 ratios provide some evidence for such temporal variability; against the constant ratio hypothesis the $\chi^2$ values are respectively 20 and 27 (for 11 degrees of freedom), corresponding to chance probabilities of only 5 and 0.5 per cent. Hence, the data do suggest the presence of subtle spectral variations, but unfortunately do not provide any strong pointers to a preferred spectral model. \section{Spectral Analysis} The PCA data from each observation were binned to give a minimum of 20 counts per channel. All data below 4 and above 20 keV were ignored due to their poor signal-to-noise ratio. By discarding the data below 4 keV we also avoid the complications associated with the soft X-ray excess in this source (e.g. Griffiths et al. 1998). The spectral fitting analysis was carried out using the {\small XSPEC v.10} software package on the basis of ``joint simultaneous fits'' to the 12 {\sl RXTE} observations. \begin{figure*} \rotatebox{270}{\includegraphics[height=12.5cm]{typical.ps}} \caption{The {\sl RXTE} PCA spectrum of Mrk 3 from observation 1.
The solid line corresponds to the best fitting version of model A after folding through the instrument response. The lower panel shows the corresponding residuals to the fit.} \label{typical} \end{figure*} Following previous {\it Ginga}~ and {\it ASCA}~ results, we first employ a very simple spectral model consisting of a power-law continuum, with photon spectral index $\Gamma$, modified by absorption in a column density, $N_H$, of cool neutral material. A Gaussian line was also included to account for Fe-K emission. For simplicity the line energy and intrinsic width were fixed at the values obtained from {\it ASCA}~ by Griffiths et al. (1998), {\it i.e.} $E_{line} = 6.38$ keV and $\sigma_{line}=0.1$ keV (since consistent values were obtained in free fits of these parameters). The $N_H$, the photon index and the normalization of the Fe line are free parameters, but each is tied to a common value across the 12 observations. However, the normalizations of the power law are allowed to vary freely. The results of fitting this spectral model are presented in Table 2 as model A, where the errors correspond to the 90 per cent confidence level for one interesting parameter. The derived photon index is very flat ($\Gamma\approx 1.1$), consistent with the original Smith \& Done (1996) analysis of the {\it Ginga}~ data. The resulting $\chi^2$ of 365 for 489 degrees of freedom (dof) implies an excellent fit (although the errors arising from the background model may have been somewhat overestimated). Fig. \ref{typical} shows a typical PCA count rate spectrum from a single observation, together with the best-fitting model prediction and the resulting residuals. As can be seen from Fig. \ref{typical} we clearly observe signal up to 20 keV, even in the case of the lowest flux observations. The derived Fe-K line flux was $(4.7\pm0.84)\times 10^{-5} \rm~photon~s^{-1}~cm^{-2}$ implying an equivalent width varying from 1.4 keV (observation 12) to 0.4 keV (observation 2).
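The strong variation in equivalent width despite a single tied line flux is just the definition $EW = F_{line}/F_{cont}(E_{line})$ at work: the line flux stays fixed while the continuum underneath it rises and falls. A hedged sketch (only the line flux is taken from the fit; the continuum levels at 6.4 keV are invented to bracket the quoted range):

```python
LINE_FLUX = 4.7e-5  # Fe-K photon flux from the joint fit [photon/s/cm^2]

def equivalent_width(line_flux, continuum_at_line):
    """EW = line photon flux over continuum photon flux density at the
    line energy; in keV when the continuum is in photon/s/cm^2/keV."""
    return line_flux / continuum_at_line

# invented continuum levels for a faint and a bright epoch
ew_faint = equivalent_width(LINE_FLUX, 3.4e-5)   # ~1.4 keV
ew_bright = equivalent_width(LINE_FLUX, 1.2e-4)  # ~0.4 keV
```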
The flux of the Fe line is entirely consistent with that obtained by Iwasawa et al. (1994). When the normalisation of the Fe-K component was ``untied'' across the set of observations, the resulting $\chi^2$ reduced to 357 for 478 dof, but this is not a statistically significant improvement (according to the F-test for 11 additional parameters). We conclude that there is no strong evidence for variability in the Fe-K line flux in the {\sl RXTE} data. In fact this conclusion holds for all spectral models (models A-D) considered in this section, and so in each case we have kept the Fe-K normalisation tied to a single value for all the datasets. \begin{table*} \begin{center} \caption{Results from the spectral fitting of the 12 {\it RXTE} datasets} \begin{tabular}{ccccccc} Model & $N_H$ & $\Gamma$ & R & $E_{\small edge}$ & $\tau$ & $\chi^2$/dof \\ \hline {\small A} & $63^{+4}_{-4}$ & $1.06^{+0.04}_{-0.06}$ & - & - & - & 365/489 \\ {\small B} & $75^{+3}_{-3}$ & 1.8 & $0.1^{+1.0}_{-0.1}-3.4^{+0.5}_{-0.5}$ & - & - & 337/478 \\ {\small C} & $110^{+6}_{-6}$ & $1.85^{+0.09}_{-0.09}$ & $0.7^{+0.14}_{-0.14}-2.5^{+0.5}_{-0.5}$ & - & - & 322/489 \\ {\small D} & $74^{+3}_{-3}$ & $1.3^{+0.14}_{-0.14}$ & - & 8.1 & 0.24 & 376/489 \\ \hline \end{tabular} \end{center} \end{table*} The next step was to investigate whether there is any evidence for changes in either the photon spectral index or the column density. The result of spectral fitting with each of these parameters in turn ``untied'' was a $\chi^2$ of 337 and 335 respectively for 478 dof. In both cases the change in $\chi^2$ (compared to model A) is statistically significant at over 99 per cent confidence. The range of the apparent variation in the spectral index is $\Gamma = 0.94^{+0.16}_{-0.16}$ to $1.54^{+0.35}_{-0.29}$ while the best fit column density is $N_H=66^{+5}_{-9}\times 10^{22}$ $\rm cm^{-2}$. 
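The F-test verdicts quoted in this section can be reproduced from the $\chi^2$ values alone: the statistic compares the $\chi^2$ improvement per extra free parameter with the reduced $\chi^2$ of the richer model, and a value below unity can never be significant. A sketch for the untied Fe-K normalisations (a standard nested-model F-test, computed by hand to avoid library dependencies):

```python
def f_statistic(chi2_old, chi2_new, dof_old, dof_new):
    """Nested-model F statistic: chi^2 improvement per extra free
    parameter, relative to the reduced chi^2 of the richer model."""
    extra = dof_old - dof_new
    return ((chi2_old - chi2_new) / extra) / (chi2_new / dof_new)

# untying the 12 Fe-K normalisations: chi^2 goes from 365/489 to 357/478
fstat = f_statistic(365.0, 357.0, 489, 478)  # ~0.97 < 1: not significant
```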
Similarly, when the $N_H$ is untied we obtain $N_H= 47^{+6}_{-6}$ to $ 69^{+5}_{-5} \times 10^{22} \rm~cm^{-2}$ while the best fit photon index is $\Gamma= 1.14^{+0.11}_{-0.11}$. This confirms the evidence from the hardness ratios for underlying spectral variability. Note that there is clearly a dependence of the photon index on the column density in the sense that the data cannot easily discriminate between a flat photon index and a high column density. As an additional test, we checked for possible spectral variations correlated with the X-ray brightness of the source. For this purpose we separated the observations into high and low states on the basis of a flux threshold of $3\times 10^{-11}$ $\rm erg~cm^{-2}~s^{-1}$~in the 4--20 keV band. Observations 4--10 were thereby classified as high state, with observations 1--3 and 11--12 comprising the low-state. The results (based on the model A prescription) are summarised in Fig. \ref{contours} where we plot the joint 68, 90 and 99 per cent confidence contours in the $\Gamma$ versus $N_H$ plane for the high-flux and the low-flux states. We see that although the best fit centroids are offset, the 90 per cent confidence contours show considerable overlap, suggesting that we cannot be confident that either the continuum slope or column density shows any consistent change with increasing flux. \begin{figure} \rotatebox{270}{\includegraphics[height=10.0cm]{contours.ps}} \caption{The $\Gamma-N_H$ contours for the high-flux (rightmost) and the low-flux data (leftmost); in each case the three levels correspond to 68, 90 and 99 per cent confidence contours.} \label{contours} \end{figure} Although the model A prescription defined above gives an acceptable fit in terms of the $\chi^{2}$ statistic, there is evidence from both {\it ASCA}~ and {\it Ginga}~ observations (Griffiths et al. 1998) and recently {\it BeppoSAX} observations (Cappi et al.
1999) that the anomalously flat power-law slope derived for Mrk 3 is due to the presence of Compton reflection. The next step in the current analysis was therefore to include a reflection component in the spectral modelling. Specifically we use the {\small PEXRAV} model (Magdziarz \& Zdziarski 1995) in {\small XSPEC}. We initially assume that both the reflection component and the power-law are absorbed by the cold gas column density according to the standard reflection prescription. The strength of the reflection component is governed by the parameter $R$, representing the strength of the reflected signal relative to the level of the incident power-law continuum. Following the results of Griffiths et al. (1998), we fix the inclination angle for the disk at $i=60^{\circ}$; this large inclination angle implies that a large part of the reflection component originates in the torus. We also set the spectral index of the power-law continuum to $\Gamma=1.8$ following the results of Griffiths et al. (1998) and Cappi et al. (1999). Note that without this constraint the fit reverts to a very flat power-law slope and negligible reflection. Our initial approach was to tie the {\it effective normalisation} of the reflection signal to a single value across the set of observations. The resulting best-fitting model gave a $\chi^{2}$ of 395 for 489 dof and required values of $R$ varying from 0.7$\pm$0.2 to 2.1$\pm$0.9. In this model, even though the reflection component has a fixed level, $R$ changes since the normalisation of the intrinsic power-law varies from observation to observation. However, when we allow the normalisation of the reflection component to vary freely, we obtain a $\Delta\chi^2\approx 58$ which is highly statistically significant. Details of this fit are summarised in Table 2 (Model B) and Fig. \ref{refl} shows the derived temporal variation in the normalisation of both the power-law and reflection components.
There is clearly a suggestion that the latter responds to variations in the former with any lag between the two being $\lesssim 1$ month. Unfortunately there are insufficient data to set a more precise constraint. \begin{figure} \rotatebox{0}{\includegraphics[height=8.0cm]{refl.ps}} \caption{The normalization of the power-law (top) and the reflection components (bottom) in the case of the standard reflection model (model B). The errors correspond to a 90 per cent confidence level.} \label{refl} \end{figure} As noted earlier, an alternative reflection model for Mrk 3 was proposed by Turner et al. (1997) in which the intrinsic power-law is seen through an increased column but the reflection is largely unobscured. The application of such a model to the {\sl RXTE} data gives the results presented in Table 2 (model C) when the reflection normalisations are tied to a single value. The best-fitting power-law slope is $\Gamma=1.85^{+0.09}_{-0.09}$, consistent with the canonical AGN power-law, and $N_H =1.1 \times 10^{24}$ $\rm cm^{-2}$, {\it i.e.} approximately 50\% higher than the value obtained earlier (as model B). Note that at such high column densities the effects of Thomson scattering become important. In this regime the derived column density from fitting simple absorption models will overestimate the true column (Leahy et al. 1989). The model above gives the lowest reduced $\chi^2$ of all the models we consider ($\chi^2= 322/489$). Comparison with the standard reflection model (in the case where the reflection component has a constant normalisation) using the F-test suggests that the Turner et al. model represents a better fit to the data at the 90 per cent confidence level. Interestingly, in this second reflection scenario we obtain only a small improvement in the $\chi^2$ ($\Delta \chi^2\sim 6$) when the normalisation of the reflection component is allowed to vary across the observations. Thus in this description the reflection component essentially remains constant.
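The Leahy et al. point admits a one-line estimate: a fit that attributes the total attenuation to photoelectric absorption alone returns $N_H^{fit} = N_H (\sigma_{ph} + \sigma_T)/\sigma_{ph}$, an overestimate by the factor $1 + \sigma_T/\sigma_{ph}$. A schematic sketch (the photoelectric cross-section per hydrogen atom is a placeholder value, not taken from any real absorption model):

```python
SIGMA_T = 6.652e-25  # Thomson cross-section [cm^2]

def apparent_column(true_nh, sigma_ph):
    """Column inferred by an absorption-only fit when the true
    attenuation is exp(-N_H * (sigma_ph + sigma_T)):
    N_fit * sigma_ph = N_true * (sigma_ph + sigma_T)."""
    return true_nh * (sigma_ph + SIGMA_T) / sigma_ph

sigma_ph = 1.5e-24          # placeholder photoelectric cross-section [cm^2]
nh_fit = apparent_column(1.0e24, sigma_ph)  # ~1.4e24: ~40 per cent too high
```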
Finally, we note that Griffiths et al. (1998) suggest an alternative explanation for the abnormally flat spectrum of Mrk 3. These authors include an additional absorption edge near 8 keV in their spectral model which serves to steepen the slope determined for the underlying power-law component. The edge feature could originate in a putative warm absorber (e.g. Reynolds 1997; George et al. 1998) which in the case of Mrk 3 may also produce the scattered soft excess flux observed below 3 keV (assuming an extensive distribution of this hot medium in the nuclear region of the galaxy). We have therefore also considered the effect of including such a feature in the spectral fitting of the {\it RXTE} datasets. The results are given in Table 2 (model D), where we have fixed the edge energy and optical depth at the values obtained by Griffiths et al. ({\it i.e.} $E_{edge} = 8.1$ keV, $\tau = 0.24$). In this case the power-law remains flat ($\Gamma \approx 1.3$) and the best fit is actually worse than that obtained in the absorbed power-law model (model A). We conclude that the absorption edge alone is not sufficient to explain the apparently anomalously hard spectrum measured for Mrk 3 by {\sl RXTE}. \section{DISCUSSION} The {\sl RXTE} observations presented in this paper confirm the finding of previous studies, namely that the X-ray continuum emanating from Mrk 3 is exceptionally hard, at least within the 4--20 keV bandpass. The preferred interpretation of this flat spectrum is that this source exhibits particularly strong Compton reflection and we find that two variants on this reflection theme are broadly consistent with the {\sl RXTE} data both in terms of the average spectrum and the observation-to-observation spectral variability. In the standard reflection description (e.g. Griffiths et al.
1998) the intrinsic power-law continuum, the reflection component and the Fe-K emission are all affected by photoelectric absorption in a large column density of cool absorbing gas (with $N_H \sim 7\times 10^{23}$ $\rm cm^{-2}$). In this model both the continuum and the reflected component vary together, with any lag in the response of the latter constrained to $\lesssim 1$ month. In contrast, the {\sl RXTE} data show no evidence for variability in the Fe line flux. It is quite plausible that the (bulk of the) Fe-K flux and the reflection signal originate in different regions. For example, a significant fraction of the Fe-K line flux might originate in a very extended region which is optically thin to Fe-K photons, whereas the reflection could arise in a partially covering screen of optically thick clouds situated within a light month of the nucleus. In the alternative version of the reflection model (e.g. Turner et al. 1997) our line of sight to the reflecting material is largely unobscured, although the nucleus itself is covered by a very substantial screen of absorption ($N_H\sim 10^{24}$ $\rm cm^{-2}$). This model actually gave the best fit to the average spectrum and a fairly canonical value for the intrinsic power-law slope. Also, the only temporal variation required is in the level of the underlying continuum. This leads to arguably the most plausible explanation of the anomalously hard spectrum observed in Mrk 3, namely that we see strong Compton reflection from the far illuminated wall of a putative molecular torus in its nucleus. Presumably our line of sight to this region passes over the near side of the torus without intercepting anything like the column density that lies directly in front of the central nuclear source. Future monitoring observations by satellites such as XMM will greatly improve the photon statistics as well as the spectral resolution and thus are expected to shed further light on the detailed geometry of the central region of Mrk 3.
Such observations will in fact provide a critical and detailed test of current unification schemes. \section{Acknowledgements} We thank the anonymous referee for many useful comments and suggestions. RGG acknowledges support from PPARC in the form of a research studentship. \section*{References} Antonucci, R.R.J., Miller, J.S., 1985, ApJ, 297, 621 \\ Awaki, H., Koyama, K., Kunieda, H., Tawara, Y., 1990, Nature, 346, 544 \\ Awaki, H., Koyama, K., Inoue, H., Halpern, J.P., 1991, PASJ, 43, 195 \\ Cappi, M., et al., 1999, A\&A, in press \\ George, I.M., Fabian, A.C., 1991, MNRAS, 249, 352 \\ George, I.M., Turner, T.J., Netzer, H., Nandra, K., Mushotzky, R.F., Yaqoob, T., 1998, ApJS, 114, 73 \\ Glasser, C.A., Odell, C.E., Seufert, S.E., 1994, IEEE Trans. Nucl. Sci., 41, 4 \\ Griffiths, R.E., Warwick, R.S., Georgantopoulos, I., Done, C., Smith, D.A., 1998, MNRAS, 298, 1159 \\ Iwasawa, K., Yaqoob, T., Awaki, H., Ogasaka, Y., 1994, PASJ, 46, L167 \\ Iwasawa, K., Comastri, A., 1998, MNRAS, 297, 1219 \\ Krolik, J.H., Begelman, M.C., 1986, ApJ, 308, 55 \\ Leahy, D.A., Matsuoka, M., Kawai, N., Makino, F., 1989, MNRAS, 236, 603 \\ Lee, J.C., Fabian, A.C., Reynolds, C.S., Iwasawa, K., Brandt, W.N., 1998, MNRAS, 300, 583 \\ Lightman, A.P., White, T.R., 1988, ApJ, 335, 57 \\ Magdziarz, P., Zdziarski, A., 1995, MNRAS, 273, 837 \\ Maiolino, R., Salvati, M., Bassani, L., Dadina, M., Della Ceca, R., Matt, G., Risaliti, G., Zamorani, G., 1998, A\&A, 338, 781 \\ Mushotzky, R.F., Done, C., Pounds, K.A., 1993, ARA\&A, 31, 717 \\ Nandra, K., Pounds, K.A., 1994, MNRAS, 268, 405 \\ Nandra, K., Mushotzky, R.F., Yaqoob, T., George, I.M., Turner, T.J., 1997, MNRAS, 284, L10 \\ Reynolds, C.S., 1997, MNRAS, 286, 513 \\ Reynolds, C.S., Fabian, A.C., Makishima, K., Fukazawa, Y., Tamura, T., 1994, MNRAS, 268, L55 \\ Smith, D.A., Done, C., 1996, MNRAS, 280, 355 \\ Turner, T.J., George, I.M., Nandra, K., Mushotzky, R.F., 1997, ApJS, 113, 23 \\ Turner, T.J., George, I.M., Nandra, K., Mushotzky, R.F.,
1997, ApJ, 488, 164 \\

We ignore all data above 20 keV as the background models may be unreliable above this energy. We also ignore all data below 4 keV due to possibly inadequate calibration at these energies, associated with a residual Xe feature. This spectral variability trend may provide evidence against the standard reflection model. According to this model the reflection component is probably produced in the obscuring torus, which has a size of a few pc. The flux of the reflection component should then stay relatively constant in response to changes in the power-law. In this case the harder energies should exhibit less variability compared to the 4-6 keV band, which is dominated by the power-law. However, the above tentative result depends largely on the highest point in the plot (observation 12). As discussed later in the spectral section, this observation may suffer from poor background subtraction at energies below 5 keV. After the rejection of this observation we obtain a reduced $\chi^2=17/10$, which rejects the null hypothesis of a constant flux ratio at only the $<2\sigma$ level. We conclude that there is no strong evidence for spectral variability within the limited statistics of our observations. We first fit the 12 individual observations. These provide snapshots of the Mrk 3 spectrum at different epochs. The results are presented in Table 1. Column (1) gives the observation dates; column (2) the best-fitting power-law slope together with the 90 per cent confidence errors; column (3) the flux in the 4-20 keV band in units of $\rm erg~cm^{-2}~s^{-1}$; column (4) the reduced $\chi^2$ value and column (5) the probability that the model is accepted. We obtain acceptable fits in all 12 observations. We note that observation 12 prefers a low-column ($N_H\sim 3 \times 10^{22}$ $\rm cm^{-2}$), flat ($\Gamma \sim 0.3$) spectrum with $\chi_{\nu}^2=20.8/38$. This may be due to inadequate background modelling at low energies ($<$5 keV).
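As an aside, the significance quoted above for the flux-ratio constancy test ($\chi^2 = 17$ for 10 degrees of freedom) can be reproduced directly; for an even number of degrees of freedom the $\chi^2$ survival function reduces to a finite sum, so no statistics library is needed. The following Python sketch is illustrative only and is not part of the original analysis:

```python
import math

def chi2_sf(x, k):
    """Survival probability P(chi^2 > x) for an even number k of d.o.f."""
    half = x / 2.0
    return math.exp(-half) * sum(half**i / math.factorial(i)
                                 for i in range(k // 2))

# chi^2 = 17 with 10 d.o.f., as quoted for the constant flux-ratio test
p = chi2_sf(17.0, 10)
print(round(p, 3))  # 0.074 -- i.e. the constant-ratio hypothesis is
                    # rejected at only the ~1.8 sigma (two-sided) level
```

The resulting probability of $\simeq 0.074$ corresponds to roughly $1.8\sigma$ for a Gaussian two-sided test, consistent with the $<2\sigma$ statement in the text.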
Apart from observation 12, the constancy of the spectral slope is remarkable. We present a couple of typical examples of the spectra obtained in Fig.~\ref{typical}, where we give the folded spectra for observations 1 and 12. Griffiths et al. (1998) obtain $\Gamma=1.37^{+0.15}_{-0.11}$ from the Ginga data alone. They attribute the origin of this discrepancy to the fact that they used the 4.5-27 keV energy band (both the top and mid layer of the {\it Ginga}~ Large Area Counter) while Smith \& Done (1996) used a narrower energy range (2-18 keV). \begin{figure*} \rotatebox{0}{\includegraphics[height=10.0cm]{gauss.ps}} \caption{The variability of the power-law flux (filled circles) and Fe line flux (open circles) in the case of (top to bottom): power-law/Gaussian line model; power-law/Gaussian line/reflection model where the reflection is tied to the line normalisation (see text); power-law/Gaussian/reflection model where both the Gaussian line and the reflection normalisation are allowed to vary freely. All errors correspond to the 90 per cent confidence level.} \label{gauss} \end{figure*}
\section{Introduction} Countless students in introductory physics learn that the ``exchange of virtual particles'' is responsible for the fundamental forces of nature. Several popular introductory textbooks contain diagrams which sketch how classical particle exchange could plausibly explain the qualitative nature of repulsive forces.\cite{BauerWestfall,Mazur} Furthermore, some texts even attempt to construct analogies for how attractive forces could arise from complicated exchanges of classical objects.\cite{Giancoli,YoungFreedman} In this paper, we wish to address the gaping hole in the literature regarding how such pictures may be quantitatively useful in understanding the connection between fundamental interactions and momentum transfer through mediating particles. Just as physical theories are only useful within certain domains of validity, analogies are only helpful until their meanings are stretched to a point at which the usefulness breaks down. To properly analyze fundamental interactions, the methods of quantum field theory provide the tools necessary for obtaining quantitatively accurate results. Ref.~\onlinecite{Zee} provides a particularly illuminating discussion of how gravitational, electrostatic and nuclear potentials arise as either attractive or repulsive interactions by using the path integral formulation of quantum field theory. Additionally, by casually invoking the energy-time version of the Heisenberg uncertainty principle, one may obtain surprisingly accurate information regarding the force laws resulting from electromagnetic and nuclear interactions.\cite{Harney} The focus of the present work is not to force an idealized classical-mechanics analysis to describe the nature of fundamental interactions, but rather to explore how effective forces between spatially separated particles {\it can} arise within classical dynamics. A student needs only very basic tools to explore the implications of a particular particle exchange model.
With easily acquired numerical results, an advanced student may apply the mathematical analysis required to obtain both exact and asymptotic results. The goal of the present work is to present a quantitative approach, accessible at both introductory and advanced levels, which thoroughly analyzes a particular model for interactions based on classical physics. In particular, we consider a system of two massive particles, each of mass $M$, which interact with each other via the exchange of two mediating particles, each of mass $m \ll M$, which are taken to always move at speed $c\,$ and interact with the heavier particles through inelastic collisions, always emerging with speed $c\,$ relative to a stationary lab (or ``ground'') frame. Though this model is admittedly artificial compared to the quantum field theories describing the known fundamental interactions, the reasoning required for a careful, quantitative analysis is quite useful in understanding the realistic interactions that do occur in nature through mediating quantum fields.\cite{Zee} A notable shortcoming of the classical particle-exchange analogy is its inability to describe attractive forces.\cite{GriffithsPart} While it is possible to invoke quantum fluctuations in energy to explain attractive nuclear forces in a qualitative manner,\cite{Dunne} we emphasize that attractive interactions emerge naturally from classical scalar field theory.\cite{Rubakov} Such a rigorous discussion of the origin of attractive forces implicitly requires a discussion of quantum theory, as these interactions rely on the wave-like nature of matter. Consequently, such treatment is beyond the scope of the present work, as we wish to present a model which may be thoroughly analyzed classically.
This paper is arranged as follows: in Sec.~\ref{sec:model} we present a model for classical particle exchange and explore some basic consequences through simulations and physical reasoning, both of which are appropriate for students in introductory physics courses. Sec.~\ref{sec:analytic} contains a thorough analysis of the model employing advanced physical reasoning and special functions to verify the speculative results obtained through careful estimation in Sec.~\ref{sec:model}. Finally, we summarize the results in Sec.~\ref{sec:summary}. \section{Model}\label{sec:model} We wish to investigate the classical picture of particle exchange as a mechanism for interaction between two massive particles. We imagine two particles each of mass $M\,$ exchanging small particles, each of mass $m\ll M$ as shown in Fig.~\ref{fig:collisionfig}. The analogy is often made to a pair of ice skaters (or rollerbladers) tossing a ball back and forth.\cite{BauerWestfall,Mazur,Giancoli,YoungFreedman} Each time one skater catches the ball and throws it back, a small amount of momentum is imparted to the skater, resulting in an effective repulsive force between the skaters which is mediated by the ball being tossed. We construct a quantitative model for this type of interaction by taking the smaller particle's velocity to be a constant, given speed $c$. We choose the label $c\,$ with no reference whatsoever to the speed of light, though we will see that our $c\,$ plays a role in our model which is rather similar to that of the actual speed of light in electromagnetism, allowing us to explore a sort of ``non-relativistic'' limit of the model for speeds $v \ll c$. In order to keep the system's center of mass at rest, we shall consider a symmetric setup in which two small particles are exchanged. 
When the smaller, mediating particles approach each other we assume that they pass through one another without interaction or collide elastically.\cite{note1} \begin{figure}[h] \begin{center} \includegraphics[totalheight=5.5cm]{collisionfig.pdf} \caption{Two particles of mass $M\,$ experience a repulsive ``force'' which is mediated by the exchange of a smaller particle of mass $m\ll M$.} \label{fig:collisionfig} \end{center} \end{figure} Since the mediating particles always move at speed $c$, the collisions involving the massive\cite{note1a} particles with the mediating particles would not result in momentum transfer if the collisions were elastic. To obtain nontrivial momentum transfer, we must consider inelastic collisions which result in an incremental increase in the system's kinetic energy after each collision. We shall explore whether the work required for this change in kinetic energy may be associated with an effective potential energy for the system. Taking the large, right-moving particle to be moving at speed $v$, momentum conservation applied to a single collision gives \begin{equation} Mv_{n} + mc = Mv_{n+1} - mc, \label{Eq:momentumconservation} \end{equation} or $\delta v \equiv v_{n+1}-v_{n} = 2\frac{mc}{M}$. With repeated collisions of this form, the two massive particles will accelerate away from their common center of mass in a manner qualitatively similar to the motion experienced by two like charges placed near each other and released. We employ two approaches to investigate the quantitative nature of this effective force law. First, we simulate the system as described, obtaining numerically an effective force law which decreases as $r^{-1}\,$ for small velocities $v\ll c$, where $r\,$ is the instantaneous separation between the two massive particles. 
Second, the discrete sequence of collisions leads to a recursion relation which allows us to obtain a closed-form expression for $r_{n}$, the separation distance immediately preceding the $n^{\mbox{\scriptsize th}}\,$ collision. While exact, this closed-form expression for $r_{n}\,$ is less than transparent regarding the physics of the system. In the following section, we apply a continuum approximation to uncover the effective dynamics analytically in various limits. \subsection{Full simulation} The full simulation consists of integrating the Newtonian equations of motion for free particles moving at constant speeds and monitoring for a ``collision'' at which point each massive particle is given a boost in speed $\delta v = 2mc/M\,$ and the mediating particles are reflected with equal momenta in opposite directions. Letting $x^{(1)}\,$ ($x^{(2)}$) denote the position of the right-moving (left-moving) particle and $v^{(1)}\,$ ($v^{(2)}$) its velocity, we consider the following initial conditions: \begin{eqnarray} x^{(1)}(0) & = & -x^{(2)}(0) = \frac{r_{0}}{2},\\ v^{(1)}(0) & = & v^{(2)}(0) = 0. \end{eqnarray} The mediating particles are initially located at the origin and begin moving in opposite directions toward the massive particles at $t = 0\,$ with speed $c$. Letting the positions of the mediating particles be given by $X^{(i)}\,$ for $i=1,2$, it is an instructive exercise to numerically integrate the equations of motion \begin{eqnarray} \frac{dx^{(i)}}{dt} & = & v^{(i)},\\ \frac{dX^{(i)}}{dt} & = & V^{(i)}, \end{eqnarray} with $V^{(1)} = +c\,$ and $V^{(2)} = -c\,$ at $t=0$. To monitor for collisions, at each time step $\Delta t\,$ we check for the following condition: \begin{equation} \left| x^{(i)} - X^{(j)} \right| < \epsilon, \end{equation} indicating that the mediating particle nearest the $i^{\mbox{\scriptsize th}}\,$ particle has come within a small distance $\epsilon\,$ of the massive particle's location.
When this occurs, we make the following adjustment to the equations of motion: \begin{eqnarray} v^{(i)} & \rightarrow & v^{(i)} + \frac{2mc}{M}\mbox{sign}\left(V^{(j)}\right),\\ V^{(j)} & \rightarrow & - V^{(j)}, \end{eqnarray} indicating that a collision has occurred, resulting in momentum transfer. Results are generally insensitive to the time-step size, provided $\epsilon \lesssim c\Delta t$. Fig.~\ref{fig:loglog} depicts the numerically computed average acceleration as a function of separation distance for $m = 0.005M$. For the computation of acceleration, we only use the separation distance and corresponding time just after collision events, since each massive particle's acceleration is formally zero between collisions. Note that for position measurements which are taken at unequal time increments, we require the following discrete representation\cite{Abramowitz} of the second temporal derivative \begin{equation} \left.\frac{d^{2}r}{dt^{2}}\right|_{r=r_{n}} \approx \frac{\frac{r_{n+1}-r_{n}}{t_{n+1}-t_{n}}-\frac{r_{n}-r_{n-1}}{t_{n}-t_{n-1}} }{t_{n+1}-t_{n-1}}. \end{equation} \begin{figure}[h] \begin{center} \includegraphics[totalheight=7.2cm]{loglog.pdf} \caption{Numerically computed acceleration plotted against separation distance for the right-moving mass with logarithmic scales on axes. Also shown is a linear regression for the logarithmic data.} \label{fig:loglog} \end{center} \end{figure} A strong linear trend on a log-log plot demonstrates the power-law nature of the force law, \begin{equation} \frac{d^{2}x^{(1)}}{dt^{2}} \propto r^{b_{1}}, \end{equation} with $b_{1} \approx -1$. This result is consistent with a rough estimation of the rate of momentum transfer for $v \ll c$. Each collision is associated with transfer of momentum \begin{equation} \delta p = 2mc. \end{equation} For $v \ll c$, the massive particles do not move appreciably during one collision cycle. Let $r\,$ denote the instantaneous separation distance between the massive particles.
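Before carrying out the rough analytic estimate of the force law, we note that the time-stepping scheme just described can be implemented quite compactly. The Python sketch below (parameter and step-size values are illustrative choices, not prescriptions from the text) advances all four particles, applies the collision rule, estimates the acceleration of the separation $r(t)$ at collision events with the unequal-step second difference above, and fits the log-log slope $b_{1}$:

```python
import numpy as np

M, m, c = 1.0, 0.005, 1.0     # illustrative values with m << M
dv = 2 * m * c / M            # velocity boost per collision
dt = 1e-3
eps = c * dt                  # collision-detection radius

x = np.array([0.5, -0.5])     # heavy particles: right-, left-moving
v = np.array([0.0, 0.0])
X = np.array([0.0, 0.0])      # mediating particles start at the origin
V = np.array([c, -c])

events = []                   # (time, separation) at hits on the right mass
t = 0.0
while len(events) < 16:
    t += dt
    x += v * dt
    X += V * dt
    for i in range(2):
        for j in range(2):
            # a right-moving mediator is always heading for the right-hand
            # mass (and vice versa), so a freshly reflected mediator is
            # never double-counted inside the detection window
            if abs(x[i] - X[j]) < eps and (V[j] > 0) == (i == 0):
                v[i] += dv * np.sign(V[j])
                V[j] = -V[j]
                if i == 0:
                    events.append((t, x[0] - x[1]))

ts, rs = np.array(events).T
# unequal-step second difference of the separation r(t)
acc = ((rs[2:] - rs[1:-1]) / (ts[2:] - ts[1:-1])
       - (rs[1:-1] - rs[:-2]) / (ts[1:-1] - ts[:-2])) / (ts[2:] - ts[:-2])
b1 = np.polyfit(np.log(rs[1:-1]), np.log(acc), 1)[0]
print(b1)                     # slope close to -1 in the low-speed regime
```

With these parameters the speeds remain well below $c$, and the fitted slope comes out near $-1$, in line with Fig.~\ref{fig:loglog}.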
Beginning with the mediating particles at the origin, one cycle requires each particle to cover a distance $\frac{r}{2}\,$ to collide with the massive particles and then another distance of $\frac{r}{2}\,$ to return to the origin. Thus, a single collision cycle associated with a momentum transfer $\delta p\,$ requires a time \begin{equation} \delta t = \frac{r}{c}. \end{equation} The average force experienced by each massive particle is then \begin{equation} F_{\mbox{\scriptsize ave}} \simeq \frac{\delta p}{\delta t} = \frac{2mc^{2}}{r}.\label{eq:force} \end{equation} The validity of this crude estimate will be examined more carefully in the next section, but for now it serves to make the results in Fig.~\ref{fig:loglog} appear rather plausible. One might worry about the implications of an inverse-linear force law, since this could potentially be associated with a logarithmic potential energy function, just as in the case of two uniformly charged wires of infinite length.\cite{GriffithsEM} In the case of point particles, the potential does not asymptotically approach a constant value at large distances and should result in ever-increasing speeds as the massive particles move farther away from each other. This does not appear consistent with the model, as the mechanism for momentum transfer does not allow the mediating particles to travel faster than speed $c$, so the speeds of the massive particles should be bounded by this limit. The resolution of this apparent paradox will be addressed below, where we must refine the simulation method in order to access much longer times. \subsection{Calculation of collision times} The results so far suggest a disconnect between the low-energy behavior of the model and the high-energy ``speed limit'' of $c$, which should be enforced by the mediating particles. To obtain some resolution, we must explore extremely large timescales, thus allowing the massive particles to approach high speeds, $v^{(1,2)}\sim c$.
Because the time between successive collisions grows at an accelerated rate as the massive particles spread apart and speed up, the basic scheme outlined above becomes impractical. In fact, most of the computation is entirely unnecessary since all particles move with constant velocities until a collision occurs. Starting from one collision event, the time for the next collision may be computed using the instantaneous velocities of all particles, and this process may be repeated. Though the time between collisions grows rapidly, the computation time of this scheme grows linearly with the number of collisions, not with the elapsed time as before. To proceed, let us consider a single collision event shown in Fig.~\ref{fig:collisionfig}. With both mediating particles at the origin and instantaneous separation $r_{n}\,$ between the outwardly moving massive particles, the next collision will occur after the mediating particles have reached the massive particles, requiring a time \begin{equation} \delta t_{n} = \frac{r_{n}/2}{c-v_{n}},\label{eq:dt} \end{equation} corresponding to traveling a distance of $\frac{r_{n}}{2}\,$ with speed $c-v_{n}\,$ relative to the outwardly moving, massive particles. After time $\delta t_{n}\,$ has elapsed, collisions occur resulting in the mediating particles reversing directions and \begin{equation} v_{n}\rightarrow v_{n+1} \equiv v_{n} + \frac{2mc}{M}.\label{eq:vn} \end{equation} The cycle completes when the mediating particles return to the origin. By symmetry, this also requires time $\delta t_{n}$, so the entire elapsed time for a complete cycle is $2\delta t_{n}$, or \begin{equation} t_{n+1} = t_{n} + \frac{r_{n}}{c-v_{n}}.\label{eq:tn} \end{equation} To update the positions of the massive particles, we note that before the collision, each particle was moving away with speed $v_{n}$ with respect to the ground for time $\delta t_{n}$.
After the collision, each particle moves away from the system's center of mass for time $\delta t_{n}\,$ with the updated speed, $v_{n+1}$. Thus, the separation distance increases by an amount $2v_{n}\delta t_{n} + 2v_{n+1}\delta t_{n}$, or \begin{equation} r_{n+1} = r_{n} + 2v_{n}\delta t_{n} + 2v_{n+1} \delta t_{n}.\label{eq:rn} \end{equation} Eqs.~(\ref{eq:dt})-(\ref{eq:rn}) constitute a closed recursion relation which may be iteratively advanced to obtain the velocity, separation distance and time corresponding to the beginning of each collision cycle. \begin{figure}[h] \begin{center} \includegraphics[totalheight=7.2cm]{logplot1.eps} \caption{Long-time, large-distance behavior of massive particle speed (blue circles) computed from Eqs.~(\ref{eq:dt})-(\ref{eq:rn}) and compared to the low-speed, non-relativistic (NR) approximation in Eq.~(\ref{eq:nrap}), which provides excellent agreement with the simulation for $v \ll c$.} \label{fig:logplot1} \end{center} \end{figure} For a point of comparison, we may take the approximate force law in Eq.~(\ref{eq:force}) and write Newton's second law for the motion of the right-moving particle, \begin{equation} M\frac{d^{2}x^{(1)}}{dt^{2}} = \frac{2mc^{2}}{r}. \end{equation} Applying the symmetry of the system, we have $r = 2x^{(1)}\,$ and may change variables, \begin{equation} \frac{d^{2}x^{(1)}}{dt^{2}} = \frac{1}{2}\frac{d^{2}r}{dt^{2}} = \frac{1}{4}\frac{d}{dr}\left(\dot{r}^{2}\right). \end{equation} Writing $\dot{r} = 2v$, where $v\,$ represents the speed of each massive particle, we may integrate both sides to obtain \begin{equation} v^{2}(r) = v_{0}^{2} + \frac{2mc^{2}}{M}\ln\frac{r}{r_{0}},\label{eq:nrap} \end{equation} which represents a statement of conservation of energy, $\frac{1}{2}Mv^{2} + U(r) = \mbox{const}$, with an effective potential energy (per massive particle) given by \begin{equation} U(r) = mc^{2}\ln \frac{a}{r}, \label{Eq:potentialenergy} \end{equation} for some arbitrary length scale $a$.
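The recursion (\ref{eq:dt})--(\ref{eq:rn}) is equally easy to iterate directly. The sketch below (again with illustrative parameter values) advances the collision-to-collision relations and checks that, while the speeds remain small, the resulting $v(r)$ agrees with Eq.~(\ref{eq:nrap}) to better than one per cent:

```python
import math

M, m, c = 1.0, 0.005, 1.0
dv = 2 * m * c / M

v, r, t = 0.0, 1.0, 0.0        # start from rest at separation r0 = 1
r0 = r
for n in range(10):            # 10 collision cycles: v grows to 0.1c << c
    dt_n = (r / 2) / (c - v)   # Eq. (dt): mediator travel time to a mass
    v_new = v + dv             # Eq. (vn): velocity boost per collision
    r += 2 * v * dt_n + 2 * v_new * dt_n   # Eq. (rn)
    t += 2 * dt_n              # Eq. (tn): a full cycle lasts 2*dt_n
    v = v_new

# non-relativistic prediction, Eq. (nrap) with v0 = 0
v_nr = math.sqrt(2 * m * c**2 / M * math.log(r / r0))
print(v, v_nr)                 # agree to well under one per cent
```

The small residual difference is the leading relativistic correction, which becomes dominant as $v \rightarrow c$.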
We refer to Eq.~(\ref{eq:nrap}) as the {\it non-relativistic approximation}, as its derivation relies on assuming $v\ll c$. The term ``non-relativistic'' (NR) as used here does not refer to speeds much less than the actual speed of light but those significantly smaller than the mediating particle speed $c$. The role played by $c\,$ in this model is similar to that of the actual speed of light in electrodynamics, but we stress that special relativity and the actual speed of light play no role in this model. Improvements to this low-energy approximation will be explored in the next section, but we are in a position to compare its predictions to the full simulation. Fig.~\ref{fig:logplot1} depicts the predictions of Eq.~(\ref{eq:nrap}) compared to the actual simulation information contained in Eqs.~(\ref{eq:dt})-(\ref{eq:rn}). As expected, the non-relativistic approximation breaks down as the massive particles' speeds approach $c$. For large separation distances, the massive particle speeds do not increase as sharply with increasing distance as the non-relativistic approximation predicts. Indeed, once the massive particles reach a speed of $c$, the mediating particles, also traveling at speed $c$, are unable to catch up to the massive particles. Correspondingly the recursion relations break down and no more collisions are found. Specifically, as $v_{n}\rightarrow c\,$ from below, we have $\delta t_{n}\rightarrow \infty$. If the massive particle speed becomes exactly\cite{note2} $c$, $\delta t_{n}\,$ does not exist and no further collisions occur. Another possibility is that a single collision changes $v_{n}\,$ from just below $c\,$ to just above $c$. In this case, $\delta t_{n}\,$ formally becomes negative and we conclude similarly that no further collisions occur. 
The behavior of the system explored thus far can be summarized as follows: for arbitrary initial separations, the massive particles are repelled from each other by the effective force provided by the mediating particles. At long times, the speeds (with respect to the ground) of the massive particles approach $c$, the speed of the mediating particles. While an approximate statement of energy conservation has been derived (see Eq.~(\ref{eq:nrap})) for low speeds $v \ll c$, the associated potential is problematic, as it has no lower bound for $r\rightarrow \infty$. An unlimited amount of potential energy may be converted into the massive particles' kinetic energy, resulting in the erroneous prediction that for any initial separation, both massive particles will continue to accelerate rather than asymptotically approach finite speeds. That the initial separation distance has no effect on the final speeds of the massive particles suggests that the system is not conservative. In the next section, we will carefully examine this system using analytic tools to quantitatively explore some of these issues. \section{Analytic approach}\label{sec:analytic} \subsection{Exact solution to recursion relation} The discrete sequence of collisions described by Eqs.~(\ref{eq:dt})-(\ref{eq:rn}) can be analyzed exactly, yielding a closed-form expression for $r_{n}$, the separation distance after $n\,$ collisions. Eq.~(\ref{eq:vn}) simply states that the velocity increases by a constant amount after each collision, or \begin{equation} v_{n} = \frac{2mnc}{M}.\label{eq:vnsol} \end{equation} Inserting Eq.~(\ref{eq:vnsol}) into Eq.~(\ref{eq:rn}) and using Eq.~(\ref{eq:dt}), we have \begin{eqnarray} r_{n+1} & = & r_{n} + 2\left[v_{n} + \frac{mc}{M}\right]\frac{r_{n}}{c-v_{n}},\\ & = & \left(\frac{1 + \frac{2m(n+1)}{M}}{1 - \frac{2mn}{M}}\right)r_{n}.
\end{eqnarray} Proceeding iteratively, \begin{eqnarray} r_{1} & = & \left(1 + \frac{2m}{M}\right)r_{0},\\ r_{2} & = & \frac{\left(1 + \frac{4m}{M}\right)\left(1 + \frac{2m}{M}\right)}{\left(1-\frac{2m}{M}\right)}r_{0},\\ & \vdots & \\ r_{n} & = & \left(1 + \frac{2nm}{M}\right)\prod_{k=0}^{n-1}\left(\frac{1 + \frac{2km}{M}}{1 - \frac{2km}{M}}\right)r_{0}.\label{eq:rn1} \end{eqnarray} By employing the Gamma function, which satisfies\cite{ArfkenWeber} \begin{equation} \Gamma(x+1) = x\Gamma(x),\label{eq:gamma1} \end{equation} and reduces to the factorial for integer arguments, $n! = \Gamma(n+1)$, we may write this as \begin{equation} r_{n} = \frac{\Gamma\left(\frac{M}{2m} + n\right)\Gamma\left(\frac{M}{2m}-n\right)}{\left[\Gamma\left(\frac{M}{2m}\right)\right]^{2}}\left(1 - \left(\frac{2mn}{M}\right)^{2}\right)r_{0}.\label{eq:rn2} \end{equation} The derivation of Eq.~(\ref{eq:rn2}) from Eq.~(\ref{eq:rn1}) requires use of Eq.~(\ref{eq:gamma1}), the property,\cite{ArfkenWeber} \begin{equation} \Gamma(x)\Gamma(1-x) = \frac{\pi}{\sin(\pi x)}, \end{equation} and their mathematical offspring, \begin{equation} \Gamma(x)\Gamma(-x) = -\frac{\pi}{x\sin(\pi x)}. \end{equation} \subsection{Limiting cases} As an exact, closed-form solution, Eq.~(\ref{eq:rn2}) contains all of the physics we have encountered up to this point. The low-energy force law in Eq.~(\ref{eq:force}) was previously derived using physical reasoning, but we can demonstrate that it also follows from the exact solution rather than appealing to comparisons such as Fig.~(\ref{fig:loglog}). To this end, let us define $\alpha \equiv \frac{M}{2m}\,$ and take the natural logarithm of Eq.~(\ref{eq:rn2}), obtaining \begin{eqnarray} \ln \frac{r}{r_{0}} & = & \ln \Gamma\left(\alpha+n\right) + \ln \Gamma \left(\alpha - n\right) - 2\ln\Gamma \left(\alpha\right)\nonumber\\ & +& \ln\left[1 - \left(\frac{n}{\alpha}\right)^{2}\right].
\end{eqnarray} To investigate the dynamics for $m \ll M\,$ and $v \ll c$, we examine the limit $\alpha \rightarrow \infty\,$ with $n \ll \alpha$. We first apply Stirling's approximation\cite{ArfkenWeber} to the Gamma functions, \begin{eqnarray} \ln \Gamma\left(\alpha \pm n\right) & \simeq & \left(\alpha \pm n\right)\ln\left[\alpha \pm n\right] - \left(\alpha \pm n\right),\\ \ln \Gamma (\alpha) & \simeq & \alpha \ln \alpha - \alpha, \end{eqnarray} noting that the linear terms cancel in the combination above. Applying the limit $n \ll \alpha\,$ and expanding the logarithms according to \begin{equation} \left(1\pm x\right)\ln \left[1 \pm x\right] \simeq \pm x + \frac{x^{2}}{2}, \end{equation} we recover the result \begin{equation} \ln \frac{r}{r_{0}} \simeq \frac{n^{2}}{\alpha}, \end{equation} which is equivalent to Eq.~(\ref{eq:nrap}) with $v_{0} = 0\,$ upon the identification $n\rightarrow \frac{M}{2m}\frac{v}{c}\,$ (see Eq.~(\ref{eq:vnsol})). Alternatively, we may consider the limit $v\rightarrow c$. Note that Eq.~(\ref{eq:rn2}) is only sensible for $n \le \alpha$; implicit in this relation is the upper limit on the number of collisions before the massive particles reach terminal velocity, \begin{equation} n_{\mbox{\scriptsize max}} = \frac{M}{2m}. \end{equation} We may probe the system at long times by letting $n = \alpha - \epsilon\,$ for $\epsilon \ll 1$.
Eq.~(\ref{eq:rn2}) then becomes \begin{equation} \frac{r_{n}}{r_{0}} = \frac{\Gamma\left(2\alpha\right)}{\left[\Gamma\left(\alpha\right)\right]^{2}}\Gamma\left(\epsilon\right)\cdot \frac{2\epsilon}{\alpha}.\label{eq:smallv1} \end{equation} Employing the small-argument expansion\cite{PeskinSchroeder} \begin{equation} \Gamma\left(\epsilon\right) = \frac{1}{\epsilon} - \gamma +\mathcal{O}(\epsilon), \end{equation} where $\gamma \simeq 0.577\,$ is the Euler-Mascheroni constant, we may expand Eq.~(\ref{eq:smallv1}) to obtain \begin{equation} \frac{r}{r_{0}} \simeq \frac{2\Gamma\left(2\alpha\right)}{\alpha\left[\Gamma(\alpha)\right]^{2}}\left(1-\gamma\epsilon\right). \end{equation} Taking $\epsilon\rightarrow 0\,$ is equivalent to letting $v\rightarrow c$, and we obtain \begin{equation} r \rightarrow r_{c}\equiv \frac{4m\Gamma\left(\frac{M}{m}\right)}{M\left[\Gamma\left(\frac{M}{2m}\right)\right]^{2}}r_{0}\;\;\;\;\;\mbox{ as }v\rightarrow c.\label{eq:rc} \end{equation} For $r>r_{c}$, each massive particle is moving at the same speed as the mediating particles and experiences no subsequent collisions with the mediating particles. Some mystery may be removed from Eq.~(\ref{eq:rc}) by taking the natural logarithm of both sides and applying Stirling's approximation, this time keeping several terms \begin{equation} \ln \Gamma (x) \simeq x\ln x - x - \frac{1}{2}\ln \frac{x}{2\pi}. \end{equation} When the smoke clears, we have the compact result \begin{equation} r_{c} = \sqrt{\frac{2m}{\pi M}}2^{\frac{M}{m}}r_{0}. \end{equation} That is, at a finite separation distance, the massive particles attain their maximum speeds $v = c$. We note that since the massive particles always evolve to this state regardless of initial separation ({\it i.e.}, various amounts of supposed ``potential energy'' in the initial state with no kinetic energy) energy cannot be conserved in this system. 
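The terminal separation just derived is easy to verify numerically. The sketch below (illustrative parameters once more) iterates the recursion through all $n_{\mbox{\scriptsize max}} = M/2m$ collisions, evaluates the exact Gamma-function expression for $r_c$ with `math.lgamma` (working in logarithms to avoid overflow), and compares both with the compact Stirling result:

```python
import math

M, m, c = 1.0, 0.005, 1.0        # alpha = M/2m = 100 collisions to v = c
n_max = round(M / (2 * m))
dv = 2 * m * c / M

v, r = 0.0, 1.0                  # start from rest at r0 = 1
for n in range(n_max):           # iterate up to the final collision
    dt_n = (r / 2) / (c - v)
    v_new = v + dv
    r += 2 * (v + v_new) * dt_n
    v = v_new

# exact closed form, Eq. (rc): r_c = 4m Gamma(M/m) / (M Gamma(M/2m)^2) r0
r_exact = math.exp(math.lgamma(M / m) - 2 * math.lgamma(M / (2 * m))
                   + math.log(4 * m / M))

# compact Stirling form: r_c = sqrt(2m/(pi M)) * 2**(M/m) * r0
r_stirling = math.sqrt(2 * m / (math.pi * M)) * 2.0**(M / m)

print(r / r_exact, r_stirling / r_exact)   # both ratios close to 1
```

The recursion reproduces the Gamma-function expression essentially to machine precision, while the Stirling form agrees to a fraction of a per cent for this value of $M/m$.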
States with different energies all evolving into a single high-energy state requires sources or sinks of energy. However, the low-energy, non-relativistic approximation is quite useful for describing the dynamics at low energies. Unfortunately, unlike the Coulomb repulsion, there exist no initial conditions for which the relativistic limit is avoided. \subsection{Relativistic corrections} The discrete relations in Eqs.~(\ref{eq:vn})-(\ref{eq:tn}) may be formally interpreted as differential equations by applying the convention \begin{equation} v_{n+1}-v_{n} = \frac{\Delta v}{\Delta n} \rightarrow \frac{dv}{dn}, \end{equation} with a similar relation for $t_{n}\rightarrow t(n)$. One then obtains \begin{equation} \frac{dv}{dt} = \frac{\frac{dv}{dn}}{\frac{dt}{dn}} = \frac{2mc}{Mr}\left(c-v\right). \end{equation} Note that this corresponds to an acceleration given by the force in Eq.~(\ref{eq:force}) with corrections which are first-order in $\beta \equiv \frac{v}{c}$. Unlike the non-relativistic limit, this acceleration explicitly drops to zero as $v\rightarrow c$. Furthermore, the explicit appearance of $v\,$ in the force indicates that the force is non-conservative. Employing the chain rule as for the non-relativistic limit, we obtain the following equation for $\beta(r)$, \begin{equation} r\beta \frac{d\beta}{dr} = \frac{m}{M}\left(1-\beta\right).\label{eq:ur} \end{equation} Eq.~(\ref{eq:ur}) is separable and admits the closed-form solution \begin{equation} \left(\frac{r}{r_{0}}\right)^{\frac{m}{M}} = \frac{e^{-\beta}}{1-\beta}. \end{equation} This may be inverted to yield a formula for $v = \beta c\,$ \begin{equation} v(r) = c\left[1 + W\left(-\frac{(r_{0}/r)^{m/M}}{e}\right)\right],\label{eq:ursol} \end{equation} where $W(z)\,$ is the Lambert-W function,\cite{Corless} defined as the principal value of \begin{equation} z = W(z)e^{W(z)}.
\end{equation} While the solution clearly satisfies $v(r_{0}) = 0$ and \begin{equation} \lim_{r\rightarrow \infty}v(r) = c\left[1 + 0\right] = c, \end{equation} \begin{figure}[h] \begin{center} \includegraphics[totalheight=7.2cm]{urlimit.eps} \caption{Comparison of Eq.~(\ref{eq:ursol}) to the simulation/exact solution and non-relativistic limit. Surprisingly, this ``improved'' approximation breaks down long before the NR limit ceases to accurately describe the physics.} \label{fig:urlimit} \end{center} \end{figure} the time required for this to happen (rigorously, for $|c-v|<2mc/M$) is quite large. Unfortunately for the theory, Eq.~(\ref{eq:ursol}) does not appear to agree very well with the simulation or exact solution (see Fig.~\ref{fig:urlimit}), breaking down even before the non-relativistic approximation does. There is an equally curious situation that occurs in electromagnetism. The general solutions to Maxwell's equations for known sources rely on fairly complex expressions involving evaluation of physical quantities at retarded times. However, when these expressions are expanded, the lowest-order term is the {\it instantaneous} Coulomb term. This appears to be a rather deep result that also shows up in quantum electrodynamics\cite{Feynman1} and quantum gravity.\cite{FeynmanGrav} The refined approximation in this section is only part of the required correction to the non-relativistic limit, and some potentially ``fortuitous'' cancellation between this modification and the rest of the terms being neglected is required to obtain a result more accurate than the NR approximation.
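As an illustrative evaluation of Eq.~(\ref{eq:ursol}) (not part of the original analysis), the sketch below computes the principal branch of the Lambert-$W$ function by Newton's iteration and checks the boundary behavior $v(r_{0})=0$ and $v\rightarrow c$ as $r\rightarrow\infty$; the mass ratio is chosen arbitrarily:

```python
import math

# Evaluate v(r) = c * [1 + W(-(r0/r)^(m/M) / e)] from Eq. (eq:ursol),
# computing the principal branch of the Lambert-W function with
# Newton's method.  c = 1 and the mass ratio m/M are illustrative.
c, m_over_M, r0 = 1.0, 0.05, 1.0

def lambert_w(z, tol=1e-12):
    # Newton's iteration for w * exp(w) = z on the principal branch.
    w = 0.0 if z > -0.1 else -0.5   # crude starting guess, w > -1
    for _ in range(200):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (1.0 + w))
        w -= step
        if abs(step) < tol:
            break
    return w

def v(r):
    return c * (1.0 + lambert_w(-((r0 / r) ** m_over_M) / math.e))

# v(r0) = 0 since W(-1/e) = -1, and v(r) -> c as r -> infinity.
```

Note that at $r = r_{0}$ the argument sits exactly at the branch point $z = -1/e$, where Newton's method converges only linearly; the generous iteration cap above accounts for this.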
An example of this sort of fortunate cancellation from classical physics may be observed by considering the electric field due to an arbitrary configuration of currents and charges, given by one of Jefimenko's equations,\cite{GriffithsEM} \begin{equation} {\bf E}\left({\bf r},t\right) = \frac{1}{4\pi \epsilon_{0}}\int \left[\frac{\rho\left({\bf r}',t_{r}\right)}{R^{2}}\hat{\bf R} + \frac{\dot{\rho}\left({\bf r}',t_{r}\right)}{cR}\hat{\bf R} - \frac{\dot{\bf J}\left({\bf r}',t_{r}\right)}{c^{2}R}\right]d\tau ', \label{eq:efield} \end{equation} where $\rho\,$ is charge density, ${\bf J}\,$ is current density, $R = \left|{\bf r} - {\bf r}'\right|$, and the retarded time is given by $t_{r} \equiv t - R/c$. Following an exercise in a popular text on electrodynamics,\cite{GriffithsEM} one may consider constant currents for which $\dot{\bf J} =0\,$ ({\it i.e.}, the third term in Eq.~(\ref{eq:efield}) disappears). In this case, a miraculous cancellation occurs, yielding \begin{equation} {\bf E}({\bf r},t) = \frac{1}{4\pi \epsilon_{0}}\int \frac{\rho({\bf r}',t)}{R^{2}}d\tau' \end{equation} where the correction to the instantaneous Coulomb potential and the second term in Eq.~(\ref{eq:efield}) cancel perfectly. That is, despite the explicit appearance of corrections of order $\beta\,$ and evaluation of functions at $t_{r}\,$ instead of instantaneous time $t$, the field turns out to be the instantaneous Coulomb-like contribution. The ``relativistic correction'' in the particle-exchange model appears to be analogous to evaluating the field at retarded times without including the additional corrections, resulting in a less-accurate result at short times. \section{Summary}\label{sec:summary} In this paper we thoroughly examined a simple model for classical interactions through the exchange of mediating particles in which momentum conservation is enforced for each collision. 
As demonstrated in simulations and analytic reasoning, the resulting interactions yield an effectively conservative theory at low energies with a $1/r$ force. The conservative approximation breaks down at high energies, and regardless of initial separation, the massive particles both eventually reach the maximum speed allowed by the physical mechanism of energy transfer within the system. The classical particle exchange analogy of ice skaters throwing a ball back and forth has typically been used as an illustration in public outreach presentations and in teaching, from general education science courses to introductory and advanced physics courses. However, the analogy has value as a physical system for students to investigate quantitatively. The phenomenon can be used in various contexts including homework, an in-class activity, a computational physics exercise, or assessment. Furthermore, it can be used at both the introductory and advanced level in the undergraduate curriculum. In introductory physics, students learning computational modeling\cite{Chabay} can investigate the phenomenon numerically. Derivation of the change in speed of a massive particle, $\delta v = 2mc/M\,$, using Conservation of Momentum (Eq. \ref{Eq:momentumconservation}) is a straightforward exercise in introductory physics. Students can also explore and describe the position-time and velocity-time graphs. Because position and velocity change abruptly, introductory students have the opportunity to fit a smooth function to values that change discretely. Furthermore, teachers can use this system to assess understanding of potential energy functions (Eq. \ref{Eq:potentialenergy}) and conservation of energy. Having already studied systems of particles interacting via the inverse-square law, students can practice applying a similar analysis to the $1/r$ force, possibly preparing them for similar forces that arise in an E\&M course. 
Finally, as shown in this paper, teachers can also use the system as an application in a junior/senior level course in mechanics\cite{Timberlake} or mathematical physics where students are expected to explore the model in its limit using more advanced computational and analytical techniques.
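As a minimal sketch of the computational exercise suggested above (with illustrative parameter values, not those of the paper), the discrete exchange model can be stepped collision by collision using the momentum-conservation result $\delta v = 2mc/M$ until the terminal speed $c$ is reached after $n_{\max} = M/2m$ collisions:

```python
# Minimal sketch of the discrete exchange model: each collision with
# a mediating particle changes a massive particle's speed by
# dv = 2mc/M (non-relativistic momentum conservation), and the
# particle reaches the mediator speed c after n_max = M/(2m)
# collisions.  Parameter values are illustrative (M/m = 8 keeps the
# step dv exactly representable in binary floating point).
c, m, M = 1.0, 1.0, 8.0

dv = 2 * m * c / M          # speed gained per collision
n_max = M / (2 * m)         # predicted number of collisions

v, n = 0.0, 0
while v < c:
    v += dv                 # one mediating-particle collision
    n += 1
```

A student exercise could extend this sketch by tracking the separation and the mediator transit times to recover the $1/r$ force numerically.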
\section{INTRODUCTION} Quantum key distribution~(QKD), whose security is guaranteed by the laws of quantum mechanics, allows two remote and legitimate users to share a private and secret key~\cite{bennett1984,ekert1991,bennett1992a,lo2012}. Nowadays, driven by the need for a high key generation rate, prepare-and-measure QKD protocols~\cite{bennett1984,inoue2002,scarani2004,stucki2005} are the common choice, instead of the measurement-device-independent~(MDI) QKD protocol~\cite{lo2012}, which removes all security loopholes in measurement devices but has a relatively low rate. To achieve a high key generation rate with the help of a system's high repetition frequency, a traditional gated avalanche photodiode~(APD) detector might not be suitable due to afterpulse noise produced by trapped avalanche charge. To reduce the afterpulse noise, weaker avalanche signals must be sensed, which can be achieved by applying the self-differencing~(SD) technique to an APD. Therefore, SD APD detectors are commonly used in gigahertz high-speed QKD systems~\cite{yuan2008,Dixon2008}. Although QKD has been proven to be information-theoretically secure in theory, there are still loopholes in practical implementations~\cite{brassard2000,lydersen2010a,xu2010,gerhardt2011,bugge2014,huang2016,Sajeed2016,huang2019a,Chistiakov2019,huang2020}. For example, the single-photon detectors~(SPDs), which are the core devices in BB84 QKD systems, may be hacked by the eavesdropper Eve via the after-gate attack~\cite{wiechers2011}, the time-shift attack~\cite{qi2007,zhao2008}, the detector-blinding attack~\cite{lydersen2010a,gerhardt2011}, and so on. To defend against these attacks on the detection devices, security patches~\cite{yuan2011,silva2012,lim2015} are effective countermeasures. That is, once a new type of attack is discovered, a corresponding countermeasure may be proposed and implemented in an existing QKD system~\cite{Xu2020}.
Recently, in order to ensure the most secure conditions for operating SD APD detectors in QKD systems, a set of so-called ``best-practice criteria'' for the practical security of SD APD detectors has been proposed~\cite{koehlersidki2018}. Continuous-wave~(c.w.) light is usually regarded as the means to achieve reliable eavesdropping, which is why the ``best-practice criteria'' consider only the case of c.w.\ blinding attacks~\cite{koehlersidki2018}. The power fluctuation of optical pulses, by contrast, is expected to expose the hacking behaviour of an eavesdropper~\cite{koehlersidki2018,jiang2013}. However, in this study, we find that the strong pulse illumination attack provides blinding stability. By using strong optical pulses, an eavesdropper can blind the SD APD detector continuously and steadily without introducing an extra quantum bit error rate~(QBER). In this paper, under the practice criteria~\cite{koehlersidki2018}, we experimentally demonstrate that the SD APD detector in a QKD system can be directly blinded by strong optical pulses whose repetition frequency is the same as the gating frequency of the SD APD detector. We then trigger the SD APD detector while it is completely blinded and control the detection probability of the detector from 0\% to 100\%. This study shows that the SD APD detector can be successfully hacked by the pulse illumination attack, which might compromise the security of a high-speed QKD system with SD APD detectors. Afterward, we propose a set of criteria for the practical security of SD APD detectors that takes the threat of the pulse illumination attack into account. The paper is structured as follows. Section~\ref{Ⅱ} introduces the operation principle of SD APD detectors and the general process of the strong pulse illumination attack. The experimental setup and the selection criteria for the discrimination level of the tested SD APD detector are described in Sec.~\ref{Ⅲ}.
Under the practice criteria, the methodology and testing results of the pulse illumination attack are presented in Sec.~\ref{Ⅳ}. In Sec.~\ref{Ⅴ}, we show the difference between the pulse illumination attack in this work and the previous attack on SD APD detectors disclosed in Ref.~\cite{jiang2013}, analyze the incompleteness of the practical criteria in Ref.~\cite{koehlersidki2018}, and propose a list of practical criteria to resist the pulse illumination attack. Finally, we conclude in Sec.~\ref{VI}. \section{WORKING PRINCIPLE OF SD APD DETECTORS AND STRONG PULSE ILLUMINATION ATTACK} \label{Ⅱ} In this part, we first introduce the operation principle of SD APD detectors, taking the SD APD detector tested in this study as an example. We then introduce the general process of the strong pulse illumination attack proposed in this work. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{SD-APD1.pdf}% \caption{(a) The schematic circuit of the tested SD APD detector. (b) Output waveform of the tested SD APD detector shows a single avalanche rising above the capacitive response residual. The red dashed line represents the discrimination level, which is set to be \SI{25}{\milli \volt}.} \label{1} \end{figure} Figure~\ref{1}(a) shows the schematic circuit of the tested SD APD detector. A DC bias voltage combined with periodic gating signals is applied in reverse across the APD. When the reverse bias voltage is higher than the breakdown voltage, the APD works in Geiger mode, where a single photon can result in a detectable macroscopic avalanche current. However, the repetition rate of the gating signal is so fast that weak avalanche signals are often buried within the APD's capacitive response~\cite{yuan2007}. In order to remove the capacitive response, the SD technique is applied: the response of the APD is first split into two halves, one half is shifted by one gate period, and the two halves are recombined to cancel the strong capacitive response.
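The cancellation performed by the SD technique can be illustrated with a toy waveform (the sketch below is purely illustrative; all waveform parameters are invented and do not come from the detector under test):

```python
import numpy as np

# Toy illustration of self-differencing (SD): subtracting a copy of
# the waveform delayed by one gate period cancels the periodic
# capacitive response and leaves only the avalanche signal.
samples_per_gate = 16      # samples in one gating period (arbitrary)
n_gates = 4

# Periodic capacitive response: identical in every gate.
t = np.arange(samples_per_gate)
capacitive = np.sin(2 * np.pi * t / samples_per_gate)
response = np.tile(capacitive, n_gates)

# A weak avalanche adds a small bump in the third gate only.
response[2 * samples_per_gate + 4] += 0.1

# SD processing: subtract the waveform delayed by one gate period.
delayed = np.roll(response, samples_per_gate)
sd_output = response - delayed

# The capacitive response cancels everywhere; only the avalanche
# (and its inverted echo one gate later) survives.
```

In hardware the delay-and-subtract is done with a splitter, a delay line, and a differencer, but the arithmetic is the same as in this sketch.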
The weak avalanche signal after SD processing is shown in Fig.~\ref{1}(b). Through the SD technique, only weak avalanche signals and the capacitive response residual remain, which can be distinguished by setting a discrimination level. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Attack_flow.pdf}% \caption{General process of strong pulse illumination attack.} \label{2} \end{figure} Due to the intrinsic imperfections of SD APD detectors, under strong c.w.\ light illumination the SD APD detector might be blinded~\cite{koehlersidki2018}. To eliminate the threat of the c.w.\ blinding attack, Ref.~\cite{koehlersidki2018} investigated the behavior of an SD APD under c.w.\ bright-light illumination and proposed practice criteria for the practical security of SD APD detectors employed in a QKD system. Under the proposed practice criteria, once Eve uses c.w.\ bright light to blind SD APD detectors, the large blinding photocurrent exposes her existence. In addition, the increase of the error rate caused by the residual capacitive background can also help Bob discover Eve. Therefore, SD APD detectors under these practice criteria can defend against c.w.\ bright-light illumination. However, the effectiveness of these practical criteria under a pulsed illumination attack has not been fully investigated yet. In this work, we thoroughly test the behavior of an SD APD detector under strong pulse illumination. Figure~\ref{2} shows the general process of the strong pulse illumination attack. By using strong optical pulses with the same repetition rate as the gating signal applied to the SD APD detector, each optical pulse triggers a stable avalanche photocurrent. The stable and periodic avalanche photocurrent is cancelled out after SD processing. Therefore, the remaining avalanche photocurrent is lower than the discrimination threshold.
As a result, the SD APD detector is blinded and its detection outcomes can be controlled by Eve's classical trigger pulses with tailored energy, which produce a click only when Bob selects the same basis as Eve. Furthermore, the drop of the bias voltage under the strong pulse illumination attack is not as large as that under the c.w.\ bright-light attack, which keeps the capacitive response residual below the discrimination threshold. \section{EXPERIMENTAL SETUP} \label{Ⅲ} In order to experimentally explore the behavior of SD APD detectors under strong pulse illumination, the test is conducted using the setup shown in Fig.~\ref{3}. An arbitrary wave generator~(AWG) is used to drive the laser diodes. Laser diode 1~(LD1) is driven to emit blinding pulses at \SI{1550}{\nano \meter}, whose repetition frequency of \SI{625}{\MHz} is the same as that of the gating signal applied to the SD APD detector under test. Similarly, laser diode 2~(LD2) is driven by the AWG to generate \SI{312.5}{}-\SI{}{\MHz} trigger pulses used to control the blinded SD APD detector. Laser diode 3~(LD3) emits pulses with a repetition frequency of \SI{100}{\kilo \hertz} to synchronize the attacking setup and the SD APD detector under test, ensuring that the blinding pulses and trigger pulses stably illuminate within the gating period of the SD APD. Variable optical attenuators~(VOAs) and erbium-doped fiber amplifiers~(EDFAs) are used to tune the optical intensity of the blinding pulses and trigger pulses. Optical power meter 1~(OPM1) monitors the optical power of the blinding pulses, and optical power meter 2~(OPM2) monitors the optical power of the trigger pulses. Meanwhile, the 50:50 beam splitter 1~(BS1) merges the blinding pulses and the trigger pulses. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Setup.pdf}% \caption{Schematic diagram of experimental setup. The red lines represent the optical signal, and the blue lines represent the electrical signal.
AWG, arbitrary wave generator; LD, laser diode; VOA, variable optical attenuator; EDFA, Erbium-doped fiber amplifier; BS, beam splitter; OPM, optical power meter; SD APD, self-differencing avalanche photodiode detector. As the testing target, the SD APD connected in series with a \SI{1}{\kilo \ohm} bias resistor works at a gating frequency of \SI{625}{\MHz}.} \label{3} \end{figure} The SD APD detector under test is cooled down to \SI{-40}{\degreeCelsius} and reverse biased at \SI{64.2}{\volt}. As shown in Fig.~\ref{1}(a), the gating frequency of the APD is \SI{625}{\MHz} and the bias resistor connected in series is \SI{1}{\kilo \ohm}. The resistance value of the bias resistor satisfies requirement b) of the practice criteria in Ref.~\cite{koehlersidki2018}, which recommends avoiding a bias resistor exceeding \SI{50}{\kilo \ohm}. It is important to note that we realize the SD operation on the avalanche signals by means of software processing instead of a physical SD circuit. Compared to the physical realization, software processing removes the effect of timing jitter, which makes the result of the SD more precise. Setting an appropriate discrimination level can not only improve the detection efficiency of the SD APD detector but also help perceive the reduction of the excess voltage~\cite{koehlersidki2018}. Therefore, the choice of the discrimination level of an SD APD detector is important. Figure~\ref{4} shows the dark count rate as a function of the discrimination level. As observed from Fig.~\ref{4}, there is a kink at a discrimination level of \SI{6}{\milli \volt}, indicating that dark avalanches replace the capacitive response residual as the dominant contribution to the measured dark count rate when the discrimination level is higher than \SI{6}{\milli \volt}. Therefore, weak avalanche signals and the capacitive response residual can be distinguished when the discrimination voltage is higher than \SI{6}{\milli \volt}.
However, for an SD APD detector working in a real QKD system, changes in the working environment may introduce extra electronic noise, and the detection noise of the SD APD may increase during long-time running. Thus, the chosen discrimination level not only needs to distinguish the capacitive response residual from weak avalanche signals, but also needs to resist the noise caused by the above factors. To enhance the noise resistance of the SD APD detector, the discrimination level is set to \SI{25}{\milli \volt} by the third party who provides the SD APD detector. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Dark_count_rate.pdf}% \caption{Dark count rate as a function of the discrimination level. When the discrimination level is lower than \SI{6}{\milli \volt}, the dark count rate mainly comes from the capacitive response residual. Otherwise, the dark avalanches are the major source of the dark count rate. The red dashed line represents the minimum average value of peak amplitude when the SD APD detector is blinded.} \label{4} \end{figure} \section{EXPERIMENT RESULTS} \label{Ⅳ} In this study, we conduct an attacking experiment on Bob's SD APD detector with strong optical pulses. LD3 is first turned on to send \SI{100}{\kilo \hertz} synchronizing pulses to the SD APD detector to synchronize the whole testing setup. Then, LD1 is switched on to generate \SI{625}{}-\SI{}{\MHz} blinding pulses, whose intensity is modulated by VOA1 and EDFA1, to illuminate the SD APD detector. For each intensity of the incident pulses, we measure the avalanche signal after SD processing, collect 480 consecutive periods, and compute statistics of the peak amplitude in each period. Figure~\ref{5}(a) shows the average value of the peak amplitude $\bar V_\text{peak}^\text{SD}$ and the standard deviation $\sigma^\text{SD}$ of the SD avalanche signal depending on the energy of each blinding pulse.
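The per-period peak statistics described above can be sketched as follows (the waveform below is synthetic and its amplitudes are invented; only the 480-period bookkeeping mirrors the text):

```python
import numpy as np

# Sketch of the per-period peak statistics: for each pulse energy,
# collect 480 consecutive periods of the SD waveform, take the peak
# amplitude in each period, and form its mean and standard deviation.
rng = np.random.default_rng(0)
n_periods, samples_per_period = 480, 32

# Synthetic SD waveform (volts): noise plus a nominal 30-mV
# avalanche peak in each period.  These values are illustrative.
waveform = 0.001 * rng.standard_normal((n_periods, samples_per_period))
waveform[:, 10] += 0.030

# Peak amplitude per period, then its mean and standard deviation.
v_peak = waveform.max(axis=1)
v_peak_mean = v_peak.mean()
sigma = v_peak.std()

# A period counts as a detection when its peak exceeds the
# 25-mV discrimination level.
detections = np.count_nonzero(v_peak > 0.025)
```

In the attack, blinding corresponds to `v_peak_mean` being pushed below the discrimination level, so that `detections` drops to zero.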
When the blinding pulse energy is small, $\bar V_\text{peak}^\text{SD}$ of the SD avalanche signal is higher than the discrimination level, \SI{25}{\milli \volt}. As the blinding pulse energy is gradually increased, $\bar V_\text{peak}^\text{SD}$ of the SD avalanche signal first decreases and then starts increasing at a blinding pulse energy of \SI{0.01}{\pico \joule}. Remarkably, there is a dip when the blinding pulse energy is \SI{0.01}{\pico \joule}. This is because, at this energy, almost every pulse triggers an avalanche in each period, resulting in a smaller amplitude remaining after SD processing. When the blinding pulse energy is higher than \SI{7.76}{\pico \joule}, $\bar V_\text{peak}^\text{SD}$ of the SD avalanche signal begins to decrease rapidly. Finally, $\bar V_\text{peak}^\text{SD}$ of the SD avalanche signal drops below the discrimination level when the blinding pulse energy is higher than \SI{8.92}{\pico \joule}. This means that the SD APD detector can be directly blinded by lowering the amplitude of the SD avalanche signal under strong pulse illumination. After the SD APD detector is blinded, even when the energy of each blinding pulse is increased to \SI{61.09}{\pico \joule}, the count rate of the SD APD detector still does not recover, indicating that the SD APD detector can be blinded stably. To further understand the blinding of the SD APD detector, we conduct the same statistics on the original avalanche signal before SD processing. Figure~\ref{5}(b) shows $\bar V_\text{peak}$ and $\sigma$ of the original avalanche signal as a function of each blinding pulse's energy. With increasing blinding pulse energy, $\bar V_\text{peak}$ of the original avalanche signal first increases and then rapidly decreases to \SI{81}{\milli \volt} at \SI{8.92}{\pico \joule}, in which case the SD APD detector is blinded.
Figures~\ref{6}(a) and (c) respectively show in detail the amplitude of the original avalanche signal when the blinding pulse energy is \SI{8.09}{\pico \joule} and \SI{10.24}{\pico \joule}, which are the cases right before and after blinding happens. In Fig.~\ref{6}(a), the amplitude is relatively large and the waveform is very unstable, with the result that the SD amplitude is higher than the discrimination level. With the increase of energy, the amplitude of the original avalanche signal in each period becomes smaller and is very stable in Fig.~\ref{6}(c). This means that strong pulse illumination lowers the amplitude and fluctuation of the original avalanche signal, consequently blinding the SD APD detector. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{SD_original.pdf} \caption{Average value and standard deviation (a) of the SD avalanche signal's peak amplitude and (b) of the original avalanche signal's peak amplitude as a function of the blinding pulse energy. The blue dashed line represents the discrimination level. When the peak amplitude of the SD avalanche signal is lower than it, the detector is blinded.} \label{5} \end{figure} \begin{figure*}[htbp] \centering \includegraphics[width=\linewidth]{avalanche_detail.pdf} \caption{The amplitude of the avalanche signal under specific blinding pulse energies. (a) The waveform of the original avalanche signal and (b) the waveform of the SD avalanche signal when the blinding pulse energy is \SI{8.09}{\pico \joule}, in which case the SD APD detector is not blinded. (c) The waveform of the original avalanche signal and (d) the waveform of the SD avalanche signal when the blinding pulse energy is \SI{10.24}{\pico \joule}, in which case the SD APD detector is blinded. The red dashed line represents the discrimination level.} \label{6} \end{figure*} After the SD APD detector is blinded, in order to control its detection outcomes, LD2 is turned on to send \SI{312.5}{}-\SI{}{\MHz} trigger pulses to the SD APD detector.
The trigger pulses are superimposed on the blinding pulses through BS1. For each trigger pulse energy, LD2 sends trigger pulses over 960 periods in total. The number of periods in which the SD avalanche signal's amplitude exceeds the discrimination level is then counted to obtain the detection probability. Figure~\ref{7} shows the detection probability as a function of the trigger pulse energy under different amounts of blinding pulse energy, which indicates that the detection probability can be varied from 0\% to 100\% by increasing the trigger pulse energy. Therefore, Eve can obtain the key by conducting a fake-state attack~\cite{lydersen2010a}. Specifically, Eve first intercepts the single photon sent by Alice and randomly selects a basis to measure it, as Bob does. She then resends Bob a trigger pulse superimposed on the blinding pulses according to her measurement result. If Bob's basis choice is consistent with Eve's, there could be a detection event. Otherwise, no SD APD detector clicks. Finally, Eve can acquire the identical final key by monitoring the classical channel between Alice and Bob. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{trigger_pulse.pdf} \caption{The detection probability as a function of the trigger pulse energy under different amounts of blinding pulse energy.} \label{7} \end{figure} Although the trigger pulse enables the detection probability to reach 100\% in the case of high energy, in order to conduct a perfect eavesdropping, the energy of the trigger pulse needs to satisfy the requirement for a BB84 QKD system proposed in Ref.~\cite{lydersen2010a}, which can be expressed as \begin{equation} E_{always} \leq 2 \times E_{never}, \label{a} \end{equation} where $E_{always}$ and $E_{never}$ represent the energy of a trigger pulse for which the detection probability is 100\% and 0\%, respectively. In the experiment, the case in which the blinding pulse energy is \SI{11.55}{\pico \joule} satisfies this requirement.
When the trigger pulse energy is lower than \SI{6.656}{\pico \joule}, the detection probability is 0\%. At the same blinding pulse energy, the maximum trigger pulse energy that Eve can send is \SI{13.312}{\pico \joule}, for which the corresponding detection probability of the SD APD detector is 100\%. Therefore, Eve can completely control the output of the SD APD detector without increasing the error rate of the final key, which does not expose her existence. Similarly, for blinding pulse energies of \SI{16.51}{\pico \joule}, \SI{21.47}{\pico \joule}, \SI{24.77}{\pico \joule}, \SI{28.06}{\pico \joule}, and \SI{31.36}{\pico \joule}, the corresponding maximum detection probabilities are 99.37\%, 27.3\%, 7.08\%, 9.2\%, and 3.67\%, respectively, when no QBER is introduced. It is notable that these non-100\% detection probabilities can be hidden by the channel loss during the fake-state attack. \section{DISCUSSION} \label{Ⅴ} So far, several investigations have contributed to the security of SD APD detectors. In Ref.~\cite{jiang2013}, researchers disclosed a type of pulsed blinding method in which $\bar V_\text{peak}^\text{SD}$ remains at a relatively large value. Therefore, fluctuations in the blinding pulses may cause the avalanche signal amplitude to overcome the discrimination level, making the detector resume counting. However, in our experiment, by using blinding pulses with higher energy, we directly lower $\bar V_\text{peak}^\text{SD}$ of the avalanche signal. Compared to the previous blinding attack, the pulse illumination attack demonstrated in this work drastically reduces the influence of optical power fluctuation and makes the detector blinding more stable. Significantly, the pulse illumination attack might partially invalidate the practice criteria proposed in Ref.~\cite{koehlersidki2018}.
First, for the criterion of monitoring the photocurrent~\cite{koehlersidki2018}, although it is an effective method to defend against the c.w.\ bright-light attack, it may be bypassed by the group of blinding pulses used in the pulse illumination attack. Specifically, the high photocurrent accumulated by a group of blinding pulses may be averaged out before being sensed by a photocurrent monitor~\cite{Wu_2020}. Thus, the instantaneous high photocurrent may lower the bias voltage across the APD to blind the SD APD detector. Secondly, regarding the criterion of avoiding a quenching or biasing resistor with a resistance value higher than \SI{50}{\kilo \ohm}~\cite{koehlersidki2018}, even though the bias resistance of the tested SD APD detector is only \SI{1}{\kilo \ohm}, which satisfies the requirement, strong pulse illumination still blinds the SD APD detector. Thirdly, according to requirements c) and e) proposed in Ref.~\cite{koehlersidki2018}, setting a well-selected discrimination level makes it possible to perceive the reduction of the excess voltage through the residual capacitive background, because the capacitive response residual can overcome the discrimination level when the APD's reverse bias voltage decreases~\cite{koehlersidki2018}. However, for the tested SD APD detector, the capacitive response residual does not increase enough to overcome the discrimination level as the excess voltage is reduced. We perform an experiment to explore the relationship between the voltage drop and the SD APD capacitive response measured before SD processing. By varying the DC reverse bias voltage of the APD, the capacitive response of the APD is measured under dark conditions. For each set voltage drop, the capacitive amplitude over 1920 consecutive periods is recorded, and we compute statistics of the peak values of the capacitive amplitude in each period.
As shown in Fig.~\ref{8}, as the bias voltage of the SD APD is decreased, the amplitude of the APD capacitive response does not increase greatly and remains far below the discrimination level. This differs from the explanation proposed in Ref.~\cite{koehlersidki2018}. To understand the origin of the discrepancy, we analyze the internal circuit of the SD APD detector. The failure to recover the count rate may be due to the existence of a filter in the circuit. The capacitive response is filtered in advance, which reduces the influence of the capacitive response residual. Although the filter helps distinguish weak avalanche signals from the capacitive response, it also allows the SD APD to be blinded under strong pulse illumination. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{capacitive_response.pdf}% \caption{APD capacitive response measured before the SD processing as a function of the DC bias reduction below its normal value. The red dashed line represents the discrimination level, and the blue line represents the average amplitude and standard deviation of the capacitive response.} \label{8} \end{figure} Based on the experimental testing, we propose the following list of criteria to resist the pulse illumination attack on SD APD detectors. \begin{enumerate} \item[a)] Remove the bias resistor. Under strong pulse illumination, the reverse bias voltage across the APD is reduced rapidly when there is a bias resistor in the circuit. \item[b)] Use an optical power limiter or optical fuse. Adding a special passive component, such as an optical power limiter~\cite{Zhang2021} or an optical fuse~\cite{Todoroki2004}, to sense and respond to instantaneous high optical power at the SD APD detector's input can prevent strong pulses from passing through. \item[c)] Remove possible filtering components. A filtering device in the circuit makes the capacitive response residual insensitive to the reduction of the excess voltage of the APD, which leads to SD APD detectors being blinded stably.
\item[d)] Set an appropriate discrimination level. Regardless of temperature changes in the working environment or long-term operation, doing so not only ensures that the SD APD is not disturbed by noise, but also enables the capacitive response residual to overcome the discrimination level when the excess voltage is reduced. \end{enumerate} \section{CONCLUSION} \label{VI} In summary, we experimentally investigate the behavior of SD APDs under strong pulse illumination. We show that strong pulse illumination can hack SD APD detectors in high-speed quantum key distribution systems to learn the secret key without introducing extra QBER. Based on the testing results, we find that the pulse illumination attack blinds the detector stably, and that strong optical pulses can be used as a new tool for an eavesdropper to blind the SD APD detector. Meanwhile, we propose a list of criteria to enhance the practical security of SD APD detectors. This work contributes to improving the security of practical high-speed QKD systems. \begin{acknowledgments} We thank Konstantin Zaitsev and Vadim Makarov for helpful discussions. This work was funded by the National Natural Science Foundation of China (grants 61901483 and 62061136011), the National Key Research and Development Program of China (grant 2019QY0702), and the Research Fund Program of State Key Laboratory of High Performance Computing (grant 202001-02). \end{acknowledgments}
\section{Introduction} The dynamics of opinions in society is a very intricate and intriguing process. Especially in today's world, with the pervasive infiltration of social networks like Facebook and Twitter that allow quick broadcasts of opinions, phenomena which were once wild speculations by philosophers, such as the viral spread of memes \cite{Dawkins76}, are now easily observed and quantified. This acceleration of social processes is turning sociology into a quantitative science, where concrete models for social phenomena can be proposed and rigorously tested. Sociologists have long identified several different processes that determine opinion dynamics, most notably, {\em normative} and {\em informational} \cite{DG55}. Normative influence refers to the influence that causes people to conform to a group's social norms. On the other hand, informational influence refers to the way people acquire the opinions of others, driven by the assumption that neighbors possess information about a situation. Informational influence is especially relevant in the context of understanding how opinions {\em change}, and it arguably is the dominant process for determining trends in, say, fashion, mobile phones and music. In this work, we focus on quantitative models for informational influence. Such a model should specify how an individual agent updates its opinion using information learned from its ``neighbors". By now, this area has been heavily studied; see \cite{Jackson08} for a survey. The most basic such model is the classic DeGroot model \cite{deGroot74, French56, Harary59} where each agent's opinion is a real number between 0 and 1, and at every time step, each agent moves to some weighted average of its neighbors' positions, where the neighbors are determined according to an unchanging undirected graph. Such a system always reaches consensus, contrary to the existence of polarized states in society. Another classic model is the {\em voter model} \cite{CS73, HL75}. 
Here, there is an unchanging directed graph among the agents, and at every time step, a random agent selects a random neighbor and adopts the neighbor's opinion as its own. Again, such a system always reaches a consensus, and coalescing random walks \cite{DKS91} can be used to bound the convergence time. In order to explain why consensus doesn't always arise in the real world, one can posit the presence of {\em stubborn agents}, agents which never change their own opinions (though they may certainly influence others). More generally, multiple studies \cite{Asch55, DG55, LRL79} have confirmed that even when agents are not stubborn, they usually have a {\em conformity bias}, i.e., they assign more weight to opinions that are already close to their own. This notion gives rise to the definition of {\em influence systems} or {\em flocking models}. \begin{figure*}[t] \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth, height=2cm]{nonfixedbd.png} \caption{No freezing time} \label{fig:nofrz} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth, height=2.5cm]{nonfixedbd2.png} \caption{Time to converge can be arbitrarily long} \label{fig:initdep} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth, height=4cm]{orderchange.png} \caption{Order of agents can change} \label{fig:noorder} \end{subfigure} \caption{Properties of social HK systems} \label{fig:sochk} \end{figure*} The most popular such flocking model is the {\em Hegselmann-Krause} system \cite{Krause97, HK02}, and this is the system that we focus on here.
In its simplest incarnation, the system consists of $n$ agents placed on the real line, with the $i$'th agent at $x_t(i)$ at time $t$, and at every time step $t \geq 0$, the positions update as follows for all $i \in [n]$ synchronously: \begin{equation}\label{eqn:hkupdate} x_{t+1}(i) = \frac{1}{|\mathcal N_{t}(i)|} \sum_{j \in \mathcal N_{t}(i)} x_{t}(j) \end{equation} where $\mathcal N_{t}(i) = \{j : |x_t(j) - x_t(i)|\leq 1\}$. Here, $1$ is the {\em confidence bound}, as each agent only has confidence in those agents which are within this bound. The HK system has become quite popular as a mathematically clean and simple formalization of an endogenous dynamical system that captures interesting qualitative properties, such as polarization and conformity bias, found in opinion dynamics\footnote{{\small Moreover, the HK system can also model robotic rendezvous problems in the plane and in space \cite{BCM09}.}}. The expectation is that a systematic understanding of the dynamics of the HK system can lend insight into more detailed models and, hopefully, (some aspects of) reality. Indeed, convergence for the HK system is immediate, and the time complexity needed for the system to freeze is known up to a linear factor in $n$ (the best upper bound is $O(n^3)$ \cite{BBCN13,MT13} and the best lower bound $\Omega(n^2)$ \cite{WH14}). Note that it is not a priori clear that there exists a bound which depends only on $n$ and is independent of the agents' initial positions. But perhaps surprisingly, it has turned out that changing the model in even very simple ways leads to problems which we cannot handle mathematically. For example, one of the most common variations is to let the confidence bound depend on the agent \cite{Lorenz10, MB12}. That is, suppose $\mathcal N_t(i) = \{j : |x_t(j) - x_t(i)| \leq r_i\}$ for some $r_i \geq 0$. This is called the {\em heterogeneous Hegselmann-Krause} system.
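As a concrete reference point, one synchronous step of this update — with optional per-agent confidence bounds $r_i$, which yields the heterogeneous variant — might be sketched as follows. This is an illustrative NumPy sketch, not code from the literature.

```python
import numpy as np

def hk_step(x, r=None):
    """One synchronous HK update. r gives per-agent confidence bounds:
    all ones for the standard model, agent-dependent for heterogeneous HK."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = np.ones_like(x)
    # N_t(i) = {j : |x(j) - x(i)| <= r_i}; each agent is its own neighbor.
    mask = np.abs(x[None, :] - x[:, None]) <= np.asarray(r)[:, None]
    return mask @ x / mask.sum(axis=1)

def run_until_frozen(x, tol=1e-12, max_steps=10**5):
    """Iterate the standard HK update until no agent moves more than tol."""
    x = np.asarray(x, dtype=float)
    for t in range(max_steps):
        x_new = hk_step(x)
        if np.max(np.abs(x_new - x)) <= tol:
            return x_new, t
        x = x_new
    return x, max_steps
```

For example, starting from positions $(0, 0.8, 1.6, 5)$, the first three agents merge into a single cluster at $0.8$ within a couple of steps, while the agent at $5$, outside everyone's confidence bound, never moves.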
A rigorous proof of its convergence is still missing, although convergence seems clear from simulations! This situation makes clear that we need to develop new technical tools in order to understand the dynamics of influence systems. Indeed, for general influence systems, Chazelle, in an impressive sequence of papers \cite{Chazelle11, Chazelle12}, developed a new algorithmic calculus to show that general influence systems are almost surely asymptotically periodic. However, these general results do not imply convergence for the specific case of heterogeneous HK. \subsection{Our Contributions} In this work, we rigorously study the convergence behavior of two variations of the HK model. Apart from their intrinsic interest from a sociology perspective, their analysis seems to pose similar mathematical difficulties as heterogeneous HK. Nevertheless, we show that for both these systems, we can develop tools to understand their convergence behavior. Specifically, we study the following two models: \begin{itemize} \item \textbf{Social Hegselmann-Krause}: One criticism of the Hegselmann-Krause model is that it only considers informational influence and ignores normative effects altogether. It ignores the fact that individuals belong to groups, and generally, information exchange only occurs between individuals in the same group. To model this fact, we assume that there exists an underlying social network such that two agents can interact with each other only when there is an edge between the two in this graph.
We formally define the {\em social Hegselmann-Krause} system as follows: given an undirected graph $G$ on $n$ nodes and a collection of $n$ agents initially at positions $x_0(1), \dots, x_0(n) \in \mathbb{R}$ respectively, the social HK system updates the agents' positions synchronously for $t \geq 1$ according to Equation (\ref{eqn:hkupdate}) where\footnote{{\small $E(G)$ denotes the edge set of $G$.}} $\mathcal N_t(i) = \{j : (i, j) \in E(G) \text{ and } |x_t(j) -x_t(i)| \leq 1\}$. The social HK model differs in some very basic ways from the usual HK model, as shown in \cref{fig:sochk}. First of all, it is no longer true that the agents freeze after some time; agents can keep moving by some tiny amount indefinitely as the example in \cref{fig:nofrz} shows. We might hope though that, for every $\epsilon > 0$, after some time bound that depends on $n$ and $\epsilon$, the points stay within intervals of $\epsilon$. Even this is not true as the situation in \cref{fig:initdep} shows; there, by making $\delta >0$ arbitrarily small, the agent initially at $0$ can take arbitrarily long to ``see'' the agent initially at $2-\delta$. Finally, unlike the usual HK system, the agents do not preserve their order, as is clear from \cref{fig:noorder}. As far as we know, we are the first to formally study the convergence behavior of the social HK model. Fortunato \cite{Fortunato05} also investigated the same system but to address a very different problem: Given a random initial configuration of agents in the interval $[0,1]$, what is the minimum confidence bound that ensures that the agents come to a consensus when the dynamics is that of social HK on a random graph of degree $d$?
Fortunato's empirical result\footnote{{\small A similar empirical result \cite{Fortunato04} has been rigorously proven \cite{LU07} for the closely related Deffuant-Weisbuch model~\cite{WDAN02} on a social network.}} is that this threshold is $\sim 0.2$ when $d=\omega(1)$ and is $0.5$ when $d$ is constant. Perhaps in the style typical of physicists, their work focused on the equilibrium outcome, whereas we study the transient behavior. Given {\em $\epsilon$}, let us call a step of the dynamical system {\em $\epsilon$-non-trivial} if at least one pair of interacting agents is separated by distance more than {\em $\epsilon$}. \begin{theorem}\label{thm:socmain} Given an arbitrary initial configuration of $n$ agents evolving according to the social HK model defined by an arbitrary graph, the number of $\epsilon$-non-trivial steps in the dynamical system is $O(n^5/\epsilon^2)$. \end{theorem} Chazelle's result \cite{Chazelle11} implies an $n^{O(n)}$ bound for this system whereas our bound is polynomial. We also show that the same bound holds when the social network itself changes with time, provided its evolution follows certain constraints. Informally, we require that if two agents interact at time $t$ and they are within each other's confidence bound at time $t+1$, then they should keep on interacting at time $t+1$. In particular, if edges are never deleted from the social network, then only a polynomial number of non-trivial steps take place. \item \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth, height=4cm]{nondetorder.png} \end{center} \caption{Order not preserved by non-deterministic dynamics} \label{fig:nondet} \end{figure} \textbf{Non-deterministic Hegselmann-Krause}: A different criticism of the HK system is that it is very rigid in that each agent must move to exactly the mass center of the agents within unit distance.
In particular, if two agents have the same set of agents within unit distance, then they move to exactly the same opinion at the next time step (and stay together thereafter). This is clearly not very realistic, as the effects of chance and variation are not taken into account. To address these issues, for any fixed $\epsilon \in [0,1]$, we formally define the {\em $\epsilon$-non-deterministic Hegselmann-Krause} system. The system again consists of $n$ agents placed on the real line, with the $i$'th agent at $x_t(i)$ at time $t$, and at every time step $t \geq 0$, the positions update for all $i\in[n]$ synchronously as: \begin{equation} x_{t+1}(i) = x_t(i) + (1 + \epsilon_{i,t}) \sum_{j \in \mathcal N_t(i)} \frac{x_t(j)-x_t(i)}{|\mathcal N_t(i)|} \end{equation} where $\epsilon_{i,t} \in [-\epsilon, \epsilon]$ for every $i$ and $t$ independently, and $\mathcal N_t(i) = \{j: |x_t(j) - x_t(i)|\leq 1\}$. Note that we term the system ``non-deterministic'' instead of ``noisy'', because the $\epsilon_{i,t}$'s are not assumed to be random. In fact, they can be entirely arbitrary values in $[-\epsilon,\epsilon]$, even chosen by an adversary depending on the current state. However, note that consensus remains a fixed point of this dynamics. The dynamics of non-deterministic HK is quite different from that of HK. As \cref{fig:nondet} shows, the order of agents can change even when there are two agents and $\epsilon$ is arbitrarily small. Also, the fact that agents do not coalesce together once they see the same set of agents complicates the system's behavior significantly. The work here is the first to handle such a general type of non-determinism in a convergence analysis. In \cite{BBCN13}, a one-sided version of noise was discussed, but there, the authors could adapt the argument to bound the convergence time of exact HK in a simple way. In contrast, we are not able to do so here. Our main result is: \begin{theorem}\label{thm:ndmain} Suppose $\epsilon < \frac{1}{4n^2}$.
Starting with an arbitrary initial arrangement of $n$ agents evolving according to the non-deterministic HK model, the number of steps before which all the agents are confined within intervals of length $\rho$ thereafter is $O(n^4 + \log(1/\rho)/\log n)$. \end{theorem} Again, in this case too, Chazelle's general result on bidirectional influence systems \cite{Chazelle11} implies an $n^{O(n)}$ bound whereas we show a polynomial bound. \end{itemize} \subsection{Our Techniques} \paragraph{Social HK.} In \cite{BBCN13}, the classical HK system is shown to converge in $O(n^3)$ steps by simply showing that the diameter of the system must shrink by a significant amount at each time step, unless the leftmost and rightmost agents have already frozen. It is not clear how to extend this proof approach to the social HK system, because as \cref{fig:initdep} shows, the leftmost agent may not be frozen but may take an arbitrarily long time before it moves by a significant amount. Therefore, we must search for some other energy function, which is not the diameter but which is also decreasing. In fact, such an energy function has already been introduced \cite{RMF08}. For a given configuration $x = (x(1), \dots, x(n))$, define $\mathcal E(x) = \sum_{(i,j)} \min(|x(i)-x(j)|^2, 1)$. The fact that this energy is decreasing is an important ingredient in several previous works \cite{BHT07, BHT09, BBCN13, EB14}. We generalize this energy function to the social HK setting as: $\mathcal E(x) = \sum_{i\sim_t j} |x(i) - x(j)|^2 + \sum_{i \not\sim_t j} 1$, where $i \sim_t j$ means that agents $i$ and $j$ interact at time $t$. To give a lower bound on the decrease of this energy function, we use an elegant approach introduced recently by Martinsson \cite{Martinsson15}. Martinsson relates the decrease to the eigenvalue gap of the communication graph, which is well-known to be $\geq 1/\mathrm{poly}(n)$. We show that the same approach applies to the social HK system also.
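This energy decrease is easy to observe numerically; below is a minimal one-dimensional sketch (assuming NumPy; the random graph and initial positions are illustrative and not taken from the paper's experiments).

```python
import numpy as np

def social_hk_step(x, A):
    """One synchronous social HK step in one dimension: agent j influences
    agent i only if (i, j) is an edge of A and |x_i - x_j| <= 1."""
    comm = A & (np.abs(x[None, :] - x[:, None]) <= 1.0)
    return comm @ x / comm.sum(axis=1)

def energy(x, A):
    """E(x): squared distance for each interacting ordered pair, plus 1 for
    each non-interacting ordered pair (self-pairs interact at distance 0)."""
    comm = A & (np.abs(x[None, :] - x[:, None]) <= 1.0)
    return float(np.sum(np.where(comm, (x[None, :] - x[:, None]) ** 2, 1.0)))

# Illustrative random instance.
rng = np.random.default_rng(1)
n = 8
A = rng.random((n, n)) < 0.5
A = A | A.T
np.fill_diagonal(A, True)            # each agent always sees itself
x = 3 * rng.random(n)

energies = [energy(x, A)]
for _ in range(20):
    x = social_hk_step(x, A)
    energies.append(energy(x, A))
# The energy is non-increasing along the trajectory.
```

Running such a simulation on random instances shows the energy decreasing monotonically, consistent with the bound developed below.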
We get that the energy decrement in every $\epsilon$-non-trivial step is $\Omega(\frac{\epsilon^2}{n^3})$, and as the energy of any initial configuration lies between 0 and $O(n^2)$, this gives an upper bound of $O(\frac{n^5}{\epsilon^2})$ on the number of $\epsilon$-non-trivial steps. This approach continues to apply even when the social network graph is changing under some natural restrictions. \paragraph{Non-deterministic HK.} To analyze the non-deterministic HK system, we use a mixture of geometric and algebraic techniques similar to the ones presented in \cite{BBCN13}.
However, the technical details are significantly more involved. The basic approach in \cite{BBCN13} is to look at the neighbor immediately to the right of the leftmost agent and analyze its influence. However, in our case, we need to partition the neighbourhood of the leftmost agent into two subsets and treat the subsets collectively. One might also notice that proving a lower bound on the movement of the leftmost agent as in \cite{BBCN13} is not enough, as the order of positions is not preserved and the leftmost agent may change over time. To overcome this difficulty, we lower bound the amount by which the diameter (the difference between the rightmost and the leftmost agents, whose identities change with time) shrinks within $n$ time steps; otherwise, we show that some agents separate out, leaving behind sub-systems with fewer agents, which then converge independently. \section{Social HK model}\label{sec:soc} We reformulate the social HK model in the multidimensional setting. This is very natural when an opinion consists of positions along multiple axes instead of just one. Let $x_t(i) \in \mathbb{R}^d$ (for $d \geq 1$) be the position of the $i$th agent at time $t$, and let $G$ be a fixed undirected graph on $n$ nodes. The dynamics is given by the following equation: \begin{equation} \label{e1} x_{t+1}(i) = \frac{1}{|\mathcal N_t(i)|} \sum_{j \in \mathcal N_t(i)} {x_t(j)} \end{equation} where $\mathcal N_t(i) = \{j : (i,j) \in E(G) \text{ and }\|x_t(j) - x_t(i)\|_2 \leq 1\}$. We denote the new state at time $t+1$ by: \begin{equation} \label{e2} x_{t+1} = P_t x_t \end{equation} where $P_t$ is a row-stochastic matrix. As mentioned in the Introduction, our proof follows the same line as the recent analysis by Martinsson \cite{Martinsson15} but with some twists due to the presence of the social network graph.
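The matrix form $x_{t+1} = P_t x_t$ above can be made concrete as follows; this is an illustrative NumPy sketch with a hypothetical random graph and random positions, not the authors' code.

```python
import numpy as np

def transition_matrix(x, G):
    """Row-stochastic matrix P_t of the multidimensional social HK update:
    row i averages uniformly over the agents that are adjacent to i in the
    social network G and within unit Euclidean distance of agent i."""
    diff = x[:, None, :] - x[None, :, :]               # pairwise differences
    comm = G & (np.linalg.norm(diff, axis=2) <= 1.0)   # communication graph C_x
    return comm / comm.sum(axis=1, keepdims=True)

# Illustrative instance in d = 2 dimensions.
rng = np.random.default_rng(2)
n, d = 6, 2
G = rng.random((n, n)) < 0.6
G = G | G.T
np.fill_diagonal(G, True)          # self-loops: each agent counts itself
x = 2 * rng.random((n, d))

P = transition_matrix(x, G)        # P_t is row-stochastic by construction
x_next = P @ x                     # one step of the dynamics
```

Since each row of $P_t$ sums to $1$ and has non-negative entries, the update keeps every agent inside the convex hull of its neighbors' positions.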
For any configuration $x = (x(1), x(2), \dots, x(n))$ of $n$ agents, define the {\em communication graph} $C_x$ so that two nodes $i$ and $j$ are adjacent exactly when $(i,j) \in E(G)$ and $\|x(i) - x(j)\|_2 \leq 1$. That is, the communication graph is now the conjunction of the social network $G$ and the standard HK communication graph. Also, for a configuration $x$ of $n$ agents, we define its {\em energy} as \begin{equation} \label{e3} \mathcal E(x) = \sum_{(i,j) \in E(C_x)}{\|x(i)-x(j)\|_2^2} + \sum_{(i,j) \notin E(C_x)}{1} \end{equation} Note that the energy of any configuration lies between 0 and $n^2$. For the standard HK system, a very useful fact is that the energy $\mathcal E(x_t)$ is non-increasing in $t$; see Theorem 2 of \cite{RMF08} for a proof. In fact, this fact is the driver for the bound on the freezing time of multidimensional HK found in \cite{BBCN13}. Our proof shows that the same energy decreases over time for the social HK system also. For a given state $x$ and for any ordered pair $(i,j) \in [n]^2$, we say that $(i, j)$ is {\em active} if $(i,j) \in E(C_x)$. We consequently define the {\em active part of the energy of $x$} as \begin{equation} \label{e4} \mathcal E_{act}(x) = \sum_{(i,j) \text{ active}}{\|x(i)-x(j)\|_2^2} =\sum_{(i,j) \in E(C_x)}{\|x(i)-x(j)\|_2^2} \end{equation} Now, let $\{x_t\}$ be a sequence of configurations evolving according to (\ref{e2}). For simplicity of notation, let $E_t$ denote the edge set of the communication graph $C_{x_t}$. \begin{lemma}\label{lem:init} \begin{align*} \sum_{(i,j) \in E_{t+1}}&{\|x_{t+1}(i)-x_{t+1}(j)\|_2^2} + \sum_{(i,j) \notin E_{t+1}}{1} \\ &\leq \sum_{(i,j) \in E_{t}}{\|x_{t+1}(i)-x_{t+1}(j)\|_2^2} + \sum_{(i,j) \notin E_{t}}{1} \end{align*} \end{lemma} \begin{proof} There are four cases to consider: \begin{enumerate} \item \textbf{$\bm{(i,j) \in E_t}$ and $\bm{(i,j) \in E_{t+1}}$}\\ In this case, we are adding the same term ($\|x_{t+1}(i)-x_{t+1}(j)\|_2^2$) to both the LHS and the RHS.
\item \textbf{$\bm{(i,j) \notin E_t}$ and $\bm{(i,j) \notin E_{t+1}}$}\\ In this case too, we are adding the same term (1) to both the LHS and the RHS. \item \textbf{$\bm{(i,j) \in E_t}$ and $\bm{(i,j) \notin E_{t+1}}$}\\ In this case, note that $\|x_{t+1}(i)-x_{t+1}(j)\|_2^2>1$: since $(i,j) \in E_t$, we have $(i,j) \in E(G)$, so $(i,j) \notin E_{t+1}$ can only hold because agents $i$ and $j$ are more than unit distance apart at time $t+1$. Hence, in this case, we are adding a greater term to the RHS ($\|x_{t+1}(i)-x_{t+1}(j)\|_2^2>1$) than to the LHS (1). \item \textbf{$\bm{(i,j) \notin E_t}$ and $\bm{(i,j) \in E_{t+1}}$}\\ Since $(i,j) \in E_{t+1}$, $\|x_{t+1}(i)-x_{t+1}(j)\|_2^2 \leq 1$. Hence, in this case too, we are adding a term (1) to the RHS which is at least the term ($\|x_{t+1}(i)-x_{t+1}(j)\|_2^2 \leq 1$) added to the LHS. \end{enumerate} As the inequality is true term-wise, we have LHS $\leq$ RHS. \end{proof} \newtheorem{proposition}{Proposition} \begin{proposition}[Proposition 2.2 in \cite{Martinsson15}] For each $t \geq 0$, let $$\lambda_t = \max \{ |\lambda| : \lambda \neq 1 \text{ is an eigenvalue of } P_t \}.$$ Then: \begin{equation} \label{e6} \mathcal E(x_t)-\mathcal E(x_{t+1}) \geq (1-\lambda_t^2)\mathcal E_{act}(x_t) \end{equation} \end{proposition} \begin{proof} We reproduce the proof from \cite{Martinsson15} for completeness. Let $A_t$ denote the adjacency matrix of $C_{x_t}$, and let $D_t$ denote its degree matrix, that is, the diagonal matrix whose $(i,i)$'th element is given by the degree of $i$ (recall that every vertex in $C_{x_t}$ has an edge to itself). Observe that $P_t = D_t^{-1}A_t$. Recall $E_t= E(C_{x_t})$. We have: \begin{align*} \mathcal E(x_t) &= \sum_{(i,j) \in E_t}{\|x_t(i)-x_t(j)\|_2^2} + \sum_{(i,j) \notin E_t}{1}\\ &= 2\text{Tr}(x_t^T(D_t-A_t)x_t) + \sum_{(i,j) \notin E_t}{1} \end{align*} where $\text{Tr}(\cdot)$ denotes the trace. Here, we interpret $x_t$ as an $n \times d$ matrix.
Now consider: \begin{align*} \mathcal E(x_{t+1}) &= \sum_{(i,j) \in E_{t+1}}{\|x_{t+1}(i)-x_{t+1}(j)\|_2^2} + \sum_{(i,j) \notin E_{t+1}}{1} \\ &\leq \sum_{(i,j) \in E_{t}}{\|x_{t+1}(i)-x_{t+1}(j)\|_2^2} + \sum_{(i,j) \notin E_{t}}{1} &&\text{(by \cref{lem:init})}\\ & = 2\text{Tr}(x_{t+1}^T(D_{t}-A_{t})x_{t+1}) + \sum_{(i,j) \notin E_t}{1}\\ &= 2\text{Tr}((D_t^{-1}A_tx_t)^T(D_{t}-A_{t})D_t^{-1}A_tx_t) + \sum_{(i,j) \notin E_t}{1} \end{align*} Hence, it suffices to show that \begin{align} \label{e7} \text{Tr}((D_t^{-1}&A_tx_t)^T(D_{t}-A_{t})D_t^{-1}A_tx_t) \nonumber\\ &= \text{Tr}(x_t^TA_tD_t^{-1}(D_{t}-A_{t})D_t^{-1}A_tx_t)\nonumber\\ &\leq \lambda_t^2\text{Tr}(x_t^T(D_t-A_t)x_t) \end{align} Let $y_t = D_t^{\frac{1}{2}}x_t$ and $B_t = D_t^{\frac{1}{2}}P_t D_t^{-\frac{1}{2}} = D_t^{-\frac{1}{2}}A_tD_t^{-\frac{1}{2}}$. It is straightforward to show that (\ref{e7}) simplifies to \begin{equation} \label{e8} \text{Tr}(y_t^TB_t(I-B_t)B_ty_t) \leq \lambda_t^2\text{Tr}(y_t^T(I-B_t)y_t) \end{equation} This inequality follows by standard linear algebra. \end{proof} The following is a standard result in spectral graph theory: \begin{proposition}\label{prop22} For any $t \geq 0$, we have \begin{equation} \label{e10} | \lambda_t| \leq 1- \frac{1}{n^2\,\text{diam}(C_{x_t})} \end{equation} where $\text{diam}(C_{x_t})$ denotes the graph diameter of $C_{x_t}$. If $C_{x_t}$ is not connected, we interpret $\text{diam}(C_{x_t})$ as the largest diameter of any connected component of $C_{x_t}$. \end{proposition} We are now ready to prove our main theorem. Observe that $\mathcal E_{act}(x_t) > \epsilon^2$ whenever the $t$'th step is $\epsilon$-non-trivial. \cref{thm:socmain} is therefore immediately implied by the result below. \begin{theorem} For any $\epsilon > 0$, given a social HK system with $n$ agents in $\mathbb{R}^d$, there are $O(n^5/\epsilon)$ values of $t$ for which $\mathcal E_{act}(x_t) > \epsilon$.
\end{theorem} \begin{proof} Given any initial configuration, we have $\text{diam}(C_{x_t}) \leq n$ (or else, we can decompose the system into independent subsystems and analyze each separately). Applying Proposition \ref{prop22}, it follows that the energy decrement in each step with $\mathcal E_{act}(x_t) > \epsilon$ is $\Omega(\frac{\epsilon}{n^3})$, and such steps can hence occur at most $O(\frac{n^5}{\epsilon})$ times. \end{proof} \subsection{Changing social network}\label{sec:friendly} One may ask what happens to the convergence rate when the social network itself evolves with time and is not fixed. Let $G_t$ denote the social network graph at time $t$. To make sure that the above proof carries through, we need to suitably restrict the evolution of $G_t$. Given a sequence of configurations $x_t = (x_t(1), \dots, x_t(n))$, we again have the communication graph $C_{x_t}$ where two nodes $i$ and $j$ are adjacent if $(i,j) \in E(G_t)$ and $\|x_t(i) - x_t(j)\|_2 \leq 1$. As before, let $E_t = E(C_{x_t})$.\\[-1em] \begin{definition} Call a social HK system defined by a sequence of time-varying social networks $G_t$ {\em friendly} if, whenever $(i,j) \in E_t$ and $\|x_{t+1}(i) - x_{t+1}(j)\|_2 \leq 1$, we have $(i,j) \in E(G_{t+1})$ (and hence $(i,j) \in E_{t+1}$). \end{definition} In other words, in a friendly HK system, if two agents interact at time $t$, and they stay within distance $1$ in the next time step, then they keep interacting with each other at time $t+1$. Note that the evolution of $G_t$ may be endogenous (i.e., depend on the states $x_t$). We observe that under this natural condition of friendliness, the above proof goes through without any changes. \begin{theorem} For any $\epsilon > 0$, given a friendly social HK system with $n$ agents in $\mathbb{R}^d$, there are $O(n^5/\epsilon)$ values of $t$ for which $\mathcal E_{act}(x_t) > \epsilon$.
\end{theorem} \begin{proof} The only part of the proof which needs a second look is case 3 in the proof of \cref{lem:init}. Here, the definition of friendliness ensures that if $(i,j) \notin E_{t+1}$ and $(i,j) \in E_t$, then $\|x_{t+1}(i) - x_{t+1}(j)\|_2 > 1$. \end{proof} Note that without the friendliness assumption, Chazelle \cite{Chazelle11} shows a bound of $n^{O(n)}$ for the number of non-trivial steps. We conjecture that the friendliness assumption is necessary for a polynomial bound. \subsection{Experimental Results}\label{sec:exp} \begin{figure} \begin{center} \includegraphics[width=0.7\textwidth]{Friendship_Graph} \end{center} \caption{{\small The plot shows the convergence time for social HK systems when the initial states of the $n$ agents are uniform in $[1,n]$ and the social network is the random graph $G(n,p)$ of edge density $p$. $n$ was varied from $1000$ to $10000$ as shown, and $p$ was varied from $0.01$ to $1$ in steps of size $0.01$. See the text for a discussion of ``convergence time''.}} \label{fig:exp} \end{figure} Our analysis above is very general in the sense that our bound for the number of non-trivial steps does not depend on the structure of the social network. Is it true that some social network structures allow faster convergence than others? As a first cut at this question, we explore how the edge density of the social network affects the dynamics. Let $G(n,p)$ be the Erd\H{o}s--R\'enyi random graph on $n$ nodes, where each pair of nodes is connected by an edge with probability $p$ independently. How does $p$ change the time needed to converge? We study this question when the initial positions of the agents are uniform in the interval $[1,n]$. \cref{fig:exp} summarizes the results of our computer experiments. We need to clarify what we mean by ``convergence time'' in the figure. As \cref{fig:initdep} shows, the time needed to converge can, in general, be arbitrarily long.
However, it seems that if the initial positions of the agents are chosen randomly, such pathological cases occur with probability $0$, so that after a finite time, all agents stay confined to intervals of length $10^{-6}$ thereafter. We do not have a rigorous proof of this claim, but all our simulations support it. In view of this, the notion of convergence time we used to arrive at \cref{fig:exp} was, for any given $n$ and $p$, the least time $t$ at which the sum of the movements of all the $n$ agents is less than $10^{-6}$, averaged over $1000$ random initializations of the agents and random graphs from $G(n,p)$. What is most interesting about \cref{fig:exp} is that there exists a value of $p$ between $0$ and $0.3$ for which the convergence time is maximized. When $p$ is close to $0$, the communication graph consists of many small disconnected components, and convergence occurs fast. $p=1$ corresponds to the standard HK model. Somewhere in between, the time needed to converge reaches a maximum. The lesson seems to be that opinions take the longest time to converge when the probability of two agents interacting is neither too small nor too large. We also conducted the experiment when the social network was chosen from the Barab\'asi-Albert generative model for scale-free networks. The results there (not shown) are qualitatively similar. Note that the maximizing value of $p$ seems to decrease slowly with $n$ in \cref{fig:exp}. It is not clear what the limiting behavior is as $n$ grows. Also, we currently do not have any analytical way to understand the experimental results. \section{Non-Deterministic Hegselmann-Krause Model} In this section, we analyze the non-deterministic HK model.
Recall that the update rule for the non-deterministic HK model is: \begin{equation}\label{eqn:fund} x_{t+1}(i)=x_{t}(i)+(1 + \epsilon_{i,t}) \frac{\sum_{j \in \mathcal N_{t}(i)} (x_{t}(j) - x_{t}(i))}{|\mathcal N_{t}(i)|} \end{equation} where $\epsilon_{i,t}$ is an arbitrary number in the interval $[-\epsilon, \epsilon]$, for every $i \in [n]$ and $t \geq 0$. We analyze the time needed for convergence when $\epsilon$ is sufficiently small. We first establish some notation. Let $\ell(t)$ be the index of the leftmost agent at time $t$. As already noted in the Introduction, $\ell(t)$ can change with $t$. Also, let $x_t(\ell)$ and $\mathcal N_t(\ell)$ be shorthand for $x_t(\ell(t))$ and $\mathcal N_t(\ell(t))$ respectively. Similarly, let $r(t)$ be the index of the rightmost agent at time $t$, and let $x_t(r)$ and $\mathcal N_t(r)$ denote $x_t(r(t))$ and $\mathcal N_t(r(t))$ respectively. \begin{lemma}\label{l1} Suppose $\epsilon < \frac{1}{n-1}$. Then, for any agent $i \in [n]$ and for all $t \geq 0$, $x_{t+1}(i) \ge x_{t}(\ell)$. In particular, $x_{t+1}(\ell) \geq x_t(\ell)$. \end{lemma} \begin{proof} Let $\delta =x_{t}(i)-x_{t}(\ell)$. At time $t+1$, without noise, agent $i$ can move to the left by at most $\delta (1-\frac{1}{n})$. By substituting in (\ref{eqn:fund}), we get: $$x_{t+1}(i) \ge x_{t}(i) - (1 + \epsilon) \delta \cdot \left(1-\frac{1}{n}\right)$$ Since $\epsilon < \frac{1}{n-1}$, $x_{t+1}(i) \geq x_t(i) - \delta = x_t(\ell)$. \end{proof} \begin{remark} By exactly the same reasoning, the rightmost agent does not move to the right over time if $\epsilon < \frac{1}{n-1}$.
\end{remark} \begin{definition}For all $t \geq 0$, define the following sets:\\ \begin{align*} L(t)&=\{i \in \mathcal N_{t}(\ell) \mid \mathcal N_{t}(i)=\mathcal N_{t}(\ell) \}\\ S(t)&=\{i \in \mathcal N_{t}(\ell) \mid \mathcal N_{t}(i) \neq \mathcal N_{t}(\ell) \}\\ T(t)&=\{ i \in [n] \mid i \notin \mathcal N_{t}(\ell) \}\\ M(t)&=S(t) \cup T(t) \end{align*} \end{definition} We next show that any agent in $M(t)$ actually satisfies \cref{l1} with strict inequality. \begin{lemma} \label{lem:t} For any agent $i \in T(t)$ and for all $t \geq 0$, $x_{t+1}(i) \ge x_{t}(\ell) + \frac{1}{n}-\epsilon$. \end{lemma} \begin{proof} Consider $i \in T(t)$ and let $k = |\mathcal N_t(i)|$. \begin{align*} x_{t+1}(i) & = x_{t}(i) + (1 + \epsilon_{i,t}) \sum_{j \in \mathcal N_{t}(i)} \frac{x_{t}(j)-x_{t}(i)}{|\mathcal N_{t}(i)|}\\ & \ge x_{t}(i) - (1 + \epsilon)\frac{k-1}{k} \\ & \ge x_{t}(\ell) +1-(1 + \epsilon)(1-\frac{1}{n}) \\ & \ge x_{t}(\ell) + \frac{1}{n} - \epsilon \end{align*} \end{proof} \begin{lemma}\label{lem:s} For any agent $i \in S(t)$ and for all $t \geq 0$, $x_{t+1}(i) \ge x_{t}(\ell) + \frac{1}{n}-\epsilon$ \end{lemma} \begin{proof} Consider $i \in S(t)$. Let $k= |\mathcal N_{t}(\ell)|$ and $ \delta= x_{t}(i)-x_{t}(\ell)$. Then the position of agent $i$ at time $t+1$ is bounded as follows: \begin{align} x_{t+1}(i) & \geq x_{t}(i) - ( 1 + \epsilon_{i,t} ) \frac{ \delta ( k-2 ) + 0 - (1 - \delta)}{k} \label{eqn:chg1} \\ & \geq x_{t}(i) - ( 1 + \epsilon_{i,t} ) \frac{ \delta ( k-1 ) + 0 - (1 - \delta)}{k} \\ &= x_{t}(i) - (1 + \epsilon_{i,t}) ( \delta - \frac{1}{k} ) \nonumber \\ & \geq x_{t}(\ell) + \delta - (1 + \epsilon_{i,t}) ( \delta - \frac{1}{n} ) \nonumber \\ & \geq x_{t}(\ell) + \frac{1}{n}- \epsilon \nonumber \end{align} \end{proof} Combining \cref{lem:t} and \cref{lem:s}: \begin{corollary}\label{c1} For any agent $i \in M(t)$ and $t \geq 0$, $x_{t+1}(i) \ge x_{t}(\ell) + \frac{1}{n}-\epsilon$ \end{corollary} \begin{lemma}\label{l5} Suppose $\epsilon < \frac{1}{n-1}$.
For any $t \geq 0$, if $S(t) = \emptyset$, then $\mathcal N_{t}(\ell) = L(t) $ evolves as an independent system and, for any $\rho > 0$, all the agents in $L(t)$ lie within an interval of length at most $\rho$ within $O(\log(1/\rho)/\log(1/\epsilon))$ time steps. \end{lemma} \begin{proof} Because $S(t) = \emptyset$, $\mathcal N_{t}(\ell) =\mathcal N_{t}(i) $ for all $i \in \mathcal N_{t}(\ell)$. Then, by definition, \begin{align*} x_{t+1}(i) &=a_{t+1}(i) + \epsilon_{i,t} b_{t+1}(i) \end{align*} where $ a_{t+1}(i)=\frac{\sum_{j \in \mathcal N_{t}(i)} x_{t}(j)}{|\mathcal N_{t}(i)|}$ and $ b_{t+1}(i)= \frac{\sum_{j \in \mathcal N_{t}(i)} (x_{t}(j) - x_{t}(i))}{|\mathcal N_{t}(i)|}$ \newtheorem{obs}{Observation} \begin{obs} For any $i,j \in \mathcal N_{t}(\ell)$, we have $\mathcal N_{t}(i)=\mathcal N_{t}(j) \implies$ $a_{t+1}(i)=a_{t+1}(j)$ \end{obs} \begin{obs} Since $\epsilon < \frac{1}{n-1}$, by \cref{l1}, for any agent $i \in \mathcal N_{t}(\ell)$, $\mathcal N_{t+s}(i)=\mathcal N_{t}(i)$ for all positive integers $s$. \end{obs} Let $\delta_{t} := x_{t}(r)-x_{t}(\ell) \leq 1$. Then: \begin{align*} \delta_{t+1} = \epsilon_{r(t+1),t} \, b_{t+1}(r(t+1)) - \epsilon_{\ell(t+1),t} \, b_{t+1}(\ell(t+1)) \leq 2 \epsilon \cdot b_{t+1}^{\max} \end{align*} where $b_{t+1}^{\max} =\max_{i}(|b_{t+1}(i)|) \leq \delta_t$. Hence $ \delta_{t+1} \leq 2 \epsilon \delta_{t}$. Therefore, for any $\rho > 0$, all the agents will lie within an interval of length $\rho$ within $O(\log(1/\rho)/\log(1/\epsilon))$ time steps. \end{proof} \begin{lemma} At any time $t \geq 0$, one of the following three cases must occur: \begin{enumerate} \item[S1)] $S(t+1) = \emptyset$ \item[S2)] $|L(t+1)| < |L(t)| $ \item[S3)] $M(t) \cap \mathcal N_{t+1}(\ell) \neq \emptyset$ \end{enumerate} \end{lemma} \begin{proof} Assume Case S3 does not occur, meaning for all $s \in M(t)$, $s \notin \mathcal N_{t+1}(\ell)$. Then, we show that either case S1 or S2 occurs.
By our assumption, $\mathcal N_{t+1}(\ell) \subseteq \mathcal N_{t}(\ell) \setminus S(t) = L(t)$. Hence, $L(t+1) = \mathcal N_{t+1}(\ell) \setminus S(t+1) \subseteq L(t) \setminus S(t+1)$. Now, either \begin{itemize} \item $S(t+1) = \emptyset$, which is case S1. \item $S(t+1) \neq \emptyset$, and so, $L(t+1) \subseteq L(t) \setminus S(t+1) \subsetneq L(t)$, implying case S2. \end{itemize} \end{proof} Note that case S1 of the above Lemma is irreversible in the sense that by \cref{l5}, each time it occurs, a subset of the agents converges independently into an interval of arbitrarily small length $\rho$. We next establish that case S3 can also occur only a finite number of times. \begin{lemma}\label{lem:s3} Suppose $\epsilon < \frac{1}{4n^2}$. If $M(t) \cap \mathcal N_{t+1}(\ell) \neq \emptyset$, then for all $i \in [n]$, $x_{t+2}(i) \geq x_{t}(\ell) + \frac{1}{4n^2}$. \end{lemma} \begin{proof} Suppose $s \in M(t)\cap \mathcal N_{t+1}(\ell)$. Then, by \cref{c1}, $x_{t+1}(s)-x_{t}(\ell) \geq \frac{1}{n}-\epsilon \geq \frac{1}{n}-2\epsilon$. Now, we consider two cases. \begin{itemize} \item Suppose $x_{t+1}(\ell) > x_{t}(\ell) + \frac{1}{2n}-\epsilon$. Then, for any $i$, by \cref{l1}, $x_{t+2}(i) \geq x_{t+1}(\ell) > x_{t}(\ell) + \frac{1}{2n}-\epsilon \geq x_{t}(\ell) + \frac{1}{4n}$ ($\because \epsilon < \frac{1}{4n^2}$). \item Otherwise, suppose $x_{t+1}(\ell) \leq x_{t}(\ell) + \frac{1}{2n} - \epsilon$. Then, $x_{t+1}(s) - x_{t+1}(\ell) \geq \frac{1}{2n} - \epsilon$.
Now for any agent $i \in L(t+1)$, we have: \begin{align} x_{t+2}(i) &= x_{t+1}(i) + (1 + \epsilon_{i,t+1}) \sum_{j \in \mathcal N_{t+1}(i)} \frac{x_{t+1}(j)-x_{t+1}(i)}{|\mathcal N_{t+1}(i)|} \label{eqn:chg2}\\ &=\sum_{j \in \mathcal N_{t+1}(\ell)} \frac{x_{t+1}(j)}{|\mathcal N_{t+1}(\ell)|} + \epsilon_{i,t+1} \sum_{j \in \mathcal N_{t+1}(i)} \frac{(x_{t+1}(j) - x_{t+1}(i))}{|\mathcal N_{t+1}(i)|} && (\because i \in L(t+1)) \nonumber \\ &\ge x_{t+1}(\ell) + \frac{x_{t+1}(s)-x_{t+1}(\ell)}{n} - \epsilon (1-\frac{1}{n}) \nonumber\\ &\ge x_{t+1}(\ell) + \frac{(\frac{1}{2n}-\epsilon)}{n}-\epsilon (1-\frac{1}{n}) \nonumber\\ &\ge x_{t+1}(\ell) + \frac{1}{2n^2} - \epsilon \nonumber \end{align} If $\epsilon < \frac{1}{4n^2}$, then $x_{t+2}(i) > x_{t+1}(\ell) + \frac{1}{4n^2} \geq x_{t}(\ell) + \frac{1}{4n^2}$, by \cref{l1}. If $i \in M(t+1)$, then by \cref{c1}, we have $ x_{t+2}(i) \geq x_{t+1}(\ell) + \frac{1}{2n} \geq x_{t}(\ell) + \frac{1}{2n}$, the second inequality again due to \cref{l1}. So, the claim is proved for all $i$. \end{itemize} \end{proof} We are now ready to prove our main theorem.\\ \noindent \textbf{\cref{thm:ndmain} (recalled)} {\em Suppose $\epsilon < \frac{1}{4n^2}$. Starting with an arbitrary initial arrangement of $n$ agents evolving according to the non-deterministic HK model, the number of steps before which all the agents are confined within intervals of length $\rho$ thereafter is $O(n^4 + \log(1/\rho)/\log n)$.} \begin{proof} Since for any $t$, $|L(t)| \leq n$, case S2 can occur consecutively at most $n$ times. So, within every $n$ time steps, case {S1} or case {S3} must occur at least once. Case S1 can clearly occur at most $n$ times, whereas by \cref{lem:s3}, case S3 can occur at most $O(n^3)$ times. Hence, after $O(n^4)$ time steps, all agents lie in independent subsystems, each of diameter at most $1$.
Each of these subsystems, by \cref{l5}, clusters into intervals of length at most $\rho$ in $O(\log(1/\rho)/\log(1/\epsilon)) = O(\log(1/\rho)/\log n)$ time steps. \end{proof} \begin{remark}\label{rem:ndgen} Suppose we change the definition of non-deterministic HK models so that each agent is influenced non-uniformly by its neighbors. Specifically, let the update rule be: \begin{equation} x_{t+1}(i) = x_t(i) + \frac{\sum_{j \in \mathcal N_t(i)} (1+\epsilon_{i,j,t}) (x_t(j) - x_t(i))}{|\mathcal N_t(i)|} \label{eqn:nd2} \end{equation} where each $\epsilon_{i,j,t}$ is an arbitrary\footnote{perhaps generated endogenously} number generated from the interval $(-\epsilon, \epsilon)$. Most of the above proof needs no modification. (\ref{eqn:chg1}) changes to: \begin{align*} x_{t+1}(i) & \geq x_{t}(i) - \frac{ ( 1 + \epsilon)\delta ( k-2 ) + 0 - (1- \epsilon) (1 - \delta)}{k} \\ & \geq x_{t}(i) - \frac{ ( 1 + \epsilon)\delta ( k-1 ) + 0 - (1- \epsilon) (1 - \delta)}{k} \\ & = x_{t}(\ell) +\frac{1}{k} -\epsilon \left(\delta -2\delta/k + 1/k\right)\\ &\geq x_t(\ell) + \frac{1}{n} - 2\epsilon \geq x_t(\ell) + \frac{1}{2n} \end{align*} if $\epsilon < \frac1{4n^2}$. Everywhere else, the claims hold straightforwardly using the upper bound of $\epsilon$ on each $|\epsilon_{i,j,t}|$. Hence, for this system also, \cref{thm:ndmain} holds. An interesting aspect of (\ref{eqn:nd2}) is that an agent might move in the opposite direction from the one it would take in the classical HK model, whereas in (\ref{eqn:fund}), the direction of each agent's movement is the same as in classical HK. \end{remark} \section{Future Directions} There are a number of open directions suggested by the problems studied in this work. \begin{itemize} \item In our formulation of the social HK model, we required the underlying social network to be undirected. This leads to bidirectional dynamical systems. What happens if the social network is directed?
Proving convergence for the HK model with a directed social network seems quite challenging because it includes, as a special case, the ill-understood HK model with stubborn agents (i.e., there are some agents with confidence bound 0 while all others have confidence bound 1). To see this, let every non-stubborn agent have edges to all agents and every stubborn agent have no outgoing edges. \item We introduced the notion of friendly social HK systems in \cref{sec:friendly} and showed that these allow only a polynomial number of non-trivial steps. We conjecture that friendliness is necessary for a polynomial bound. Can one demonstrate a non-friendly HK system for which there are an exponential number of non-trivial steps? Is friendliness indeed a tight condition for a polynomial bound? \item Is there a rigorous justification for the empirical results reported in \cref{sec:exp}? In general, it would be interesting to understand the effect of the social network structure on the dynamics of the HK model. \item For the non-deterministic HK model, it is important to increase the range of $\epsilon$ for which \cref{thm:ndmain} is valid. Note that if $\epsilon_{i,t}$ is allowed to be in $[-1,0]$, then we could prove convergence for the HK system with stubborn agents (by simply setting $\epsilon_{i,t} = -1$ for all $t$ if agent $i$ is stubborn and $\epsilon_{i,t}=0$ otherwise). Moreover, in the generalized non-deterministic HK systems described in \cref{rem:ndgen}, if $\epsilon_{i,j,t}$ is allowed to be in $[-1,n-1]$, then we can simulate arbitrary heterogeneous HK systems (by setting $\epsilon_{i,j,t} = -1$ if $j \notin \mathcal N_t(i)$ and $\epsilon_{i,j,t} = \frac{|\{k : |x_t(k) - x_t(i)| \leq 1\}|}{|\mathcal N_t(i)|}-1$ otherwise). \item Can we prove a polynomial bound for the convergence of the non-deterministic HK model in multiple dimensions? Our current proof does not extend, while the approach used in \cref{sec:soc} seems sensitive to the presence of non-determinism.
\end{itemize} \section*{Acknowledgments} We thank Ashish Goel for very helpful discussions and Vinay Vasista for assisting with the experiments reported in \cref{sec:exp}. \bibliographystyle{alpha}
\section{Introduction} A number of dedicated observing programs have shown that most massive stars are formed in systems with at least one binary companion~\citep{2007ApJ...670..747K,2007ApJ...664.1102K,2009AJ....137.4608K,2012ApJ...751....4K,2012Sci...337..444S,2014ApJS..213...34K}. Those binaries in tight orbits can undergo one or more mass transfer phases where mass from a star, typically as it expands (e.g. in a giant phase) and overfills its Roche lobe, flows onto its companion. If the expansion is faster than the companion can incorporate the overflowing mass, the system can go through a common envelope (CE) phase where the expanding star envelops its companion, causing the core of the expanding star and its companion to share a common envelope. The CE phase causes the binary to tighten its orbit and is postulated to explain many of the tight-orbit, massive-star binaries~\citep{2013A&ARv..21...59I}. In the formation of a variety of massive star binaries including X-ray binaries and double compact object systems, the stellar system evolves through one, and often two, CE phases. In the first CE phase, the more massive star (primary) evolves off the main sequence, enveloping its companion. In some cases, the resultant tightening of the orbit produces a binary that is sufficiently close that, even after the subsequent collapse and explosion of the primary, the wind of the companion can accrete onto the neutron star. This is a common scenario behind the production of massive X-ray binaries. In some cases, when the companion star evolves off the main sequence, a second CE phase where a neutron star is enveloped by the companion occurs. 
This can tighten the orbit prior to the supernova explosion of the companion that produces binary pulsar systems and compact binaries that are believed to be the site of short-duration gamma-ray bursts \citep[GRBs;][]{1999ApJ...526..152F,1999MNRAS.305..763B}, including merging neutron star systems~\citep{2012ApJ...759...52D,2013ApJ...779...72D,2015ApJ...806..263D} like the one recently detected by advanced LIGO/Virgo~\citep{2017ApJ...848L..12A}. If the neutron star (NS) spirals into the core of the companion, it can produce a long-duration GRB, the so-called helium-merger model~\citep{1998ApJ...502L...9F}. The ultimate fate of the binary in this CE phase depends on the masses of the stars and the orbital separation at the onset of the phase. Many systems eject the hydrogen envelope, forming a binary consisting of a helium star and a NS. Others do not have sufficient orbital energy to eject the hydrogen envelope prior to merging with the helium core. These helium mergers were initially proposed to produce a long-lived giant star powered by a central, Eddington-rate-accreting neutron star, known as a Thorne-Zytkow object~\citep{1975ApJ...199L..19T}. However, calculations including neutrino processes found that most of the energy released in the neutron star accretion would be radiated efficiently through neutrinos, allowing the neutron star to accrete at the Bondi-Hoyle rate, causing it to rapidly collapse to a black hole~\citep{1996ApJ...460..801F,1998ApJ...502L...9F}. This helium-merger system, which forms an accreting black hole, became one of the proposed black hole accretion disk (BHAD) GRB models~\citep{1999ApJ...518..356P,2001ApJ...550..357Z}. Subsequent simulations have studied the potential of this system to produce ultra-long duration gamma-ray bursts or peculiar supernovae~\citep{2013ApJ...764..181F,2018MNRAS.475.1198S}. Material accreting onto neutron stars is not completely incorporated into the neutron star.
If the material has enough angular momentum, it can form a disk that could ultimately drive a jet. This is believed to be rare in most CE scenarios~\citep{2017ApJ...845..173M}. For neutron star systems, even if the material does not have sufficient angular momentum to form a disk, some of the accreting material will be reheated and ejected~\citep{2006ApJ...646L.131F, 2009ApJ...699..409F}. During this accretion process, temperatures and densities become so high that both neutrino emission (which can alter the electron fraction) and nuclear burning can significantly alter the composition of the material. For the high accretion rates of supernova fallback, the reheated ejecta can burn into heavy r-process elements~\citep{2006ApJ...646L.131F}. Fallback accretion rates range from a few times $10^{-3}$ to $1~ \, {\rm M_{\odot} s^{-1}}$. CE accretion rates are typically lower than these rates, ranging from $10^{-4}$ to $10^{6}~\rm{{M}_{\odot}yr^{-1}}$ (note that the fallback rates are quoted per second while the CE rates are per year). In this paper, we will study the yields from these lower CE accretion rates. In section~\ref{sec:mdot} we review NS accretion in CE, estimating accretion rates for a range of stellar models at different phases in the star's life. In section~\ref{sec:yields} we review the range of yields expected as a function of accretion rate from our single-zone models. To determine the effect CE yields have on galactic chemical evolution, we must calculate the distribution of binaries and CE scenarios. By using these distributions and stellar models, we can estimate the accretion rates. In section~\ref{sec:binary}, we use binary population synthesis models to study yields from stars and stellar populations. We conclude in section \ref{sec:conclusions} with a discussion of the role these yields play in galactic chemical evolution.
\section{NS Accretion in CE Evolution} \label{sec:mdot} \subsection{Estimating Mass Accretion} \label{sec:mass_accretion_est} When a massive-star companion in a binary with a NS overfills its Roche lobe, its material accretes onto the neutron star. The accretion rate can be much higher than the rate at which the NS can incorporate material, ultimately developing into a CE phase. A number of assumptions are made in estimating this accretion rate and, especially for NSs, there seems to be some confusion on the validity of these assumptions. Here we review the basic physics assumptions and approximations used in this paper to estimate accretion rates. In astrophysics, the standard estimate for accretion onto a point source is the Bondi-Hoyle-Lyttleton solution \citep{1941MNRAS.101..227H,1952MNRAS.112..195B}. The Bondi radius ($R_{\rm B}$) for a neutron star of mass $M_{\rm NS}$ can be determined as the radius within which material of velocity $v$ is bound to the NS, i.e.: \begin{equation} \frac{1}{2} v^2 = G M_{\rm NS}/R_{\rm B} \rightarrow R_{\rm B} = 2 G M_{\rm NS}/v^2 \end{equation} where $G$ is the gravitational constant. In the simplest case, $v$ is set to the sound speed ($c_s$). But if the NS is moving with respect to the material, the relative motion ($v_m$) should also be included. One simple, and often standard, way to include both velocities is through a quadratic sum, $v=\sqrt{v_m^2+c_s^2}$: \begin{equation} R_{\rm B} = 2 G M_{\rm NS}/(c_s^2+v_m^2) \end{equation} The accretion rate is roughly the mass within this Bondi radius divided by the free-fall time at this radius. More accurately, this accretion rate is: \begin{equation} \dot{M}_{\rm B}=\lambda_{\rm BHL} 4 \pi R_{\rm B}^2 \rho v \label{eq_acc_rate} \end{equation} where $\rho$ is the density of the ambient medium and $\lambda_{\rm BHL}$ is a non-dimensional parameter: $\lambda_{\rm BHL}=2/(3\pi)$ if we assume free-fall.
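As an order-of-magnitude illustration, the rate formula above can be evaluated directly. The envelope values below (density, sound speed, relative velocity) are hypothetical round numbers of ours, not taken from the MESA profiles used later in this paper:

```python
import math

# Evaluate the Bondi-Hoyle rate for a 1.4 Msun NS with R_B = 2GM/(cs^2+vm^2)
# and mdot = lam * 4*pi*R_B^2 * rho * v. All inputs in cgs.
G = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
MSUN = 1.989e33       # g
YR = 3.156e7          # s

def mdot_bondi(m_ns, rho, cs, vm, lam=2.0/(3.0*math.pi)):
    """Bondi-Hoyle accretion rate in g/s for ambient density rho,
    sound speed cs and relative velocity vm."""
    v2 = cs**2 + vm**2
    r_b = 2.0*G*m_ns/v2
    return lam*4.0*math.pi*r_b**2*rho*math.sqrt(v2)

# Hypothetical deep-envelope conditions: rho ~ 1e-4 g/cc, cs ~ 30 km/s,
# orbital velocity ~ 100 km/s.
mdot = mdot_bondi(1.4*MSUN, rho=1e-4, cs=3e6, vm=1e7)
print(f"~{mdot*YR/MSUN:.0f} Msun/yr")
```

For these illustrative inputs the rate lands of order a few hundred solar masses per year, consistent with the $1$--$10^5\,\rm M_\odot\,yr^{-1}$ range quoted below for the bulk of the envelope.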
Calculations of Bondi-Hoyle accretion allow refinement of the value of $\lambda_{\rm BHL}$ and an assessment of the accuracy of combining the different velocity terms in this way~\citep{1994A&AS..106..505R}. In scenarios like our CE phase, there is both a velocity and a density gradient across the Bondi radius, and these features drive instabilities in the accretion that can decrease the accretion rate~\citep{1994ApJ...427..351R,1994ApJ...427..342R,macleod2014accretion,2015APS..APR.U2004M,2017ApJ...838...56M,2017ApJ...845..173M}. Bondi accretion also assumes that matter falling toward the neutron star accretes passively onto it. However, the gravitational potential energy released as matter accretes onto the neutron star is emitted in radiation and matter outflows that can significantly decrease the accretion rate below the Bondi rate. The Eddington limit is an extreme case of this radiative feedback that assumes all of the energy released is converted into radiation, and the momentum carried by this radiation exerts a force on the inflowing material. This radiation limits the amount of accretion onto an object. The radiative force at radius $r$ is: \begin{equation} F_{\rm rad} = \frac{L_{\rm rad}}{4 \pi r^2} \frac{\sigma}{c} \end{equation} where $L_{\rm rad}$ is the radiative luminosity, $\sigma$ is the cross section, and $c$ is the speed of light. Setting this force equal to the gravitational force ($F_{\rm grav}=G M_{\rm NS} m/r^2$ where $m$ is the mass of the accreting particle), we derive the Eddington luminosity: \begin{equation} L_{\rm Edd} = 4 \pi G M_{\rm NS} m c/ \sigma. \end{equation} For accreting, fully ionized hydrogen, $m$ is the proton mass and $\sigma$ can be set to the Thomson cross-section.
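For scale, evaluating the standard Thomson-scattering form $L_{\rm Edd}=4\pi G M m_p c/\sigma_T$ for a $1.4\,M_\odot$ NS gives the familiar value; a quick numerical sketch:

```python
import math

# Eddington luminosity for a 1.4 Msun NS accreting fully ionized hydrogen:
# L_Edd = 4*pi*G*M*m_p*c / sigma_T (cgs units throughout).
G, C = 6.674e-8, 2.998e10
MSUN = 1.989e33
M_P = 1.673e-24              # proton mass, g
SIGMA_T = 6.652e-25          # Thomson cross-section, cm^2

l_edd = 4.0*math.pi*G*(1.4*MSUN)*M_P*C/SIGMA_T
print(f"L_Edd ~ {l_edd:.2e} erg/s")   # roughly 1.8e38 erg/s
```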
Although most studies use these assumptions to calculate the Eddington luminosity, for some scenarios, such as fallback accretion in supernovae, the opacity per unit mass can be much higher~\citep{1999ApJ...511..885F}, lowering the Eddington accretion rate. The Eddington limit on accretion assumes that the radiative luminosity is equal to the rate at which gravitational potential energy is released, $L_{\rm Edd} = G M_{\rm NS} \dot{M}_{\rm Edd}/r_{\rm NS}$, so that: \begin{equation} \dot{M}_{\rm Edd} = 4 \pi m c \, r_{\rm NS}/\sigma \end{equation} where $r_{\rm NS}$ is the neutron star radius. This limit on the accretion rate assumes spherical symmetry, that the radiation is not trapped in the flow, and that all the accretion energy is released in radiation. For low accretion rates, many of these assumptions are valid. But as the accretion rate increases, these assumptions lose their validity. Here we review each assumption individually. Determining whether the radiation is trapped in the flow is difficult without full calculations, but a first-order estimate can be made by comparing the diffusive transport velocity ($v_{\rm diff}$): \begin{equation} v_{\rm diff} = (\lambda_{\rm mfp}/D) \, c \end{equation} where $\lambda_{\rm mfp}$ is the photon mean free path and $D$ is the size of the transport region (some fraction of the stellar radius), to the infall velocity, typically set to the free-fall velocity ($v_{\rm ff}$): \begin{equation} v_{\rm ff} = \sqrt{2G M_{\rm enc}/r} \end{equation} where $M_{\rm enc}$ is the enclosed mass of the star at radius $r$. Using these approximations, it is found that, except at the beginning of the CE phase, when the neutron star is in the outer layers of the hydrogen envelope and $\dot{M} < 10^{-4} ~\rm{{M}_{\odot}yr^{-1}}$, the radiation is trapped in the flow~\citep{1993ApJ...411L..33C,1996ApJ...460..801F}. Recall that, for our nucleosynthesis models, we are only concerned with accretion rates above 1 $\rm{{M}_{\odot}yr^{-1}}$, where the radiation is truly trapped in the flow ($v_{\rm diff} \ll v_{\rm ff}$) and the assumptions needed for the Eddington limit are not applicable.
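The comparison of $v_{\rm diff}$ with $v_{\rm ff}$ can be sketched numerically. The envelope values below (density, opacity, transport-region size, enclosed mass) are hypothetical round numbers chosen only to show the separation of scales, not values from our stellar models:

```python
import math

# Radiation-trapping check: diffusive speed v_diff = (lambda_mfp/D)*c versus
# free-fall speed v_ff = sqrt(2 G M_enc / r). Values are illustrative only.
G, C = 6.674e-8, 2.998e10    # cgs
MSUN = 1.989e33

rho = 1e-4                    # g cm^-3, deep in the envelope
kappa = 0.34                  # cm^2 g^-1, electron-scattering opacity
lam_mfp = 1.0/(kappa*rho)     # photon mean free path, cm
D = 1e12                      # transport region ~ fraction of stellar radius
v_diff = (lam_mfp/D)*C

r, m_enc = 1e12, 5.0*MSUN     # radius and enclosed mass at that radius
v_ff = math.sqrt(2.0*G*m_enc/r)

print(f"v_diff ~ {v_diff:.0f} cm/s, v_ff ~ {v_ff:.2e} cm/s")
```

With these numbers $v_{\rm diff}$ falls short of $v_{\rm ff}$ by over four orders of magnitude, i.e. the photons are carried inward with the flow.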
What about the assumption that all of the energy is emitted in photons? If the radiation is trapped in the flow, the material will shock and settle onto the neutron star. By calculating the post-shock entropy of the material and assuming it piles onto the neutron star, we can derive the temperature and density properties of this accreted material. These estimates find that, unless the entropy is above $10,000 {~\rm k_B \, per \, nucleon}$, the temperature and density conditions are such that most of the gravitational energy released will be converted into neutrinos and escape the star without impeding the inflow~\citep{1989ApJ...346..847C,1996ApJ...460..801F}. In CE models, the entropy ranges between $10$ and $100 {~\rm k_B \, per \, nucleon}$ and, to date, no one has constructed a way to avoid rapid neutrino cooling in CE scenarios. For CE systems, only a fraction of the energy is emitted in photons, so this assumption of the Eddington approximation is also never satisfied. For systems where the CE phase ends up with the NS merging with its companion's helium core, neutrino emission increases dramatically, allowing the neutron star to incorporate material at the high Bondi rates predicted for these dense conditions. This merger ultimately forms a rapidly accreting black hole that may produce ultra-long gamma-ray bursts~\citep{1998ApJ...502L...9F}. Although the Bondi-Hoyle accretion assumptions seem most applicable to our problem, not all of the energy is converted to neutrinos and photons; some goes into kinetic energy that ejects a fraction of the accreting material (see Section~\ref{sec:ejecta}). It is this ejecta that is the subject of our nucleosynthesis study. We will discuss this ejecta in more detail in Section~\ref{sec:ejecta}, but it is important to understand that the ejecta may also alter the accretion. We will decrease the accretion rate in our Bondi-Hoyle solution to approximate this effect.
In this project, we use two bounds to match the range of efficiencies seen in simulations: $\lambda_{\rm BHL}=1/4, 1/40$. The lower value is set to try to capture both asymmetric accretion and ejecta effects that lower the accretion rate. \begin{figure} \centering \caption{Radius evolution of the stellar models used in this study. The x-axis is the time remaining until core collapse (or, in the case of some of the 8~$\rm{M_\odot}$ models, envelope ejection). The points indicate the times at which stellar structures were taken from the models to calculate the accretion rates.} \label{fig:stellarmodels} \includegraphics[width=\columnwidth]{r_ltl.pdf} \includegraphics[width=\columnwidth]{8msun.pdf} \end{figure} The accretion during the CE phase depends on the structure of the star which, in turn, depends upon both the stellar mass and evolutionary stage. To estimate the NS accretion rates in the CE phase, we use a coarse grid of stellar models computed using the MESA stellar evolution code~\citep{Paxton2011a,Paxton2013a,Paxton2015a,Paxton2018a}, ranging from 8 to 25 solar masses with initial metallicities in the range $10^{-4}\leq Z \leq 2\times10^{-2}$. The massive star models ($M_\mathrm{ini} \geq 12~\rm{M_\odot}$) are the ones from \citet{ritter2018nugrid}. The $8~\rm{M_\odot}$ models were computed using the same input physics as in \citet{Jones2013a}. We refer the reader to those papers for a more thorough description of the stellar evolution calculations. We study the structure of each of these models as the star evolves, focusing on periods of time when the star is expanding and a CE phase is likely to occur. Figure~\ref{fig:stellarmodels} shows the radius evolution of our stars as a function of time, with the points showing the specific times used in our study. As the neutron star spirals into the star, the accretion rate onto it increases.
The corresponding accretion rates (assuming $\lambda_{\rm BHL}=1/4$) as a function of radius for the 12, 15 and 20\,M$_\odot$ stars at different evolutionary times are shown in Figure~\ref{fig:mdot}. In the bulk of the envelope, the accretion rate lies between $1$ and $10^5 {\rm \, M_\odot \, yr^{-1}}$ and we will focus on these rates, but if the neutron star spirals into the core, the accretion rate will be higher. \begin{figure} \includegraphics[width=\columnwidth,clip=true,trim=0cm 3.5cm 0cm 4cm]{mdot} \caption{Accretion rate for a 1.4\,M$_\odot$ neutron star as a function of the position of the neutron star within the mass coordinate of its companion for the 12, 15 and 20\,M$_\odot$, solar metallicity stars for a range of times. At early times, the envelope is compact and the accretion rate is higher. As the star expands, the accretion rate in the envelope decreases. In the bulk of the envelope, the accretion rate lies between $1$ and $10^4 {\rm \, M_\odot \, yr^{-1}}$. Here we assume $\lambda_{\rm BHL}=1/4$. The accretion rates may be an order of magnitude lower.} \label{fig:mdot} \end{figure} \subsection{Ejection of Accreted Mass} \label{sec:ejecta} The accreted material is explosively unstable and early calculations suggested that some of the infalling material would gain enough energy to be ejected~\citep{1996ApJ...460..801F}. Estimates of the convective timescale using the Brunt-V\"ais\"al\"a frequency~\citep[see, for example,][]{1983apum.conf.....C,1996ApJ...460..801F} suggest that the timescale that the material spends near the proto-neutron star surface is milliseconds in duration. This initial study focused on CE accretion scenarios, but most of the subsequent, more systematic, multi-dimensional work focused on the higher accretion rates seen in supernova fallback~\citep{2006ApJ...646L.131F,2009ApJ...699..409F}.
Although these studies focused on accretion rates above $10^{4}$\,M$_\odot$\,yr$^{-1}$, they showed the same features as the CE models studied in \cite{1996ApJ...460..801F}: the accreted material falls down toward the proto-neutron star surface, and a fraction of this material is heated and accelerated to above the escape velocity, ejecting it from the system. We designed a set of twelve trajectories based on these simulations, guided by the analytic models developed to understand them. The accreting material accelerates nearly at free-fall until it falls within 10\,km of the neutron star surface. The uncertainty in the flow lies in determining how quickly the flow is reversed and material is ejected. We study two extremes: a bounce scenario, where the reversal is instantaneous, and a convective scenario, where the acceleration timescale is on par with the convective turnover timescale. For the latter, we can estimate the acceleration timescale from the Brunt-V\"ais\"al\"a frequency ($t_{\rm acceleration} = 1/\omega_{\rm BV}$), where \begin{equation} \omega_{\rm BV}^2 = g/\rho (\partial \rho/\partial S)_P (\partial S/\partial r), \end{equation} and $\rho$ and $S$ are the density and entropy respectively~\citep{cox83}. For conditions in supernovae and fallback, this timescale is of order 2\,ms~\citep{fryer07}. Strictly speaking, this is the growth timescale of Rayleigh-Taylor instabilities, but it provides an approximate timescale for the reversal of the flow. Fallback simulations suggest that the true answer lies between these two extremes. We model the two extremes with two parameterized simulations: bounce and convective trajectories. The bounce trajectory assumes that material falls in adiabatically at free-fall until it reaches 20\,km (10\,km from the surface), where it is assumed that the material bounces and is ejected at the escape velocity.
For the convective trajectory, we assume that the material falls in at free-fall until it reaches 50\,km, where we turn on a force that is strong enough to turn around the trajectory within the roughly 2\,ms convective timescale. The temperature evolution of these two paradigms is shown in Figure \ref{fig:trajt}, and the corresponding density evolution is shown in Figure \ref{fig:trajrho}. In the bounce model, the sharp increase of the temperature profile (Figure~\ref{fig:trajt}) mimics a hard stop of the infalling material prior to its expulsion. In the convective model, the acceleration begins sooner but is more gradual. \begin{figure} \includegraphics[width=0.5\textwidth,clip=true,trim=0cm 3.5cm 0cm 4cm]{trajt.pdf} \caption{Temperature as a function of time for our trajectories at different accretion rates: 1$~\rm{{M}_{\odot}yr^{-1}}$ (black lines), 10$~\rm{{M}_{\odot}yr^{-1}}$ (blue lines), 10$^{2}~\rm{{M}_{\odot}yr^{-1}}$ (cyan lines), 10$^{3}~\rm{{M}_{\odot}yr^{-1}}$ (green lines), 10$^{4}~\rm{{M}_{\odot}yr^{-1}}$ (magenta lines) and 10$^{5}~\rm{{M}_{\odot}yr^{-1}}$ (red lines). Solid lines refer to our model mimicking a hard stop of the infalling material; dotted lines correspond to a more gradual turn-around of the ejecta (both described in section \ref{sec:ejecta}). } \label{fig:trajt} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{trajrho.pdf} \caption{Density as a function of time for our trajectories at different accretion rates: 1$~\rm{{M}_{\odot}yr^{-1}}$ (black lines), 10$~\rm{{M}_{\odot}yr^{-1}}$ (blue lines), 10$^{2}~\rm{{M}_{\odot}yr^{-1}}$ (cyan lines), 10$^{3}~\rm{{M}_{\odot}yr^{-1}}$ (green lines), 10$^{4}~\rm{{M}_{\odot}yr^{-1}}$ (magenta lines) and 10$^{5}~\rm{{M}_{\odot}yr^{-1}}$ (red lines).
Solid lines refer to our model mimicking a hard stop of the infalling material; dotted lines correspond to a more gradual turn-around of the ejecta (both described in section \ref{sec:ejecta}). } \label{fig:trajrho} \end{figure} Although we expect some gain in entropy during the heating phase, we assume constant entropy evolution for our models. If we increase the entropy, we increase the temperature, and more heavy elements will be produced. Likewise, if we make the evolution even more gradual, the material will remain in a region of high nuclear burning longer, also producing more heavy elements. But our simulation-guided \citep{1996ApJ...460..801F,2006ApJ...646L.131F,2009ApJ...699..409F} toy models will provide a gauge of the importance of this ejecta in galactic chemical evolution. Typically $\sim$10-25\% of the accreted material is ejected along the angular momentum axis~\citep{2006ApJ...646L.131F}. It is this material that is the focus of our nucleosynthetic studies. In this paper, we focus on accretion rates between 1~$\rm{{M}_{\odot}yr^{-1}}$ and 10$^5~\rm{{M}_{\odot}yr^{-1}}$. The rate rarely exceeds our highest accretion rate in common envelope situations, and lower rates (occurring in the initial phases of the CE) do not produce much nuclear burning. The inner infall, nuclear burning and ejection phases are so rapid ($<1\,$s) that we can assume the accretion rate is constant over any cycle. Table \ref{tab:summary one-zone} lists the trajectories investigated in this paper, and shown in Figure \ref{fig:trajt}. The trajectories labeled with ".C." are representative of the first, bounce model described above, while those labeled with ".D." are representative of the second, delayed (convective) model.
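The bounce construction can be sketched in a few lines. The starting radius, resolution, and the assumption of pure radial free-fall below are illustrative choices of ours, not the parameters used to generate the tabulated trajectories:

```python
import math

# Toy "bounce" trajectory: radial infall at the free-fall speed onto a
# 1.4 Msun NS down to r_b = 20 km, then instantaneous reversal at the local
# escape velocity. Starting radius and step count are illustrative.
G, MSUN = 6.674e-8, 1.989e33
M_NS = 1.4*MSUN

def infall_time(r0=1e8, r_b=2e6, n=2000):
    """Integrate dt = dr / v_ff(r) from r0 (1000 km) down to r_b (20 km)."""
    dr = (r0 - r_b)/n
    t, r = 0.0, r0
    for _ in range(n):
        t += dr/math.sqrt(2.0*G*M_NS/r)
        r -= dr
    return t

t_in = infall_time()                      # infall time, s
v_esc = math.sqrt(2.0*G*M_NS/2e6)         # ejection speed at the bounce, cm/s
print(f"infall: {t_in*1e3:.1f} ms, v_esc/c = {v_esc/2.998e10:.2f}")
```

Even from 1000\,km the infall lasts only tens of milliseconds, consistent with the statement above that the infall, burning, and ejection cycle is far shorter than the timescale on which the accretion rate changes.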
\begin{table*} \begin{tabular}{c|c|c|c|c|} \toprule Trajectory Name & Peak T$_{9}$ (C) & Trajectory Name & Peak T$_{9}$ (D) & Accretion Rate ($\rm{{M}_{\odot}yr^{-1}}$)\\ \midrule mod.C.ar1d0 & 1.303 & mod.D.ar1d0 & 0.728 & 1 \\ mod.C.ar1d1 & 2.166 & mod.D.ar1d1 & 1.278 & 10\\ mod.C.ar1d2 & 3.981 & mod.D.ar1d2 & 2.306 & 10$^{2}$\\ mod.C.ar1d3 & 6.952 & mod.D.ar1d3 & 4.090 & 10$^{3}$\\ mod.C.ar1d4 & 10.00 & mod.D.ar1d4 & 7.286 & 10$^{4}$\\ mod.C.ar1d5 & 10.00 & mod.D.ar1d5 & 10.00 & 10$^{5}$\\ \bottomrule \end{tabular} \caption{List of trajectories investigated in this paper, along with the peak temperatures reached in those trajectories. The first column includes those trajectories in Figure \ref{fig:trajt} which undergo a sudden change in direction during the accretion, while those with a more gradual turnaround are listed in the third column. The accretion rates range from 1 to 10$^{5}~\rm{{M}_{\odot}yr^{-1}}$ as detailed in section \ref{sec:mdot}. Temperatures have been clipped in these models at 10 GK, as reaction rate tables above this threshold are not available.} \label{tab:summary one-zone} \end{table*} \section{Nucleosynthesis calculations from Neutron Star Accretion} \label{sec:yields} In this section we present the nucleosynthesis yields of the trajectories described in section \ref{sec:ejecta} and listed in Table \ref{tab:summary one-zone}. The composition of the accreted material has a scaled solar isotopic distribution~\citep[][]{asplund:09}, with metallicity $Z_{m}=0.02$. The nuclear network includes 5234 isotopes and 74313 reactions, drawn from the available nuclear physics compilations and rates \citep[see, e.g.,][]{pignatari:16}. The 3$\alpha$ and $^{12}$C($\alpha$,$\gamma$)$^{16}$O rates are taken from \citet{fynbo:05} and \citet{kunz:02}, respectively, and the $^{14}$N(p,$\gamma$)$^{15}$O rate from \citet{imbriani:05}.
The reaction rate $^{13}$C($\alpha$,n)$^{16}$O is taken from \citet{heil:08}, and the $^{22}$Ne($\alpha$,n)$^{25}$Mg and $^{22}$Ne($\alpha$,$\gamma$)$^{26}$Mg rates are from \citet{jaeger:01} and \citet{angulo:99}. Experimental neutron capture reaction rates are taken, when available, from the KADoNIS compilation \citep[][]{dillmann:06}. For neutron capture rates not included in KADoNIS, we refer to the JINA REACLIB database, V1.1 \citep[][]{cyburt:11}. The weak rates for light and intermediate-mass species are provided by \citet{oda:94} or \citet{fuller:85}, and by \citet{langanke:00} for mass numbers 45 $<$ A $<$ 65 and for the weak interaction between protons and neutrons. Finally, for heavy species with A $>$ 65 we use \citet{goriely:99}. Besides the $^{14}$N(p,$\gamma$)$^{15}$O rate mentioned above, proton capture rates are taken from several sources, including the NACRE compilation \citep[][]{angulo:99} and \cite{iliadis2001proton} for isotopes in the mass region between $^{20}$Ne and $^{40}$Ca. Proton captures on isotopes heavier than $^{40}$Ca are given by the JINA REACLIB database. The $^{13}$N(p,$\gamma$)$^{14}$O rate is taken from \cite{caughlan1988thermonuclear}, and proton captures on $^{27}$Al from \cite{champagne:92}. For our simulations we have used accreted material with solar composition. This approximation does not affect the conclusions of this paper. At the lower accretion rates, seed nuclei might have large effects on the distribution of products, but their impact on the final yields is marginal. At the higher accretion rates investigated in this paper, the accreted material enters nuclear statistical equilibrium under extreme conditions (Table \ref{tab:summary one-zone}), so that the final yields are largely insensitive to the initial isotopic distribution (and to the electron fraction $Y_\mathrm{e}$). As we will see in the following sections, nucleosynthesis in these trajectories dominates the total integrated ejecta for GCE.
\subsection{Yields at Differing Accretion Rates} \label{sec: ppn set} We investigated the production of isotopes at various accretion rates onto the surface of the compact object. Accretion rates varied from 1$~\rm{{M}_{\odot}yr^{-1}}$, where the neutron star first comes into contact with the companion, to the maximum accretion rate of 10$^{5}~\rm{{M}_{\odot}yr^{-1}}$, corresponding to the final stages of accretion in the system when the neutron star reaches the helium core. Tables \ref{tab:tabulated_overproduction_mod_c} and \ref{tab:tabulated_overproduction_mod_d} summarize the results of this section and list the isotopes with the highest overproduction, defined as the increase in the abundance of an isotope relative to its initial abundance in the accreted material, at the accretion rates 10$^{4}~\rm{{M}_{\odot}yr^{-1}}$ and 10$^{5}~\rm{{M}_{\odot}yr^{-1}}$ for the trajectories investigated. Trajectories at lower accretion rates do not show the same extent of overproduction, and therefore contribute minimally to our GCE results. Table \ref{tab:most_overproduced_tab} shows the five most overproduced isotopes in each of the trajectories investigated. The most strongly produced isotopes are also discussed in the relevant sections below. In Tables \ref{tab:centraj_abunds} and \ref{tab:delayed_abundances_table} the isotopic yields of stable isotopes (with contributions from decayed unstable isotopes) are provided for the trajectories listed in Table \ref{tab:summary one-zone}. The complete radiogenic contribution is included in these abundances. Undecayed abundances are also provided in separate tables available on-line at \href{http://apps.canfar.net/storage/list/nugrid/nb-users/Common_Envelope_Data_Keegans_2018}{CANFAR}. The corresponding overproduction factors are given in Tables \ref{tab:overprod_factors_centraj} and \ref{tab:overprod_factors_delay}.
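The overproduction factor used throughout this section is simply the ratio of final to initial mass fraction for each isotope. A minimal sketch of the bookkeeping (the isotope names and mass fractions below are hypothetical, chosen only to illustrate the calculation):

```python
def overproduction_factors(x_initial, x_final):
    """Overproduction factor x_fin/x_ini for each isotope (mass fractions)."""
    return {iso: x_final[iso] / x_initial[iso]
            for iso in x_initial if x_initial[iso] > 0.0}

# Hypothetical initial and final mass fractions, for illustration only.
x_ini = {"Fe-56": 1.2e-3, "Zn-64": 2.3e-9}
x_fin = {"Fe-56": 1.1e-3, "Zn-64": 2.3e-4}

f = overproduction_factors(x_ini, x_fin)
top = sorted(f, key=f.get, reverse=True)
print(top[0], f"{f[top[0]]:.1e}")  # Zn-64, overproduced by a factor ~1e5
```

A factor below unity (as for the hypothetical Fe-56 entry) indicates net destruction of that isotope.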
Complete isotopic production factors and elemental abundances are shown in Figures \ref{fig:overabunds_multii} and \ref{fig:mass_frac_multii}, respectively, where results from the two sets of trajectories at different accretion rates can be compared (see the discussion below). Figure \ref{fig:flux_multi_figure} shows the integrated fluxes for each of the mod.C accretion rates. Figures \ref{fig:multiplot_element_overabnd_with_legend} and \ref{fig:multiplot_element_overabnd_edlayed_with_legend} show the final isotopic production factors, zoomed into the mass region 40 $\lesssim$ A $\lesssim$ 100, and the electron fraction $Y_\mathrm{e}$ obtained. Our results show that the largest contributions to GCE must come from the trajectories at higher accretion rates. In these conditions, hydrogen is fully burned, allowing large abundance overproductions at the iron group and beyond. Lower accretion rates contribute marginally to the enrichment of ejected material, both because the material is incompletely burnt and because much less material is ejected. Proton-rich material is highly overproduced in trajectory mod.C.ar1d4, and this is the source of most of the proton-rich material ejected from the system. This trajectory reaches temperatures high enough that the accreted fuel burns efficiently, with a peak density low enough that electron captures do not become dominant, which would otherwise lower $Y_\mathrm{e}$ and shift peak production to more neutron-rich isotopes.
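The electron fraction quoted here and in the following sections follows directly from the composition, $Y_\mathrm{e} = \sum_i (Z_i/A_i)\,X_i$. A short sketch (the compositions are illustrative, not exact yields from our tables; the two-isotope mixture loosely echoes the neutron-rich Ni-dominated ejecta discussed below):

```python
def electron_fraction(composition):
    """Ye = sum_i (Z_i/A_i) X_i for a composition given as
    {isotope: (Z, A, X)} with X the mass fraction."""
    return sum(Z / A * X for (Z, A, X) in composition.values())

# Illustrative compositions (hypothetical, for demonstration only):
pure_h  = {"H-1":   (1, 1, 1.0)}
pure_fe = {"Fe-56": (26, 56, 1.0)}
mix     = {"Ni-62": (28, 62, 0.58), "Fe-56": (26, 56, 0.42)}

print(electron_fraction(pure_h))                 # 1.0
print(round(electron_fraction(pure_fe), 3))      # 0.464
print(round(electron_fraction(mix), 3))          # 0.457, i.e. neutron rich
```

Any composition with $Y_\mathrm{e} < 0.5$ is neutron rich; dominance of $^{62}$Ni over $^{58}$Ni is enough to pull $Y_\mathrm{e}$ below 0.5 even without free neutrons.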
\begin{table} \begin{tabular}{p{2cm}|p{2.6cm}|p{2.6cm}|} \toprule Overproduction $x_{fin}/x_{ini}$ & 10$^{4}~\rm{{M}_{\odot}yr^{-1}}$ & 10$^{5}~\rm{{M}_{\odot}yr^{-1}}$\\ \midrule 10$^{4} < x < 10^{5} $& $^{76,77}$Se, $^{60,61}$Ni, $^{72,73}$Ge, $^{86,87}$Sr, $^{75}$As, $^{82}$Kr, $^{69,71}$Ga, $^{63,65}$Cu, $^{96}$Ru, $^{89}$Y, $^{66,67,68}$Zn, $^{94}$Mo & $^{84}$Kr, $^{67,68,70}$Zn, $^{63,65}$Cu, $^{89}$Y, $^{58}$Fe, $^{86}$Kr \\ 10$^{5} < x < 10^{6} $ & $^{70}$Ge, $^{64}$Zn, $^{80}$Kr, $^{93}$Nb & $^{88}$Sr, $^{87}$Rb, $^{62,64}$Ni\\ 10$^{6} < x < 10^{7} $ & $^{74}$Se,~$^{78}$Kr, $^{84}$Sr,~$^{90,91}$Zr& - \\ 10$^{7} < x $ & $^{92}$Mo & - \\ \bottomrule \end{tabular} \caption{ Isotopes overproduced in the given ranges at each of the accretion rates shown, for the mod.C trajectories. } \label{tab:tabulated_overproduction_mod_c} \end{table} \begin{table} \begin{tabular}{p{2cm}|p{2.6cm}|p{2.6cm}|} \toprule Overproduction $x_{fin}/x_{ini}$ & 10$^{4}~\rm{{M}_{\odot}yr^{-1}}$ & 10$^{5}~\rm{{M}_{\odot}yr^{-1}}$\\ \midrule 10$^{4} < x < 10^{5} $ & $^{58,61}$Ni & $^{76}$Ge, $^{50}$Ti, $^{61,62}$Ni, $^{63,65}$Cu, $^{82}$Se, $^{66,67,68,70}$Zn, $^{58}$Fe \\ 10$^{5} < x < 10^{6} $ & - & $^{88}$Sr, $^{87}$Rb, $^{64}$Ni, $^{86}$Kr, $^{54}$Cr \\ \bottomrule \end{tabular} \caption{ Isotopes overproduced in the given ranges at each of the accretion rates shown, for the mod.D trajectories. } \label{tab:tabulated_overproduction_mod_d} \end{table} \begin{table} \begin{tabular}{p{3.0cm}|p{4.5cm}|} \toprule Model & Most overproduced isotopes \\ \midrule mod.C.ar1d0 & $^{42}$Ca, $^{21}$Ne, $^{74}$Se, $^{18}$O, $^{ 7}$Li \\ mod.C.ar1d1 & $^{132}$Ba, $^{180}$Ta, $^{98}$Ru, $^{31}$P, $^{138}$La \\ mod.C.ar1d2 & $^{58}$Ni, $^{98}$Ru, $^{62}$Ni, $^{51}$V, $^{61}$Ni \\ mod.C.ar1d3 & $^{64}$Zn, $^{51}$V, $^{58}$Ni, $^{62}$Ni, $^{61}$Ni \\ mod.C.ar1d4 & $^{91}$Zr, $^{90}$Zr, $^{78}$Kr, $^{74}$Se, $^{92}$Mo\\ mod.C.ar1d5 & $^{86}$Kr, $^{62}$Ni, $^{64}$Ni,
$^{87}$Rb, $^{88}$Sr \\ mod.D.ar1d0 & $^{31}$P, $^{33}$S, $^{15}$N, $^{18}$O, $^{ 7}$Li \\ mod.D.ar1d1 & $^{23}$Na, $^{33}$S, $^{42}$Ca, $^{21}$Ne, $^{ 7}$Li \\ mod.D.ar1d2 & $^{136}$Ce, $^{41}$K, $^{44}$Ca, $^{43}$Ca, $^{42}$Ca \\ mod.D.ar1d3 & $^{64}$Zn, $^{51}$V, $^{58}$Ni, $^{61}$Ni, $^{62}$Ni \\ mod.D.ar1d4 & $^{64}$Zn, $^{60}$Ni, $^{62}$Ni, $^{58}$Ni, $^{61}$Ni \\ mod.D.ar1d5 & $^{54}$Cr, $^{88}$Sr, $^{86}$Kr, $^{87}$Rb, $^{64}$Ni \\ \bottomrule \end{tabular} \caption{The five most overproduced isotopes for each of the accretion rates investigated.} \label{tab:most_overproduced_tab} \end{table} \subsubsection{Accretion rate 1 $\rm{{M}_{\odot}yr^{-1}}$: mod.C.ar1d0 and mod.D.ar1d0} \label{1_M_dot_sec} In the mod.C.ar1d0 case, the dominant reactions are proton captures on the H-rich accreted material, followed by $\beta^{+}$ decays bringing the material back towards the valley of stability. Some (p,$\alpha$) reactions are evident in this region, hindering the flow of material to heavier masses. Significant increases in the abundance of $^{7}$Li can be seen for both mod.C.ar1d0 and mod.D.ar1d0 (top left panel of Figure \ref{fig:overabunds_multii}), with both trajectories showing overproduction factors of $\sim$10$^{6}$. This occurs due to efficient $^{3}$He($\alpha,\gamma$)$^{7}$Be, followed by electron capture to $^{7}$Li. $^{15}$N and $^{18}$O are also increased significantly, by factors of about 150 and 250 for mod.C.ar1d0, and 280 and 510 for the delayed trajectory. Enhancements of light and intermediate-mass elements are comparable for the two trajectories, up to the mass region A $\sim$ 50. Beyond this mass, the delayed trajectory does not show efficient production of the heavier elements, while in mod.C.ar1d0 final abundances are enhanced up to A $\sim$ 80. This is due to the higher peak temperature reached compared to the mod.D.ar1d0 model (1.303 GK compared to 0.728 GK), causing a build-up of heavier nuclei mostly via proton capture reactions.
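The $^{7}$Li enrichment described above is a two-step process: $^{3}$He($\alpha,\gamma$)$^{7}$Be during the hot phase, then slow conversion of $^{7}$Be to $^{7}$Li after ejection. A one-step Bateman sketch of that second stage (the 53.22\,d half-life is the laboratory electron-capture value for $^{7}$Be; the function and variable names are ours, for illustration):

```python
import math

T_HALF_BE7 = 53.22 * 86400.0  # 7Be half-life [s], decays by electron capture

def li7_from_be7(n_be7_initial, t_seconds):
    """Number of 7Li nuclei produced from an initial 7Be population
    after time t, assuming pure exponential (one-step Bateman) decay."""
    lam = math.log(2.0) / T_HALF_BE7
    return n_be7_initial * (1.0 - math.exp(-lam * t_seconds))

n0 = 1.0e6
print(li7_from_be7(n0, T_HALF_BE7))  # half the population after one half-life
```

Because this half-life is far longer than the millisecond burning timescales, essentially all of the $^{7}$Be survives the ejection and is counted as $^{7}$Li in the decayed yields.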
Panel (a) of Figure \ref{fig:mass_frac_multii} shows that neither of the trajectories at this accretion rate allows any burning of material above Z=40. The conditions at this accretion rate are not extreme enough to allow proton captures on these heavier nuclei, or to cause photodisintegration of the heavier elements present in the accreted material. $^{74}$Se, $^{78}$Kr and $^{84}$Sr (the lightest of the p-nuclei) all have overproduction factors greater than 10$^{2}$ for mod.C.ar1d0 at this accretion rate, whereas the delayed trajectory shows no production of these isotopes. Inspection of flux charts for mod.C.ar1d0 shows a proton capture path with $\beta^{+}$ decays on the proton-rich side of the valley of stability, as for the rp-process \citep[][]{schatz:01}. This becomes most evident above the N=20 neutron magic number (see panel (a) of Figure \ref{fig:flux_multi_figure}). As can be seen from the first columns of Tables \ref{tab:centraj_abunds} and \ref{tab:overprod_factors_delay}, the H fuel accreted from the companion star remains largely unburnt under these conditions; only 0.2\% of the initial abundance of H is burnt for both of these trajectories. \subsubsection{Accretion rate 10$~\rm{{M}_{\odot}yr^{-1}}$: mod.C.ar1d1 and mod.D.ar1d1} \label{10_M_dot_sec} Nucleosynthesis in mod.C.ar1d1 results in the overproduction of a large number of isotopes up to A$\sim$157. The higher peak temperature experienced in this trajectory, as compared with the mod.C.ar1d0 case, allows for more efficient proton captures on the accreted material, whilst remaining below the threshold for activation of the photodisintegration of heavier material seen in the higher accretion rate models.
The highest-mass isotope to be overproduced under these conditions is $^{180}$Ta (with a production factor of 5.4$\times $10$^{2}$), and the isotopes with the greatest overproduction are $^{138}$La and $^{31}$P (3.3$\times $10$^{3}$ and 1.0$\times $10$^{3}$ times their initial abundance, respectively). Other isotopes with production factors between 10$^{2}$ and 10$^{3}$ are $^{33}$S, $^{35, 37}$Cl, $^{42}$Ca, $^{62}$Ni, $^{84}$Sr, $^{96, 98}$Ru, $^{102}$Pd, $^{120}$Te, $^{126}$Xe, $^{130, 132}$Ba and $^{136, 138}$Ce. In mod.D.ar1d1 abundances are greatest at lower masses. $^{7}$Li, $^{21}$Ne, $^{23}$Na, $^{33}$S, $^{42}$Ca and $^{84}$Sr have overproduction factors between 10$^{2}$ and 10$^{3}$. The much reduced range of highly overproduced isotopes is again due to the lower temperatures experienced by the delayed trajectory during infall of the material. The peak temperature of mod.D.ar1d1 is similar to that of the mod.C.ar1d0 trajectory discussed before (see Table \ref{tab:summary one-zone}). Differences in the two abundance distributions are due to the temperature histories of the two trajectories, with the delayed trajectory exposed for longer to more extreme conditions. Production factors and abundances are shown in the top right panels of Figures \ref{fig:overabunds_multii} and \ref{fig:mass_frac_multii}, respectively. The mass fractions of the majority of stable isotopes above Z=40 remain unchanged, as was the case in the previous section. Peak temperatures from our simulations for these trajectories do not increase beyond $\approx$2 GK (Figure \ref{fig:trajt}) and burning occurs on a timescale of order milliseconds. Neither $\alpha$ nor proton captures proceed quickly enough at these temperatures to trigger complete burning of the accreted material. Flux nucleosynthesis plots for this accretion rate are similar to those for the 1$~\rm{{M}_{\odot}yr^{-1}}$ case, and are shown in panel (b) of Figure \ref{fig:flux_multi_figure}.
\begin{figure*} \includegraphics[width=1\textwidth]{overabund_final_2} \caption{ Overproduction factors for the trajectories listed in Table \ref{tab:summary one-zone}. Abundances for delayed trajectories ("mod.D.") are shown in red, and in black for "mod.C." trajectories. The accretion rates are indicated in each panel. } \label{fig:overabunds_multii} \end{figure*} \begin{figure*} \includegraphics[width=1\textwidth]{fin_abundances_2} \caption{ Elemental abundance distributions in mass fraction with respect to atomic number, for the same cases as in Figure \ref{fig:overabunds_multii}. The contribution from radioactive decay is included. Abundances for delayed trajectories ("mod.D.") are shown in red, and in black for "mod.C." trajectories. Most of the elemental abundances in the first two panels remain unchanged from their initial distributions, while significant changes can be observed at higher accretion rates. } \label{fig:mass_frac_multii} \end{figure*} \begin{figure*} \centering \subfigure[]{\label{fig:centraj_1_flux_image}}\includegraphics[clip, trim = 9cm 0.2cm 2cm 1cm, width=0.45\textwidth]{centraj_1_flux} \subfigure[]{\label{fig:flux_b}}\includegraphics[clip, trim = 9cm 0.2cm 2cm 1cm, width=0.45\textwidth]{centraj_10_flux} \subfigure[]{\label{fig:flux_c}}\includegraphics[clip, trim = 9cm 0.2cm 2cm 1cm, width=0.45\textwidth]{centraj_100_flux} \subfigure[]{\label{fig:flux_d}}\includegraphics[clip, trim = 9cm 0.2cm 2cm 1cm, width=0.45\textwidth]{centraj_1000} \subfigure[]{\label{fig:flux_e}}\includegraphics[clip, trim = 9cm 0.2cm 2cm 1cm, width=0.45\textwidth]{centraj_10000_flux_corrections_zoomed} \subfigure[]{\label{fig:flux_f}}\includegraphics[clip, trim = 9cm 0.2cm 2cm 1cm, width=0.45\textwidth]{centraj_100000_flux_corrections_zoomed} \caption{Distribution of integrated fluxes ($\delta X/\delta t$, in units of s$^{-1}$) for each reaction in the mod.C trajectories, normalised to the maximum flux in each trajectory. From top left to bottom right, the panels correspond to the accretion rates investigated in this paper for the mod.C case, from lowest to highest.} \label{fig:flux_multi_figure} \end{figure*} \subsubsection{Accretion rate 10$^{2}~\rm{{M}_{\odot}yr^{-1}}$: mod.C.ar1d2 and mod.D.ar1d2} \label{100_M_dot_sec} Abundance production for mod.C.ar1d2 is clustered at masses between A $\sim$ 25 and 100. $^{51}$V and $^{61,62}$Ni are overproduced by factors larger than 10$^{3}$. Eight other isotopes have production factors greater than 10$^{2}$: $^{42}$Ca, $^{48}$Ti, $^{52}$Cr, $^{58, 60}$Ni, $^{64}$Zn and $^{96, 98}$Ru. The flux plot for this trajectory (panel (c) of Figure \ref{fig:flux_multi_figure}) shows a marked difference from the lower accretion rates. Burning proceeds further from the valley of stability, due to the more efficient proton captures at high temperatures. $\alpha$-captures on intermediate-mass isotopes are activated. Heavier isotopes are destroyed by ($\gamma$,n) photodisintegration reactions, and a mild flux of (n,$\gamma$) reactions is also activated. As for the lower accretion rates, a large proportion of the infalling material is not burnt (Table \ref{tab:centraj_abunds}), with only $\approx$0.3\% of the hydrogen fuel being consumed. The peak temperature in this trajectory is 3.981 GK. The delayed trajectory mod.D.ar1d2 shows an abundance distribution very different from that of mod.C.ar1d2. Production extends to higher-mass isotopes due to the longer time that the material spends at high temperature, without yet a significant activation of photodisintegration reactions. There are 22 isotopes enhanced by a factor between 10$^{2}$ and 10$^{3}$, and among them 8 have masses above A = 150. $^{38}$Ar, $^{41}$K, $^{43,44}$Ca, $^{45}$Sc, $^{47,48}$Ti and $^{136,138}$Ce show overabundances between 10$^{3}$ and 10$^{4}$, and $^{42}$Ca greater than 10$^{4}$.
Temperatures are not high enough to initiate the photodisintegration reactions observed in mod.C.ar1d2. It can be seen from the middle left panel of Figure \ref{fig:mass_frac_multii} that there is a significant change in the mass fractions of elements with $50 < Z < 65$, although the very heaviest elements investigated in this work (with $Z > 65$) remain largely unchanged in their abundances: the distributions of the C and D models in the upper panels of Figure \ref{fig:mass_frac_multii} are identical to each other and to the initial composition, implying that these abundances cannot be changed from their initial values. This is verified by comparison with Figure \ref{fig:overabunds_multii}, where the production factors for all isotopes above A = 150, and most above A = 100, are negligible. Some elements with $50 < Z < 65$ have abundances increased by up to 3 orders of magnitude. However, because of their small intrinsic abundances and the small contribution to the final integrated ejecta, this effect is not important for GCE models. \subsubsection{Accretion rate 10$^{3}~\rm{{M}_{\odot}yr^{-1}}$: mod.C.ar1d3 and mod.D.ar1d3} \label{1000_M_dot_sec} Material at this accretion rate reaches a peak temperature of 6.952 GK in the mod.C.ar1d3 trajectory and 4.090 GK in the mod.D.ar1d3 case, although for both trajectories the accreted material remains largely unburnt. The trajectory mod.C.ar1d3 reaches nuclear statistical equilibrium, and the destruction of heavier elements by photodisintegration, already seen for mod.C.ar1d2, now occurs in both trajectories. Isotopes are produced most efficiently in the mass region 50 $\lesssim$ A $\lesssim$ 100, including species in the iron-group region and up to Ru. For both trajectories, a significant increase of greater than 10$^{3}$ in the abundances of $^{58,61,62}$Ni is seen.
In mod.C.ar1d3 overabundances greater than 10$^{2}$ are obtained for five other isotopes, $^{48}$Ti, $^{42}$Ca, $^{60}$Ni, $^{64}$Zn and $^{51}$V, and for nine in the delayed trajectory mod.D.ar1d3: $^{45}$Sc, $^{42,44}$Ca, $^{48,49}$Ti, $^{52}$Cr, $^{60}$Ni, $^{80}$Kr and $^{96}$Ru. The elemental distributions of the two trajectories at this accretion rate are similar (middle right panel of Figure~\ref{fig:mass_frac_multii}) up to Z = 30; however, there are significant differences in the abundances of Mo, Nb and Ru, none of which are overproduced in mod.C.ar1d3, and for some isotopes of Kr and Sr, where a difference in overproduction factors of over an order of magnitude can be observed. In panel (d) of Figure \ref{fig:flux_multi_figure}, the integrated reaction flows are shown for mod.C.ar1d3. In these conditions, $\alpha$ captures and $\beta$ decays are now the dominant reaction pathways. Proton captures are still active, but they do not push material away from the valley of stability as efficiently as in panel (c) of Figure \ref{fig:flux_multi_figure}. Approximately 50\% of the initial abundance of He is burnt in this trajectory. \subsubsection{Accretion rate 10$^{4}~\rm{{M}_{\odot}yr^{-1}}$: mod.C.ar1d4 and mod.D.ar1d4} \label{10000_M_dot_sec} Among the trajectories described in this section, mod.C.ar1d4 is the first case to undergo complete burning of hydrogen, with this trajectory entering nuclear statistical equilibrium (NSE). Significant overproduction of a large number of isotopes is observed: $^{74}$Se, $^{78}$Kr, $^{84}$Sr and $^{90,91}$Zr have final abundances over 6 orders of magnitude higher than their initial values, and $^{92}$Mo is overproduced by a factor of 2.6$\times10^{7}$. Excepting the Zr isotopes, all of these nuclei are classically defined as products of the p-process \citep[][]{arnould:03,rauscher:13,pignatari:16}. While these are proton-rich isotopes, for mod.C.ar1d4 we obtain a final neutron-rich distribution ($Y_\mathrm{e}$ = 0.46, Table \ref{tab:centraj_abunds}).
This is mostly due to the neutron-rich abundance signature in the Ni region, with 58\% of the mass fraction in $^{62}$Ni and only a minor contribution from $^{58}$Ni. The zoomed isotopic distribution is shown in Figure \ref{fig:multiplot_element_overabnd_with_legend}, bottom left panel. The abundance pattern of heavy isotopes is similar to neutrino-driven wind ejecta with proton-rich composition \citep[e.g.,][]{froehlich:06,roberts:10,arcones:11}. Partial burning of the hydrogen is observed for the delayed trajectory mod.D.ar1d4. Production of heavy isotopes is marginal compared to mod.C.ar1d4 (Figure \ref{fig:overabunds_multii}), while intermediate-mass elements are made more efficiently (Figure \ref{fig:mass_frac_multii}). The isotopes $^{58,61}$Ni both have overproduction factors greater than 10$^{4}$. As also shown in the bottom left panel of Figure \ref{fig:multiplot_element_overabnd_edlayed_with_legend}, p-process isotopes are not made in mod.D.ar1d4, with no significant production above A $\sim$ 80. As we have seen for the trajectories in the previous sections, material above this mass has been destroyed by photodisintegration. Panel (e) of Figure \ref{fig:flux_multi_figure} shows the flux plot for mod.C.ar1d4. The material is in NSE, with large fluxes in the iron-group region; the large amount of hydrogen allows proton captures to occur, extending the abundance distribution to the proton-rich side of the valley of stability. The lower average and peak densities reached in this model, as compared with the mod.C.ar1d5 trajectory, mean that electron captures are not efficient enough to reduce $Y_\mathrm{e}$, leading to a more proton-rich distribution of products than in the mod.C.ar1d5 case (see panel (f) of Figure \ref{fig:flux_multi_figure} for comparison). The flux plot is dominated by the reactions around the iron-group region during the NSE phase.
In these simulations, the temperature and density freeze-out is extremely fast and does not allow the integrated fluxes to be significantly modified after the trajectories exit NSE. \subsubsection{Accretion rate 10$^{5}~\rm{{M}_{\odot}yr^{-1}}$: mod.C.ar1d5 and mod.D.ar1d5} \label{100000_M_dot_sec} More neutron-rich material is produced under these conditions, with isotopic distributions skewed towards more neutron-rich isotopes. This is due to the higher peak and average densities experienced in this model as compared with the mod.C.ar1d4 trajectory. The most overproduced isotope in trajectory mod.C.ar1d5 is $^{88}$Sr, with an overproduction factor of 9.8$\times$10$^{5}$; $^{87}$Rb and $^{62,64}$Ni also have overproduction factors greater than 10$^{5}$. A similar distribution is observed in the mod.D.ar1d5 model; however, $^{64}$Ni is the most overproduced isotope in this case, with $^{87}$Rb, $^{86}$Kr, $^{88}$Sr and $^{54}$Cr being the other isotopes with production factors greater than 10$^{5}$. A large number of isotopes in both the mod.C.ar1d5 and mod.D.ar1d5 models are overproduced by factors of $\sim$10$^{4}$ (9 for the mod.C.ar1d5 case and 12 for mod.D.ar1d5), all of which are clustered around the iron-group region. Temperatures in mod.C.ar1d5 have been clipped at 10 GK, as reaction rate tables are not available beyond this value. The overall distribution and range of production are similar between mod.C.ar1d5 and mod.D.ar1d5, as can be seen in the bottom right panel of Figure \ref{fig:mass_frac_multii} and in Figures \ref{fig:multiplot_element_overabnd_with_legend} and \ref{fig:multiplot_element_overabnd_edlayed_with_legend}. As discussed by \cite{2006ApJ...646L.131F}, fall-back trajectories can produce r-process abundances under mildly neutron-rich conditions.
In the scenario discussed here, within a realistic range of accretion rates, mod.C.ar1d5 and mod.D.ar1d5 both show neutron-rich $Y_\mathrm{e}$ (0.447 and 0.443, respectively) and an abundance signature similar to the weak r-process \citep[e.g.,][]{seeger:65,kratz:93,arcones:11,wanajo:13}. Collapsars, as investigated by \cite{siegel2018neutron}, with accretion rates from $\sim$0.3 to 30 times those investigated in these ar1d5 trajectories, have also been shown to produce enhanced neutron-rich material. Panel (f) of Figure \ref{fig:flux_multi_figure} shows the integrated flux plot for the mod.C.ar1d5 trajectory. The reaction pathways for this trajectory run through more neutron-rich isotopes than in the mod.C.ar1d4 case (panel (e) of Figure \ref{fig:flux_multi_figure}), due to the higher peak density and the longer time spent at high densities over the period of infall and ejection, favoring electron capture reactions on the accreted material. \begin{figure*} \includegraphics[width=1\textwidth]{mod_C_fin} \caption{The isotopic distribution between A = 40 and 100 is shown for the "mod.C." trajectories at different accretion rates. } \label{fig:multiplot_element_overabnd_with_legend} \end{figure*} \begin{figure*} \includegraphics[width=1\textwidth]{mod_D_fin} \caption{As for Figure \ref{fig:multiplot_element_overabnd_with_legend} but for delayed trajectories ("mod.D.").
} \label{fig:multiplot_element_overabnd_edlayed_with_legend} \end{figure*} \section{Implications for Galactic Chemical Evolution} \label{sec:binary_and_GCE} In the previous section we explored the large variety of nucleosynthesis that can be found in material infalling close to the neutron star at different accretion rates, and then ejected back into the stellar host and into the interstellar medium following the evolution of the companion star. Both light and heavy elements can be made, showing proton-rich or neutron-rich isotopic patterns, by slightly changing the trajectory conditions within a realistic parameter space. Interestingly, all of these different nucleosynthesis patterns could be produced in the same merging event, during the evolution of the merging system. The toy model presented in section \ref{sec:mdot}, built to assess the possible nucleosynthesis found in these systems, is meant to be a first step in exploring the production of elements in these systems, and to motivate fully resolved hydrodynamic simulations. Based on the calculations presented here, we can now verify whether these systems are also relevant for galactic chemical evolution (GCE). \subsection{Population synthesis study of CE phases} \label{sec:binary} Before studying the role of these stellar objects in a galactic chemical evolution context, we need to determine the mass ejected in the CE phase, and couple the accretion rates from Section~\ref{sec:mdot} with the properties of the CE systems. To study the population of CE phases, we employ the StarTrack population synthesis code \citep{belczynski2002comprehensive,belczynski2008compact} to generate a population of binary compact objects. The code is based on revised formulas from \citet{hurley2000comprehensive}, updated with new wind mass-loss prescriptions, calibrated tidal interactions, a physical estimation of the donor's binding energy in CE calculations, and convection-driven, neutrino-enhanced supernova engines.
A full description of these updates is given in \citet{dominik2012double}. The two most recent updates take into account measurements of initial parameter distributions for massive O stars \citep{sana2012binary}, as well as a correction of a technical bug that had limited the formation of BH-BH binaries at high metallicity (e.g., Z = 0.02). With this code, we modeled the binary interactions of 500,000 stars for 3 different metallicities (Table~\ref{tab:popsynth}). Only a small fraction of the systems actually go through a CE phase. For each of these systems, we calculate the progenitor star mass, the radius at the onset of the CE phase, and the final separation. Using the progenitor mass and radius at the onset of the CE phase, we can determine which of our MESA progenitors to use and the time in its evolution. There is a slight inconsistency in our approach, because the formulae for the stellar radii versus time in the population synthesis models are not identical to our MESA models, but this approach still allows us to make a first approximation of the accretion rates. Once we determine the mass and time of the CE interaction, we can use the final separation to determine how deeply the neutron star spirals into its companion. \begin{table} \centering \caption{Population Synthesis Calculations} \label{tab:popsynth} \begin{tabular}{lc} \toprule Metallicity & CE systems \\ \midrule 0.02 & 7172 \\ 0.002 & 12234 \\ 0.0002 & 7172 \\ \bottomrule \end{tabular} \end{table} \begin{figure} \includegraphics[width=\columnwidth]{aovaf} \caption{Initial versus final separation for all the binary systems undergoing a CE phase in our population synthesis simulations, given at 3 metallicities.} \label{fig:aovaf} \end{figure} With our stellar structures and the separation evolution, we can follow the full range of accretion rates in the CE phase.
During the CE phase, the NS is continuously accreting and ejecting mass, evolving through a range of accretion rates as it spirals down toward the stellar core. The accretion and ejection are rapid compared to the CE phase, and we can approximate this evolution as a series of phases with different accretion rates. For each binary system in our population synthesis calculation, we know the mass of the stellar component, the evolutionary timescale of the star at the initiation of the CE phase (from the initial binary separation) and the range of accretion rates (based on the final binary separation and our stellar models). We then assume that 25\% of the accreted mass is reheated and ejected, based on the results of accretion simulations \citep{2006ApJ...646L.131F}, to get the rate of mass ejection. To get the total amount of mass ejected, we integrate the mass ejection rate over the inward spiral of the NS. Based on CE simulations \citep{2008ApJ...672L..41R,2012ApJ...746...74R,2012ApJ...744...52P,2013A&ARv..21...59I}, we assume that a typical CE phase persists for roughly 3 times the orbital period ($P_{\rm onset}$) at the onset of the CE phase. In our calculations, we assume that the time spent in each radius bin $i$ is this total duration distributed according to the orbital period $P(r_i)$ at radius $r_i$, normalized by the sum over all bins: \begin{equation} t_r = 3 P_{\rm onset} P(r_i) / \sum _{i} (P(r_i)) \end{equation} In this paper, we consider 3 options for the mass accretion and ejection: $\lambda_{\rm BHL}= 1/4$ with 25\% mass ejected; $\lambda_{\rm BHL}= 1/40$ with 25\% mass ejected; and $\lambda_{\rm BHL}= 1/40$ with 10\% mass ejected. Recall that $\lambda_{\rm BHL}$ is a non-dimensional parameter relating to the efficiency of accretion in our system (see section \ref{sec:mass_accretion_est}).
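The time-weighting above and the mass-ejection integral can be sketched as follows. The orbital periods, accretion rates and onset period below are invented for illustration and are not drawn from our population synthesis results:

```python
SECONDS_PER_YEAR = 3.156e7

def time_per_bin(p_onset, periods):
    """Share the total CE duration (3 * P_onset) among radius bins,
    weighted by the orbital period P(r_i) at each radius."""
    total = sum(periods)
    return [3.0 * p_onset * p / total for p in periods]

def mass_ejected(mdots_msun_yr, times_s, eject_frac=0.25):
    """Eject a fixed fraction of the mass accreted in each bin,
    summed over the inward spiral of the NS."""
    return eject_frac * sum(md * t / SECONDS_PER_YEAR
                            for md, t in zip(mdots_msun_yr, times_s))

# Invented inspiral: three radius bins from CE onset to the deepest layer.
p_onset = 3.0e5                    # orbital period at CE onset [s]
periods = [3.0e5, 1.0e5, 3.0e4]    # P(r_i) at each radius [s]
mdots = [1.0, 10.0, 100.0]         # accretion rate in each bin [Msun/yr]

t_bins = time_per_bin(p_onset, periods)
m_ej = mass_ejected(mdots, t_bins)
print(f"ejected mass: {m_ej:.3f} Msun")
```

Even with these invented numbers, the estimate lands at a few hundredths of a solar mass, the same order as the population-averaged ejecta discussed below.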
Figure~\ref{fig:masseji} shows the distribution of accretion rates for 10 of the close binaries in our population synthesis calculation with our model using $\lambda_{\rm BHL}= 1/40$, 25\% mass ejecta. \begin{figure} \includegraphics[width=\columnwidth,clip=true,trim=0cm 3.5cm 0cm 4cm]{masseji} \caption{Mass ejected versus accretion rate for 10 sample binary systems in our population synthesis calculations. As the neutron star spirals into its companion, the accretion rate increases. Throughout this inspiral phase, mass is ejected, and the conditions of this ejecta evolve as the neutron star spirals into deeper and deeper stellar layers. The peak accretion rate depends upon the structure of the stellar core and the depth of the inspiral.} \label{fig:masseji} \end{figure} Using our full population of binary systems, we can estimate the average mass ejected per binary system as a function of metallicity and our value for $\lambda_{\rm BHL}$ (Figure~\ref{fig:massej}). The fractions of merging systems in our models are 34\%, 61\%, and 75\% for 1, 0.1, and 0.01 solar metallicity, respectively. On average, a binary system ejects a few hundredths of a solar mass of material (in practice, some systems can eject nearly a solar mass of material while many systems eject very little mass). Our MESA stellar models consist of a coarse grid in mass and time, which presents challenges in matching the outer layers of the companion star to our population synthesis models, which are extracted from \citet{hurley2000comprehensive}. This is not a concern, however, at the higher accretion rates near the core of the companion, where the majority of the burning in these models occurs, so long as the core masses of our stars are similar. Because we focus on higher accretion rates for our yields, the inconsistencies introduced by this mismatch are minimal in our models.
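The mass-ejection bookkeeping amounts to summing, over the radial bins of the inspiral, the assumed ejected fraction of the accreted mass. A minimal sketch, with hypothetical per-bin accretion rates and bin durations (not values from our models):

```python
import numpy as np

def mass_ejected(mdot_acc_bins, t_bins, f_eject=0.25):
    """Total mass ejected over the inspiral, assuming a fixed fraction
    f_eject (25% or 10% in the options considered here) of the accreted
    mass is reheated and ejected in each radial bin."""
    mdot_acc_bins = np.asarray(mdot_acc_bins, dtype=float)
    t_bins = np.asarray(t_bins, dtype=float)
    return f_eject * float(np.sum(mdot_acc_bins * t_bins))

# Hypothetical accretion rates (Msun/yr) and bin durations (yr):
m_ej = mass_ejected([1.0, 10.0, 100.0], [0.5, 0.3, 0.2])
```

Lowering the ejected fraction from 25\% to 10\% rescales the result linearly, which is why the ``high'' and ``low'' options in the text bracket the predicted ejecta masses.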
\begin{figure} \includegraphics[width=\columnwidth,clip=true,trim=0cm 3.5cm 0cm 4cm]{massej} \caption{Average mass ejected per binary system (based on the population synthesis models) as a function of metallicity (solar, 1/10th solar, and 1/100th solar) for two extremes in our assumptions for accretion rate and mass ejected (``high'' denotes $\lambda_{\rm BHL}= 1/4$ with 25\% mass ejecta; ``low'' denotes $\lambda_{\rm BHL}= 1/40$ with 10\% mass ejecta). Some binaries eject very little mass, while others eject over a solar mass.} \label{fig:massej} \end{figure} \subsection{Galactic Chemical Evolution} \label{sec:gce} To estimate the contribution of CE mass ejection events in a galactic chemical evolution context, we use the OMEGA code described in \cite{2017ApJ...835..128C}. This is a classical one-zone open-box model (e.g., \citealt{1980FCPh....5..287T}) that is part of the open-source NuPyCEE package\footnote{\url{https://github.com/NuGrid/NuPyCEE}}. We adopt the same default Milky Way setup as in \cite{2018ApJ...854..105C}. Our input parameters are tuned to reproduce the star formation rate, the gas fraction, the gas inflow rate, and the Type~Ia and core-collapse supernova rates currently observed in the Milky Way (see Table~1 in \citealt{2015A&A...580A.126K}), within the observational errors. Our model is also calibrated to reach solar metallicity ($Z_{m}=0.014$, \citealt{asplund:09}) when the Sun forms, which is 4.6\,Gyr before the end of the simulation (\citealt{2017GeCoA.201..345C}). The evolution of metallicity in our model is generated using NuGrid yields (\citealt{Ritter2017a}) for massive and low-mass stars, and the yields of \cite{1999ApJS..125..439I} for Type~Ia supernovae.
To include the contribution of CE events, we use the NuPyCEE delayed-extra source implementation\footnote{\url{https://github.com/NuGrid/NuPyCEE/blob/master/DOC/Capabilities/Delayed_extra_sources.ipynb}}, which allows additional enrichment sources to be included based on input metallicity-dependent delay-time distribution (DTD) functions and yields. Because the high-accretion-rate CE events studied here occur in systems involving two massive stars, the mass ejection rate is assumed to follow the lifetime of massive stars. In practical terms, for each stellar population formed in our model, all CE events occur between 5 and 40~Myr following the formation of the progenitor stars. This first-order implementation will be improved in follow-up studies. For the chemical composition of the CE event ejecta, we convolve our nucleosynthesis calculations (see Section~\ref{sec: ppn set}) with the metallicity-dependent mass ejection rates inferred from our population synthesis analysis (Section~\ref{sec:binary}). An example of the resulting yields is shown in Figure~\ref{fig_yields_GCE}. The complete set of yields used in our chemical evolution calculations is available online at \href{http://apps.canfar.net/storage/list/nugrid/nb-users/Common_Envelope_Data_Keegans_2018}{CANFAR}. Figure~\ref{fig_GCE_centraj} shows the contribution of CE events to the solar isotopic composition predicted by our models, using our three mass accretion and ejection options described in Section~\ref{sec:binary}: $\lambda_{\rm BHL}= 1/4$, 25\% mass ejecta; $\lambda_{\rm BHL}= 1/40$, 25\% mass ejecta; and $\lambda_{\rm BHL}= 1/40$, 10\% mass ejecta. We also varied the binary fraction of massive stars between 25\,\% and 100\,\%. Overall, the different mass accretion and ejection options cause more variation in our predictions than the binary fraction.
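The convolution of the nucleosynthesis yields with the population synthesis mass-ejection rates can be sketched as a mass-weighted sum over accretion-rate bins. The rate labels, isotope names, and numbers below are toy placeholders, not values from our yield tables:

```python
def convolve_yields(mass_ejected_per_rate, yields_per_rate):
    """Population-averaged ejecta composition: weight the mass fractions
    computed at each accretion rate by the mass ejected at that rate.

    mass_ejected_per_rate : dict {rate_label: Msun ejected at that rate}
    yields_per_rate       : dict {rate_label: {isotope: mass fraction}}
    Returns a dict {isotope: total Msun ejected}.
    """
    total = {}
    for rate, m_ej in mass_ejected_per_rate.items():
        for iso, x in yields_per_rate[rate].items():
            total[iso] = total.get(iso, 0.0) + m_ej * x
    return total

# Toy example with two accretion-rate bins:
example = convolve_yields(
    {"1e4": 2.0e-2, "1e5": 5.0e-3},
    {"1e4": {"Ni-64": 0.10, "Zn-70": 0.05},
     "1e5": {"Ni-64": 0.30}},
)
```

Repeating this per metallicity gives the kind of yield tables shown in Figure~\ref{fig_yields_GCE} and archived on CANFAR.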
These figures show that, in some cases, CE events could contribute significantly to the chemical evolution of some iron-peak and first-peak neutron-capture isotopes in the Galaxy. In fact, some isotopes are overproduced in our models relative to the solar composition. Our results here show only the contributions from common envelope events to the solar composition, as a direct comparison to the isotopic abundances of other contributing events depends sensitively on the models chosen. \begin{figure} \includegraphics[width=\columnwidth]{yields_GCE.pdf} \caption{Yields ejected by CE events on average per massive star binary (MSB) at $Z=0.02$, representing the convolution of our population synthesis models (Section~\ref{sec:binary}) with the yields calculated for different accretion rates (see Section~\ref{sec: ppn set}), using the trajectories ``mod.C'' (black line) and ``mod.D'' (red line). This is for $\lambda_\mathrm{BHL}=1/4$ and 25\% mass ejecta (see Section~\ref{sec:binary}). The complete set of yields used in our galactic chemical evolution calculations for different metallicities and mass accretion and ejection options is available online at \href{http://apps.canfar.net/storage/list/nugrid/nb-users/Common_Envelope_Data_Keegans_2018}{CANFAR}.} \label{fig_yields_GCE} \end{figure} As mentioned in the previous sections, the nucleosynthesis of CE events is sensitive to the physical conditions (i.e., temperature and density). For example, when using the yields from the ``mod.C'' trajectories, the p-isotopes $^{74}$Se, $^{78}$Kr, $^{84}$Sr, and $^{92}$Mo, as well as $^{90,91}$Zr, are always overestimated by more than an order of magnitude. However, when using the yields from the delayed ``mod.D'' trajectories, none of these isotopes are significantly produced. Instead, in that case the isotopes contributing the most to the solar composition are concentrated on the neutron-rich side, such as $^{64}$Ni, $^{70}$Zn, and $^{86}$Kr.
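The overproduction factors tabulated in the appendix compare model mass fractions to solar ones. Assuming the standard definition (model mass fraction divided by the solar mass fraction of the same isotope), a one-line sketch with a hypothetical solar value:

```python
def overproduction_factor(x_model, x_solar):
    """Overproduction factor: the mass fraction of an isotope in the
    ejecta divided by its solar mass fraction (assumed standard
    definition for the appendix tables)."""
    return x_model / x_solar

# Toy example: a model mass fraction 30x a hypothetical solar value.
f = overproduction_factor(3.0e-4, 1.0e-5)
```

Factors well above unity, as found here for several p-isotopes with the ``mod.C'' trajectories, flag isotopes for which the site could dominate (or overshoot) the solar inventory.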
We note that the nucleosynthesis has been calculated assuming an initial metallicity $Z = 0.014$, and that we used those yields for all metallicities. The purpose of our chemical evolution calculations is to verify whether or not CE events could be important for galactic chemical evolution. Our results should be seen as a first-order approximation and a motivation for future work. \begin{figure*} \includegraphics[width=0.95\textwidth]{solar.pdf} \caption{Contribution of CE events to the isotopic (by mass number) composition of the Sun, using the galactic chemical evolution framework described in Section~\ref{sec:gce} and the yields from the trajectories ``mod.C'' (top) and ``mod.D'' (bottom). Different colors represent our different mass accretion and ejection options (see Section~\ref{sec:binary}), where $\lambda_\mathrm{BHL}$ defines the magnitude of the accretion rates (see Equation~\ref{eq_acc_rate}) and the percentage represents the fraction of accreted mass that is reheated and ejected. For each color, the band represents the range of solutions assuming different binary fractions, from 25\,\% to 100\,\%, for massive stars. The dotted horizontal line shows a contribution of 100\,\%. Anything above this line implies an overestimation relative to the solar composition. Isotopes of interest which are overproduced are labeled.} \label{fig_GCE_centraj} \end{figure*} \section{Final discussion and Conclusions} \label{sec:conclusions} In this work we have explored the nucleosynthesis produced by neutron stars accreting in binary common envelopes. A realistic range of accretion rate conditions is explored (between $1$ and $10^{5}~\rm{M_{\odot}\,yr^{-1}}$), for two sets of trajectories. In the first set, material is assumed to be in near free-fall and is then suddenly ejected with an acceleration comparable in magnitude (and opposite in direction) to the gravitational acceleration.
In the second set, material is ejected with a more gradual acceleration, resulting in lower temperature and density peaks compared to the first set. A large variety of nucleosynthesis patterns was obtained. Heavier elements are produced with increasing accretion rate, owing to the higher temperature and density peaks. In particular, weak interactions become extremely important in defining the final composition at accretion rates of 10$^{4}$--10$^{5}~\rm{{M}_{\odot}yr^{-1}}$, leading to a proton-rich or neutron-rich nucleosynthesis pattern for heavy elements between Fe and Ru. We test for the first time the impact of CE events in a galactic chemical evolution context. We find that accreting neutron stars could contribute in a non-negligible way to the solar composition for some isotopes. In particular, using yields from the first set of trajectories, we overproduce many p-isotopes such as $^{74}$Se, $^{78}$Kr, $^{84}$Sr, and $^{92}$Mo, among others. Using the second set of trajectories, with their more gradual ejection, these events do not contribute to the solar abundances of these isotopes. The yields from these events are therefore highly dependent on the specific physical conditions experienced in the CE event, and the conditions in which the nucleosynthesis takes place are thus crucial for galactic chemical evolution. To summarize, we have shown that neutron stars accreting in binary common envelopes are potentially an important (and so far unaccounted for) nucleosynthesis site for the chemical evolution of the Galaxy. Owing to the simple approximations made in this first study for the nucleosynthesis trajectories, the present sets of yields are still highly uncertain. Nevertheless, these results are an important first step, demanding more detailed simulations in the future.
\section*{Acknowledgments} NuGrid acknowledges support from NSF grant PHY-1430152 (JINA Center for the Evolution of the Elements) and STFC (through the University of Hull's Consolidated Grant ST/R000840/1). MP and JK also acknowledge support by ongoing resource allocations on the University of Hull's High Performance Computing Facility viper. BC acknowledges support from the ERC Consolidator Grant (Hungary) funding scheme (project RADIOSTAR, G.A. n. 724560). KB acknowledges support from the Polish National Science Center (NCN) grants Sonata Bis 2 (DEC-2012/07/E/ST9/01360), OPUS (2015/19/B/ST9/01099), Maestro 2015/18/A/ST9/00746 and LOFT/eXTP 2013/10/M/ST9/00729. This work was, in part, supported by the US Department of Energy through the Los Alamos National Laboratory. Additional funding was provided by the Laboratory Directed Research and Development Program and the Center for Nonlinear Studies at Los Alamos National Laboratory under project number 20170508DR. \begin{table*}\label{centraj_abundances_table} \begin{tabular}{c|c|c|c|c|c|c|c|c|} \toprule Element & A & 1$\rm{{M}_{\odot}yr^{-1}}$ & 10$\rm{{M}_{\odot}yr^{-1}}$ & 10$^{2}\rm{{M}_{\odot}yr^{-1}}$ & 10$^{3}\rm{{M}_{\odot}yr^{-1}}$ & 10$^{4}\rm{{M}_{\odot}yr^{-1}}$ & 10$^{5}\rm{{M}_{\odot}yr^{-1}}$\\ \midrule H & 1 & 7.27e-01 & 7.27e-01 & 7.27e-01 & 7.24e-01 & 9.71e-12 & 1.27e-21 \\ H & 2 & 2.20e-18 & 2.39e-18 & 2.60e-18 & 2.81e-18 & 2.39e-25 & 4.72e-34 \\ He & 3 & 7.20e-12 & 3.44e-15 & 5.93e-15 & 1.16e-14 & 3.42e-29 & 3.15e-24 \\ He & 4 & 2.59e-01 & 2.54e-01 & 2.23e-01 & 1.44e-01 & 5.03e-02 & 9.54e-06 \\ Li & 7 & 3.99e-05 & 4.68e-10 & 4.17e-11 & 2.09e-11 & 6.79e-21 & 3.61e-23 \\ B & 11 & 5.77e-10 & 4.91e-15 & 3.18e-16 & 9.10e-17 & 6.25e-32 & 2.15e-31 \\ C & 12 & 2.60e-09 & 1.30e-11 & 7.56e-12 & 8.86e-12 & 3.00e-05 & 3.03e-12 \\ C & 13 & 2.67e-07 & 1.08e-09 & 8.62e-10 & 2.75e-10 & 1.30e-11 & 4.91e-27 \\ N & 14 & 1.27e-06 & 1.88e-06 & 2.11e-06 & 1.08e-06 & 2.76e-18 & 1.04e-26 \\ N & 15 & 2.98e-04 & 1.21e-08 & 
4.00e-08 & 5.89e-08 & 1.22e-18 & 8.35e-16 \\ O & 16 & 2.54e-07 & 2.48e-09 & 1.08e-08 & 1.71e-08 & 6.14e-08 & 1.12e-11 \\ O & 17 & 6.08e-08 & 7.24e-09 & 2.15e-09 & 2.56e-10 & 1.69e-17 & 5.65e-23 \\ O & 18 & 2.68e-03 & 1.10e-05 & 1.40e-05 & 7.64e-06 & 1.86e-23 & 5.20e-24 \\ F & 19 & 8.83e-08 & 7.62e-12 & 2.44e-11 & 3.36e-11 & 8.00e-19 & 6.47e-17 \\ Ne & 20 & 2.57e-05 & 5.33e-08 & 1.05e-07 & 1.01e-07 & 7.16e-08 & 3.38e-13 \\ Ne & 21 & 3.60e-04 & 1.72e-09 & 2.08e-09 & 5.11e-09 & 4.37e-14 & 6.95e-21 \\ Ne & 22 & 4.77e-03 & 9.40e-06 & 7.06e-06 & 1.56e-06 & 1.44e-17 & 1.69e-16 \\ Na & 23 & 3.70e-04 & 9.51e-06 & 1.90e-05 & 1.33e-05 & 1.06e-11 & 4.17e-15 \\ Mg & 24 & 1.69e-03 & 2.53e-05 & 4.08e-05 & 2.80e-05 & 4.66e-08 & 1.73e-14 \\ Mg & 25 & 2.05e-05 & 1.77e-07 & 7.95e-07 & 1.46e-06 & 4.15e-13 & 5.87e-16 \\ Mg & 26 & 8.21e-08 & 1.16e-07 & 2.25e-09 & 4.39e-09 & 2.20e-10 & 2.12e-14 \\ Al & 27 & 5.67e-04 & 1.34e-04 & 3.88e-05 & 2.06e-05 & 5.77e-08 & 3.16e-14 \\ Si & 28 & 2.77e-04 & 6.40e-04 & 1.47e-04 & 8.13e-05 & 6.38e-08 & 6.26e-15 \\ Si & 29 & 1.36e-08 & 4.16e-05 & 5.47e-08 & 6.50e-08 & 3.47e-09 & 8.51e-14 \\ Si & 30 & 1.79e-06 & 1.02e-06 & 6.21e-09 & 3.47e-09 & 4.32e-07 & 2.49e-13 \\ P & 31 & 5.47e-04 & 4.39e-03 & 9.98e-05 & 5.76e-05 & 2.16e-08 & 7.18e-15 \\ S & 32 & 6.98e-05 & 7.98e-03 & 1.50e-04 & 8.73e-05 & 3.05e-07 & 2.40e-13 \\ S & 33 & 3.07e-04 & 8.05e-04 & 1.45e-06 & 2.21e-06 & 9.65e-08 & 6.35e-14 \\ S & 34 & 1.17e-08 & 6.46e-05 & 3.48e-07 & 2.64e-08 & 1.30e-06 & 1.19e-12 \\ S & 36 & 7.00e-99 & 7.00e-99 & 4.63e-35 & 2.73e-35 & 5.05e-10 & 2.51e-13 \\ Cl & 35 & 1.36e-08 & 1.62e-03 & 1.86e-04 & 1.11e-04 & 2.53e-07 & 3.84e-14 \\ Cl & 37 & 4.47e-05 & 2.05e-04 & 8.47e-07 & 1.19e-06 & 1.38e-07 & 3.33e-13 \\ Ar & 36 & 2.38e-06 & 2.43e-03 & 2.56e-04 & 1.57e-04 & 7.30e-07 & 1.40e-14 \\ Ar & 38 & 8.44e-10 & 1.79e-04 & 1.84e-04 & 1.06e-04 & 4.36e-06 & 3.22e-12 \\ Ar & 40 & 2.42e-33 & 3.44e-39 & 4.52e-35 & 7.19e-39 & 8.07e-09 & 5.48e-13 \\ K & 39 & 2.32e-08 & 3.77e-07 & 
3.66e-06 & 4.13e-06 & 2.97e-07 & 3.07e-14 \\ K & 40 & 3.46e-27 & 4.09e-28 & 8.11e-28 & 1.38e-27 & 1.88e-08 & 3.81e-15 \\ K & 41 & 8.10e-09 & 2.45e-06 & 2.11e-07 & 3.90e-07 & 2.20e-07 & 7.32e-14 \\ Ca & 40 & 1.09e-10 & 7.40e-08 & 1.10e-06 & 1.17e-06 & 1.34e-06 & 1.18e-16 \\ Ca & 42 & 5.99e-05 & 5.05e-05 & 1.81e-04 & 7.80e-05 & 9.18e-06 & 9.35e-14 \\ Ca & 43 & 1.46e-08 & 1.12e-07 & 3.01e-07 & 3.46e-07 & 4.04e-07 & 1.39e-13 \\ Ca & 44 & 3.65e-07 & 1.37e-05 & 1.01e-04 & 8.75e-05 & 1.27e-06 & 9.41e-13 \\ Ca & 46 & 9.00e-99 & 9.00e-99 & 9.00e-99 & 5.28e-37 & 7.65e-14 & 2.04e-11 \\ Ca & 48 & 8.00e-99 & 8.00e-99 & 8.00e-99 & 2.96e-38 & 1.26e-21 & 6.82e-11 \\ Sc & 45 & 1.16e-08 & 6.51e-08 & 9.97e-07 & 1.96e-06 & 4.01e-06 & 2.08e-13 \\ Ti & 46 & 1.68e-07 & 8.36e-08 & 1.18e-05 & 9.79e-07 & 8.96e-06 & 3.76e-14 \\ Ti & 47 & 5.35e-08 & 1.98e-07 & 3.47e-06 & 3.55e-06 & 1.72e-06 & 2.35e-11 \\ Ti & 48 & 2.32e-08 & 6.27e-07 & 6.72e-04 & 3.89e-04 & 7.55e-06 & 5.11e-09 \\ Ti & 49 & 7.77e-08 & 7.82e-08 & 5.66e-06 & 8.81e-06 & 8.71e-06 & 1.66e-08 \\ Ti & 50 & 1.00e-98 & 1.00e-98 & 1.00e-98 & 1.60e-24 & 7.33e-12 & 2.18e-05 \\ V & 50 & 1.00e-99 & 1.00e-99 & 1.00e-99 & 4.25e-29 & 3.54e-08 & 1.01e-09 \\ $Y_\mathrm{efin}$ & N/A & 8.64e-01 & 8.63e-01 & 8.63e-01 & 8.61e-01 & 4.68e-01 & 4.47e-01 \\ \bottomrule \end{tabular} \caption{Abundances for trajectories mod.C.ar1d0, mod.C.ar1d1, mod.C.ar1d2, mod.C.ar1d3 mod.C.ar1d4 and mod.C.ar1d5 respectively, up to isotope $^{50}$V. Complete tables are available online. Final electron fractions (Ye) are provided at the end of the table. 
} \label{tab:centraj_abunds} \end{table*} \begin{table*} \begin{tabular}{c|c|c|c|c|c|c|c|c|} \toprule Element & A & 1$\rm{{M}_{\odot}yr^{-1}}$ & 10$\rm{{M}_{\odot}yr^{-1}}$ & 10$^{2}\rm{{M}_{\odot}yr^{-1}}$ & 10$^{3}\rm{{M}_{\odot}yr^{-1}}$ & 10$^{4}\rm{{M}_{\odot}yr^{-1}}$ & 10$^{5}\rm{{M}_{\odot}yr^{-1}}$\\ \midrule H & 1 & 7.28e-01 & 7.27e-01 & 7.26e-01 & 7.25e-01 & 2.79e-01 & 2.92e-23 \\ H & 2 & 2.20e-18 & 2.39e-18 & 2.60e-18 & 2.81e-18 & 1.07e-18 & 1.13e-34 \\ He & 3 & 3.49e-11 & 1.02e-14 & 5.71e-15 & 1.12e-14 & 3.65e-15 & 1.58e-23 \\ He & 4 & 2.61e-01 & 2.58e-01 & 2.42e-01 & 1.57e-01 & 8.04e-02 & 6.70e-07 \\ Li & 7 & 6.22e-05 & 2.13e-06 & 2.45e-11 & 1.54e-11 & 1.90e-11 & 5.18e-24 \\ B & 11 & 7.91e-10 & 2.28e-11 & 2.03e-16 & 7.38e-17 & 4.50e-17 & 4.72e-30 \\ C & 12 & 4.92e-09 & 6.21e-11 & 9.70e-12 & 1.14e-11 & 1.74e-11 & 6.18e-13 \\ C & 13 & 4.07e-07 & 3.33e-08 & 1.11e-09 & 3.62e-10 & 1.22e-10 & 1.25e-24 \\ N & 14 & 1.90e-03 & 2.34e-06 & 2.59e-06 & 1.36e-06 & 3.90e-07 & 3.78e-22 \\ N & 15 & 5.68e-04 & 2.13e-06 & 5.10e-08 & 7.54e-08 & 5.32e-08 & 2.42e-16 \\ O & 16 & 5.68e-07 & 1.23e-06 & 1.46e-08 & 2.09e-08 & 2.92e-09 & 1.41e-12 \\ O & 17 & 5.75e-06 & 8.21e-09 & 2.87e-09 & 3.52e-10 & 2.20e-10 & 5.56e-21 \\ O & 18 & 5.10e-03 & 2.14e-05 & 1.78e-05 & 9.78e-06 & 2.95e-06 & 8.34e-23 \\ F & 19 & 1.49e-07 & 5.69e-10 & 3.13e-11 & 4.32e-11 & 3.11e-11 & 5.27e-18 \\ Ne & 20 & 3.91e-07 & 3.31e-05 & 1.43e-07 & 1.24e-07 & 1.66e-08 & 4.38e-15 \\ Ne & 21 & 2.44e-06 & 4.38e-04 & 2.69e-09 & 6.21e-09 & 1.92e-09 & 3.73e-19 \\ Ne & 22 & 8.73e-04 & 5.74e-05 & 9.60e-06 & 2.10e-06 & 8.16e-07 & 2.41e-18 \\ Na & 23 & 5.11e-06 & 2.56e-03 & 3.06e-05 & 1.68e-05 & 4.64e-06 & 8.55e-17 \\ Mg & 24 & 2.15e-05 & 6.82e-03 & 6.57e-05 & 3.54e-05 & 9.73e-06 & 9.50e-17 \\ Mg & 25 & 2.58e-06 & 4.82e-04 & 1.30e-06 & 1.85e-06 & 1.20e-06 & 9.90e-18 \\ Mg & 26 & 5.72e-08 & 1.90e-06 & 3.64e-09 & 5.55e-09 & 3.71e-09 & 2.55e-16 \\ Al & 27 & 5.22e-04 & 5.60e-04 & 7.50e-05 & 2.68e-05 & 7.86e-06 & 3.12e-16 \\ 
Si & 28 & 6.36e-05 & 1.97e-03 & 2.87e-04 & 1.05e-04 & 3.27e-05 & 1.46e-16 \\ Si & 29 & 2.18e-08 & 6.16e-08 & 1.57e-07 & 8.60e-08 & 7.09e-08 & 4.47e-16 \\ Si & 30 & 2.07e-06 & 7.29e-07 & 1.11e-08 & 4.61e-09 & 5.70e-09 & 2.22e-14 \\ P & 31 & 6.08e-04 & 3.01e-04 & 2.69e-04 & 7.81e-05 & 2.20e-05 & 2.76e-16 \\ S & 32 & 6.52e-06 & 3.34e-04 & 4.06e-04 & 1.18e-04 & 3.41e-05 & 1.61e-15 \\ S & 33 & 3.06e-04 & 3.07e-04 & 4.59e-06 & 2.99e-06 & 2.01e-06 & 5.10e-15 \\ S & 34 & 3.97e-09 & 7.29e-08 & 5.65e-06 & 3.99e-08 & 1.81e-08 & 1.72e-13 \\ S & 36 & 3.98e-31 & 2.21e-34 & 1.75e-35 & 2.49e-35 & 2.45e-35 & 1.39e-12 \\ Cl & 35 & 8.35e-07 & 5.31e-08 & 2.70e-03 & 1.67e-04 & 4.25e-05 & 8.99e-15 \\ Cl & 37 & 4.48e-05 & 4.50e-05 & 3.33e-04 & 1.89e-06 & 1.20e-06 & 1.46e-13 \\ Ar & 36 & 1.47e-06 & 2.49e-06 & 3.72e-03 & 2.36e-04 & 6.26e-05 & 9.50e-17 \\ Ar & 38 & 1.62e-09 & 2.35e-09 & 6.93e-03 & 1.69e-04 & 4.22e-05 & 3.66e-13 \\ Ar & 40 & 8.39e-31 & 9.21e-34 & 2.06e-35 & 7.71e-36 & 3.66e-35 & 7.49e-13 \\ K & 39 & 2.40e-08 & 2.01e-08 & 2.64e-04 & 7.14e-06 & 1.03e-06 & 9.58e-15 \\ K & 40 & 9.24e-22 & 3.28e-29 & 1.00e-99 & 2.93e-27 & 2.07e-27 & 7.28e-17 \\ K & 41 & 2.78e-06 & 5.34e-08 & 3.71e-04 & 6.61e-07 & 2.79e-07 & 7.05e-15 \\ Ca & 40 & 2.22e-05 & 1.12e-10 & 8.46e-05 & 2.05e-06 & 1.45e-07 & 9.98e-20 \\ Ca & 42 & 3.42e-05 & 5.62e-05 & 5.95e-03 & 1.32e-04 & 5.94e-05 & 6.79e-14 \\ Ca & 43 & 2.30e-08 & 3.73e-08 & 6.71e-04 & 6.01e-07 & 5.51e-07 & 1.81e-13 \\ Ca & 44 & 1.67e-07 & 4.06e-06 & 4.76e-03 & 1.48e-04 & 1.20e-05 & 3.79e-11 \\ Ca & 46 & 4.26e-26 & 9.00e-99 & 9.00e-99 & 6.41e-37 & 8.43e-37 & 1.32e-08 \\ Ca & 48 & 2.89e-29 & 8.00e-99 & 8.00e-99 & 8.00e-99 & 8.00e-99 & 4.03e-07 \\ Sc & 45 & 4.49e-08 & 5.72e-08 & 4.29e-05 & 3.32e-06 & 1.05e-06 & 2.25e-11 \\ Ti & 46 & 6.65e-09 & 5.51e-08 & 2.87e-05 & 1.75e-06 & 1.73e-05 & 5.48e-13 \\ Ti & 47 & 2.57e-07 & 2.90e-07 & 2.54e-04 & 6.26e-06 & 2.45e-06 & 5.54e-09 \\ Ti & 48 & 9.20e-07 & 1.68e-07 & 2.89e-03 & 7.22e-04 & 1.42e-04 & 3.65e-08 \\ Ti & 
49 & 4.63e-07 & 7.25e-08 & 3.25e-05 & 1.67e-05 & 7.03e-06 & 4.13e-06 \\ Ti & 50 & 2.52e-11 & 2.80e-32 & 1.40e-35 & 1.01e-36 & 8.80e-24 & 1.59e-03 \\ V & 50 & 7.96e-09 & 1.00e-99 & 1.00e-99 & 9.80e-29 & 8.00e-27 & 2.95e-09 \\ $Y_\mathrm{efin}$ & N/A & 8.64e-01 & 8.63e-01 & 8.63e-01 & 8.62e-01 & 6.37e-01 & 4.42e-01 \\ \bottomrule \end{tabular} \label{tab:isotopic_yields} \caption{The same as Table \ref{tab:centraj_abunds}, but for trajectories mod.D.ar1d0, mod.D.ar1d1, mod.D.ar1d2, mod.D.ar1d3 mod.D.ar1d4 and mod.D.ar1d5 respectively } \label{tab:delayed_abundances_table} \end{table*} \begin{table*} \begin{tabular}{c|c|c|c|c|c|c|c|c|} \toprule Element & A & 1$\rm{{M}_{\odot}yr^{-1}}$ & 10$\rm{{M}_{\odot}yr^{-1}}$ & 10$^{2}\rm{{M}_{\odot}yr^{-1}}$ & 10$^{3}\rm{{M}_{\odot}yr^{-1}}$ & 10$^{4}\rm{{M}_{\odot}yr^{-1}}$ & 10$^{5}\rm{{M}_{\odot}yr^{-1}}$\\ \midrule H & 1 & 9.98e-01 & 9.97e-01 & 9.98e-01 & 9.93e-01 & 1.33e-11 & 1.74e-21 \\ H & 2 & 1.56e-13 & 1.69e-13 & 1.84e-13 & 1.99e-13 & 1.69e-20 & 3.34e-29 \\ He & 3 & 1.66e-07 & 7.93e-11 & 1.37e-10 & 2.68e-10 & 7.88e-25 & 7.26e-20 \\ He & 4 & 9.93e-01 & 9.71e-01 & 8.54e-01 & 5.50e-01 & 1.93e-01 & 3.65e-05 \\ Li & 7 & 9.16e+05 & 1.07e+01 & 9.57e-01 & 4.79e-01 & 1.56e-10 & 8.28e-13 \\ B & 11 & 2.17e-01 & 1.85e-06 & 1.20e-07 & 3.43e-08 & 2.36e-23 & 8.12e-23 \\ C & 12 & 1.49e-06 & 7.44e-09 & 4.33e-09 & 5.07e-09 & 1.71e-02 & 1.73e-09 \\ C & 13 & 1.26e-02 & 5.08e-05 & 4.06e-05 & 1.30e-05 & 6.13e-07 & 2.31e-22 \\ N & 14 & 2.51e-03 & 3.73e-03 & 4.18e-03 & 2.15e-03 & 5.47e-15 & 2.06e-23 \\ N & 15 & 1.50e+02 & 6.06e-03 & 2.01e-02 & 2.96e-02 & 6.15e-13 & 4.20e-10 \\ O & 16 & 5.80e-05 & 5.66e-07 & 2.47e-06 & 3.91e-06 & 1.40e-05 & 2.56e-09 \\ O & 17 & 3.51e-02 & 4.17e-03 & 1.24e-03 & 1.48e-04 & 9.75e-12 & 3.26e-17 \\ O & 18 & 2.71e+02 & 1.12e+00 & 1.42e+00 & 7.73e-01 & 1.88e-18 & 5.27e-19 \\ F & 19 & 2.13e-01 & 1.84e-05 & 5.90e-05 & 8.13e-05 & 1.93e-12 & 1.56e-10 \\ Ne & 20 & 3.33e-02 & 6.91e-05 & 1.37e-04 & 1.31e-04 & 9.28e-05 & 
4.38e-10 \\ Ne & 21 & 1.85e+02 & 8.84e-04 & 1.07e-03 & 2.63e-03 & 2.25e-08 & 3.58e-15 \\ Ne & 22 & 7.64e+01 & 1.51e-01 & 1.13e-01 & 2.49e-02 & 2.31e-13 & 2.71e-12 \\ Na & 23 & 1.81e+01 & 4.66e-01 & 9.31e-01 & 6.50e-01 & 5.21e-07 & 2.04e-10 \\ Mg & 24 & 4.40e+00 & 6.57e-02 & 1.06e-01 & 7.26e-02 & 1.21e-04 & 4.48e-11 \\ Mg & 25 & 4.03e-01 & 3.47e-03 & 1.56e-02 & 2.87e-02 & 8.16e-09 & 1.15e-11 \\ Mg & 26 & 1.41e-03 & 1.99e-03 & 3.86e-05 & 7.54e-05 & 3.77e-06 & 3.65e-10 \\ Al & 27 & 1.49e+01 & 3.53e+00 & 1.02e+00 & 5.42e-01 & 1.52e-03 & 8.32e-10 \\ Si & 28 & 5.52e-01 & 1.28e+00 & 2.93e-01 & 1.62e-01 & 1.27e-04 & 1.25e-11 \\ Si & 29 & 5.14e-04 & 1.58e+00 & 2.07e-03 & 2.46e-03 & 1.31e-04 & 3.23e-09 \\ Si & 30 & 9.94e-02 & 5.68e-02 & 3.45e-04 & 1.93e-04 & 2.40e-02 & 1.38e-08 \\ P & 31 & 1.28e+02 & 1.03e+03 & 2.34e+01 & 1.35e+01 & 5.07e-03 & 1.69e-09 \\ S & 32 & 2.77e-01 & 3.17e+01 & 5.95e-01 & 3.47e-01 & 1.21e-03 & 9.53e-10 \\ S & 33 & 1.50e+02 & 3.92e+02 & 7.09e-01 & 1.08e+00 & 4.71e-02 & 3.10e-08 \\ S & 34 & 9.84e-04 & 5.44e+00 & 2.93e-02 & 2.23e-03 & 1.10e-01 & 9.98e-08 \\ S & 36 & 1.38e-91 & 1.38e-91 & 9.14e-28 & 5.38e-28 & 9.96e-03 & 4.94e-06 \\ Cl & 35 & 2.70e-03 & 3.21e+02 & 3.70e+01 & 2.21e+01 & 5.04e-02 & 7.63e-09 \\ Cl & 37 & 2.63e+01 & 1.20e+02 & 4.98e-01 & 6.99e-01 & 8.10e-02 & 1.96e-07 \\ Ar & 36 & 8.61e-02 & 8.80e+01 & 9.26e+00 & 5.66e+00 & 2.64e-02 & 5.06e-10 \\ Ar & 38 & 1.59e-04 & 3.37e+01 & 3.47e+01 & 2.00e+01 & 8.21e-01 & 6.06e-07 \\ Ar & 40 & 2.71e-25 & 3.85e-31 & 5.05e-27 & 8.05e-31 & 9.03e-01 & 6.13e-05 \\ K & 39 & 8.83e-03 & 1.43e-01 & 1.40e+00 & 1.57e+00 & 1.13e-01 & 1.17e-08 \\ K & 40 & 1.03e-17 & 1.21e-18 & 2.41e-18 & 4.09e-18 & 5.58e+01 & 1.13e-05 \\ K & 41 & 4.07e-02 & 1.23e+01 & 1.06e+00 & 1.96e+00 & 1.10e+00 & 3.68e-07 \\ Ca & 40 & 2.30e-06 & 1.56e-03 & 2.32e-02 & 2.46e-02 & 2.83e-02 & 2.49e-12 \\ Ca & 42 & 1.80e+02 & 1.52e+02 & 5.44e+02 & 2.34e+02 & 2.76e+01 & 2.81e-07 \\ Ca & 43 & 2.05e-01 & 1.58e+00 & 4.23e+00 & 4.86e+00 & 5.68e+00 & 
1.96e-06 \\ Ca & 44 & 3.24e-01 & 1.22e+01 & 8.99e+01 & 7.78e+01 & 1.13e+00 & 8.37e-07 \\ Ca & 46 & 3.99e-90 & 3.99e-90 & 3.99e-90 & 2.34e-28 & 3.39e-05 & 9.05e-03 \\ Ca & 48 & 7.27e-92 & 7.27e-92 & 7.27e-92 & 2.69e-31 & 1.15e-14 & 6.20e-04 \\ Sc & 45 & 3.82e-01 & 2.15e+00 & 3.29e+01 & 6.48e+01 & 1.32e+02 & 6.87e-06 \\ Ti & 46 & 9.30e-01 & 4.62e-01 & 6.51e+01 & 5.41e+00 & 4.95e+01 & 2.08e-07 \\ Ti & 47 & 3.21e-01 & 1.19e+00 & 2.08e+01 & 2.13e+01 & 1.03e+01 & 1.41e-04 \\ Ti & 48 & 1.37e-02 & 3.72e-01 & 3.99e+02 & 2.31e+02 & 4.48e+00 & 3.03e-03 \\ Ti & 49 & 6.15e-01 & 6.19e-01 & 4.48e+01 & 6.97e+01 & 6.89e+01 & 1.31e-01 \\ Ti & 50 & 8.09e-92 & 8.09e-92 & 8.09e-92 & 1.29e-17 & 5.93e-05 & 1.77e+02 \\ V & 50 & 1.33e-90 & 1.33e-90 & 1.33e-90 & 5.67e-20 & 4.72e+01 & 1.35e+00 \\ \bottomrule \end{tabular} \caption{Overproduction factors for all accretion rates for trajectories mod.C.ar1d0, mod.C.ar1d1, mod.C.ar1d2, mod.C.ar1d3 mod.C.ar1d4 and mod.C.ar1d5 respectively, up to isotope $^{50}$V. Complete tables are available online. 
} \label{tab:overprod_factors_centraj} \end{table*} \begin{table*} \begin{tabular}{c|c|c|c|c|c|c|c|c|} \toprule Element & A & 1$\rm{{M}_{\odot}yr^{-1}}$ & 10$\rm{{M}_{\odot}yr^{-1}}$ & 10$^{2}\rm{{M}_{\odot}yr^{-1}}$ & 10$^{3}\rm{{M}_{\odot}yr^{-1}}$ & 10$^{4}\rm{{M}_{\odot}yr^{-1}}$ & 10$^{5}\rm{{M}_{\odot}yr^{-1}}$\\ \midrule H & 1 & 9.98e-01 & 9.97e-01 & 9.97e-01 & 9.94e-01 & 3.83e-01 & 4.00e-23 \\ H & 2 & 1.56e-13 & 1.69e-13 & 1.84e-13 & 1.99e-13 & 7.55e-14 & 8.00e-30 \\ He & 3 & 8.04e-07 & 2.35e-10 & 1.32e-10 & 2.59e-10 & 8.43e-11 & 3.64e-19 \\ He & 4 & 1.00e+00 & 9.89e-01 & 9.28e-01 & 6.03e-01 & 3.08e-01 & 2.56e-06 \\ Li & 7 & 1.43e+06 & 4.90e+04 & 5.64e-01 & 3.55e-01 & 4.37e-01 & 1.19e-13 \\ B & 11 & 2.98e-01 & 8.59e-03 & 7.67e-08 & 2.78e-08 & 1.70e-08 & 1.78e-21 \\ C & 12 & 2.81e-06 & 3.55e-08 & 5.55e-09 & 6.50e-09 & 9.96e-09 & 3.54e-10 \\ C & 13 & 1.92e-02 & 1.57e-03 & 5.21e-05 & 1.71e-05 & 5.75e-06 & 5.88e-20 \\ N & 14 & 3.77e+00 & 4.65e-03 & 5.14e-03 & 2.70e-03 & 7.73e-04 & 7.49e-19 \\ N & 15 & 2.86e+02 & 1.07e+00 & 2.57e-02 & 3.79e-02 & 2.68e-02 & 1.22e-10 \\ O & 16 & 1.30e-04 & 2.80e-04 & 3.34e-06 & 4.77e-06 & 6.68e-07 & 3.23e-10 \\ O & 17 & 3.32e+00 & 4.73e-03 & 1.66e-03 & 2.03e-04 & 1.27e-04 & 3.21e-15 \\ O & 18 & 5.17e+02 & 2.16e+00 & 1.81e+00 & 9.90e-01 & 2.99e-01 & 8.44e-18 \\ F & 19 & 3.59e-01 & 1.37e-03 & 7.56e-05 & 1.04e-04 & 7.50e-05 & 1.27e-11 \\ Ne & 20 & 5.06e-04 & 4.29e-02 & 1.86e-04 & 1.61e-04 & 2.14e-05 & 5.68e-12 \\ Ne & 21 & 1.26e+00 & 2.25e+02 & 1.38e-03 & 3.20e-03 & 9.87e-04 & 1.92e-13 \\ Ne & 22 & 1.40e+01 & 9.19e-01 & 1.54e-01 & 3.37e-02 & 1.31e-02 & 3.87e-14 \\ Na & 23 & 2.50e-01 & 1.25e+02 & 1.50e+00 & 8.24e-01 & 2.27e-01 & 4.19e-12 \\ Mg & 24 & 5.58e-02 & 1.77e+01 & 1.70e-01 & 9.19e-02 & 2.52e-02 & 2.46e-13 \\ Mg & 25 & 5.07e-02 & 9.47e+00 & 2.56e-02 & 3.63e-02 & 2.37e-02 & 1.95e-13 \\ Mg & 26 & 9.82e-04 & 3.27e-02 & 6.25e-05 & 9.55e-05 & 6.38e-05 & 4.38e-12 \\ Al & 27 & 1.37e+01 & 1.47e+01 & 1.97e+00 & 7.04e-01 & 2.07e-01 & 
8.22e-12 \\ Si & 28 & 1.27e-01 & 3.94e+00 & 5.73e-01 & 2.10e-01 & 6.51e-02 & 2.92e-13 \\ Si & 29 & 8.27e-04 & 2.34e-03 & 5.96e-03 & 3.26e-03 & 2.69e-03 & 1.69e-11 \\ Si & 30 & 1.15e-01 & 4.06e-02 & 6.20e-04 & 2.56e-04 & 3.17e-04 & 1.23e-09 \\ P & 31 & 1.43e+02 & 7.07e+01 & 6.32e+01 & 1.83e+01 & 5.16e+00 & 6.48e-11 \\ S & 32 & 2.59e-02 & 1.33e+00 & 1.61e+00 & 4.70e-01 & 1.35e-01 & 6.40e-12 \\ S & 33 & 1.49e+02 & 1.50e+02 & 2.24e+00 & 1.46e+00 & 9.79e-01 & 2.49e-09 \\ S & 34 & 3.35e-04 & 6.14e-03 & 4.76e-01 & 3.36e-03 & 1.52e-03 & 1.45e-08 \\ S & 36 & 7.84e-24 & 4.36e-27 & 3.46e-28 & 4.91e-28 & 4.83e-28 & 2.74e-05 \\ Cl & 35 & 1.66e-01 & 1.05e-02 & 5.37e+02 & 3.33e+01 & 8.44e+00 & 1.79e-09 \\ Cl & 37 & 2.63e+01 & 2.65e+01 & 1.96e+02 & 1.11e+00 & 7.07e-01 & 8.61e-08 \\ Ar & 36 & 5.31e-02 & 9.01e-02 & 1.34e+02 & 8.52e+00 & 2.26e+00 & 3.43e-12 \\ Ar & 38 & 3.05e-04 & 4.42e-04 & 1.30e+03 & 3.18e+01 & 7.95e+00 & 6.90e-08 \\ Ar & 40 & 9.39e-23 & 1.03e-25 & 2.30e-27 & 8.63e-28 & 4.09e-27 & 8.38e-05 \\ K & 39 & 9.15e-03 & 7.66e-03 & 1.01e+02 & 2.72e+00 & 3.94e-01 & 3.65e-09 \\ K & 40 & 2.74e-12 & 9.73e-20 & 2.97e-90 & 8.69e-18 & 6.16e-18 & 2.16e-07 \\ K & 41 & 1.40e+01 & 2.68e-01 & 1.86e+03 & 3.32e+00 & 1.40e+00 & 3.54e-08 \\ Ca & 40 & 4.67e-01 & 2.36e-06 & 1.78e+00 & 4.32e-02 & 3.04e-03 & 2.10e-15 \\ Ca & 42 & 1.03e+02 & 1.69e+02 & 1.79e+04 & 3.96e+02 & 1.79e+02 & 2.04e-07 \\ Ca & 43 & 3.24e-01 & 5.24e-01 & 9.43e+03 & 8.45e+00 & 7.75e+00 & 2.54e-06 \\ Ca & 44 & 1.49e-01 & 3.61e+00 & 4.23e+03 & 1.32e+02 & 1.06e+01 & 3.37e-05 \\ Ca & 46 & 1.89e-17 & 3.99e-90 & 3.99e-90 & 2.84e-28 & 3.74e-28 & 5.85e+00 \\ Ca & 48 & 2.62e-22 & 7.27e-92 & 7.27e-92 & 7.27e-92 & 7.27e-92 & 3.67e+00 \\ Sc & 45 & 1.48e+00 & 1.89e+00 & 1.42e+03 & 1.10e+02 & 3.45e+01 & 7.43e-04 \\ Ti & 46 & 3.68e-02 & 3.04e-01 & 1.59e+02 & 9.69e+00 & 9.56e+01 & 3.03e-06 \\ Ti & 47 & 1.54e+00 & 1.74e+00 & 1.52e+03 & 3.76e+01 & 1.47e+01 & 3.32e-02 \\ Ti & 48 & 5.45e-01 & 9.96e-02 & 1.72e+03 & 4.28e+02 & 8.41e+01 & 
2.17e-02 \\ Ti & 49 & 3.67e+00 & 5.74e-01 & 2.57e+02 & 1.32e+02 & 5.57e+01 & 3.26e+01 \\ Ti & 50 & 2.04e-04 & 2.26e-25 & 1.14e-28 & 8.19e-30 & 7.12e-17 & 1.28e+04 \\ V & 50 & 1.06e+01 & 1.33e-90 & 1.33e-90 & 1.31e-19 & 1.07e-17 & 3.93e+00 \\ \bottomrule \end{tabular} \caption{The same as Table \ref{tab:overprod_factors_centraj}, but for trajectories mod.D.ar1d0, mod.D.ar1d1, mod.D.ar1d2, mod.D.ar1d3, mod.D.ar1d4, and mod.D.ar1d5, respectively. } \label{tab:overprod_factors_delay} \end{table*} \bibliographystyle{mnras}
\section{Introduction} Surface groups have a foundational role in geometric topology and geometric group theory. In this paper, we study a class of amalgamated products of surface groups, which in many ways resemble fundamental groups of $3$--manifolds. Let $\mathcal{C}_{m,n}$ be the collection of amalgamated free products of the form $\pi_1(S_g)*_{\langle a^m=b^n \rangle}\pi_1(S_h)$, where $m \leq n$, where $S_g$ and $S_h$ are closed orientable surfaces of genus $g$ and~$h$ greater than one, and where $a$ and $b$ are the homotopy classes of essential simple closed curves on $S_g$ and~$S_h$, respectively. Let $\mathcal{C} = \bigcup_{m\leq n} \mathcal{C}_{m,n}$. Each group in $\mathcal{C}$ is the fundamental group of a complex that consists of two closed orientable surfaces $S_g$ and $S_h$ and an annulus, where one boundary component of the annulus is identified to an essential simple closed curve $a$ on $S_g$ by a degree--$m$ map of the circle, and the other boundary component of the annulus is identified to an essential simple closed curve $b$ on $S_h$ by a degree--$n$ map of the circle, for positive integers $m \leq n$. An example appears in Figure~\ref{figure:surf_amalgam}. A primary feature of the surface amalgams in $\mathcal{C}$ is their strong resemblance in many ways to $3$--manifolds with three different types of geometric decomposition. More specifically, $\mathcal{C}$ is divided into three basic families of amalgams. Amalgams in the first family are word hyperbolic and resemble Kleinian groups. The amalgams in the second family are hyperbolic relative to virtually abelian subgroups (Klein bottle groups), and strongly resemble $3$--manifolds formed by gluing two hyperbolic components along a cusp torus. Amalgams in the third family are hyperbolic relative to groups of the form $\presentation{a,b}{a^m = b^n}$, each of which has a finite-index subgroup isomorphic to $F\times \field{Z}$ for a nonabelian free group $F$.
We note that when $m$ and $n$ are relatively prime, the group $\presentation{a,b}{a^m = b^n}$ is the fundamental group of a torus knot complement in $S^3$. Surface amalgams of this third family closely resemble ``mixed'' type $3$--manifolds that contain both hyperbolic and Seifert fibered JSJ components. However, in spite of their strong resemblance to various $3$--manifolds, we prove that, for most choices of $m$ and $n$, the surface amalgams in $\mathcal{C}$ are not the fundamental groups of $3$--manifolds. \begin{thm} \label{sec1_3man_classification} Let $G \cong \pi_1(S_g) *_{\left\langle a^m=b^n \right\rangle} \pi_1(S_h) \in \mathcal{C}_{m,n}$, where $a\in \pi_1(S_g)$ and $b \in \pi_1(S_h)$ are homotopy classes of essential simple closed curves. Then $G$ is the fundamental group of a $3$--manifold if and only if one of the following holds: \begin{enumerate} \item $m=n=1$; \item $m=1$, $n=2$, and $b$ is the homotopy class of a non-separating curve; or, \item $m=n=2$, and $a$ and $b$ are homotopy classes of non-separating curves. \end{enumerate} \end{thm} \begin{figure} \begin{overpic}[scale=.7, tics=5]{figure-surf_amalgam.pdf} \put(22,9){\small{$a$}} \put(32,9){\small{$a^m$}} \put(61,9){\small{$b^n$}} \put(75.5,8.5){\small{$b$}} \end{overpic} \caption{{\small A $2$--complex whose fundamental group is a surface amalgam in the family $\mathcal{C}_{m,n}$. The left boundary curve of the annulus is glued to the curve $a^m$, and the right boundary curve of the annulus is glued to the curve $b^n$.}} \label{figure:surf_amalgam} \end{figure} On the other hand, we prove that all groups in $\mathcal{C}$ are virtually $3$--manifold groups. \begin{thm} \label{sec1:virt3man} Each surface amalgam in the family $\mathcal{C}$ has a finite-index subgroup that is the fundamental group of a $3$--manifold. \end{thm} Kapovich--Kleiner \cite{kapovichkleiner} introduced the first examples of torsion-free hyperbolic groups that are not $3$--manifold groups but are virtual $3$--manifold groups.
In their later work on coarse Poincar\'{e} duality spaces \cite{kapovichkleiner05}, they observe that many ``higher genus Baumslag--Solitar groups'' are also not $3$--manifold groups. Higher genus Baumslag--Solitar groups are certain HNN extensions of fundamental groups of hyperbolic surfaces with two boundary components, taken over powers of the boundary curves. In this paper, we extend and generalize the Kapovich--Kleiner construction to a broad family of surface group amalgams. In the present setting of surface amalgams over powers of embedded curves in closed surfaces, the combinatorial obstruction to acting on a $3$--manifold is substantially simpler than the obstruction found by Kapovich--Kleiner. Ultimately, the obstruction we find boils down to a very elementary argument about subgroups of finite dihedral groups (see for example ``Case~1'' in the proof of Theorem~\ref{thm:3manclassification}). Moreover, some interesting features arise when considering closed surfaces, which contain both separating and nonseparating curves. In contrast, the Kapovich--Kleiner examples are formed by gluing over powers of boundary curves, which are homologically trivial. There has recently been much interest in geometric group theory in the problem of embedding groups as quasi-convex subgroups of right-angled Artin groups (see for example \cite{wise}). Some of the strongest consequences follow from the fact that right-angled Artin groups embed in right-angled Coxeter groups, which are subgroups of $SL_n(\field{Z})$. In particular, every virtually special cubulated group is linear over $\field{Z}$ because of its embedding in a right-angled Coxeter group \cite{HsuWise99,DavisJanuszkiewicz00}. In this paper, we obtain a strong embedding theorem; we prove that each surface amalgam in $\mathcal{C}$ virtually embeds as a finite-index subgroup of a particularly simple right-angled Coxeter group, one whose nerve is a planar graph called a {\it generalized $\Theta$--graph} (see Definition~\ref{def:RACG}).
\begin{thm} \label{thm:sec1_CW_AC} Each surface amalgam in $\mathcal{C}$ is abstractly commensurable to a right-angled Coxeter group with nerve a generalized $\Theta$--graph. \end{thm} By an elementary construction of Davis--Okun \cite{davisokun}, each right-angled Coxeter group with nerve a planar, connected simplicial complex of dimension at most two acts properly on a contractible $3$--manifold; see Theorem~\ref{thm:racg_3man} for more details. Thus, Theorem~\ref{sec1:virt3man} follows from Theorem~\ref{thm:sec1_CW_AC}. The class $\mathcal{W}$ of right-angled Coxeter groups with generalized $\Theta$--graph nerve was introduced by Dani--Thomas \cite{danithomas}, as a generalization of a family of groups investigated by Crisp--Paoluzzi \cite{crisppaoluzzi}. A natural problem in geometric group theory proposed by Gromov is to characterize the quasi-isometry classes within a class of finitely-generated groups. Recent work of Cashen--Martin \cite{cashenmartin} provides a far-reaching quasi-isometry classification for groups which split over $2$--ended subgroups using highly intricate combinatorial machinery. In an appendix, we provide the quasi-isometry classification of all groups in the families $\mathcal{C}$ and $\mathcal{W}$. The classification obtained in the appendix is a special case of the general classification theorem of Cashen--Martin. In particular, within the family $\mathcal{C}$ we obtain a simple statement of the quasi-isometry classification in terms of the degrees of the gluing maps in the amalgams. \begin{thm} \label{thm:sec1_C_QI} Let $G \in \mathcal{C}_{m,n}$ and $G' \in \mathcal{C}_{m',n'}$. Then $G$ and $G'$ are quasi-isometric if and only if one of the following conditions holds: \begin{enumerate} \item $m=m'=1$ and $n=n'$; or, \item $m=m'=n=n'=2$; or, \item $m\geq 2, n\geq 3, m'\geq 2$ and $n'\geq 3$.
\end{enumerate} \end{thm} Although the conclusion of Theorem~\ref{thm:sec1_C_QI} may not be surprising to experts, we have included it as an illustration, for a less-expert reader, of some simple ways that the powerful tools of Behrstock--Neumann \cite{behrstockneumann} and Whyte \cite{whyte} may be used to produce quasi-isometries between trees of spaces whose vertex spaces have ``treelike'' geometries. Furthermore, we relate the geometry of surface amalgams in the family $\mathcal{C}$ to the right-angled Coxeter groups in the family $\mathcal{W}$. Dani--Thomas provide a quasi-isometry classification of the hyperbolic groups of $\mathcal{W}$, and we extend this classification to cover the non-hyperbolic groups in $\mathcal{W}$ in Theorem~\ref{sec1_thm_cW_QI}. A surface amalgam in $\mathcal{C}_{m,n}$ is $\delta$--hyperbolic if and only if $m=1$. Thus, by Theorem~\ref{thm:sec1_C_QI} there are infinitely many quasi-isometry classes among the $\delta$--hyperbolic groups in $\mathcal{C}$ and exactly two quasi-isometry classes among the non-hyperbolic groups in $\mathcal{C}$. The quasi-isometry classification of $\delta$--hyperbolic groups in $\mathcal{C}$ follows in spirit from the quasi-isometry classification of geometric amalgams of free groups given in the thesis of Malone \cite{malone}. A proof of the quasi-isometry classification in the hyperbolic case $m=n=1$ is given by Stark \cite{stark}, and the result in the hyperbolic setting also follows from Cashen--Martin \cite{cashenmartin}. The non-hyperbolic case above would also follow by translating \cite[Theorem~8.5]{cashenmartin} to this setting. \subsection{Methods of proof} \label{subsec:method} \subsubsection{$3$--manifold groups} In Theorem~\ref{sec1_3man_classification} we determine which surface amalgams in the family $\mathcal{C}$ are fundamental groups of $3$--manifolds. The proof uses results due to Kapovich--Kleiner \cite{kapovichkleiner05} on coarse Poincar\'{e} duality spaces.
In dimension~$3$, Kapovich--Kleiner prove (with additional mild hypotheses; see Section~\ref{sec:3man}) that if $W$ is a union of $k$ half-planes glued along their boundaries, then a uniformly proper embedding of~$W$ into a coarse $PD(3)$ space $X$ coarsely separates the space $X$ into $k$ deep components, and there is a cyclic order on the set of components that is preserved by any homeomorphism of $X$ stabilizing the image of~$W$. If the conditions of Theorem~\ref{sec1_3man_classification} do not hold, we analyze the action of a group $G \in \mathcal{C}$ on a model geometry for the group to find a union of $k$ half-planes whose $G$--stabilizer cannot preserve any cyclic order, which implies that $G$ cannot act properly on any coarse $PD(3)$ space. On the other hand, if the conditions of Theorem~\ref{sec1_3man_classification} hold for $G \in \mathcal{C}$, we explicitly construct a $3$--manifold with fundamental group $G$ by gluing together an $I$--bundle over each surface along an annulus on the boundary of each $I$--bundle. By Theorem~\ref{sec1:virt3man} each group in $\mathcal{C}$ is virtually the fundamental group of a $3$--manifold. Thus, by work of Bestvina--Kapovich--Kleiner \cite{bestvinakapovichkleiner}, if $G \in \mathcal{C}$ acts properly and cocompactly on a $\CAT(0)$ space $X$, then the visual boundary of $X$ does not contain an embedded non-planar graph. Finally, observe the following, which contrasts with the classification theorem obtained in Theorem~\ref{sec1_3man_classification}. A $\delta$--hyperbolic group in $\mathcal{C}_{1,n}$ is quasi-isometric to (and, in fact, abstractly commensurable to) the fundamental group of the union of $2n+2$ hyperbolic surfaces with one boundary component identified to each other along their boundary curves. The fundamental group of this complex is also the fundamental group of a hyperbolic $3$--manifold with boundary called a book of $I$--bundles \cite{cullershalen}.
(See also \cite{haissinskypaoluzziwalsh} for more about these examples.) In contrast, by Theorem~\ref{sec1_3man_classification}, most $\delta$--hyperbolic groups in $\mathcal{C}$ are not fundamental groups of $3$--manifolds. \subsubsection{Quasi-isometry classification} To study the quasi-isometry classes within $\mathcal{C}$, we prove that each group in $\mathcal{C}$ has a model geometry built from gluing together copies of a fattened tree in the sense of Behrstock--Neumann (see Definition~\ref{fattenedtree}) and copies of $T_{m,n} \times \field{R}$, where $T_{m,n}$ denotes the biregular tree with vertices of valence $m$ and $n$ (see Definition~\ref{treedef}). We provide a quasi-isometry classification of such model geometries, and we use results of Whyte \cite{whyte} and Behrstock--Neumann \cite{behrstockneumann} on the geometry of the components of these spaces to construct a quasi-isometry. To distinguish the quasi-isometry classes within these model geometries, we show that each right-angled Coxeter group with generalized $\Theta$--graph nerve, as defined below, also has a model geometry of the form described above. See Figure~\ref{NaO} for an illustration of a generalized $\Theta$--graph. \begin{defn} \label{def:RACG} Let $\Gamma$ be a finite simplicial graph with vertex set $S$ and edge set $E$. The \emph{right-angled Coxeter group associated to $\Gamma$} is the group $W_{\Gamma}$ with presentation \[ W_{\Gamma} = \bigpresentation{s \in S}{\text{$s^2 = 1$ for all $s \in S$, and $[s,t]=1$ for all $\{s,t\} \in E$}}. \] If $\Gamma$ has no triangles (i.e.\ $\Gamma$ is a flag simplicial complex) then $\Gamma$ is equal to the \emph{nerve} of $W_\Gamma$ in the sense of Davis \cite{davis}. A {\it generalized $\Theta$--graph} is a graph that consists of two vertices of valence $k$ and $k$ edges connecting these two vertices. In addition, the $i^{th}$ edge of the graph is subdivided into $n_i+1$ edges by inserting $n_i$ vertices along this edge.
The resulting graph is denoted $\Theta(n_1, \ldots, n_k)$, and we always assume $1 \leq n_i \leq n_j$ for $i<j$. The {\it linear degree} $\ell \in \field{N}$ of a generalized $\Theta$--graph $\Theta(n_1, \ldots, n_k)$ is the number of indices $i$ with $n_i = 1$, and the {\it hyperbolic degree} is equal to $k-\ell$. Let $\mathcal{W}$ denote the set of all right-angled Coxeter groups with generalized $\Theta$--graph nerve. \end{defn} A group in $\mathcal{W}$ is $\delta$--hyperbolic if and only if its linear degree is at most one. The quasi-isometry classification of the $\delta$--hyperbolic groups in $\mathcal{W}$ follows from Dani--Thomas \cite{danithomas}. In particular, there are infinitely many quasi-isometry classes among $\delta$--hyperbolic groups in~$\mathcal{W}$. We give the quasi-isometry classification of the remaining non-hyperbolic groups in $\mathcal{W}$; it follows from the next theorem that there are three quasi-isometry classes among the non-hyperbolic groups in $\mathcal{W}$. \begin{thm} \label{sec1_thm_cW_QI} Let $\Theta$ and $\Theta'$ be generalized $\Theta$--graphs with linear degree $\ell \geq 2$ and $\ell' \geq 2$, respectively, and hyperbolic degree $h \geq 0$ and $h'\geq 0$, respectively. Then the non-hyperbolic right-angled Coxeter groups $W_{\Theta}$ and $W_{\Theta'}$ are quasi-isometric if and only if one of the following three conditions holds: \begin{enumerate} \item $\ell = \ell' = 2$ and $h,h'\geq 1$; or, \item $\ell,\ell' \geq 3$ and $h,h'\geq 1$; or, \item $\ell,\ell' \geq 3$, and $h=h'=0$. \end{enumerate} \end{thm} By Caprace \cite{caprace,caprace-erra} each group in $\mathcal{W}$ is hyperbolic relative to the right-angled Coxeter subgroup with nerve the union of the two vertices of valence $k$ and the set of paths of length two connecting these two vertices. We apply work of Dru{\cb{t}}u--Sapir \cite{drutusapir} on quasi-isometries between relatively hyperbolic groups to distinguish the quasi-isometry classes within $\mathcal{W}$.
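To illustrate Theorem~\ref{sec1_thm_cW_QI}, consider the following graphs, chosen here purely for illustration; the degrees are computed directly from the definition.

```latex
% Illustrative examples of Theorem \ref{sec1_thm_cW_QI}; graphs chosen ad hoc.
\begin{rem}
The graphs $\Theta(1,1,2)$ and $\Theta(1,1,3,5)$ have $(\ell, h) = (2,1)$ and
$(2,2)$ respectively, so $W_{\Theta(1,1,2)}$ and $W_{\Theta(1,1,3,5)}$ are
quasi-isometric by condition~(1). The graphs $\Theta(1,1,1)$ and
$\Theta(1,1,1,1)$ have $(\ell, h) = (3,0)$ and $(4,0)$, so the corresponding
groups are quasi-isometric by condition~(3). No condition applies when
$\ell = 2$ and $\ell' \geq 3$, so the groups in the first pair are not
quasi-isometric to those in the second.
\end{rem}
```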
A consequence of our quasi-isometry classification of the model geometries of groups in $\mathcal{C}$ and groups in $\mathcal{W}$ is that each surface amalgam in $\mathcal{C}$ is quasi-isometric to a right-angled Coxeter group in $\mathcal{W}$, but the converse does not hold; see Corollary~\ref{cor:main3}. \subsubsection{Abstract commensurability classes} Theorem~\ref{thm:sec1_CW_AC} states that each group in $\mathcal{C}$ is abstractly commensurable to a group in $\mathcal{W}$. Each group in $\mathcal{C}$ is the fundamental group of a complex consisting of two surfaces and an annulus identified using prescribed gluing maps, and each group in $\mathcal{W}$ is the orbifold fundamental group of an orbicomplex built from right-angled reflection orbifolds. We prove that for each surface complex there is an associated orbicomplex so that these spaces have homotopy equivalent finite-sheeted covering spaces. The hyperbolic case $m=n=1$ was established previously by Stark \cite[Section~5.2]{stark}. The abstract commensurability classification is known for $\delta$--hyperbolic groups in $\mathcal{C}$ and $\mathcal{W}$ and is open for non-hyperbolic groups in $\mathcal{C}$ and $\mathcal{W}$. A consequence of Theorem~\ref{thm:sec1_CW_AC} is that solving the abstract commensurability classification problem within $\mathcal{C}$ reduces to solving the abstract commensurability classification problem within $\mathcal{W}$. The abstract commensurability classification for $\delta$--hyperbolic groups in $\mathcal{W}$ is given by Dani--Stark--Thomas \cite{danistarkthomas}. In the $\delta$--hyperbolic setting, if two groups in $\mathcal{W}$ have the same linear degree, then the groups are abstractly commensurable if and only if their Euler characteristic vectors are commensurable vectors (see Definition~\ref{eulercharvector}).
In the non-hyperbolic setting, we prove in Lemmas \ref{hypdegree} and~\ref{cover} that the commensurability classification reduces to the case in which two groups in $\mathcal{W}$ have the same linear degree, and we prove that if such groups have commensurable Euler characteristic vectors, then the groups are abstractly commensurable. It is an open problem to determine whether the converse holds as well. \subsection{Outline of the paper} The preliminary metric notions are given in Section~\ref{sec:Preliminaries}. Section~\ref{sec:ModelSpaces} contains the construction of the model spaces and model geometries for surface amalgams and right-angled Coxeter groups considered in this paper. Section~\ref{sec:ACclasses} contains results on the abstract commensurability classes within $\mathcal{C}$ and $\mathcal{W}$. In Section~\ref{sec:AC_thm_proof} we prove each group in $\mathcal{C}$ is virtually a $3$--manifold group and is abstractly commensurable to a group in $\mathcal{W}$. Section~\ref{sec:3man} contains the classification of which groups in $\mathcal{C}$ are the fundamental group of a $3$--manifold and the proof that each group in $\mathcal{W}$ acts properly on a contractible $3$--manifold. Appendix~\ref{sec:QIClassification} contains the quasi-isometry classification; the construction of quasi-isometries is given in Section~\ref{sec:QIConstruction}; the quasi-isometry classification within $\mathcal{W}$ is given in Section~\ref{sec:QIRACG}; and, the quasi-isometry classification within $\mathcal{C}$ is given in Section~\ref{sec:QISurfaceAmalgam}. \subsection*{Acknowledgments} The authors are thankful for insightful discussions with Kevin Schreve and Genevieve Walsh and for comments from Chris Cashen on a draft of the paper. The authors also benefited from helpful conversations with Ric Ancel, Pallavi Dani, Craig Guilbault, Jason Manning, Boris Okun, Kim Ruane, and Kevin Whyte about ideas related to this paper.
This work was partially supported by a grant from the Simons Foundation ($\#318815$ to G. Christopher Hruska). The second author was partially supported by the Azrieli Foundation and ISF grant 1941/14. \section{Preliminaries} \label{sec:Preliminaries} \begin{defn} Let $(X,d_X)$ and $(Y,d_Y)$ be metric spaces. A map $\Phi$ from $X$ to $Y$ is an \emph{$(L,C)$--quasi-isometry} if there are constants $L\geq 1$ and $C \geq 0$ such that the following hold: \begin{enumerate} \item The map $\Phi$ is an {\it $(L,C)$--quasi-isometric embedding}: for all $x_1, x_2 \in X$, \[ \frac{1}{L}\,d_X(x_1,x_2)-C\leq d_Y\bigl(\Phi(x_1),\Phi(x_2)\bigr)\leq L\,d_X(x_1,x_2)+C. \] \item The map $\Phi$ is {\it $C$--quasi-surjective}: every point of $Y$ lies in the $C$--neighborhood of $\Phi(X)$. \end{enumerate} \end{defn} \begin{defn} Let $(X,d_X)$ and $(Y,d_Y)$ be metric spaces. A map $\Phi$ from $X$ to $Y$ is {\it $K$--bilipschitz} if there exists $K \geq 1$ so that for all $x_1, x_2 \in X$, \[ \frac{1}{K}\,d_X(x_1,x_2) \leq d_Y\bigl(\Phi(x_1), \Phi(x_2)\bigr) \leq K\,d_X(x_1, x_2). \] \end{defn} \begin{defn} Metric spaces $X$ and $Y$ are \emph{quasi-isometric} if there is a quasi-isometry from $X$ to $Y$. Two finitely generated groups are {\it quasi-isometric} if their Cayley graphs constructed with respect to finite generating sets are quasi-isometric. \end{defn} \begin{defn} A {\it model geometry} for a group $G$ is a proper metric space on which $G$ acts properly discontinuously and cocompactly by isometries. \end{defn} A group $G$ is quasi-isometric to any model geometry for $G$. The groups considered in this paper are fundamental groups of finite graphs of groups (see \cite{serre,ScottWall79}). These groups have model geometries that are graphs of spaces in the following sense. \begin{defn} \label{union} Let $Y_1$ and $Y_2$ be topological spaces with subspaces $A_1 \subset Y_1$ and $A_2 \subset Y_2$. Let $f\colon A_1 \to A_2$ be a homeomorphism.
The space obtained by identifying $Y_1$ and $Y_2$ along $A_1$ and $A_2$ is the space $X = Y_1 \cup_f Y_2$ defined by $Y_1 \sqcup Y_2 /\bigl(y \sim f(y)\bigr)$ for all $y \in A_1$. If $A$ is the image of $A_1$ and $A_2$ under the quotient map, we also use the notation $X = Y_1 \cup_A Y_2$. \end{defn} \begin{defn} Let $G = (V,E)$ be a graph. A {\it geometric graph of spaces} $\Gamma$ consists of a set of vertex spaces $\{X_v \, | \, v \in V\}$ and a set of edge spaces $\{ X_e \, | \, e \in E\}$ so that the vertex and edge spaces are geodesic metric spaces, and there are isometric embeddings $X_e \rightarrow X_v$ and $X_e \rightarrow X_w$ as convex subsets for each edge $e = \{v,w\} \in E$. The {\it geometric realization of $\Gamma$} is the metric space $X$ consisting of the disjoint union of the vertex and edge spaces, identified according to the adjacencies of $G$, and given the induced path metric. Observe that all edge and vertex spaces include as convex subspaces of $X$. The {\it underlying graph} of the graph of spaces $\Gamma$ is the abstract graph $G$ specifying $\Gamma$. When the underlying graph of $\Gamma$ is a tree, $\Gamma$ is a \emph{tree of spaces}. \end{defn} The graphs of spaces defined in this paper differ slightly from a common definition of a graph of spaces. Often, one takes the product of each edge space $E$ with an interval $[0,1]$ and glues the spaces $E \times \{0\}$ and $E \times \{1\}$ to the incident vertex spaces; in our setting, the edge spaces are directly glued to vertex spaces. \section{Model spaces} \label{sec:ModelSpaces} \subsection{Construction of model spaces} The model spaces considered in this paper are geometric trees of spaces that contain the following as vertex spaces. \begin{defn}[Fattened tree, \cite{behrstockneumann}] Let $T$ be a tree whose vertices have valence in the interval $[3,K]$ for some $K$. 
Fix a positive constant $L$ and assume that $T$ has been given a simplicial metric in which each edge has length between $1$ and $L$. The {\it fattened tree $X$} has each edge $E$ replaced by a strip isometric to $E \times [-\epsilon, \epsilon]$ for some $\epsilon >0$, and each vertex of valence $k$ replaced by a regular $k$--gon with side length $2\epsilon$ so that around the boundary of the polygon the strips replacing the incoming edges of the vertex are attached in some given order. Let $X_0$ be similarly constructed, but starting with the regular $3$--valent tree with all edges having length one and with $\epsilon = \frac{1}{2}$; call $X_0$ the {\it standard fattened tree}. \label{fattenedtree} \end{defn} \begin{defn} \label{treedef} For each positive integer $n$, the \emph{regular tree} $T_n$ is the tree in which every vertex has valence $n$. We define $T_0$ to be a single point. For positive integers $m$ and $n$ the \emph{biregular tree} $T_{m,n}$ is the tree in which every vertex has valence $m$ or $n$ so that if two vertices of $T_{m,n}$ are adjacent, one of them has valence $m$ and the other has valence $n$. Metrize each tree so that each edge has length one. \end{defn} \begin{defn} Let $T$ be a tree and $u$ a vertex of $T$. In the metric space $T\times \field{R}$ each line $\{u\}\times \field{R}$ is an \emph{essential line}. \end{defn} In Construction~\ref{racgmodel} and Construction~\ref{surfmodel}, we show that each group considered in this paper has a model geometry of the following form. \begin{cons}[Model spaces] \label{modelspaces} Let $m,n \geq 1$ and $s \geq 0$ be integers, and let $\field{X} = \{X_1, \ldots, X_t\}$ be a finite set of fattened trees.
We say $Y$ is {\it a model space of type $(m,n, \field{X},s)$} if $Y$ is the geometric realization of a geometric tree of spaces $\Gamma$ where \begin{enumerate} \item The underlying tree of $\Gamma$ is bipartite with vertex spaces of two types: either a copy of a fattened tree $X_i \in \field{X}$ or a copy of $T_{m,n}\times \field{R}$. Each edge space is a bi-infinite line. \item Each essential line in each copy of $T_{m,n} \times \field{R}$ is identified by an isometry to one boundary component of each of $s$ fattened trees, each isometric to some $X_i\in \field{X}$. \item Each boundary component of each fattened tree is identified to exactly one copy of $T_{m,n} \times \field{R}$. \end{enumerate} If $\field{X} = \{X_0\}$, where $X_0$ is the standard fattened tree, we call $Y$ the {\it standard model space} of type $(m,n,s)$. Note that if $Y$ is of type $(m,n,0)$, then $Y$ is isometric to $T_{m,n} \times \field{R}$. \end{cons} \subsection{Right-angled Coxeter groups} \begin{defn}[Generalized $\Theta$--graph] Let $k\geq 3$ and $1 \leq n_1 \leq n_2 \leq \cdots \leq n_k$ be positive integers. Let $\Psi_k$ be the graph with two vertices $a$ and $b$, each of valence $k$, and with $k$ edges $e_1, e_2, \ldots, e_k$ connecting $a$ and $b$. The \emph{generalized $\Theta$--graph} $\Theta=\Theta(n_1, n_2, \ldots, n_k)$ is obtained by subdividing the edge $e_i$ of $\Psi_k$ into $n_i+1$ edges by inserting $n_i$ new vertices along $e_i$ for $1 \leq i \leq k$. An example appears in Figure~\ref{NaO}. The vertices $a$ and $b$ are called the {\it essential vertices} of $\Theta$, and each path obtained by subdividing the edge $e_i$ is called an {\it essential path of degree $n_i$}. The {\it linear part} of $\Theta$, denoted $\Theta_L$, is the subgraph of $\Theta$ that consists of the union of all essential paths of degree $1$. The {\it hyperbolic part} of $\Theta$, denoted $\Theta_H$, is the subgraph of $\Theta$ consisting of all essential paths of degree at least two.
The number of essential paths in $\Theta_L$ is called the {\it linear degree} of $\Theta$, and the number of essential paths in $\Theta_H$ is called the {\it hyperbolic degree} of $\Theta$. \end{defn} \begin{figure} \centering \includegraphics[scale=0.60]{figure-gen_theta.pdf} \caption{{\small On the left is the generalized $\Theta$--graph $\Theta=\Theta(1, 1, 2, 2, 2, 3)$. On the right is an orbi-complex $\mathcal{O}_{\Theta}$ with orbifold fundamental group the right-angled Coxeter group with defining graph $\Theta$. All edges of the orbi-complex are reflection edges except for the branching edge. }} \label{NaO} \end{figure} \begin{rem} A generalized $\Theta$--graph $\Theta$ is the union of its linear part $\Theta_L$ and its hyperbolic part $\Theta_H$. The intersection of these parts is the set of two essential vertices in $\Theta$. \end{rem} \subsubsection{Model geometry} \begin{defn}[Davis complex] Given a nontrivial, connected, finite, simplicial, triangle-free graph $\Gamma$ with a set $S$ of vertices, the \emph{Davis complex} $\Sigma_{\Gamma}$ is the Cayley $2$--complex for the presentation of the right-angled Coxeter group $W_{\Gamma}$ given above, in which each disk bounded by a loop with label $s^2$ for $s$ in $S$ has been collapsed to an unoriented edge with label $s$. Then the $1$--skeleton of $\Sigma_{\Gamma}$ is the Cayley graph of $W_{\Gamma}$ with respect to the generating set $S$. Since all relators in this presentation other than $s^2 = 1$ are of the form $stst = 1$, the space $\Sigma_{\Gamma}$ is a square complex. Moreover, the Davis complex $\Sigma_{\Gamma}$ is a $\CAT(0)$ space, and the group $W_{\Gamma}$ acts properly and cocompactly on $\Sigma_{\Gamma}$ (see \cite{davis}). 
\end{defn} \begin{cons}[Model geometry for right-angled Coxeter groups with generalized $\Theta$--graph nerve] \label{racgmodel} Let $\Theta = \Theta(n_1, \ldots, n_k)$ be a generalized $\Theta$--graph, and let $W_\Theta$ be the right-angled Coxeter group with defining graph $\Theta$. The quotient of the Davis complex $\Sigma_\Theta$ by the action of $W_\Theta$ is a reflection orbi-complex $\mathcal{O}_\Theta$ with orbifold fundamental group $W_\Theta$. The space $\mathcal{O}_\Theta$ is the union of right-angled reflection orbifolds each with one non-reflection edge so that the orbifolds are identified to each other along their non-reflection edges to form one branching edge. An illustration of $\mathcal{O}_\Theta$ appears in Figure~\ref{NaO}. We metrize $\mathcal{O}_\Theta$ so that the universal cover of the orbi-complex, which is topologically the Davis complex, is isometric to a model space given in Construction~\ref{modelspaces}. We first metrize the orbifolds in $\mathcal{O}_\Theta$ coming from the hyperbolic part of $\Theta$ so that the universal cover of each orbifold is a fattened tree. Suppose $n_1 = \cdots = n_{\ell} = 1$, so the linear degree of $\Theta$ is $\ell$ with $0 \leq \ell \leq k$, and the hyperbolic degree of $\Theta$ is $k-\ell$. For each $i > \ell$, let $O_i \subset \mathcal{O}_\Theta$ be an orbifold with one non-reflection edge and $n_i +2 \geq 4$ reflection edges labeled in order $\sigma_1, \ldots, \sigma_{n_i+2}$ so that $\sigma_j$ and $\sigma_{j+1}$ are adjacent for $1 \leq j \leq n_i+1$. Subdivide $\sigma_2$ and $\sigma_{n_i+1}$ into two edges by inserting a vertex in the middle of each edge. Subdivide $\sigma_3, \ldots, \sigma_{n_i}$ into three edges by inserting two equally spaced vertices on each edge. Subdivide the boundary component of $O_i$ into $n_i$ edges by inserting $n_i -1$ equally spaced vertices along the boundary edge.
Connect the vertices along the boundary edge to the new vertices along the reflection edges so that each vertex on the boundary edge has two outgoing edges, the edges do not intersect, and, together with the boundary of $O_i$, they subdivide $O_i$ into a cell complex where each cell has four sides, as illustrated in Figure~\ref{racgcell}. Metrize these cells as Euclidean rectangles so that the new interior edges each have length one, each pair of edges emanating from the same vertex on the boundary edge spans a square of side length one, and each edge along the boundary has length $L_i \geq 1$, so that $n_i L_i = n_j L_j = L$ for $\ell < i,j \leq k$. Since $O_i$ deformation retracts onto its reflection edges, whose universal cover is a tree, the universal cover of $O_i$ with this metric is a fattened tree: the vertex spaces are squares and the edge spaces have length either $L_i$ or $2L_i$. \begin{figure} \centering \includegraphics[scale=0.15]{figure-racgcell.pdf} \caption{{\small A right-angled reflection orbifold with one darkened non-reflection edge and seven reflection edges. The orbifold has been given a metric as a rectangular complex, as described in Construction~\ref{racgmodel}, so that the universal cover of the orbifold is a fattened tree. The orbifold deformation retracts onto a neighborhood of the union of the reflection edges, whose universal cover is a tree. }} \label{racgcell} \end{figure} Metrize each linear orbifold in $\mathcal{O}_\Theta$ as a rectangle with non-reflection side of length $L$ and adjacent pair of edges of length one. Then, the orbifolds coming from the essential paths in $\Theta$ can be glued by an isometry along their non-reflection edges to form $\mathcal{O}_\Theta$, whose universal cover is isometric to a model space of type $(\ell, \ell, \field{X},k-\ell)$ given in Construction~\ref{modelspaces}.
\end{cons} \subsection{Surface group amalgams} \label{subsec:QI_surf_amals} \subsubsection{Model geometry} \begin{notation} \label{hatcher} Let $A_{m,n}$ be the quotient space of the cylinder $S^1 \times [0,1]$ under the identifications $(z,0) \sim (e^{2\pi i/m}z, 0)$ and $(z,1) \sim (e^{2\pi i/n}z, 1)$. Let $A_m$ and $A_n$ be the two halves of $A_{m,n}$ formed by the quotients of $S^1 \times \bigl[0, \frac{1}{2}\bigr]$ and $S^1 \times \bigl[\frac{1}{2}, 1\bigr]$. Then, $A_m$ and $A_n$ are mapping cylinders of $z \mapsto z^m$ and $z \mapsto z^n$, respectively. The fundamental group of $A_{m,n}$ is $\presentation{a,b}{a^m = b^n}$. \end{notation} \begin{figure} \begin{overpic}[scale=.5, tics=5]{figure-A_n_cover.pdf} \put(50,54){$3$} \put(75,20){$\frac{2\pi}{3}$} \put(69,12){\small rotation} \end{overpic} \caption{{\small The degree--$3$ cover $Y_3 \times S^1 \rightarrow A_3$ described in Lemma~\ref{kmncover}. The cover is given by the $\field{Z}/3\field{Z}$ action on the space on the left generated by the screw motion that cyclically permutes the three fins and translates upward by one unit. }} \label{figure:A_n_cover} \end{figure} As explained by Hatcher in \cite[Example~1.24]{hatcher}, the $2$--complex $A_{m,n}$ is a $2$--dimensional spine of a torus knot complement when $\gcd(m,n)=1$ and is closely related to its standard Seifert fibered structure. The following result is analogous to the well-known fact that a Seifert fibered space has a finite-sheeted cover that is a product of a surface with a circle. The result is implicit in the discussion of Example~1.35 from \cite{hatcher}. \begin{lem} \label{kmncover} The space $K_{m,n} \times S^1$ forms a degree--$mn$ cover of the space $A_{m,n}$, where $K_{m,n}$ denotes the complete bipartite graph on $m$ and $n$ vertices, and $A_{m,n}$ is described above. \end{lem} \begin{proof} Let $Y_n$ denote the finite biregular tree $T_{1,n}$, the ``$n$--pointed asterisk'' (see Definition~\ref{treedef}).
Let $\rho_n\colon Y_n \rightarrow Y_n$ denote a cyclic permutation of the $n$ leaves of $Y_n$. We claim that $Y_n \times S^1$ forms a degree--$n$ cover of the space $A_n$ defined in Notation~\ref{hatcher}. Indeed, let $\tau_n\colon S^1 \rightarrow S^1$ be given by $z \mapsto e^{2\pi i/n}z$. Then, there is a $\field{Z} /n\field{Z}$ action on $Y_n \times S^1$ generated by the map $(\rho_n, \tau_n)\colon Y_n \times S^1 \rightarrow Y_n \times S^1$. The quotient under this action is the space $A_n$. An example is given in Figure~\ref{figure:A_n_cover}. Each boundary curve of $Y_n \times S^1$ is mapped homeomorphically to the boundary curve of $A_n$. Similarly, the space $Y_m \times S^1$ forms a degree--$m$ cover of $A_m$ so that each boundary curve of $Y_m \times S^1$ is mapped homeomorphically to the boundary curve of $A_m$. The covering maps $Y_n \times S^1 \rightarrow A_n$ and $Y_m \times S^1 \rightarrow A_m$ restrict to homeomorphisms on the boundary curves, so copies of these spaces may be glued together to form a cover of $A_{m,n}$ homeomorphic to $K_{m,n} \times S^1$ as illustrated in the central column of Figure~\ref{bounding_pair_cover}. Indeed, $K_{m,n} \times S^1$ consists of $m$ copies of $Y_n\times S^1$ and $n$ copies of $Y_m \times S^1$ such that each pair of opposite types intersects in a single circle. We define a covering map $K_{m,n} \times S^1 \to A_{m,n}$ in the obvious way by pasting together the various covering maps defined on these subsets. \end{proof} The following corollary is immediate. See \cite[Example~1.35]{hatcher} for an alternate explanation. \begin{cor} \label{A_mn_universal} The universal cover of $A_{m,n}$ is $T_{m,n} \times \field{R}$, where $T_{m,n}$ denotes the biregular tree with vertices of valence $m$ and $n$.
\qed \end{cor} \begin{cons}[Model geometry for $G \in \mathcal{C}$] \label{surfmodel} Let $X$ be the union of surfaces $S_g$, $S_h$, and an annulus, so that one boundary component of the annulus is attached to the curve $\gamma_g$ on $S_g$ by a degree--$m$ map and the other boundary component of the annulus is attached to the curve $\gamma_h$ on $S_h$ by a degree--$n$ map. Then, $A_{m,n}$ is a subspace of $X$. We will metrize $X$ so that $\widetilde{X}$, the universal cover of $X$, is isometric to a model space given in Construction~\ref{modelspaces}. We first put a metric on the surfaces with boundary $S_g \backslash \gamma_g$ and $S_h \backslash \gamma_h$ so that their universal covers are fattened trees. The construction is illustrated in Figure~\ref{surfmetric}. \begin{figure} \centering \includegraphics[scale=.4]{figure-surfmetric.pdf} \caption{{\small Identify opposite sides of the outer octagons to form a surface with genus two; on the left the surface has one boundary component, and on the right, the surface has two boundary components. The universal cover of the surface, with the metric on the cells described in Construction~\ref{surfmodel}, is a fattened tree. }} \label{surfmetric} \end{figure} Let $S$ be a connected surface with genus $g \geq 1$ and either one or two boundary components. Realize $S$ as a regular $4g$--gon $P$ with opposite sides identified and with boundary components in the interior of the polygon. Suppose first that $S$ has one boundary component. Subdivide each side of $P$ into three segments of equal length, and subdivide the boundary component of $S$ into $4g$ segments of equal length. On each side of $P$, connect each of the two interior vertices to a vertex on the boundary component of $S$ by edges whose interiors are pairwise disjoint such that each vertex on the boundary of $S$ connects to two vertices that lie on adjacent sides of the polygon $P$.
This construction realizes $S$ as a cell complex whose $0$--cells all lie on the boundary of $S$. The $2$--cells of $S$ that do not contain corners of $P$ are rectangles, and $S$ contains one more $2$--cell that is a $4g$--gon containing all of the corners of the polygon $P$. Metrize the cell complex $S$ so that the $4g$--gon is a regular Euclidean polygon with sides of length one, each rectangle is isometric to $[-\frac{1}{2},\frac{1}{2}] \times [0,L]$, where the value of $L \geq 1$ depends on $G$ and is chosen below, and the edges $[-\frac{1}{2},\frac{1}{2}] \times \{0\}$ and $[-\frac{1}{2},\frac{1}{2}] \times \{L\}$ of the rectangle are identified to the edges of the $4g$--gon by isometries. Then, the boundary curve of $S$ is a circle of length $4gL$, and, since $S$ deformation retracts onto the sides of $P$, the universal cover of $S$ is isometric to a fattened tree. Suppose now that $S$ has two boundary components. Add a diagonal to $P$ that subdivides $P$ into two $(2g+1)$--gons each containing one boundary component of $S$. Repeat the above subdivision construction on each of the two polygons to produce a cell structure on $S$ as illustrated on the right side of Figure~\ref{surfmetric} whose $0$--cells all lie on the boundary of $S$. The $2$--cells of $S$ consist of rectangles that do not contain corners of $P$ along with a single $(4g+2)$--gon that contains all of the corners of the polygon $P$. Endow $S$ with a piecewise Euclidean metric as above such that each rectangle is isometric to $\bigl[ -\frac{1}{2},\frac{1}{2} \bigr] \times [0,K]$ for a constant $K$ chosen below. Then each boundary curve of $S$ is a circle of length $(2g+1)K$, and, since $S$ deformation retracts onto the graph formed by the sides and chosen diagonal of $P$, the universal cover of $S$ is isometric to a fattened tree. The choice of the length of the rectangles in the construction depends on the group $G$. 
If $\gamma_g$ separates $S_g$ into two subsurfaces of genus $g_1$ and $g_2$, the lengths $L_1, L_2 \geq 1$ must satisfy $4g_1 L_1 = 4g_2 L_2$, which will equal the length of $\gamma_g$. If $\gamma_g$ is non-separating, its length is equal to $(2g_3+1)L_3$, where $g_3$ is the genus of $S_g \backslash \gamma_g$ and $L_3$ is the length of the rectangle. The length of $\gamma_h$ is determined similarly. In addition, choose the constants $L_i$ and $K_i$ so that $m$ times the length of $\gamma_g$ is equal to $n$ times the length of $\gamma_h$, which is equal to some constant $C$. By Corollary~\ref{A_mn_universal}, the universal cover of the subspace $A_{m,n}$ is $T_{m,n} \times \field{R}$. So the universal cover $\widetilde{X}$ of $X$ is isometric to a model space of type $(m,n,\field{X},2)$ given in Construction~\ref{modelspaces}. \end{cons} \begin{rem} The construction above can be used to metrize any surface $S$ with nonempty boundary so that the universal cover is a fattened tree: realize $S$ as a polygon with sides identified and boundary components in the interior of the polygon. Choose an embedded graph that $S$ deformation retracts onto and perform the construction described above. \end{rem} \section{Abstract commensurability classes} \label{sec:ACclasses} In this section, we prove Theorem~\ref{3mancovers}, which states that each surface amalgam in $\mathcal{C}$ is abstractly commensurable to a right-angled Coxeter group in $\mathcal{W}$. In Section~\ref{sec:Euler}, we define the Euler characteristic vector for a group in $\mathcal{W}$; preliminary covering maps are given in Section~\ref{sec:prelim_covers}; Theorem~\ref{3mancovers} is proven in Section~\ref{sec:AC_thm_proof}; and Section~\ref{sec:add_comm} contains additional commensurabilities. \subsection{Euler characteristic vector} \label{sec:Euler} Our results about abstract commensurability classes involve the Euler characteristic vector.
The abstract commensurability classification is open for the non-hyperbolic groups in $\mathcal{C}$ and $\mathcal{W}$; in the $\delta$--hyperbolic setting, the abstract commensurability classes can be given in terms of the Euler characteristic vector; see \cite{danistarkthomas}. \begin{defn}[Euler characteristic vector] Let $\Theta = \Theta(n_1, \ldots, n_k)$ be a generalized $\Theta$--graph, and let $W_{\Theta}$ be the associated right-angled Coxeter group. As described in Construction~\ref{racgmodel}, the group $W_{\Theta}$ is the orbifold fundamental group of a right-angled reflection orbi-complex $\mathcal{O}_{\Theta}$. The space $\mathcal{O}_{\Theta}$ is the union of right-angled reflection orbifolds $P_i$ for $1 \leq i \leq k$, where $P_i$ has $n_i+2$ reflection edges and one non-reflection edge, and the union identifies the non-reflection edges of $P_i$ for $1 \leq i \leq k$. The orbifold Euler characteristic of $P_i$ is given by the formula \[ \chi_i = \chi(P_i) = 1 - \left(1+\frac{n_i+2}{2}\right) + \left(\frac{2}{2} + \frac{n_i+1}{4}\right) = \frac{1-n_i}{4}. \] If $\ell$ is the linear degree of $\Theta$ and $h$ is the hyperbolic degree of $\Theta$, then $\chi_i = 0$ for $1 \leq i \leq \ell$ and $\chi_i <0$ for $\ell+1 \leq i \leq \ell+h$. The {\it Euler characteristic vector} of $W_{\Theta}$ is \[ \chi_{\Theta} = (\chi_1, \ldots, \chi_k) = (\underbrace{0, \ldots, 0}_{\text{$\ell$}}, \chi_{\ell+1}, \ldots, \chi_{\ell+h} ). \] \label{eulercharvector} \end{defn} \subsection{Covering maps and finite-index subgroups} \label{sec:prelim_covers} In this section, we describe some tools used to construct finite-index subgroups. First, we use the following lemma from \cite{danistarkthomas}, which proves that an orbi-complex whose fundamental group is a right-angled Coxeter group with generalized $\Theta$--graph nerve has a finite cover consisting of surfaces with two boundary curves, identified to each other along these curves.
An example of the covering space appears in the lower left of Figure~\ref{2foldcovers}. \begin{lem}[\cite{danistarkthomas}, Section~4] \label{degree16} Let $W$ be a right-angled Coxeter group with generalized $\Theta$--graph nerve and with Euler characteristic vector $(\chi_1,\ldots, \chi_k)$. Let $\mathcal{O}$ be the orbi-complex given in Construction~\ref{racgmodel} with orbifold fundamental group $W$. Then there is a degree--$16$ cover $Z \rightarrow \mathcal{O}$, so that $Z$ consists of $k$ surfaces $S_1, \ldots, S_k$, where $S_i$ has two boundary curves, $C_{i1}$ and $C_{i2}$, and $\chi(S_i) = 16\chi_i$ for $1 \leq i \leq k$. The curves $\set{C_{i1}}{1 \leq i \leq k}$ are identified to form a single curve $C_1$ and the curves $\set{C_{i2}}{1 \leq i \leq k}$ are identified to form a single curve $C_2$. \end{lem} The following lemma states that a $d$--fold covering of the boundary of a surface can be extended to a $d$--fold covering of the entire surface provided that a natural parity condition holds. \begin{lem}[\cite{neumann}, Lemma~3.2] \label{neumann} If $S$ is an orientable surface of positive genus, a degree $d \geq 1$ is specified, and for each boundary component of $S$ a collection of degrees summing to $d$ is also specified, then a connected $d$--fold covering $S'$ of $S$ exists with prescribed degrees on the boundary components of $S'$ over each boundary component of $S$ if and only if the prescribed number of boundary components of the cover has the same parity as $d\chi(S)$. \end{lem} We make repeated use of the following simple procedure to paste together covering maps. \begin{lem} \label{coverpaste} Let $A$ and $B$ be topological spaces with $A \cup B = X$ and $A \cap B = C \neq \emptyset$. Let $p_1\colon\widehat{A} \rightarrow A$ and $p_2\colon\widehat{B} \rightarrow B$ be covering maps, and let $C_1 = p_1^{-1}(C) \subset \widehat{A}$ and $C_2 = p_2^{-1}(C)\subset \widehat{B}$.
Suppose $f\colon C_1 \rightarrow C_2$ is a covering isomorphism, so $p_1 = p_2 \circ f$ on $C_1$. Then, $\widehat{X} = \widehat{A} \cup_f \widehat{B}$ (with the union notation defined in Definition~\ref{union}) is a cover of $X$. \end{lem} \begin{defn} \label{bounding} A \emph{bounding pair} in a surface is a pair of disjoint, homologous, nonseparating simple closed curves. \end{defn} An image of bounding pairs and an illustration of the following lemma appear in Figure~\ref{bounding_pair_cover}. \begin{lem} \label{boundingpair} If $\gamma\colon S^1 \rightarrow S_g$ is an essential simple closed curve, then there exists a double cover $\widehat{S}_g \rightarrow S_g$ where the pre-image of $\gamma$ is a bounding pair. \end{lem} \begin{proof} Let $\gamma\colon S^1 \rightarrow S_g$ be an essential simple closed curve. Suppose first that $\gamma$ is non-separating. Then, the surface $S = S_g \backslash \gamma$ is connected and has boundary $S^1 \sqcup S^1$. The surface $\widehat{S}_g = S \cup_{S^1 \sqcup S^1} S$ obtained by identifying two copies of $S$ together along their boundary forms a degree--$2$ cover of $S_g$ with the covering map given by rotation about the handle formed by the gluing. The pre-image of $\gamma$ in $\widehat{S}_g$ is a bounding pair that separates $\widehat{S}_g$ into two subsurfaces each of Euler characteristic $\chi(S_g)$. Suppose now that $\gamma$ separates $S_g$ into two subsurfaces $A$ and $B$. By Lemma~\ref{neumann}, there is a degree--$2$ cover $\widehat{A} \rightarrow A$ so that $\widehat{A}$ has two boundary components, which each cover $\gamma$ by degree one. Similarly, by Lemma~\ref{neumann}, there is a degree--$2$ cover $\widehat{B} \rightarrow B$ so that $\widehat{B}$ has two boundary components, which each cover $\gamma$ by degree one. Let $\widehat{S}_g = \widehat{A} \cup_{S^1 \sqcup S^1} \widehat{B}$. By Lemma~\ref{coverpaste}, $\widehat{S}_g$ forms a degree--$2$ cover of $S_g$.
The pre-image of $\gamma$ in $\widehat{S}_g$ is a bounding pair that separates $\widehat{S}_g$ into two surfaces of Euler characteristic $2\chi(A)$ and $2\chi(B)$. \end{proof} \subsection{Surface group amalgams are virtual $3$--manifold groups and are abstractly commensurable to right-angled Coxeter groups} \label{sec:AC_thm_proof} Each surface amalgam in $\mathcal{C}$ is the fundamental group of a complex $X$ defined in Construction~\ref{surfmodel}, and each right-angled Coxeter group in $\mathcal{W}$ is the orbifold fundamental group of an orbi-complex~$\mathcal{O}$ defined in Construction~\ref{racgmodel}. To prove that each surface amalgam in $\mathcal{C}$ is abstractly commensurable to a right-angled Coxeter group in $\mathcal{W}$, for each complex $X$ we define an orbi-complex $\mathcal{O}$, and we construct covers of $X$ and $\mathcal{O}$ which are homotopy equivalent. Along the way, in Theorem~\ref{thm:surf_amal_3mancover}, we construct a cover of $X$ that is a deformation retract of a $3$--manifold with boundary; the $3$--manifold can be built using arguments similar to those in \cite[Section~8]{kapovichkleiner}. The remaining covering maps and proof of commensurability are given in Theorem~\ref{3mancovers}. An alternative proof of Theorem~\ref{thm:surf_amal_3mancover}, using Coxeter groups, is given in Corollary~\ref{cor:surf_3man}. \begin{thm} \label{thm:surf_amal_3mancover} If $G \in \mathcal{C}_{m,n}$, then $G$ has a finite-index subgroup that acts freely on a $3$--manifold. \end{thm} \begin{proof} Let $G \cong \pi_1(S_g) *_{\left\langle a^m = b^n \right\rangle} \pi_1(S_h) \in \mathcal{C}_{m,n}$. Let $X$ be the space given in Construction~\ref{surfmodel} with fundamental group $G$. We construct two finite covers $X_2 \xrightarrow{mn} X_1 \xrightarrow{2} X$ so that the space $X_2$ is a deformation retract of a $3$--manifold with boundary.
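As a sanity check on the degrees in this tower (the values $m = 2$, $n = 3$ below are illustrative only and are not fixed by the proof): covering degrees multiply under composition, and a degree--$d$ cover corresponds to an index--$d$ subgroup of the fundamental group, so

```latex
\[
  [\,G : \pi_1(X_2)\,] = \deg(X_2 \to X_1)\cdot \deg(X_1 \to X)
  = mn \cdot 2,
  \qquad\text{e.g. } m=2,\ n=3 \implies [\,G : \pi_1(X_2)\,] = 12.
\]
```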
Recall that the space $X$ consists of the surfaces $S_g$ and $S_h$, and suppose $a_0\colon S^1 \rightarrow S_g$ and $b_0\colon S^1 \rightarrow S_h$ are essential simple closed curves in the homotopy classes of $a \in \pi_1(S_g)$ and $b \in \pi_1(S_h)$, respectively. The space $X$ also contains the complex $A_{m,n} = S^1 \times I / \sim$ defined in Notation~\ref{hatcher}; the quotient of $S^1 \times \{0\}$ under the identification is glued to the curve $a_0$, and the quotient of $S^1 \times \{1\}$ under the identification is glued to the curve $b_0$. We first describe the cover $X_1 \xrightarrow{2} X$; an illustration of the covering map appears in Figure~\ref{bounding_pair_cover}. The space $X_1$ contains a double cover of $S_g$, a double cover of $S_h$, and a double cover of $A_{m,n}$. By Lemma~\ref{boundingpair}, there is a double cover $\widehat{S}_g \rightarrow S_g$ so that the pre-image of $a_0$ in $\widehat{S}_g$ is a bounding pair (see Definition~\ref{bounding}) that separates $\widehat{S}_g$ into subsurfaces $S_1$ and $S_2$ of Euler characteristics $v_1$ and $v_2$. Similarly, by Lemma~\ref{boundingpair}, there is a double cover $\widehat{S}_h \rightarrow S_h$ so that the pre-image of $b_0$ in $\widehat{S}_h$ is a bounding pair that separates $\widehat{S}_h$ into subsurfaces $S_3$ and $S_4$ of Euler characteristics $v_3$ and $v_4$. Let $\widehat{A}_{m,n}$ be a degree--$2$ cover of $A_{m,n}$ that consists of two disjoint copies of $A_{m,n}$. Then the pre-image of $a_0$ in $\widehat{S}_g$ and the pre-image of $a_0$ in $\widehat{A}_{m,n}$ are each homeomorphic to $S^1 \sqcup S^1$, with covering maps given by homeomorphisms. Likewise, the pre-image of $b_0$ in $\widehat{S}_h$ and the pre-image of $b_0$ in $\widehat{A}_{m,n}$ are each homeomorphic to $S^1 \sqcup S^1$, with covering maps given by homeomorphisms.
Thus, by Lemma~\ref{coverpaste}, the pre-image of $a_0$ in $\widehat{S}_g$ and in $\widehat{A}_{m,n}$ can be identified by a homeomorphism and the pre-image of $b_0$ in $\widehat{S}_h$ and in $\widehat{A}_{m,n}$ can be identified by a homeomorphism to form the space $X_1$, which forms a degree--$2$ cover of $X$. \begin{figure} \begin{overpic}[scale=.5, tics=5]{figure-covers-X_2X_1X.pdf} \put(1,5){$X$} \put(1,30){$X_1$} \put(65,95){$X_2$} \put(41,14){\Small{$2$}} \put(41,42){\Small{$6$}} \put(27,11){\Small{$2$}} \put(50,11){\Small{$3$}} \put(27,18){\Small{$2$}} \put(50,18){\Small{$3$}} \put(27,37){\Small{$2$}} \put(50,37){\Small{$3$}} \put(24.5,6.5){\small{$a_0$}} \put(56,-1){\small{$b_0$}} \put(28,96){\small{$c_1$}} \put(28,86){\small{$c_2$}} \put(28,76){\small{$c_3$}} \put(27.5,69){\small{$d_1$}} \put(27.5,59){\small{$d_2$}} \put(27.5,49){\small{$d_3$}} \put(50,89){\small{$e_1$}} \put(50,80){\small{$e_2$}} \put(50,63){\small{$f_1$}} \put(50,54){\small{$f_2$}} \put(13.5,92){\small{$c_1$}} \put(13.5,74.5){\small{$c_2$}} \put(13.5,56){\small{$c_3$}} \put(13.5,88){\small{$d_1$}} \put(13.5,70.5){\small{$d_2$}} \put(13.5,52){\small{$d_3$}} \put(63,83){\small{$e_1$}} \put(63,64){\small{$e_2$}} \put(63,78){\small{$f_1$}} \put(63,59.5){\small{$f_2$}} \end{overpic} \caption{{\small A degree--$2$ cover $X_1 \rightarrow X$ and a degree--$6$ cover $X_2 \rightarrow X_1$. In $X$ and $X_1$, curves of the same color are glued by a degree--$2$ or degree--$3$ map of the circle as indicated. The curves $a_0$ and $b_0$ in $X$ each have pre-image in $X_1$ a {\it bounding pair}. The space $X_2$ consists of five surfaces and two copies of $K_{2,3} \times S^1$, and the colored curves are identified by a homeomorphism as indicated by the labeling. }} \label{bounding_pair_cover} \end{figure} We next describe the cover $X_2 \xrightarrow{mn} X_1$, illustrated in Figure~\ref{bounding_pair_cover}.
The space $X_2$ contains a degree--$mn$ cover of each of the surfaces $S_i$ defined in the last paragraph for $1 \leq i \leq 4$; we describe these covers first. For $i=1,2$ color one boundary component of $S_i$ red and color the other boundary component of $S_i$ blue. By Lemma~\ref{neumann}, for $i=1,2$, there is a degree--$m$ cover $\widetilde{S}_i \rightarrow S_i$ so that $\chi(\widetilde{S}_i) = mv_i$; the surface $\widetilde{S}_i$ contains two boundary curves, one colored red and one colored blue; and, the red boundary curve of $\widetilde{S}_i$ covers the red boundary curve of $S_i$ by degree $m$, and the blue boundary curve of $\widetilde{S}_i$ covers the blue boundary curve of $S_i$ by degree $m$. Similarly, for $i=3,4$, color one boundary curve of $S_i$ green and color the other boundary component of $S_i$ black. By Lemma~\ref{neumann}, for $i=3,4$, there is a degree--$n$ cover $\widetilde{S}_i \rightarrow S_i$ so that $\chi(\widetilde{S}_i) = nv_i$; the surface $\widetilde{S}_i$ contains two boundary curves, one colored green and one colored black; and, the green boundary curve of $\widetilde{S}_i$ covers the green boundary curve of $S_i$ by degree $n$, and the black boundary curve of $\widetilde{S}_i$ covers the black boundary curve of $S_i$ by degree $n$. For $i=1,2$, let $T_i$ be $n$ disjoint copies of $\widetilde{S}_i$, and for $i=3,4$, let $T_i$ be $m$ disjoint copies of $\widetilde{S}_i$. Then, for $i \in \{1,2,3,4\}$, the (typically disconnected) surface $T_i$ forms a degree--$mn$ cover of $S_i$. The space $X_2$ also contains a degree--$mn$ cover of each of the two copies of $A_{m,n}$ in $X_1$, given as follows. Let $p\colon K_{m,n} \times S^1 \rightarrow A_{m,n}$ be the degree--$mn$ cover from Lemma~\ref{kmncover}. 
If $v_m \in K_{m,n}$ and $v_n \in K_{m,n}$ denote vertices of valence $m$ and $n$, respectively, then $p$ restricts to a degree--$m$ cover on $v_m \times S^1$ over the quotient of $S^1 \times \{0\} \subset A_{m,n}$ and restricts to a degree--$n$ cover on $v_n \times S^1$ over the quotient of $S^1 \times \{1\} \subset A_{m,n}$. Let $K_1$ and $K_2$ denote two copies of $K_{m,n} \times S^1$. The space $X_2$ is obtained by identifying the spaces $\{K_1, K_2, T_1, \ldots, T_4\}$ in the following way. The space $K_1$ contains $n$ circles of the form $v_m \times S^1$, where $v_m \in K_{m,n}$ is a vertex of valence $m$. This subspace of $K_1$ is homeomorphic to $\bigsqcup_{i=1}^n S^1$. For $i=1,2$, the set of $n$ red circles in $T_i$ is also homeomorphic to $\bigsqcup_{i=1}^n S^1$. Identify these three subspaces homeomorphic to $\bigsqcup_{i=1}^n S^1$ by a homeomorphism. Similarly, the space $K_1$ contains $m$ circles of the form $v_n \times S^1$, where $v_n \in K_{m,n}$ is a vertex of valence $n$. This subspace of $K_1$ is homeomorphic to $\bigsqcup_{i=1}^m S^1$. For $i=3,4$, the set of green curves in $T_i$ is also homeomorphic to $\bigsqcup_{i=1}^m S^1$. Identify these three subspaces homeomorphic to $\bigsqcup_{i=1}^m S^1$ by a homeomorphism. Analogously, the space $K_2$ contains $n$ circles of the form $v_m \times S^1$, where $v_m \in K_{m,n}$ is a vertex of valence $m$. This subspace of $K_2$ is homeomorphic to $\bigsqcup_{i=1}^n S^1$. For $i=1,2$, the set of $n$ blue circles in $T_i$ is also homeomorphic to $\bigsqcup_{i=1}^n S^1$. Identify these three subspaces homeomorphic to $\bigsqcup_{i=1}^n S^1$ by a homeomorphism. Similarly, the space $K_2$ contains $m$ circles of the form $v_n \times S^1$, where $v_n \in K_{m,n}$ is a vertex of valence $n$. This subspace of $K_2$ is homeomorphic to $\bigsqcup_{i=1}^m S^1$. For $i=3,4$, the set of black curves in $T_i$ is also homeomorphic to $\bigsqcup_{i=1}^m S^1$.
Identify these three subspaces homeomorphic to $\bigsqcup_{i=1}^m S^1$ by a homeomorphism to form $X_2$. By Lemma~\ref{coverpaste}, $X_2$ forms a degree--$mn$ cover of~$X_1$. The space $X_2$ has a structure similar to the graph of spaces described by Kapovich--Kleiner in \cite[Section~8]{kapovichkleiner}. Roughly speaking, they build a $3$--manifold with boundary that deformation retracts to $X_2$ by replacing each branching curve with a solid torus, taking the product of each surface with boundary with an interval, and gluing the boundary annuli of the thickened surfaces to annuli on the solid tori. Thus, the space $X_2$ is a deformation retract of a $3$--manifold with boundary. Therefore, $G$ has a finite-index subgroup which acts freely on a $3$--manifold. \end{proof} \begin{thm} \label{3mancovers} If $G \in \mathcal{C}_{m,n}$, then $G$ is abstractly commensurable to a right-angled Coxeter group with a generalized $\Theta$--graph nerve. More specifically, suppose $G \cong \pi_1(S_g) *_{\left\langle a^m = b^n \right\rangle} \pi_1(S_h) \in \mathcal{C}_{m,n}$. Then, $G$ is abstractly commensurable to the right-angled Coxeter group $W$ whose nerve is the generalized $\Theta$--graph with Euler characteristic vector \[ w = (\!\!\!\!\!\!\!\!\!\underbrace{0, \ldots, 0}_\text{$ 2(mn-m-n+1)$}\!\!\!\!\!\!\!\!\! , \underbrace{mv_1, \ldots, mv_1}_\text{\ \ \ \ $n$ times}, \underbrace{mv_2, \ldots, mv_2}_\text{$n$ times}, \underbrace{nv_3, \ldots, nv_3}_\text{$m$ times}, \underbrace{nv_4, \ldots, nv_4}_\text{$m$ times} ), \] where $a_0$ is an essential simple closed curve in the homotopy class of $a \in \pi_1(S_g)$: if $a_0$ is non-separating, define $v_1 = v_2 = \chi(S_g)$, and if $a_0$ separates $S_g$ into two subsurfaces $A$ and $B$ with $\chi(A) \leq \chi(B)$, define $v_1 = 2\chi(A)$ and $v_2 = 2\chi(B)$. Define $v_3$ and $v_4$ analogously.
\end{thm} \begin{proof} Let $G \cong \pi_1(S_g) *_{\left\langle a^m = b^n \right\rangle} \pi_1(S_h) \in \mathcal{C}_{m,n}$ and let $W$ be the right-angled Coxeter group given in the statement of the theorem. Let $N = mn-m-n+1$. Let $\mathcal{O}$ be the orbi-complex given by Construction~\ref{racgmodel} with orbifold fundamental group $W$, and let $X$ be the space given by Construction~\ref{surfmodel} with fundamental group $G$. To prove $G$ and $W$ are abstractly commensurable, we construct two finite towers of maps $Z_2 \xrightarrow{2} Z_1 \xrightarrow{16} \mathcal{O}$ and $X_5 \xrightarrow{16} X_4 \xrightarrow{2} X_3 \xrightarrow{\simeq} X_2 \xrightarrow{mn} X_1 \xrightarrow{2} X$. All spaces in these towers are connected $2$--complexes, the map $X_3 \rightarrow X_2$ is a homotopy equivalence, and each remaining map is a covering map with the degree specified above the arrow. The $2$--complexes $Z_2$ and $X_5$ at the top of the towers are homeomorphic. Since each map in the two towers is $\pi_1$--injective with finite-index image, we get an isomorphism $\pi_1(Z_2) \cong \pi_1(X_5)$ between finite-index subgroups of $G$ and $W$, as desired. We first describe the covering maps $Z_2 \xrightarrow{2} Z_1 \xrightarrow{16} \mathcal{O}$. Let $Z_1 \xrightarrow{16} \mathcal{O}$ be the cover given by Lemma~\ref{degree16}. An example of $Z_1$ appears on the lower left side of Figure~\ref{2foldcovers}. By construction, $Z_1$ has two singular curves $C_1$ and $C_2$. There is a set $\mathcal{A}$ of $2N$ annuli in $Z_1$, and each annulus in $\mathcal{A}$ has one boundary curve glued to $C_1$ and one boundary curve glued to $C_2$. In addition, $Z_1$ has a set $\mathcal{B}$ of $2(m+n)$ surfaces with negative Euler characteristic and two boundary components. Each surface in $\mathcal{B}$ has one boundary curve glued to $C_1$ and one boundary curve glued to $C_2$.
In particular, the set of surfaces $\mathcal{B}$ consists of $n$ surfaces with Euler characteristic $16mv_1$, $n$ surfaces with Euler characteristic $16mv_2$, $m$ surfaces with Euler characteristic $16nv_3$, and $m$ surfaces with Euler characteristic $16nv_4$. Let $Z_2$ be the following space; an example of $Z_2$ appears at the top of Figure~\ref{2foldcovers}. The space $Z_2$ contains four singular curves $D_1, D_2, D_3, D_4$, two copies of the set $\mathcal{A}$ denoted $A$ and $A'$, and two copies of the set $\mathcal{B}$ denoted $B$ and $B'$. Attach to the curves $\{D_1, D_2\}$ each annulus in $A$ so that each annulus has one boundary curve glued to $D_1$ and the other boundary curve glued to $D_2$. Attach to the curves $\{D_3, D_4\}$ each annulus in $A'$ so that each annulus has one boundary curve glued to $D_3$ and the other boundary curve glued to $D_4$. Similarly, attach to the curves $\{D_1, D_4\}$ each surface in the set $B$ so that each surface has one boundary curve glued to $D_1$ and one boundary curve glued to $D_4$. Finally, attach to the curves $\{D_2, D_3\}$ each surface in the set $B'$ so that each surface has one boundary curve glued to $D_2$ and one boundary curve glued to $D_3$. Then, $Z_2$ forms a degree--$2$ cover of $Z_1$ with the covering map given by rotation as pictured in Figure~\ref{2foldcovers}. \begin{figure} \begin{overpic}[scale=.6, tics=5]{figure-2foldcovers_nolabel.pdf} \put(43,59.4){\Small{$2N$}} \put(43,43.6){\Small{$2N$}} \put(36,2){\Small{$2N$}} \put(97,17){\Small{$N$}} \put(97,0){\Small{$N$}} \put(102,25){\small{$X_3$}} \put(87,63){\small{$X_4$}} \put(-2,25){\small{$Z_1$}} \put(5,63){\small{$Z_2$}} \end{overpic} \caption{{\small Illustrated above are two degree--$2$ covers given by rotation about an axis positioned through the space above. On the top and on the bottom-left, there are collections of $2N$ annuli connecting singular curves; on the bottom-right there are two collections of $N$ tori identified to the singular curves.
}} \label{2foldcovers} \end{figure} We now describe the maps $X_5 \xrightarrow{16} X_4 \xrightarrow{2} X_3 \xrightarrow{\simeq} X_2 \xrightarrow{mn} X_1 \xrightarrow{2} X$. The space $X$ and the covers $X_2 \xrightarrow{mn} X_1 \xrightarrow{2} X$ are described above in the proof of Theorem~\ref{thm:surf_amal_3mancover}. The homotopy equivalence $X_3 \rightarrow X_2$ is given as follows. The construction is based on the well-known fact that the quotient map of a CW-complex collapsing a contractible subcomplex to a point is a homotopy equivalence \cite[Proposition~0.17]{hatcher}. In particular, the quotient $q\colon K_{m,n} \to K_{m,n} / T_0$ is a homotopy equivalence, where $T_0$ is a maximal subtree of $K_{m,n}$. The quotient is homeomorphic to the rose $R_N$ on $N$ petals, where $N = mn-m-n+1$. It follows that the quotient map $\bar{q}\colon K_{m,n} \times S^1 \rightarrow R_N \times S^1$ collapsing each subspace $T_0 \times \{x\}$ to a point is a homotopy equivalence. Let $X_3$ be the quotient space $X_2 / {\sim}$ in which the identification described above is performed separately on each of the two disjoint copies of $K_{m,n} \times S^1$ in $X_2$. The quotient map $X_2 \rightarrow X_3$ is again a homotopy equivalence by the homotopy extension property. Indeed, the homotopy extension argument is nearly identical to the proof of \cite[Proposition~0.17]{hatcher}. The desired map $X_3 \rightarrow X_2$ is a homotopy inverse of $X_2 \rightarrow X_3$. Let us take a moment to describe the structure of the quotient space $X_3$ produced above and illustrated in the lower right portion of Figure~\ref{2foldcovers}. The space $X_3$ contains two singular curves $C$ and $C'$: all red and green curves in $X_2$ are identified, and all blue and black curves in $X_2$ are identified. In $X_3$ there are $N$ tori identified to $C$ along a simple closed curve on each torus; similarly, there are $N$ tori identified to $C'$ along a simple closed curve on each torus.
In addition, there is a set $\mathcal{T}$ of $2(m+n)$ surfaces with negative Euler characteristic and two boundary components identified to $C$ and $C'$. In the set $\mathcal{T}$, there are $n$ surfaces with Euler characteristic $mv_1$, $n$ surfaces with Euler characteristic $mv_2$, $m$ surfaces with Euler characteristic $nv_3$, and $m$ surfaces with Euler characteristic $nv_4$. There is a degree--$2$ cover $X_4 \rightarrow X_3$ given as follows; an example of the space $X_4$ is pictured on the top of Figure~\ref{2foldcovers}, with the covering map given on the right-hand side. The space $X_4$ contains four singular curves $E_1, E_2, E_3, E_4$, sets $\mathcal{N}$ and $\mathcal{N}'$, which are each homeomorphic to $2N$ annuli, and sets $T$ and $T'$, which are each homeomorphic to $\mathcal{T}$ described in the previous paragraph. The gluings are given as follows. Attach to the curves $\{E_1, E_2\}$ each annulus in $\mathcal{N}$ so that each annulus has one boundary component glued to the curve $E_1$ and one boundary component glued to the curve $E_2$. Attach to the curves $\{E_3, E_4\}$ each annulus in $\mathcal{N}'$ so that each annulus has one boundary component glued to $E_3$ and one boundary component glued to $E_4$. Similarly, attach to the curves $\{E_1, E_4\}$ each surface in the set $T$ so that each surface has one boundary component glued to $E_1$ and one boundary component glued to $E_4$. Finally, attach to the curves $\{E_2, E_3\}$ each surface in the set $T'$ so that each surface has one boundary component glued to $E_2$ and one boundary component glued to $E_3$. Then $X_4$ forms a degree--$2$ cover of $X_3$ with the covering map given by rotation about an axis that passes vertically through the two sets of annuli shown in Figure~\ref{2foldcovers} and has $N$ annuli from each set on each side.
Alternatively, this covering map may be seen by cutting each torus in $X_3$ along a meridian curve parallel to the singular curve, taking two copies of the resulting space, and re-gluing the boundary components in pairs. The space $X_4$ has the same underlying structure as the space $Z_2$, but the Euler characteristics of the surfaces in $X_4$ differ from those of the corresponding surfaces in $Z_2$ by a factor of $16$; so, we construct one final cover $X_5 \rightarrow X_4$ of degree $16$ so that $X_5 \cong Z_2$. By Lemma~\ref{neumann}, there is a set of surfaces $\widetilde{\mathcal{T}}$ that consists of a degree--$16$ cover $\widetilde{S}$ of each $S \in \mathcal{T}$ so that $\widetilde{S}$ has two boundary components and $\chi(\widetilde{S}) = 16mv_i$ for $i=1,2$ or $\chi(\widetilde{S}) = 16nv_j$ for $j=3,4$. Let $X_5$ be the space with the same underlying structure as $X_4$ and formed as follows. The space $X_5$ contains four singular curves $E_1', E_2', E_3', E_4'$, the sets of annuli $\mathcal{N}$ and $\mathcal{N}'$, and spaces $\widetilde{T}$ and $\widetilde{T}'$, each homeomorphic to $\widetilde{\mathcal{T}}$. The gluings are given as follows. Attach to the curves $\{E_1', E_2'\}$ each annulus in $\mathcal{N}$ so that each annulus has one boundary component glued to the curve $E_1'$ and one boundary component glued to the curve $E_2'$. Attach to the curves $\{E_3', E_4'\}$ each annulus in $\mathcal{N}'$ so that each annulus has one boundary component glued to $E_3'$ and one boundary component glued to $E_4'$. Similarly, attach to the curves $\{E_1', E_4'\}$ each surface in the set $\widetilde{T}$ so that each surface has one boundary component glued to $E_1'$ and one boundary component glued to $E_4'$. Finally, attach to the curves $\{E_2', E_3'\}$ each surface in the set $\widetilde{T}'$ so that each surface has one boundary component glued to $E_2'$ and one boundary component glued to $E_3'$.
Then, each surface with boundary in the subspaces $\mathcal{N}$, $\mathcal{N}'$, $\widetilde{T}$, and $\widetilde{T}'$ of $X_5$ covers a corresponding surface in $X_4$ so that the degree restricted to the boundary components is equal to 16. Thus, by Lemma~\ref{coverpaste}, the space $X_5$ forms a degree--$16$ cover of $X_4$. By construction, $X_5$ and $Z_2$ are homeomorphic. Therefore, $G$ and $W$ are abstractly commensurable. \end{proof} \subsection{Additional commensurabilities} \label{sec:add_comm} Two vectors $v,w \in \field{Z}^n$ are {\it commensurable} if there exist non-zero integers $K$ and $L$ so that $Kv=Lw$. The following lemma, which generalizes a technique of Crisp--Paoluzzi \cite{crisppaoluzzi} and is illustrated in the left of Figure~\ref{orbicovers}, implies that if two right-angled Coxeter groups in $\mathcal{W}$ have commensurable Euler characteristic vectors, then they are abstractly commensurable. \begin{figure} \centering \includegraphics[scale=.5]{figure-orbicovers.pdf} \caption{{\small Illustrated above are two degree--$3$ covers of an orbi-complex with fundamental group a right-angled Coxeter group. The covers are given locally by reflection along the dashed lines. The orbifold fundamental group of each covering space is a right-angled Coxeter group with generalized $\Theta$--graph nerve.}} \label{orbicovers} \end{figure} \begin{lem} \label{cover} Let $W_{\Theta}$ be the right-angled Coxeter group whose nerve is the generalized $\Theta$--graph with Euler characteristic vector \[ w = (\underbrace{0,\ldots, 0}_\text{$\ell$}, \chi_{\ell+1}, \ldots, \chi_{\ell+h} ). \] Then $W_{\Theta}$ is abstractly commensurable with the right-angled Coxeter group $W$ whose nerve is the generalized $\Theta$--graph with Euler characteristic vector $Kw$ for each integer $K>0$. \end{lem} \begin{proof} Let $\mathcal{O}_{\Theta}$ be the orbi-complex described in Construction~\ref{racgmodel} with orbifold fundamental group $W_{\Theta}$. 
We show that $\mathcal{O}_{\Theta}$ has a $K$--fold cover $\mathcal{O}$ with orbifold fundamental group $W$. To construct the cover, take $K$ copies of the orbi-complex $\mathcal{O}_{\Theta}$. Glue branching reflection edges incident to a vertex in one copy to branching reflection edges incident to a vertex in another copy so that orbifolds with the same number of sides are glued together and so that there is one branching edge in the cover $\mathcal{O}$. The orbifold fundamental group $W$ of $\mathcal{O}$ is a right-angled Coxeter group with generalized $\Theta$--graph nerve and linear degree $\ell$. Since Euler characteristic is multiplicative under covering maps, $W$ has Euler characteristic vector $Kw$ as desired. \end{proof} \begin{cor} If groups $W, W' \in \mathcal{W}$ have commensurable Euler characteristic vectors, then $W$ and $W'$ are abstractly commensurable. \end{cor} The following lemma, illustrated in the right of Figure~\ref{orbicovers}, states that within each commensurability class in the non-hyperbolic setting, for a given group $W \in \mathcal{W}$, there are right-angled Coxeter groups in $\mathcal{W}$ commensurable to $W$ and with arbitrarily large hyperbolic degree. \begin{lem} \label{hypdegree} Let $\Theta$ be the generalized $\Theta$--graph with Euler characteristic vector \[ (\underbrace{0,\ldots, 0}_\text{$\ell$}, \chi_{\ell+1}, \ldots, \chi_{\ell+h} ). \] Then for all $m \geq 1$, the group $W_{\Theta}$ is abstractly commensurable to the right-angled Coxeter group $W$ whose nerve is the generalized $\Theta$--graph with Euler characteristic vector \[ (\underbrace{0,\ldots, 0,}_\text{$m(\ell-2)+2$} \underbrace{\underbrace{\chi_{\ell+1}, \ldots, \chi_{\ell+h}}, \underbrace{ \chi_{\ell+1}, \ldots, \chi_{\ell+h}}, \ldots, \underbrace{\chi_{\ell+1}, \ldots, \chi_{\ell+h}}}_\text{$m$ times}).
\] \end{lem} \begin{proof} Let $\mathcal{O}_{\Theta}$ be the orbi-complex described in Construction~\ref{racgmodel} with orbifold fundamental group $W_{\Theta}$. We show that $\mathcal{O}_{\Theta}$ has an $m$--fold cover $\mathcal{O}$ with orbifold fundamental group $W$. To construct the cover, take $m$ copies of the orbi-complex $\mathcal{O}_{\Theta}$ and glue exterior reflection edges of the rectangles to each other in pairs to form an orbi-complex $\mathcal{O}$ with $m$ branching edges and so that the branching edges lie along one rectangular orbifold as pictured in Figure~\ref{orbicovers}. The orbifold covering map is locally trivial away from the glued reflection edges, where the covering map is locally a reflection. The orbi-complex $\mathcal{O}$ has $m$ copies of each hyperbolic reflection orbifold in $\mathcal{O}_{\Theta}$ and $2(\ell-1)+(m-2)(\ell-2) = m(\ell-2)+2$ rectangular orbifolds with one boundary edge attached to the branching lines along their boundary edges. The orbi-complex $\mathcal{O}$ is homotopy equivalent to an orbi-complex with one branching line: collapse the rectangular orbifolds with two boundary lines that are attached by their boundary lines to distinct branching lines. Then, the orbifold fundamental group of this orbi-complex is the right-angled Coxeter group whose nerve is the generalized $\Theta$--graph with the desired Euler characteristic vector. \end{proof} \section{$3$--manifold groups} \label{sec:3man} In Section~\ref{sec:coarse_separation} we state the results of Kapovich--Kleiner \cite{kapovichkleiner05} that we will need; in Section~\ref{subsec:3mantop} we review some background on $3$--manifolds; and in Section~\ref{3manproof} we characterize which groups in $\mathcal{C}$ are fundamental groups of $3$--manifolds, and we show each group in $\mathcal{W}$ acts properly on a contractible $3$--manifold.
\subsection{Coarse separation}\label{sec:coarse_separation} \begin{defn} A {\it metric simplicial complex} $X$ is the geometric realization of a connected simplicial complex, metrized so that each edge has length one. The complex $X$ is said to have {\it bounded geometry} if all links have a uniformly bounded number of simplices. A metric simplicial complex is {\it uniformly acyclic} if for every $R_1\in \field{R}$ there exists $R_2\in \field{R}$ such that for each subcomplex $K \subset X$ of diameter less than $R_1$ the inclusion $K \rightarrow N_{R_2}(K)$ induces the zero map on reduced homology groups. A component $C$ of $X \backslash N_R(K)$ is called {\it deep} if it is not contained within any finite neighborhood of $K$. A subcomplex~$K$ {\it coarsely separates} $X$ if there is an $R$ so that $X \backslash N_R(K)$ has at least two deep components. A map $f\colon X \rightarrow Y$ between metric spaces is {\it coarse Lipschitz} if there exist $L\geq 1$ and $A \geq 0$ so that $d\bigl( f(x), f(x') \bigr) \leq L\,d(x,x') +A$ for all $x,x' \in X$. A coarse Lipschitz map is {\it uniformly proper} if there is a proper function $\phi\colon\field{R}^+ \rightarrow \field{R}^+$ so that $d\bigl(f(x), f(x')\bigr) \geq \phi\bigl(d(x,x')\bigr)$ for all $x,x' \in X$. (In particular, a quasi-isometric embedding is uniformly proper.) \end{defn} We refer the reader to \cite{kapovichkleiner05} for the definition of a coarse $PD(n)$ space. We will not use the definition explicitly; rather, we will use a characterization of certain coarse $PD(n)$ spaces. \begin{lem}[\cite{kapovichkleiner05}, Lemma~6.2] \label{lem:char_pd3} Let $M$ be an aspherical closed $n$--manifold equipped with a finite triangulation. Then its universal cover $\tilde{M}$ is a coarse $PD(n)$ space on which $\pi_1(M)$ acts properly, cocompactly, and simplicially.
\end{lem} \begin{lem}[\cite{kapovichkleiner05}, Lemma~7.11] \label{separation} Let $W$ be a bounded geometry metric simplicial complex which is homeomorphic to a union $W = \bigcup_{i\in I} W_i$ of $k$ half-spaces $W_i \cong \field{R}_{+}^2$ along their boundaries. Assume that for $i \neq j$, the union $W_i \cup W_j$ is uniformly acyclic and is uniformly properly embedded in $W$. Let $f\colon W \rightarrow X$ be a uniformly proper map of $W$ into a coarse $PD(3)$ space $X$. Then $f(W)$ coarsely separates $X$ into $k$ components. Moreover, there is a unique cyclic ordering on the index set $I$ so that for $R$ sufficiently large, the boundary of each deep component $C$ of $X \backslash N_R\bigl(f(W)\bigr)$ is at finite Hausdorff distance from $f(W_i) \cup f(W_j)$, where $i$ and $j$ are adjacent with respect to the cyclic ordering. \end{lem} \subsection{$3$--manifold topology} \label{subsec:3mantop} To apply the techniques of Kapovich--Kleiner to groups in $\mathcal{C}$, we will need the following proposition. \begin{prop} \label{prop:3-man_to_pd3} Suppose a finitely generated, one-ended group $G$ is the fundamental group of a $3$--manifold. Then $G$ acts properly and simplicially on a coarse $PD(3)$ space $X$. In particular, for all $x \in X$, the orbit map $G \rightarrow X$ given by $g \mapsto g \cdot x$ is uniformly proper. \end{prop} A similar result is used implicitly in \cite[Section~8]{kapovichkleiner}, but was not stated explicitly. For the benefit of the reader, we give a detailed proof using standard techniques from the topology of $3$--manifolds. Before proving the proposition, we briefly recall some useful background on $3$--manifolds. For more detail, we refer the reader to any of \cite{hempel,kapovich,hatcher_3man}. \begin{defn} A $3$--manifold is {\it aspherical} if it is connected and its universal cover is contractible. 
A $3$--manifold $M$ is {\it prime} if whenever $M$ can be written as a connected sum, $M \cong P \# Q$, then either $P \cong S^3$ or $Q \cong S^3$. A $3$--manifold $M$ is {\it irreducible} if every $2$--sphere $S^2 \subset M$ bounds a ball $B^3 \subset M$. If $M$ is a $3$--manifold with boundary, an aspherical boundary component $F$ of $M$ is {\it incompressible} if the natural map $\pi_1(F) \rightarrow \pi_1(M)$ is injective. \end{defn} \begin{proof}[Proof of Proposition~\ref{prop:3-man_to_pd3}] Suppose the finitely generated group $G$ is the fundamental group of a $3$--manifold $M$. By the Compact Core Theorem \cite{scott73b}, the group $G$ is the fundamental group of a compact manifold $N$ such that each boundary component $\pi_1$--injects into $N$. We may assume, without loss of generality, that $N$ has aspherical boundary, since any $S^2$ or $\field{R} P^2$ boundary component can be capped off by attaching a $3$--cell without changing the fundamental group. Thus $N$ has aspherical, incompressible boundary. Since $G=\pi_1(N)$ is one-ended, a standard argument shows that $N$ itself is aspherical as follows. Let $\hat{N}\to N$ be an orientable cover of degree at most two. Then the finite-index subgroup $\hat{G} = \pi_1(\hat{N})$ is also one-ended. By the Prime Decomposition Theorem, $\hat{N}$ is the connected sum $P_1 \# \ldots \# P_k$ of finitely many compact prime $3$--manifolds. Since $\hat{G}$ is one-ended, it does not split as a nontrivial free product. Thus all but one of the prime factors are simply connected and, hence, have empty boundary. By the Poincar\'{e} Conjecture, the closed, simply connected, prime factors are all spheres, so that $\hat{N}$ is prime. Because $\hat{G}$ is one-ended, $\hat{N}$ must also be irreducible since the only other possibility is $S^2 \times S^1$, which has a $2$--ended fundamental group.
Since $\hat{N}$ is compact, connected, orientable, and irreducible, and $\hat{G}$ is infinite, the Sphere Theorem implies that $\hat{N}$ is aspherical (see, for example, Corollary~3.9 of \cite{hatcher_3man}). Since $N$ and $\hat{N}$ have the same contractible universal cover, we see that $N$ is a compact, aspherical $3$--manifold with aspherical, incompressible boundary components. To conclude the proof, we will see that $G$ acts properly and simplicially on a coarse $PD(3)$ space. Let $DN$ denote the double of $N$ across its boundary. Since $N$ and its boundary are compact and aspherical, $DN$ is a closed aspherical $3$--manifold. Using any finite triangulation of $DN$, Lemma~\ref{lem:char_pd3} implies that its universal cover is a coarse $PD(3)$ space on which $DG = \pi_1(DN)$ acts simplicially. Since each boundary component of $N$ $\pi_1$--injects into $N$, the double $DG$ is the fundamental group of a graph of groups with one vertex for each copy of $N$ and one edge for each component of $\partial N$. In particular, $\pi_1(N) \to \pi_1(DN)$ is injective. Thus the subgroup $G < DG$ acts properly and simplicially on the universal cover of $DN$. \end{proof} \subsection{Classification theorems} \label{3manproof} \begin{thm} \label{thm:3manclassification} Let $G \cong \pi_1(S_g) *_{\left\langle a^m=b^n \right\rangle} \pi_1(S_h) \in \mathcal{C}_{m,n}$, where $a\in \pi_1(S_g)$ and $b \in \pi_1(S_h)$ are homotopy classes of essential simple closed curves. Then $G$ is the fundamental group of a $3$--manifold if and only if one of the following holds: \begin{enumerate} \item $m=n=1$; \item $m=1$, $n=2$, and $b$ is the homotopy class of a non-separating curve; or, \item $m=n=2$, and $a$ and $b$ are homotopy classes of non-separating curves.
\end{enumerate} \end{thm} \begin{proof} Let $G \cong \pi_1(S_g) *_{\left\langle a^m=b^n \right\rangle} \pi_1(S_h) \in \mathcal{C}_{m,n}$, where $a\in \pi_1(S_g)$ and $b \in \pi_1(S_h)$ are homotopy classes of essential simple closed curves. \begin{center} {\it Construction of $3$--manifold structure.} \end{center} Suppose first that condition (1), (2), or (3) holds. We will realize $G \cong \pi_1(S_g) *_{\left\langle a^m=b^n \right\rangle} \pi_1(S_h)$ as the fundamental group of $M_G$, an aspherical $3$--manifold with boundary. Suppose $a_0$ is an essential simple closed curve on $S_g$ in the homotopy class of $a \in \pi_1(S_g)$, and suppose $b_0$ is an essential simple closed curve on $S_h$ in the homotopy class of $b \in \pi_1(S_h)$. To construct $M_G$, we will glue an $I$--bundle over $S_g$ to an $I$--bundle over $S_h$ along an annulus in the boundary of each bundle, where the annulus on the $I$--bundle over $S_g$ is freely homotopic to $a_0^m$ and the annulus on the $I$--bundle over $S_h$ is freely homotopic to $b_0^n$. The choice of $I$--bundle over $S_g$ depends on whether $m=1$ or $m=2$; likewise, the choice of $I$--bundle over $S_h$ depends on whether $n=1$ or $n=2$. We describe the $I$--bundle over $S_g$ and the choice of annulus on its boundary; the construction is analogous for $S_h$. \begin{figure} \begin{overpic}[scale=.25, tics=5]{figure-Ibundle.pdf} \put(40,47){$a_0$} \end{overpic} \caption{{\small Pictured above is the solid Klein bottle, formed by the $3$--cube $[-1,1]^3$ modulo the identification $(-1,y,z) \sim (1, y, -z)$: the left-hand face of the cube has been identified to the right-hand face of the cube by a reflection about the horizontal mid-plane. The top and bottom shaded faces glue to form an annulus. The solid Klein bottle is a subspace of an $I$--bundle over the surface that is twisted over a non-separating curve $a_0$. }} \label{Ibundle} \end{figure} If $m=1$, let $M_g \cong S_g \times [-1,1]$ be the trivial $I$--bundle over $S_g$.
Let $N(a_0) \subset S_g$ be a regular neighborhood of $a_0$, and let $A_g = N(a_0) \times \{1\} \subset S_g \times I$ be an annulus on the boundary of $M_g$. Then $A_g$ is freely homotopic to the curve $a_0$. If $m=2$, then by assumption, the homotopy class $a$ is represented by a non-separating simple closed curve $a_0$. In this case, there exists a retraction $r\colon S_g \to \Image(a_0)$, and the induced map $r_*$ gives rise to a homomorphism $\pi_1(S_g) \to \field{Z} \to \field{Z}/2\field{Z}$ sending $a$ to a nontrivial element. Any homomorphism $\phi\colon \pi_1(S_g) \rightarrow \field{Z}/2\field{Z}$ gives rise to an $I$--bundle over $S_g$ whose monodromy---i.e., the twisting over various curves---is given by $\phi$. We briefly recall the construction of this bundle. Let $\field{Z}/2\field{Z}$ act on the interval $I=[-1,1]$ via multiplication by $\pm 1$. Then $\phi$ induces a corresponding action of $\pi_1(S_g)$ on $I$. Let $\pi_1(S_g)$ act on $\tilde{S_g} \times I$ by a diagonal action that consists of the covering space action on the first factor and $\phi$ on the second factor. The quotient by this action is an $I$--bundle over $S_g$ that is twisted over $a_0$: the bundle over $a_0$ is a M\"{o}bius band with center circle $a_0$. Let $N(a_0)$ be a regular neighborhood of $a_0$ on $S_g$, and let $K_g \subset M_g$ be the union of the $I$--fibers over $N(a_0)$. Then the subspace $K_g \subset M_g$ forms a solid Klein bottle, as shown in Figure~\ref{Ibundle}. Let $A_g$ be the boundary of this subspace $K_g \subset M_g$, which is an annulus freely homotopic to $a_0^2$ and is shaded in Figure~\ref{Ibundle}. Let $M_h$ be the $I$--bundle over $S_h$ formed analogously, depending on whether $n=1$ or $n=2$. Let $A_h$ be the corresponding annulus on the boundary of $M_h$ that is freely homotopic to $b_0^n$. Identify $A_g\subset M_g$ and $A_h \subset M_h$ via a homeomorphism to form the aspherical space $M_G$ as desired. 
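To verify that $\pi_1(M_G) \cong G$, note that each $I$--bundle deformation retracts onto its base surface, and the core curve of the gluing annulus represents the conjugacy class of $a^m$ in $\pi_1(M_g) \cong \pi_1(S_g)$ and of $b^n$ in $\pi_1(M_h) \cong \pi_1(S_h)$. The Seifert--van Kampen theorem applied to the decomposition $M_G = M_g \cup_{A_g} M_h$ then yields
\[
\pi_1(M_G) \;\cong\; \pi_1(M_g) *_{\pi_1(A_g)} \pi_1(M_h) \;\cong\; \pi_1(S_g) *_{\left\langle a^m=b^n \right\rangle} \pi_1(S_h) \;\cong\; G.
\]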
Thus, if (1), (2), or (3) hold, $G$ is the fundamental group of an aspherical $3$--manifold. \begin{center} {\it Obstruction to the existence of a $3$--manifold structure.} \end{center} Suppose now that none of (1)--(3) hold. Suppose towards a contradiction that $G$ is the fundamental group of a $3$--manifold. Then, by Proposition~\ref{prop:3-man_to_pd3}, $G$ acts properly on a coarse $PD(3)$ space~$X$. As described below, $G$ also acts freely and cocompactly on a model space $Y$; let $W$ be a convex subset of $Y$. Then, the orbit maps $G \rightarrow X$ and $G \rightarrow Y$ given by $g \mapsto g \cdot x$ and $g \mapsto g \cdot y$ for $x \in X$ and $y \in W \subset Y$ define a $G$--equivariant uniformly proper map $f\colon Y\rightarrow X$, which restricts to a $G$--equivariant uniformly proper map $f\colon W \rightarrow X$. We will examine the action of $G$ on a model space $Y$ and apply the coarse separation results of Kapovich--Kleiner to conclude that $G$ is not the fundamental group of a $3$--manifold. \begin{center} {\it Action of $G$ on a model space $Y$.} \end{center} As shown in Construction~\ref{surfmodel}, $G$ acts freely and cocompactly by isometries on a model space $Y$ of type $(m,n,\mathbb{X}, 2)$ as defined in Construction~\ref{modelspaces}. The space $Y$ is a locally-finite cell complex by construction, and the cells of $Y$ may be subdivided $G$--equivariantly to give $Y$ the structure of a bounded geometry metric simplicial complex. The subgroup $\left\langle a,b \right\rangle \leq G$ acts geometrically on one copy of $T_{m,n} \times \field{R}$ in $Y$. The action of $a$ and $b$ on $T_{m,n} \times \field{R}$ decomposes as a product. The element $a$ cyclically permutes the $m$ edges adjacent to a vertex $v$ of valence $m$ in $T_{m,n} \times \{0\}$ and translates by $n$ units in the $\field{R}$--direction.
Likewise, the element $b$ cyclically permutes the $n$ edges adjacent to a vertex $w$ of valence $n$ adjacent to $v$ in $T_{m,n} \times \{0\}$ and translates by $m$ units in the $\field{R}$--direction. Note that if $m=1$, then $a$ acts only by translation in the $\field{R}$--direction, and if $n=1$, then $b$ acts only by translation in the $\field{R}$--direction. Recall that each line $\{v\} \times \field{R} \subset T_{m,n} \times \field{R}$ is called an {\it essential line}. \begin{center} {\it Case 1: $m \geq 2$, $n \geq 3$.} \end{center} Suppose $m \geq 2$, $n\geq 3$. Suppose $\left\langle a,b \right\rangle \leq G$ stabilizes $T_{m,n} \times \field{R} \subset Y$ as described above. Choose a geodesic ray $\rho$ based at the vertex $w$ in $T_{m,n} \times \{0\}$, and let $P_0 = \rho \times \field{R} \subset T_{m,n} \times \field{R}$. The space $P_0$ is homeomorphic to $\field{R}_{+}^2$. Let $P_i = b^iP_0$ for $0 \leq i \leq n-1$. Then, for $i \neq j$, $P_i \cap P_j = \{w\} \times \field{R}$, a line we denote by $\ell$. Glued to $\ell$ is also a copy of $\widetilde{S_h}$, the universal cover of the surface $S_h$. Let $H_1$ and $H_2$ be the two half-planes of $\widetilde{S_h}$ bounded by $\ell$. Let $W$ be the union of $P_i$ for $0 \leq i \leq n-1$, $H_1$, and $H_2$ along $\ell$. Then $W$ is the union of $n+2$ half-planes and satisfies the conditions of Lemma~\ref{separation}. By Lemma~\ref{separation}, there is a unique cyclic ordering on the index set of the half-planes in $f(W)$, and this cyclic order is preserved by any homeomorphism of $X$ that preserves $f(W)$. The group of permutations of $n+2$ elements that preserve a cyclic order is isomorphic to the dihedral group $D_{n+2}$ of order $2(n+2)$. Since $b \in G$ preserves $W$ and $f$ is $G$--equivariant, $b$ preserves $f(W)$, and thus $b$ preserves the cyclic order on the index set of the half-planes in $f(W)$. Therefore, $b$ corresponds to an element $\sigma \in D_{n+2}$.
However, the element $b$ acts on $W$ by cyclically permuting the half-planes $P_i$ and stabilizing each half-plane $H_i$, $i = 1,2$. Thus, $\sigma \in D_{n+2}$ is an $n$--cycle. When $n \geq 3$, $D_{n+2}$ does not contain an $n$--cycle, a contradiction. Thus, we conclude that $G$ is not the fundamental group of a $3$--manifold. \begin{center} {\it Case 2: $m=1$, $n \geq 3$.} \end{center} Suppose $m=1$ and $n \geq 3$. The arguments in this case are similar to those in Case 1 and account for the fact that $T_{1,n}$ is finite, so no geodesic ray $\rho \subset T_{1,n}$ exists. Let $T_{1,n} \times \field{R} \subset Y$ be the complex stabilized by $\left\langle a, b \right\rangle$. Let $\ell_0$ be the essential line in $T_{1,n} \times \field{R}$ stabilized by $b$, and let $\ell_1, \ldots, \ell_n$ be the other $n$ essential lines in $T_{1,n} \times \field{R}$. In the model space $Y$, the line $\ell_0$ is glued to a copy of $\widetilde{S_h}$, the universal cover of $S_h$, along a line. Let $H_1$ and $H_2$ be the two half-planes of $\widetilde{S_h}$ bounded by $\ell_0$. Similarly, each line $\ell_i$ for $1 \leq i \leq n$ is identified to a copy of $\widetilde{S_g}$, the universal cover of $S_g$, along a line. Let $J_{i1}$ and $J_{i2}$ be the two half-planes of $\widetilde{S_g}$ bounded by $\ell_i$. Finally, let $e_1, \ldots, e_n$ be the $n$ edges in $T_{1,n}$ so that $e_i \cap \ell_i\neq \emptyset$, and let $P_{ij}$ be the union of $e_i \times \field{R}$ with $J_{ij}$ for $1 \leq i \leq n$ and $j=1,2$. Let $W$ be the union of $P_{ij}$ for $1\leq i \leq n$, $j=1,2$, $H_1$, and $H_2$ along $\ell_0$. Then $W$ is the union of $2n+2$ half-planes and satisfies the conditions of Lemma~\ref{separation}. By Lemma~\ref{separation}, there is a unique cyclic ordering on the half-planes in $f(W)$ that is preserved by the element $b$ since $b$ preserves $W$. Thus, $b$ corresponds to an element $\sigma \in D_{2n+2}$. 
The element $b$ stabilizes each half-plane $H_1$ and $H_2$, so $b$ is either trivial or a reflection. However, $b$ cyclically permutes the lines $\ell_i$ for $1 \leq i \leq n$, and $n \geq 3$, so the order of $\sigma$ is greater than two, a contradiction. Thus, in this case, $G$ is not the fundamental group of a $3$--manifold. \begin{center} {\it Case 3: $m=n=2$ and $a$ or $b$ is separating.} \end{center} Suppose $m=n=2$ and $b$ is a separating curve. The argument is analogous if $a$ is a separating curve. Note first that if $b$ is the homotopy class of a separating curve, then $b$ is an element of the commutator subgroup of $\pi_1(S_h)$. So, $b$ maps to the identity in any homomorphism from $\pi_1(S_h)$ to an abelian group. In particular, if $\phi\colon\pi_1(S_h) \rightarrow \field{Z} / 2\field{Z}$ is a homomorphism then $\phi(b) = 0$. A homomorphism $\pi_1(S_h) \rightarrow \field{Z} / 2\field{Z}$ is given as follows. Let $\ell \subset Y$ be the line stabilized by $\left\langle b \right\rangle$. The line $\ell$ is glued to $\widetilde{S_h}$, the universal cover of $S_h$; let $H_1$ and $H_2$ be the two half-planes in $\widetilde{S_h}$ bounded by $\ell$. Let $W = H_1 \cup H_2$, which satisfies the conditions of Lemma~\ref{separation}. The $G$--equivariant uniformly proper map $f\colon Y \rightarrow X$ described above restricts to a $G$--equivariant uniformly proper map $f\colon W\rightarrow X$. By Lemma~\ref{separation}, $f(W)$ coarsely separates $X$ into two deep components. The subgroup $\pi_1(S_h)$ stabilizes $W$ and hence $f(W)$, which yields a homomorphism $\phi\colon\pi_1(S_h) \rightarrow \field{Z}/2\field{Z}$. To conclude this case, we show that $\phi(b) \neq 0$ by applying Lemma~\ref{separation} a second time. The line $\ell$ is also incident to a copy of $T_{2,2} \times \field{R} \equiv \field{E}^2$. Let $E_1$ and $E_2$ be the two half-planes in $T_{2,2} \times \field{R}$ bounded by $\ell$. 
Let $W' = H_1 \cup H_2 \cup E_1 \cup E_2$, a union of four half-planes that satisfies the conditions of Lemma~\ref{separation}. The $G$--equivariant uniformly proper map $Y \rightarrow X$ restricts to a $G$--equivariant uniformly proper map $f'\colon W' \rightarrow X$ that extends the uniformly proper map $f\colon W\rightarrow X$. By the arguments of Case~1, the element $b \in G$ corresponds to a transposition $\sigma \in D_4$. So, the half-planes $f'(E_1)$ and $f'(E_2)$ must be opposite in the cyclic order. Thus, $f'(E_1)$ and $f'(E_2)$ are in different deep components of $X \backslash f'(H_1 \cup H_2) = X \backslash f(H_1 \cup H_2)$. (This fact, while intuitive, can be more carefully justified using \cite{hruskastark}.) Therefore, $b$ non-trivially permutes the deep components of $X \backslash f(H_1 \cup H_2)$, so $\phi(b) \neq 0$, a contradiction. Thus, in this case, $G$ is not the fundamental group of a $3$--manifold. \begin{center} {\it Case 4: $m=1$, $n=2$, and $b$ is separating.} \end{center} The modification we used above to pass from Case~1 to Case~2 can also be used in a similar way to modify our argument from Case~3 into a proof for Case~4. Cases 1--4 cover all possibilities in which none of conditions (1)--(3) of the theorem holds. Thus, if none of conditions (1)--(3) holds, the group $G$ is not the fundamental group of a $3$--manifold. \end{proof} The next theorem implies that each right-angled Coxeter group with generalized $\Theta$--graph nerve acts properly on a contractible $3$--manifold, since every generalized $\Theta$--graph is planar. The theorem is due to Davis--Okun, and was used implicitly in the proof of \cite[Theorem~11.4.1]{davisokun}. We give an explicit proof here for the benefit of the reader. \begin{thm}[Davis--Okun] \label{thm:racg_3man} Let $W$ be a right-angled Coxeter group with nerve a flag simplicial complex $L$. Suppose $L$ is planar and connected with dimension at most $2$.
Then the group~$W$ acts properly on a contractible $3$--manifold. \end{thm} \begin{proof} Embed the complex $L$ into $S^2$, the $2$--sphere. Fill each complementary region on the sphere with $2$--simplices by adding a vertex in the interior of each region and coning off the boundary of the region to the new vertex. This procedure produces a flag triangulation $S$ of $S^2$, which has $L$ as a full subcomplex. Let $W_S$ be the right-angled Coxeter group with nerve the flag triangulation $S$. Since the nerve $S$ is a $2$--sphere, the Davis complex $\Sigma_S$ associated to $S$ is a contractible $3$--manifold on which $W_S$ acts properly and cocompactly. The original group $W = W_L$ is a subgroup of $W_S$, so it also acts properly on the same $3$--manifold. \end{proof} Using Proposition~\ref{3mancovers} and Theorem~\ref{thm:racg_3man}, we recover an alternate proof of Theorem~\ref{thm:surf_amal_3mancover}. \begin{cor} \label{cor:surf_3man} Let $G \in \mathcal{C}_{m,n}$. Then $G$ has a finite-index subgroup that acts freely on a contractible $3$--manifold. \end{cor}
\section{Introduction} \vspace{-1ex} The loss landscape of neural networks has attracted great research interest in the deep learning community \cite{open_prob,thelosslandscapeofoverparameterizednn,largebatchtraining, essentiallynobarriersin, landscapedesign,empiricalanalysisofhessianofoverparameter}. It provides the basis for designing better optimization algorithms, and helps to answer the question of when and how a deep network can achieve good generalization performance. One hypothesis that has drawn attention recently is that the local minima of neural networks can be characterized by their flatness, and it is conjectured that sharp minima tend to generalize worse than flat ones \cite{largebatchtraining}. A plausible explanation is that a flat minimizer of the training loss can achieve lower generalization error if the test loss is shifted from the training loss due to random perturbations. \figurename~\ref{fig:Shirish} gives an illustration of this argument. \begin{figure*} \centering \subfigure[\label{fig:Shirish}]{ \centering \begin{minipage}{0.41\textwidth} \centering \includegraphics[height=1.2in]{firstpic_Shirish.pdf} \end{minipage} } \subfigure[\label{fig:asy_shift}]{ \centering \begin{minipage}{0.26\textwidth} \centering \includegraphics[height=1.2in]{firstpic_2.pdf} \end{minipage} } \subfigure[\label{fig:asym}]{ \centering \begin{minipage}{0.26\textwidth} \centering \includegraphics[height=1.2in]{firstpic_1.pdf} \end{minipage} } \caption{\textbf{(a)} Three kinds of local minima: asymmetric, flat and sharp. If there exists a shift from the empirical loss to the population loss, a flat minimum is more robust than a sharp one. \textbf{(b)} For asymmetric valleys, if there exists a random shift, the solution $\boldsymbol{\tilde w}$ biased towards the flat side is more robust than the minimizer $\boldsymbol{\hat w^*}$. \textbf{(c)} SGD tends to stay longer on the flat side of asymmetric valleys; therefore, SGD averaging automatically produces the desired bias.
} \end{figure*} Although supported by plenty of empirical observations \cite{largebatchtraining,swa,visualizingthelosslandscape}, the definition of flatness was recently challenged by \cite{sharpminimacangenearlizefordeepnets}, who showed that one can construct arbitrarily sharp minima through weight re-parameterization without changing the generalization performance. In addition, recent evidence suggests that the minima of modern deep networks are connected with simple paths with low generalization error \cite{essentiallynobarriersin,losssurfacesmodeconnectivity}. Similarly, the minima found by large batch training and small batch training are shown to be connected without any ``bumps'' \cite{empiricalanalysisofhessianofoverparameter}. This raises several questions: (1) If all the minima are well connected, why do some algorithms keep finding sharp minima and others keep finding flat ones \cite{largebatchtraining}? (2) Does flatness really affect generalization? In this paper, we address these questions by introducing the concept of \emph{asymmetric valleys}. We observe that the local geometry of the loss function of neural networks is usually asymmetric. In other words, there exist many directions such that the loss increases abruptly along one side, and grows rather slowly along the opposite side (see \figurename~\ref{fig:asy_shift} as an illustration). We formally define such local minima as asymmetric valleys. As we will show in Section~\ref{subsec:samebasin}, asymmetric valleys bring interesting illusions in high-dimensional space. For example, located in the same valley, $\boldsymbol{\tilde w}$ may appear to be a wider and flatter minimum than $\boldsymbol{\hat w}$ as the former is farther away from the sharp side. For the second question, we argue that flatness does affect generalization.
However, we do not simply follow the argument in \cite{largebatchtraining}, which states that flat minima tend to generalize better because they are more stable. Instead, we prove that in asymmetric valleys, a solution biased towards the flat side of the valley gives better generalization under mild assumptions. This result has at least two interesting implications: (1) converging to \emph{which} local minimum (if there are many) may not be critical for modern deep networks; however, it matters a lot \emph{where} the solution is located; and (2) the solution with the lowest \emph{a priori} generalization error is not necessarily the minimizer of the training loss. Given that a biased solution is preferred for asymmetric valleys, an immediate question is how we can find such solutions in practice. It turns out that simply averaging the weights along the SGD trajectory naturally leads to the desired biased solutions. We give a theoretical analysis to support this argument; see \figurename~\ref{fig:asym} for an illustration. Note that our result is in line with the empirical observations recently made by \citet{swa}. In addition, we provide empirical analysis to verify our theoretical results and support our claims. For example, we show that asymmetric valleys are indeed prevalent in modern deep networks, and that solutions with lower generalization error have a bias towards the flat side of the valley. We also find that batch normalization seems to be a major cause of asymmetric loss surfaces. \vspace{-1ex} \section{Related Work} \vspace{-1ex} \textbf{Neural network landscape}. Analyzing the landscape of deep neural networks is an active and exciting area \cite{qualitativelycharacterizing,visualizingthelosslandscape,landscapedesign,geometryofneuralnetworklosssurfaces,towardsunderstandinggeneralizationofdeeplearning,thelosslandscapeofoverparameterizednn,empiricalanalysisofhessianofoverparameter}.
For example, \cite{essentiallynobarriersin,losssurfacesmodeconnectivity} observed that essentially all local minima are connected together by simple paths. \cite{snapshotensembles} used a cyclic learning rate and took the ensemble of intermediate models to obtain improved accuracy. There are also appealing visualizations of the neural network landscape \cite{visualizingthelosslandscape}. \textbf{Sharp and flat minima}. The discussion of sharp and flat local minima dates back to \cite{flat_minima_2}, and has recently regained popularity. For example, \citet{largebatchtraining} proposed that large batch SGD finds sharp minima, which leads to poor generalization. In \cite{entropy-sgd}, an entropy regularized SGD was introduced to explicitly search for flat minima. It was later pointed out that large batch SGD can yield comparable performance when the learning rate or the number of training iterations is properly set \cite{trainlonger,trainimagenetin1hour,dontdecaythelearningrate,revisitingsmallbatchtraining,abayesianperspectiveongeneralization,threefactorsinfluencingminima}. Moreover, \cite{sharpminimacangenearlizefordeepnets} showed that from a given flat minimum, one can construct another minimum with arbitrarily sharp directions but equally good performance. In this paper, we argue that the description of sharp or flat minima is an oversimplification: there may simultaneously exist steep directions, flat directions, and asymmetric directions for the same minimum. \textbf{SGD optimization and generalization}. As the de facto optimization tool for deep networks, SGD and its variants are extensively studied in the literature. For example, it is shown that they can escape saddle points or sharp local minima under reasonable assumptions~\cite{Ge2015,escapesaddlepointsefficiently,onthelocalminimaoftheempriicalrisk,acceleratedgradientdescentescapesaddlepoints,firstorderstochasticalgorithmsforescapingfrom,howtomakethegradientssmall,natasha2,neon2,analternativeview}.
For convex functions \cite{polyakaveraging} or strongly convex but non-smooth functions~\cite{makinggradientdescentoptimal}, SGD averaging is shown to give a better convergence rate. In addition, it can also achieve better generalization performance for Lipschitz functions in theory~\cite{shalev2009stochastic,onlinetobatch}, or for deep networks in practice~\cite{snapshotensembles,swa,therearemanyconsistentexplanations}. Discussions on the generalization bounds of neural networks can be found in~\cite{spectrally-normalizedmarginbounds,towardsunderstandingtherole,exploringgeneralizationindeeplearning,generalizationindeeplearning,apacbayesianapproach,strongergeneralizationboundsfordeepnets,nonvacuousgeneralizationbound}. We show that SGD averaging has an implicit bias towards the flat sides of minima. Previously, it was shown that SGD has other kinds of implicit bias as well \cite{theimplicitbiasofgd,riskandparameterconvergence,characterizingimplicitbias}. \vspace{-1ex} \section{Asymmetric Valleys} \label{sec:asym} In this section, we give a formal definition of asymmetric valleys, and show that they are prevalent in the loss landscape of modern deep neural networks. \vspace{-1ex} \paragraph{Preliminaries.} In supervised learning, we seek to optimize $ \boldsymbol{w^*}\triangleq \argmin_{\boldsymbol{w}\in \mathbb{R}^d} \mathsf{L}(\boldsymbol{w}), ~~ \textrm{where}~~ \mathsf{L}: \mathbb{R}^d \rightarrow \mathbb{R},~ \mathsf{L}(\boldsymbol{w})\triangleq\mathbb{E}_{\boldsymbol{x}\sim \mathcal{D}} [f(\boldsymbol{x};\boldsymbol{w})] $ is the population loss, $\boldsymbol{x}\in \mathbb{R}^{m}$ is the input from distribution $\mathcal{D}$, $\boldsymbol{w}\in \mathbb{R}^d$ denotes the model parameters, and $f: \mathbb{R}^m \times \mathbb{R}^d\rightarrow \mathbb{R} $ is the loss function.
Since the data distribution $\mathcal{D}$ is usually unknown, instead of optimizing $\mathsf{L}$ directly, we often use SGD to find the empirical risk minimizer $\boldsymbol{\hat w^*}$ for a set of random samples $\{\boldsymbol{x}_i\}_{i=1}^n$ from $\mathcal{D}$ (a.k.a.\ the training set): $\boldsymbol{\hat w^*}\triangleq \argmin_{\boldsymbol{w}\in \mathbb{R}^d} \hat\mathsf{L}(\boldsymbol{w}), ~~ \mathrm{where}~~ \hat\mathsf{L}(\boldsymbol{w})\triangleq \frac{1}{n}\sum_{i=1}^n f(\boldsymbol{x}_i;\boldsymbol{w}) $. We use a unit vector $\boldsymbol{u} \in \mathbb{R}^d$ to represent a direction, such that the points along this direction passing through $\boldsymbol{w}\in \mathbb{R}^d$ can be written as $\boldsymbol{w}+ l \boldsymbol{u}$ for $l\in (-\infty, \infty)$. \vspace{-1ex} \subsection{Definition of asymmetric valley} Before formally introducing asymmetric valleys, we first define asymmetric directions. \begin{definition}[Asymmetric direction] Given constants $p>0, r>\zeta>0,c>1$, a direction $\boldsymbol{u}$ is $(r,p,c, \zeta)$-asymmetric with respect to point $\boldsymbol{w}\in \mathbb{R}^d$ and loss function $\hat \mathsf{L}$, if $\nabla_{l} \hat \mathsf{L}(\boldsymbol{w}+l\boldsymbol{u})<p$ and $\nabla_{l} \hat \mathsf{L}(\boldsymbol{w}-l\boldsymbol{u})< -cp$ for $l\in (\zeta, r)$. \end{definition} Put simply, an asymmetric direction is a direction $\boldsymbol{u}$ along which the loss function grows at different rates in the positive and negative directions. The constant $\zeta$ excludes the small neighborhood around $\boldsymbol{w}$ where the gradients are very small. With this definition, we now formally define the \emph{asymmetric valley}.
\begin{definition}[Asymmetric valley] \label{def:asy_valley} Given constants $p, r>\zeta>0,c>1$, a local minimum $\boldsymbol{\hat w}^*$ of $\hat\mathsf{L}: \mathbb{R}^d\rightarrow \mathbb{R}$ is an $(r,p,c, \zeta)$-asymmetric valley, if there exists at least one direction $\boldsymbol{u}$ such that $\boldsymbol{u}$ is $(r,p,c, \zeta)$-asymmetric with respect to $\boldsymbol{\hat w^*}$ and $\hat\mathsf{L}$. \end{definition} Notice that here we abuse the name ``valley'', since $\boldsymbol{\hat w^*}$ is essentially a point at the center of a valley. \vspace{-1ex} \subsection{Finding asymmetric directions empirically} \label{subsec:find_asym} \begin{figure}[t] \centering \includegraphics[width=.7\columnwidth]{SGD_asym.pdf} \caption{An asymmetric direction of a local minimum on the loss landscape of ResNet-110 trained on CIFAR-10.} \label{fig:sgd_asym} \end{figure} Empirically, by taking random directions whose entries take values in $(0,1)$, we can find an asymmetric direction for a given local minimum with decent probability\footnote{By contrast, a random direction with entries in $(-1,1)$ is usually not asymmetric. }. We perform experiments with three widely used deep networks, i.e., ResNet-110, ResNet-164~\cite{resnet}, and DenseNet-100~\cite{densenet}, on the CIFAR-10 and CIFAR-100 image classification datasets. For each model on each dataset, we conduct 5 independent runs. The results show that we can \emph{always} find asymmetric directions with some specification $(r,p,c, \zeta)$ where $c> 2$, which means that all the local minima\footnote{Notice that empirically we cannot verify whether the SGD solution $\boldsymbol{\hat w^*}$ is a local minimum. See the discussion in Section \ref{subsec:samebasin}.} found are located in asymmetric valleys. Figure \ref{fig:sgd_asym} shows an asymmetric direction for a local minimum of ResNet-110 trained on the CIFAR-10 dataset. We verified that it is a $(2.5, 0.2, 7.5, 1.2)$-asymmetric direction.
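As a concrete illustration of the verification procedure above, the sketch below probes a synthetic two-parameter loss along a candidate direction with finite differences and checks the condition of the asymmetric-direction definition. The function \texttt{toy\_loss} and all constants are hypothetical stand-ins, not the actual experimental code or the real network loss.

```python
import numpy as np

def directional_derivs(loss, w, u, ls, eps=1e-4):
    """Finite-difference directional derivatives of `loss` along u,
    evaluated at the points w + l*u for each l in ls."""
    return np.array([(loss(w + (l + eps) * u) - loss(w + (l - eps) * u)) / (2 * eps)
                     for l in ls])

def is_asymmetric(loss, w, u, r, p, c, zeta, n_probe=50):
    """Check the (r, p, c, zeta)-asymmetry condition: the directional
    derivative along u stays below p on the flat side and below -c*p on
    the sharp side, for l in the interior of (zeta, r)."""
    ls = np.linspace(zeta, r, n_probe + 2)[1:-1]
    flat = directional_derivs(loss, w, u, ls)    # probed at w + l*u
    sharp = directional_derivs(loss, w, u, -ls)  # probed at w - l*u
    return bool(np.all(flat < p) and np.all(sharp < -c * p))

# Hypothetical 2-parameter valley: slope 0.1 on the flat side, 2.0 on the sharp side.
def toy_loss(w):
    x = w[0]
    return 0.1 * x if x >= 0 else -2.0 * x

w_star = np.array([0.0, 0.0])
u = np.array([1.0, 0.0])
print(is_asymmetric(toy_loss, w_star, u, r=2.0, p=0.2, c=5.0, zeta=0.1))  # True
```

With these constants the check succeeds, while demanding a much larger sharpness ratio (e.g. $c=25$) would make it fail, mirroring how a specification $(r,p,c,\zeta)$ is certified in the experiments.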
Asymmetric valleys widely exist in other models as well; see Appendix~\ref{appendix_asy_valleys}. \vspace{-1ex} \section{Bias and Generalization} \vspace{-1ex} \label{sec:generalization} As we showed in the previous section, most local minima in practice are \emph{asymmetric}, i.e., they may be sharp in one direction, but flat in the opposite direction. Therefore, it is important to investigate the generalization ability of a solution $\boldsymbol{w}$ in this scenario. In this section, we prove that a \emph{biased} solution on the flat side of an asymmetric valley yields lower generalization error than the empirical minimizer $\hat{\boldsymbol{w}}^*$ in that valley. \vspace{-1ex} \subsection{Theoretical analysis} Before presenting our theorem, we first introduce two mild assumptions. We will show that they empirically hold for modern deep networks in Section \ref{sec:verify_assump}. The first assumption (Assumption \ref{assump:random_shift_assumption}) states that there exists a shift between the empirical loss and the true population loss. This is a common assumption in previous works, e.g., \cite{largebatchtraining}, but is usually stated informally. Here we define the ``shift'' formally. Without loss of generality, we will compare the empirical loss $\hat \mathsf{L}$ with $\mathsf{L}'\triangleq \mathsf{L} \!-\! \min_{\boldsymbol{w}} \mathsf{L}(\boldsymbol{w}) \!+\! \min_{\boldsymbol{w}} \hat \mathsf{L}(\boldsymbol{w}) $ to remove the ``vertical difference'' between $\hat \mathsf{L}$ and $\mathsf{L}$. Notice that $\min_{\boldsymbol{w}} \mathsf{L}(\boldsymbol{w})$ and $\min_{\boldsymbol{w}} \hat \mathsf{L}(\boldsymbol{w})$ are constants and do not affect our generalization guarantee.
\begin{definition}[$(\boldsymbol{\delta},R)$-shift gap] \label{def:shiftgap} For $\boldsymbol{\delta}\in \mathbb{R}^d$ and fixed functions $\mathsf{L}$ and $\hat \mathsf{L}$, we define the $(\boldsymbol{\delta}, R)$-shift gap between $\mathsf{L}$ and $\hat \mathsf{L}$ with respect to a point $\boldsymbol{w}$ as \[ \xi_{\boldsymbol{\delta}}(\boldsymbol{w})=\max_{\boldsymbol{v}\in \mathbb{B}(R)}| \mathsf{L}'(\boldsymbol{w}+\boldsymbol{v} +\boldsymbol{\delta})-\hat \mathsf{L}(\boldsymbol{w}+\boldsymbol{v} )| \] where $\mathsf{L}'(\boldsymbol{w})\triangleq \mathsf{L}(\boldsymbol{w})- \min_{\boldsymbol{w}} \mathsf{L}(\boldsymbol{w}) + \min_{\boldsymbol{w}} \hat \mathsf{L}(\boldsymbol{w}) $, and $\mathbb{B}(R)$ is the $d$-dimensional ball with radius $R$ centered at $\boldsymbol{0}$. \end{definition} From the above definition, we know that the two functions match well after the shift $\boldsymbol{\delta}$ if $\xi_{\boldsymbol{\delta}}(\boldsymbol{w})$ is very small. For example, $\xi_{\boldsymbol{\delta}}(\boldsymbol{w})=0$ means that $\mathsf{L}$ is locally identical to $\hat \mathsf{L}$ after the shift $\boldsymbol{\delta}$. Since $\hat \mathsf{L}$ is computed on a set of random samples from $\mathcal{D}$, the actual shift $\boldsymbol{\delta}$ between $\hat \mathsf{L}$ and $\mathsf{L}$ is a random variable, ideally with zero expectation.
\begin{assump}[Random shift assumption] \label{assump:random_shift_assumption} For a given population loss $\mathsf{L}$ and a random empirical loss $\hat \mathsf{L}$, constants $R>0, r\geq \zeta>0, \xi\geq 0$, a vector $\boldsymbol{\bar \delta}\in\mathbb{R}^d$ with $r \geq \boldsymbol{\bar \delta}_i \geq \zeta$ for all $i\in [d]$, and a minimizer $\boldsymbol{\hat w}^*$, we assume that there exists a random variable $\boldsymbol{\delta}\in \mathbb{R}^d$ correlated with $\hat \mathsf{L}$ such that $\Pr(\boldsymbol{ \delta}_i = \boldsymbol{\bar \delta}_i)=\Pr(\boldsymbol{ \delta}_i= -\boldsymbol{\bar \delta}_i)=\frac12 $ for all $i\in[d]$, and the $(\boldsymbol{ \delta} ,R)$-shift gap between $\mathsf{L}$ and $\hat \mathsf{L}$ with respect to $\boldsymbol{\hat w}^*$ is bounded by $\xi$. \end{assump} Roughly, the above assumption says that the local landscapes of the empirical loss and the population loss match well after applying a shift vector $\boldsymbol{\delta}$, which has equal probability of being positive or negative in each dimension. Therefore, $\boldsymbol{\delta}$ has $2^d$ possible values for a given shift vector $\boldsymbol{\bar \delta}$, each with probability $2^{-d}$. The second assumption stated below can be seen as an extension of Definition \ref{def:asy_valley}. \begin{assump}[Locally asymmetric] \label{assump:Locally_identical_assumption} For a given empirical loss $\hat \mathsf{L}$ and a minimizer $\boldsymbol{\hat w}^*$, there exist orthogonal directions $\boldsymbol{u}^1, \cdots, \boldsymbol{u}^k\in \mathbb{R}^d $ s.t. $\boldsymbol{u}^i$ is $(r,p_i, c_i, \zeta)$-asymmetric with respect to $\boldsymbol{\hat w}^* + \boldsymbol{v}- \langle \boldsymbol{v}, \boldsymbol{u}^i\rangle \boldsymbol{u}^i $ for all $\boldsymbol{v}\in \mathbb{B}(R')$ and $i\in [k]$. \end{assump} Assumption \ref{assump:Locally_identical_assumption} states that if $\boldsymbol{u}^i$ is an asymmetric direction at $\boldsymbol{\hat w}^*$, then the point $\boldsymbol{\hat w}^* \!+\!
\boldsymbol{v} \!-\! \langle \boldsymbol{v}, \boldsymbol{u}^i\rangle \boldsymbol{u}^i $, which deviates from $\boldsymbol{\hat w}^*$ along the direction perpendicular to $\boldsymbol{u}^i$, is also asymmetric along the direction $\boldsymbol{u}^i$. In other words, the \emph{neighborhood} around $\boldsymbol{\hat w^*}$ is an asymmetric valley. Under the above assumptions, we are ready to state our theorem, which says that the empirical minimizer is not necessarily the optimal solution, while a biased solution leads to better generalization. We defer the proof to Appendix \ref{appendix:proof_dimd}. \begin{thm}[Bias leads to better generalization] \label{thm:dimd} For any $\boldsymbol{l}\in \mathbb{R}^k$, if Assumption \ref{assump:random_shift_assumption} holds for $R=\|\boldsymbol{l}\|_2$, Assumption \ref{assump:Locally_identical_assumption} holds for $R'= \|\boldsymbol{\bar \delta}\|_2+\|\boldsymbol{l}\|_2$, and $ \frac{4\xi }{(c_i-1)p_i}< \boldsymbol{l}_i\leq \max\{r-\boldsymbol{\bar \delta}_i, \boldsymbol{\bar \delta}_i -\zeta \}$, then we have \vspace{-0.1in} \begin{align*} &\mathbb{E}_{\boldsymbol{\delta}}\mathsf{L}(\boldsymbol{\hat w}^*) -\mathbb{E}_{\boldsymbol{\delta}}\mathsf{L}\left (\boldsymbol{\hat w}^* +\sum_{i=1}^k\boldsymbol{l}_i \boldsymbol{u}^i\right ) \\\geq & \sum_{i=1}^k (c_i-1)\boldsymbol{l}_i p_i/2 -2k\xi>0 \end{align*} \end{thm} \paragraph{Remark on Theorem \ref{thm:dimd}.} It is widely known that the empirical minimizer is usually different from the true optimum. However, in practice it is difficult to know how the training loss shifts from the population loss. Therefore, the best we can do is to minimize the empirical loss function (with some regularizers). In contrast, Theorem~\ref{thm:dimd} states that in the asymmetric case, we should pick a biased solution to minimize the expected population loss, even if the shift is unknown.
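To make the mechanism of Theorem \ref{thm:dimd} concrete, the following one-dimensional sketch instantiates it in the simplest setting $k=1$, $\zeta=0$, $\xi=0$: the empirical loss is an asymmetric valley at $0$, and the population loss is its shift by $\delta=\pm\bar\delta$ with equal probability. All constants are hypothetical, and the construction of the shifted population loss is our own illustrative assumption rather than part of the theorem.

```python
import numpy as np

# Hypothetical constants: flat slope p, sharpness ratio c, shift dbar, bias l.
p, c, dbar, l = 0.2, 5.0, 1.0, 0.5

def L_hat(w):
    """Empirical loss: asymmetric valley at 0 (slope p on the flat side,
    slope c*p on the sharp side)."""
    return p * w if w >= 0 else -c * p * w

def expected_pop_loss(w):
    """E_delta[L(w)] where L(x) = L_hat(x - delta) and delta = +-dbar w.p. 1/2."""
    return 0.5 * (L_hat(w - dbar) + L_hat(w + dbar))

# Expected gain of the point biased by l toward the flat side over the
# empirical minimizer at 0; analytically (c - 1) * l * p / 2 here.
gain = expected_pop_loss(0.0) - expected_pop_loss(l)
print(gain)   # approximately 0.2
```

In this toy instance the gain of the biased point equals the theorem's lower bound $(c-1)lp/2$ exactly, since the shift gap $\xi$ is zero by construction.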
Moreover, it is possible to distill our insight into practical algorithms, as we will discuss in Section \ref{sec:avg_is_good}. \vspace{-1ex} \subsection{Verification of assumptions} \label{sec:verify_assump} \paragraph{Verification of Assumption \ref{assump:random_shift_assumption}.} We show that a shift between $\mathsf{L}$ and $\hat \mathsf{L}$ is quite common in practice, taking a ResNet-110 trained on CIFAR-10 as an example. Since we cannot visualize a shift in a high dimensional space, we randomly sample an asymmetric direction $\boldsymbol{u}$ (more results are shown in Appendix \ref{appendix_shift}) at the SGD solution $\boldsymbol{\hat w}^*$. The blue and red curves shown in \figurename~\ref{fig:random_shift} are obtained by calculating $\hat \mathsf{L}(\boldsymbol{\hat w}^* + l \boldsymbol{u})$ and $ \mathsf{L}'(\boldsymbol{\hat w}^*+ l \boldsymbol{u})$ for $l \in [-3,3] $, which correspond to the training and test loss, respectively. We then try different values of the shift $\boldsymbol{\delta}$ to ``match'' the two curves. As shown in \figurename~\ref{fig:random_shift}, after applying a horizontal shift $\boldsymbol{\delta}\!=\!0.4$ to the test loss, the two curves overlap almost perfectly. Quantitatively, we can use the \emph{shift gap} defined in Definition \ref{def:shiftgap} to evaluate how well the two curves match each other after shifting. It turns out that $\xi_{\boldsymbol{\delta}=0.4}\!=\!0.0335$, which is much lower than $\xi_{\boldsymbol{\delta}=0} \!=\! 0.223$ before shifting ($\boldsymbol{ \delta}$ has only one dimension here). In \figurename~\ref{fig:ratio}, we plot $\xi_{\boldsymbol{\delta}}/\xi_{\boldsymbol{0}}$ as a function of $\boldsymbol{\delta}$. Clearly, there exists a $\boldsymbol{\delta}$ that minimizes this ratio, indicating a good match. We conducted the same experiments for different directions, models and datasets, and made similar observations. Please refer to Appendix \ref{appendix_shift} for more results.
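The curve-matching step above can be sketched as follows. This is a toy reconstruction on synthetic one-dimensional slices (the real experiment evaluates network losses along $\boldsymbol{u}$); the grid, the synthetic curves, and the planted shift of $0.4$ are illustrative assumptions.

```python
import numpy as np

def shift_gap(ls, train_loss, test_loss, delta):
    """1-d analogue of the shift gap: max_l |L'(l + delta) - Lhat(l)|,
    where L' is the test loss shifted vertically so the minima match."""
    test_aligned = test_loss - test_loss.min() + train_loss.min()
    moved = np.interp(ls + delta, ls, test_aligned)       # L'(l + delta)
    ok = (ls + delta >= ls[0]) & (ls + delta <= ls[-1])   # stay in sampled range
    return np.max(np.abs(moved[ok] - train_loss[ok]))

# Synthetic slices: the test curve is the train curve with its minimum
# moved to l = 0.4 (plus a vertical offset), so the best shift is 0.4.
ls = np.linspace(-3.0, 3.0, 601)
profile = lambda x: np.where(x >= 0, 0.1 * x, -0.8 * x)
train = profile(ls)
test = profile(ls - 0.4) + 0.05

deltas = np.linspace(-1.0, 1.0, 201)
gaps = np.array([shift_gap(ls, train, test, d) for d in deltas])
best = deltas[int(np.argmin(gaps))]
print(best)   # close to 0.4
```

Minimizing the gap (or, equivalently, the ratio $\xi_{\boldsymbol{\delta}}/\xi_{\boldsymbol{0}}$) over a grid of candidate shifts recovers the planted value, mirroring the procedure behind \figurename~\ref{fig:ratio}.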
\begin{figure}[htbp] \centering \subfigure[Shift between $\mathsf{L}'$ and $\hat\mathsf{L}$\label{fig:random_shift} ]{ \centering \begin{minipage}{0.22\textwidth} \centering \includegraphics[width=4cm]{parallel0.pdf} \end{minipage} } \subfigure[$ \xi_{\boldsymbol{\delta}} /\xi_{\boldsymbol 0}$ vs.\ the shift $\boldsymbol{\delta}$\label{fig:ratio}]{ \centering \begin{minipage}{0.22\textwidth} \centering \includegraphics[width=4cm]{ratio_2.pdf} \end{minipage} } \caption{A shift exists between the empirical loss and the population loss for ResNet-110 on CIFAR-10.} \end{figure} \paragraph{Verification of Assumption \ref{assump:Locally_identical_assumption}.} \begin{figure}[t] \centering \includegraphics[width=.75\columnwidth]{explore_new_sgd_aux0_sum_C10_25_local_shift_regulated-crop.pdf} \caption{Mean and standard deviation of the training loss in the neighborhood of $\boldsymbol{\hat w^*}$ along the direction $\boldsymbol{u}$.} \label{fig:local_shift} \end{figure} This is a mild assumption that can be verified empirically. For example, we take an SGD solution of ResNet-110 on CIFAR-10 as $\boldsymbol{\hat w}^*$, and specify an asymmetric direction $\boldsymbol{u}$ for $\boldsymbol{\hat w^*}$. We then randomly sample $100$ different local adjustments $\boldsymbol{v}\in \mathbb{B}(25)$. Based on these adjustments, we plot in Figure \ref{fig:local_shift} the mean loss curve and standard deviation band along the asymmetric direction $\boldsymbol{u}$ for all the points $\boldsymbol{\hat w^*}+\boldsymbol{v} - \langle \boldsymbol{v}, \boldsymbol{u} \rangle \boldsymbol{u} $. As we can see, the variance of these curves is very small, which means that all of them are similar to each other. Moreover, we verified that $\boldsymbol{u}$ is $(4,0.1,5.22,2)$-asymmetric with respect to all neighboring points.
\section{Averaging Generates Good Bias} \label{sec:avg_is_good} In the previous section, we showed that when the loss landscape of a local minimum is asymmetric, a solution biased towards the flat side of the valley has better generalization performance. One immediate question is how we can obtain such a solution via practical algorithms. Below we show that it can be achieved by simply taking the average of SGD iterates during the course of training. We first analyze the one dimensional case in Section~\ref{sec:onedim}, and then extend the analysis to the high dimensional case in Section~\ref{sec:highdim}. \subsection{One dimensional case} \label{sec:onedim} For asymmetric functions, as long as the learning rate is not too small, SGD will oscillate between the flat side and the sharp side. Below we focus on one round of oscillation, and show that the average of the iterates in each round has a bias towards the flat side. Consequently, by aggregating all rounds of oscillation, averaging SGD iterates leads to a bias as well. Each individual round $i$ starts from the iteration at which SGD goes from the sharp side to the flat side (denoted as $w^i_0$), and ends exactly before the iteration at which SGD goes from the sharp side to the flat side again (denoted as $w^i_{T_i}$). Here $T_i$ denotes the number of iterations in the $i$-th round. The average iterate in the $i$-th round can be written as $\bar w\triangleq \frac1{T_i} \sum_{j=0}^{T_i} w^i_j$. For notational simplicity, we will omit the superscript $i$ on $w^i_j$. The following theorem shows that the expectation of the average has a bias towards the flat side. To get a formal lower bound on $\bar w$, we consider the asymmetric case where $\zeta=0$, and also assume lower bounds on the gradients of the function. Notice that we made little effort to optimize the constants or the bounds on the parameters, and we defer the proof to Appendix~\ref{appendix_main_theorem}.
\begin{thm}[SGD averaging generates a bias] \label{thm:asym_avg} Assume that a local minimizer $w^*=0$ is a $(r,a_+,c, 0)$-asymmetric valley, where $b_- \leq \nabla \mathsf{L}(w) \leq a_- <0$ for $w< 0$, and $0<b_+ \leq \nabla \mathsf{L}(w) \leq a_+$ for $w\geq 0$. Assume $-a_-= c a_+$ for a large constant $c$, and $\frac{-(b_--\nu)}{b_+}=c'<\frac{e^{c/3}}{6}$. The SGD update rule is $w_{t+1} = w_t - \eta (\nabla \mathsf{L}(w_t)+\omega_t)$, where $\omega_t$ is the noise with $|\omega_t|<\nu$, and assume $\nu \leq a_+$. Then we have \begin{align} \vspace{-1ex} \mathbb{E}[\bar w]>c_0>0, \nonumber \vspace{-1ex} \end{align} where $c_0$ is a constant that only depends on $\eta, a_+, a_-, b_+, b_-$ and $\nu $. \end{thm} Theorem \ref{thm:asym_avg} can be intuitively explained by Figure \ref{fig:Asy_onedim_occillate}. If we run SGD on this one dimensional function, it stays on the flat side for more iterations, since the magnitude of the gradient on this side is much smaller. Therefore, the average of the iterates is biased towards the flat side. \begin{figure}[hbpt] \centering \includegraphics[width=0.7\columnwidth]{synthetic_asymmetric_soft_pertubate-crop.pdf} \caption{SGD iterates and their average on an asymmetric function: the oscillation case.} \label{fig:Asy_onedim_occillate} \end{figure} Of course, if the learning rate is sufficiently small, there will be no oscillation along the SGD trajectory, as shown in Figure \ref{fig:Asy_onedim}. In this case, the averaged solution obtained on the sharp side tends to be closer to the center than the one obtained on the flat side, since the gradient on the sharp side is much larger, so SGD converges much faster there. In other words, even if there is no oscillation and Theorem \ref{thm:asym_avg} does not apply, SGD averaging creates more bias on the flat side than on the sharp side in expectation. Thus, in all these scenarios, taking the average of SGD iterates is beneficial for asymmetric loss functions.
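The oscillation argument can be reproduced with a few lines of simulation. This is only a hedged sketch with piecewise-constant gradients and hypothetical constants (flat slope $0.2$, sharp slope $2.0$, i.e. $c=10$, noise level $\nu=0.1$), not the exact setting of Theorem \ref{thm:asym_avg}.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(w):
    """Asymmetric valley at 0: slope 0.2 on the flat (right) side,
    slope -2.0 on the sharp (left) side."""
    return 0.2 if w >= 0 else -2.0

eta, nu, T = 0.5, 0.1, 20000   # learning rate, noise bound, iterations
w, trace = 0.0, []
for _ in range(T):
    w -= eta * (grad(w) + rng.uniform(-nu, nu))
    trace.append(w)

w_bar = float(np.mean(trace))
print(w_bar > 0)   # True: the averaged iterate is biased toward the flat side
```

In each oscillation cycle the iterate spends roughly ten small steps descending the flat side and a single large step crossing back from the sharp side, so the time average lands well inside the flat side.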
In addition, for symmetric loss functions, averaging SGD iterates may also be helpful in terms of denoising (see Appendix~\ref{appendix:other_sgd_pattern} for concrete examples). Therefore, taking the average of the SGD trajectory may always improve generalization, regardless of whether the loss function is symmetric or not. \begin{figure}[htbp] \centering \includegraphics[width=0.45\columnwidth]{synthetic_asymmetric_soft_steep-crop.pdf} \includegraphics[width=0.45\columnwidth]{synthetic_asymmetric_soft_flat-crop.pdf} \caption{SGD iterates and their average on an asymmetric function with small learning rate: starting from the sharp side (\emph{Left}); and starting from the flat side (\emph{Right}).} \label{fig:Asy_onedim} \end{figure} \subsection{High dimensional case} \label{sec:highdim} For high dimensional functions, the analysis of averaging SGD iterates is more complicated than that in the previous subsection. However, if we only care about the bias along a specific direction $\boldsymbol{u}$, we can directly apply Theorem \ref{thm:asym_avg} with one additional assumption. Specifically, if the projections of the loss function onto $\boldsymbol{u}$ along the SGD trajectory satisfy the assumptions of Theorem \ref{thm:asym_avg}, i.e., they are asymmetric and the gradients on both sides have upper and lower bounds, then the claim of Theorem \ref{thm:asym_avg} directly applies. This is because only the gradient along the direction $\boldsymbol{u}$ affects the SGD trajectory projected onto $\boldsymbol{u}$, and we can safely ignore all other directions. Empirically, we find that this assumption generally holds.
For a given SGD solution, we fix a random asymmetric direction $\boldsymbol{u}\in\mathbb{R}^d$, and sample the loss surface along direction $\boldsymbol{u}$ passing through the $t$-th epoch of the SGD trajectory (denoted as $\boldsymbol{w_t}$), i.e., we evaluate $\hat \mathsf{L}(\boldsymbol{w_t} + l\boldsymbol{u})$ for $0\leq t\leq 200$ and $l \in [-15,15]$. As shown in \figurename~\ref{fig:sgd_high_dimension_slice}, after the first $40$ epochs, the projected loss surfaces become relatively stable. Therefore, we can directly apply Theorem \ref{thm:asym_avg} to the direction $\boldsymbol{u}$. As we will see in Section \ref{subsec:illusion_swa}, compared with SGD solutions, SGD averaging indeed creates a bias along different asymmetric directions, as predicted by our theory. \begin{figure}[t] \centering \includegraphics[width=.45\textwidth]{sgd_slice9.pdf} \caption{Projection of the training loss surface onto an asymmetric direction $\boldsymbol{u}$.} \label{fig:sgd_high_dimension_slice} \end{figure} \section{Sharp and Flat Minima Illusion} \label{subsec:samebasin} In this section, we show that \emph{where} the solution is located within the basin of a local minimum is very important, which refines the practice of judging generalization performance by the sharpness or flatness of a local minimum. All of our observations support the theoretical analysis in the previous sections. First we remark that rigorously testing whether a point is a local minimum, or even close to a local minimum, is extremely hard for deep models; see, e.g., \cite{spuriouslocalminimaarecommon}. In fact, the Hessians of most empirical solutions still have plenty of small negative eigenvalues \cite{entropy-sgd}, so technically these solutions are saddle points. We choose to ignore these technicalities, however, and treat all such points as ``local minima''.
\subsection{Illusion case 1: SWA algorithm} \label{subsec:illusion_swa} Recently, \citet{swa} proposed the stochastic weight averaging (SWA) algorithm, which explicitly takes the average of SGD iterates to achieve better generalization. Inspired by their observation that ``SWA leads to solutions corresponding to wider optima than SGD'', we provide a more refined explanation in this subsection. That is, averaging weights leads to ``biased'' solutions in an asymmetric valley, which correspond to better generalization. Specifically, we run the SWA algorithm (with decreasing learning rate) on three popular deep networks, i.e., ResNet-110, ResNet-164 and DenseNet-100, on the CIFAR-10 and CIFAR-100 datasets, following the configurations in \cite{swa} (denoted as SWA). Then we run SGD with a small learning rate \emph{from the SWA solutions} to find a solution located in the same basin (denoted as SGD). In Figure \ref{fig:inter_swasgd_pic}, we draw an interpolation between the solutions obtained by SWA and SGD\footnote{\citet{swa} have done a similar experiment.}. One can observe that there is no ``bump'' between these two solutions, meaning that they are located in the same basin. Clearly, the SWA solution is biased towards the flat side, which verifies our theoretical analysis in Section~\ref{sec:avg_is_good}. Further, we notice that although the biased SWA solution has higher training loss than the empirical minimizer, it indeed yields lower test loss. This verifies our analysis in Section~\ref{sec:generalization}. Similar observations are made on other networks and other datasets, which we present in Appendix \ref{appendix:swa_asym_figures}.
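The interpolation experiment can be sketched as below; \texttt{toy\_loss} and the two endpoint vectors are stand-ins for the real network loss and the SWA/SGD solutions, so the numbers are purely illustrative.

```python
import numpy as np

def interpolation_curve(loss, w_a, w_b, n=21):
    """Loss along the segment between two solutions; the absence of a
    'bump' (an interior value above both endpoints) suggests that the
    two points lie in the same basin."""
    alphas = np.linspace(0.0, 1.0, n)
    values = np.array([loss((1 - a) * w_a + a * w_b) for a in alphas])
    return alphas, values

# Stand-in loss: a 1-parameter asymmetric valley; the "SWA" point sits on
# the flat side while the "SGD" point sits at the empirical minimizer.
def toy_loss(w):
    x = float(w[0])
    return 0.1 * x if x >= 0 else -2.0 * x

w_swa, w_sgd = np.array([1.0]), np.array([0.0])
alphas, values = interpolation_curve(toy_loss, w_swa, w_sgd)
no_bump = values.max() <= max(values[0], values[-1]) + 1e-12
print(no_bump)   # True
```

Note how the toy "SWA" endpoint has a higher training loss than the minimizer, even though both lie in the same valley, which is exactly the pattern observed in the real interpolation plots.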
\begin{figure}[htbp] \centering \includegraphics[width=.45\textwidth]{inter_swaandsgd_300_nomomentum_4_C100_inter_pic.pdf} \caption{Interpolation between the SWA and SGD solutions (ResNet-164 on CIFAR-100).} \label{fig:inter_swasgd_pic} \end{figure} To further support our claim, we list our results in Table \ref{table 2:sgdafterswa}, from which we can observe that SGD solutions always have higher training accuracy, but worse test accuracy, than SWA solutions. This supports our claim in Theorem \ref{thm:dimd}, which states that a bias towards the flat side of an asymmetric valley can help improve generalization, although it yields higher training error. \begin{table}[!htbp] \vspace{-0ex} \centering \small \caption{Training and test accuracy of various networks on CIFAR-100.} \label{table 2:sgdafterswa} \begin{tabular}{|l|l|l|} \hline \multirow{2}{*}{Network} & \multicolumn{2}{c|}{CIFAR-100} \\ \cline{2-3} & \multicolumn{1}{c|}{train} & \multicolumn{1}{c|}{test} \\ \hline ResNet-110-SWA & 94.98\% & 78.94\% \\ \hline ResNet-110-SGD & 97.52\% & 78.29\% \\ \hline ResNet-164-SWA & 97.48\% & 80.69\% \\ \hline ResNet-164-SGD & 99.12\% & 76.56\% \\ \hline DenseNet-100-SWA & 99.84\% & 72.29\% \\ \hline DenseNet-100-SGD & 99.87\% & 71.46\% \\ \hline \end{tabular}\\ \vspace{-0ex} \end{table} \paragraph{Verifying Theorem \ref{thm:asym_avg}.} We further verify that averaging SGD iterates creates a bias towards the flat side in expectation along many other asymmetric directions, not just the specific direction discussed above. We take a ResNet-110 trained on CIFAR-100 as an example. Denote by $\boldsymbol{u}_{inter}$ the unit vector pointing from the SGD solution to the SWA solution. We pick another random unit direction $\boldsymbol{u}_{rand}$. Then, we use the direction $\boldsymbol{u}_{inter}+\boldsymbol{u}_{rand}$ to verify our claim.
\begin{figure}[htbp] \centering \includegraphics[width=.45\textwidth]{high_dimension_find_110_1_C100_0.pdf} \caption{The average of SGD iterates has a bias towards the flat side (ResNet-110 on CIFAR-100).} \label{fig:high_dimension_find_110_1_C100} \end{figure} The results are shown in Figure \ref{fig:high_dimension_find_110_1_C100}, from which we can observe that the SWA solution has a bias towards the flat side compared with the SGD solution. We create $10$ different random vectors for each network and each dataset, and similar observations can be made (see more examples in Appendix \ref{appendix:verify_high_dimension}). \subsection{Illusion case 2: large batch SGD} \label{subsec:largebatchtraining} \citet{largebatchtraining} observed that training with a small batch size using the SGD algorithm generalizes better than training with a large batch size. They argue that this is because large batch SGD tends to converge to sharp minima, while small batch SGD generally converges to flat minima. Here we show that this may not be the case in practice. We use a PreResNet-164 trained on CIFAR-100 as an example. We first run SGD with a batch size of 128 for 200 epochs to find a solution (denoted as the \emph{large batch solution}), and then continue the training with batch size 32 for another 80 epochs to find a nearby solution (denoted as the \emph{small batch solution}). From the results shown in Figure \ref{fig:inter_large_small_batch_pic}, it is clear that the small batch solution has worse training accuracy but better test accuracy. Meanwhile, there is no ``bump'' between these solutions, which suggests that they are in the same basin. Therefore, small batch SGD generalizes better because it finds a better biased solution in the asymmetric valley, not because it finds a different, wider or flatter minimum.
\begin{figure}[ht] \centering \includegraphics[width=.4\textwidth]{inter_train_batch_change_300_128to32_nomomentum_seed2000280_3_C100_inter_pic.pdf} \caption{Large and small minibatch interpolation (batch size 128 to 32, PreResNet-164 on CIFAR-100)} \label{fig:inter_large_small_batch_pic} \end{figure} \subsection{Illusion on the width of a minimum} \label{subsec:random_ray} We further point out that visualizing the ``width'' of a local minimum in a low-dimensional space may lead to illusory results. For example, one visualization technique \cite{swa} is to show how the loss changes along many random directions $\boldsymbol{v}_i$'s drawn from the $d$-dimensional Gaussian distribution. We take the large batch and small batch solutions from the previous subsection as our example. Figure \ref{fig:random_ray_largeandsmallbatch} visualizes the ``width'' of the two solutions using the method described above. From the figure, one may draw the conclusion that small batch training leads to a wider minimum compared to large batch training. However, as discussed in Subsection~\ref{subsec:largebatchtraining}, these two solutions are actually from the same basin. In other words, the loss curvature near the two solutions looks different because they are located at different positions in an asymmetric valley, not because they are located at different local minima. A similar observation holds for the SWA and SGD solutions; see Appendix \ref{sec:random_ray}. \begin{figure}[htbp] \centering \includegraphics[width=.45\textwidth]{randomray_changebatch128to32_nomom_scale3to60_sgdandswa_test_loss_randomray_result-crop.pdf} \caption{Random rays of the large batch and small batch solutions (ResNet-164 on CIFAR-100).} \label{fig:random_ray_largeandsmallbatch} \end{figure} \section{Batch Norm and Asymmetric Valleys} \label{sec:bn} Previous sections have focused on defining \emph{what} asymmetric valleys are, and \emph{how} to leverage them for better generalization.
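The random-ray visualization discussed above can be sketched as follows (a minimal sketch of ours; the `loss_cap` threshold and the step grid are arbitrary choices, and the real experiment uses the network's test loss):

```python
import numpy as np

def random_ray_width(loss_fn, theta, n_rays=10, t_max=30.0, n_steps=100,
                     loss_cap=1.0, seed=None):
    """Average distance, over random unit directions, at which the loss
    first exceeds loss_cap when walking away from theta.

    A small value suggests a 'sharp' minimum and a large one a 'wide'
    minimum; as argued in the text, this scalar can be misleading when
    the two solutions sit at different positions in one asymmetric valley.
    """
    rng = np.random.default_rng(seed)
    ts = np.linspace(0.0, t_max, n_steps)
    widths = []
    for _ in range(n_rays):
        v = rng.standard_normal(theta.shape)
        v /= np.linalg.norm(v)          # random unit direction
        width = t_max
        for t in ts:
            if loss_fn(theta + t * v) > loss_cap:
                width = t               # first step past the cap
                break
        widths.append(width)
    return float(np.mean(widths))
```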
In this section, we take a step forward to answer \emph{where} they originate, by showing empirical evidence that Batch Normalization (BN) \cite{batchnorm}, adopted by modern neural networks, seems to be a major cause of asymmetric valleys. \paragraph{Directions on BN parameters are more asymmetric.} For a given SGD solution, if we take a random direction where only the BN parameters have non-zero entries, and compare it with a random direction where only the non-BN parameters have non-zero entries, we observe that the BN-related directions are usually more asymmetric. The result with ResNet-110 on CIFAR-10 is shown in Figure~\ref{fig:conv_bn_comp_res110_c10}. As we can see, the non-BN direction is sharp on both sides, but the BN direction is flat on one side and sharp on the other. We also conducted trials with different networks and datasets, and obtained similar results (see Appendix~\ref{sec:exploration_on_bn_drection}). \begin{figure}[t] \centering \includegraphics[width=.45\textwidth]{explore_new_bn_compare_110_C10_smallscale_0-crop.pdf} \caption{ BN and Non-BN directions through a local minimum of ResNet-110 on CIFAR-10. } \label{fig:conv_bn_comp_res110_c10} \end{figure} \paragraph{SGD averaging is more effective on BN parameters.} By Theorems \ref{thm:dimd} and \ref{thm:asym_avg}, we know that SGD averaging could lead to biased solutions on asymmetric directions with better generalization. If BN indeed creates many asymmetric directions, can we improve the model performance by averaging only the weights of BN layers? Note that BN parameters constitute only a small fraction of the total model parameters, e.g., 1.41\% in a ResNet-110. In the following experiment on ResNet-110 for CIFAR-10, we perform SGD averaging only on BN parameters, denoted as SWA-BN; and also average randomly selected non-BN parameters of the same amount (1.41\% of the total parameters), denoted as SWA-Non-BN. The results are shown in Figure \ref{fig:swa_bn_compare}.
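Restricting the running average to a parameter subset is straightforward; a sketch of ours with flat parameter vectors and a boolean mask (the mask construction and trajectory collection are placeholders for the actual training loop):

```python
import numpy as np

def average_parameter_subset(trajectory, mask):
    """SWA restricted to a subset of coordinates: average the masked
    entries (e.g. BN weights) over the SGD trajectory and keep the final
    iterate everywhere else.

    trajectory: list of flat parameter vectors collected along SGD
    mask: boolean array, True where the parameter belongs to the subset
    """
    result = trajectory[-1].copy()
    result[mask] = np.mean(trajectory, axis=0)[mask]
    return result
```

In this notation, SWA-BN corresponds to a mask selecting the BN weights, and SWA-Non-BN to a random mask of the same size.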
It can be observed that averaging only BN parameters (blue curve) is more effective than averaging non-BN parameters (green curve), although there is still a gap compared to averaging all the weights (yellow curve). \begin{figure}[t] \centering \includegraphics[width=.45\textwidth]{train_average_compare_randconv2_swa_bn_compare-crop.pdf} \caption{ SGD averaging on BN parameters could give better test accuracy than SGD averaging on non-BN parameters. } \label{fig:swa_bn_compare} \end{figure} Moreover, we also conduct experiments with two 8-layer ResNets on CIFAR-10, one with BN layers and one without. We choose shallow networks here as deeper models without BN cannot be effectively trained. \begin{figure}[htbp] \centering \includegraphics[width=.45\textwidth]{appendix_res8_withbn_d4_1_Figure-crop.pdf} \caption{Test accuracy of ResNet-8 with and without BN layers, after running weight averaging (SWA).} \label{fig:BN_compare} \end{figure} As shown in Figure \ref{fig:BN_compare}, we start weight averaging at the $126$-th epoch. Although we observe an improvement in test accuracy after averaging in both networks, it is clear that the network with BN layers has a larger improvement than the network without BN layers. This again indicates that SGD averaging is more effective on BN parameters. The results presented above are still quite preliminary. Understanding how asymmetric valleys are formed in deep networks might be a valuable future research direction. \section{Conclusion} The width of solutions has been widely used to explain their generalization. In this paper, we elaborate on these arguments, and show that width along \emph{Asymmetric Valleys}, where the loss may increase at different rates along two opposite directions, is especially important for explaining generalization. Based on a formal definition of asymmetric valleys, we showed that a biased solution lying on the flat side of the valley generalizes better than the empirical minimizer.
Further, we proved that averaging the points along the SGD trajectory naturally leads to such a biased solution. We have conducted extensive experiments with state-of-the-art deep models to verify our theorems. We hope this paper will strengthen our understanding of the loss landscape of deep neural networks, and inspire new theories and algorithms that further improve generalization.
\section*{Appendix \thesection\protect\indent \parbox[t]{11.715cm} {#1}} \addcontentsline{toc}{section}{Appendix \thesection\ \ \ #1} } \newcommand{\bbox}[1]{\boldsymbol{#1}} \newcommand {\defeq}{\stackrel{\rm def}{=}} \newcommand{\tr}[1]{\:{\rm tr}\,#1} \newcommand{\ntr}[1]{\,\frac {\rm tr}{N}\,#1} \def{\,\rm e}\,{{\,\rm e}\,} \def{\Bbb I}{{\Bbb I}} \def{\rm d}{{\rm d}} \def{\cal D}{{\cal D}} \def{\rm i}{{\rm i}} \def{x}{{x}} \def\hspace*{\fill}\linebreak{\hspace*{\fill}\linebreak} \def\vspace*{\fill}\pagebreak{\vspace*{\fill}\pagebreak} \newcommand{\rf}[1]{(\ref{#1})} \newcommand{\eq}[1]{Eq.~(\ref{#1})} \def\begin{equation}{\begin{equation}} \def\end{equation}{\end{equation}} \def\begin{equation}{\begin{equation}} \def\end{equation}{\end{equation}} \def\begin{eqnarray}{\begin{eqnarray}} \def\end{eqnarray}{\end{eqnarray}} \def\left\langle{\left\langle} \def\right\rangle{\right\rangle} \def{\mathcal{A}}{{\mathcal{A}}} \def{\cal U}{{\cal U}} \def{\cal F}{{\cal F}} \def{\rm cl}{{\rm cl}} \newcommand{\nonumber \\*}{\nonumber \\*} \newcommand{{\it i.e.}\ }{{\it i.e.}\ } \newcommand{{\prime}}{{\prime}} \newcommand{\rightarrow}{\rightarrow} \hyphenation{pre-print} \hyphenation{pre-prints} \hyphenation{di-men-sion-al} \hyphenation{di-men-sion-al-ly} \newcommand{\fr}[2]{{\textstyle {#1 \over #2}}} \newcommand{{\textstyle{1\over 2}}}{{\textstyle{1\over 2}}} \def\Bigl({\Bigl(} \def\Bigl[{\Bigl[} \def\Bigl\{{\Bigl\{} \def\Bigr){\Bigr)} \def\Bigr]{\Bigr]} \def\Bigr\}{\Bigr\}} \def\mathrel{\mathpalette\fun <}{\mathrel{\mathpalette\fun <}} \def\mathrel{\mathpalette\fun >}{\mathrel{\mathpalette\fun >}} \def\fun#1#2{\lower3.6pt\vbox{\baselineskip0pt\lineskip.9pt \ialign{$\mathsurround=0pt#1\hfil##\hfil$\crcr#2\crcr\sim\crcr}}} \hyphenation{pre-print} \hyphenation{pre-prints} \hyphenation{di-men-sion-al} \hyphenation{di-men-sion-al-ly} \def\hybrid{\topmargin 0pt \oddsidemargin 0pt \headheight 0pt \headsep 0pt \textwidth 17.5cm \textheight 25cm \voffset=-0.7cm \hoffset=-0.4cm 
\hoffset=-1.2cm \marginparwidth 0.0in \parskip 5pt plus 1pt \jot = 1.5ex} \catcode`\@=11 \def\marginnote#1{} \newcount\hour \newcount\minute \newtoks\amorpm \hour=\time\divide\hour by60 \minute=\time{\multiply\hour by60 \global\advance\minute by-\hour} \edef\standardtime{{\ifnum\hour<12 \global\amorpm={am}% \else\global\amorpm={pm}\advance\hour by-12 \fi \ifnum\hour=0 \hour=12 \fi \number\hour:\ifnum\minute<10 0\fi\number\minute\the\amorpm}} \edef\militarytime{\number\hour:\ifnum\minute<10 0\fi\number\minute} \def\draftlabel#1{{\@bsphack\if@filesw {\let\thepage\relax \xdef\@gtempa{\write\@auxout{\string \newlabel{#1}{{\@currentlabel}{\thepage}}}}}\@gtempa \if@nobreak \ifvmode\nobreak\fi\fi\fi\@esphack} \gdef\@eqnlabel{#1}} \def\@eqnlabel{} \def\@vacuum{} \def\draftmarginnote#1{\marginpar{\raggedright\scriptsize\tt#1}} \def\draft{\oddsidemargin -0.1truein \def\@oddfoot{\sl preliminary draft \hfil \rm\thepage\hfil\sl\today\quad\militarytime} \let\@evenfoot\@oddfoot \overfullrule 3pt \let\label=\draftlabel \let\marginnote=\draftmarginnote \def\@eqnnum{{\rm (\oldtheequation\alph{equation}})}\rlap{\kern\marginparsep\tt\@eqnlabel}% \global\let\@eqnlabel\@vacuum} } \newdimen\linethick \linethick=0.4pt \newdimen\hboxitspace \hboxitspace=5pt \newdimen\vboxitspace \vboxitspace=5pt \def\fr#1{% \begin{equation}\oldfalse \vcenter{ \hrule height\linethick \hbox{\vrule width\linethick \kern\hboxitspace \vbox{\kern\vboxitspace \hbox{$\begin{array}{c}\displaystyle#1 \end{array}$}% \kern\vboxitspace}% \kern\hboxitspace \vrule width\linethick}% \hrule height\linethick}% \end{equation}} \newdimen\Squaresize \Squaresize=14pt \newdimen\Thickness \Thickness=0.5pt \def\Square#1{\hbox{\vrule width \Thickness \vbox to \Squaresize{\hrule height \Thickness\vss \hbox to \Squaresize{\hss#1\hss} \vss\hrule height\Thickness} \unskip\vrule width \Thickness} \kern-\Thickness} \def\Vsquare#1{\vbox{\Square{$#1$}}\kern-\Thickness} \def\omit\hskip\Squaresize{\omit\hskip\Squaresize} \def\young#1{ 
\vbox{\smallskip\offinterlineskip \halign{&\Vsquare{##}\cr #1}}} \def\@addtoreset{equation}{section{\@addtoreset{equation}{section} \def\oldtheequation\alph{equation}}{\thesection.\arabic{equation}}} \@addtoreset{equation}{section \newcommand{\sect}[1]{\setcounter{equation}{0}\section{#1}} \renewcommand{\oldtheequation\alph{equation}}}{\thesection.\arabic{equation}} \newcommand{\l@qq}[2]{\addvspace{2em} \hbox to\textwidth{\hspace{1em}\bf #1 \dotfill #2}} \def\add#1{\addcontentsline{toc}{app}{\bf #1}} \defAppendix{Appendix} \newcounter{app} \def\Alph{app}{\Alph{app}} \def\setcounter{equation}{0{\setcounter{equation}{0} \def\oldtheequation\alph{equation}}{\Alph{app}.\arabic{equation}}\par \addvspace{4ex} \@afterindentfalse \secdef\@app\@dapp} \newcommand\@app{\@startsection {app}{1}{0ex}% {-3.5ex \@plus -1ex \@minus -.2ex}% {2.3ex \@plus.2ex}% {\normalfont\Large\bf}} \newcommand{\appmark}[1]{\markboth{ \uppercase{\Alph{app}\hspace{1em}#1}}{}} \def\@dapp#1{% {\parindent \z@ \raggedright \bf #1}\par\nobreak} \def\l@app#1#2{\ifnum \c@tocdepth >\z@ \addpenalty\@secpenalty \addvspace{1.0em \@plus{\prime}@}% \setlength\@tempdima{2.5em}% \begingroup \parindent \z@ \rightskip \@pnumwidth \parfillskip -\@pnumwidth \leavevmode \bfseries \advance\leftskip\@tempdima \hskip -\leftskip #1\nobreak\hfil \nobreak\hb@xt@\@pnumwidth{\hss #2}\par \endgroup\fi} \newcounter{sapp}[app] \def\Alph{app}.\arabic{sapp}{\Alph{app}.\arabic{sapp}} \def\sapp{\def\oldtheequation\alph{equation}}{\Alph{app}.\arabic{equation}}\par \@afterindentfalse \secdef\@sapp\@dsapp} \newcommand\@sapp{\@startsection{sapp}{2}{\z@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {1.5ex \@plus .2ex}% {\normalfont\large\bfseries}} \newcommand{\sappmark}[1]{\markboth{ \uppercase{\Alph{app}.\arabic{sapp}\hspace{1em}#1}}{}} \def\@dsapp#1{% {\parindent \z@ \raggedright \bf #1}\par\nobreak} \newcommand{\l@sapp}{\@dottedtocline{2}{1.5em}{3em}} \newcounter{ssapp}[sapp] 
\def\Alph{app}.\arabic{sapp}.\arabic{ssapp}{\Alph{app}.\arabic{sapp}.\arabic{ssapp}} \def\ssapp{\def\oldtheequation\alph{equation}}{\Alph{app}.\arabic{equation}}\par \@afterindentfalse \secdef\@ssapp\@dssapp} \newcommand\@ssapp{\@startsection{ssapp}{2}{\z@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {1.5ex \@plus .2ex}% {\normalfont\normalsize\bfseries}} \newcommand{\ssappmark}[1]{\markboth{ \uppercase{\Alph{app}.\arabic{sapp}.\arabic{ssapp}\hspace{1em}#1}}{}} \def\@dssapp#1{% {\parindent \z@ \raggedright \bf #1}\par\nobreak} \newcommand{\l@ssapp}{\@dottedtocline{2}{1.5em}{3em}} \def\titlepage{\@restonecolfalse\if@twocolumn\@restonecoltrue\onecolumn \else \newpage \fi \thispagestyle{empty}\c@page\z@ \def\arabic{footnote}{\fnsymbol{footnote}} } \def\endtitlepage{\if@restonecol\twocolumn \else \fi \def\arabic{footnote}{\arabic{footnote}} \setcounter{footnote}{0}} \relax \hybrid \newtoks\@stequation \def\subequations{\refstepcounter{equation}% \edef\@savedequation{\the\c@equation}% \@stequation=\expandafter{\oldtheequation\alph{equation}} \edef\@savedtheequation{\the\@stequation \edef\oldtheequation{\oldtheequation\alph{equation}}}% \setcounter{equation}{0}% \def\oldtheequation\alph{equation}}{\oldtheequation\alph{equation}}} \def\endsubequations{% \setcounter{equation}{\@savedequation}% \@stequation=\expandafter{\@savedtheequation}% \edef\oldtheequation\alph{equation}}{\the\@stequation}% \global\@ignoretrue} \parskip=0.4em \makeatletter \newdimen\normalarrayskip \newdimen\minarrayskip \normalarrayskip\baselineskip \minarrayskip\jot \newif\ifold \oldtrue \def\oldfalse{\oldfalse} \def\arraymode{\ifold\relax\else\displaystyle\fi} \def\eqnumphantom{\phantom{(\oldtheequation\alph{equation}})}} \def\@arrayskip{\ifold\baselineskip\z@\lineskip\z@ \else \baselineskip\minarrayskip\lineskip1\baselineskip\fi} \def\@arrayclassz{\ifcase \@lastchclass \@acolampacol \or \@ampacol \or \or \or \@addamp \or \@acolampacol \or \@firstampfalse \@acol \fi \edef\@preamble{\@preamble \ifcase 
\@chnum \hfil$\relax\arraymode\@sharp$\hfil \or $\relax\arraymode\@sharp$\hfil \or \hfil$\relax\arraymode\@sharp$\fi}} \makeatother \newtheorem{th}{Theorem}[section] \newtheorem{de}{Definition}[section] \newtheorem{prop}{Proposition}[section] \newtheorem{cor}{Corollary}[section] \newtheorem{lem}{Lemma}[section] \newtheorem{ex}{Example}[section] \newtheorem{note}{Note}[section] \newtheorem{rem}{Remark}[section] \newtheorem{ath}{Theorem}[app] \newtheorem{aprop}{Proposition}[app] \newtheorem{ade}{Definition}[app] \newtheorem{acor}{Corollary}[app] \newtheorem{alem}{Lemma}[app] \newtheorem{arem}{Remark}[app] \def\bse{\begin{subequations}} \def\ese{\end{subequations}} % \mathsurround=2pt \begin{document} \begin{titlepage} \begin{flushright} ITEP--TH--02/07\\ hep-th/0703123\\ February, 2007 \end{flushright} \vspace{1.3cm} \begin{center} {\LARGE Wilson Loops in 2D Noncommutative \\%[.4cm] Euclidean Gauge Theory: \\[.4cm] 2. $1/\theta$ Expansion}\\ \vspace{1.4cm} {\large Jan Ambj{\o}rn$^{a),\;c)}$, Andrei Dubin$^{b)}$ and Yuri Makeenko$^{a),\; b)}$} \\[.8cm] {$^{a)}$\it The Niels Bohr Institute,} \\ {\it Blegdamsvej 17, 2100 Copenhagen {\O}, Denmark}\\[.4cm] {$^{b)}$\it Institute of Theoretical and Experimental Physics,} \\ {\it B. Cheremushkinskaya 25, 117259 Moscow, Russia}\\[.4cm] {$^{c)}$\it Institute for Theoretical Physics, Utrecht University,} \\ {\it Leuvenlaan 4, NL-3584 CE Utrecht, The Netherlands. } \end{center} \vskip 1.5 cm \begin{abstract} We analyze the $1/\theta$ and $1/N$ expansions of the Wilson loop averages $<W(C)>_{U_{\theta}(N)}$ in the two-dimensional noncommutative $U_{\theta}(N)$ gauge theory with the parameter of noncommutativity $\theta$. For a generic rectangular contour $C$, a concise integral representation is derived (non-perturbatively both in the coupling constant $g^{2}$ and in $\theta$) for the next-to-leading term of the $1/N$ expansion.
In turn, in the limit when ${\theta}$ is much larger than the area $A(C)$ of the surface bounded by $C$, the large $\theta$ asymptote of this representation is argued to yield the next-to-leading term of the $1/\theta$ series. For both of the expansions, the next-to-leading contribution exhibits only a {\it power-like}\/ decay for areas $A(C)>>\sigma^{-1}$ (but $A(C)<<{\theta}$) much larger than the inverse of the string tension $\sigma$ defining the range of the exponential decay of the leading term. Consequently, for large $\theta$, it hinders a direct stringy interpretation of the subleading terms of the $1/N$ expansion in the spirit of the Gross-Taylor proposal for the $\theta=0$ commutative $D=2$ gauge theory. \end{abstract} \end{titlepage} \newpage \section{Introduction} In short, given a commutative field theory defined in the Euclidean space ${\bf R}^{D}$ by the action $S=\int d^{D}x~{\cal L}(\phi(x))$, the corresponding noncommutative theory is implemented by replacing the products of the fields $\phi({\bf x})$ by the so-called star-products introduced according to the rule% \footnote{For a review see \cite{DN02,Sza03} and references therein.} \begin{equation} \left( f_{1}\star{f_{2}}\right)({\bf x})\equiv exp\left(-\frac{i}{2}\theta_{\mu\nu}\partial^{y}_{\mu} \partial^{z}_{\nu}\right)~ f_{1}({\bf y})~f_{2}({\bf z})\Bigg|_{y=z=x}~, \label{1.8} \end{equation} where the parameter of noncommutativity $\theta_{\mu\nu}$, entering the commutation relation $[x_{\mu},x_{\nu}]=-i\theta_{\mu\nu}$ satisfied by $D$ noncommuting coordinates, is real and antisymmetric.
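As a quick illustration of the definition (\ref{1.8}) (a standard check, added here for the reader; it is not part of the original derivation), consider the star product of two plane waves:

```latex
% With f_j({\bf x}) = e^{i{\bf p}_j\cdot{\bf x}}, the derivatives in the
% exponent of (1.8) act as \partial^{y}_{\mu}\to ip^{1}_{\mu} and
% \partial^{z}_{\nu}\to ip^{2}_{\nu}, so that
\begin{equation}
e^{i{\bf p}_{1}\cdot{\bf x}}\star e^{i{\bf p}_{2}\cdot{\bf x}}=
e^{\frac{i}{2}\theta_{\mu\nu}p^{1}_{\mu}p^{2}_{\nu}}~
e^{i({\bf p}_{1}+{\bf p}_{2})\cdot{\bf x}}~.
\end{equation}
% The extra phase is antisymmetric under p_1 <-> p_2 and grows with
% \theta, which makes the nonlocality of the star product explicit and,
% applied to f_j = x_\mu, reproduces [x_\mu,x_\nu] = -i\theta_{\mu\nu}.
```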
In particular, the action of the standard $D-$dimensional $U(N)$ Yang-Mills theory is superseded by \begin{equation} S=\frac{1}{4g^{2}}\int d^{D}x~ tr\left({\mathcal{F}}^{2}_{\mu\nu}({\bf x})\right)~~~~;~~~~~ {\mathcal{F}}_{\mu\nu}=\partial_{\mu}{\mathcal{A}}_{\nu}- \partial_{\nu}{\mathcal{A}}_{\mu} -i({\mathcal{A}}_{\mu}\star{\mathcal{A}}_{\nu}-{\mathcal{A}}_{\nu}\star {\mathcal{A}}_{\mu})~, \label{1.7} \end{equation} where ${\mathcal{A}}_{\mu}\equiv{{\mathcal{A}}^{a}_{\mu}t^{a}}$ with $tr(t^{a}t^{b})=\delta^{ab}$, and $\theta_{21}=-\theta_{12}=\theta$ in the $D=2$ case in question. The noncommutative two-dimensional $U_{\theta}(N)$ system (\ref{1.7}) provides the simplest example of the noncommutative gauge theory. As well as in the $\theta=0$ case, investigation of non-perturbative effects in a low-dimensional model is expected to prepare us for the analysis of a more complicated four-dimensional quantum dynamics. An incomplete list of papers, devoted to this direction of research, is presented in references \cite{BNT01}-\cite{RS07}. The aim of the present work is to extend the perturbative analysis of our previous publication \cite{ADM04} and examine, non-perturbatively in the coupling constant $g^{2}$, the two alternative expansions of the Wilson loop average $<W(C)>_{U_{\theta}(N)}$ in the $D=2$ $U_{\theta}(N)$ theory on a plane. The first one is the $1/\theta$ series \begin{equation} <W(C)>_{U_{\theta}(N)}=\sum_{k=0}^{\infty}~\theta^{-k} <{\cal W}(C)>^{(k)}_{N} \label{CO.02} \end{equation} that is to be compared with the more familiar 't Hooft $1/N$ topological expansion \begin{equation} <W(C)>_{U_{\theta}(N)}=\sum_{G=0}^{\infty}~ N^{-2G}<W(C)>^{(G)}_{U_{\theta}(1)}~, \label{CO.01} \end{equation} where $G$ can be identified with the genus of the auxiliary surface canonically associated to any given diagram of the weak-coupling series of the $N-$independent quantity $<W(C)>^{(G)}_{U_{\theta}(N)}$. Also, the contour $C$ is always restricted to be closed.
The $\theta\rightarrow{\infty}$ limit of the $U_{\theta}(N)$ theory is known \cite{Filk} to retain the same set of the planar diagrams (described by the same amplitudes) as the $N\rightarrow{\infty}$ limit does, so that the leading terms of both of the above expansions coincide, \begin{equation} <W(C)>^{(0)}_{U_{\theta}(N)}=<{\cal W}(C)>^{(0)}_{N}~, \label{1.41a} \end{equation} provided the appropriate identification of the coupling constants. As the $G=0$ term of Eq. (\ref{CO.01}) is $\theta-${\it independent}, it reduces to the corresponding average in the commutative variant of the gauge theory. In consequence, the leading term of the series (\ref{CO.02}) reduces, \begin{equation} <{\cal W}(C)>^{(0)}_{N}=<W(C)>^{(0)}_{U(N)} ~~~~~,~~~~~<{\cal W}(\Box)>^{(0)}_{N}=exp[-\sigma A(\Box)]~, \label{1.41b} \end{equation} to the $G=0$ term of the $\theta=0$ expansion (\ref{CO.01}) of the average $<W(C)>_{U(N)}$ in the ordinary commutative $U(N)$ gauge theory. In particular, it fits in the simple Nambu-Goto pattern for an arbitrary non-self-intersecting contour $C$. In this paper, for an arbitrary rectangular contour $C=\Box$, we evaluate the next-to-leading term $<W(\Box)>^{(1)}_{U_{\theta}(1)}$ of the topological expansion (\ref{CO.01}) and argue that its large $\theta$ asymptote exactly reproduces, \begin{equation} <{\cal W}(C)>_{N}^{(2)}= \frac{1}{N^{2}}~\lim_{\theta\rightarrow{\infty}}~ \theta^{2}<W(C)>_{U_{\theta}(1)}^{(1)}~, \label{CO.03} \end{equation} the $k=2$ term $<{\cal W}(\Box)>^{(2)}_{N}$ of the $1/\theta$ series (\ref{CO.02}) (while $<{\cal W}(C)>^{(1)}_{N}=0$). The proof of the relation (\ref{CO.03}) will be presented in a separate publication \cite{ADM05b}.
As for the computation of $<{\cal W}(C)>_{N}^{(2)}$, we perform a resummation of the genus-one diagrams for a generic $C=\Box$, which is facilitated by the choice of the axial gauge where, at the level of the $D=2$ action (\ref{1.7}), only tree-graphs (without self-interaction vertices) are left. Nevertheless, the problem remains nontrivial: due to the noncommutative implementation \cite{Ish}-\cite{DK01} of the Wilson loop, an infinite number of different connected $G=1$ diagrams contributes to the average $<W(C)>_{U_{\theta}(N)}$ even in the case of a non-self-intersecting contour $C$, which is in contradistinction with the commutative case, where $<W(\Box)>_{U(N)}=<W(\Box)>^{(0)}_{U(N)}$. To deal with this problem, we propose a specific method of resummation. Application of the method allows one to unambiguously split the whole set of the relevant perturbative $G=1$ diagrams into three subsets. Being parameterized by the two integer numbers $r$ and $v$ with $0\leq r\leq v\leq 1$, each subset can be obtained starting with the corresponding protograph (with $2+r-v$ lines) and then dressing it through the addition of extra lines in compliance with a certain algorithm. For a rectangle $C=\Box$, it yields an integral representation of the $G=1$ term of the $1/N$ expansion in the form \begin{equation} <W(\Box)>_{U_{\theta}(1)}^{(1)}~=~ \frac{1}{(2\pi\sigma\theta)^{2}}\sum_{0\leq r\leq v\leq 1} h_{rv}{\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})~, \label{SU.01z} \end{equation} where ${\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})$ denotes the effective amplitude which, after multiplication by the factor $h_{rv}=2+r-v$ separated for later convenience, accumulates the entire $rv-$subset of the perturbative amplitudes. Besides a dependence on $\bar{\theta}=\sigma\theta$, ${\cal Z}_{rv}(\cdot)$ depends only on the dimensionless area $\bar{A}=\sigma RT$ of $C=\Box$ rather than separately on the lengths $T$ and $R$ of the temporal and spatial sides of $\Box$.
Correspondingly, in the large $\theta$ limit, \begin{equation} \theta>>A(C)~, \label{LI.01} \end{equation} the $N=1$ relation (\ref{CO.03}) can be rewritten as \begin{equation} <{\cal W}(\Box)>_{1}^{(2)}=\frac{1}{(2\pi\sigma)^{2}} \sum_{0\leq r\leq v\leq 1}h_{rv}{\cal Z}_{rv}(\bar{A},0)~, \label{CO.03f} \end{equation} where ${\cal Z}_{rv}(\bar{A},0)$ is obtained from ${\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})$ (which is {\it continuous}\/ in $\bar{\theta}^{-1}$ in a vicinity of $\bar{\theta}^{-1}=0$) simply replacing\footnote{The peculiarity of this replacement is that it can {\it not}\/ be applied directly to the perturbative amplitudes describing individual Feynman diagrams. It matches the observation \cite{ADM04} that the large $\theta$ asymptote of the leading perturbative contribution to $<W(C)>_{U_{\theta}(1)}^{(1)}$ scales as $\theta^{0}$ rather than as $\theta^{-2}$. In turn, it implies a nontriviality of the relation (\ref{CO.03}).} $\bar{\theta}^{-1}$ by zero. Then, performing the Laplace transformation with respect to $\bar{A}$, the image $\tilde{\cal Z}_{rv}(\beta,0)$ of the large $\theta$ asymptote ${\cal Z}_{rv}(\bar{A},0)$ assumes the concise form \begin{equation} \tilde{\cal Z}_{rv}(\beta,0)=\frac{1}{(\beta+1)^{2}} \int\limits_{-\infty}^{+\infty} d\bar\zeta d\bar\eta~ \frac{{\cal K}_{rv}(\bar\zeta,\bar\eta)} {(\beta+|1-\bar\zeta|)^{h_{rv}-1}~(\beta+|1+\bar\eta|)~ (\beta+|1+\bar\eta-\bar\zeta|)}~, \label{LA.03} \end{equation} where \begin{equation} {\cal K}_{rv}(\bar\zeta,\bar\eta)= \sum_{{e}_{3}=-r}^{0}\sum_{{e}_{1}=-1}^{v-r} \sum_{{e}_{2}=v}^{1}(-1)^{v+\sum_{k=1}^{3}{e}_{k}}~ 2^{(v-r)(1-|{e}_{1}|)}|{e}_{1}+\bar\zeta|~ |{e}_{2}+\bar\eta|^{1-v}~|{e}_{3}+\bar\zeta|^{r}~. \label{MUL.01a} \end{equation} The integral representation (\ref{LA.03}) is the main result of the paper. 
Building on the latter representation, one concludes that the pattern of the $\theta\neq{0}$ expansion (\ref{CO.01}) shows, especially in the limit (\ref{LI.01}), a number of features which are in sharp contrast with the $1/N$ expansion of the average in the $\theta=0$ case. Indeed, in the latter case, the Nambu-Goto pattern (\ref{1.41b}) provides the exact result $<W(\Box)>_{U(N)}=<W(\Box)>^{(0)}_{U(N)}$ for an arbitrary non-self-intersecting loop $C$, and the corresponding subleading terms are vanishing: $<W(C)>_{U(N)}^{(G)}=0$ for $G\geq{1}$. Furthermore, for self-intersecting contours $C$, nonvanishing subleading $G\geq{1}$ terms $<W(C)>_{U(N)}^{(G)}$ all possess \cite{Kaz&Kost} the area-law asymptote like in Eq. (\ref{1.41b}) in the limit $\bar{A}\rightarrow{\infty}$. When $\theta\neq{0}$, even for a rectangular loop $C$, the pattern of $<W(C)>_{U_{\theta}(N)}$ is characterized by an infinite $1/N-$series, each $G\geq{1}$ term of which nontrivially depends both on $\bar\theta$ and on $\bar{A}(C)$. In addition, we present simple arguments that, in contradistinction with Eq. (\ref{1.41b}), the asymptote (\ref{CO.03f}) of the next-to-leading term exhibits a power-like (rather than exponential) decay for areas $\sigma^{-1}<<A(\Box)<<{\theta}$ much larger than the inverse of the string tension $\sigma$. This asymptote is evaluated in \cite{ADM05b} with the result \begin{equation} \frac{1}{N^{2}}<W(\Box)>_{U_{\theta}(1)}^{(1)}~\longrightarrow~ \frac{4}{\pi^2\left(\sigma\theta N\right)^{2}}~ \frac{ln(\sigma{A})}{\sigma{A}}~~~~~,~~~~~ \sigma\theta,~\sigma{A}~\longrightarrow{~\infty}~, \label{FA.13b} \end{equation} that can be traced back to the (infinite, in the limit $\theta\rightarrow{\infty}$) {\it nonlocality}\/ of the star-product (\ref{1.8}) emphasized in the discussion \cite{Minw} of the $UV/IR$ mixing.
Due to the generality of the reasoning, all the subleading $G\geq{1}$ coefficients $<W(C)>_{U_{\theta}(1)}^{(G)}$ are as well expected to show, irrespective of the form of $C$, a power-like decay for $\sigma^{-1}<<A(C)<<{\theta}$. In particular, it precludes a straightforward stringy interpretation of the subleading terms of the expansion (\ref{CO.01}) in the spirit of the Gross-Taylor proposal \cite{Gr&Tayl} for the $\theta=0$ commutative $D=2$ gauge theory. In Section \ref{gen}, we put forward a concise form (\ref{1.31b}) of the perturbative $2n-$point functions which the loop average $<W(C)>_{U_{\theta}(1)}$ is composed of in the $D=2$ $U(1)$ theory (\ref{1.7}). In Section \ref{deform1}, it is sketched how these functions are modified under the two auxiliary (genus-preserving) deformations of a given diagram to be used for the derivation of the decomposition (\ref{SU.01z}). To put the deformations into action, in Section \ref{parameter}, we introduce a finite number of the judiciously selected elementary genus-one graphs and propose their $\gamma jrv-$parameterization. Then, any remaining nonelementary $G=1$ perturbative diagram can be obtained through the appropriate multiple application of the latter deformations to one of the thus selected elementary graphs. When a particular elementary diagram with a given $\gamma jrv-$assignment is dressed by all its admissible deformations, the corresponding perturbative $2n-$point function is replaced by the effective one, as it is shown in Section \ref{repres1}. The replacement is implemented in such a way that certain $n-v$ propagators of the diagram are superseded by their effective counterparts (\ref{KEY.01}). The integral representation of the effective $2n-$point functions is completed in Section \ref{effective}.
In Section \ref{effective1}, we express the $G=1$ term $<W(\Box)>_{U_{\theta}(1)}^{(1)}$ of the expansion (\ref{CO.01}) as a superposition of the effective amplitudes (\ref{FR.01}) that are obtained when the arguments of the above $2n-$point functions are integrated over the rectangle $C=\Box$. The effective amplitudes can be collected into the three $rv-$superpositions ${\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})$ associated to the corresponding protographs parameterizing the decomposition (\ref{SU.01z}). The explicit expression (\ref{MUL.04}) for ${\cal Z}_{rv}(\cdot)$ is then derived. It is observed that, for a fixed $rv-$specification, this expression can be deduced directly through the appropriate dressing of the $rv-$protograph. The derivation of the large $\theta$ representation (\ref{LA.03}) is sketched in Section \ref{prescr}. Conclusions, a brief discussion of the perspectives, and implications for $D=3,4$ gauge theory (\ref{1.7}) are sketched in Section \ref{conclus}. Finally, the Appendices contain technical details used in the main text. \section{Generalities of the perturbative expansion} \label{gen} Building on the integral representation of the $U_{\theta}(1)$ average, we begin with a sketch of the derivation of the relevant perturbative $2n-$point functions. \subsection{Average of the noncommutative Wilson loop} To this aim, consider the perturbative expansion of the average of the noncommutative Wilson loop~\cite{Ish} \begin{equation} W(C)={\mathcal{P}}e_{\star}^{ i\oint_{C} dx_{\mu}(s) {\mathcal{A}}_{\mu}({\bf x}(s))}~. \label{1.3a} \end{equation} in the $U_{\theta}(N)$ noncommutative gauge theory on the 2D plane ${\Bbb R^{2}}$. 
For this purpose, it is sufficient to use the path-integral representation~\cite{ADM04} of the $U_{\theta}(1)$ average \begin{equation} <W(C)>_{U_{\theta}(1)}=\Bigg<exp\left(-\frac{1}{2} \oint\limits_{C} dx_{\mu}(s)\oint\limits_{C} dx_{\nu}(s') D_{\mu\nu}({\bf x}(s)-{\bf x}(s')+{\bf \xi}(s)- {\bf \xi}(s'))\right) \Bigg>_{\xi(\tilde{s})}~, \label{1.1} \end{equation} as it follows from the $N-$independence of the quantities $<W(C)>^{(G)}_{U_{\theta}(N)}$ which are, therefore, replaced by $<W(C)>^{(G)}_{U_{\theta}(1)}$ in Eq. (\ref{CO.01}). In Eq. (\ref{1.1}), $D_{\mu\nu}({\bf z})$ is the standard $D=2$ photon's propagator in the axial gauge ${\mathcal{A}}_{1}=0$, \begin{equation} D_{\mu\nu}({\bf z})=<{\mathcal{A}}_{\mu}({\bf z}) {\mathcal{A}}_{\nu}({\bf 0})>_{U(1)}= -\frac{g^{2}}{2}~\delta_{\mu 2}\delta_{\nu 2}~|z_{1}|~\delta(z_{2})~, \label{1.2} \end{equation} and the functional averaging over the auxiliary $\xi_{\mu}(s)$ field (parameterized by the proper time $s\in[0,1]$ chosen to run clockwise starting with the left lower corner of $C=\Box$) is to be performed according to the prescription \begin{equation} \Bigg<{\mathcal{B}}[{\bf \xi}(s)]\Bigg>_{\xi(\tilde{s})}= \int {\mathcal{D}}\xi_{\mu}(s)~e^{\frac{i}{2}(\theta^{-1})_{\mu\nu} \int ds ds' \xi^{\mu}(s)G^{-1}(s,s') \xi^{\nu}(s')}~{\mathcal{B}}[{\bf \xi}(s)]~. \label{1.4} \end{equation} Here, ${\mathcal{D}}\xi_{\mu}(s)$ denotes the standard flat measure so that $<\xi^\mu (s) \xi^\nu (s')>= i\theta^{\mu\nu} {\rm sign}\,(s-s')/2$, where, prior to the regularization, we are to identify $G^{-1}(s,s')=\dot \delta(s-s')$. Let us also note that Eq. 
(\ref{1.4}) is based on the integral representation \begin{equation} exp\left(-\frac{i}{2}~{\theta}_{\mu\nu} {\partial^{{\bf x}}_{\mu}} {\partial^{{\bf y}}_{\nu}}\right)f_{1}({\bf x})~f_{2}({\bf y})= \int e^{2i ({\theta}^{-1})_{\mu\nu} \xi_1^\mu\xi_2 ^\nu } f_1 ({\bf x}+{\bf \xi}_1) f_2 ({\bf y}+{\bf \xi}_2)~ \prod_{j=1}^{2}\frac{d^{2}\xi^{\mu}_{j}}{w(\theta)} \label{IR.01} \end{equation} of the star-product (\ref{1.8}), where $w(\theta)= (\pi^2|\det {\theta}|)^{1/2}$. In consequence, the noncommutative Wilson loop~\rf{1.3a} itself can be represented as~\cite{Oku99}, \begin{equation} W(C)=\Bigg<exp\left(i\oint\limits_{C} dx_{\mu}(s) {\mathcal{A}}_{\mu}({\bf x}(s)+{\bf \xi}(s))\right) \Bigg> _{\xi(\tilde{s})}~. \label{1.3} \end{equation} Finally, the coupling $g^{2}$ of the $U_{\theta}(N)$ noncommutative gauge theory is related to the string tension $\sigma$, entering \eq{1.41b}, by the formula \begin{equation} \sigma=g_{U_{\theta}(N)}^{2}N/2~. \label{1.41d} \end{equation} \subsection{Perturbative $\theta$-dependent $2n-$point functions} Consider the weak-coupling expansion of the average (\ref{1.1}) which, being applied to the $1/N$ series (\ref{CO.01}), can be rewritten in the form \begin{equation} <W(C)>_{U_{\theta}(N)}^{(G)}=\sum_{n=0}^{\infty}\lambda^{2n}~ <W(C)>_{U_{\theta}(N)}^{(G,n)} \label{NE.02} \end{equation} with $\lambda=g^{2}N$. For a particular $n\geq 2G$, $<W(C)>_{U_{\theta}(N)}^{(G,n)}$ is given by the multiple contour integral of the $\xi$-average applied to the corresponding product of $n$ $\xi$-dependent propagators $D_{\mu\nu}({\bf y}_{l}+\xi(s_{l})-\xi(s'_{l}))$, where \begin{equation} {\bf y}_{l}={\bf x}(s_{l})-{\bf x}(s'_{l})~, \label{1.38aa} \end{equation} with $l=1,2,...,n$.
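As a minimal consistency check (a sketch that uses only the structures already quoted), both prescriptions above can be probed on plane waves. First, since the average (\ref{1.4}) is Gaussian and the equal-time contractions drop out by the antisymmetry of $\theta^{\mu\nu}$, Wick's theorem gives $$ \Big<e^{i{\bf p}\cdot{\bf \xi}(s)}~e^{i{\bf q}\cdot{\bf \xi}(s')}\Big>_{\xi(\tilde{s})}= e^{-p_{\mu}q_{\nu}<\xi^{\mu}(s)\xi^{\nu}(s')>}= e^{-\frac{i}{2}~\theta^{\mu\nu}p_{\mu}q_{\nu}~{\rm sign}\,(s-s')}~, $$ which, summed over the four end-point contractions of any pair of lines, reproduces the combination (\ref{1.34}) entering the $\theta-$dependent factor (\ref{1.33}) below. Second, evaluating Eq. (\ref{IR.01}) on $f_{1}({\bf x})=e^{i{\bf p}\cdot{\bf x}}$ and $f_{2}({\bf y})=e^{i{\bf q}\cdot{\bf y}}$, the ${\bf \xi}_{2}-$integration produces $(2\pi)^{2}\delta^{(2)}\big(2(\theta^{-1})_{\mu\nu}\xi_{1}^{\mu}+q_{\nu}\big)$ which fixes $\xi_{1}^{\mu}=\theta_{\mu\nu}q_{\nu}/2$, so that both sides of Eq. (\ref{IR.01}) reduce to $e^{\frac{i}{2}\theta_{\mu\nu}p_{\mu}q_{\nu}}~e^{i{\bf p}\cdot{\bf x}+i{\bf q}\cdot{\bf y}}$, with unit overall coefficient precisely for the quoted normalization $w(\theta)=(\pi^2|\det {\theta}|)^{1/2}$.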
Then, any diagram can be topologically visualized as the collection of the {\it oriented}\/ (according to the proper-time parameterization) lines so that the $q$th propagator-line starts at a given point ${\bf x}(s'_{q})\in C$ and terminates at the corresponding ${\bf x}(s_{q})\in C$. When the $\xi$-averaging of the product is performed, the perturbative $2n-$point function can be rewritten~\cite{ADM04} in the form \begin{equation} V^{(n)}_{U_{\theta}(1)}({\bf y}_{1},...,{\bf y}_{n})=o_{n}~ \prod_{1\leq l<j}^{n}exp\left(\frac{i}{2}~{\cal C}_{lj}\breve{ \theta}_{\mu\nu} {\partial^{{\bf z}_{l}}_{\mu}} {\partial^{{\bf z}_{j}}_{\nu}}\right){D}_{22}({\bf z}_{1})~ {D}_{22}({\bf z}_{2})~...~{D}_{22}({\bf z}_{n}) \Big|_{\{{\bf z}_{k}={\bf y}_{k}\}}~, \label{1.31b} \end{equation} where\footnote{In the computation of any $2n$th order perturbative diagram, the factor $o_{n}$ disappears. The subfactor $2^{-n}$ is exactly cancelled by the symmetry factor responsible for the interchange of two different end-points of each of the $n$ lines. By the same token, the subfactor $1/n!$ is precisely cancelled by the symmetry factor corresponding to all possible permutations of the $n$ different (non-oriented) lines. Finally, $(-1)^{-n}$ is to be combined with the implicit factor $(-1)^{-n}$ that arises when one pulls the minus sign out of each propagator (\ref{1.2}) entering $V^{(n)}_{U_{\theta}(1)}(\cdot)$.} $o_{n}=(-1/2)^{n}/n!$, and the {\it intersection}\/ matrix ${\mathcal{C}}_{lj}=-{\mathcal{C}}_{jl}$, defined algebraically as \begin{equation} {\mathcal{C}}_{lj}=\frac{1}{2}\left({\rm sign}(s_{l}-s_{j})+ {\rm sign}(s'_{l}-s'_{j})-{\rm sign}(s_{l}-s'_{j})- {\rm sign}(s'_{l}-s_{j})\right)~, \label{1.34} \end{equation} counts the number of times the $l$th oriented line crosses over the $j$th oriented line (and, without loss of generality, we presume that $s_{l}\geq s'_{l}$ for $\forall{l}$).
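The algebraic definition (\ref{1.34}) is straightforward to put on a computer. The following is a minimal illustrative script (in our own notation; the function names and the sample end-points are not from the main text) that builds ${\cal C}_{lj}$ from the proper-time end-points $(s'_{l},s_{l})$ of the oriented lines and confirms that two chords cross once precisely when their end-points interleave along the contour:

```python
# Illustrative script (our notation): build the intersection matrix of
# Eq. (1.34) from the proper-time end-points (s'_l, s_l) of the oriented
# propagator-lines; the sample end-point values below are not from the text.

def sign(x):
    return (x > 0) - (x < 0)

def intersection_matrix(lines):
    """lines: list of pairs (s_prime, s) with s >= s_prime, as presumed in the text."""
    n = len(lines)
    C = [[0] * n for _ in range(n)]
    for l in range(n):
        for j in range(n):
            if l != j:
                sp_l, s_l = lines[l]
                sp_j, s_j = lines[j]
                C[l][j] = (sign(s_l - s_j) + sign(sp_l - sp_j)
                           - sign(s_l - sp_j) - sign(sp_l - s_j)) // 2
    return C

# Chords whose end-points interleave along the closed contour cross once;
# nested or disjoint chords do not cross.
crossing = intersection_matrix([(0.1, 0.5), (0.3, 0.7)])
nested = intersection_matrix([(0.1, 0.9), (0.3, 0.5)])
disjoint = intersection_matrix([(0.1, 0.2), (0.3, 0.4)])
```

The resulting matrix is antisymmetric by construction, in accord with ${\cal C}_{lj}=-{\cal C}_{jl}$.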
As for the relevant noncommutative parameter $\breve{ \theta}_{\mu\nu}$, it is {\it twice}\/ as large, \begin{equation} \breve{ \theta}_{\mu\nu}=2{\theta}_{\mu\nu}~, \label{1.26} \end{equation} as the parameter ${\theta}_{\mu\nu}$ defining the original star-product (\ref{1.8}). Rewritten in momentum space, Eq. (\ref{1.31b}) implies that, compared to the commutative case, a given $\theta\neq 0$ perturbative $2n-$point function is assigned with the extra $\theta-$dependent factor \begin{equation} \Bigg<\prod_{k=1}^{n} e^{i{\bf p}_{k}\cdot({\bf \xi}(s_{k})-{\bf \xi}(s'_{k}))} \Bigg>_{\xi(\tilde{s})}= exp\left(-i\sum_{l<j}{\mathcal{C}}_{lj}~\theta_{\mu\nu}~ p_{l}^{\mu}~p_{j}^{\nu}\right)~, \label{1.33} \end{equation} where the momentum ${\bf p}_{l}$ is canonically conjugated to the $l$th coordinate (\ref{1.38aa}). In turn, the right-hand side of Eq. (\ref{1.33}) reproduces the existing formula \cite{Filk,Minw} obtained in the analysis of the partition function in noncommutative field theories. Finally, the pattern of Eqs. (\ref{1.33}) and (\ref{1.34}) suggests the natural definition of (dis)connected diagrams. Algebraically, a particular $n$th order graph is to be viewed as disconnected in the case when the associated $n\times n$ matrix ${\mathcal{C}}_{lj}$ assumes a {\it block-diagonal}\/ form ${\mathcal{C}}_{lj}= \otimes_{k}{\mathcal{C}}^{(k)}_{l_{k}j_{k}}$, with $\sum_{k}n_{k}=n$, so that the nonvanishing entries of ${\mathcal{C}}_{lj}$ are reproduced exclusively by smaller $n_{k}\times n_{k}$ matrices ${\mathcal{C}}^{(k)}_{l_{k}j_{k}}$, where $n_{k}<n$ for $\forall{k}$. Conversely, when a nontrivial implementation of this decomposition of a particular ${\mathcal{C}}_{lj}$ is impossible, the corresponding diagram is called connected.
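The block-diagonality criterion just stated amounts to a graph-connectivity test, which can be sketched as follows (an illustrative script in our own notation, not part of the original derivation): treating the $n$ lines as vertices with an edge wherever ${\cal C}_{lj}\neq 0$, a diagram is disconnected exactly when this graph has more than one connected component.

```python
# Illustrative sketch (our notation) of the algebraic (dis)connectedness
# criterion: the matrix C_{lj} splits into smaller blocks iff the graph with
# an edge for every nonzero entry has more than one connected component.

def components(C):
    n = len(C)
    seen, comps = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            l = stack.pop()
            if l in comp:
                continue
            comp.add(l)
            stack.extend(j for j in range(n) if C[l][j] != 0 and j not in comp)
        seen |= comp
        comps.append(sorted(comp))
    return comps

def is_connected(C):
    return len(components(C)) == 1

# A single crossing pair is connected; two mutually non-interacting crossing
# pairs give a block-diagonal 4x4 matrix, i.e. a disconnected diagram.
pair = [[0, 1], [-1, 0]]
two_pairs = [[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]]
```

For instance, the block-diagonal matrix `two_pairs` above is recognized as describing a disconnected fourth-order diagram.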
As the rank $r[{\cal C}]=2G(\{{\bf y}_{l}\})$ of the matrix ${\cal C}_{lj}$ is known to be equal to the doubled genus $G(\{{\bf y}_{l}\})$ of the diagram, one expects that the order $n$ of a connected genus $G$ graph complies with the inequality $n(\{{\bf y}_{l}\})\geq{2G(\{{\bf y}_{l}\})}$. \section{The two deformations and the irreducible diagrams} \label{deform1} The aim of this Section is to present the central elements of the exact resummation% \footnote{The details of this resummation procedure will be published elsewhere.} of the weak-coupling series applied to the noncommutative Wilson loop average that, in turn, leads to the decomposition (\ref{SU.01z}) introducing the parameters $r$ and $v$. For this purpose, observe first that the complexity of the perturbative expansion of the considered average is rooted in the complexity of the perturbative $2n-$point functions (\ref{1.31b}) associated to the connected graphs (of an arbitrarily large order) discussed at the end of the previous Section. In consequence, for a connected graph of order $m\geq 2G$, the $2m-$point function (\ref{1.31b}) can be expressed in the simplest cases as a multiple irreducible star-product $f_{1}\star{f_{2}}\star ...\star{f_{m}}$, where the quantities $f_{k}(\cdot)$ are composed of the propagators (\ref{1.2}). (In general, the pattern of the connected $2n-$point function can be deduced according to the prescription discussed at the beginning of subsection \ref{deform1c}.) In particular, it can be shown that $2\leq m\leq{3}$ for $G=1$, while Eq. (\ref{1.31b}) for a generic $G=1$ diagram can be represented in the form of an ordinary product of a single $m$th order star-product and a number of the propagators (\ref{1.2}). To put this representation into use, we introduce the two {\it genus-preserving}\/ deformations to be called ${\cal R}_{a}^{-1}-$ and $\bar{\cal R}_{b}^{-1}-$deformations.
Increasing the order $n$ of a $G\geq{1}$ graph by one, they relate the corresponding pairs of functions (\ref{1.31b}) in a way that does {\it not}\/ change the multiplicities of all the irreducible star-products involved. Correspondingly, with respect to the inverse ${\cal R}_{a}-$ and $\bar{\cal R}_{b}-$deformations, one introduces ${\cal R}_{a}\otimes\bar{\cal R}_{b}-$irreducible Feynman diagrams. The advantage of the construction is that nonvanishing amplitudes (\ref{1.31b}) are associated only to the {\it finite}\/ number of the irreducible diagrams depicted in figs. 1, 2, 7a, and 7e (which are postulated to fix the {\it topology}\/ of the attachment of the lines' end-points to the upper and lower horizontal sides of $C=\Box$). Then, the complete set of the {\it connected}\/ genus-one diagrams can be generated by applying all possible $\bar{\cal R}_{b}^{-1}-$deformations to all $m$ lines of these irreducible diagrams. At the end of the Section, we discuss a reason for a further refinement of the resummation algorithm. Being implemented via certain dressing of the lines of the so-called elementary (rather than irreducible) diagrams, the algorithm prescribes that, for any of the dressed lines of the latter diagram, one replaces (in the relevant amplitude) the perturbative propagator by a concise effective propagator (\ref{KEY.01}).
\subsection{The ${\cal R}_{a}^{-1}-$deformations} \label{deform1a} The first one, which we call the ${\cal R}_{a}^{-1}-$deformation, modifies a given elementary graph by the addition of an extra $i$th propagator-line that does not intersect\footnote{The condition (\ref{1.50a}) refers to such implementation of the decomposition ${\mathcal{C}}_{lj}= \otimes_{k}{\mathcal{C}}^{(k)}_{l_{k}j_{k}}$ when, except for a single factor ${\mathcal{C}}^{(q)}_{l_{q}j_{q}}$, all the remaining factors are one-dimensional.} any line in the original $\{k\}$-set of the elementary diagram: \begin{equation} {\cal C}_{ik}=0~~~~~~~~,~~~~~~~~\forall{k\neq{i}}~. \label{1.50a} \end{equation} Starting with a given $2n-$point function $V^{(n)}_{U_{\theta}(1)}(\cdot)$ and identifying $i=n+1$, one readily obtains that, modulo the numerical constant, the considered deformation of $V^{(n)}_{U_{\theta}(1)}(\cdot)$ merely multiplies it by the extra propagator, \begin{equation} V^{(n+1)}_{U_{\theta}(1)}({\bf y}_{1},...,{\bf y}_{n},{\bf y}_{n+1})= -\frac{1}{2(n+1)}~V^{(n)}_{U_{\theta}(1)}({\bf y}_{1},...,{\bf y}_{n})~ {D}_{22}({\bf y}_{n+1})~. \label{DEF.01} \end{equation} Correspondingly, one defines the inverse of the ${\cal R}_{a}^{-1}-$deformation as the ${\cal R}_{a}-$deformation which eliminates such an $i$th line of a (${\cal R}_{a}-$reducible) diagram that complies with Eq. (\ref{1.50a}). In the absence of such a line, the graph is called ${\cal R}_{a}-$irreducible\footnote{Note that any connected diagram is necessarily ${\cal R}_{a}-$irreducible.}. Concerning an $m-$fold application of the ${\cal R}_{a}^{-1}-$deformation, the corresponding generalization of Eq. (\ref{DEF.01}) is routine: the single factor $-{D}_{22}({\bf y}_{n+1})/2(n+1)$ is replaced by the product $\prod_{k=1}^{m}\left(-{D}_{22}({\bf y}_{n+k})/2(n+k)\right)$. I.e.
all of the thus generated extra lines are assigned (just as in the $\theta=0$ commutative gauge theory) with the ordinary perturbative propagator (\ref{1.2}). For example, a generic non-elementary ${\cal R}_{a}^{-1}-$deformation of the graph in fig. 1a is described by the diagram in fig. 3a. In the latter figure, the additional lines are depicted by dotted lines which are vertical (i.e., characterized by vanishing relative time) owing to the pattern of the propagator (\ref{1.2}). More generally, the vertical lines in figs. 5, 6, 8, and 9 are also generated by the admissible multiple ${\cal R}_{a}^{-1}-$deformations of the corresponding elementary graphs. Finally, by the same token as in the $\theta=0$ case, it is straightforward to obtain that the ${\cal R}_{a}^{-1}-$dressing of a given connected graph results in the multiplication of the amplitude, associated to this graph, by a factor to be fixed by Eq. (\ref{EX.01h}) below. \subsection{The $\bar{\cal R}_{b}^{-1}-$deformations} \label{deform1b} The second one is what we denote as the $\bar{\cal R}_{b}^{-1}-$deformation of a given $k$th line of the elementary graph (the remaining lines of this graph being defined as the $\{q\}_{k}-$set): it introduces an extra line, labeled by $i$, so that the following twofold condition is fulfilled. To begin with, one requires that the $k$th and the $i$th lines, being mutually non-intersecting, intersect the $\{q\}_{k}-$set in a topologically equivalent way (modulo possible reversion of the orientation). E.g., the $\bar{\cal R}_{b}^{-1}-$copies of the right and left solid horizontal lines in fig. 1a are depicted by parallel (owing to the condition (\ref{FA.09}) below) dotted lines in figs. 4b and 4c respectively.
In general, it can be formalized by the condition \begin{equation} {\cal C}_{ik}=0~~~~~,~~~~~ {\cal C}_{iq}=\alpha_{ik}~{\cal C}_{kq}~~~~~~~~,~~~~~~~~\forall{q}\neq{k,i}~, \label{1.50} \end{equation} where, depending on the choice of the relative orientation of the $i$th line, the $q$-independent constant $\alpha_{ik}\equiv{\alpha_{i,k}}$ is equal to $1$ or $-1$ (with $\alpha^{(r)}_{11}=1$). Additionally, it is convenient to impose that the thus introduced extra line should {\it not}\/ be horizontal, i.e., its two end-points are not attached to the same horizontal side (along the second axis) of the rectangle $C$. As for the inverse transformation, the $\bar{\cal R}_{b}-$deformation deletes such an $i$th line of a diagram that Eq. (\ref{1.50}) holds true. Correspondingly, any line of a $\bar{\cal R}_{b}-$irreducible graph has {\it no}\/ $\bar{\cal R}_{b}^{-1}-$copies in the sense of the above twofold condition. Next, identifying $\alpha_{n,n+1}=\alpha^{(n)}$, one obtains that the $\bar{\cal R}_{b}^{-1}-$deformation of (\ref{1.31b}) results in the $2(n+1)-$point function \begin{equation} V^{(n+1)}_{U_{\theta}(1)}({\bf y}_{1},...,{\bf y}_{n},{\bf y}_{n+1})= \label{DEF.02} \end{equation} $$ =o_{n+1}~ \prod_{1\leq l<j}^{n}e^{\frac{i}{2}~{\cal C}_{lj}\breve{ \theta}_{\mu\nu} {\partial^{{\bf z}_{l}}_{\mu}} {\partial^{{\bf z}_{j}}_{\nu}}}{D}_{22}({\bf z}_{1})~ ...{D}_{22}({\bf z}_{n-1})\left[{D}_{22}({\bf z}_{n})~ {D}_{22}(({\bf z}_{n}-{\bf y}_{n})\alpha^{(n)}+{\bf y}_{n+1})\right] \Big|_{\{{\bf z}_{k}={\bf y}_{k}\}}~, $$ which is expressed through the original $n\times n$ intersection matrix ${\cal C}_{lj}$. In view of the pattern (\ref{1.2}) of the propagator, Eq.
(\ref{DEF.02}) implies that \begin{equation} \alpha^{(n)}y^{2}_{n}=y^{2}_{n+1}~, \label{DEF.03} \end{equation} since the $\delta(z_{2})-$factors of the two propagators in the square brackets of Eq. (\ref{DEF.02}) are supported at $z^{2}_{n}=0$ and at $(z^{2}_{n}-y^{2}_{n})\alpha^{(n)}+y^{2}_{n+1}=0$ respectively. In turn, Eq. (\ref{DEF.03}) entails that the $\bar{\cal R}_{b}^{-1}-$copy of a given line spans the same time-interval (fixed by the second component $y^{2}_{l}$ of the relative distance (\ref{1.38aa})) as the latter line does. In full generality, this property is expressed by Eq.~(\ref{GP.01}) that crucially simplifies the computations. Next, the multiple\footnote{In what follows, a composition of multiple ${\cal R}_{a}^{-1}-$ and $\bar{\cal R}_{b}^{-1}-$deformations (associated to some lines of an elementary graph) is, in short, denoted as ${\cal R}_{a}^{-1}\otimes\bar{\cal R}_{b}^{-1}-$deformation of a graph.} application of the $\bar{\cal R}_{b}^{-1}-$deformations (\ref{1.50}) introduces an extra $\{i_{a}\}-$set of the lines which, intersecting neither each other nor the $k$th line, fulfill the $i\rightarrow i_{a}$ option of Eq. (\ref{1.50}). Then, to reproduce the replacement (\ref{KEY.01}), one should take advantage of the following reduction. When applied to an elementary graph, the (multiple) $\bar{\cal R}_{b}^{-1}-$deformations result in diagrams described by vanishing amplitudes unless they are constrained by a particular $\{\alpha^{(r)}\}-$assignment. The amplitude (\ref{1.31b}) may be nonvanishing only when, for any given $r$th line of the elementary graph, all its $\bar{\cal R}_{b}^{-1}-$copies (if any) are assigned with {\it one and the same}\/ value of the parameter \begin{equation} \alpha^{(r)}_{l1}=\alpha^{(r)}~~~~~,~~~~~\forall{l}\geq{2}~,~\forall{r}~, \label{GP.01} \end{equation} where $\alpha^{(r)}_{lk}=\pm 1$ enters the implementation of Eq. (\ref{1.50}) corresponding to the $r$th line. Let us also note another useful property of the $\bar{\cal R}_{b}^{-1}-$deformations which generalizes the relation (\ref{DEF.03}).
The corresponding implementations of the $2n-$point function (\ref{1.31b}) enforce that \begin{equation} y^{2}_{k}=\alpha^{(k)}_{l1}y^{2}_{k,l}~~~~~,~~~~~ \forall{l=2,...,n_{k}}~~~,~~~\forall{k}~, \label{FA.09} \end{equation} where ${\bf y}_{k,l}$ denotes the relative distance (\ref{1.38aa}) corresponding to the $l$th $\bar{\cal R}_{b}^{-1}-$copy of the $k$th line (described by ${\bf y}_{k}\equiv{\bf y}_{k,1}$). That is why, for $C=\Box$, the latter copies are depicted by straight dotted lines which are mutually {\it parallel}, like the dotted lines in figs. 4b and 4c. \subsubsection{The necessity for a further refinement} \label{deform1c} According to the above, given a generic connected graph, the pattern of the corresponding $2n-$point function can be deduced from Eq. (\ref{1.31b}) via the replacement ${D}_{22}({\bf z}_{k})\rightarrow{f_{k}({\bf z}_{k},...)}$. Here, $f_{k}({\bf z}_{k},...)$ takes into account possible $\bar{\cal R}_{b}^{-1}-$dressing of the $k$th line of the irreducible diagram (described by the $n\times n$ matrix ${\cal C}_{lj}$) which is associated to the connected graph in question via a sequence of $\bar{\cal R}_{b}-$deformations. Conversely, provided a line of an irreducible diagram may be dressed only by conglomerates of $\bar{\cal R}_{b}^{-1}-$copies characterized by an unambiguous value of the corresponding $\alpha^{(k)}$, the overall $\bar{\cal R}_{b}^{-1}-$dressing of this line results (in the computation of the amplitude) in the replacement of the associated perturbative propagator by its effective counterpart to be fixed by the $f_{k}=1$ option of Eq. (\ref{KEY.01}). Still, the shortcoming of the resummation algorithm built on the irreducible diagrams is that some of the time-ordered components of the latter diagrams possess a single line which may be assigned with {\it different}\/ signs of $\alpha^{(r)}$.
In consequence, the concise prescription of the modification (\ref{KEY.01}) of the propagator cannot be directly applied to such a line. To circumvent this problem, we use an alternative prescription to reproduce the complete set of the connected $G=1$ diagrams. The idea is to introduce the larger set of the elementary {\it time-ordered}\/ graphs (belonging to the three $rv-$varieties in accordance with the decomposition (\ref{SU.01z})) and properly change the algorithm of their $\bar{\cal R}_{b}^{-1}-$dressing so that a single line of some elementary graphs is not dressed at all. As for the overall dressing of each of the remaining lines, being characterized by an {\it unambiguous}\/ sign of the corresponding $\alpha^{(k)}$, it is fixed, as previously, by the $f_{k}=1$ option of the replacement (\ref{KEY.01}). In this way, the set of the genus-one diagrams (generated by the perturbative expansion of the average (\ref{1.1})) can be unambiguously decomposed into a finite number of subsets parameterized by the elementary graphs. Then, each subset is described by the associated effective $2n-$point function that, therefore, accumulates the overall ${\cal R}_{a}^{-1}\otimes \bar{\cal R}_{b}^{-1}-$dressing of the corresponding {\it elementary}\/ connected $G=1$ graph. \section{The parameterization of the elementary graphs} \label{parameter} Let us realize the program formulated in subsection \ref{deform1c} and introduce a parameterization of the elementary graphs which remains applicable after the overall ${\cal R}_{a}^{-1}\otimes \bar{\cal R}_{b}^{-1}-$dressing of these graphs. To this aim, the set of the elementary time-ordered graphs is postulated to include not only all time-ordered components of the ${\cal R}_{a}\otimes\bar{\cal R}_{b}-$irreducible Feynman diagrams in figs. 1, 2, 7a, and 7e, but also a variety of a few connected $\bar{\cal R}_{b}-${\it reducible}\/ graphs associated to certain components of the diagrams in figs. 1c and 2e.
The additional graphs are obtained from the (time-ordered components of the) diagrams in figs. 7a and 7e via the vertical reattachments applied to the leftmost and/or rightmost end-points of each of the latter diagrams. {\it Preserving}\/ both the time-coordinates of the latter end-points and the intersection-matrix (modulo possible change of the sign of its entries), the reattachments move the single end-point of one or both of their horizontal lines from one horizontal side of $C$ to the other. Modulo the reflection interchanging the horizontal sides of $C=\Box$, the additional diagrams are depicted in the remaining figs. 7. Note that all these extra diagrams\footnote{Observe also that, in view of the constraint (\ref{DEF.03}), the geometry of these diagrams implies the additional constraint on the relative time-ordering of their end-points. E.g., in figs. 7c and 7g the lower leftmost point must be to the right of the upper leftmost point.} possess exactly one pair of the lines which, being labeled by $i$ and $k$, comply with the condition (\ref{1.50}). Also, the discussion below implicitly takes into account that both the elementary graphs in figs. 2a and 2b and all their deformations, being assigned with {\it vanishing}\/ amplitudes (\ref{1.31b}), can therefore be excluded from the analysis. \subsection{$S(4)-$symmetry and reflection-invariance} \label{S(4)} To properly enumerate the elementary graphs and introduce their $\gamma jrv-$parameterization, we should first discuss two types of the transformations which relate the elementary graphs in such a way that the structure of the overall ${\cal R}_{a}^{-1}\otimes\bar{\cal R}_{b}^{-1}-$dressing is kept intact. In turn, to facilitate the application of these transformations, we are to postulate the following convention.
When the elementary graphs (or protographs, see subsection \ref{assignment1}) are associated to one and the same time-ordered component of a given Feynman diagram, they are nevertheless considered to be {\it different}, provided the topology of the attachment of their lines' end-points (to the upper and lower horizontal sides of $C=\Box$) is different. For example, the pairs of distinct graphs are depicted in figs. 1a, 1b and figs. 1c, 1d respectively. Turning to the transformations of the elementary graphs, the first type is implemented through the vertical reattachments which can be combined to generate $S(4)-${\it multiplets}\/ of the latter graphs. Consisting of four graphs, each such multiplet implements the discrete space of the $S(4)-$group\footnote{When applied simultaneously to this $4-$set of the graphs, the reattachments can be used to generate the $4!$ elements of the group itself.} of permutations. Note that not only the elementary graphs but also their deformations, included into the subsets described by the corresponding effective amplitudes, are unambiguously split into a finite number of distinct (non-overlapping) $S(4)-$multiplets. An example is given by the diagrams in figs. 5c and 6a--6c, where the bold lines depict the associated elementary graph while the nonvertical dotted lines represent the $\bar{\cal R}_{b}^{-1}-$copies introduced by the $\bar{\cal R}_{b}^{-1}-$deformations. As illustrated by the latter four figures, the required symmetry of the dressing is maintained by the condition that the positions of the end-points of all the $\bar{\cal R}_{b}^{-1}-$copies are left intact. As for the second type of the transformations, the $S(4)-$multiplets of the elementary graphs may be related via the $S(2)-$reflection that (mapping the contour $C=\Box$ onto itself) mutually interchanges the two horizontal (or, what is equivalent in the $v=1$ case, vertical) sides of $C$.
Finally, one can implement the $S(4)\otimes S(2)-$transformations to combine the {\it dressed}\/ (by all admissible ${\cal R}_{a}^{-1}\otimes\bar{\cal R}_{b}^{-1}-$deformations) elementary graphs into the $S(4)\otimes S(2)-$multiplets. The prescription reads that both the vertical reattachments and the reflections are applied, as previously, only to the lines associated to the elementary graph, leaving intact the positions of the end-points of all ${\cal R}_{a}^{-1}-$ and $\bar{\cal R}_{b}^{-1}-$copies of these lines. (E.g., figs. 5c, 6a--6c represent the four members of the $S(4)-$multiplet of the dressed graphs to be assigned with $r=v=0$, see below.) \subsection{The $\gamma jrv-$parameterization of the multiplets} \label{assignment} Both prior to and after their dressing by all admissible ${\cal R}_{a}^{-1}\otimes\bar{\cal R}_{b}^{-1}-$deformations, it is convenient to collect the elementary graphs into the $\gamma jrv-$varieties of the $S(4)-$multiplets which consist of $h_{rv}=2+r-v$ $S(4)-$multiplets related via one of the two types of $S(2)-$reflections discussed above. Then, the four integer numbers $\gamma,~j,~r$, and $v$ parameterize $S(4)\otimes S(2)-$multiplets of the diagrams so that the relevant geometry of the multiple deformations of the graphs in each such multiplet is specified in the reflection- and reattachment-{\it invariant}\/ way. As a result, the algorithm of the resummation can be decomposed into the two steps. At the first step, for given values of $\gamma$, $j$, $r$, and $v$, one constructs the $h_{rv}=2+r-v$ effective amplitudes parameterized by certain special elementary graphs related (when $h_{rv}=2$) via the $S(2)-$reflections. In turn, possessing the {\it maximal}\/ number (equal to $h_{rv}$) of the horizontal lines attached to $1+r$ different horizontal sides of $C=\Box$, each of the latter $h_{rv}$ graphs enters the corresponding $S(4)-$multiplet in the $\gamma jrv-$variety.
In turn, these are precisely the lines that constitute the associated protograph (relevant for the decomposition (\ref{SU.01z})) which, in the $rv-$variety of the $S(4)-$multiplets (with different $\gamma$ and $j$), has the maximal number of horizontal lines. This construction is sketched below (see also Appendix \ref{enumer1}). At the second step, the remaining three elementary graphs of each $S(4)-$multiplet (as well as the rest of the protographs) can then be obtained via the vertical reattachments of the leftmost and/or rightmost end-points of the above $2+r-v$ horizontal lines. Modulo possible change of the sign of its entries, the intersection-matrix is invariant under these $S(4)-$transformations since, by construction of the elementary graphs, the leftmost and rightmost points of the entire graph necessarily belong to the $h_{rv}$ lines defining the corresponding protograph. \subsubsection{The topological $jrv-$parameterization and the protographs} \label{assignment1} Consider first the integers $j,~r,$ and $v$ which can be interpreted directly in terms of the relevant topological properties {\it common}\/ for all the graphs in a given $S(4)-$multiplet. To begin with, $r=0,1$ is equal to the maximal number of the ${\cal R}_{b}^{-1}-$copies\footnote{The definition of this type of the deformation can be obtained from the one of the $\bar{\cal R}_{b}^{-1}-$deformation omitting the requirement that the extra line, introduced according to (\ref{1.50}), is necessarily nonhorizontal.} of a single line (to be identified below) available in a given elementary graph. Correspondingly, all the graphs associated to figs. 7 are assigned with $r=1$ (while $v=1$ since $1\geq v\geq{r}$), while the remaining diagrams in figs. 1 and 2 are parameterized by $r=0$. Next, the number of the lines is equal to $n=1+j+r$.
In turn, for a given $r$, $j+1=n-r=2,3$ yields the multiplicity of the irreducible star-product whose form is assumed both by the corresponding perturbative $2n-$point function (\ref{1.31b}) and, owing to the replacement (\ref{KEY.01}), by its effective counterpart considered in subsection \ref{bdeform}. Therefore, the graphs in figs. 1 and 7a--7d are assigned with $j=1$, while the remaining elementary diagrams are assigned with $j=2$. As for $v=0,1$ (with $0\leq r\leq v\leq 1$ in compliance with Eq.~(\ref{SU.01z})), the $n$th order elementary graph has $h_{rv}=2+r-v$ lines which may be involved in the vertical $S(4)-$reattachments {\it without}\/ changing (the modulus of) the entries of the intersection-matrix. Furthermore, only $2+r-2v$ of these reattached lines are to be dressed, together with the remaining $n-h_{rv}$ lines, by the $\bar{\cal R}_{b}-$copies in compliance with the $f_{k}=1$ prescription (\ref{KEY.01}). In particular, for the $v=r=1$ graphs of figs. 7, it is the single line, devoid of the latter type of the dressing, that is considered to possess one ${\cal R}_{b}-$copy. (Alternatively, one may state that both the latter line and its copy share the same common $\bar{\cal R}_{b}-$dressing.) For each of the reattached lines, $2-(v-r)$ of its end-points are $S(4)-$transformed. Therefore, for cases other than $r=1-v=0$, the reattachments can be faithfully represented by the parameters% \footnote{One is to identify $R={x}^{1}(s_{l})-{x}^{1}(s'_{l})$, $s_{l}>s'_{l}$, with the spatial component of the relative distance (\ref{1.38aa}) such that ${\bf x}(s_{l})$ and ${\bf x}(s'_{l})$ belong respectively to the lower and upper horizontal sides of the contour $C=\Box$. In turn, in view of the proper time parameterization fixed prior to Eq. (\ref{1.4}), it implies that the vertical $1-$axis is to be directed from the upper to the lower horizontal side of the rectangle $\Box$.} $a_{k}=0,1$ so that $y^{1}_{k}=a_{k}R$, with $k$ assuming $h_{rv}=2$ different values.
In the $r=1-v=0$ case (when $h_{rv}=1$), in addition to $a_{1}$ we have to introduce the extra parameter $\tilde{a}_{1}$, \begin{equation} x^{1}(s'_{1})=(1-a_{1})\tilde{a}_{1}R~~~~~,~~~~~ x^{1}(s_{1})=\tilde{a}_{1}R+(1-\tilde{a}_{1})a_{1}R~, \label{ST.01} \end{equation} which is equal to $1$ and $0$ depending on whether or not the reattachment involves the left(most) end-point of the horizontal line of fig. 2e (while $x^{1}(s_{1})-x^{1}(s'_{1})=a_{1}R$). To simplify the notation, the pair of the parameters, used to represent the reattachments, is denoted as $\{a_{k}\}\equiv \{a_{k}\}_{rv}$ for all $0\leq r\leq v\leq 1$. Next, the parameters $r$ and $v$ can be used to enumerate the protographs which are time-ordered as well. A particular protograph can be reconstructed eliminating all the $n-h_{rv}$ lines of the corresponding elementary graph except for the $2+r-v$ lines affected by the $S(4)-$reattachments. Modulo the $S(4)-$reattachments, the thus separated protographs are depicted by bold lines in figs. 3a--3e for those protographs which, for a given $rv-$assignment, possess the maximal number $2+r-v$ of horizontal lines. Figs. 3a, 3b and 3d, 3e are in one-to-one correspondence with the pairs of the $S(4)-$multiplets which, being related via the reflection (interchanging the horizontal sides of the contour $C$), are characterized by $r=v=0$ and $r=v=1$ respectively. It should be stressed that, to avoid double-counting, one is to consider only such reflections of the protographs which cannot be alternatively reproduced by the vertical $S(4)-$reattachments. Correspondingly, fig. 3c refers to the single $r=v-1=0$ multiplet\footnote{The $v=1$ protograph in fig. 3c should {\it not}\/ be accompanied by the reflection-partner which, being defined by the requirement that both end-points of the single line are attached to the lower side of $C$, can be alternatively obtained via the composition of the two vertical reattachments.
Note also that the genus of the $v=1$ protographs is zero rather than one, which explains why we have to start from the elementary graphs rather than directly from the protographs.}. In sum, there are precisely $h_{rv}=2+r-v$ $S(4)-$multiplets of the protographs which, being parameterized by a particular $rv-$assignment, are related via the $S(4)-$reflections. In compliance with Appendix \ref{enumer1}, in each such multiplet the labels of the $\bar{\cal R}_{b}-$dressed lines assume $n-v=r+j-v+1$ different values in the set $\Omega_{jrv}$ obtained from the sequence $1+v,~2,~1+j,~2+2r$ via the identification of the $v+(2-j)+(1-r)=4+v-n$ coinciding entries (all being equal to 2) so that $n-v=\sum_{k\in{\Omega_{jrv}}}1$. Correspondingly, to parameterize the entire set of $n$ lines, we relabel these lines introducing the set $\tilde\Omega_{jrv}$ obtained from the sequence $1,~2,~1+j,~2+2r$ via the identification of the $(2-j)+(1-r)=4-n$ coinciding entries so that $n=\sum_{k\in \tilde{\Omega}_{jrv}}1$. In turn, the labels of the $2+r-v$ lines, involved in the $S(4)-$reattachments, assume values in the set ${\cal S}_{rv}$ obtained from the sequence $1,~c_{rv}=2+3r-v$ (with $c_{rv}=1,2,4$) via the identification of the $v-r$ coinciding entries. \subsubsection{The residual $\gamma-$parameterization of the elementary graphs} \label{gamspecif} The necessity to complete the $jrv-$parameterization and introduce one more parameter $\gamma$, additionally specifying the $S(4)\otimes S(2)-$multiplets, is motivated by the geometry of the pairs of the elementary graphs depicted by bold lines in figs. 8a, 8b (both characterized by $j=r+1=v=1$) and 9a, 9b (both characterized by $j=r=v=1$). In general, with the help of this parameter $\gamma=1,...,f_{jrv}$, one is to enumerate distinct $S(4)\otimes S(2)-$multiplets of the graphs which are separated when one fixes the parameters $j$, $r$, and $v$.
The previous discussion suggests that, to find the number $f_{jrv}$ of such multiplets, it is sufficient to consider only those elementary graphs which, representing the corresponding multiplet, possess the {\it maximal}\/ number $h_{rv}$ of the horizontal lines for a particular $jrv-$specification. Then, $f_{jrv}$ is equal to the number of such graphs which, being different time-ordered components of the same Feynman diagram, are {\it not}\/ related via the $S(4)\otimes S(2)-$transformations. In turn, the latter number can be found in the following way. To begin with, for a particular $jrv-$assignment and the matrix ${\cal C}_{ik}$, one fixes generic positions both of the $2+r-v$ horizontal lines (involved in the $S(4)-$reattachments) and of the upper end-points of the remaining $n-(2+r-v)=j-1+v$ nonhorizontal lines. Actually, it is straightforward to infer from Eqs.~(\ref{DEF.03}) and (\ref{1.50}) that $f_{jrv}$ is $r-$independent, $f_{jrv}\equiv f_{jv}$, which allows us to deduce $f_{jv}$ by restricting our analysis to the $r=0$ cases. Next, we should take into account that both the perturbative and the associated effective $2n-$point functions impose $n-2$ specific constraints (see Eq. (\ref{1.40kk}) below) on the admissible combinations of the temporal components $y^{2}_{l}$ of the relative distances (\ref{1.38aa}). In consequence, among the $j-1+v$ lower end-points of the nonhorizontal lines, only $v$ points remain independent degrees of freedom (in addition to the $2h_{rv}|_{r=0}+(j-1+v)$ ones already fixed above). When $v=0$, obviously $f_{j0}=1$ for $j=1,2$.
As for $v=1$, when we vary the position of the lower end-point of the $k$th nonhorizontal line representing the residual $v=1$ degree of freedom, the resulting time-ordered components of the transformed diagram are distinguished by the $\left[\otimes_{i=1}^{j} Sign(y^{2}_{i})\right]/S(j)-$assignment\footnote{$Sign(y^{2}_{i})$ denotes the sign-function depending on the relative time $y^{2}_{i}$ (associated to the $i$th nonhorizontal line) which may be changed through the variation of the lower end-point in question.} defined {\it modulo}\/ possible $S(j-1+v)-$permutations of the labels $i$ of the $(j-1+v)|_{v=1}$ nonhorizontal lines. Therefore, $f_{j1}=(j-1)+2(2-j)$, so that $f_{11}=(v+1)|_{v=1}=2$, while $f_{21}=f_{11}-1=1$ (as is clear from fig. 2e). Summarizing, one arrives at the formula \begin{equation} f_{jv}=(1-v)+(3-j)v~~~~~;~~~~~f_{j0}=f_{2v}=1~~,~~\forall{j,v}~, \label{SU.01l} \end{equation} so that $1\leq f_{jv}\leq 2$, and $f_{jv}=2$ only when $j=v=1$. Note that, although the construction of the parameter $f_{jv}$ is obviously reflection-invariant, the action of the reflections on the $S(4)-$multiplets of the elementary graphs is still nontrivial in the cases when $f_{jv}=2$. In these cases, the reflections introduced in subsection \ref{assignment1} relate those of the latter multiplets which, being endowed with the same $jrv-$assignment, are described by the two different values of $\gamma$. In sum, there are precisely $h_{rv}=2+r-v$ $S(4)-$multiplets of the elementary graphs which, being parameterized by a particular $\gamma jrv-$assignment, are related via the $S(4)-$reflections in the $j-${\it independent}\/ way. \section{Dressing of the elementary graphs and protographs} \label{repres1} To derive the representation (\ref{SU.01z}), the first step is to express $<W(\Box)>_{U_{\theta}(1)}^{(1)}$ in terms of the $S(4)-$multiplets of the effective $2n-$point functions.
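As a side remark before proceeding, the multiplicity formula (\ref{SU.01l}) just packages the preceding case analysis and can be cross-checked numerically; a minimal sketch in our own notation (not part of the derivation):

```python
def f_multiplicity(j, v):
    # Eq. (SU.01l): f_{jv} = (1 - v) + (3 - j) v  for j in {1, 2}, v in {0, 1}
    return (1 - v) + (3 - j) * v

for j in (1, 2):
    for v in (0, 1):
        assert 1 <= f_multiplicity(j, v) <= 2      # stated bounds on f_{jv}
assert f_multiplicity(1, 0) == f_multiplicity(2, 0) == 1   # f_{j0} = 1
assert f_multiplicity(2, 1) == 1                           # f_{2v} = 1
assert f_multiplicity(1, 1) == 2                           # the only f_{jv} = 2 case
```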
Each of these functions describes the corresponding elementary graph together with all its admissible ${\cal R}_{a}^{-1}-$ and $\bar{\cal R}_{b}^{-1}-$deformations according to the algorithm sketched in the previous Section. In view of the factorization (\ref{DEF.01}), for $C=\Box$ it is convenient to represent the effective functions as the product ${\cal I}^{(n)}(\{{\bf y}_{k}\}) \tilde{V}_{U_{\theta}(1)}^{(n)}(\{{\bf y}_{k}\})$, where $\{{\bf y}_{k}\}$ denotes the set of the relative coordinates (\ref{1.38aa}) characterizing the corresponding time-ordered elementary graph of a given order $2n$. In particular, the factor ${\cal I}^{(n)}(\cdot)$ (to be defined in Eq. (\ref{EX.01h})) accumulates the overall ${\cal R}_{a}^{-1}-$dressing of the latter graph. As for $\tilde{V}_{U_{\theta}(1)}^{(n)}(\cdot)$, it describes a given elementary graph together with its entire $\bar{\cal R}_{b}^{-1}-$dressing in a way consistent with the $S(4)\otimes S(2)-$symmetry. In turn, the quantity $\tilde{V}_{U_{\theta}(1)}^{(n)}(\cdot)$ can be introduced as a concise modification of the corresponding elementary $2n-$point function (\ref{1.31b}). For this purpose, the perturbative propagators of certain $n-v$ lines should be replaced by the effective ones defined by the $f_{k}=1$ option of Eq. (\ref{KEY.01}).
Parameterizing the effective functions, the elementary graphs can be viewed as the {\it intermediate}\/ collective coordinates which are useful in the computation of the corresponding individual effective amplitudes \begin{equation} \frac{1}{(2\pi\bar{\theta})^{2}}~ {\cal Z}^{(\gamma)}_{jrv}(\{a_{k}\},\bar{A},\bar{\theta}^{-1})= \oint\limits_{C}\prod_{l=1}^{n}dx^{2}(s_{l})dx^{2}(s'_{l})~ {\cal I}^{(n)}(\{{\bf y}_{k}\}) \tilde{V}_{U_{\theta}(1)}^{(n)}(\{{\bf y}_{k}\})\Big|^{\gamma}_{jrv} \label{FR.01w} \end{equation} where the vertical $S(4)-$reattachments of the $h_{rv}=2+r-v$ lines are described by the set of the parameters $\{a_{k}\}\equiv\{a_{k}\}_{rv}$ introduced in subsection \ref{assignment1}. By construction, the $G=1$ term $<W(\Box)>_{U_{\theta}(1)}^{(1)}$ of the expansion (\ref{CO.01}) can be represented as the superposition of the amplitudes (\ref{FR.01w}) which, as we will see, should be combined into the $rv-$superpositions ${\cal Z}_{rv}(\{a_{k}\},\cdot)$. These superpositions are obtained by summing up the amplitudes (\ref{FR.01w}) corresponding to all $\sum_{j=1}^{2} f_{jv}$ elementary graphs which are associated to a given $rv-$protograph according to the prescription discussed in subsection \ref{assignment1}. Then, Eq.~(\ref{SU.01z}) is reproduced provided \begin{equation} {\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})=\sum_{\{a_{l}\}_{rv}} {\cal Z}_{rv}(\{a_{k}\},\bar{A},\bar{\theta}^{-1})~~~~~,~~~~~ {\cal Z}_{rv}(\{a_{k}\},\bar{A},\bar{\theta}^{-1})= \sum_{j=1}^{2}\sum_{\gamma=1}^{f_{jv}} {\cal Z}^{(\gamma)}_{jrv}(\{a_{k}\},\bar{A},\bar{\theta}^{-1})~, \label{SU.01} \end{equation} where $f_{jv}$ is defined in Eq.
(\ref{SU.01l}), and the sum over $\{a_{l}\}_{rv}$ includes the contribution of the four terms related through the $S(4)-$reattachments applied to the end-points of those lines which, being associated to the corresponding protograph, are parameterized by the label $l\in {\cal S}_{rv}$ (where the set ${\cal S}_{rv}$ is introduced at the end of subsection \ref{assignment1}). In this way, ${\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})$ yields the contribution of the $S(4)-$multiplet of the protographs endowed with the $rv-$assignment. (Eq. (\ref{SU.01}) takes into account that, when $h_{rv}=2$, the dressed elementary diagrams can be collected into the pairs related via the reflection which leaves the amplitudes ${\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})$ invariant, as is verified in Appendix \ref{symmetry3}.) Then, building on the pattern of the perturbative functions (\ref{1.31b}), the amplitude ${\cal Z}_{rv}(\{a_{k}\},\cdot)$ can be reproduced in a way which reveals an important reduction to the final set of the collective coordinates which, in turn, supports the relevance of the protographs. We postpone the discussion of this issue until subsection \ref{completeness}. \subsection{Introducing an explicit time-ordering} \label{ordering} To proceed further, it is convenient to reformulate both ${\cal I}^{(n)}(\{{\bf y}_{i}\})$ and $\tilde{V}_{U_{\theta}(1)}^{(n)}(\{{\bf y}_{i}\})$ in terms of a minimal number of independent variables instead of the $n-$set $\{{\bf y}_{i}\}$ of the relative distances (\ref{1.38aa}). Consider a rectangular contour $C=\Box$ such that $R$ and $T$ denote the lengths of its vertical and horizontal sides which, in the notations of Eq. (\ref{1.2}), are parallel respectively to the first and the second axis. Then, the subset $\{y^{1}_{i}\}$ can be reduced to $\{a_{k}\}$, defined after Eq.~(\ref{SU.01}).
Concerning the reduction of the remaining subset $\{y^{2}_{i}\}$, it can be shown that, among the $\delta-$functional constraints imposed by the $G=1$ effective $2n-$point function $\tilde{V}_{U_{\theta}(1)}^{(n)}(\{{\bf y}_{k}\})$, there are $n-2$ constraints which are the same as in the case of the perturbative $2n-$point function of the associated elementary diagram, \begin{equation} \tilde{V}_{U_{\theta}(1)}^{(n)}(\cdot)\sim {V}_{U_{\theta}(1)}^{(n)}(\{{\bf y}_{l}\}) \sim \prod_{p=1}^{n-2G} \delta\left(\sum_{l=1}^{n}\lambda^{(p)}_{l}~y^{2}_{l}\right)~, \label{1.40kk} \end{equation} where the $n-2G$ $n$-vectors $\lambda^{(p)}_{l}$, depending only on the topology of the associated elementary graph, span the subspace of those eigenvectors of the intersection matrix ${\mathcal{C}}_{kl}$ which possess a vanishing eigenvalue: $\sum_{l=1}^{n}{\mathcal{C}}_{kl}\lambda^{(p)}_{l}=0$ for $p=1,2,...,n-2G$. In view of the latter constraints imposed on the temporal components $y^{2}_{l}$, there are only $m=n+2$ independent time-ordered parameters $\tau_{k}\geq{\tau_{k-1}}$ which can be chosen to replace the set of the $2n$ temporal coordinates $x^{2}(s_{l})$ and $x^{2}(s'_{l})$ assigned to the lines' end-points of a given elementary graph. Then, as is shown in Appendix \ref{freedom}, there is a simple geometrical prescription to introduce $\tau_{k}$ (with $\tau_{0}=0,~ \tau_{m+1}=T$). In particular, there are $m-v$ parameters $\tau_{i}$ that are directly identified with the properly associated coordinates $x^{2}(\cdot)$ in the latter $2n-$set. E.g., in the simplest cases of figs. 1a and 2c, the $m$ parameters $\tau_{i}$ simply relabel, according to the relative {\it time-ordering}\/ (see Eq. (\ref{FA.02y})), the set of the $m$ end-points attached in these two figures to the upper side of $C=\Box$.
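The kernel condition $\sum_{l}{\mathcal{C}}_{kl}\lambda^{(p)}_{l}=0$ stated after Eq. (\ref{1.40kk}) can be illustrated on a toy example; the matrix entries below are our own sample choice (an antisymmetric $n=3$, $G=1$ intersection matrix with $|{\cal C}_{21}|=1$), not read off from a particular figure.

```python
import numpy as np

# Toy antisymmetric intersection matrix for n = 3 lines at genus G = 1:
# rank 2G = 2, hence exactly n - 2G = 1 null vector lambda with C @ lambda = 0.
C = np.array([[ 0, -1, -1],
              [ 1,  0, -1],
              [ 1,  1,  0]])

assert np.linalg.matrix_rank(C) == 2      # rank equals 2G
lam = np.array([1, -1, 1])                # the single lambda^{(p)}
assert np.all(C @ lam == 0)               # eigenvector with vanishing eigenvalue
```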
Then, combining the $S(4)-$reattachments and the reflection, this prescription can be generalized to deal with the two reflection-pairs of the $r=v=0$ $S(4)-$multiplets of the graphs endowed with $j=1$ and $j=2$. Correspondingly, for $v=1$ in Appendix \ref{freedom} we propose, besides $m-1$ such reidentifications associated to a certain side of $\Box$, a simple geometrical operation to represent the remaining quantity $\tau_{i_{0}}$ as a superposition of three temporal coordinates $x^{2}(\cdot)$. Furthermore, the proposed $v=1$ prescription can be formulated in a way invariant both under the $S(4)-$reattachments and under the reflection. Next, on the upper (or, alternatively, lower) horizontal side of the rectangle $C$, $\tau_{q}$ and $\tau_{q-1}$ can be viewed as the bordering points of the $n+3$ connected nonoverlapping intervals \begin{equation} \Delta \tau_{k-1}=\tau_{k}-\tau_{k-1}~\geq{~0}~~~~~,~~~~~ \sum_{k=0}^{m}\Delta \tau_{k}=T~~~~~,~~~~~m=n+2=j+r+3~, \label{FA.02w} \end{equation} into which the overall time $T$ is split. Consequently, the previously introduced relative times\footnote{As the temporal intervals $y^{2}_{l}$ are overlapping in general, this hinders a resolution of the $G=1$ constraints (\ref{1.40kk}) directly in terms of these intervals. On the other hand, the $n>2$ splitting (\ref{FU.02}) yields a concise form in which to represent, for any given time-ordered component of a Feynman graph, the latter resolution employing the {\it nonoverlapping}\/ intervals $\Delta\tau_{k}$.} \begin{equation} {t}^{(\gamma)}_{p}\equiv y^{2}_{p}\Big|^{\gamma}_{jrv}= (-1)^{s_{p}(\{a_{k}\})}\sum_{l=0}^{m}d^{(\gamma)}_{jrv}(p,l)~ \Delta{\tau}_{l}~~~~~,~~~~~ d^{(\gamma)}_{jrv}(p,0)=d^{(\gamma)}_{jrv}(p,m)=0~, \label{FU.02} \end{equation} can be represented as the superpositions of $\Delta{\tau}_{k}$, and for simplicity we omit the superscript $jrv$ in the notation ${t}^{(\gamma)}_{p}$. In Eq.
(\ref{FU.02}), $d^{(\gamma)}_{jrv}(p,l)=0,\pm 1$, while $s_{p}(\{a_{k}\})$ denotes an integer-valued function of $p$ (fixed by Eq. (\ref{LAT.01}) in Appendix \ref{symmetry3}) which additionally depends on the set $\{a_{k}\}$ of the variables introduced after Eq. (\ref{ST.01}). It is noteworthy that, provided $f_{jv}=2$, the $\gamma-$dependence of ${t}^{(\gamma)}_{p}$ (together with the implicit $\gamma-$dependence of the auxiliary parameter $\omega^{(\gamma)}_{k}$ introduced in Eq. (\ref{KEY.01b})) is the only source of the $\gamma-$dependence of the effective amplitude ${\cal Z}^{(\gamma)}_{jrv}(\cdot)$, see Eqs. (\ref{SR.01g}) and (\ref{SR.01f}) below, associated to the elementary graphs in a $S(4)-$multiplet with a given $\gamma jrv-$assignment. (E.g., the geometry of figs. 8a and 8b implies that $t_{2}^{(1)}<0$ and $t_{2}^{(2)}>0$ respectively.) In turn, the $S(4)-$symmetry of the dressing of the elementary graphs guarantees that the pattern of $t_{p}^{(\gamma)}$ (as well as that of $\omega^{(\gamma)}_{k}$ in Eq. (\ref{KEY.01b})) is the same for all members of each $S(4)-$multiplet. \subsection{Accumulating the ${\cal R}_{a}^{-1}-$deformations} \label{adeform} Given a particular effective amplitude, the associated ${\cal R}_{a}^{-1}-$deformations of a given elementary graph are generated via {\it all}\/ possible inclusions of such extra lines that, in accordance with Eq. (\ref{1.50a}), intersect neither each other nor the original lines of the graph. In figs. 3--9 the latter extra lines are depicted as vertical (due to the $\delta-$function in the perturbative propagator (\ref{1.2})) and dotted. In view of Eq. (\ref{DEF.01}), the inclusion of the deformations of this type merely multiplies the amplitude, describing the original elementary graph, by a factor ${\cal I}^{(n)}(\{{\bf y}_{k}\})$.
To deduce this factor, we note that the temporal coordinates of the upper end-points of the ${\cal R}_{a}^{-1}-$copies may span only the first and the last intervals $\Delta \tau_{0}$ and $\Delta \tau_{n+2}$ respectively. Then, akin to the commutative case, it is straightforward to obtain that, when the superposition of all the admissible ${\cal R}_{a}^{-1}-$copies is included, the result is \begin{equation} {\cal I}^{(n)}(\{{\bf y}_{k}\})= exp\left(-\sigma |R|\left[\Delta \tau_{0}+\Delta \tau_{n+2}\right]\right)~, \label{EX.01h} \end{equation} the $\tau_{j}-$dependence of which matches the pair of the conditions (\ref{FU.02}) imposed on $d^{(\gamma)}_{jrv}(p,l)$. \subsection{The $\bar{\cal R}_{b}^{-1}-$deformations and the effective propagators} \label{propag} Next, consider the block $\tilde{V}_{U_{\theta}(1)}^{(n)}(\cdot)$ that results after the $\bar{\cal R}_{b}^{-1}-$dressing of a given elementary graph with $n$ lines. The short-cut way to reconstruct this block is to specify those $n-v$ (with $v=0,1$) lines of the latter graph where the corresponding propagator (\ref{1.2}) is replaced, in the relevant implementation of Eq. (\ref{1.31b}), by its effective counterpart so that the $S(4)-$symmetry of the overall dressing is maintained. When the $k$th line is dressed by all admissible $\bar{\cal R}_{b}^{-1}-$deformations, the replacement is fixed by the $f_{k}=1$ option of the substitution \begin{equation} {D}_{22}({\bf z}_{k})~\longrightarrow~\left(-\sigma |z^{1}_{k}|\right)^ {f_{k}}\delta\left(z^{2}_{k}\right)~ exp\left(-\sigma |R+(z^{1}_{k}-y^{1}_{k})\alpha^{(k)}| \Delta T^{b}_{k}(f_{k},\gamma) \right)~~~~,~~~~k\in{\Omega_{2rv}}~, \label{KEY.01} \end{equation} which reduces to the ordinary multiplication of the propagator ${D}_{22}(\cdot)$ by the $k-$dependent exponential factor (with the set $\Omega_{jrv}$ of the $n-v$ different labels being defined at the end of subsection \ref{assignment1}).
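To make the structure of the substitution (\ref{KEY.01}) concrete, here is a minimal numerical sketch (function and argument names are ours) of the $k$-dependent factor with the $\delta(z^{2}_{k})$ stripped off: for $f_{k}=1$ it carries the propagator-like prefactor $-\sigma|z^{1}_{k}|$ times the exponential suppression, while for $f_{k}=0$ only the exponential survives.

```python
from math import exp, isclose

def dressing_factor(f_k, sigma, z1, y1, R, alpha, dT):
    # Eq. (KEY.01) modulo delta(z^2): (-sigma*|z^1|)^{f_k} times the
    # exponential suppression over the interval Delta T^b_k.
    return (-sigma * abs(z1)) ** f_k * exp(-sigma * abs(R + (z1 - y1) * alpha) * dT)

# f_k = 0 with a vanishing interval leaves the integrand untouched:
assert dressing_factor(0, 0.5, 0.3, 0.1, 1.0, 1, 0.0) == 1.0
# f_k = 1 reduces to multiplication by the k-dependent exponential factor:
val = dressing_factor(1, 0.5, 0.3, 0.1, 1.0, 1, 2.0)
assert isclose(val, (-0.5 * 0.3) * exp(-0.5 * abs(1.0 + 0.2) * 2.0))
```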
In this factor, the parameter $\alpha^{(k)}=\pm 1$ is traced back to Eq. (\ref{DEF.02}), and each elementary graph may be endowed only with a single $\{\alpha^{(i)}\}-$assignment (with $i\in{\Omega_{jrv}}$) which renders the global topological characteristic (\ref{GP.01}) of the dressing (\ref{KEY.01}) {\it unambiguous}. Also, \begin{equation} \Delta T^{b}_{k}(f_{k},\gamma)=\Delta \tau_{q(k)}+ [1-f_{k}]\Delta \tau_{q(k)+2\omega^{(\gamma)}_{k}-1}~~~~~~~,~~~~~~~~ \Delta T^{a}_{q}=\Delta \tau_{q}~~,~~q=0,n+2~, \label{KEY.01b} \end{equation} (i.e., $f_{0}=f_{n+2}=1$), while the extra subscripts $a$ and $b$ are introduced to indicate the type of the associated deformations. In turn, in the $f_{k}=1$ case the interval $\Delta T^{b}_{k}(1,\gamma)= \Delta \tau_{q(k)}$ is spanned by the end-points of the $\bar{\cal R}_{b}^{-1}-$copies of the $k$th line (see Appendix \ref{enumer1} for particular examples). As for the function $q(k)$ defining the label of the corresponding interval (\ref{FA.02w}), it formally determines an embedding of an element of the $S(n-v)$ group of permutations into the $S(n+1)$ group: $0<q(k)<n+2$ for all different $n-v$ values of $k\in{\Omega_{jrv}}$. In Appendix \ref{freedom}, we sketch a simple rule which allows one to reconstruct $q(k)$ so that this function is common for any given $S(4)-$multiplet of elementary graphs. \subsubsection{The completeness of the $\bar{\cal R}_{b}^{-1}-$dressing of the protographs} \label{completeness} To explain the relevance of a certain $f_{k}\neq{1}$ option of the replacement (\ref{KEY.01}), we should take into account that, in the evaluation of the amplitude ${\cal Z}_{rv}(\{a_{k}\},\cdot)$ defined by Eq. (\ref{SU.01}), there are important cancellations between the contributions of the individual effective amplitudes (\ref{FR.01w}).
To obtain ${\cal Z}_{rv}(\{a_{k}\},\cdot)$ for a particular $rv-$assignment, it is sufficient to take the {\it single}\/ $j=2$ elementary graph (unambiguously associated to the corresponding protograph) and apply, according to a judicious $\{f_{k}\}-$assignment, the replacement (\ref{KEY.01}) to the same $n-v$ lines of this graph as previously. But, contrary to the computation of the amplitudes (\ref{FR.01w}), the $f_{k}=1$ variant of the replacement (\ref{KEY.01}) is now applied only to the $2+r-2v$ lines involved in the $S(4)-$reattachments. The point is that $f_{k}=0$ for all the $(v+j-1)|_{j=2}$ lines which, being not affected by the reattachments (while labeled by $k\in\Omega_{2rv}/{\cal S}_{rv}$, i.e., $k=3$ and, when $v=1$, $k=3-v$), therefore do {\it not}\/ belong to the protograph. Furthermore, it is accompanied by such a reduction of the measure that, implying a specific fine-tuning (\ref{KEY.01a}), retains the relevant variable arguments which, at least in the simpler $r=v=0$ case, are associated only to the corresponding protograph. This is further discussed in subsection \ref{colcoor}, where a more subtle situation for other values of $r$ and $v$ is also sketched. An explicit ($rv-$dependent) construction of the auxiliary parameters $\omega^{(1)}_{k}=0,1$ with $k=3-v,~3$ (and $\gamma=1$ owing to the $j=2$ option of Eq. (\ref{SU.01l})) is presented in subappendix \ref{omega} so that $0<q(k)+2\omega^{(1)}_{k}-1<n+2$, i.e., $\Delta T^{b}_{k}(f_{k},1)\cap \Delta T^{a}_{0}= \Delta T^{b}_{k}(f_{k},1)\cap \Delta T^{a}_{2+n}=0$, $\forall{k \in{\Omega_{2rv}}}$.
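The line counting behind this $\{f_{k}\}$-assignment can be summarized in a small check (our notation): for the $j=2$ graph there are $n=3+r$ lines, of which $2+r-2v$ keep $f_{k}=1$ and $v+1$ are assigned $f_{k}=0$, together exhausting the $n-v$ dressed lines.

```python
# Bookkeeping of the f_k-assignment for the j = 2 elementary graph
# dressing an rv-protograph (n = 1 + j + r with j = 2).
for r, v in ((0, 0), (0, 1), (1, 1)):         # admissible 0 <= r <= v <= 1
    n = 3 + r
    n_f1 = 2 + r - 2 * v                       # lines kept with f_k = 1
    n_f0 = v + 1                               # f_k = 0: lines absent from the protograph
    assert n_f1 >= 0
    assert n_f1 + n_f0 == n - v == 3 + r - v   # all n-v dressed lines accounted for
```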
Then, the cancellations emphasized above result, for any admissible $\{f_{k}\}-$assignment, in the important completeness condition (to be verified in subappendix \ref{omega}) fulfilled by the $n-v= \sum_{k\in{\Omega_{2rv}}}1$ intervals $\Delta T^{b}_{k}(f_{k},1)$: \begin{equation} \sum_{k\in{\Omega_{2rv}}} \Delta T^{b}_{k}(f_{k},1)=T-\Delta T^{a}_{0}- \Delta T^{a}_{n+2}~~~~~~~,~~~~~~~\sum_{k\in{\Omega_{2rv}}}f_{k}=2+r-2v~, \label{KEY.01a} \end{equation} where each $\Delta T^{b}_{k}(f_{k},1)$ is spanned by the end-points of the $\bar{\cal R}_{b}^{-1}-$copies which, in a given $rv-$protograph, are associated to the $k$th line of the corresponding $j=2$ elementary diagram according to Eq. (\ref{MUL.04y}). To interpret Eq. (\ref{KEY.01a}), we first note that the residual time-interval $T-\Delta T^{a}_{0}-\Delta T^{a}_{n+2}$ results from the overall temporal domain $T$ after the exclusion of its left- and rightmost segments $\Delta T^{a}_{0}$ and $\Delta T^{a}_{n+2}$ which (entering the factor (\ref{EX.01h})) are spanned by the end-points of the ${\cal R}_{a}^{-1}-$copies. Also, the $n-v$ open intervals $\Delta T^{b}_{k}(f_{k},1)$ are mutually nonoverlapping, $\Delta T^{b}_{k}(f_{k},1)\cap \Delta T^{b}_{q}(f_{q},1)=0$ for $\forall{k\neq{q}}$, which means that $q(p)\neq q(k)+2\omega^{(1)}_{k}-1$ $\forall{k}\in \Omega_{2rv}/{\cal S}_{rv},~\forall{p}\in \Omega_{2rv}$. The completeness condition (\ref{KEY.01a}) therefore geometrically implies that, once a particular $rv-$protograph is fully dressed, the {\it entire}\/ residual time-interval is covered by the $n-v=3+r-v$ mutually nonoverlapping intervals $\Delta T^{b}_{k}(f_{k},1)$. Examples are described by figs. 5c (with $r=v=0$), 8f (with $r=v-1=0$), and 9f (with $r=v=1$). On the other hand, in the evaluation of the individual effective amplitudes (\ref{FR.01w}), the sum in the l.h. side of Eq.
(\ref{KEY.01a}) is replaced by the sum $\sum_{k\in{\Omega_{jrv}}} \Delta \tau_{q(k)}<T-\Delta T^{a}_{0}- \Delta T^{a}_{n+2}$ which generically is less than the residual time-interval. In turn, this inequality follows from the fact that the number $n-v=\sum_{k\in{\Omega_{jrv}}}1$ of the lines involved in the $f_{k}=1$ dressing (\ref{KEY.01}) is always less than the number $n+1$ of the relevant elementary intervals $\Delta \tau_{k}$ into which the residual interval is decomposed (so that $\sum_{i=1}^{n+1} \Delta \tau_{i}=T-\Delta T^{a}_{0}-\Delta T^{a}_{n+2}$). Examples are presented in the $j=2$ case by figs. 5b, 8c, and 9c. Finally, in order to introduce and properly utilize the basic formula (\ref{SR.01f}) below, in the $v=1$ cases it is convenient to define both $\omega^{(\gamma)}_{k}$ and $\Delta T^{b}_{k}(f_{k},\gamma)$ not only for $j=2$ but for $j=1$ as well. As is verified in subappendix \ref{omega}, the extension is fixed by the prescription\footnote{It matches Eq. (\ref{FI.02}).}: $\omega^{(\gamma)}_{2}|_{j=1}=\omega^{(1)}_{4-\gamma}|_{j=2}$ with $\gamma=1,2$, while $\Delta T^{b}_{k}(f_{k},\gamma)$ is to be defined by the same Eq. (\ref{KEY.01b}). \section{The structure of the effective amplitude $\tilde{V}_{U_{\theta}(1)}^{(n)}(\{{\bf y}_{k}\})$} \label{effective} The convenient representation (\ref{SR.01f}) of the factor $\tilde{V}_{U_{\theta}(1)}^{(n)}(\cdot)$, describing a given $2n$th order elementary graph together with its entire $\bar{\cal R}_{b}^{-1}-$dressing, can be deduced from the integral representation of the elementary $2n-$point function (\ref{1.31b}) through a simple prescription. For this purpose, the product of the concise exponential factors (\ref{SR.01a}) is to be included under the integrand of the representation of the function (\ref{1.31b}) that generalizes Eq. (\ref{IR.01}).
In subsection \ref{mcontact}, we present a brief verification that this prescription matches the result of the appropriate application of the $n-v$ replacements (\ref{KEY.01}) with $f_{k}=1$. \subsection{The $\bar{\cal R}_{b}^{-1}-$deformations of the elementary $2n-$point functions} \label{bdeform} Let us introduce the effective functions $\tilde{V}_{U_{\theta}(1)}^{(n)}(\{{\bf y}_{k}\})$ in a way that makes manifest the relations between those functions which are parameterized by the elementary graphs with a given $rv-$assignment. For this purpose, we first get rid\footnote{It is admissible because the effective amplitude (\ref{FR.01w}) anyway involves the contour integrals over the $2n$ temporal coordinates $x^{2}(\cdot)$ of the lines' end-points which define the set $\{{\bf y}_{k}\}$. Also, the lines are labeled in Appendix \ref{enumer1} so that the fourth line, being present only in the $r=v=1$ cases, is the $\bar{\cal R}_{b}^{-1}-$copy of the first line. As for the third line, being present only in the $j=2$ cases, it is {\it not}\/ involved in the $S(4)-$reattachments, and neither is the second line (present in all cases).} of the $n-2=r+j-1$ $\delta(\cdot)-$functions (defined by the $G=1$ Eq.~(\ref{1.40kk})), starting with the $(r+j-1)-$fold integral \begin{equation} \int\limits_{-T}^{T} d^{j-1}t^{(\gamma)}_{3}~d^{r}t^{(\gamma)}_{4}~ \tilde{V}_{U_{\theta}(1)}^{(n)}(\{{\bf y}_{k}\})\Big|^{\gamma}_{jrv}= J_{jrv}\left[\frac{(-1)^{\omega^{(\gamma)}_{3}-1}\partial} {\partial \tau_{q(3)+\omega^{(\gamma)}_{3}}}\right]^{j-1} \left[\frac{(-1)^{\omega^{(\gamma)}_{2}-1}\partial} {\partial \tau_{q(2)+\omega^{(\gamma)}_{2}}}\right]^{v} \tilde{\cal V}^{(\gamma)}_{jrv}(\{a_{i}\},\{\Delta \tau_{q(k)}\}), \label{SR.01f} \end{equation} where $J_{jrv}=(-1)^{v+j-1}{\sigma^{2+r-v}}/{(2\pi\theta)^{2}}$, $\omega^{(\gamma)}_{k}=0,1$ is introduced in subsection \ref{completeness} on the basis of Eq.
(\ref{KEY.01b}), and (for the sake of generality) we temporarily formulate the integration in terms of the relative times (\ref{FU.02}) (rather than $x^{2}(\cdot)$), postulating that $\int d^{0}x \tilde{V}(y)=\tilde{V}(y)$. Then, $$ \tilde{\cal V}^{(\gamma)}_{jrv}(\{a_{i}\},\{\Delta \tau_{q(k)}\})= \int d\zeta d\eta~ e^{i\left(\eta t^{(\gamma)}_{1}-\zeta t^{(\gamma)}_{2}\right){\cal C}_{21}/\theta}~ \tilde{\cal K}_{rv}(\zeta,\eta,\{a_{i}\})~\times $$ \begin{equation} \times \left[{\cal F}(\alpha^{(1)}\zeta,\Delta \tau_{q(1)})\right]^{1-v} {\cal F}(\alpha^{(2)}\eta,\Delta \tau_{q(2)})~ \left[{\cal F}(\alpha^{(3)}T_{{\cal C}_{ij}}(\eta,\zeta),\Delta \tau_{q(3)})\right]^{j-1} \left[{\cal F}(\alpha^{(1)}\zeta,\Delta \tau_{q(4)})\right]^{r}~, \label{SR.01g} \end{equation} with\footnote{Actually, the quantities $\alpha^{(k)}\equiv \alpha^{(k)}(\{a_{k}\})$, ${\cal C}_{ij}\equiv {\cal C}_{ij}(\{a_{k}\})$, and $t^{(\gamma)}_{p}\equiv t^{(\gamma)}_{p}(\{a_{k}\})$ implicitly depend (see Appendix \ref{symmetry3}) on the set $\{a_{k}\}$ of the variables introduced after Eq. (\ref{ST.01}).} $\alpha^{(k)}$, ${\bf y}_{k}$, ${\cal C}_{ij}$, and $t^{(\gamma)}_{p}$ being given by Eqs. (\ref{GP.01}), (\ref{1.38aa}), (\ref{1.34}), and (\ref{FU.02}) respectively, while \begin{equation} \tilde{\cal K}_{rv}(\zeta,\eta,\{a_{i}\})=|a_{1}R+\zeta|~ |a_{2}R+\eta|^{1-v}|a_{4}R+\alpha^{(1)}\zeta|^{r}~~~~,~~~~y^{1}_{k}=a_{k}R~, \label{SR.01s} \end{equation} \begin{equation} {\cal F}(\eta,\Delta \tau_{q(k)})= exp\left(-\sigma|R+\eta|\Delta \tau_{q(k)}\right)~, \label{SR.01a} \end{equation} where $\Delta \tau_{q(k)}$ is the same $k\rightarrow q(k)$ option of the time-interval (\ref{FA.02w}) as in the $f_{k}=1$ variant of Eq. (\ref{KEY.01}), and \begin{equation} T_{{\cal C}_{ij}}(\eta,\zeta)= ({\cal C}_{31}/{\cal C}_{21})\eta-({\cal C}_{32}/{\cal C}_{21})\zeta~, \label{1.24h} \end{equation} with $|{\cal C}_{21}|=1$. Finally, Eq.
(\ref{SR.01f}) is to be augmented by the $r+j-1$ constraints (imposed by the thus resolved $\delta-$functions of Eq. (\ref{1.40kk})) which result in the relations \begin{equation} (j-1)\left(t^{(\gamma)}_{3}- T_{{\cal C}_{ij}}(t^{(\gamma)}_{2},t^{(\gamma)}_{1})\right)=0~~~~~,~~~~~ r\left(t^{(\gamma)}_{1}-\alpha^{(1)}t^{(\gamma)}_{4}\right)=0~, \label{SR.01y} \end{equation} where the second condition yields (when $r=1$) the implementation of the general constraint (\ref{FA.09}), while the first one will be interpreted geometrically in Appendix \ref{dMUL.04y}. It is also noteworthy that the r.h. side of Eq. (\ref{SR.01f}) depends on $\gamma$ {\it only}\/ through the $\gamma-$dependent quantities $\omega^{(\gamma)}_{p}$ together with the $\gamma-$dependent decomposition (\ref{FU.02}) of the parameters $t^{(\gamma)}_{1}$ and $t^{(\gamma)}_{2}$ entering Eq. (\ref{SR.01g}). \subsection{Relation to the elementary $2n-$point functions} \label{mcontact} Before we discuss how to reinterpret the partial integrand of the effective function (\ref{SR.01f}) in compliance with the replacement (\ref{KEY.01}), the intermediate step is to establish the relation between the latter function and its counterpart associated to the corresponding elementary graph. Also, we point out a preliminary indication of the relevance of the parameterization in terms of the protographs. For this purpose, we first take into account that, as will be proved in Appendix \ref{dMUL.04y}, in the r.h. side of Eq. (\ref{SR.01f}) the partial derivative $\partial/\partial \tau_{q(p)+\omega^{(\gamma)}_{p}}$ acts only on the $p$th factor (\ref{SR.01a}) (with $p=3-v,3$) of the expression (\ref{SR.01g}). In consequence, these derivatives merely insert, under the integrand, the $j-1+v$ factors $-\sigma|R+{\cal G}_{k}(\zeta,\eta)|$ entering the exponent of Eq.
(\ref{SR.01a}): \begin{equation} \left[\frac{(-1)^{\omega^{(\gamma)}_{3}-1}\partial} {\partial \tau_{q(3)+\omega^{(\gamma)}_{3}}}\right]^{j-1} \left[\frac{(-1)^{\omega^{(\gamma)}_{2}-1}\partial} {\partial \tau_{q(2)+\omega^{(\gamma)}_{2}}}\right]^{v} ~~~ \longrightarrow~~~(-\sigma)^{j-1+v}|R+T_{{\cal C}_{ij}}(\eta,\zeta)|^{j-1} |R+\eta|^{v}~, \label{VAR.02} \end{equation} where we have used that $\alpha^{(2)}=1$ when $v=1$, while $\alpha^{(3)}=1$ when $j=2$. Once the replacement (\ref{VAR.02}) is performed, the general rule states that, for any admissible $n=1+j+r$, the integral representation of a given elementary $2n-$point function ${V}_{U_{\theta}(1)}^{(n)}(\{{\bf y}_{k}\})$ can be deduced from the corresponding effective one $\tilde{V}_{U_{\theta}(1)}^{(n)}(\{{\bf y}_{k}\})$ through the replacement \begin{equation} {\cal F}(\cdot,\Delta \tau_{q(k)})~\longrightarrow~1~~,~~\forall{k} ~~~~~~~\Longrightarrow~~~~~~~ \tilde{V}_{U_{\theta}(1)}^{(n)}(\{{\bf y}_{k}\})~\longrightarrow~ {V}_{U_{\theta}(1)}^{(n)}(\{{\bf y}_{k}\})~, \label{RED.01} \end{equation} with ${\cal F}(\cdot)$ being defined by Eq. (\ref{SR.01a}). In particular, (provided ${\cal C}_{12}=-1$) the reduced option (\ref{RED.01}) of the $r=v=j-1=0$ implementation of the function (\ref{SR.01f}), being associated to the diagrams of fig. 1, assumes the form of the integral representation of the $f_k ({\bf z})\rightarrow D_{22}({\bf z}),~ k=1,2$ implementation of the star-product (\ref{IR.01}), where the propagator $D_{22}({\bf z})$ is introduced in Eq. (\ref{1.2}). This is manifest after the identifications: $\zeta\rightarrow{\xi^{1}_{1}},~ \eta\rightarrow{\xi^{1}_{2}}$.
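The reduction (\ref{RED.01}) can be illustrated on a single factor (\ref{SR.01a}) (a sketch in our own notation): letting the dressing interval $\Delta\tau_{q(k)}$ shrink to zero realizes ${\cal F}(\cdot,\Delta\tau_{q(k)})\rightarrow 1$ line by line, collapsing the effective function to the elementary one.

```python
from math import exp

def F(sigma, R, eta, dtau):
    # Eq. (SR.01a): the factor carrying the R_b^{-1}-dressing of one line
    return exp(-sigma * abs(R + eta) * dtau)

assert F(0.5, 1.0, 0.3, 0.0) == 1.0        # Delta-tau -> 0 implements F -> 1
assert 0.0 < F(0.5, 1.0, 0.3, 2.0) < 1.0   # a finite interval suppresses the amplitude
```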
Correspondingly, the $k$th factor $({\cal F}(\cdot,\Delta \tau_{q(k)})-1)$ accumulates the contribution of all admissible $\bar{\cal R}_{b}^{-1}-$copies of the $k$th line so that the interval\footnote{It is noteworthy that, underlying the solvability of the problem, the {\it local}\/ in $\Delta \tau_{q(k)}$ pattern of the factor (\ref{SR.01a}) is traced back to the specific constraints (\ref{FA.09}) imposed by the perturbative amplitudes (\ref{1.31b}).} $\Delta \tau_{q(k)}$ is spanned by the temporal coordinates of the upper (or, equivalently, lower) end-points of the latter copies. Altogether, in thus reduced Eq. (\ref{SR.01f}) the integral representation ${V}_{U_{\theta}(1)}^{(n)}(\{{\bf y}_{k}\})$ of a given $2n$th order elementary graph includes, besides the exponential factor $e^{i\left(\eta t_{1}-\zeta t_{2}\right){\cal C}_{21}/\theta}$ (inherited from Eq. (\ref{IR.01})) and the $G=1$ product (\ref{1.40kk}) of $n-2$ different $\delta(\cdot)-$functions, the product \begin{equation} \tilde{\cal K}_{n+2}(\zeta,\eta,R)= \prod_{i=1}^{4}|a_{i}R+{\cal G}_{i}(\zeta,\eta)|^{w_{i}}~~~~~,~~~~~ {\cal G}_{k}(\zeta,\eta)=\tilde{b}_{k}\zeta+\tilde{c}_{k}\eta~, \label{SUB.01j} \end{equation} composed of $n=1+j+r$ factors $|a_{i}R+{\cal G}_{i}(\zeta,\eta)|$ each of which represents (when $w_{i}\neq{0}$, i.e., for $i\in \tilde{\Omega}_{jrv}$) the $i$th propagator of the latter graph so that $w_{1}=w_{2}=1,~w_{3}=j-1=0,1,~w_{4}=r=0,1$ with $\sum_{i=1}^{4}w_{i}=n$. Here, $a_{k}$ is defined in Eq. (\ref{SR.01s}), $\tilde{b}_{q},\tilde{c}_{q}=0,\pm 1$, while $R$ is defined in the footnote prior to Eq. (\ref{ST.01}), and $2\leq n\leq 4$. In turn, it is the $\tilde{\cal K}_{rv}(\cdot)-$part (\ref{SR.01s}) of $\tilde{\cal K}_{n+2}$ which, being associated to the $2+r-v$ lines involved into the $S(4)-$reattachments (when $a_{k}$ assumes both of the admissible values), refers to the protographs. 
The remaining $v+j-1$ lines, corresponding to the $f_{k}=0$ option of the replacement (\ref{KEY.01}), are not affected by the reattachments so that the corresponding $a_{k}$ are equal to unity, which matches the $a_{k}=1$ pattern of the exponents (\ref{SR.01a}) {\it necessarily}\/ associated to all these lines via the replacement (\ref{VAR.02}). In conclusion, it is routine to convert the inverse of the prescription (\ref{RED.01}) into the composition of the replacements (\ref{KEY.01}). In view of Eq. (\ref{VAR.02}), the general pattern (\ref{SR.01f}) is such that any particular effective $2n-$point function $\tilde{V}_{U_{\theta}(1)}^{(n)}(\{{\bf y}_{k}\})$ can be deduced from the associated elementary one through the corresponding option of the replacement \begin{equation} \prod_{k=1}^{4}|a_{k}R+{\cal G}_{k}(\zeta,\eta)|^{w_{k}} ~\longrightarrow~\prod_{k=1}^{4} |a_{k}R+{\cal G}_{k}(\zeta,\eta)|^{w_{k}}~ e^{-\sigma v_{k}|R+{\cal G}_{k}(\zeta,\eta)\alpha^{(k)}| \Delta \tau_{q(k)}}~, \label{SUB.01a} \end{equation} where $v_{1}=1-v=0,1$, $v_{k}=w_{k}$, for $k=2,3,4$ (with $v_{k}\neq 0$ only when $k\in \Omega_{jrv}$, $\sum_{k=1}^{4}v_{k}=n-v$), which matches the pattern of Eq.~(\ref{SR.01g}). Then, it takes a straightforward argument to verify that the substitution (\ref{SUB.01a}) is indeed equivalent to the $f_{k}=1$ prescription (\ref{KEY.01}) applied, with the identification $\alpha^{(4)}=\alpha^{(1)}$, to the corresponding $\Omega_{jrv}-$subset of the $n-v$ perturbative propagators. \section{Integral representation of the effective amplitudes} \label{effective1} At this step, we are ready to obtain the explicit form of the amplitude ${\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})$ which, being introduced in Eq. (\ref{SU.01}), defines the decomposition (\ref{SU.01z}) of the $G=1$ term $<W(\Box)>_{U_{\theta}(1)}^{(1)}$ of the $1/N$ expansion (\ref{CO.01}).
For this purpose, we first put forward the general representation (\ref{FR.01}) of the individual effective amplitudes (\ref{FR.01w}) which, being parameterized by the corresponding elementary graphs, are evaluated {\it non-perturbatively}\/ both in $g^{2}$ and in $\theta$. The latter amplitudes arise when the $2n-$point function $\tilde{V}_{U_{\theta}(1)}^{(n)}(\{{\bf y}_{k}\})$, being multiplied by the factor (\ref{EX.01h}), is integrated over the $n$ pairs of the relative coordinates (\ref{1.38aa}) (defining the set $\{{\bf y}_{k}\}$), all restricted to the contour $C=\Box$. Then, building on this representation, the superposition (\ref{SU.01}) of ${\cal Z}^{(\gamma)}_{jrv}(\cdot)$ is evaluated by collecting together the contributions associated to all the elementary graphs with the same $rv-$assignment. In turn, the specific cancellations taking place between the different terms of the superposition support the pattern of the relevant collective coordinates. In particular, this verifies the representation of $\tilde{V}_{U_{\theta}(1)}^{(n)}(\cdot)$ (discussed in subsection \ref{completeness}) formulated in terms of the properly dressed protographs. We also clarify the relation between the latter dressing and the structure of the collective coordinates.
\subsection{General pattern of the individual effective amplitudes} \label{key} Synthesizing the factors (\ref{EX.01h}) and (\ref{SR.01f}), one concludes that the individual effective amplitudes (\ref{FR.01w}) assume the form $$ (-1)^{\sum_{l\in {\cal S}_{rv}}a_{l}}~ {\cal Z}^{(\gamma)}_{jrv}(\{a_{k}\},\bar{A},\bar{\theta}^{-1})= $$ \begin{equation} =\bar{A}^{2+h_{rv}} \int d^{n+2}\bar{\tau} e^{-\bar{A}\left(\Delta \bar\tau_{0}+\Delta \bar\tau_{n+2}\right)} \left[\frac{(-1)^{\omega^{(\gamma)}_{3}-1}\partial} {\partial \tau_{q(3)+\omega^{(\gamma)}_{3}}}\right]^{j-1} \left[\frac{(-1)^{\omega^{(\gamma)}_{2}-1}\partial} {\partial \tau_{q(2)+\omega^{(\gamma)}_{2}}}\right]^{v} {\cal V}^{(\gamma)}_{jrv}(\{a_{i}\},\{\Delta \bar\tau_{q(k)}\}), \label{FR.01} \end{equation} where the sum in the l.h. side runs over the labels $l$ of the $2+r-v$ lines involved in the $S(4)-$reattachments in the first relation of Eq. (\ref{SU.01}) (with the set ${\cal S}_{rv}$ being specified at the end of subsection \ref{assignment1}), and we introduce the compact notation \begin{equation} \int\limits_{0\leq \bar{\tau}_{k}\leq \bar{\tau}_{k+1}}^{\bar{\tau}_{m}\leq 1} \prod_{l=1}^{m} d\bar{\tau}_{l} ...\equiv \int d^{m}\bar{\tau}...~, \label{MEA.01} \end{equation} for the integrations over the $m=n+2$ ordered times $\bar\tau_{j}$, and, for a rectangular contour $C=\Box$ of the size $R\times T$, it is convenient to utilize the change of variables \begin{equation} \tau_{k}=T\bar{\tau}_{k}~~~~~,~~~~~t_{k}=T\bar{t}_{k}~~~~~,~~~~~ \zeta=R\bar{\zeta}~~,~~\eta=R\bar{\eta}~, \label{BR.02a} \end{equation} which introduces the dimensionless quantities $\bar{\tau}_{k}$ (with $\Delta \bar\tau_{k-1}=\bar\tau_{k}-\bar\tau_{k-1}\geq{0}$), $\bar{\eta}$, $\bar{\zeta}$, and $\bar{t}_{k}$ so that $R^{4+r-v}{\cal V}_{jrv}(\{a_{i}\},\{\Delta \bar\tau_{q(k)}\})= \tilde{\cal V}_{jrv}(\{a_{i}\},\{T\Delta \bar\tau_{q(k)}\})$.
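Since the $m$ times in the measure (\ref{MEA.01}) are ordered, the integration region is a simplex of volume $1/m!$; equivalently, a uniformly drawn $m$-tuple of times is already ordered with probability $1/m!$. A quick numerical sanity check of this normalization (our own illustration, not part of the derivation):

```python
import random

# Sanity check of the ordered measure (MEA.01): the volume of the simplex
# 0 <= tau_1 <= ... <= tau_m <= 1 equals 1/m!, i.e. a uniformly drawn
# m-tuple lands in the ordered region with probability 1/m!.
random.seed(0)

m, trials = 3, 200_000
hits = 0
for _ in range(trials):
    taus = [random.random() for _ in range(m)]
    if all(taus[i] <= taus[i + 1] for i in range(m - 1)):
        hits += 1

estimate = hits / trials
exact = 1.0 / 6.0  # 1/3! for m = 3
assert abs(estimate - exact) < 0.01
```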
In particular, this change makes it manifest that, besides the dependence on $\{a_{i}\}$ and $\bar{\theta}=\sigma\theta$, the considered effective amplitudes are certain functions of the dimensionless area of the rectangle $C$ \begin{equation} \bar{A}=\sigma A(C)\Big|_{C=\Box}=\sigma RT \label{BR.02} \end{equation} rather than of $R$ and $T$ separately. Note also that in the r.h. side of Eq. (\ref{FR.01}) the $m=n+2$ species of the $d\bar\tau_{j}-$integrations reproduce\footnote{To make use of Eq. (\ref{SR.01f}), we utilize the fact that, owing to the $G=1$ pattern (\ref{1.40kk}), the integrations $\int d^{j-1}t_{3}~d^{r}t_{4}$ can be reformulated as $r+j-1$ integrations with respect to the temporal coordinates $x^{2}(\cdot)$ of those end-points which are {\it not}\/ involved in the definition of $\tau_{k}$.}, according to the discussion of subsection \ref{ordering}, the $2n-$fold contour-integral (\ref{FR.01w}) which runs over the time-coordinates $dx^{2}(s_{l})$ and $dx^{2}(s'_{l})$ constrained by the $G=1$ product (\ref{1.40kk}) of the $\delta(\cdot)-$functions. Correspondingly, this transformation of the measure has a Jacobian equal to $(-1)^{n_{jrv}^{-}}$, where $n_{jrv}^{-}$ and $n_{jrv}^{+}$ denote the numbers of the line's end-points attached, for a given elementary graph, respectively to the lower and upper horizontal side of the rectangle $C=\Box$ so that \begin{equation} (-1)^{n_{jrv}^{-}}=\prod_{i\in \tilde{\Omega}_{jrv}}(-1)^{a_{i}}~~~~~~~,~~~~~~~ \frac{1}{2}(n_{jrv}^{+}+n_{jrv}^{-})=n_{jrv}\equiv n= \sum_{k\in \tilde{\Omega}_{jrv}}1~, \label{SIG.01} \end{equation} where $a_{k}$ is defined in Eq. (\ref{SR.01s}), while the set $\tilde{\Omega}_{jrv}$ is specified at the end of subsection \ref{assignment1}. In turn, to justify the $\{a_{k}\}-$dependent sign-factor in the l.h. side of Eq.
(\ref{FR.01}), it remains to notice that $\sum_{k\in \tilde{\Omega}_{jrv}}a_{k}=v+j-1+\sum_{l\in {\cal S}_{rv}}a_{l}$ since $a_{k}=1$ for $\forall{k}\in \tilde{\Omega}_{jrv}/{\cal S}_{rv}$. \subsection{Derivation of the combinations ${\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})$} \label{tuning} The effective amplitudes (\ref{FR.01}), parameterized by the individual elementary graphs, are still intermediate quantities. To say the least, for generic $\bar{A}$, they are {\it singular}\/ for $\bar{\theta}^{-1}\rightarrow{0}$. To arrive at amplitudes which are already continuous in $\bar{\theta}^{-1}$ in a vicinity of $\bar{\theta}^{-1}=0$, our aim is to evaluate the combinations (\ref{SU.01}) of the latter amplitudes entering the decomposition (\ref{SU.01z}). Then, to reveal the cancellations between different terms of the sum (\ref{SU.01}), in the r.h. side of Eq. (\ref{FR.01}) one is to perform $v+j-1$ integrations to get rid of the corresponding number of partial derivatives (employing that $0<q(k)<n+2$ for $\forall{k}=1,..,n$). In Appendix \ref{dMUL.04y}, it is shown that, matching the prescription (\ref{KEY.01}) formulated in subsection \ref{completeness}, a straightforward computation yields \begin{equation} {\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})=\bar{A}^{2+h_{rv}} \sum_{\{a_{l}\}_{rv}}(-1)^{\sum_{l\in {\cal S}_{rv}}a_{l}} \int d^{2+h_{rv}}\bar\tau~ e^{-\bar{A}\left(\Delta \bar\tau_{0}+\Delta \bar\tau_{2+h_{rv}}\right)}~ {\cal V}_{2rv}(\{a_{k}\},\{\Delta \bar{T}^{b}_{k}\})~, \label{MUL.04y} \end{equation} where the sum over $a_{l}$ is the same as in Eq. (\ref{SU.01}), $\tilde{\cal V}_{2rv}(\cdot)\equiv{\tilde{\cal V}^{(1)}_{2rv}(\cdot)}$ is defined in Eq. (\ref{SR.01g}), and we have omitted the subscript $\gamma=1$ since, in view of Eq. (\ref{SU.01l}), $\gamma$ assumes a single value for $j=2$ irrespective of the values of $r$ and $v$.
Note that, in the exponent, $\Delta \bar\tau_{n+2}$ is replaced by $\Delta \bar\tau_{2+h_{rv}}\equiv T^{-1}\Delta T^{a}_{2+h_{rv}}$ while, in the quantity ${\cal V}_{2rv}(\cdot,\cdot)$, the set $\{\Delta \bar\tau_{q(k)}\}$ is superseded by $\{\Delta \bar{T}^{b}_{k}\}\equiv\{\Delta \bar{T}^{b}_{k}(f_{k},1)\}$, where the intervals $\Delta \bar{T}^{b}_{k}(f_{k},1)= T^{-1}\Delta T^{b}_{k}(f_{k},1)$, being constrained by the condition (\ref{KEY.01a}), are introduced in Eq. (\ref{KEY.01b}). Altogether, omitting the subscripts $a$ and $b$, the relevant $3+h_{rv}$ intervals $\Delta \bar{T}_{i}=\bar\tau_{i+1}-\bar\tau_{i}\geq 0$ are expressed through $2+h_{rv}$ ordered quantities $\bar\tau_{i}$ characterized by the $m=2+h_{rv}$ option of the measure (\ref{MEA.01}). Finally, according to Appendices \ref{dMUL.04} and \ref{symmetry3}, the r.h. side of Eq. (\ref{MUL.04y}) can be rewritten in the form \begin{equation} {\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})= \bar{A}^{2+h_{rv}} \int d^{2+h_{rv}}\bar\tau \int\limits_{-\infty}^{+\infty}d\bar\zeta d\bar\eta~ e^{i\left(\bar\eta \bar{t}_{1}- \bar\zeta \bar{t}_{2}\right)\bar{A}/\bar{\theta}}~ {\cal K}_{rv}(\bar\zeta,\bar\eta)~ {\cal Y}_{rv}(\bar\zeta,\bar\eta,\{\Delta \bar\tau_{k}\}), \label{MUL.04} \end{equation} where $\bar{t}_{p}\equiv\bar{t}^{(1)}_{p}$, $p=1,2$, \begin{equation} {\cal Y}_{rv}(\cdot)= e^{-\bar{A}\left(\Delta \bar\tau_{0}+\Delta \bar\tau_{2+h_{rv}}\right)}~ \exp\left(-\bar{A}\left[(1+r-v)|1-\bar\zeta|\Delta \bar\tau_{3}+ |1+\bar\eta|\Delta \bar\tau_{1+v} +|1-\bar\zeta+\bar\eta|\Delta \bar\tau_{2-v}\right]\right)~, \label{MUL.02a} \end{equation} ${\cal K}_{rv}(\cdot)$ is given by Eq. (\ref{MUL.01a}), and the sum over $\{e_{k}\}$ supersedes the one over $\{a_{l}\}_{rv}$ (combining four different implementations ${\cal Z}_{rv}(\{a_{k}\},\cdot)$) so that the dressing-weight ${\cal Y}_{rv}(\cdot)$ is manifestly $S(4)-$invariant, i.e., $\{e_{k}\}-$independent.
For the particular $\{a_{k}\}-$assignments, ${\cal Z}_{rv}(\{a_{k}\},\cdot)$ is diagrammatically depicted in figs. 5c ($a_{1}=a_{2}=0$), 8f ($a_{1}=0$), and 9f ($a_{1}=a_{4}=0$) which are associated to $r=v=0$, $r=v-1=0$, and $r=v=1$ respectively. Let us also note that the representation (\ref{MUL.04}) readily allows one to demonstrate that, for $\forall{\bar{A}}>0$, ${\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})$ is indeed continuous in $\bar{\theta}^{-1}$ in a vicinity of $\bar{\theta}^{-1}=0$. This property, implied in the transformation of Eq. (\ref{SU.01z}) into Eq. (\ref{CO.03f}), will be explicitly derived in \cite{ADM05b}. \subsection{A closer look at the pattern of the collective coordinates} \label{colcoor} In conclusion, let us clarify the following subtlety concerning the pattern of the collective coordinates relevant for the dressing of the $rv-$protograph. The point is that, in the $v=1$ Eq. (\ref{MUL.04}), both the measure $d^{2+h_{rv}}\bar\tau$ and the relative time $\bar{t}_{2} \equiv \bar{t}^{(\gamma)}_{2}$ cannot be fully determined solely on the basis of the configuration of the $rv-$protograph itself (postulated to be constrained, in the $r=1$ case, by the second of the conditions (\ref{SR.01y})). The general reason is traced back to the fact that the $v=1$ protographs are not of genus-one and, therefore, their dressing necessarily encodes certain structure inherited from the associated $j=2$ elementary diagrams. In consequence, the above measure includes integration over one more parameter\footnote{This parameter supersedes, after the two integrations (over $\tau_{q(p)+\omega^{(1)}_{p}}$ with $p=1,2$), the parameter $\bar\tau_{3+r}$ defined by Eq.
(\ref{REP.04}) in the case of the $j=2$ amplitude (\ref{FR.01}).} $\bar\tau_{2+r}$ in addition to the $2h_{rv}-r=2-v+h_{rv}$ parameters which are directly identified (see Appendices \ref{dMUL.04y} and \ref{freedom} for the details) with the independent temporal coordinates of the end-points of the protographs' lines: \begin{equation} \int d^{2+h_{rv}}\bar\tau~...= \int d^{2-v+h_{rv}}\bar\tau \int d^{v}\bar{t}_{2}...~, \label{CLO.01} \end{equation} where we have used that $\bar{t}_{2}=\bar\tau_{2+r}-\bar\tau_{1}$ (with $\tau_{1}=x^{2}(s'_{1})$ as it is depicted in figs. 8f and 9f). Then, as it is discussed in Appendix \ref{freedom}, the presence of $\tau_{2+r}$ is tightly related to the first of the constraints (\ref{SR.01y}) fulfilled by the three parameters $t_{p},~p=1,2,3$. In turn, as it is sketched in Appendix \ref{dMUL.04y}, the latter constraints underlie the completeness condition (\ref{KEY.01a}) for $\forall{r,v}$. Note also that the reduction $\int d^{2+n}\bar\tau~.. \rightarrow \int d^{4+r-v}\bar\tau~..$ of the relevant measure, formalized by the transition from the combination of the individual amplitudes (\ref{FR.01}) to Eq. (\ref{MUL.04y}), entails the relevant $f_{k}=0$ replacements (\ref{KEY.01}) applied to the $j=2$ Eq. (\ref{FR.01}). Indeed, the latter replacements result from the integration over the $n-(2+r-v)|_{j=2}=v+1$ parameters $\tau_{q(p)+\omega^{(\gamma)}_{p}}$ (with $p=3-v,3$), in the process of which the corresponding intervals $\Delta \tau_{q(k)}$ vary in the domains $[0, \Delta T^{b}_{k}(0,\gamma)]$ (see Appendix \ref{dMUL.04y} for more details). \section{The large $\theta$ limit} \label{prescr} At this step we are ready to put forward the prescription (\ref{LA.02c}) to implement the large $\theta$ limit in Eq. (\ref{MUL.04}). By virtue of the $1/\theta^{2}$ factor in front of the sum in the r.h. side of Eq.
(\ref{CO.03f}), the asserted large $\theta$ scaling $<W(\Box)>_{U_{\theta}(1)}^{(1)}\sim \theta^{-2}$ is a consequence of the important property of the combinations ${\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})$. For any finite $\bar{A}\neq 0$, the relevant large $\theta$ limit (\ref{LI.01}) can be implemented directly through the substitution \begin{equation} e^{i\left(\bar\eta \bar{t}_{1}-\bar\zeta \bar{t}_{2}\right)\bar{A} /\bar{\theta}}~\longrightarrow~1~~~\Longrightarrow~~~ {\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})~\longrightarrow~ {\cal Z}_{rv}(\bar{A},0)~, \label{LA.02c} \end{equation} to be made in the integrand of the representation (\ref{MUL.04}) of the quantity ${\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})$, which replaces the latter quantity by its reduction ${\cal Z}_{rv}(\bar{A},0)$. In turn, provided Eq. (\ref{CO.03}) is valid, the prescription (\ref{LA.02c}) yields the integral representation (\ref{CO.03f}) for the next-to-leading term of the $1/\theta$ expansion (\ref{CO.02}) (with $<{\cal W}(\Box)>_{N}^{(1)}=0$). The self-consistency of the deformation (\ref{LA.02c}) is maintained provided ${\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})$ is continuous in $\bar{\theta}^{-1}$ in a vicinity of $\bar{\theta}^{-1}=0$. In turn, it can be shown that, for $\bar{A}\neq 0$, the latter property is valid provided this deformation does not violate the {\it convergence}\/ of the $(m+2)-$dimensional integral over $\bar{\tau}_{k}$, $\bar\zeta$, and $\bar\eta$ defining the representation (\ref{MUL.04}) of ${\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})$, where $m=2+h_{rv}$. To demonstrate the convergence, it is convenient first to get rid of the explicit $m-$dimensional ordered integration over $\bar{\tau}_{j}$.
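The substitution (\ref{LA.02c}) is, in essence, a dominated-convergence statement: once the integrand with the phase removed is absolutely integrable, the oscillatory factor can be dropped as $\bar{\theta}^{-1}\rightarrow 0$. A toy numerical illustration of ours, with a single exponentially damped variable standing in for the full integrand of (\ref{MUL.04}):

```python
import math
from scipy.integrate import quad

# Toy model of the limit (LA.02c): for an absolutely integrable damping
# factor exp(-|eta|), the phase exp(i*eta*t/theta) can be dropped as
# theta -> infinity.  The exact answer is 2/(1 + (t/theta)^2) -> 2.
def damped_phase_integral(t, theta):
    # real part of int_{-inf}^{inf} e^{i eta t/theta} e^{-|eta|} d eta;
    # the integrand is even in eta, so integrate over [0, inf) and double
    val, _ = quad(lambda eta: math.cos(eta * t / theta) * math.exp(-eta),
                  0.0, math.inf)
    return 2.0 * val

t, theta = 1.0, 1e4
limit_value = damped_phase_integral(t, theta)
exact = 2.0 / (1.0 + (t / theta) ** 2)
assert abs(limit_value - exact) < 1e-6   # matches the closed form
assert abs(limit_value - 2.0) < 1e-6     # already at its theta->inf limit
```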
For this purpose, it is useful to perform the Laplace transformation of ${\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})$ with respect to the dimensionless area (\ref{BR.02}), which results in \begin{equation} \tilde{\cal Z}_{rv}(\beta,\bar{\theta}^{-1})=\int_{0}^{+\infty} d\bar{A}~{\cal Z}_{rv}(\bar{A},\bar{\theta}^{-1})~e^{-\beta \bar{A}}~. \label{LA.02} \end{equation} The advantage of this trick is that, in the integral representation of the image $\tilde{\cal Z}_{rv}(\beta,\bar{\theta}^{-1})$, the $\bar{\tau}_{j}-$integrations can be easily performed using the general relation \begin{equation} \prod_{j=0}^{m}\frac{1}{\beta+B_{j}}= \int\limits_{0}^{+\infty} d\bar{A}~ e^{-\beta \bar{A}}\int\limits_{0\leq \breve{\tau}_{k}\leq \breve{\tau}_{k+1}}^{\breve{\tau}_{m}\leq \bar{A}} \prod_{k=1}^{m} d\breve{\tau}_{k}\prod_{j=0}^{m} \exp\left(-B_{j}\Delta\breve{\tau}_{j}\right)~, \label{LA.01} \end{equation} where $\breve{\tau}_{j}$ is to be identified with $\bar{A}\bar{\tau}_{j}$, while $\Delta\breve{\tau}_{j-1}=\breve{\tau}_{j}-\breve{\tau}_{j-1}$ with $\breve{\tau}_{0}\equiv{0}$ and $\breve{\tau}_{m+1}\equiv{\bar{A}}$. In particular, in this way one proves that the Laplace image $\tilde{\cal Z}_{rv}(\beta,0)$ of the large $\theta$ asymptote ${\cal Z}_{rv}(\bar{A},0)$ of the amplitude (\ref{MUL.04}) assumes the form (\ref{LA.03}). As for the self-consistency of the prescription (\ref{LA.02c}), it can be verified provided the double integral (\ref{LA.03}) is convergent for $\forall{\beta}>0$ so that $\tilde{\cal Z}_{rv}(\beta,\bar{\theta}^{-1})$ is continuous in $\bar{\theta}^{-1}$ in a vicinity of $\bar{\theta}^{-1}=0$. A direct inspection verifies that the convergence indeed takes place. Also, it should be stressed that, due to the infrared singularities of the propagators, the prescription (\ref{LA.02c}) is not applicable directly to each individual perturbative diagram.
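The relation (\ref{LA.01}) is just the statement that the Laplace transform of an iterated convolution of decaying exponentials factorizes. For the lowest case $m=1$ it can be checked numerically (a sketch of ours, with arbitrarily chosen values of $\beta$, $B_{0}$, $B_{1}$):

```python
import math
from scipy.integrate import dblquad

# Numerical check of the m = 1 case of the ordered-time identity (LA.01):
#   1/((beta+B0)*(beta+B1))
#     = int_0^inf dA e^{-beta A} int_0^A dtau e^{-B0 tau - B1 (A - tau)},
# i.e. the Laplace transform of the convolution of two decaying exponentials.
beta, B0, B1 = 1.0, 0.5, 2.0

# dblquad integrates func(y, x): here y = tau runs over [0, A], while
# x = A runs over [0, 60] (the exponential damping makes the tail negligible)
numeric, _ = dblquad(
    lambda tau, A: math.exp(-beta * A - B0 * tau - B1 * (A - tau)),
    0.0, 60.0,
    lambda A: 0.0, lambda A: A,
)
exact = 1.0 / ((beta + B0) * (beta + B1))  # = 2/9
assert abs(numeric - exact) < 1e-6
```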
This property may be inferred from the integral representations of the elementary amplitudes given by the reduction (\ref{RED.01}) of the effective amplitudes considered in subsection \ref{bdeform}. Actually, even the individual effective amplitudes (\ref{FR.01}) are still not suitable for this purpose, which can be traced back to the violation of the completeness condition (\ref{KEY.01a}). It takes certain specific cancellations between the latter amplitudes, resulting in the latter condition, to make the substitution (\ref{LA.02c}) applicable to the combinations (\ref{MUL.04}). We shall continue the discussion of this issue in \cite{ADM05b}. \section{Conclusions} \label{conclus} In the present paper we obtain the exact integral representation (\ref{SU.01z}) of the next-to-leading term $<W(\Box)>_{U_{\theta}(1)}^{(1)}$ of the $1/N$ expansion (\ref{CO.01}) of the average in the $D=2$ gauge theory (\ref{1.7}). It provides the rigorous non-perturbative\footnote{It is specifically important in the large $\theta$ limit (\ref{LI.01}), where the truncated perturbative series of $<W(C)>_{U_{\theta}(1)}^{(G)}$ is shown \cite{ADM04} to result in a false asymptotic $\theta-$scaling that is supposed to take place not only for $D=2$ but for $D=3,4$ as well.} computation made, from first principles, in the noncommutative gauge theory. The Laplace image (\ref{LA.02}) of the large $\theta$ asymptote of $<W(\Box)>_{U_{\theta}(1)}^{(1)}$ assumes the particularly concise form (\ref{CO.03f}). In turn, the latter asymptote is argued to be directly related (\ref{CO.03}) to the next-to-leading term of the $1/\theta$ expansion (\ref{CO.02}) of $<W(C)>_{U_{\theta}(1)}$. It is noteworthy that the considered asymptote reveals a power-like decay which is in sharp contrast with the exponential area-law asymptote (\ref{1.41b}) valid in the leading order of the $1/N-$ (or, equivalently, $1/\theta-$) expansion.
Furthermore, as the origin of the power-like decay can be traced back to the (infinite, in the limit $\theta\rightarrow{\infty}$) nonlocality of the star-product, a similar decay is supposed to persist for all $G\geq{1}$ subleading\footnote{Contrary to the $G\geq{1}$ terms, the leading $G=0$ term is insensitive to the star-product structure, which matches its $\theta-$independence (\ref{1.41b}).} terms $<W(C)>_{U_{\theta}(1)}^{(G)}$ of the large $\theta$ $1/N$ expansion. In consequence, it precludes a straightforward extension of the stringy representation of the latter expansion in the spirit of the Gross-Taylor proposal \cite{Gr&Tayl} formulated for the commutative $D=2$ gauge theories. Another subtlety, concerning a possible stringy reformulation of the noncommutative observables, is that the noncommutative gauge invariance is also maintained \cite{Ish} for certain combinations of the Wilson lines associated to the {\it open}\/ contours $C=C_{xy}$ with ${\bf x}\neq{\bf y}$. Nevertheless, the optimistic point of view could be that all these subtleties may suggest a hint for a considerable extension of the stringy paradigm conventionally utilized in the context of two-dimensional gauge (or, more generally, matrix) systems. As the methods developed here are general enough, we hope that our analysis makes a step towards a derivation of an arbitrary two-dimensional average $<W(C)>_{U_{\theta}(1)}$. Most straightforwardly, they can be applied to consider the $G=1$ term of the average (\ref{1.1}) for a generic rectangular contour $C=\Box$ with a nontrivial number $n\geq{2}$ of windings. E.g., it would be interesting to adapt the pattern (\ref{FR.01}) to the case when $n\gg{1}$ and estimate its asymptotic dependence on $n$. Also, the $G\geq{2}$ terms $<W(\Box)>_{U_{\theta}(1)}^{(G)}$ could in principle be evaluated akin to the $G=1$ case, which is expected to lead to a generalization of Eq. (\ref{FR.01}).
In particular, we expect that there should be $2G$ parameters $\zeta_{q}$, $\eta_{q}$ with $q=1,...,G$, while the factor in front of the integral becomes $\bar{A}^{m}/ (\sigma\theta)^{2G}$. A more subtle open question is to generalize our approach to a (non-self-intersecting) contour of a generic geometry. In the commutative $\theta=0$ case, the crucial simplification takes place by virtue of the invariance of the partition function under the group of (symplectic) area-preserving diffeomorphisms which guarantees that $<W(C)>_{U(1)}$ depends only on the area $A(C)$ irrespective of the form of $C$. On the other hand, the representation (\ref{1.1}) does not make manifest whether there is a symmetry that relates the averages $<W(C)>_{U_{\theta}(1)}$ with different geometries of the contour $C$. Furthermore, the lowest order perturbative computation \cite{ADM04} indicates that the symplectic invariance may be lost in the non-commutative case. Nevertheless, the explicit $A(\Box)-$ (rather than twofold $R-$ and $T-$) dependence of the derived $G=1$ term $<W(C)>_{U_{\theta}(1)}^{(1)}$ looks like a promising sign. Also, it would be interesting to make contact with the noncommutative loop equations \cite{ANMS99,LE/NCYM} which might be an alternative approach to the above problems. Finally, among other new questions raised by the present analysis, we would like to mention the following one, important in the context of the $D=4,3$ noncommutative Yang-Mills theory (\ref{1.7}). We conjecture that in this case the minimal area-law asymptote, presumably valid for a generic closed fundamental Wilson loop in the $N\rightarrow\infty$ limit, fades away at the level of the subleading $G\geq{1}$ terms similarly to what happens in the $D=2$ case. \subsection*{Acknowledgments} This work was supported in part by the grant INTAS--00--390. The work of J.A.\ and Y.M.\ was supported in part by the Danish National Research Foundation.
A.D.\ and Y.M.\ are partially supported by the Federal Program of the Russian Ministry of Industry, Science and Technology No 40.052.1.1.1112 and by the Federal Agency for Atomic Energy of Russia. \vspace{24pt} \section*{Appendices} \setcounter{equation}{0} \section{Elementary graphs and their deformations} \label{enumer1} To complete the discussion of subsections \ref{assignment1} and \ref{gamspecif}, let us first explicitly separate, for any given $jrv-$assignment, the elementary graphs with the maximal number of horizontal lines and sketch the pattern of their $\bar{\cal R}_{b}^{-1}-$deformations. Also, we note that the $S(4)\otimes S(2)-$transformations can be consistently applied to the latter graphs both prior to and after the ${\cal R}_{a}^{-1}\otimes\bar{\cal R}_{b}^{-1}-$dressing. In the $r=v=0$ case when $f_{jv}|_{v=0}=1$, the two $j=1$ and $j=2$ $S(4)-$multiplets can be generated from the graphs in figs. 1a, 1b and 2c, 2d respectively so that, for each $j$, the two corresponding figures may be related via the reflection mutually interchanging the horizontal sides of $C=\Box$. (When $j=2$, we take into account that both the elementary graphs in figs. 2a, 2b and all their ${\cal R}_{a}^{-1}\otimes\bar{\cal R}_{b}^{-1}-$deformations are assigned with {\it vanishing} amplitudes (\ref{1.31b}).) As for the parameterization of the lines, the left and the right horizontal lines in figs. 1a and 2c are assigned with the labels $1$ and $2$ respectively so that ${\cal C}_{21}=1$. The remaining nonhorizontal line in fig. 2c acquires the label $3$. As for the $\bar{\cal R}_{b}^{-1}-$dressing, it applies to all $n=j+1$ lines of the considered $r=v=0$ elementary graphs. Being depicted by the corresponding bunch of (nonvertical) parallel dotted lines, these $\bar{\cal R}_{b}^{-1}-$dressed graphs are described by figs. 5a (with $j=1$) and 5b (with $j=2$). In the $r=v-1=0$ case, the graphs with $2+r-v=1$ horizontal line are depicted by bold lines in figs.
8a--8c, where the horizontal line is assigned with the label $1$, with the nonhorizontal line(s) being parameterized by the label(s) $2,1+j$ so that ${\cal C}_{23}=1$ when $j=2$. The constraint separating these $v=1$ components is that the $j+1$ end-points at the lower side belong to the time-interval bounded by the end-points of the remaining horizontal line attached to the upper side. In turn, to make the $\bar{\cal R}_{b}^{-1}-$dressing of the latter graphs unambiguous, in the $v=1$ case we should specify those of their lines which are accompanied by their $\bar{\cal R}_{b}^{-1}-$deformations. For $r=v-1=0$, all of the lines possess their individual dressings except for the single line (assigned with the label 1), involved in the vertical reattachments, which is {\it not} dressed: see figs. 8a, 8b (with $j=1$) and 8c (with $j=2$). As previously, the $\bar{\cal R}_{b}^{-1}-$dressing of a given nonhorizontal (bold) line is depicted by the bunch of (nonvertical) dotted lines which, in the $1+r=v=1$ case at hand, are all parallel to the latter line. In the remaining $r=v=1$ case, the graphs with $2+r-v=2$ horizontal lines are given by the entire decomposition of the Feynman diagrams in figs. 7a (with $j=1$) and 7e (with $j=2$) into the time-ordered components parameterized by $j=1,2$ and $\gamma=1,f_{jv}$. In turn, for a given $j$ and $\gamma$, these components can be collected into the pairs which are comprised of the two graphs related via the reflection mutually interchanging the horizontal (or, equivalently, vertical) sides of $C=\Box$. Correspondingly, the labels 1 and 4 are assigned to the left and right horizontal lines, while (in the case when the first line is attached to the upper side of $C$) the remaining two nonhorizontal lines are parameterized similarly to the corresponding figs. 8a--8c.
Concerning the pattern of the $\bar{\cal R}_{b}^{-1}-$dressing, all of the lines possess their individual dressings except for the two horizontal lines (assigned with the labels 1 and 4 respectively). As is clear from figs. 9a, 9b (with $j=1$) and 9c (with $j=2$), the latter two lines share the same $\bar{\cal R}_{b}^{-1}-$dressing which, in Eq. (\ref{SR.01g}), is formally associated to the fourth line. Note also that, given these rules, a direct inspection demonstrates that each graph (with $2+r-v$ horizontal lines) is {\it unambiguously} endowed with the unique $\{\alpha^{(k)}\}-$assignment, which matches the aim formulated in subsection \ref{deform1c}. Finally, it is straightforward to reproduce the remaining three members of each $S(4)-$multiplet of the elementary graphs, employing the $S(4)-$reattachments defined in subsection \ref{S(4)}. Then, for $h_{rv}=2$ one readily combines the latter $jrv-$multiplets into the pairs related via the $S(2)-$reflections interchanging the horizontal (or, what is equivalent in the $v=1$ case, vertical) sides of the contour $C$. \setcounter{equation}{0} \section{The $\{\Delta \tau_{q(k)}\}-$assignment} \label{freedom} By virtue of the $S(4)\otimes S(2)-$symmetry implemented in Section \ref{parameter} and Appendix \ref{enumer1}, there is the following short-cut way to introduce the prescription that fixes the $\{\Delta \tau_{q(k)}\}-$assignment (entering Eq. (\ref{KEY.01})) unambiguously for all the effective amplitudes collected into the $S(4)\otimes S(2)-$multiplets. For all inequivalent values of $j$, $r$, and $v$, we first fix the prescription\footnote{In certain cases, this assignment may be imposed in a few alternative ways without changing the corresponding effective amplitude. The prescription fixes this freedom in the $S(4)-$invariant way.} for a single graph in a particular $S(4)\otimes S(2)-$multiplet with a given $\gamma jrv-$assignment.
Then, it is verified that the pattern of the prescription is not changed when adapted to the remaining graphs obtained by employing the $S(4)-$reattachments combined with the $S(2)-$reflections. In turn, given an elementary graph representing such a multiplet, there are two steps to implement the $\{\Delta \tau_{q(k)}\}-$assignment. The first step, discussed in the present Appendix, is to perform such a change of the variables that replaces the $2n$ temporal coordinates\footnote{Recall that $l$ labels the $l$th line of a given graph, $s'_{l}<s_{l}$ for $\forall{l}$, and the proper-time parameterization goes clockwise starting with the left lower corner of $C=\Box$.} $x^{2}(s_{l})$ and $x^{2}(s'_{l})$, constrained by the $G=1$ Eq. (\ref{1.40kk}), by $n+2$ independent parameters $\tau_{i}$. At the second step, one is to determine the function $q(k):~k\rightarrow{q}$. The latter step is established in Appendix \ref{dMUL.04}. \sapp{The $r=v=0$ case} Both of the steps are most straightforward in the case of the $r=v=0$ multiplets, when the realization of the two relevant symmetries of the assignment in question is routine as well. Presuming that $s_{k}\geq s'_{k}$ for $\forall{k}$, the first step can be formalized by the prescription \begin{equation} x^{2}(s'_{k})=\tau_{k}~~~~~,~~~~~x^{2}(s_{k})=\tau_{k+j+1}~~~,~~~k=1,2 ~~~~~,~~~~~(j-1)\left(x^{2}(s'_{3})-{\tau}_{3}\right)=0~, \label{FA.02y} \end{equation} where $x^{\mu}(s'_{k}),~x^{\mu}(s_{k})$ are the end-points of the left ($k=1$) and right ($k=2$) lines in fig. 1a ($j=1$) and 2c ($j=2$). \sapp{The $v=1$ cases} Concerning the $v=1$ cases\footnote{Recall that one is to restrict the admissible positions of the lower end-points of those $j$ nonhorizontal lines which are not involved in the $S(4)-$reattachments. In the $r=v-1=0$ and $r=v=1$ cases, it is fixed by Eqs. (\ref{RE.02a}) and (\ref{INN.04}) correspondingly.}, consider first the $j=1$ graphs which, being depicted by bold lines in figs.
8a, 8b and 9a, 9b, are associated to $r=0$ and $r=1$ respectively, where $\gamma=1$ and $\gamma=2$ are assigned to figs. 8a, 9a and 8b, 9b correspondingly. In all figures, $\tau_{1}$ and $\tau_{4+r}$ should be identified respectively with the temporal coordinates of the leftmost and rightmost end-points of the elementary graph, belonging to the $1+r$ bold lines (defining the associated protograph). Next, the remaining $1+r$ end-points of the latter lines can as well be directly identified with the corresponding parameters $\tau_{i}$, which can be summarized by the equations \begin{equation} x^{2}(s'_{1})=\tau_{1}~~,~~ x^{2}(s_{1})\delta_{1\gamma}+x^{2}(s'_{4})\delta_{2\gamma}=\tau_{4+r}~~,~~ x^{2}(s_{2})=\tau_{1+r+\gamma}~~~,~~~ \delta_{2r}\cdot\left(x^{2}(s_{4})-\tau_{2}\right)=0~, \label{REP.01} \end{equation} where $\delta_{nm}$ denotes the standard Kronecker delta with $\delta_{nn}=1$ and $\delta_{nm}=0$ for $\forall n\neq m$. For a given $n+2=3+j+r$, the direct reidentification (\ref{REP.01}) allows one to define only $n+1$ parameters $\tau_{i}$. The remaining $(n+2)$th parameter $\tau_{4+r-\gamma}$ has to be introduced via the following procedure which is also used to determine the corresponding interval\footnote{In turn, $\Delta \tau_{q(2)}$ is to be identified with the interval spanned by the lower end-point of the second bold line in the process of this parallel transport: $q(2)=2+r$.} $\Delta \tau_{q(2)}$. The proposal is to identify $\tau_{4+r-\gamma}$ with the new position \begin{equation} \left(x^{2}(s_{1})\delta_{1\gamma}+x^{2}(s'_{1})\delta_{2\gamma}\right)+ (-1)^{\gamma}t_{2} =\tau_{4+r-\gamma} \label{REP.02} \end{equation} of the lower end-point of the second bold line resulting after the judicious {\it parallel transport} of this line. Namely, the line is transported, until its upper end-point hits the corresponding end-point of the first bold line, to the right in the $\gamma=1$ case of figs. 8a, 9a and to the left in the $\gamma=2$ case of figs.
8b, 9b. Note also that $\bar\tau_{4+r-\gamma}$ describes the collective coordinates defining the measure (\ref{CLO.01}). Turning to the $j=2$ case of figs. 8c and 9c (both assigned with $\gamma=1$), we first note that the addition of the extra bold line (compared to figs. 8a, 8b and 9a, 9b) results in one more delta-function in the $G=1$ factor (\ref{1.40kk}). In consequence, compared to the associated $j=1$ cases, only a single additional parameter $\tau_{i}$ is introduced which can be directly identified with the temporal coordinate $x^{2}(s_{3})$ of the lower end-point of this extra line (which, being nonhorizontal, is not involved in the reattachments). As for the remaining $n+1=4+r$ parameters $\tau_{k}$, they are defined in a way similar to the previous $j=1$ discussion. Actually, it can be reformulated in a more geometrically transparent way. For this purpose, in all figures, $\tau_{1}$ and $\tau_{5+r}$ should be identified correspondingly with the temporal coordinates of the leftmost and rightmost end-points of the elementary graph depicted by the bold lines. Additionally, the $2+r$ end-points (of the latter lines) can as well be directly identified with the corresponding parameters $\tau_{i}$, which can be summarized in the form \begin{equation} x^{2}(s'_{1})=\tau_{1}~,~ x^{2}(s_{1})\delta_{1\gamma}+x^{2}(s'_{4})\delta_{2\gamma}=\tau_{5+r}~,~ x^{2}(s_{2})=\tau_{4+r}~,~ x^{2}(s_{3})=\tau_{2+r}~,~ \delta_{1r}\left(x^{2}(s_{4})-\tau_{2}\right)=0. \label{REP.03} \end{equation} In this way, we define the $n+1$ parameters while the so far missing $(n+2)$th parameter $\tau_{3+r}$ can be determined through the following procedure utilizing the double parallel transport which, geometrically, can be visualized as the triangle-rule (most transparent in figs. 8f and 9f). 
The proposal is to identify $\tau_{3+r}$ with the position \begin{equation} \tau_{1}+t_{2}=\tau_{3+r} \label{REP.04} \end{equation} where the two lower end-points of the second and the third bold lines coalesce when these two lines are transported until their upper end-points {\it simultaneously} hit the corresponding end-points of the first (horizontal) bold line. In turn, it implies the algebraic fine-tuning maintained by the first of the conditions (\ref{SR.01y}) which, geometrically, means that (when properly transported and reoriented) the three vectors ${\bf y}_{k},~k=1,2,3,$ can be combined into a triangle\footnote{A direct inspection of fig. 5b and figs. 6 reveals that, with a minor modification, a similar triangle-rule can be formulated in the $v=r=0$ case as well.} in the $a_{1}=0$ case of figs. 8c and 9c. \sapp{The $S(4)-$ and reflection-invariance} \label{assisym} Evidently, the proposed algorithm to introduce the $\{\Delta \tau_{q(k)}\}-$assignment is not changed after a generic combination of the $S(4)-$reattachments. Indeed, this readily follows from the fact that, keeping the temporal coordinates of the end-points intact, they are applied only to the right- or/and leftmost end-points of elementary graphs. Concerning the reflection-invariance, consider first the $r=v=0$ case. Then the reflection (interchanging the horizontal sides of the rectangle $C$) is applied to the two $S(4)-$multiplets corresponding to figs. 1a (with $j=1$) and 2c (with $j=2$). In the reflection-partners represented by figs. 1b and 2d respectively, the time-intervals $\Delta\bar{\tau}_{k}$ are associated to the lower horizontal side of $C$. In the latter two figures, we parameterize the left and the right horizontal lines by the labels 1 and 2 correspondingly (so that, in fig. 2d, the remaining nonhorizontal line is assigned with the label 3). Introducing the parameters $\tau_{i}$ by the same token as previously guarantees that the function $q(k)$ is reflection-invariant. 
Also, compared to the case of figs. 1a and 2c, figs. 1b and 2d can be characterized via the replacements $t_{p}\rightarrow{-t_{p}}$, ${\cal C}_{il}\rightarrow -{\cal C}_{il}$ with $p=1,2$ and $i,l=1,2,3$. In turn, the latter replacements follow from the definitions (\ref{1.38aa}) and (\ref{1.34}) which are augmented by the convention to implement the proper-time parameterization (implying, in particular, that $s_{l}\geq s'_{l}$ for $\forall{l}$). Finally, consider the remaining case of the three pairs of the $r=v=1$ $S(4)-$multiplets (assigned with $\gamma=1,2$ for $j=1$ and $\gamma=1$ for $j=2$) which, within a particular pair, are related through the reflection interchanging the vertical sides of $C=\Box$. In each of the latter multiplets it is sufficient to consider the single elementary graph with the two horizontal lines. E.g., see figs. 9g and 9h which are the reflection-partners of figs. 9b and 9c respectively. For concreteness, we restrict\footnote{The remaining two pairs, associated to figs. 9a and 9b, are handled in a similar way.} the discussion to figs. 9c and 9h, associating the time-intervals $\Delta\bar{\tau}_{k}$ to the lower horizontal side of $C$. In the latter two figures, we parameterize the left and the right horizontal lines by the labels 1 and 4 correspondingly. Then, to maintain the reflection-covariance of the algorithm (introduced in the previous subappendix), in the case of fig. 9h one is to perform the additional change of the variables $\bar{\tau}_{k}\rightarrow \bar{\tau}_{n+3-k}$ with $k=1,...,n+2$ (possessing the Jacobian equal to unity) that results in the reidentification $\Delta\bar{\tau}_{k}\rightarrow \Delta\bar{\tau}_{n+2-k}$ applied to $k=0,...,n+2$. (As previously, we require that $s_{l}\geq s'_{l}$ for $\forall{l}$.) 
This reidentification evidently implies the transformation $q(k)\rightarrow q(n+2-k)$, provided the labels 2,3 are assigned to the remaining nonhorizontal lines so that ${\cal C}_{32}\rightarrow -{\cal C}_{32}$ (while ${\cal C}_{1p}\rightarrow -{\cal C}_{1p}$ for $p=2,3$). In turn, a direct inspection demonstrates that, after this transformation, the function $q(k)$ assumes the same form as in the case of fig. 9c which verifies its reflection-covariance. As for the splitting (\ref{FU.02}), the reflection-partners can be characterized through the replacements $t_{i}\rightarrow{-t_{i}}$ for $i=1,2,3,4$. \app{Justifying Eq. (\ref{MUL.04y})} \label{dMUL.04y} To transform the superposition (\ref{SU.01}) into the form of Eq. (\ref{MUL.04y}), in the integral representation (\ref{FR.01}) of ${\cal Z}^{(\gamma)}_{jrv}(\cdot)$ one is to first perform (for each $v+j-1>0$ term) the change of the variables \begin{equation} \int d^{2+n}\bar\tau~...~~~\longrightarrow~~~\int d^{2+h_{rv}}\bar\tau~ \int_{\bar\tau_{q_{\gamma}(3)-1}}^ {\bar\tau_{q_{\gamma}(3)+1}} d^{j-1}\bar\tau_{q_{\gamma}(3)} \int_{\bar\tau_{q_{\gamma}(2)-1}}^ {\bar\tau_{q_{\gamma}(2)+1}} d^{v}\bar\tau_{q_{\gamma}(2)}~... \label{FI.03} \end{equation} that manifestly separates the $2+h_{rv}$ collective coordinates combined into the measure (\ref{CLO.01}), provided $q_{\gamma}(p)=q(p)+ \omega^{(\gamma)}_{p}$, where $\omega^{(\gamma)}_{p}=0,1$ is explicitly constructed in subappendix \ref{omega} so that the prescription, formulated at the end of subsection \ref{completeness}, is valid. In turn, due to the presence of the corresponding number of the derivatives in the r.h. side of Eq. 
(\ref{FR.01}), the remaining $v+j-1$ integrations\footnote{Recall that these integrations are associated to those lines (of a given elementary graph) which, being non-horizontal, are {\it not} involved in the $S(4)-$reattachments.} (with respect to $\bar\tau_{q_{\gamma}(p)}\in [\bar\tau_{q_{\gamma}(p)-1},\bar\tau_{q_{\gamma}(p)+1}]$) are readily performed. The computation is simplified by the fact\footnote{It is this fact that verifies the prescription (\ref{VAR.02}).} that, by construction of $\omega^{(\gamma)}_{p}$, both $\bar{t}^{(\gamma)}_{1},~\bar{t}^{(\gamma)}_{2}$ and $\Delta\bar\tau_{q(i)}$ are {\it independent} of $\bar\tau_{q_{\gamma}(p)}$ for $\forall{i\neq{p}}$, $\forall{p}=3-v,3$, and $\forall{\gamma}=1,f_{jv}$, while $(-1)^{\omega^{(\gamma)}_{3}-1} \partial/\partial \bar\tau_{q_{\gamma}(p)}$ can be replaced by $\partial/\partial \Delta\bar\tau_{q(p)}$ when it acts on the $\Delta\bar\tau_{q(p)}-$dependent factor (\ref{SR.01a}). In consequence, in the expression (\ref{SR.01g}) for $\tilde{\cal V}^{(\gamma)}_{jrv}(\cdot)$, the dependence on $\tau_{q_{\gamma}(p)}$ is localized in the corresponding $k=p$ implementation of the factor (\ref{SR.01a}). Furthermore, the interval $\Delta \bar\tau_{q(p)}$ varies in the domain $[0,\Delta \bar{T}^{b}_{p}(0,\gamma)]$ (where $\Delta \bar{T}^{b}_{p}(0,\gamma)=\bar\tau_{q_{\gamma}(p)+1}-\bar\tau_{q_{\gamma}(p)-1}$ is defined in Eq. (\ref{KEY.01b})) when $\bar\tau_{q_{\gamma}(p)}$ spans the domain $[\bar\tau_{q_{\gamma}(p)-1},\bar\tau_{q_{\gamma}(p)+1}]$. Altogether, the amplitude (\ref{FR.01}) can be rewritten in the form which can be obtained from Eq. 
(\ref{MUL.04y}) through the replacement \begin{equation} {\cal V}_{2rv}(\{a_{k}\},\{\Delta \bar{T}^{b}_{k}\}) ~~~\longrightarrow~~~\sum_{j=1}^{2}\sum_{\gamma=1}^{f_{jv}} \left[\sum_{\breve\tau_{3}=0}^{1} (-1)^{\breve\tau_{3}}\right]^{j-1} \left[\sum_{\breve\tau_{2}=0}^{1} (-1)^{\breve\tau_{2}}\right]^{v} {\cal V}^{(\gamma)}_{jrv}(\{a_{i}\},\{\Delta \bar\tau_{q(k)}\}) \Big|_{\{\breve\tau_{p}\}}~, \label{FI.04} \end{equation} where the sum over $\breve\tau_{p}= (\Delta \bar{T}^{b}_{p}(0,\gamma)-\Delta \bar\tau_{q(p)})/ \Delta \bar{T}^{b}_{p}(0,\gamma)$ reproduces the sum over the boundary values of the relevant intervals $\Delta \bar\tau_{q(p)}$, while ${\cal V}_{2rv}(\cdot,\cdot)\equiv {\cal V}^{(1)}_{2rv}(\cdot,\cdot)$ just as in Eq. (\ref{MUL.04y}). Next, in the r.h. side of Eq. (\ref{FI.04}), there are mutual cancellations (see Eqs. (\ref{FI.01}) and (\ref{FI.02}) below) which, due to the $S(4)-$invariance of the $\bar{\cal R}^{-1}_{b}-$dressing, are maintained between the $j=1$ and $j=2$ terms considered {\it separately} for any admissible $\{a_{l}\}-$assignment. As a result, there survives only the single $j=2$ term \begin{equation} {\cal V}_{2rv}(\{a_{i}\},\{\Delta \bar\tau_{q(k)}\})\Big|_ {\{\Delta\bar\tau_{q(p)}=\Delta \bar{T}^{b}_{p}(0)\}} ={\cal V}_{2rv}(\{a_{k}\},\{\Delta \bar{T}^{b}_{k}\}) \label{FI.05} \end{equation} (with $\bar{T}^{b}_{p}(0)\equiv \bar{T}^{b}_{p}(0,\gamma)|_{\gamma=1}$) characterized by the condition \begin{equation} \Delta\bar\tau_{q_{\gamma}(p)-1}=0~~~~\Longrightarrow~~~~ \Delta\bar\tau_{q(p)}=\Delta \bar{T}^{b}_{p}(0,1)~,~~~~~ \forall{p}=3-v,3~, \label{REI.01} \end{equation} that reduces the number $2+n$ of the original variables $\bar\tau_{i}$, entering Eq. (\ref{FR.01}), to the smaller number $2+h_{rv}$ associated to Eq. (\ref{MUL.04}). 
In consequence, for fixed values of those $\bar\tau_{k}$ which define the collective coordinates entering the measure (\ref{CLO.01}), it maintains the {\it maximal} value of $\sum_{p=3-v}^{3} \Delta \tau_{q(p)}$, where $p$ labels those $v+j-1$ lines of a given elementary graph which, being associated to the $f_{k}=0$ replacement (\ref{KEY.01}), are not involved in the $S(4)-$reattachments. In turn, by virtue of the $j=2$ constraints (\ref{SR.01y}), it supports the completeness condition (\ref{KEY.01a}). Altogether, it verifies Eq. (\ref{MUL.04y}). As for the asserted mutual cancellations, the simplest situation takes place in the $r=v=0$ case when the parameter $\gamma$, assuming the single value (since $f_{j0}=1$ according to Eq. (\ref{SU.01l})), can be safely omitted. Therefore, for all admissible values of $a_{1}$ and $a_{2}$ (involved in the $v=0$ summation in Eq. (\ref{MUL.04y})), the fine-tuning takes place between the pairs of effective amplitudes ${\cal Z}_{j00}(\{a_{k}\},\bar{A},\bar{\theta}^{-1})$ with $j=1,2$. In this case, due to the identity ${\cal F}(z,0)=1$ valid for $\forall{z}$ (as it is clear from the definition (\ref{SR.01a})), the very pattern (\ref{SR.01g}) of ${\cal V}_{jrv}(\cdot)$ ensures the relation \begin{equation} {\cal V}_{200}(\{a_{k}\},\{\Delta \bar\tau_{q(i)}\}) \Big|_{\Delta \bar\tau_{q(3)}=0}= {\cal V}_{100}(\{a_{k}\},\{\Delta \bar\tau_{q(i)}\}), \label{FI.01} \end{equation} so that the reduction $\Delta \bar\tau_{q(3)}=0$ converts the $6-$set $\{\Delta \bar\tau_{k}\}$ (associated to the $j=2$ l.h. side of the identity) into its counterpart (in the $j=1$ r.h. side) consisting of the $5$ intervals $\Delta \bar\tau_{k}$. In turn, it proves the inverse of the $v=0$ replacement (\ref{FI.04}) and, in consequence, the $v=0$ option of the prescription (\ref{KEY.01}) endowed with the $\{f_{k}\}-$specification in compliance with subsection \ref{completeness}. 
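To make the latter cancellation fully explicit, note that, by the definition of $\breve\tau_{p}$ given after Eq. (\ref{FI.04}), the $r=v=0$ boundary sum in the $j=2$ term of Eq. (\ref{FI.04}) unfolds as \begin{displaymath} \sum_{\breve\tau_{3}=0}^{1}(-1)^{\breve\tau_{3}}\, {\cal V}_{200}(\{a_{k}\},\{\Delta \bar\tau_{q(i)}\}) \Big|_{\{\breve\tau_{3}\}}= {\cal V}_{200}(\cdot)\Big|_{\Delta \bar\tau_{q(3)}=\Delta \bar{T}^{b}_{3}(0,1)}- {\cal V}_{200}(\cdot)\Big|_{\Delta \bar\tau_{q(3)}=0}~, \end{displaymath} so that, by virtue of the identity (\ref{FI.01}), the subtracted boundary term cancels the $j=1$ contribution ${\cal V}_{100}(\{a_{k}\},\{\Delta \bar\tau_{q(i)}\})$, and only the first term, reproducing the relation (\ref{FI.05}), survives.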
For the particular case of $a_{1}=a_{2}=0$, ${\cal Z}_{00}(\{a_{k}\},\cdot)$ (resulting from the cancellation between ${\cal Z}_{200}(\{a_{k}\},\cdot)$, fig. 5b, and ${\cal Z}_{100}(\{a_{k}\},\cdot)$, fig. 5a) is diagrammatically depicted by fig. 5c. The remaining options of ${\cal Z}_{00}(\{a_{k}\},\cdot)$ are represented by figs. 6a--6c. Concerning the $v=1$ cases, the inverse of the $v=1$ replacements (\ref{FI.04}) follows from the pair of the relations\footnote{In the derivation of Eq. (\ref{FI.02}), we utilize that $\Delta T^{b}_{2}(0,\gamma)|_{j=1}=\Delta T^{b}_{4-\gamma}(0,1)|_{j=2}$ provided $t^{(\gamma)}_{2}|_{j=1}=t^{(1)}_{4-\gamma}|_{j=2}$.} \begin{equation} {\cal V}_{2r1}(\cdot,\{\Delta \bar\tau_{q(i)}\}) \Big|_{\Delta \bar\tau_{q(p)}=0}= {\cal V}^{(p-1)}_{1r1}(\cdot,\{\Delta \bar\tau_{q(i)}\}) \Big|_{\Delta \bar\tau_{q(2)}=\bar{T}^{b}_{2}(0,p-1)}~~~~,~~~~p=2,3~, \label{FI.02} \end{equation} \begin{equation} {\cal V}_{2r1}(\cdot,\{\Delta \bar\tau_{q(i)}\}) \Big|_{\Delta \bar\tau_{q(2)}=0}^{\Delta \bar\tau_{q(3)}=0}= {\cal V}^{(\gamma)}_{1r1}(\cdot,\{\Delta \bar\tau_{q(i)}\}) \Big|_{\Delta \bar\tau_{q(2)}=0}=0~, \label{FI.02a} \end{equation} where Eq. (\ref{FI.02}) can be deduced essentially by the same token as Eq. (\ref{FI.01}) (while Eq. (\ref{FI.02a}) is proved in \cite{ADM05b}). The only new element is to take into account that, contrary to the $r=v=0$ case (\ref{FI.01}), there are two $p=2,3$ options to implement the $j=2\rightarrow j=1$ reduction (of the $(6+r)-$set $\{\Delta \bar\tau_{k}\}$ into the corresponding $(5+r)-$set) so that the $p$th option is associated to the $\gamma=p-1$ implementation of ${\cal V}^{(\gamma)}_{1r1}(\cdot)$. Geometrically, for the particular $\{a_{k}\}-$assignments, the latter identification is clear from the comparison of the $j=2$ figs. 8c and 9c with the $j=1$ pairs of figs. 8a, 8b and 9a, 9b respectively. 
(In the derivation of this representation of ${\cal Z}^{(1)}_{1r1}(\cdot)$, we also utilize the change of the variables $\bar\eta\rightarrow{\bar\eta+\bar\zeta}$, $\bar\zeta\rightarrow{\bar\zeta}$ that, in the combination $\bar\eta \bar{t}^{(1)}_{1}-\bar\zeta \bar{t}^{(1)}_{2}$ entering the relevant option of Eq. (\ref{SR.01g}), replaces $\bar{t}^{(1)}_{2}$ by $\bar{t}^{(2)}_{2}$.) Finally, it is possible to diagrammatically visualize the $v=1$ replacement (\ref{FI.04}), in a form similar to the $r=v=0$ one. For simplicity, as previously we restrict the discussion to the case of the $\{a_{k}\}-$assignments with $a_{1}=0$ and, when $r=1$, $a_{4}=0$. Then, observe first that (in the $\gamma=1$ case) the relation (\ref{FI.02a}) implies the equivalence of the effective amplitudes associated to figs. 8a, 9a and 8d, 9d correspondingly. Next, the $p=3$ variant of the relation (\ref{FI.02}) guarantees that the superposition ${\cal Z}^{(2)}_{1r1}(\{a_{k}\},\cdot)+ {\cal Z}^{(1)}_{2r1}(\{a_{k}\},\cdot)$ is diagrammatically represented by figs. 8e and 9e when $r=0$ and $r=1$ respectively. As for ${\cal Z}_{r1}(\{a_{k}\},\cdot)$, being depicted in figs. 8f and 9f when $r=0$ and $r=1$ correspondingly, it results after the residual cancellation which takes place, by the same token as in the $r=v=0$ case, between the effective amplitudes of figs. 8e (9e) and 8d (9d). \sapp{The choice of the $\{\omega^{(\gamma)}_{k}\}-$assignment} \label{omega} It remains to introduce the appropriate set of the parameters $\omega^{(\gamma)}_{k}$, where $k=3-v,3$ labels those $n-h_{rv}=v+j-1$ lines of the elementary graph which are {\it not} associated to the corresponding protograph, i.e., $k\in {\cal X}_{jrv}\equiv\tilde\Omega_{jrv}/{\cal S}_{rv}$ (where the sets $\tilde\Omega_{jrv}$ and ${\cal S}_{rv}$ are introduced at the end of subsection \ref{assignment1}). For this purpose, we propose the following algorithm. 
First, we observe that the parameters $\bar\tau_{q_{\gamma}(k)}$ ($k=3-v,3,~q_{\gamma}(p)=q(p)+ \omega^{(\gamma)}_{p}$) represent the temporal coordinates which remain dynamical when one fixes both the positions of the end-points of the corresponding protograph's line and, in the $v=1$ case, an admissible value of $t^{(\gamma)}_{2}$. In compliance with Appendix \ref{freedom}, for $v+j-1>0$ it leaves variable exactly $v+j-1$ independent temporal coordinates of either upper (when $v=0$) or lower end-points of the $v+j-1$ lines labeled by $k\in {\cal X}_{jrv}$. Correspondingly, each of the parameters $\bar\tau_{q_{\gamma}(k)}$ thus introduced is associated to the two adjacent intervals $\Delta\bar\tau_{q_{\gamma}(k)-i}$ with $i=0,1$ so that $\sum_{i=0}^{1}\Delta\bar\tau_{q_{\gamma}(k)-i}=\Delta{T}^{b}_{k}(0,\gamma)$. Then, it is a matter of convention to choose one of the two possible values of $i=i_{\gamma}(k)$ in order to identify $q(k)=q_{\gamma}(k)-i_{\gamma}(k)$ for a given $\gamma$. Having fixed this freedom\footnote{A direct inspection of the elementary graphs verifies that this freedom is absent for the remaining $h_{rv}$ lines which, being endowed with the $\bar{\cal R}^{-1}_{b}-$dressing, are assigned with $k\in {\cal S}_{rv}= \Omega_{jrv}/{\cal X}_{jrv}$, while $f_{k}=1$ in the sense of Eq. (\ref{KEY.01b}), i.e., $\Delta{T}^{b}_{k}(1,\gamma)= \Delta\bar\tau_{q(k)}$ for $k\in {\cal S}_{rv}$.} according to the prescription of Appendix \ref{dMUL.04}, one is led to the identification $\omega^{(\gamma)}_{k}=i_{\gamma}(k)$. Given this prescription, one obtains (in the $S(4)-$invariant and reflection-covariant way in the sense of subappendix \ref{assisym}) that $\omega^{(1)}_{3}=0$ for $r=v=j-2=0$, $\omega^{(\gamma)}_{2}=\gamma-1$ for $v=j=1$ (and $\forall{r}=0,1$), while $\omega^{(1)}_{p}=3-p$ for $v=j-1=1$ (and $\forall{r}=0,1$). Also, it supports the prescription formulated at the end of subsection \ref{completeness}. 
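As a simple consistency check of this assignment, take the $r=v=0$, $j=2$ elementary graph of fig. 2c, where $\omega^{(1)}_{3}=0$ so that $q_{1}(3)=q(3)=3$ (cf. Eq. (\ref{AMB.01})), and the two adjacent intervals sum up to \begin{displaymath} \Delta\bar\tau_{3}+\Delta\bar\tau_{2}=\bar\tau_{4}-\bar\tau_{2}= \Delta \bar{T}^{b}_{3}(0,1)~, \end{displaymath} in compliance with the two-interval decomposition above, while the convention $i_{1}(3)=0$ reproduces the identification $q(3)=q_{1}(3)$.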
Next, by construction, the $n-v$ intervals $\Delta{T}^{b}_{k}(f_{k},\gamma)$ thus introduced meet the important constraint (justified by a direct inspection of the relevant elementary graphs): these intervals are mutually {\it nonoverlapping}. Furthermore, in the $j=2$ case, the very number $n-v$ of the intervals ensures that they comply with the completeness condition (\ref{KEY.01a}). (Among the $n+1$ intervals $\Delta\bar\tau_{i}$, comprising the residual temporal interval in the r.h. side of this condition, there are exactly $v+1$ pairs combined into the corresponding intervals $\Delta{T}^{b}_{p}(0,1),~p=3-v,3$.) Also, the latter constraint guarantees that $\Delta\bar\tau_{q(i)}$ is independent of $\bar\tau_{q_{\gamma}(p)}$ for $\forall{i\neq{p}}$, $\forall{p}=3-v,3$. Finally, it is straightforward to argue that the same independence of $\bar\tau_{q_{\gamma}(p)}$ holds true for $\bar{t}^{(\gamma)}_{1},~\bar{t}^{(\gamma)}_{2}$ as well. It is most transparent in the $r=v=0$ case where these relative times are fully determined by the positions of the end-points of the $2+r-v$ lines involved in the $S(4)-$reattachments. In the $v=1$ case, this argument still applies to $\bar{t}^{(\gamma)}_{1}$, while the independence of $\bar{t}^{(\gamma)}_{2}$ is verified by the relation (\ref{REP.02}). \app{Explicit implementation of $\tilde{\cal V}_{2rv}(\{a_{i}\},\{\Delta \tau_{q(k)}\})$} \label{dMUL.04} The aim of this Appendix is to determine explicitly the $\{a_{l}\}-${\it dependent} parameters which define the relevant implementation of the pattern (\ref{SR.01g}) of the quantity $\tilde{\cal V}_{2rv}(\cdot)$ entering Eq. (\ref{MUL.04y}). In compliance with the discussion of subsection \ref{enumer1}, our strategy is to introduce the required parameters for a generic $\{a_{l}\}-$assignment as the $\{a_{l}\}-$dependent deformation of the parameters associated to a particular elementary graph in a given $rv-$variety of the elementary diagrams. 
In the next Appendix, we will verify that, after an appropriate change of the variables $\bar\zeta$ and $\bar\eta$, one can rewrite this quantity in the form matching Eq. (\ref{MUL.04}). \sapp{The $r=v=0$ case} Consider first the $a_{1}=a_{2}=0$ contribution to the $r=v=0$ superposition (\ref{MUL.04y}) which is determined by such implementation of $\tilde{\cal V}_{200}(\{a_{i}\},\{\Delta \tau_{q(k)}\})$ that is parameterized by the $j=2$ graph\footnote{Recall that both the elementary and the effective amplitudes, associated to figs. 2a and 2b, are vanishing due to the specific implementation of the ${V}_{U_{\theta}(1)}^{(n)}(\cdot)\rightarrow \tilde{V}_{U_{\theta}(1)}^{(n)}(\cdot)$ option of the $G=1$ constraints (\ref{1.40kk}).} 2c (when ${\cal C}_{12}={\cal C}_{13}={\cal C}_{23}=-1$), the deformations of which are described in fig. 5b. Defining $\bar{\tau}_{k}$ and $\Delta\bar{\tau}_{k}$ according to the $j=2$ Eq. (\ref{FA.02y}), the $\{a_{l}\}-$independent parameters $\bar{t}_{p}$ are determined by the $z=0$ variant of the relations \begin{equation} (-1)^{a_{1}+z}\bar{t}_{1}=\bar{\tau}_{4}-\bar{\tau}_{1}= \Delta \bar{\tau}_{1}+\Delta \bar{\tau}_{2}+\Delta \bar{\tau}_{3}~~~~~,~~~~~ (-1)^{z}\bar{t}_{2}=\bar{\tau}_{5}-\bar{\tau}_{2}= \Delta \bar{\tau}_{2}+\Delta \bar{\tau}_{3}+\Delta \bar{\tau}_{4}~, \label{AM.05} \end{equation} with $\bar{t}_{1}-\bar{t}_{2}+\bar{t}_{3}=0$, while the convention fixing the labels $k=1,2,3$ is given in Appendix \ref{enumer1}. Correspondingly, it leads to the $a_{1}=z=0$ option of the identification \begin{equation} q(2)=1~,~q(1)=4~,~q(3)=3~~,~~\alpha^{(p)}= (-1)^{z}{\cal C}_{32}=1~~,~~-\alpha^{(1)}= (-1)^{z}{\cal C}_{p1}=(-1)^{a_{1}}~,~p=2,3~, \label{AMB.01} \end{equation} where, as it should be, the function $q(k)$ is $\{a_{l}\}-$independent. Also, the $\omega^{(1)}_{3}=0$ option of the $v=0$ Eq. 
(\ref{REI.01}) implies that the measure of the $r=v=0$ representation (\ref{MUL.04}) is obtained through the reidentification: $\bar\tau_{i}\rightarrow\bar\tau_{i}$ for $i=1,2$, while $\bar\tau_{i}\rightarrow\bar\tau_{i-1}$ for $i=4,5$ so that $\bar\tau_{3}$ disappears. As for the remaining three contributions to the $r=v=0$ superposition (\ref{MUL.04y}), they are associated to such implementations of the quantity $\tilde{\cal V}_{200}(\{a_{i}\},\{\Delta \tau_{q(k)}\})$ that are parameterized by the $v=0$ components of the diagrams 2e and 2g obtained through the vertical reattachments applied to fig. 2c. For this purpose, the leftmost or/and rightmost end-point of the pair of the lines in fig. 2c is/are transferred, keeping their time-coordinates $x^{2}(s'_{1})$ and $x^{2}(s_{2})$ intact, from the upper to the lower horizontal side of the rectangle $C$. Taking into account that the vertical $1-$axis is directed from the upper to the lower horizontal side of the rectangle $\Box$, it is formalized by the relations \begin{equation} x^{1}(s'_{1})=a_{1}R~~~~~,~~~~~x^{1}(s_{2})=a_{2}R~~~~~,~~~~~ x^{1}(s'_{2})=x^{1}(s_{1})=0~. \label{AM.07d} \end{equation} When $a_{1}+a_{2}\geq 1$, it parameterizes the three different implementations of $\tilde{\cal V}_{200}(\cdot)$ which, being associated to figs. 6a--6c, are described by the corresponding $\{a_{k}\}-$implementations of Eqs. (\ref{AMB.01}) and (\ref{AM.05}), where one is to put $z=0$. Finally, to compute the entire $r=v=0$ contribution to the decomposition (\ref{SU.01z}), it remains to include the contribution of such $r=v=0$ superposition (\ref{MUL.04y}) that is associated to the $S(4)-$multiplet of the $j=2$ elementary graphs specified by the graph in fig. 2d. Alternatively, these graphs can be obtained from the previously constructed $j=2$ $S(4)-$multiplet (specified by the graph in fig. 2c) via the reflection interchanging the horizontal sides of the rectangle $C$. 
Then, introducing the $\{\Delta\bar{\tau}_{k}\}-$assignment according to the convention of subappendix \ref{assisym}, one arrives at the $r=v=0$ implementation of Eq. (\ref{MUL.04y}) fixed by the $z=1$ option of Eqs. (\ref{AM.05}) and (\ref{AMB.01}). \sapp{The $r=v-1=0$ case} Next, consider the $a_{1}=0$ contribution to the $r=v-1=0$ superposition (\ref{MUL.04y}) which is determined by such implementation of $\tilde{\cal V}_{201}(\{a_{i}\},\{\Delta \tau_{q(k)}\})$ that is parameterized by the $v=1$ component of the $j=2$ diagram 2e (when ${\cal C}_{12}={\cal C}_{13}={\cal C}_{23}=-1$), the deformations of which are described in fig. 8c. It is geometrically evident that there is a single $v=1$ component (assigned with $\gamma=1$) of the latter diagram which is constrained by the $p=2,3$ options of the condition \begin{equation} x^{2}(s_{p})\in[x^{2}(s'_{1}),x^{2}(s_{1})] \label{RE.02a} \end{equation} applied to both of the nonhorizontal lines. In this case, introducing $\bar{\tau}_{k}$ and $\Delta\bar{\tau}_{k}$ according to the $r=0$ prescription of Eqs. (\ref{REP.03}) and (\ref{REP.04}), the decomposition of the parameters $\bar{t}_{p}$ is determined by the $a_{1}=\tilde{a}_{1}=0$ variant of the relations \begin{equation} (-1)^{a_{1}+\tilde{a}_{1}}\bar{t}_{1}=\bar{\tau}_{5}-\bar{\tau}_{1}= \Delta \bar{\tau}_{1}+\Delta \bar{\tau}_{2}+ \Delta \bar{\tau}_{3}+\Delta \bar{\tau}_{4}~~~~~,~~~~~ \bar{t}_{2}=\bar{\tau}_{3}-\bar{\tau}_{1}=\Delta \bar{\tau}_{1}+ \Delta \bar{\tau}_{2}~, \label{INN.03} \end{equation} with $\bar{t}_{1}-\bar{t}_{2}+\bar{t}_{3}=0$, and the convention\footnote{Akin to fig. 2c, the convention ${\cal C}_{23}=-1$ implies that $x^{2}(s_{3})<x^{2}(s_{2})$.} to fix the labels $k=1,2,3$ is sketched in subsection \ref{enumer1}. Correspondingly, it yields the $a_{1}=\tilde{a}_{1}=0$ option of the identification \begin{equation} q(2)=3~,~q(3)=2~~~,~~~\alpha^{(3)}=\alpha^{(2)}= {\cal C}_{32}=1~~~,~~~{\cal C}_{p1}=(-1)^{a_{1}+\tilde{a}_{1}}~,~p=2,3~. 
\label{INN.02a} \end{equation} Also, the $\omega^{(1)}_{p}=3-p$ option of the $v=1$ Eq. (\ref{REI.01}) implies that the measure of the $r=v-1=0$ representation (\ref{MUL.04}) is obtained through the reidentification: $\bar\tau_{1}\rightarrow\bar\tau_{1}$, $\bar\tau_{3}\rightarrow\bar\tau_{2}$, while $\bar\tau_{5}\rightarrow \bar\tau_{3}$ so that $\bar\tau_{2}$ and $\bar\tau_{4}$ disappear. Then, the remaining three contributions to the $r=v-1=0$ superposition (\ref{MUL.04y}) are associated to such implementations of the quantity $\tilde{\cal V}_{201}(\{a_{i}\},\{\Delta \tau_{q(k)}\})$ that are parameterized by the $v=1$ components of the diagrams 2f and 2g. In turn, the latter elementary graphs can be obtained from the $v=1$ component of the diagram 2e through the vertical reattachments of the left or/and right end-point of its single horizontal line that is formalized by Eq. (\ref{ST.01}). Together with the already considered $w=a_{1}=0$ case, it generates the four different implementations of $\tilde{\cal V}_{201}(\cdot)$ which are described by the corresponding $\{a_{i}\}-$dependent implementations of Eqs. (\ref{INN.03}) and (\ref{INN.02a}). \sapp{The $r=v=1$ case} Consider the $a_{1}=a_{4}=0$ contribution to the $r=v=1$ superposition (\ref{MUL.04y}) which is determined by such implementation of $\tilde{\cal V}_{211}(\{a_{i}\},\{\Delta \tau_{q(k)}\})$ that is parameterized by that component of the Feynman diagram 7e (when ${\cal C}_{12}={\cal C}_{13}={\cal C}_{23}=-1$), where the upper horizontal line is to the left of the lower one. This component, the deformations of which are described in fig. 
9c, is constrained by the $p=2,3$ options of the condition \begin{equation} x^{2}(s_{p})\in[x^{2}(s'_{4}),x^{2}(s_{4})]~~~~~,~~~~~ x^{2}(s_{1})\leq x^{2}(s'_{4,j})\leq T~, \label{INN.04} \end{equation} where $x^{2}(s'_{4,j})$ denotes the temporal coordinate of the end-point of the $j$th $\bar{\cal R}_{b}^{-1}-$copy that is common for both bold horizontal lines in fig. 9c. Defining $\bar{\tau}_{k}$ and $\Delta\bar{\tau}_{k}$ according to the $r=1$ prescription of Eqs. (\ref{REP.03}) and (\ref{REP.04}), the decomposition of the parameters $\bar{t}_{p}$ is determined by the $a_{1}=z=0$ variant of the relations \begin{equation} (-1)^{a_{1}+z}\bar{t}_{1}=\bar{\tau}_{6}-\bar{\tau}_{2}= \Delta \bar{\tau}_{2}+\Delta \bar{\tau}_{3}+ \Delta \bar{\tau}_{4}+\Delta \bar{\tau}_{5}~~~~~,~~~~~ (-1)^{z}\bar{t}_{2}=\bar{\tau}_{4}-\bar{\tau}_{1}=\Delta \bar{\tau}_{1}+ \Delta \bar{\tau}_{2}+\Delta \bar{\tau}_{3}~, \label{INN.09} \end{equation} with $\bar{t}_{1}-\bar{t}_{2}+\bar{t}_{3}=0$. In turn, the deformations of fig. 7e, depicted in fig. 9c, are described by the $a_{1}=z=0$ option of the identification \begin{equation} q(2)=4~,~q(3)=3~,~q(4)=1~~,~~\alpha^{(p)}= (-1)^{z}{\cal C}_{32}=1~~,~~-\alpha^{(1)}= (-1)^{z}{\cal C}_{p1}=(-1)^{a_{1}}~,~p=2,3. \label{INN.10} \end{equation} Also, the $\omega^{(1)}_{p}=3-p$ option of the $v=1$ Eq. (\ref{REI.01}) implies that the measure of the $r=v=1$ representation (\ref{MUL.04}) is obtained through the reidentification: $\bar\tau_{i}\rightarrow\bar\tau_{i}$ for $i=1,2$, $\bar\tau_{4}\rightarrow\bar\tau_{3}$, while $\bar\tau_{6}\rightarrow \bar\tau_{4}$ so that $\bar\tau_{3}$ and $\bar\tau_{5}$ disappear. Concerning the remaining contributions to the $r=v=1$ superposition (\ref{MUL.04y}), they are associated to the implementations of the quantity $\tilde{\cal V}_{211}(\{a_{i}\},\{\Delta \tau_{q(k)}\})$ parameterized by the three elementary graphs generated through the vertical reattachments applied to the leftmost or/and rightmost end-point of fig. 
7e (constrained by the $p=2,3$ conditions (\ref{INN.04})). These graphs are depicted in figs. 7g, 7h and the one obtained from fig. 7g via the reflection interchanging the horizontal sides of $C=\Box$. It is formalized by the relations \begin{equation} x^{1}(s'_{1})=a_{1}R~~~~~,~~~~~x^{1}(s'_{4})=a_{4}R~~~~~,~~~~~ x^{1}(s_{1})=x^{1}(s_{4})-R=0~, \label{ST.02} \end{equation} that, together with the above $a_{1}=a_{4}=0$ option, yields the four different implementations of $\tilde{\cal V}_{211}(\cdot)$ described by the corresponding $\{a_{k}\}-$implementations of Eqs. (\ref{INN.09}) and (\ref{INN.10}), where one is to put $z=0$. Finally, to compute the entire $r=v=1$ contribution to the decomposition (\ref{SU.01z}), it remains to include the contribution of such $r=v=1$ superposition (\ref{MUL.04y}) that is associated to the $S(4)-$multiplet of the $j=2$ elementary graphs specified by that component of fig. 7e where the upper horizontal line is to the right of the lower one. Alternatively, it can be reproduced from the $S(4)-$multiplet, specified by the so far considered component of fig. 7e, via the reflection interchanging the two vertical sides of $C=\Box$. Then, introducing the $\{\Delta\bar{\tau}_{q(k)}\}-$assignment according to the convention of subappendix \ref{assisym} and performing the auxiliary change of the variables $\bar{\tau}_{k}\rightarrow \bar{\tau}_{n+3-k}$ (with $k=1,...,n+2$), by the same token as previously we arrive at the $r=v=1$ implementation of Eq. (\ref{MUL.04y}) fixed by the $z=1$ option of Eqs. (\ref{INN.09}) and (\ref{INN.10}). \app{Eq. (\ref{MUL.04}): $S(4)-$ and reflection-symmetry} \label{symmetry3} As for the ${\cal R}_{a}^{-1}-$deformations, the vertical nature of the reattachments evidently implies both the $S(4)-$ and reflection-symmetries of the parameters which determine the factor (\ref{EX.01h}) (representing the latter deformations in Eq. (\ref{MUL.04})). 
More generally, given the prescription of subappendix \ref{assisym}, these two symmetries hold true for the algorithm (presented in Appendix \ref{freedom}) to introduce the entire set $\{\Delta \tau_{q(k)}\}$. Concerning the $\bar{\cal R}_{b}^{-1}-$dressing, the situation is somewhat more subtle, as is clear from the results discussed in the latter Appendix. The relevant parameters, defining\footnote{In particular, it applies to the parameters $\alpha^{(i)}$ and $y^{1}_{i}$ which determine the implementations of the replacements (\ref{KEY.01}).} the implementation (\ref{SR.01g}) of $\tilde{\cal V}_{2rv}(\{a_{i}\},\{\Delta \tau_{q(k)}\})$, may be changed by a particular reattachment or reflection. In consequence, the symmetries of the $\bar{\cal R}_{b}^{-1}-$dressing become manifest only {\it after} the appropriate change of the variables. To explain this point, we first accept the convention that, for a given $rv-$specification, the $z-$ and $\tilde{a}_{1}-$dependent equations below are implemented according to the assignment fixed in the previous Appendix. Then, a direct inspection (presented below) verifies that the quantity $\tilde{\cal V}_{2rv}(\cdot)$ is $z-${\it independent}. Furthermore, after the corresponding implementation of the $z-$independent change of the variables, \begin{equation} \bar{\zeta}~\longrightarrow~(-1)^{a_{1}+m_{rv}(\tilde{a}_{1})}\bar{\zeta}~~~,~~~ \bar{\eta}~\longrightarrow~\bar{\eta}~~~~~~~,~~~~~~~ m_{rv}(\tilde{a}_{1})=v(1-r)\tilde{a}_{1}~, \label{CHA.01} \end{equation} the residual $\{a_{i}\}-$dependence (in the $r=v-1=0$ case including, by the definition given after Eq. (\ref{ST.01}), the $\tilde{a}_{1}-$dependence) of $\tilde{\cal V}_{2rv}(\cdot)$ arises only due to the corresponding dependence of the parameters\footnote{Eq. (\ref{LAT.02}) unifies the $r=v=0$ and $r=v=1$ cases (characterized by $w=0$) together with the $r=v-1=0$ case. 
In particular, $m_{rv}(\tilde{a}_{1})=0$ for all $0\leq r\leq v\leq 1$, except for $v-1=r=0$ when $m_{rv}(\tilde{a}_{1})=w$.} \begin{equation} e_{1}=(-1)^{a_{1}+m_{rv}(\tilde{a}_{1})}a_{1}~~~~~,~~~~~ e_{2}=a_{2}~~~~~,~~~~~e_{3}=-a_{4}~, \label{LAT.02} \end{equation} in terms of which one formulates the quantity \begin{equation} {\cal K}_{rv}((-1)^{a_{1}+m_{rv}(\tilde{a}_{1})}\bar\zeta,\bar\eta, (-1)^{a_{1}+m_{rv}(\tilde{a}_{1})}\alpha^{(1)},\{a_{l}\})= {\cal K}_{rv}(\bar\zeta,\bar\eta,\alpha^{(1)},\{e_{l}\}) \label{LAT.02r} \end{equation} where ${\cal K}_{rv}(\bar\zeta,\bar\eta,\alpha^{(1)},\{a_{l}\})=R^{v-2-r} \tilde{\cal K}_{rv}(R\bar\zeta,R\bar\eta,\{a_{l}\})$ is obtained from the factor (\ref{SR.01s}) (implicitly depending on $\alpha^{(1)}$ when $r=1$) via the change of the variables (\ref{BR.02a}), and we take into account the transformation law \begin{equation} \alpha^{(1)}~\longrightarrow~(-1)^{a_{1}+m_{rv}(\tilde{a}_{1})}\alpha^{(1)}~~~~~,~~~~~ \alpha^{(k)}~\longrightarrow~\alpha^{(k)}~,~\forall{k}\neq{1}~, \label{LAT.04} \end{equation} that unifies the particular implementations of this transformation which, being given in the previous Appendix, directly follow from the definition of $\alpha^{(k)}$ given by Eqs. (\ref{1.50}) and (\ref{GP.01}). Justifying the representation (\ref{MUL.04}) of ${\cal Z}_{rv}(\cdot)$, we obtain that both the building block (\ref{MUL.02a}) and the exponential $e^{i\left(\bar\eta \bar{t}_{1}-\bar\zeta \bar{t}_{2}\right) \bar{A}{\cal C}_{21}/\bar{\theta}}$ are manifestly $\{e_{i}\}-$independent (with $\bar{t}^{(1)}_{p}\equiv \bar{t}_{p}$ in the $j=2$ case at hand). In turn, the relation (\ref{LAT.02r}) implies Eq. (\ref{MUL.01a}).
In particular, in the $v-1=r=0$ case, $e_{1}\equiv e_{1}(a_{1},\tilde{a}_{1})$ depends on the two independent parameters $\tilde{a}_{1}$ and $a_{1}$, which implies that the four members of the $j-2=v-1=r=0$ $S(4)-$multiplet are specified by the three values of $e_{1}$, so that $e_{1}=0$ appears twice (for $a_{1}=0$, $\tilde{a}_{1}=0,1$). In turn, this explains the origin of the factor $2^{(v-r)(1-|{e}_{1}|)}$ in Eq. (\ref{MUL.01a}) which, being equal to unity unless $v-1=r=0$, assumes the value 2 only when $e_{1}=0$. To prove the asserted properties of $\tilde{\cal V}_{2rv}(\cdot)$, let us first verify the $\{a_{i}\}-$independence of the latter exponential. For this purpose, one is to utilize that, unifying Eqs. (\ref{AM.05}), (\ref{INN.03}), and (\ref{INN.09}), the $\{a_{i}\}-$dependence of the splitting (\ref{FU.02}) is defined by the replacement \begin{equation} t_{1}~\longrightarrow~(-1)^{a_{1}+k_{rv}(z)+m_{rv}(\tilde{a}_{1})}t_{1}~~~~~,~~~~~ t_{j}~\longrightarrow~(-1)^{k_{rv}(z)}t_{j}~~~,~~~j=2,3,4~, \label{LAT.01} \end{equation} where\footnote{In order to unify the three different $rv-$assignments, the function $k_{rv}(z)$ is chosen so that $k_{rv}(z)=z$ when $v=r=0$ and $v=r=1$, while $k_{rv}(z)=0$ when $1-v=r=0$.} $k_{rv}(z)=((1-v)+vr)z$. Therefore, modulo the sign factors, the splitting is $S(4)-$ and reflection-invariant. (In particular, the factor $(-1)^{a_{1}+m_{rv}(\tilde{a}_{1})}$ arises due to the prescription formulated in the footnote after Eq. (\ref{SR.01a}).)
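The counting behind the factor $2^{(v-r)(1-|e_{1}|)}$ is purely combinatorial and can be checked directly. The following sketch (in Python; the function names are our own, not the paper's) enumerates $e_{1}$ of Eq. (\ref{LAT.02}) over the $v-1=r=0$ multiplet and checks the stated special cases of $k_{rv}(z)$:

```python
# Sign bookkeeping of Eqs. (LAT.01)-(LAT.02): m_rv(a1t) = v(1-r)*a1t,
# k_rv(z) = ((1-v)+v*r)*z, and e1 = (-1)**(a1 + m_rv) * a1.

def m_rv(r, v, a1t):
    # m_rv vanishes for all 0 <= r <= v <= 1 except the v-1 = r = 0 case.
    return v * (1 - r) * a1t

def k_rv(r, v, z):
    # k_rv(z) = z when v = r = 0 or v = r = 1, while k_rv(z) = 0 when 1-v = r = 0.
    return ((1 - v) + v * r) * z

def e1(r, v, a1, a1t):
    return (-1) ** (a1 + m_rv(r, v, a1t)) * a1

# In the v-1 = r = 0 case the four (a1, a1t) members of the S(4)-multiplet
# are mapped onto only three values of e1, with e1 = 0 occurring twice.
values = [e1(0, 1, a1, a1t) for a1 in (0, 1) for a1t in (0, 1)]
```

The enumeration reproduces the statement above: the four members yield the three values $\{-1,0,1\}$, with $e_{1}=0$ realized twice.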
Also, the previous Appendix establishes the transformation properties ${\mathcal{C}}_{kl}\rightarrow~(-1)^{H_{kl}}{\mathcal{C}}_{kl}$ of the entries of the intersection matrix, \begin{equation} {\mathcal{C}}_{p1}~\longrightarrow~ (-1)^{a_{1}+k_{rv}(z)+m_{rv}(\tilde{a}_{1})}{\mathcal{C}}_{p1}~, ~\forall{p}\neq{1}~~~~~,~~~~~ {\mathcal{C}}_{il}~\longrightarrow~(-1)^{k_{rv}(z)}{\mathcal{C}}_{il}~,~ \forall{i,l}\neq{1}~, \label{LAT.03} \end{equation} where we take into account the definition (\ref{1.34}) of ${\mathcal{C}}_{il}$ combined with the pattern of the reattachments (formalized by Eqs. (\ref{AM.07d}), (\ref{ST.01}), and (\ref{ST.02})). Altogether, one concludes that (in the quantity (\ref{SR.01g})) the $\{a_{i}\}-$dependence of the factor $e^{i\left(\bar\eta \bar{t}_{1}-\bar\zeta \bar{t}_{2}\right) \bar{A}{\cal C}_{21}/\bar{\theta}}$ indeed disappears when Eq. (\ref{LAT.01}) is combined with the change (\ref{CHA.01}) of the variables, provided the transformation law (\ref{LAT.03}). Next, let us turn to the $\{a_{i}\}-$dependence of the dressing weight (considered prior to the change of the variables) composed of the $n-v$ factors (\ref{SR.01a}) entering the definition (\ref{SR.01g}) of $\tilde{\cal V}_{2rv}(\cdot)$. In view of Eq. (\ref{LAT.03}), this dependence is determined by the transformation law (\ref{LAT.04}) together with the replacement \begin{equation} T_{{\cal C}_{ij}}(\bar\eta,\bar\zeta)~\longrightarrow~ T_{(-1)^{H_{ij}}{\cal C}_{ij}} (\bar\eta,(-1)^{a_{1}+m_{rv}(\tilde{a}_{1})}\bar\zeta) =\bar\eta-\bar\zeta \label{LAT.05} \end{equation} of the arguments of the combination (\ref{1.24h}). As a result, after the change (\ref{CHA.01}) of the variables, the considered weight assumes the $\{a_{i}\}-$independent implementation (\ref{MUL.02a}). Finally, to deduce the relation (\ref{LAT.02r}), all one needs is to apply the replacements (\ref{LAT.03}) and (\ref{LAT.04}) together with the change (\ref{CHA.01}) of the variables.
In particular, in the $r=v=1$ case (characterized by $m_{11}(\cdot)=0$), by virtue of Eq. (\ref{LAT.04}), the transformation yields $e_{4}=(-1)^{-2a_{1}}a_{4}/\alpha^{(1)}=- a_{4}$, where $\alpha^{(1)}=-1$ is associated with figs. 7a and 7e. Summarizing, this verifies that the decomposition (\ref{SU.01z}) indeed assumes the form fixed by Eq.~(\ref{MUL.04}). \vspace*{\fill}\pagebreak
\section{Introduction} The ATLAS and CMS collaborations have observed the Higgs boson \cite{ATLAS-CMS}, and so far the measurements of the Higgs properties are consistent with the standard model (SM) predictions. However, the SM suffers from the so-called naturalness problem \cite{finetuning}, which has inspired theorists to propose various new physics models. Among these new physics models, natural SUSY \cite{nsusy1,nsusy2,nsusy3,nsusy4,nsusy5} satisfies the naturalness criterion well. It requires a light stop sector and a weak scale higgsino mass $\mu \lesssim O(300\,\text{GeV})$. The stop sector in natural SUSY has been discussed extensively \cite{stop,nsusy-stop,hanz,drees}. The weak scale higgsino in natural SUSY results in the existence of at least two light neutralinos and a pair of charginos at the weak scale. If the lighter of the two neutralinos is the dark matter, the relic density would be far below the observed one, because a higgsino-like dark matter usually has a large annihilation cross section to the SM particles. However, if the bino is the LSP, the dark matter is bino-like with some higgsino component, and the observed relic density can be obtained much more easily. Searching for such electroweakinos would therefore directly probe the dark matter sector and the naturalness of SUSY. Generally, the search strategies depend on the spectrum of the electroweakinos. If the electroweakinos are highly degenerate at low energy, they could be probed by the mono-jet, mono-photon or mono-$Z$ signals in future experiments \cite{monojet,carpenter,liutao}. One such case is when only the higgsinos are light at the weak scale. In this case, the mono-jet signal can probe higgsinos up to 150 GeV at the 2$\sigma$ level at the 14 TeV LHC with a luminosity of $3000~\rm{fb}^{-1}$. If the mass splitting between the electroweakinos is moderate, they can be probed through multiple soft leptons \cite{soft-lepton}.
Recently, some authors also proposed a new channel, $\ell^+ \ell^- +\gamma +\slashed E_T $, to probe the region with a small splitting between the higgsinos and the bino \cite{bino-higgsino}. The photon in the final state comes from $\chi^0_2$ decaying into $\chi^0_1$ plus a photon, and the two leptons come from the other neutralino decaying into the LSP via a virtual $Z$ boson. When the splitting between the two neutralinos is small, the branching ratio of $\chi_2^0\rightarrow \gamma \chi_1^0$ is considerable. Thus, another signal, $j+\ell+ \gamma + \slashed {E}_T $, from neutralino/chargino pair production may also be accessible at the future LHC \cite{new}. If the electroweakinos have a large mass splitting, the multi-lepton final state from chargino/neutralino pair production has the highest sensitivity \cite{higgsino-3l,hantao}. The ATLAS and CMS collaborations performed such a study and gave the mass limits $m_{\tilde{\chi}^\pm_1,~\tilde{\chi}^0_2} >345$ GeV (ATLAS) \cite{atlas-3l} and $m_{\tilde{\chi}^\pm_1,~\tilde{\chi}^0_2} >270$ GeV (CMS) \cite{cms-3l}, assuming that the chargino/neutralino decays via intermediate gauge bosons and that the LSP is almost massless. But these limits depend on the composition of the chargino/neutralino. In the experiments, a wino NLSP and a bino LSP are assumed, and thus the pair-produced charginos/neutralinos are wino-like. The limits would change if the NLSP is a higgsino. One reason is that the cross section of chargino/neutralino production in the wino NLSP case is larger than the one in the higgsino case. The other reason is that in a realistic spectrum the decay branching ratio of $\chi_2^0\rightarrow\chi^0_1 + Z$ is not 100\%, since $\chi_2^0$ can also decay into $\chi^0_1 + h$. In the present work, we focus on the bino LSP and higgsino NLSP case. We also require the wino to be decoupled. Such spectra can be realized in non-universal gaugino mass models.
We reinterpret the experimental results for this realistic spectrum and give the prospects of detecting this signal in future LHC experiments. The rest of this paper is organized as follows: In Sec. II, we scan the parameter space and investigate the properties of the surviving parameter space. In Sec. III, we reinterpret the experimental limits on the parameter space. In Sec. IV, the prospects of detecting this signal in our surviving space are studied. \section{The property of the surviving parameter space } In this section, we scan the parameter space in the frame of natural SUSY with a light bino. Some surviving samples, which are sensitive to the 3$l$ search and may be excluded by future direct search results, are shown. The parameter space is scanned in the following region: \begin{eqnarray} 1~\text{GeV} <M_1< 100~\text{GeV},\quad 100~\text{GeV}<\mu < 300~\text{GeV},\quad 3 < \tan\beta < 60 , \end{eqnarray} where the lower bound of $\mu$ avoids the chargino search limit and the upper bound satisfies the naturalness requirement. Other parameters, except for the stop sector, are fixed at 2 TeV. The stop sector is scanned in the region \begin{eqnarray} 700 ~\textrm{GeV} <(M_{\tilde{Q}_3},M_{\tilde{t}_R})< 2 ~\textrm{TeV}, ~~-3 ~\textrm{TeV}< A_t < 3~\textrm{TeV} \end{eqnarray} where the lower bound avoids the direct stop search limit and the upper bound preserves the naturalness of SUSY. The following constraints are considered in the scan: \begin{itemize} \item[(1)] The SM-like Higgs mass is required to be within the range of 123--127 GeV. We use \textsf{FeynHiggs2.8.9} \cite{feynhiggs} to calculate the Higgs mass, and impose the experimental constraints from LEP, Tevatron and LHC by \textsf{HiggsBounds-3.8.0} \cite{higgsbounds}. \item[(2)] We require our samples to satisfy various B-physics bounds at the 2$\sigma$ level.
We use \textsf{SuperIso v3.3} \cite{superiso} to implement the constraints, including $B\rightarrow X_s\gamma$ and the latest measurements of $B_s\rightarrow \mu^+\mu^-$, $B_d\rightarrow X_s\mu^+\mu^-$ and $ B^+\rightarrow \tau^+\nu$. \item[(3)] The SUSY predictions of the precision electroweak observables, such as $\rho_l$, $\sin^2 \theta_{\rm eff}^l$, $m_W$ and $R_b$ \cite{rb}, are required to be within the $2\sigma$ ranges of the experimental values. \item[(4)] A light bino in natural SUSY would mix with the higgsino, which induces three neutralinos and a pair of charginos. The lightest neutralino acts as the dark matter candidate, so the relic abundance and the direct searches for dark matter set limits on the parameter space. Here we require the thermal relic density of the lightest neutralino (as the dark matter candidate) to be below the 2$\sigma$ upper limit of the Planck value \cite{planck}. We use the code \textsf{MicrOmega v2.4} \cite{micromega} to calculate the relic abundance and the DM-nucleon scattering. \end{itemize} \begin{figure}[htbp] \includegraphics[width=6.0in,height=3in]{1.eps} \caption{ Dark matter properties in the surviving space. All points satisfy the 2$\sigma$ upper limit of PLANCK. The red pentagrams lie within the 2$\sigma$ range of PLANCK, $0.091 < \Omega h^2 < 0.138$, where a 10\% theoretical uncertainty is included. The DM-nucleon scattering cross section has been scaled by a factor $\Omega h^2/\Omega h^2(PLANCK)$. } \label{fig1} \end{figure} Figure \ref{fig1} shows the dark matter properties in the surviving space. The left panel shows the relic abundance of the surviving samples. We can see that, after requiring the relic density to be below the 2$\sigma$ upper limit of the Planck value \cite{planck}, two regions are left, both with a bino-like dark matter. The first region corresponds to a lighter dark matter mass, lying in $35\sim 50\,\text{GeV}$, near half of the $Z$ boson mass.
In the other region, the dark matter mass lies in $50-65$ GeV, near half of the Higgs mass. This is easy to explain. Usually, a bino LSP has a small annihilation cross section, and thus it easily yields a large relic density. If there is a resonant particle in the s-channel, the annihilation cross section is raised through resonance enhancement, and thus the relic abundance is reduced. This requires the dark matter mass to be around half of the boson mass. In our spectrum, only the $Z$ boson or the Higgs boson can play this role. We should note that there are points satisfying the 2$\sigma$ range of PLANCK. These points get the correct relic abundance due to the joint effect of the resonance and the mixing between bino and higgsino \cite{well-temper}. The right panel shows the limit from dark matter direct searches. It shows that the $Z/h$ resonance region easily evades the LUX \cite{LUX} constraint, due to the moderate splitting between bino and higgsino. The future XENON-1T (2017) will exclude a large part of the parameter space of this region, but a small fraction can still survive when the dark matter mass is very close to half the $Z$ mass or the Higgs mass. It should be noted that some points within the 2$\sigma$ range of PLANCK still survive the LUX search. However, they may be covered by the future dark matter direct search XENON-1T (2017). So, in the following, we concentrate on the points which still survive the limits from the dark matter experiments. Although in this region a correct relic abundance can be obtained by elaborately tuning the neutralino mass, we just take the dark matter relic abundance as an upper limit here. \section{Direct search limits on the parameter space} At the LHC, the ATLAS and CMS collaborations have separately performed the $3l$ searches \cite{atlas-3l,cms-3l}.
These searches target $\chi^0_2 \chi^{\pm}_1$ pair production followed by the decays $\chi_2^0\rightarrow\chi^0_1 + Z$ and $\chi^\pm_1\rightarrow\chi_1^0 + W^\pm$ (the $W$ and $Z$ can be virtual), with the $W$/$Z$ decays producing three leptons in the final state. In this paper, we use the ATLAS result to constrain our parameter space. The ATLAS experiment \cite{atlas-3l2} defines six signal regions targeting the $Z$-depleted and $Z$-enriched regions. Table \ref{tab1} shows the selection requirements of these six signal regions. We can see that SRnoZ{(a,b,c)} concentrate on the $Z$-depleted case, where the invariant mass of the SFOS lepton pair departs from the $Z$-boson mass. Conversely, in the other three regions a $Z$ boson is required in the intermediate state. Through this analysis, the experiments give an exclusion region on the $\chi^0_2-\chi^0_1$ plane, and the exclusion limit can reach 320 GeV \footnote{We note that in the latest ATLAS $3l$ results this limit reaches 345 GeV. We carefully checked the difference between the old result and the latest one, and found that the cut efficiency is not improved significantly; the latest result is also hard to implement because twenty signal regions are defined.}. It should be noted that in the experiments a wino-like NLSP and a bino LSP are assumed, and the decay branching ratios of $\chi_2^0\rightarrow\chi^0_1 + Z$ and $\chi^\pm_1\rightarrow\chi_1^0 + W^\pm$ are set to 100\%. In the higgsino NLSP case, not only the $\chi^0_2$ but also the $\chi^0_3$ contributes to the signals. Even so, the total cross section is still about half of the one in the wino case. So we should carefully implement the $3l$ experiments on our parameter space; in the present work, we use Monte Carlo simulation. \textsf{MadGraph5} \cite{mad5} is adopted to generate events, the parton shower is carried out by \textsf{PYTHIA} \cite{pythia}, and \textsf{CheckMATE1.1.4} \cite{Checkmate} is used to simulate the $3l$ experimental analyses.
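For illustration, the selection requirements of Table \ref{tab1} amount to a small decision tree. The sketch below is our own rendering of that table (the variable names and boundary conventions are our assumptions, not taken from the ATLAS analysis code); the "SR veto" row is realized by testing SRnoZc before SRnoZa/b:

```python
def signal_region(m_sfos, met, mt, pt3):
    """Assign an event to one of the six 3l signal regions of Table 1.

    Inputs are m_SFOS, E_T^miss, m_T and the third-lepton p_T, all in GeV.
    Returns the region name, or None if the event falls in no region.
    """
    on_z = 81.2 <= m_sfos <= 101.2
    if not on_z:
        # SRnoZc is checked first: events passing it are vetoed in SRnoZa/b.
        if met > 75 and mt > 110 and pt3 > 30:
            return "SRnoZc"
        if m_sfos < 60 and met > 50 and pt3 > 10:
            return "SRnoZa"
        if 60 <= m_sfos <= 81.2 and met > 75 and pt3 > 10:
            return "SRnoZb"
    else:
        if met > 120 and mt > 110 and pt3 > 10:
            return "SRZc"
        if 75 <= met <= 120 and pt3 > 10:
            return "SRZb" if mt > 110 else "SRZa"
    return None
```

For example, an event with $m_{SFOS}=90$ GeV, $E_T^{miss}=100$ GeV, $m_T=120$ GeV and a third-lepton $p_T$ of 15 GeV lands in SRZb.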
Finally, we combine the simulation results from the six signal regions and derive the final exclusion limit. At the beginning, we checked the reliability of our simulation with the benchmark points provided by the ATLAS paper and found good agreement. Figure \ref{fig2} shows our limit assuming 100\% branching ratios of $\chi_2^0\rightarrow\chi^0_1 + Z$ and $\chi_3^0\rightarrow\chi^0_1 + Z$. We can see that the limit can reach at most 250 GeV when the LSP mass is near 0 GeV, compared to the 320 GeV limit in the wino case. It also shows that as the LSP becomes heavy, the limit weakens rapidly. This figure is also consistent with similar figures provided by other authors \cite{higgsino-3l}. We should note that there is a region where the limit is not effective. It is located around $M_{\chi^0_2} \sim$ $140-160$ GeV and $M_{\chi^0_1} \sim$ $40-60$ GeV. This can be explained as follows. When the splitting of $\chi^0_2$ and $\chi^0_1$ is just above the $Z$ mass, the kinematics is very similar to that of the $WZ$ background, and thus the backgrounds are relatively large; this region therefore has a small probing efficiency. \begin{table}[th] \centering\caption{The selection requirements for the six signal regions. \label{tab1}} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Selection & SRnoZa & SRnoZb & SRnoZc & SRZa & SRZb & SRZc \\ \hline $m_{SFOS}$ [GeV] &$<$ 60 &60-81.2 &$<$81.2 or $>$ 101.2 &81.2-101.2 &81.2-101.2 &81.2-101.2 \\ \hline $E_T^{miss}$ [GeV] &$ > $50 & $>$75 &$>$75 &75-120 &75-120 &$>$120 \\ \hline $m_T$ [GeV] &$-$ &$-$ &$>$110 &$<$110 &$>$110 &$>$110 \\ \hline $p_T$ $3^{rd}~l$ [GeV] &$>$10 &$>$10 &$>$30 &$>$10 &$>$10 &$>$10 \\ \hline SR veto & SRnoZc & SRnoZc & $-$ & $-$ & $-$ & $-$ \\ \hline \end{tabular} \end{table} \begin{figure}[htbp] \includegraphics[width=4.0in,height=3in]{2.eps} \caption{The $3l$ exclusion limit on the $\chi^0_2-\chi^0_1$ plane with the higgsino being the NLSP and a 100\% branching ratio of $\chi_{2,3}^0\rightarrow\chi^0_1 + Z$ assumed.
The red dashed line is the dividing line $m_{\chi_2^0}=m_{\chi_1^0}+m_{h}$. To the right of this line, the decay channel $\chi^0_{2}\rightarrow\chi^0_1 h$ opens.} \label{fig2} \end{figure} To impose the $3l$ constraint on our samples, we should also survey the decay branching ratios of $\chi^0_{2,3}$. Fig. \ref{fig3} presents the decays of $\chi^0_{2,3}$ ($\chi^0_{2,3}\rightarrow\chi^0_1 Z$ includes off-shell $Z$). When the $\chi^0_1 h $ channel is closed, $\chi^0_{2,3}\rightarrow\chi^0_1 Z$ dominates. Otherwise, we can see two distinct regions for the decays of $\chi^0_{2,3}$: the $h$-enriched region and the $h$-depressed region. In the $h$-enriched region, the branching ratio of $\chi^0_{2,3}\rightarrow\chi^0_1 h$ can reach as much as $75\%$, whereas in the $h$-depressed region this branching ratio is less than 25\%. Note that $\chi^0_{2}\rightarrow\chi^0_1 h$ and $\chi^0_{3}\rightarrow\chi^0_1 h$ cannot be enriched at the same time, which can be inferred from the third panel of Fig. \ref{fig3}. The reason is illustrated clearly in Refs. \cite{hantao,jung}. It should also be pointed out that the $3l$ probing ability depends on $Br(\chi^0_{2}\rightarrow\chi^0_1 Z) +Br(\chi^0_{3}\rightarrow\chi^0_1 Z)$. \begin{figure}[htbp] \includegraphics[width=7.0in,height=3in]{3.eps} \caption{The decay branching ratios of the neutralinos.} \label{fig3} \end{figure} \begin{figure}[htbp] \includegraphics[width=4.0in,height=3in]{4.eps} \caption{The $3l$ exclusion results considering the branching ratio effect. The blue dashed line is the dividing line $m_{\chi_2^0}=m_{\chi_1^0}+m_{h}$. To the right of this line, the decay channel $\chi^0_{2}\rightarrow\chi^0_1 h$ opens. The pentagrams on the plot satisfy the 2$\sigma$ range of PLANCK.} \label{fig4} \end{figure} Figure \ref{fig4} presents the limit at the 8 TeV LHC on the parameter space, considering the branching ratio effect. It is found that only a very tiny region can be excluded in this scenario, and most regions survive.
The blue line represents the threshold for $\chi^0_{2}\rightarrow\chi^0_1 h$. Only the points near the bottom of the line have been excluded. The surviving points to the left of the line escape the experimental limits largely due to a suppression from the kinematic cuts. The points on the right of the line survive because the $\chi^0_{2}\rightarrow\chi^0_1 h$ channel opens and the $\chi^0_{2,3}\chi^\pm_1$ production rate is reduced. Note that even when the $\chi^0_{2}\rightarrow\chi^0_1 h$ channel opens, there are still points excluded. The reason is that when the $\chi^0_{2}\rightarrow\chi^0_1 h$ channel has just opened, the branching ratio of $\chi^0_{2,3}\rightarrow\chi^0_1 h$ is still very small, as can be seen from the right panel of Fig. \ref{fig3}. We note that the points satisfying the relic density constraint remain relatively safe. In addition, the mass of $\chi^0_2$ in our surviving samples is larger than 150 GeV. This is largely due to the $Z$ invisible decay and Higgs invisible decay limits, because a sizable higgsino component of the dark matter would affect the decays of the $Z$ boson and the Higgs boson. \section{Probing prospects at the future LHC } The discovery potential at the 14 TeV LHC is discussed in this section. Although the $\chi^0_{2,3}\chi^\pm_1 $ production rate will increase at the 14 TeV LHC, the backgrounds will grow too. We must simulate the backgrounds as well as the signals. The irreducible background includes diboson, triboson and $t\bar{t}W/Z$ production, among which the diboson production highly dominates. The reducible background includes single and pair production of top quarks, $WW$ and single $W$ or $Z$ boson processes produced in association with jets or photons, among which the $t\bar{t}$ production highly dominates. In our paper we only simulate the main backgrounds: the diboson backgrounds and the $t\bar{t}$ background.
We use \textsf{MadGraph} to simulate our backgrounds and scale the cross sections to next-to-leading order \cite{NLO}. To make our backgrounds more accurate, we first simulate the backgrounds at 8 TeV. After comparing our simulated backgrounds with the backgrounds derived from the experimental results, we obtain the scale factors in each signal region. We then multiply our simulated 14 TeV backgrounds by the corresponding scale factors and take these scaled 14 TeV backgrounds as our background estimates. Although at the 14 TeV LHC the scale factors might change a little, this procedure still offsets some of the deviation between our simulation and the experimental background estimation. Table \ref{tab2} lists the number of background events at the 14 TeV LHC with 300 fb$^{-1}$. In the $Z$-enriched region, $WZ$ production dominates the background, whereas in the $Z$-depleted region the $t\bar{t}$ background has a comparable contribution. \begin{table}[th] \centering\caption{The number of background events at the 14 TeV LHC with 300 fb$^{-1}$. \label{tab2}} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Background & SRnoZa & SRnoZb & SRnoZc & SRZa & SRZb & SRZc \\ \hline $ZZ$ &410 &59 &10 &280 &39 &12 \\ \hline $ZW^\pm$ &1391 &595 &71 &6850 &661 &189 \\ \hline $t\bar{t}$ &1715 &401 &62 &272 &178 &19 \\ \hline Total &3516 &1055 &143 &7402 &878 &220 \\ \hline \end{tabular} \end{table} For the signal, the cross sections are calculated using \textsf{Prospino2.1} \cite{prospino}. We implement the same cuts on the signal and backgrounds. The following formula is adopted to calculate the significance: \begin{eqnarray} \text{Significance} = \frac{S}{\sqrt{B+(0.1 B)^2}} \end{eqnarray} where $S$ is the number of signal events and $B$ is the total number of background events; a 10\% systematic error is included in the estimation. We present the final results in Fig. \ref{fig5}.
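The significance formula, with its 10\% systematic term, is straightforward to evaluate; a minimal sketch:

```python
import math

def significance(s, b, sys=0.10):
    """S / sqrt(B + (sys*B)^2): Poisson fluctuation of the background
    plus a flat systematic uncertainty (10% by default) in quadrature."""
    return s / math.sqrt(b + (sys * b) ** 2)
```

For a background as large as that of SRZa ($B \simeq 7402$), the systematic term dominates the denominator, so the significance is essentially $S/(0.1B)$.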
Figure \ref{fig5} shows that the region with $40~\text{GeV} \lesssim m_{\tilde{\chi}^0_1} \lesssim 60~\text{GeV}$ and $160~\text{GeV} \lesssim m_{\tilde{\chi}^0_{2,3}} \lesssim 300~\text{GeV}$ can be covered at the $3\sigma$ level. Some parameter space can even reach the $5\sigma$ discovery level. We note that the points satisfying the 2$\sigma$ range of PLANCK would easily be covered at the $2\sigma$ level. A tiny part of the parameter space stays below $2\sigma$ because it is located in the region where the kinematics is similar to the $WZ$ background. However, if the luminosity increases, this region will be further squeezed. \begin{figure}[htbp] \includegraphics[width=4.0in,height=3in]{5.eps} \caption{The discovery potential at the 14 TeV LHC with 300 fb$^{-1}$. The pentagrams on the plot satisfy the 2$\sigma$ range of PLANCK. } \label{fig5} \end{figure} Finally, we stress that our analysis is performed in the framework of the MSSM. In some extensions of the MSSM, such as the next-to-minimal supersymmetric standard model (NMSSM), which seems to be more favored by the LHC Higgs data \cite{nmssm-1}, the neutralino LSP may have a significant singlino component and thus can be very light \cite{nmssm-2}. Then the $\tilde{\chi}^\pm_1 \tilde{\chi}^0_{2,3}$ production may have different signatures. \section{Conclusion} Motivated by naturalness, we study a simplified MSSM scenario where only the bino-like LSP and the higgsino-like NLSP are light. We first scan the parameter space of this scenario, considering the constraints from the Higgs mass, flavor physics, electroweak precision measurements and dark matter experiments. Then, in the allowed parameter space, we perform a Monte Carlo simulation for the $\tilde{\chi}^\pm_1 \tilde{\chi}^0_{2,3}$ production followed by $\tilde{\chi}^\pm_1 \to W^\pm \tilde{\chi}^0_1$ and $\tilde{\chi}^0_{2,3} \to Z\tilde{\chi}^0_1$.
By examining the presently available trilepton bounds on the wino-like chargino/neutralino, we find that only a narrow region with $40\,\rm{GeV} \lesssim m_{\tilde{\chi}^0_1} \lesssim 50\,\rm{GeV}$ and $160\,\rm{GeV} \lesssim m_{\tilde{\chi}^0_{2,3}} \lesssim 170\,\rm {GeV}$ on the $m_{\tilde{\chi}^0_1}-m_{\tilde{\chi}^0_{2,3}}$ plane can be excluded. Finally, we explore the potential of the trilepton signature in probing such a scenario at the 14 TeV LHC and find that the region with $40\,\rm{GeV} \lesssim m_{\tilde{\chi}^0_1} \lesssim 60\,\rm {GeV}$ and $160\,\rm{GeV} \lesssim m_{\tilde{\chi}^0_{2,3}} \lesssim 300\,\rm{GeV}$ can be covered at the $3\sigma$ level with a luminosity of ${\cal L}=300$ fb$^{-1}$. \section*{Acknowledgement} I would like to thank Junjie Cao, Jie Ren, Lei Wu, Jin Min Yang and Yang Zhang for helpful discussions and valuable comments on the manuscript. I acknowledge the Korea Ministry of Education, Science and Technology (MEST) for the support of the Young Scientist Training Program at the Asia Pacific Center for Theoretical Physics (APCTP). \hbox to \hsize{\hss} \newpage
\section{\label{sec:level1}Introduction} Center vortices are color-magnetic line-like (surface-like) objects in three (four) dimensions which are quantized in terms of the center elements of the gauge group. Condensation of center vortices in the vacuum of QCD leads to quark confinement, such that the color electric flux between a quark and an antiquark is compressed into tubes and a linearly rising potential between static quarks is obtained. In the vortex picture, quark confinement emerges due to the interaction between center vortices and Wilson loops \cite{Greensite2003,Engelhardt2005}. On the other hand, monopoles play the role of agents of confinement in the dual superconductor scenario \cite{Suzuki1993,Hooft1981}. Therefore, one may expect some kind of relation between monopoles and center vortices. Monte Carlo simulations \cite{Del Debbio1998} indicate that, after transforming to the maximal Abelian gauge and then applying Abelian projection, a center vortex configuration appears in the form of monopole-vortex chains in the $SU(2)$ gauge group. The idea of monopole-vortex chains has been studied by many researchers \cite{Del Debbio1998,Cornwall1998,Chernodub2009,Chernodub2005,Reinhardt}. In this article, monopole-vortex chains in the $SU(2)$ gauge group are investigated in a model. This model is the same as the thick center vortex model \cite{Fabe1998}, but we use a monopole-antimonopole flux instead of the center vortex flux. The motivation is to see whether, with this simple model, we can observe the idea of monopole-vortex chains, which has already been confirmed by lattice calculations as well as by some other phenomenological models. In this model, monopole-antimonopole configurations, which are line-like and similar to center vortices, are assumed to exist in the vacuum. Studying the group factors of the monopole-antimonopole configurations and the center vortices, we find that the monopole-antimonopole configurations are constructed of two center vortices.
Increasing the thickness of the center vortex core increases the energy of the center vortex and therefore the energy of the vacuum made of these vortices. As a result, the potential energy between a static quark-antiquark pair increases. This fact can be confirmed by this model as well. From the physical point of view, it is natural that the condensation of vortices leads to quark confinement. Classically, it is very similar to the Aharonov-Bohm effect, where increasing the thickness of the magnetic flux, and therefore the magnetic energy of the system, changes the interference pattern. Using this simple model, we have calculated the potentials induced by the monopole-antimonopole configurations and by the center vortices. Comparing these potentials, we observe that the monopole-antimonopole configurations lead to a larger static quark-antiquark potential compared with the case when we use two center vortices in the model. We interpret this extra energy as a repulsive energy between the two center vortices constructing the monopole-antimonopole configurations, and we then argue that the monopole-antimonopole configurations can deform into monopole-vortex chains, as confirmed by lattice calculations and other phenomenological models. In section \ref{sec:monopole}, the formation of monopoles, which is related to the Abelian gauge fixing method, is reviewed in the $SU(2)$ gauge group. A model with structures of center vortices and monopole-antimonopole configurations is studied in sections \ref{sec:model1} and \ref{sec:model2}. Then, in section \ref{sec:SU(2)}, we study the group factors and potentials of these structures to argue for monopole-vortex chains. Finally, we summarize the main points of our study in section~\ref{sec:conclusion}. \section{Abelian gauge fixing and magnetic monopole charges}\label{sec:monopole} By Abelian gauge fixing, magnetic monopoles are produced in a non-Abelian gauge theory.
The specific points in space where the Abelian gauge fixing becomes undetermined are the sources of magnetic monopoles. In the following, the formation of the magnetic charge by the Abelian gauge fixing method is discussed \cite{Ripka}. In order to reduce a non-Abelian gauge theory to an Abelian gauge theory, the gluon field itself cannot be diagonalized by a gauge transformation: the gluon field $A^\mu $ has four components, and they cannot all be diagonalized simultaneously. Therefore, a scalar field is used to fix the gauge. One can consider a scalar field $\Phi \left( x\right)$ in the adjoint representation of $SU(N)$ as follows: \begin{equation} \Phi \left( x\right) =\Phi _a\left( x\right) T_a \label{phigauge} \end{equation} where $T_a$ are the $N^2-1$ generators of the $SU\left( N\right) $ gauge group. A gauge which diagonalizes the matrix $\Phi \left( x\right)$ is called an Abelian gauge. Now, we consider the $SU(2)$ gauge group. A gauge transformation $\Omega \left( x\right)$ diagonalizes the field $\Phi \left( x\right)$: \begin{equation} \Phi =\Phi _{a}T_{a}\rightarrow \Omega \Phi \Omega ^{\dagger }=\lambda T_{3}=\frac{\lambda}{2}\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right), \end{equation} where \begin{equation} \lambda =\sqrt{\Phi _{1}^{2}+\Phi _{2}^{2}+\Phi _{3}^{2}}.
\end{equation} The eigenvalues $\lambda \left( x\right) $ of the matrix $\Phi \left( x\right) $ are degenerate when $\lambda =0$, and therefore the three components $\Phi _{a=1,2,3}\left( \vec{r}\right) $ vanish at specific points $\vec{r}=\vec{r}_{0}$: \begin{equation} \Phi _{1}\left( \vec{r}_{0}\right) =0\;\;\;\;\;\;\Phi _{2}\left( \vec{r}_{0}\right) =0\;\;\;\;\;\;\Phi _{3}\left( \vec{r}_{0}\right) =0 \label{x123r} \end{equation} In the vicinity of the point $\vec{r}=\vec{r}_{0}$, we can express $\Phi \left( \vec{r}\right)$ in terms of a Taylor expansion: \begin{equation} \Phi \left( \vec{r}\right) =\Phi _{a}\left( \vec{r}\right) T_{a}=T_{a}C_{ab}\left( x_{b}-x_{0b}\right), \end{equation} where $C_{ab}=\left. \frac{\partial \Phi _{a}}{\partial x_{b}}\right| _{\vec{r}=\vec{r}_{0}}$. Therefore, the field $\Phi \left( \vec{r}\right) $ has a hedgehog shape in the vicinity of the point $\vec{r}=\vec{r}_{0}$. One can define another coordinate system in which the point $\vec{r}_{0}$ is placed at the origin. In this coordinate system, the field $\Phi \left( \vec{r}^{\prime }\right) $ has the form: \begin{equation} \Phi \left( \vec{r}^{\prime }\right) =x_{a}^{\prime }T_{a}\;\;\;\;\;\;x_{a}^{\prime }=C_{ab}\left( x_{b}-x_{0b}\right). \label{shedge} \end{equation} Dropping the prime on $x^{\prime }$ and using spherical coordinates for the vector $\vec{r}$, one gets \begin{equation} \Phi \left( \vec{r}\right) =x_{a}T_{a}=\frac{r}{2}\left( \begin{array}{cc} \cos \theta & e^{-i\varphi }\sin \theta \\ e^{i\varphi }\sin \theta & -\cos \theta \end{array} \right). \end{equation} The gauge transformation $\Omega$ which diagonalizes the hedgehog field $\Phi $ is \begin{equation} \Omega \left( \theta ,\varphi \right)=\left( \begin{array}{cc} e^{i\varphi }\cos \frac{\theta }{2} & \sin \frac{\theta }{2} \\ -\sin \frac{\theta }{2} & e^{-i\varphi }\cos \frac{\theta }{2} \end{array} \right).
\label{omega} \end{equation} Therefore, the hedgehog field $\Phi $ is diagonalized as follows: \begin{equation} \Omega \Phi \Omega ^{\dagger }=\frac{r}{2}\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) =rT_{3}. \end{equation} The gluon field transforms under the same gauge transformation: \begin{equation} \vec{A}=\vec{A}_aT_a\rightarrow \Omega \left( \vec{A}+\frac 1{ie}\vec{\nabla}\right) \Omega ^{\dagger }. \label{aomega} \end{equation} One obtains: \begin{equation} \frac 1{ie}\Omega \vec{\nabla}\Omega ^{\dagger }=\frac 1e\left( -\vec{e}_\theta T_2e^{i\varphi }-\vec{e}_\varphi \frac{1+\cos \theta }{r\sin \theta }T_3+\vec{e}_\varphi \frac 1r\left( \cos \varphi T_1-\sin \varphi T_2\right) \right). \end{equation} Thus, the gluon field under the gauge transformation of Eq. (\ref {omega}) can be separated into a regular part $\vec{A}^{R} $ and a singular part: \begin{equation} \vec{A}=\vec{A}_{a}T_{a}=\vec{A}_{a}^{R}T_{a}-\frac{1}{e}\vec{n}_{\varphi }\frac{1+\cos \theta }{r\sin \theta }T_{3}. \label{asph} \end{equation} The singular part has the form of a gauge field in the vicinity of a magnetic monopole, with magnetic charge equal to: \begin{equation} g=-\frac{4\pi }{e}T_{3}. \label{gpie} \end{equation} To summarize, we observe that in the vicinity of the points where the eigenvalues of the matrix $\Phi \left( x\right) $ are degenerate, the singular part of the gluon field in the Abelian gauge behaves like a monopole with magnetic charge $g=-\frac{4\pi }{e}T_{3}$. \section{ A model of vacuum structure } \label{sec:model1} In this model \cite{Fabe1998}, the Yang-Mills vacuum is dominated by center vortices which have a finite thickness (a core). In the $SU(N)$ gauge group, there are $N-1$ types of center vortices, corresponding to the nontrivial center elements $z_n=e^{\frac{i2\pi n}{N}}$ labeled by $n=1,...,N-1$.
The effect of piercing a Wilson loop by a thick center vortex is assumed to be represented by the insertion of a group element $G$ in the link product as follows: \begin{equation} W(C)=Tr \big[U...U\big]\longrightarrow Tr \big[U...G...U\big], \label{W2} \end{equation} where \begin{equation} \label{average} G(\vec\alpha_C^{n}(x),{S})={S}\;\exp\left[i\vec{\alpha}_C^{n}(x)\cdot\vec{\mathcal{H}}\right]\;{S}^\dagger. \end{equation} Here, $\{\mathcal{H}_i,~i=1,...,N-1\}$ are the Cartan generators, ${S}$ is a random element of the $SU(N)$ gauge group, and the angle $\vec{\alpha}_C^{n}(x)$ describes the flux profile, which depends on the Wilson loop size and on the location $x$ of the center vortex with respect to the Wilson contour $C$. The random group orientations associated with $S$ are uncorrelated and should be averaged. The averaged contribution of $G$ over orientations in the group manifold specified by $S$ is \[ \overline{G}(\vec{\alpha}_C^{n}(x))=\int dS \; S \exp\left[i\vec{\alpha}_C^{n}(x)\cdot\vec{\mathcal{H}}\right] {S}^\dagger= \] \begin{equation} \label{group} =\frac{1}{d_r}Tr\left(\exp\left[i\vec{\alpha}_C^{n}(x)\cdot\vec{\mathcal{H}}\right]\right)\; \mathbf{I}_{d_r}\equiv \mathcal{G}_r(\vec{\alpha}_C^{n}(x))\; \mathbf{I}_{d_r}, \end{equation} where $\mathcal{G}_r(\vec{\alpha}_C^{n}(x))$ is called the group factor and $\mathbf{I}_{d_r}$ is the $d_r\times d_r$ unit matrix. In the $SU(N)$ case, the group factor of the fundamental representation interpolates smoothly from $e^{\frac{i2\pi n}{N}}$, if the core of the center vortex is located completely inside the Wilson loop, to $1$, if the core is completely outside. The Wilson loop $C$ is taken to be a rectangular $R\times T$ loop in the $x-t$ plane with $T\gg R$, where the left and right time-like legs of the Wilson loop are located at $x=0$ and $x=R$. In other words, the two static charges are located at these points.
A desired ansatz for the angle $\vec\alpha_{C}^{n}(x)$ must lead to a well-defined potential, $i.e.$ one with linearity and Casimir scaling at intermediate distances. Any reasonable ansatz for the angle $\vec\alpha_{C}^{n}(x)$ must satisfy the following conditions: \begin{description} \item{1.} $\vec\alpha_{C}^{n}(x)= 0$ when the center vortex is located far outside the Wilson loop. \item{2.} $\vec\alpha_{C}^{n}(x)=\vec\alpha_{max}^{n}$ when the center vortex is located deep inside a large Wilson loop. The maximum value $\vec\alpha_{max}^{n}$ of the angle is obtained from the following maximum flux condition: \begin{equation} \label{max} \exp(i\vec{\alpha}_{max}^{n}\cdot\vec{\mathcal{H}_r})=\exp(i{\alpha}_{i{(max)}}^{n}{\mathcal{H}_{ir}})=e^{i2k\pi n /N} I, \end{equation} where $k$ is the $N$-ality of representation $r$. \item{3.} $\vec\alpha_{C}^{n}(x)= 0$ as $R\to0$ (small Wilson loop). \end{description} An ansatz for the flux profile which meets these conditions is the following \cite{Fabe1998} \begin{equation} \alpha_{i}^{n}(x)=\frac{\alpha_{i{(max)}}^{n}}{2}[1-\tanh(ay(x)+\frac{b}{R})], \label{alpha} \end{equation} where $n$ indicates the center vortex type, $a$ and $b$ are free parameters of the model, ${\alpha}_{i{(max)}}^{n}$, corresponding to Eq. (\ref {max}), indicates the maximum value of the flux profile, and $R$ is the distance between the two static charges. $y(x)$, the nearest distance of $x$ from the time-like sides of the loop, is \begin{equation} y(x) = \left\{ \begin{array}{cl} x-R & \mbox{for~} |R-x| \le |x| \cr -x & \mbox{for~} |R-x| > |x| \end{array} \right. \end{equation} The flux profile of Eq. (\ref{alpha}) is one of many examples that can give the appropriate potential. Some other examples were discussed in Ref. \cite{Deld2000}. For the $SU(2)$ gauge group, when the vortex core is entirely contained within the Wilson loop, using Eq.
(\ref {max}), we get \begin{equation} \label{alpha2} \exp[i{\alpha}^{1}_{max} {\mathcal{H}}_{3}]= z_1 I, \end{equation} where $\mathcal{H}_{3}$ is the Cartan generator and $z_1 I=e^{\pi i} I$ is the nontrivial center element of the $SU(2)$ gauge group. Therefore, the maximum value of the angle $\alpha^{1}_{max}$ for the fundamental representation is equal to $2\pi$. Thus, the ansatz of the flux profile given in Eq. (\ref {alpha}) for $SU(2)$ becomes \begin{equation} \alpha^{1}(x)=\pi[1-\tanh(ay(x)+\frac{b}{R})]. \label{alpha3} \end{equation} Figure \ref{0}a schematically shows the interaction of center vortices with an $R\times T$ Wilson loop using the ansatz for the flux profile of center vortices given in Eq. (\ref {alpha}). The ansatz of the flux profile in Eq. (\ref {alpha3}) is plotted in Fig. \ref{0}b. \begin{figure}[h!] \centering a)\includegraphics[width=0.49\columnwidth]{01.eps} b)\includegraphics[width=0.46\columnwidth]{02.eps} \caption{a) The figure schematically shows the interaction between the $SU(2)$ center vortices, with the ansatz of the flux profile given in Eq. (\ref {alpha3}), and the Wilson loop, taken as a rectangular $R\times T$ loop in the $x-t$ plane, as well as some parameters of the ansatz. The effect of a center vortex on the loop is represented by the insertion of a group element $G$ in the link product of the Wilson loop, which interpolates smoothly between $G=+I$ at $\alpha^1=0$, when the center vortex is located far outside the Wilson loop, and $G=-I$ at $\alpha^1_{max}=2\pi$, when the center vortex is located completely inside the Wilson loop. b) The angle $\alpha^{1}$ versus $x$ corresponding to the ansatz for $SU(2)$ gauge theory. The left and right time-like legs of the Wilson loop are located at $x=0$ and $x=R=100$. The free parameters $a$ and $b$ are chosen to be $0.05$ and $4$, respectively. }\label{0} \end{figure} In this model, as an assumption, the probabilities that center vortices pierce the plaquettes of the Wilson loop are uncorrelated.
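To make the limiting behavior of this ansatz concrete, the following minimal Python sketch (not part of the model itself; the parameter values $a=0.05$, $b=4$, $R=100$ are those quoted above, and the sample points are illustrative assumptions) evaluates the $SU(2)$ profile of Eq. (\ref{alpha3}) far outside and deep inside the loop:

```python
import math

a, b, R = 0.05, 4.0, 100.0  # free parameters and quark separation quoted in the text

def y(x):
    """Nearest distance of x from a time-like leg of the R x T Wilson loop."""
    return x - R if abs(R - x) <= abs(x) else -x

def alpha1(x):
    """SU(2) fundamental flux profile: pi * [1 - tanh(a*y(x) + b/R)]."""
    return math.pi * (1.0 - math.tanh(a * y(x) + b / R))

# deep inside a large loop the profile saturates near alpha_max = 2*pi;
# far outside the loop it vanishes
print(alpha1(R / 2) / (2 * math.pi))  # close to 1
print(alpha1(-300.0))                 # close to 0
```

This reproduces, at sketch level, the shape plotted in Fig. \ref{0}b.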
Assuming that an $n$th center vortex appears in any given plaquette with probability $f_n$, the expectation value of the Wilson loop is obtained: \begin{equation} \label{w} <W(C)>=\prod_x \left\{ 1 - \sum_{n=1}^{N-1} f_n (1- Re\mathcal{G}_r[\vec\alpha^{n}_C(x)]) \right\} <W_0(C)>, \end{equation} where \begin{equation} \label{p} f_n = f_{N-n} ~~~~ \mbox{and} ~~~~ \mathcal{G}_r[\vec\alpha^{n}(x)] = \mathcal{G}_r^*[\vec\alpha^{N-n}(x)]. \end{equation} $<W_0(C)>$ denotes $Tr \big[U...U\big]$ for the case in which no vortex pierces the Wilson loop. One of the criteria for color confinement is the area law for the Wilson loop, $\it{i.e.}$ \begin{equation}\label{wilson} <W(C)>=\exp{\big(-\sigma A(C)\big)}<W_0(C)>. \end{equation} Here, $A(C)$ is the minimal surface spanned by the Wilson loop $C$ and $\sigma> 0$ is the confining string tension. Using Eq. (\ref {wilson}) in Eq. (\ref {w}), the string tension is obtained as follows: \begin{equation}\label{sigma} \sigma =- {1\over A} \sum_x \ln\left\{ 1 - \sum_{n=1}^{N-1} f_n (1 - Re\mathcal{G}_r[\vec\alpha^{n}_{C}(x)]) \right\}. \end{equation} One gets the static potential induced by center vortices between static color charges in representation $r$ at distance $R$ as follows: \begin{equation} \label{potential} V_r(R) = -\sum_{x=-\infty}^{\infty}\ln\left\{ 1 - \sum^{N-1}_{n=1} f_{n} (1 - {\mathrm {Re}}\mathcal{G}_{r} [\vec{\alpha}^n_{C}(x)])\right\}, \end{equation} where the centers of the vortex cores pierce the middles of the plaquettes, $i.e.$ $x=(n+\frac{1}{2})a$ with integer $n$, where $a$ is the lattice spacing. We use $a=1$ throughout this paper. Although $R$ takes only integer values in the lattice formulation, the figures related to $V_r (R)$ are plotted over a continuous interval.
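Equation (\ref{potential}) can be sketched numerically. The following Python fragment is an illustrative assumption-laden sketch (parameter values $a=0.05$, $b=4$, $f_1=0.1$ and the summation cutoff are our choices; the $SU(2)$ spin-$j$ group factor $\mathcal{G}_j=\frac{1}{2j+1}\sum_{m=-j}^{j}\cos(m\alpha)$ is used), and it also checks the small-$R$ Casimir proportionality discussed in the next paragraph:

```python
import math

a, b, f1 = 0.05, 4.0, 0.1   # free parameters and piercing probability (assumed values)

def alpha1(x, R):
    """SU(2) flux profile of Eq. (alpha3); y(x) measures the distance to the nearest leg."""
    yx = x - R if abs(R - x) <= abs(x) else -x
    return math.pi * (1.0 - math.tanh(a * yx + b / R))

def G(j, al):
    """SU(2) spin-j group factor: average of cos(m*alpha) over the 2j+1 weights m."""
    ms = [-j + k for k in range(int(2 * j) + 1)]
    return sum(math.cos(m * al) for m in ms) / (2 * j + 1)

def V(j, R, cutoff=2000):
    """Static potential of Eq. (potential); vortex centers pierce x = n + 1/2."""
    return -sum(math.log(1.0 - f1 * (1.0 - G(j, alpha1(n + 0.5, R))))
                for n in range(-cutoff, cutoff))

# the fundamental (j = 1/2) potential grows with the quark separation R
print([round(V(0.5, R), 3) for R in (5, 10, 15)])

# for a small loop alpha(x) stays small everywhere, so V_j is proportional
# to j(j+1): V_1 / V_{1/2} should approach (1*2) / (0.5*1.5) = 8/3
print(V(1.0, 2.0) / V(0.5, 2.0))
```

The rising potential and the small-$R$ ratio close to $8/3$ illustrate, at sketch level, the confinement and Casimir scaling properties of the model.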
For the $SU(2)$ gauge group, the static potential induced by $z_1$ center vortices at $f_{1} \ll 1$ and small distances between the static charges (small $R$), where $\alpha^{1}(x)\ll2\pi$, is obtained as follows: \begin{equation} \label{potential2} V_j(R) = \left\{{\frac{f_1}{6}}\sum_{x=-\infty}^{\infty} \big[{\alpha}^1(x)\big]^2 \right\} j(j+1), \end{equation} where the spin index $j$ labels the representations of $SU(2)$ gauge theory. According to Eq. (\ref {potential2}), the static potential is proportional to the eigenvalue of the quadratic Casimir operator, $i.e.$ $V_j(R) \sim j(j+1)$, in agreement with the Casimir scaling effect observed in lattice simulations \cite{Ambjorn:1984mb}. The Casimir proportionality of the static potential induced by center vortices can be generalized from $SU(2)$ to $SU(N)$. To observe the property of Casimir scaling in the potentials at the intermediate regime, the probability $f_n$ should be far smaller than $1$. Therefore, the probability $f_n$ is chosen to be $0.1$ in the calculations. Casimir scaling at intermediate distances is not found for every choice of the free parameters of the ansatz in Eq. (\ref {alpha}), but it is observed for a large region of the parameter space. As an example, the extent of the Casimir scaling region at intermediate distances can be changed by any factor $F$ by setting $a \rightarrow a/F,~b \rightarrow bF$. The thickness of the center vortex is of order ${1}/{a}$ for the ansatz given in Eq. (\ref {alpha}). Therefore, choosing $F>1$ increases the thickness of the center vortex and the extent of the Casimir scaling region, while $F<1$ decreases these quantities. In the next section, we investigate the effect of a monopole flux on a Wilson loop. \section{ Monopole-antimonopole configurations } \label{sec:model2} Now, we consider monopole-antimonopole pairs as the Abelian configurations in the vacuum.
We assume that the magnetic field between the monopole and the antimonopole is initially localized in a tube, as plotted in Fig. \ref{1}. We use these configurations in the thick center vortex model instead of center vortices. The monopole-antimonopole configurations are line-like, similar to the center vortices. The effect of piercing a Wilson loop by the monopole-antimonopole configuration is represented by the insertion of a phase $e^{ ie\int_S\vec{B}.d\vec{s}}$ \cite{Chernodub2005} in the link product, where $e$ is the color electric charge and $\int_S\vec{B}.d\vec{s}$ is the total magnetic flux of the monopole. The magnetic field of a monopole with topological charge $g$ obeys the Maxwell equation $\vec{\nabla}.\vec{B}=g\delta \left( \vec{r}\right) $. Therefore, the total magnetic flux of a monopole crossing the surface $S$ is equal to the magnetic charge $g$ \cite{Chatterjeea2014,Ripka}: \begin{equation} \label{ce} {\Phi}_m=\int_S\vec{B}.d\vec{s}=\int_Vd^3r\;\vec{\nabla}.\vec{B}=\int_Vd^3r\;g\delta \left( \vec{r}\right) =g, \end{equation} where $V$ is the volume enclosed by the surface $S$. For the $SU(2)$ gauge group, $g$ is the monopole charge of Eq. (\ref{gpie}). If we attribute a thickness to these configurations, as is done for the thick center vortices, the effect of a monopole-antimonopole configuration on a Wilson loop is to multiply the loop by a group element of the same form as the one in Eq. (\ref{group}). If a monopole-antimonopole configuration is entirely contained within the loop, then \begin{equation} \label{center} \exp\left[i\vec{\alpha}^{n}\cdot\vec{\mathcal{H}}\right]=e^{ ieg}, \end{equation} where $eg$ satisfies the charge quantization condition $eg=2n\pi$. For the $SU(2)$ gauge group, corresponding to Eq.
(\ref {gpie}), the magnetic charge of the monopole is $g=-\frac{4\pi }{e}\mathcal{H}_{3}$ and that of the antimonopole is $g=+\frac{4\pi }{e}\mathcal{H}_{3}$, where $\mathcal{H}_{3}=diag\big(\frac{1}{2},-\frac{1}{2}\big)$ is the Cartan generator. When a monopole-antimonopole pair is entirely contained within the Wilson loop, inserting the $SU(2)$ magnetic charge into Eq. (\ref {max}), we get \begin{equation} \label{alpha4} \exp[i{\alpha}^{0}_{max} {\mathcal{H}}_{3}]= e^{\pm i4\pi \mathcal{H}_{3}}=e^{i2\pi} I, \end{equation} where the index $n=0$ refers to the monopole-antimonopole configurations. The sign in the exponent is not important, since the direction in which the configuration pierces the Wilson loop is not important. Therefore, the maximum value of the angle $\alpha^{0}_{max}$ for the fundamental representation is equal to $4\pi$, and the ansatz of the flux profile given in Eq. (\ref {alpha}) for the monopole-antimonopole configurations of $SU(2)$ gauge theory becomes \begin{equation} \alpha^{0}(x)=2\pi[1-\tanh(ay(x)+\frac{b}{R})]. \label{alpha5} \end{equation} The potential induced by monopole-antimonopole configurations has the same form as the one induced by center vortices, given in Eq. (\ref{potential}). For the $SU(2)$ gauge group, the potential induced by monopole-antimonopole configurations for the fundamental representation is obtained as follows: \begin{equation} \label{potential3} V_f(R) = - \sum_{x=-\infty}^\infty \ln\{(1-f_0) + f_0\mathcal{G}_{f}[\alpha^0(x)] \}. \end{equation} In the next section, we study the group factors and the potentials for the center vortices and the monopole-antimonopole configurations. \begin{figure} \begin{center} \resizebox{0.1\textwidth}{!}{ \includegraphics{1.eps}} \caption{\label{1} A schematic view of the monopole-antimonopole configuration, which is initially considered to be localized. This configuration is line-like, the same as the center vortices.
The arrows on the lines show the direction of the magnetic field.} \end{center} \end{figure} \section{SU(2) and vacuum structures} \label{sec:SU(2)} To study the center vortices and monopole-antimonopole configurations in the vacuum of the $SU(2)$ gauge group, we discuss the interaction between the Wilson loop and these configurations. First, the group factors of these configurations, which play an important role in producing the potentials of Eq. (\ref{potential}) \cite{Rafibakhsh2014,HD2015}, are studied, and the relation between these configurations is discussed. Then, by calculating the potentials induced by these configurations, the interactions inside them are studied. \subsection{Interaction between the Wilson loop and center vortices} First, we calculate the group factor of the center vortices in the $SU(2)$ gauge group. The group factor for the fundamental representation of $SU(2)$ is obtained from Eq. (\ref{group}): \begin{equation} \mathcal{G}_{j=1/2}={1\over 2j+1}\mbox{Tr}\exp[i{\alpha}^{1} {\mathcal{H}}_{3}]=\cos(\frac{{\alpha}^{1}}{2}), \label{group-factor} \end{equation} where ${\mathcal{H}}_{3}$ is the Cartan generator of the $SU(2)$ gauge group. According to Eq. (\ref{alpha2}), the maximum value of the angle $\alpha^{1}_{max}$ for the fundamental representation is equal to $2\pi$. Using the ansatz given in Eq. (\ref{alpha3}), Fig. \ref{23}a shows $\mathcal{G}_r(\alpha^{n})$ obtained from center vortices versus $x$ for a fundamental representation Wilson loop with $R=80$. The legs of the Wilson loop are located at $x = 0$ and $x = 80$. The free parameters $a$ and $b$ are chosen to be $0.05$ and $4$, respectively. The group factor interpolates smoothly from $-1$, when the vortex core is located completely inside the Wilson loop, to $1$, when the core is entirely outside the loop. Figure \ref{g12}a shows $\mathcal{G}_r(\alpha^{n})$ obtained from center vortices versus $x$ for small sizes of the Wilson loop (small $R$).
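This interpolation of Eq. (\ref{group-factor}) can be reproduced with a short Python sketch (the parameter values $a=0.05$, $b=4$, $R=80$ quoted above are used; the probe points are illustrative assumptions):

```python
import math

a, b, R = 0.05, 4.0, 80.0   # parameters quoted for the R = 80 Wilson loop

def alpha1(x):
    """SU(2) fundamental flux profile of Eq. (alpha3)."""
    yx = x - R if abs(R - x) <= abs(x) else -x
    return math.pi * (1.0 - math.tanh(a * yx + b / R))

def G_fund(x):
    """Fundamental group factor: cos(alpha^1 / 2)."""
    return math.cos(alpha1(x) / 2.0)

print(round(G_fund(R / 2), 3))    # vortex core well inside the loop: near -1
print(round(G_fund(-200.0), 3))   # vortex core far outside the loop: near +1
```

The two printed values bracket the smooth interpolation seen in Fig. \ref{23}a.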
For small Wilson loops, center vortices are located only partially inside the loop and the maximum flux enclosed is less than the full center vortex flux. Therefore, the minimum of the group factor of center vortices increases with decreasing size of the Wilson loop and approaches $1$. \subsection{Interaction between the Wilson loop and monopole fluxes} Next, we calculate the group factor of the monopole-antimonopole configurations in the $SU(2)$ gauge group. Using Eq. (\ref{group}), the group factor for the fundamental representation is obtained as follows: \begin{equation} \mathcal{G}_{f}=\cos(\frac{{\alpha}^{0}}{2}). \label{gfactor} \end{equation} According to Eq. (\ref{alpha4}), the maximum value of the angle $\alpha^{0}_{max}$ for the fundamental representation is equal to $4\pi$. Using the ansatz given in Eq. (\ref{alpha5}), Fig. \ref{23}b plots $\mathcal{G}_r(\alpha^{n})$ obtained from the monopole-antimonopole configurations versus $x$ for a Wilson loop of size $R=80$ in the fundamental representation. The Wilson loop legs are located at $x = 0$ and $x = 80$. The free parameters $a$ and $b$ are chosen to be $0.05/F$ and $ 4F$, respectively. When the monopole-antimonopole configuration overlaps the minimal area of the Wilson loop, it affects the loop. For $F=1$, the value of the group factor is $1$ when the core of the monopole-antimonopole configuration is located completely inside or completely outside the Wilson loop. For $F>1$, the thickness of the center vortices is increased and the maximum value of the flux profile $\alpha^{0}$ is less than $4\pi$. Therefore, when the center of the vortex core is located in the middle of the Wilson loop with size $R=80$, $\mathcal{G}_r(\alpha^{0})$ becomes less than $1$. Increasing the size of the Wilson loop, the maximum value of the group factor reaches $1$. When the center of the monopole-antimonopole configuration is placed at $x = 0$ or $x = 80$, half of the maximum flux enters the Wilson loop.
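These limits of Eq. (\ref{gfactor}) can be checked with a brief Python sketch (the $F=1$ parameter set, $a=0.05$, $b=4$, $R=80$, is assumed): the group factor returns to $+1$ for a fully enclosed configuration, but drops to $-1$ when only half of the flux, equivalent to one center vortex, enters the loop.

```python
import math

a, b, R = 0.05, 4.0, 80.0   # F = 1 parameter set

def alpha0(x):
    """Monopole-antimonopole flux profile of Eq. (alpha5), alpha_max = 4*pi."""
    yx = x - R if abs(R - x) <= abs(x) else -x
    return 2.0 * math.pi * (1.0 - math.tanh(a * yx + b / R))

def G_mono(x):
    """Fundamental group factor of Eq. (gfactor): cos(alpha^0 / 2)."""
    return math.cos(alpha0(x) / 2.0)

print(round(G_mono(-200.0), 3))  # configuration far outside the loop: near +1
print(round(G_mono(R / 2), 3))   # fully inside: total flux near 4*pi, again near +1
print(round(G_mono(0.0), 3))     # centered on a leg: half flux (near 2*pi), near -1
```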
The group factor interpolates smoothly from $1$, when the core of the monopole-antimonopole configuration is entirely outside the loop, to $-1$, when half of the core is located inside the Wilson loop. As shown in Fig. \ref{23}b, the results are the same when the free parameters are changed. This behavior of the group factor is similar to the group factor of the center vortex, which changes smoothly between $1$, when the center vortex is located completely outside the Wilson loop, and $-1$, when the center vortex is completely inside the loop. Therefore, half of the monopole-antimonopole flux is equal to the vortex flux, $\it{i.e.}$ the monopole-antimonopole configuration is constructed from two center vortices. Figure \ref{g12}b shows $\mathcal{G}_r(\alpha^{n})$ obtained from the monopole-antimonopole configuration versus $x$ for small sizes of the Wilson loop. For small $R$, monopole-antimonopole configurations are located only partially inside the Wilson loop and the maximum flux is not equal to the total flux of the monopole-antimonopole configuration. Assuming the monopole-antimonopole configuration is constructed from center vortices, we observe that the value of $-1$ for the group factor, corresponding to the total flux of the center vortex, happens when $R$ is equal to $13$. Decreasing the size of the Wilson loop, the minimum of the group factor of the monopole-antimonopole configuration increases and deviates from $-1$. \begin{figure}[h!] \centering a)\includegraphics[width=0.46\columnwidth]{2.eps} b)\includegraphics[width=0.46\columnwidth]{3.eps} \caption{ a) ${\mathrm {Re}}(\mathcal{G}_{r})$ obtained from center vortices versus $x$ for the fundamental representation of the $SU(2)$ gauge group for $R=80$. The free parameters $a$ and $b$ are chosen to be $0.05$ and $4$, respectively. When center vortices are located completely inside the Wilson loop, the value of the group factor is $-1$. b) The same as a), but obtained from the monopole-antimonopole configurations.
The free parameters $a$ and $b$ are chosen to be $ 0.05/F$ and $ 4F$, respectively. When half of the core of the monopole-antimonopole configuration is located inside the Wilson loop (at $x = 0$ or $x = 80$), the flux inside the loop is equivalent to the center vortex flux. Therefore, the fluxes of the center vortices inside the monopole-antimonopole configuration do not overlap. It seems that two similarly oriented vortices repel each other. As shown, by changing the free parameters (for example, varying $F=1$ to $F=2$), the results are the same and half of the core of the monopole-antimonopole configuration is equivalent to the center vortex flux. By varying $F=1$ to $F=2$, the thickness of the center vortices is increased and becomes larger than the size of the Wilson loop ($R=80$). Therefore, when the center of the vortex core is located in the middle of the Wilson loop ($x=40$), $\mathcal{G}_r(\alpha^{0})$ becomes less than $1$. Increasing the size of the Wilson loop, the maximum value of the group factor reaches $1$. We show in Fig. \ref{4} that by changing $F=1$ to $F=2$ the static potentials are just scaled up. }\label{23} \end{figure} \begin{figure}[h!] \centering a)\includegraphics[width=0.46\columnwidth]{g1.eps} b)\includegraphics[width=0.46\columnwidth]{g2.eps} \caption{a) ${\mathrm {Re}}(\mathcal{G}_{r})$ obtained from center vortices versus $x$ for the fundamental representation of the $SU(2)$ gauge group for small sizes of the Wilson loop (small $R$). The free parameters $a$ and $b$ are chosen to be $0.05$ and $4$, respectively. Decreasing the size of the Wilson loop, the minimum value of the group factor increases and approaches $1$. b) The same as a), but obtained from the monopole-antimonopole configurations. Assuming the monopole-antimonopole configuration is constructed from center vortices, we observe that the value of $-1$ for the group factor, corresponding to the total flux of the center vortex, happens when $R$ is equal to $13$.
Decreasing the size of the Wilson loop, the minimum of the group factor of the monopole-antimonopole configuration increases and deviates from $-1$.}\label{g12} \end{figure} The assumed center vortices inside the monopole-antimonopole configuration affect each other. Since the effect of one half of the monopole flux on the Wilson loop is the same as that of one center vortex, the fluxes of the two vortices constructing the monopole-antimonopole configuration do not overlap. Therefore, it seems that the two center vortices inside the monopole-antimonopole configuration repel each other. In the next subsection, the interaction between these center vortices is investigated in detail. Before that, we study another approach, explained in Ref. \cite{Deldar2014}, for obtaining the relation between center vortex and monopole fluxes. Using a fractional flux of a monopole, the flux of a center vortex is constructed in the $SU(2)$ gauge group. Substituting $\mathcal{H}_3$ in terms of the magnetic charge of Eq. (\ref{gpie}) into Eq. (\ref{alpha2}), with ${\alpha}^{1}_{max}=2\pi$, we get \cite{Deldar2014} \begin{equation} \label{cent} \exp\left[i2\pi \mathcal{H}_3\right]=\exp\left[-ie\frac{g}{2}\right]=z_1 I. \end{equation} According to Eq. (\ref{ce}), $g$ is equal to the total magnetic flux of a monopole. Therefore, the effect of a center vortex on the Wilson loop is the same as the effect of an Abelian configuration corresponding to half of the matrix flux $g$. Now, we obtain the flux of this Abelian configuration and compare it with the flux of a center vortex, which is equal to ${\Phi}_v=\pi$ \cite{Ambjorn2000}. The contribution of this Abelian configuration to the Wilson loop is \begin{equation} \label{c8} W=\mathcal{G}_f=\frac{1}{d_f}\mathrm{Tr}\left(\exp\left[-ie\frac{g}{2}\right]\right)=\frac{1}{2}\mathrm{Tr}\left(\begin{array}{cc} e^{-i\pi} & 0 \\0 &e^{i\pi} \end{array} \right)=e^{i\pi}. \end{equation} Comparing Eq.
(\ref{c8}) with the contribution of an Abelian field configuration to the Wilson loop, which is $W=e^{iq{\Phi}}$ ($q$ denotes the electric charge in units of $e$; $q=1$ for the fundamental representation) \cite{Chernodub2005}, the flux of this Abelian configuration is equal to $\pi$. Therefore, the flux of this Abelian configuration, corresponding to half of the magnetic charge $g$, acts on the Wilson loop the same as one center vortex. In the next subsection, the interaction between the center vortices inside the monopole-antimonopole configuration is studied. \subsection{Monopole-vortex chains} In the previous sections, we have shown that the flux between a monopole-antimonopole pair is constructed from the fluxes of two vortices. To understand the interaction between the two center vortices inside the monopole-antimonopole configuration, we study the potentials induced by the center vortices and the monopole-antimonopole configurations using the ``center vortex model". Using Eqs. (\ref{potential}) and (\ref{potential3}), Fig. \ref{4} shows the static potential of the fundamental representation at intermediate distances induced by monopole-antimonopole configurations, compared with the one induced by the center vortices. The potential energy induced by monopole-antimonopole configurations is larger than twice the potential energy induced by the center vortices. The free parameters $a$, $b$ and $f_n (n=0,1)$ are chosen to be $ 0.05/F$, $ 4F$ and $0.1$, respectively. As shown in Fig. \ref{4}, the results do not change by varying the factor $F$ related to the free parameters. Using two center vortices in the model without any interaction, the potential at small distances is equal to the one obtained with a single center vortex of twice the original thickness. We recall that increasing the thickness of the center vortex core would increase the energy of the center vortex and therefore the energy of the vacuum which is made of these vortices.
As a result, the potential energy between the static quark-antiquark pair increases. This is shown in Fig. \ref{4}. On the other hand, if there were no interaction between the vortices of the monopole-antimonopole pair, the induced potential at small distances would be expected to equal the potential induced by the two non-interacting vortices. However, as shown in Fig. \ref{4}, the induced potentials are not equal. The extra energy obtained for the induced potential between the quark-antiquark pair using monopole-antimonopole configurations can be interpreted as a repulsion energy between the two center vortices constructing the configuration. Therefore, two vortices with the same flux orientations inside the monopole-antimonopole configuration repel each other. The interaction between the constituent vortices of the monopole-antimonopole pair can be observed with small- or intermediate-size Wilson loops. For large enough Wilson loops, the two center vortices constructing the monopole-antimonopole configuration are located completely inside the Wilson loop. Thus, the effect of the two center vortices ($z^2$) on large Wilson loops is trivial ($z^2=I$). The flat potential at large distances in Fig. \ref{chain} shows this trivial behavior. Therefore, the interaction between the two vortices cannot be observed for $R$ greater than the vortex core size. The vortex core size is about $20$ with the free parameters we used in the model. We recall that the lengths are dimensionless in the model. Thus, the interaction between the vortices of the monopole-antimonopole configuration is visible for $R$ less than $20$. Using the monopole-antimonopole configuration of the vacuum, we only show that there is a repulsion between the two center vortices within the monopole-antimonopole configuration and that they construct a monopole-vortex chain. However, these monopole-vortex chains should be observed in $3$ dimensions.
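The comparison described above can be sketched numerically in Python (assumed parameters $a=0.05$, $b=4$, $f=0.1$, $F=1$; $R=10$ is an illustrative distance below the vortex core size of about $20$), using Eqs. (\ref{potential}) and (\ref{potential3}) with the fundamental group factor $\cos(\alpha/2)$ for both structures:

```python
import math

a, b, f = 0.05, 4.0, 0.1    # free parameters and piercing probability (F = 1)

def profile(x, R, alpha_max):
    """Flux profile ansatz: alpha_max/2 * [1 - tanh(a*y(x) + b/R)]."""
    yx = x - R if abs(R - x) <= abs(x) else -x
    return 0.5 * alpha_max * (1.0 - math.tanh(a * yx + b / R))

def V(R, alpha_max, cutoff=2000):
    """Fundamental-representation potential; piercings at x = n + 1/2."""
    return -sum(math.log(1.0 - f * (1.0 - math.cos(profile(n + 0.5, R, alpha_max) / 2)))
                for n in range(-cutoff, cutoff))

R = 10.0
v_vor = V(R, 2 * math.pi)   # one center vortex type, alpha_max = 2*pi
v_mon = V(R, 4 * math.pi)   # monopole-antimonopole configuration, alpha_max = 4*pi
print(v_mon > 2 * v_vor)    # extra energy: repulsion between the two vortices
```

At this sketch level, the monopole-antimonopole potential indeed exceeds twice the single-vortex potential; the excess is the energy interpreted above as a repulsion between the two constituent vortices.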
In the model, since the Wilson loop is a rectangular $R\times T$ loop in the $x-t$ plane, it typically intersects one of the legs of the chain at a time. Averaging over many random piercings of the Wilson loop by these legs leads to confinement. In fact, the repulsion deforms the localized flux, and only one of the vortices would intersect the Wilson loop, as confirmed by the chain models \cite{Del Debbio1998,Cornwall1998,Chernodub2009,Chernodub2005,Reinhardt}. These cases are shown in Fig. \ref{chain}. In Ref. \cite{Reinhardt}, Reinhardt $\it{et~al.}$ explained that the monopole-antimonopole flux splits into two equal portions of center vortex flux, as shown in Fig. \ref{Dirac}. A Wilson loop which intersects one of these center vortices leads to confinement for the static sources. In addition, the dual superconductor picture of quark confinement was proposed by Nambu in the 1970s \cite{Nambu1974}. Ginzburg-Landau theory defines two parameters: the superconducting coherence length $\displaystyle \xi$ and the London magnetic field penetration depth $\displaystyle \lambda$. As an interesting possibility, the repulsion of the two center vortices may indicate a type-II superconductor behavior of the QCD vacuum, that is, that the Ginzburg-Landau parameter $\kappa=\lambda /\xi$ of the QCD vacuum is larger than $1/\sqrt{2}$. We would like to mention that the interaction between vortices has also been studied with the domain model (the modified thick center vortex model) in Ref. \cite{HD2015}. In that article, based on ``energetics'', we have shown that two vortices with the same flux orientations inside $(z_1)^2$ vacuum domains repel each other, while two vortices with opposite flux orientations inside $z_1z^*_1$ vacuum domains attract each other. The group factor analysis of $(z_1)^2$ and $z_1z^*_1$ vacuum domains agrees with the present article.
Since two similarly oriented vortices inside a $(z_1)^2$ vacuum domain repel each other, we conclude that they do not form a stable configuration, and one should consider each of them as a single vortex in the model. On the other hand, since two vortices with opposite orientations inside a $z_1z^*_1$ vacuum domain attract each other, we conclude that they do form a stable configuration. Adding the contribution of the $z_1z^*_1$ vacuum domain to the potential obtained from center vortices increases the length of the Casimir scaling regime \cite{HD2015}. The results of this paper are in agreement with our previous paper. To summarize, in this article we obtain a monopole-vortex chain. The magnetic flux coming from a monopole inside the chain is squeezed into vortices of finite thickness, and a non-orientable closed loop is formed. Non-orientable means that the two vortex lines inside the loop carry magnetic fluxes of different orientations. Figure \ref{5} schematically shows the interaction between center vortices inside the monopole-antimonopole configuration. \begin{figure}[h!] \centering a)\includegraphics[width=0.46\columnwidth]{41.eps} b)\includegraphics[width=0.46\columnwidth]{42.eps} \caption{a) The potential energy $V_{mon}(R)$ induced by monopole-antimonopole configurations and twice the potential $V_{vor}(R)$ obtained from the center vortices. The free parameters $a$ and $b$ are chosen to be $0.05/F$ and $4F$ with $F=1$, and the probability $f_n$ ($n=0,1$) is chosen to be $0.1$. The potential ratio $V_{mon}(R)/2V_{vor}(R)$ is about $1.5$. The extra positive energy of the static potential induced by monopole-antimonopole configurations, compared with twice the static potential obtained from center vortices, shows that two similarly oriented center vortices inside the monopole-antimonopole configuration repel each other and form a monopole-vortex chain. b) The same as a), but for $F=2$.
The potential ratios $V_{mon}(R)/2V_{vor}(R)$, obtained from a) and b), are the same within the errors. Therefore, varying the free parameters does not change the physical results.}\label{4} \end{figure} \begin{figure} \begin{center} \resizebox{0.46\textwidth}{!}{ \includegraphics{chain.eps}} \caption{\label{chain} The potential energy induced by the monopole-vortex chains. The free parameters $a$, $b$ and $f_n$ are chosen to be $0.05$, $4$ and $0.1$. If a monopole-vortex chain intersects the large Wilson loop at two points, the static sources are screened at large distances. On the other hand, if only one leg of the monopole-vortex chain intersects the large Wilson loop, confinement is observed for the fundamental representation. } \end{center} \end{figure} \begin{figure}[h!] \centering a)\includegraphics[width=0.26\columnwidth]{Dirac1.eps} b)\includegraphics[width=0.26\columnwidth]{Dirac2.eps} \caption{ a) The monopole-antimonopole pair in the $SU(2)$ gauge group. These configurations contribute unity to the Wilson loop $C$. b) Assuming that the magnetic flux of the monopole-antimonopole configuration is split into two center vortex fluxes, one leg of this chain contributes $-1$ to the Wilson loop $C$. Therefore, these chains lead to confinement of the static sources \cite{Reinhardt}. }\label{Dirac} \end{figure} \begin{figure} \begin{center} \resizebox{0.05\textwidth}{!}{ \includegraphics{5.eps}} \caption{\label{5} A schematic view of a monopole-vortex chain obtained from the monopole-antimonopole configuration. Center vortices of the monopole-antimonopole configuration repel each other and form a monopole-vortex chain. The arrows on the vortex lines show the direction of the magnetic field of the vortex.} \end{center} \end{figure} Our understanding of the monopole-antimonopole flux and the monopole-vortex chain is also in agreement with other research on this topic, as follows.
According to Monte Carlo simulations, after Abelian projection almost all monopoles sit on top of the vortices \cite{Del Debbio1998,Ambjorn2000}, as shown in Fig. \ref{6}. Therefore, upon Abelian projection a center vortex would appear in the form of monopole-vortex chains. Indeed, Abelian monopoles and center vortices are correlated with each other. Figure \ref{7}a shows some monopole-vortex chains in the $SU(2)$ gauge group \cite{Ambjorn2000}. \begin{figure} \begin{center} \resizebox{0.5\textwidth}{!}{ \includegraphics{6.eps}} \caption{\label{6} Monopoles pierced by P-vortices. Almost all monopoles (about 93\%) are pierced by one P-vortex (middle panel). Only very small fractions of monopoles either are not pierced at all (about 3\%) (left panel), or are pierced by more than one line (about 4\%) (right panel) \cite{Del Debbio1998}.} \end{center} \end{figure} \begin{figure}[h!] \centering a)\includegraphics[width=0.36\columnwidth]{71.eps} b)\includegraphics[width=0.1\columnwidth]{72.eps} \caption{a) Some monopole-vortex chains in the $SU(2)$ gauge group, shown in ref. \cite{Ambjorn2000}. b) The monopole-vortex chain shown in ref. \cite{Cornwall1998}. The monopole-vortex chain obtained in this article agrees with the results of lattice gauge theory and chain models.}\label{7} \end{figure} In addition, the monopole-vortex junctions called nexuses are studied in ref. \cite{Cornwall1977}. In ref. \cite{Cornwall1998}, solutions to the equations of motion obtained from the low-energy effective energy functional $E$ of QCD are studied: several thick vortices meet at a monopole-like center (nexus), with finite action and non-singular field strengths. In the $SU(N)$ gauge group, each nexus is the source of $N$ center vortices. Figure \ref{7}b shows the monopole-vortex chain obtained by Cornwall for the $SU(2)$ gauge group \cite{Cornwall1998}. In ref. \cite{Chernodub2009}, examples of monopole-vortex chains are also plotted using the method of ref. \cite{Cornwall1998}.
Therefore, the monopole-vortex chain in the vacuum obtained from the model agrees with the results of lattice gauge theory and chain models. \section{Conclusion} \label{sec:conclusion} The formation of the monopole-vortex chains which are observed in lattice simulations is studied in a model. This model is similar to the thick center vortex model, but instead of center vortices we use monopole-antimonopole configurations, which are line-like, the same as center vortices. Comparing the group factors of monopole-antimonopole configurations and center vortices, we observe that the flux of the monopole-antimonopole configuration is built from two center vortex fluxes. Calculating the quark-antiquark potential induced by two non-interacting vortices and by monopole-antimonopole configurations and comparing the plots, we observe that the potential energy induced by the monopole-antimonopole configurations is larger than twice the one induced by the two non-interacting center vortices. The extra positive energy is interpreted as the repulsive energy between the vortices inside the monopole-antimonopole configuration. The resulting monopole-vortex chains agree with the lattice calculations and phenomenological models. In general, these monopole-vortex chains should be observed in $3$ dimensions. In the model, the Wilson loop, which is a rectangular $R\times T$ loop in the $x-t$ plane, typically intersects only one of the legs of the chain at a time. Many random piercings of the Wilson loop by these legs, averaged over, lead to confinement. \section{\boldmath Acknowledgments} We are grateful to the Iran National Science Foundation (INSF) and the research council of the University of Tehran for supporting this study.
\section{Introduction} The Cuntz-Krieger algebras were introduced by J.~Cuntz and W.~Krieger in 1980, \cite{cuntzkrieger}, as $C^{*}$-algebras arising from dynamical systems. This class of $C^{*}$-algebras has since shown up in several contexts, including the classification program, as the Cuntz-Krieger algebras with finitely many ideals are examples of non-simple purely infinite $C^{*}$-algebras. It has been known since M.~Enomoto and Y.~Watatani introduced graph algebras in 1980 in \cite{ew} that Cuntz-Krieger algebras are the graph algebras arising from finite graphs with no sinks and no sources (see also \cite{mrs}), but no characterization in terms of outer properties has been established for the Cuntz-Krieger algebras. We show in Theorem~\ref{t:phantom-ck-algebras} that the Cuntz-Krieger algebras are the graph algebras arising from finite graphs with no sinks, and conclude that a graph algebra is a Cuntz-Krieger algebra if and only if it is unital and the rank of its $K_{0}$-group equals the rank of its $K_{1}$-group. Using this, we show in Theorem~\ref{t:morita-ck-algebras} that if a unital $C^{*}$-algebra is stably isomorphic to a Cuntz-Krieger algebra, then it is isomorphic to a Cuntz-Krieger algebra. As a corollary to Theorem~\ref{t:morita-ck-algebras}, we see that corners of Cuntz-Krieger algebras are Cuntz-Krieger algebras, see Corollary~\ref{cor}. It is quite surprising that the class of Cuntz-Krieger algebras has this permanence property, since the larger class of graph algebras does not (the graph algebra $\mathsf M_{2^{\infty}}\otimes\mathbb K$ provides a counterexample). Moreover, this shows that corners of Cuntz-Krieger algebras are semiprojective, Corollary~\ref{c:sem}, as Cuntz-Krieger algebras are semiprojective. Our results also show that a unital corner of a stabilized Cuntz-Krieger algebra is semiprojective, since a stabilized Cuntz-Krieger algebra is semiprojective.
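The $K$-theory criterion quoted above can be made concrete. For a finite graph $E$ with no sinks and vertex adjacency matrix ${\sf A}_E$, the standard formulas give $K_0(C^*(E))\cong\operatorname{coker}(I-{\sf A}_E^t)$ and $K_1(C^*(E))\cong\ker(I-{\sf A}_E^t)$, so both ranks equal $|E^0|-\operatorname{rank}(I-{\sf A}_E^t)$ and in particular always agree. The following sketch (our own illustration, not code from the paper) computes the two ranks from an adjacency matrix:

```python
from fractions import Fraction

def rank(mat):
    """Rank over Q of an integer matrix, by Gaussian elimination with Fractions."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def k_theory_ranks(A):
    """For a finite graph with no sinks and vertex adjacency matrix A,
    K0 = coker(I - A^t) and K1 = ker(I - A^t); the rank of the cokernel
    and the rank of the kernel of a square integer matrix both equal
    n - rank(I - A^t), so the two ranks returned always coincide."""
    n = len(A)
    B = [[(1 if i == j else 0) - A[j][i] for j in range(n)] for i in range(n)]
    r = rank(B)
    return (n - r, n - r)
```

For instance, a single vertex with one loop (whose graph algebra is $C(\mathbb T)$) gives ranks $(1,1)$, while a single vertex with two loops (the Cuntz algebra $\mathcal O_2$) gives $(0,0)$.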
It was conjectured by B.~Blackadar in \cite[Conjecture~4.4]{blackadar-semiprojective} that a full corner of a semiprojective $C^{*}$-algebra is semiprojective. He showed in \cite[Proposition~2.7]{blackadar-semiprojective} that a full unital corner of a semiprojective $C^{*}$-algebra is semiprojective. Recently, S.~Eilers and T.~Katsura showed in~\cite{ek_graph} that a corner of a unital graph $C^{*}$-algebra that is semiprojective is also semiprojective. Corollary~\ref{c:sem} is a special case of their results since every Cuntz-Krieger algebra is isomorphic to a unital semiprojective graph $C^{*}$-algebra. Semiprojectivity is easy in our case since the graphs are finite. Thus we do not need any results from \cite{ek_graph}. \section{Definitions and preliminaries} \begin{definition} \label{def:graph} Let $E = (E^0,E^1,s_{E},r_{E})$ be a countable directed graph. A Cuntz-Krieger $E$-family is a set of mutually orthogonal projections $\{ p_v \mid v \in E^0 \}$ and a set $\{ s_e \mid e \in E^1 \}$ of partial isometries satisfying the following conditions: \begin{itemize} \item[(CK0)] $s_e^* s_f = 0$ if $e,f \in E^1$ and $e \neq f$, \item[(CK1)] $s_e^* s_e = p_{r_{E}(e)}$ for all $e \in E^1$, \item[(CK2)] $s_e s_e^* \leq p_{s_{E}(e)}$ for all $e \in E^1$, and, \item[(CK3)] $p_v = \sum_{e \in s_{E}^{-1}(v)} s_e s_e^*$ for all $v \in E^0$ with $0 < |s_{E}^{-1}(v)| < \infty$. \end{itemize} The \emph{graph algebra} $C^*(E)$ is defined as the universal $C^*$-algebra given by these generators and relations. \end{definition} \begin{definition} Let $E$ be a directed graph, and let $v\in E^0$ be a vertex in $E$. The vertex $v$ is called \emph{regular} if $s_E^{-1}(v)$ is finite and nonempty. If $s_E^{-1}(v)$ is empty, $v$ is called a \emph{sink}, and if $r_E^{-1}(v)$ is empty, $v$ is called a \emph{source}. If $s_E^{-1}(v)$ is infinite, $v$ is called an \emph{infinite emitter}. 
\end{definition} \begin{definition}\label{d:ckalgebras} A graph $C^{*}$-algebra of a finite graph with no sinks and no sources is called a \emph{Cuntz-Krieger algebra}. \end{definition} J.~Cuntz and W.~Krieger originally defined a Cuntz-Krieger algebra as the universal $C^*$-algebra determined by a collection of partial isometries satisfying relations determined by a finite matrix with entries in $\{0, 1 \}$. It follows from \cite[Section~4]{KPRR97} that the class of Cuntz-Krieger algebras coincides with the class of graph $C^*$-algebras of finite graphs with no sinks or sources, and moreover, if $E$ is a finite graph with no sinks or sources, $C^*(E)$ coincides with the Cuntz-Krieger algebra associated with the edge matrix of $E$. In their study of Cuntz-Krieger algebras, Cuntz and Krieger often imposed Condition (I) on their matrices, which is equivalent to imposing Condition (L) on the graph. In work after Cuntz and Krieger, particularly in \cite{aHR97}, it was shown that Condition (I) was not necessary, and that loops without exits would produce ideals in the associated $C^{*}$-algebra that are Morita equivalent to $C( \mathbb{T})$. In this paper, we do not assume our matrices satisfy Condition (I). We thus obtain results for $C^{*}$-algebras without real rank zero, as well as results where the $C^{*}$-algebras have uncountably many ideals or are commutative. We use the language of graph $C^{*}$-algebras in order to provide nice combinatorial models of Cuntz-Krieger algebras. This motivates us to define a Cuntz-Krieger algebra as in Definition~\ref{d:ckalgebras}. \begin{definition}\label{d:hereditary} Let $E$ be a directed graph. A path $\alpha=e_1e_2\cdots e_n$ in $E$ with $r_E(\alpha) := r_{E}( e_{n} ) = s_{E} ( e_{1} ) =: s_E(\alpha)$ is called a \emph{cycle}. A cycle $\alpha=e_1e_2\cdots e_n$ is called \emph{vertex-simple} if $s_{E} ( e_{i} ) \neq s_{E} ( e_{j} )$ for all $i \neq j$. We refer to $s_E(\alpha)$ as the \emph{base point} of the cycle $\alpha$.
In particular, an edge $e$ in $E$ with $s_E(e)=r_E(e)$ is called a \emph{cycle of length one with base point $s_E(e)$}. \end{definition} \begin{definition} Let $E$ be a directed graph. For vertices $v,w$ in $E$, we write $v\geq w$ if there is a path in $E$ from $v$ to $w$, i.e., a path $\alpha$ in $E$ with $s_E(\alpha)=v$ and $r_E(\alpha)=w$. Let $S$ be a subset of $E^{0}$. We write $v \geq S$ if there exists $u \in S$ such that $v \geq u$. Let $H$ be a subset of $E^{0}$. The subset $H$ is called \emph{hereditary} if for all $v\in H$ and $w\in E^0$, $v\geq w$ implies $w\in H$, and $H$ is called \emph{saturated} if $r_E(s_E^{-1}(v))\subseteq H$ implies $v\in H$ for all regular vertices $v$ in $E$. For a hereditary subset $H$ in $E^0$, we let $I_H$ denote the ideal in $C^*(E)$ generated by $\{ p_{v} \mid v \in H \}$. \end{definition} \begin{definition} Let $E$ be a countable directed graph. Let $\gamma$ denote the gauge action on $C^*(E)$, i.e., the action $\gamma$ of the circle group $\mathbb T$ on $C^*(E)$ for which $\gamma_z(s_e)=zs_e$ and $\gamma_z(p_v)=p_v$ for all $z\in\mathbb T$, $e\in E^1$, and $v\in E^0$. An ideal $I$ in $C^*(E)$ is called \emph{gauge invariant} if $\gamma_z(I)\subseteq I$ for all $z\in\mathbb T$. \end{definition} When $E$ is a row-finite graph, the map $H\mapsto I_H$ defines a lattice isomorphism between the saturated hereditary subsets in $E^0$ and the gauge invariant ideals in $C^*(E)$, see~\cite[Theorem~4.1]{BPRS00}. \begin{definition} Let $E$ and $F$ be directed graphs. A \emph{graph homomorphism} $f\colon E\to F$ consists of two maps $f^0\colon E^0\to F^0$ and $f^1\colon E^1\to F^1$ satisfying $r_F \circ f^1=f^0 \circ r_E$ and $s_F \circ f^1=f^0 \circ s_E$. A graph homomorphism $f\colon E\to F$ is called a \emph{CK-morphism} if $f^0$ and $f^1$ are injective and $f^1$ restricts to a bijection from $s_E^{-1}(v)$ onto $s_F^{-1}(f^0(v))$ for all regular vertices $v$ in $E$.
If $E$ is a subgraph of $F$, we call it a \emph{CK-subgraph} if the inclusion $E\to F$ is a CK-morphism. \end{definition} The definition of a CK-morphism between arbitrary graphs was introduced by K.~R.~Goodearl in \cite{goodearl}. Let \textbf{CKGr} be the category whose objects are arbitrary directed graphs and whose morphisms are CK-morphisms. Goodearl showed that there is a functor $L_{K}$ from the category \textbf{CKGr} to the category of algebras over a field $K$. The functor $L_{K}$ assigns to an object $E$ the Leavitt path algebra $L_{K} (E)$. Goodearl also proved in \cite[Corollary~3.3]{goodearl} that for every CK-morphism $\phi$, the $K$-algebra homomorphism $L_{K} ( \phi )$ is injective. We now prove the analog of \cite[Corollary~3.3]{goodearl} where the category of $K$-algebras is replaced by the category of $C^{*}$-algebras and the functor assigns to an object $E$ the graph $C^{*}$-algebra $C^{*} (E)$. \begin{lemma} \label{l:ck-subgraph} Let $E$ and $F$ be countable directed graphs, and let $f\colon E\to F$ be a CK-morphism. Let $\{p_v, s_e \mid v\in E^0, e\in E^1 \}$ be a universal Cuntz-Krieger $E$-family generating $C^*(E)$, and let $\{q_v, t_e \mid v\in F^0, e\in F^1 \}$ be a universal Cuntz-Krieger $F$-family generating $C^*(F)$. Then the assignments $p_v\mapsto q_{f^0(v)}$ and $s_e\mapsto t_{f^1(e)}$ induce an injective $*$-homomorphism $\phi\colon C^*(E)\to C^*(F)$ with image equal to the subalgebra of $C^{*} (F)$ generated by $\{ q_v, t_e \mid v,s_F(e)\in f^0(E^0) \}$. \end{lemma} \begin{proof} Using the fact that $f$ is a CK-morphism, one can verify that $\{q_{f^0(v)}, t_{f^1(e)} \mid v\in E^0, e\in E^1 \}$ is a Cuntz-Krieger $E$-family in $C^*(F)$. The universal property of $C^{*} (E)$ now implies that the $*$-homomorphism $\phi$ exists.
Since $\phi$ intertwines the canonical gauge actions on $C^{*} (E)$ and $C^{*} (F)$ and since $\phi ( p_{v} ) = q_{ f^{0}(v) } \neq 0$ for all $v \in E^{0}$, the gauge invariant uniqueness theorem implies that $\phi$ is injective. Since $f$ is a CK-morphism, the sets $f^1(E^1)$ and $\{e\in F^1 \mid s_F(e)\in f^0(E^0)\}$ coincide. It now follows that $\phi ( C^{*} (E) )$ is equal to the subalgebra of $C^{*} (F)$ generated by $\{ q_v, t_e \mid v,s_F(e)\in f^0(E^0) \}$. \end{proof} \section{Graph $C^{*}$-algebras over finite graphs with no sinks} \begin{assumption} Throughout the rest of the paper, unless stated otherwise, all graphs will be countable and directed. \end{assumption} \begin{definition} Let $E$ be a graph, let $v_0 \in E^0$ be a vertex, and let $n$ be a positive integer. Define a graph $E(v_0,n)$ as follows: \begin{align*} E(v_0,n)^{0} &= E^{0} \cup \{ v_{1} ,v_{2} , \dots, v_{n} \} \\ E(v_0,n)^1 &= E^{1} \cup \{ e_{1} , e_{2}, \dots, e_{n} \} \end{align*} where $r_{E(v_0,n)}$ and $s_{E(v_0,n)}$ extend $r_{E}$ and $s_{E}$ respectively, and $r_{E(v_0,n)} ( e_{i} ) = v_{i-1}$ and $s_{E(v_0,n)} ( e_{i} ) = v_{i}$ for all $i = 1, \dots, n$. \end{definition} \begin{definition} Let $E$ be a graph, let $e_0\in E^1$ be an edge, and let $n$ be a positive integer. Define a graph $E(e_0,n)$ as follows: \begin{align*} E(e_0,n)^{0} &= E^{0} \cup \{ v_{1} , v_{2} , \dots, v_{n} \} \\ E(e_0,n)^{1} &= \left( E^{1} \setminus \{ e_{0} \} \right) \cup \{ e_{1} , e_{2} , \dots, e_{n+1} \} \end{align*} where $r_{E(e_0,n)}$ and $s_{E(e_0,n)}$ extend $r_{E}$ and $s_{E}$ respectively, $r_{E(e_0,n)} ( e_{i} ) = v_{i-1}$ for $i=2,\dots, n+1$ and $s_{E(e_0,n)} ( e_{i} ) = v_{i}$ for $i = 1, \dots, n$, and $r_{E(e_0,n)} ( e_{1} ) = r_E(e_0)$ and $s_{E(e_0,n)} ( e_{n+1} ) = s_{E} ( e_{0} )$. \end{definition} \begin{example} \label{example} Let $E$ be the graph \begin{align*} \xymatrix{ v_{0} \ar@(ul, ur)[]^{e_{0}} \ar@(ur,dr)[]^{f} }.
\end{align*} Then $E(v_0,n)$ is the graph \begin{align*} \xymatrix{ v_{n} \ar[r]^{e_{n}} \ar[r] & v_{n-1} \ar[r]^{ e_{n-1}} \ar[r] & \dots \ar[r]^{e_{2}} & v_{1}\ar[r]^{ e_{1} } & v_{0} \ar@(ul, ur)[]^{e_{0}} \ar@(ur,dr)[]^{f} } \end{align*} and $E(e_0,n)$ is the graph \begin{align*} \xymatrix{ & \dots \ar[rd]^{ e_{3} } & \\ v_{n-1} \ar[ru]^{ e_{n-1} } & & v_{2} \ar[d]^{ e_{2} } \\ v_{n} \ar[u]^{e_{n}} & & v_{1} \ar[ld]^{ e_{1} } \\ & v_{0} \ar[ul]^{ e_{n+1} } \ar@(dl,dr)[]_{f} } \end{align*} \end{example} \begin{proposition}\label{p:removing-sources} Let $E$ be a graph, let $e_0\in E^1$ be an edge, and let $n$ be a positive integer. Define $v_{0} = r_{E} ( e_{0} )$. Then $C^{*} (E(v_0,n)) \cong C^{*} (E(e_0,n))$. \end{proposition} \begin{proof} Let $\{ s_{e} , p_{v} \mid e \in E(e_0,n)^{1} , v \in E(e_0,n)^{0} \}$ be a universal Cuntz-Krieger $E(e_0,n)$-family generating $C^{*} ( E(e_0,n) )$. For each $v \in E(v_0,n)^{0}$ and $e \in E(v_0,n)^{1}$ set \begin{align*} Q_{v} &= p_{v} \\ T_{e} &= \begin{cases} s_{e} , &\text{if $e \neq e_{0}$} \\ s_{ e_{n+1} } s_{e_{n}} \cdots s_{ e_{1} }, &\text{if $e = e_{0}$}. \end{cases} \end{align*} We will show that $\{ T_{e} , Q_{v} \mid e \in E(v_0,n)^{1} , v \in E(v_0,n)^{0} \}$ is a Cuntz-Krieger $E(v_0,n)$-family that generates $C^{*} ( E(e_0,n) )$. It is clear that $Q_{v} Q_{w} = 0$ for all $v \neq w$. Let $e , f \in E( v_{0}, n)^{1}$ with $e \neq f$. Then \begin{align*} T_{e}^{*} T_{f} &= \begin{cases} s_{e}^{*} s_{f} , &\text{if $e \neq e_{0}$ and $f \neq e_{0}$} \\ s_{e_{1}}^{*} s_{e_{2}}^{*} \dots s_{ e_{n+1} }^{*} s_{f} , &\text{if $e = e_{0}$} \\ s_{e}^{*} s_{ e_{n+1} } s_{e_{n}} \cdots s_{ e_{1} }, &\text{if $f = e_{0}$} \end{cases} \\ &= 0. \end{align*} The last two cases hold true because $g \neq e_{n+1}$ for all $g \in E(v_0,n)^{1}$. Now let $e \in E(v_0,n)^{1}$. 
Then \begin{align*} T_{e}^{*} T_{e} &= \begin{cases} s_{e}^{*} s_{e}, &\text{if $e \neq e_{0}$} \\ s_{e_{1}}^{*} s_{e_{2}}^{*} \dots s_{ e_{n+1} }^{*} s_{ e_{n+1} } s_{e_{n}} \cdots s_{ e_{1} }, &\text{if $e = e_{0}$} \end{cases} \\ &= \begin{cases} p_{ r_{E(e_0,n) } ( e ) }, &\text{if $e \neq e_{0}$} \\ p_{ r_{E(e_0,n) } ( e_{1} ) }, &\text{if $e = e_{0}$} \end{cases} \\ &= p_{ r_{ E(v_0,n) } ( e ) } \\ &= Q_{ r_{E(v_0,n)} ( e ) } \end{align*} and \begin{align*} T_{e} T_{e}^{*} &= \begin{cases} s_{e}s_{e}^{*}, &\text{if $e \neq e_{0}$} \\ s_{ e_{n+1} } s_{e_{n}} \cdots s_{ e_{1} } s_{e_{1}}^{*} s_{e_{2}}^{*} \dots s_{ e_{n+1} }^{*} , &\text{if $e = e_{0}$} \end{cases} \\ &\leq \begin{cases} p_{ s_{E(e_0,n) } ( e ) }, &\text{if $e \neq e_{0}$} \\ p_{ s_{E(e_0,n) } ( e_{n+1} ) }, &\text{if $e = e_{0}$} \end{cases} \\ &= \begin{cases} p_{ s_{E(v_0,n) } ( e ) }, &\text{if $e \neq e_{0}$} \\ p_{ s_{E} ( e_{0} ) }, &\text{if $e = e_{0}$} \end{cases} \\ &= Q_{ s_{E(v_0,n)} ( e ) }. \end{align*} Let $v \in E(v_0,n)^{0}$ be a regular vertex. Note that $v$ is a regular vertex in $E(e_0,n)$. Suppose $v = v_{i}$ for some $i = 1, \dots, n$. Then $s_{E(e_0,n)}^{-1} ( v_{i} ) = \{ e_{i} \} = s_{E(v_0,n)}^{-1} ( v_{i} ) $. Hence, \begin{align*} Q_{v} = Q_{v_{i} } = p_{v_{i}} = s_{e_{i}} s_{e_{i}}^{*} = T_{e_{i}} T_{e_{i}}^{*}. \end{align*} Suppose $v \neq v_{i}$ for $i = 1, \dots, n$. We break this into two cases. Suppose $e_{n+1} \notin s_{E(e_0,n)}^{-1} ( v )$. Then $v \neq s_{ E } ( e_{0} )$. Since $v \neq v_{i}$ for $i = 1, \dots, n$ and $v \neq s_{E} ( e_{0} )$, we have that $s_{ E(v_0,n) }^{-1} ( v ) \cap \{ e_{0} , e_{1} , \dots, e_{n} \} = \emptyset$ and $s_{ E(e_0,n) }^{-1} ( v ) \cap \{ e_{1} , \dots, e_{n}, e_{n+1} \} = \emptyset$. Thus, \begin{align*} s_{ E(e_0,n) }^{-1} ( v ) = s_{ E }^{-1} ( v ) = s_{ E(v_0,n)}^{-1} ( v ) . 
\end{align*} Hence, \begin{align*} Q_{v} = p_{v} = \sum_{ e \in s_{ E(e_0,n) }^{-1} ( v ) } s_{e} s_{e}^{*} = \sum_{ e \in s_{ E(v_0,n) }^{-1} ( v ) } s_{e} s_{e}^{*} = \sum_{ e \in s_{ E(v_0,n) }^{-1} ( v ) } T_{e} T_{e}^{*}. \end{align*} Suppose $e_{n+1} \in s_{E(e_0,n)}^{-1} ( v )$. Then $v= s_{E(e_0,n)} ( e_{n+1} )= s_{E} ( e_{0} )$, which implies that $e_{0} \in s_{E(v_0,n)}^{-1} ( v )$. Note that $s_{e_{i}} s_{e_{i}}^{*} = p_{ v_{i} }$ for all $i = 1, 2, \dots, n$. Thus, \begin{align*} Q_{v} &= p_{v} = \sum_{ e \in s_{E(e_0,n)}^{-1} ( v ) } s_{e} s_{e}^{*} \\ &= \sum_{ e \in s_{E}^{-1} ( v )\setminus \{e_{0}\} } s_{e} s_{e}^{*} + s_{ e_{n+1} } s_{ e_{n+1} }^{*} \\ &= \sum_{ e \in s_{E}^{-1} ( v )\setminus \{e_{0}\} } s_{e} s_{e}^{*} + s_{ e_{n+1} } p_{v_{n}} s_{ e_{n+1} }^{*} \\ &= \sum_{ e \in s_{E}^{-1} ( v )\setminus \{e_{0}\} } s_{e} s_{e}^{*} + s_{ e_{n+1} } s_{ e_{n} } s_{e_{n} }^{*} s_{ e_{n+1} }^{*} \\ &\ \vdots \\ &= \sum_{ e \in s_{E}^{-1} ( v )\setminus \{e_{0}\} } s_{e} s_{e}^{*} + s_{ e_{n+1} } s_{ e_{n} } \cdots s_{e_{1} } s_{e_{1} }^{*} \dots s_{ e_{n} } ^{*} s_{ e_{n+1} }^{*} \\ &= \sum_{ e \in s_{E}^{-1} ( v )\setminus \{e_{0}\} } T_{e} T_{e}^{*} + T_{e_{0}} T_{e_{0}}^{*} \\ &= \sum_{ e \in s_{E(v_0,n) }^{-1} ( v ) } T_{e} T_{e}^{*}. \end{align*} We have just shown that $\{ T_{e} , Q_{v} \mid e \in E(v_0,n)^{1} , v \in E(v_0,n)^{0} \}$ is a Cuntz-Krieger $E(v_0,n)$-family. Suppose $\{ t_{e} , q_{v} \mid e \in E(v_0,n)^{1} , v \in E(v_0,n)^{0} \}$ is a universal Cuntz-Krieger $E(v_0,n)$-family generating $C^{*} ( E(v_0,n) )$. Then there exists a $*$-homomorphism $\psi\colon C^{*} ( E(v_0,n) ) \to C^{*} ( E(e_0,n) ) $ such that \begin{align*} \psi ( q_{v} ) &= Q_{v} \\ \psi ( t_{e} ) &= T_{e} \end{align*} for all $e \in E(v_0,n)^{1}$ and $v \in E(v_0,n)^{0}$. Note that the only generator of $C^{*} ( E(e_0,n) )$ that is not included in \begin{align*} \{ T_{e} , Q_{v} \mid e \in E(v_0,n)^{1} , v \in E(v_0,n)^{0} \} \end{align*} is $s_{e_{n+1}}$. 
In this case, recall again that \begin{align*} p_{v_{i}} = s_{e_{i}} s_{ e_{i} }^{*} \end{align*} for all $i = 1, 2, \dots, n$. Therefore, \begin{align*} T_{ e_{0} } T_{ e_{1} }^{*} \dots T_{ e_{n}}^{*} &= s_{ e_{n+1} } s_{ e_{n}} \dots s_{e_{1}}s_{ e_{1} }^{*} \dots s_{ e_{n}}^{*} \\ &= s_{ e_{n+1} } s_{ e_{n}} \dots s_{ e_{2} } p_{v_{1}} s_{e_{2}}^{*}\dots s_{ e_{n}}^{*} \\ &= s_{ e_{n+1} } s_{ e_{n}} \dots s_{ e_{2} } p_{ r_{ E(e_0,n) } ( e_{2} ) } s_{e_{2}}^{*}\dots s_{ e_{n}}^{*} \\ &= s_{ e_{n+1} } s_{ e_{n}} \dots s_{ e_{2} } s_{e_{2}}^{*}\dots s_{ e_{n}}^{*} \\ &\ \vdots \\ &= s_{ e_{n+1} } s_{ e_{n} } s_{e_{n}}^{*} \\ &= s_{ e_{n+1} } p_{v_{n}} \\ &= s_{ e_{n+1} } p_{ r_{E(e_0,n) }( e_{n+1} ) } \\ &= s_{ e_{n+1}}. \end{align*} Hence, $s_{e_{n+1}} \in \psi ( C^{*} ( E(v_0,n) ) )$, which implies that $\psi$ is surjective. Note that the cycle structure of $E(v_0,n)$ is determined by the cycle structure of $E$ and vice versa. Moreover, the cycles of $E(v_0,n)$ with no exits are in one-to-one correspondence with the cycles of $E(e_0,n)$ with no exits. Let $\alpha = f_{1} f_{2} \cdots f_{m}$ be a vertex-simple cycle in $E(v_0,n)$ with no exits. Suppose $s_{E(v_0,n)} ( f_{i} ) \neq s_{E(v_0,n)} ( e_{0} )$ for all $i$. Then $\alpha$ is a vertex-simple cycle in $E(e_0,n)$ with no exits. Thus, $s_{ \alpha }$ is a unitary in $C^{*}(E(e_0,n))$ with spectrum $\mathbb{T}$. Hence, \begin{align*} \psi ( t_{\alpha} ) = s_{ \alpha }, \end{align*} which implies that $\psi ( t_{ \alpha } )$ is a unitary in $C^{*} (E(e_0,n))$ with spectrum $\mathbb{T}$. Suppose $s_{E(v_0,n)} ( f_{i} ) = s_{E(v_0,n)} ( e_{0} )$ for some $i$. Then, after a cyclic relabeling, $\alpha = e_{0} f_{2} \cdots f_{m}$, since $\alpha$ is a vertex-simple cycle in $E(v_0,n)$ with no exits. Note that \begin{align*} \psi ( t_{ \alpha } ) = s_{ e_{n+1}} s_{ e_{n}} \cdots s_{ e_{1}} s_{f_{2}} \cdots s_{ f_{m}} = s_{ \beta } \end{align*} and $\beta = e_{n+1} e_{n} \cdots e_{1} f_{2} \cdots f_{m}$ is a vertex-simple cycle in $E(e_0,n)$ with no exits.
Hence, $\psi ( t_{ \alpha } ) = s_{ \beta }$ is a unitary in $C^{*} ( E(e_0,n) )$ with spectrum $\mathbb{T}$. From the above paragraph and the fact that $\psi ( q_{v} ) = p_{v} \neq 0$ for all $v \in E(v_0,n)^{0}$, by Theorem~1.2 of \cite{ws-general-ck-uniquness}, $\psi$ is injective. Therefore, $\psi$ is an isomorphism. \end{proof} \begin{remark} Proposition~\ref{p:removing-sources} allows one to remove heads of finite length while preserving isomorphism classes. \end{remark} \begin{definition} Let $E$ be a graph and let $H$ be a hereditary subset of $E^{0}$. Consider the set \begin{align*} F( H ) = \{ \alpha \in E^{*} \mid \alpha = e_{1} e_{2} \dots e_{n} , s_{E} ( e_{n} ) \notin H , r_{E} ( e_{n} ) \in H \}. \end{align*} Let $\overline{F} (H)$ be another copy of $F (H)$, and write $\overline{\alpha}$ for the copy of $\alpha$ in $\overline F (H)$. Define a graph $E(H)$ as follows: \begin{align*} E(H)^{0} &= H \cup F(H) \\ E(H)^{1} &= s_{E}^{-1} (H) \cup \overline{F} (H) \end{align*} and extend $s_{E}$ and $r_{E}$ to $E(H)$ by defining $s_{E(H)} ( \overline{\alpha} ) = \alpha$ and $r_{ E(H) } ( \overline{ \alpha } ) = r_{E} ( \alpha )$. \end{definition} Note that $E(H)$ is just the graph $( H , s_{E}^{-1}(H), s_{E}, r_{E} )$ together with a source for each $\alpha \in F(H)$ with exactly one edge from $\alpha$ to $r_{E}( \alpha )$. \begin{example} Suppose $E$ is the graph \begin{align*} \xymatrix{ v_{2} \ar[r]^{e_{2}} & v_{1}\ar[r]^{ e_{1} } & v_{0} \ar@(ul, ur)[]^{e_{0}} \ar@(ur,dr)[]^{f} } \end{align*} and $H = \{ v_{0} \}$. Then $F(\{ v_{0} \} ) = \{ e_{1} , e_{2} e_{1} \}$. Therefore, the graph \begin{align*} \xymatrix{ e_{1} \ar[rd]^-{ \overline{e_{1}} } & \\ & v_{0} \ar@(ul, ur)[]^{e_{0}} \ar@(ur,dr)[]^{f} \\ e_{2} e_{1} \ar[ru]_-{ \overline{ e_{2} e_{1} } } & } \end{align*} represents the graph $E( \{ v_{0} \} )$. \end{example} \begin{theorem}\label{t:adding-sources} Let $E$ be a graph and let $H$ be a hereditary subset of $E^{0}$.
Suppose \begin{align*} ( E^{0} \setminus H , r_{E}^{-1} ( E^{0} \setminus H ) , s_{E} , r_{E} ) \end{align*} is a finite acyclic graph and $v \geq H$ for all $v \in E^{0} \setminus H$. Assume furthermore that the set $s_{E}^{-1}(E^0\setminus H)\cap r_{E}^{-1}(H)$ is finite. Then $C^{*} (E) \cong C^{*} ( E(H) )$. \end{theorem} \begin{proof} Let $\{ s_e, p_v \mid e \in E^1, v \in E^0 \}$ be a universal Cuntz-Krieger $E$-family generating $C^*(E)$. For $v \in E(H)^0$ define $$Q_v := \begin{cases} p_v & \text{ if $v \in H$} \\ s_\alpha s_\alpha^* & \text{ if $v = \alpha \in F(H)$} \\ \end{cases}$$ and for $e \in E(H)^1$ define $$T_e := \begin{cases} s_e & \text{ if $e \in s_E^{-1}(H)$} \\ s_\alpha & \text{ if $e = \overline{\alpha} \in \overline{F}(H)$} .\\ \end{cases}$$ We shall show that $\{ T_e, Q_v \mid e \in E(H)^1, v \in E(H)^0\}$ is a Cuntz-Krieger $E(H)$-family in $C^{*} (E)$. To begin, we see that the $Q_v$ are mutually orthogonal projections and the $T_e$ are partial isometries with mutually orthogonal ranges. (The orthogonality follows from the fact that an element in $F(H)$ cannot extend an element in $F (H )$ and the fact that $s_{ E } ( \alpha ) \notin H$ for all $\alpha \in F(H)$.) To see that the Cuntz-Krieger relations hold, we consider cases for $e \in E(H)^1$. If $e \in s_{E}^{-1} (H)$, then $r_{E}(e) \in H$ and $$T_e^* T_e = s_e^*s_e = p_{r_{E}(e)} = Q_{r_{E(H)}(e)}.$$ If $e = \overline{\alpha} \in \overline{F}(H)$, then $r_{E}(\alpha) \in H$ and $$T_e^* T_e = T_{\overline{\alpha}}^* T_{\overline{\alpha}} = s_\alpha^*s_\alpha = p_{r_{E}(\alpha)} = Q_{r_{E}(\alpha)} = Q_{r_{E(H)}(\overline{\alpha})} = Q_{r_{ E(H) }(e)}.$$ For the second Cuntz-Krieger relation, we again let $e \in E(H)^1$ and consider cases.
If $e \in s_{E}^{-1} (H)$, then $$Q_{s_{E(H)}(e)} T_e = p_{s_{E}(e)} s_e = s_e = T_e.$$ If $e = \overline{\alpha} \in \overline{F}(H)$, then $$Q_{s_{E(H)}(e)} T_e = Q_\alpha T_{\overline{\alpha}} = s_\alpha s_\alpha^* s_\alpha = s_\alpha = T_{\overline{\alpha}} = T_e.$$ Thus $Q_{s_{E(H)}(e)} T_e = T_e$ for all $e \in E(H)^1$, so that $T_e T_e^* \leq Q_{s_{E(H)}(e)}$ for all $e \in E(H)^1$, and the second Cuntz-Krieger relation holds. For the third Cuntz-Krieger relation, suppose that $v \in E(H)^0$ and that $v$ is regular. If $v \in H$, then the set of edges that $v$ emits in $E(H)$ is equal to the set of edges that $v$ emits in $E$, and hence $$Q_v = p_v = \sum_{ \{e \in E^1 \mid s_{E}(e) = v \} } s_es_e^* = \sum_{ \{e \in E(H)^{1} \mid s_{E(H)}(e) = v\} } T_eT_e^*.$$ If $v \in F(H)$, then $v = \alpha$ with $r_{E}(\alpha) \in H$, and the element $\overline{\alpha}$ is the unique edge in $E(H)^1$ with source $v$, so that $$Q_v = s_\alpha s_\alpha^* = T_{\overline{\alpha}} T_{\overline{\alpha}}^*.$$ Thus the third Cuntz-Krieger relation holds, and \begin{align*} \{ T_e, Q_v \mid e \in E(H)^1, v \in E(H)^0\} \end{align*} is a Cuntz-Krieger $E(H)$-family in $C^{*} (E)$. If $\{ q_v, t_e \mid v \in E(H)^0, e \in E(H)^1 \}$ is a universal Cuntz-Krieger $E(H)$-family generating $C^*(E(H))$, then by the universal property of $C^*(E(H))$ there exists a $*$-homomorphism $\phi : C^*(E(H)) \to C^{*}(E)$ with $\phi (q_v) = Q_v$ for all $v \in E(H)^{0}$ and $\phi(t_e) = T_e$ for all $e \in E(H)^1$. We shall show injectivity of $\phi$ by applying the generalized Cuntz-Krieger uniqueness theorem of \cite{ws-general-ck-uniquness}. To verify the hypotheses, we first see that if $v \in E(H)^0$, then $\phi( q_v) = Q_v \neq 0$.
Second, if $e_1 \ldots e_n$ is a vertex-simple cycle in $E(H)$ with no exits, then since the cycles in $E(H)$ come from cycles in $E$ all lying in the subgraph given by \begin{align*} ( H , s_{E}^{-1} (H) , r_{E} , s_{E} ), \end{align*} we must have that $e_i \in E^1$ for all $1 \leq i \leq n$, and $e_1 \ldots e_n$ is a cycle in $E$ with no exits. Thus $\phi(t_{e_1 \ldots e_n}) = \phi(t_{e_1}) \ldots \phi(t_{e_n}) = s_{e_1} \ldots s_{e_n} = s_{e_1 \ldots e_n}$ is a unitary whose spectrum is the entire circle. It follows from the generalized Cuntz-Krieger uniqueness theorem, stated in Theorem~1.2 of \cite{ws-general-ck-uniquness}, that $\phi$ is injective. We now show that $\phi$ is surjective. Let $e \in E^{1}$ be such that $r_{E} ( e ) \in H$. If $s_{E} (e) \in H$, then \begin{align*} s_{e} = T_{{e}} = \phi ( t_{{e} } ) \in \mathrm{im} ( \phi ). \end{align*} Suppose $s_{E} ( e ) \notin H$. Then $e \in F(H)$, and hence \begin{align*} s_{e} = T_{\overline e} = \phi ( t_{\overline{e} } ) \in \mathrm{im} ( \phi ). \end{align*} We now show that $s_e\in\operatorname{im }(\phi)$ for all $e\in r_E^{-1}(E^0\setminus H)$. By assumption, $v \geq H$ for all $v\in E^0\setminus H$. For each $k\geq 1$, define the subset $D_k$ of $E^0\setminus H$ as the set of vertices $v$ for which $k$ is the maximal length of a path from $v$ to $H$. Put $D_0=H$, and note that for $k\geq 1$, all vertices in $D_k$ are regular. By induction on $k\geq 1$ we will show for every path $\alpha$ in $E$ with $r_E(\alpha)\in D_k$ that $s_\alpha\in\operatorname{im }(\phi)$. For $k=1$ and $\alpha$ a path in $E$ with $r_E(\alpha)\in D_1$, we note that $r_E(e)\in H$ for all $e\in s_E^{-1}(r_E(\alpha))$.
Hence \begin{align*} s_\alpha &= s_\alpha p_{r_E(\alpha)} = \sum_{e\in s_E^{-1}(r_E(\alpha))}s_\alpha s_es_e^* \\ &= \sum_{e\in s_E^{-1}(r_E(\alpha))}T_{\overline{\alpha e}}T_{\overline e}^* \\ &= \sum_{e\in s_E^{-1}(r_E(\alpha))}\phi(t_{\overline{\alpha e}}t_{\overline e}^*) \in\operatorname{im }(\phi) \end{align*} since $\alpha e,e\in F(H)$. For $k>1$ and $\alpha$ a path in $E$ with $r_E(\alpha)\in D_k$, we note that for all $e\in s_E^{-1}(r_E(\alpha))$ there is a $j<k$ for which $r_E(\alpha e)=r_E(e)\in D_j$. Hence \[ s_\alpha = \sum_{e\in s_E^{-1}(r_E(\alpha))}s_{\alpha e}s_e^* \in\operatorname{im }(\phi). \] We have just shown that $s_{e} \in \mathrm{im} ( \phi )$ for all $e \in E^{1}$. We now show that $p_{v} \in \mathrm{im} ( \phi )$ for all $v \in E^{0}$. Note that if $v \in E^{0}$ and $v$ is not a regular vertex, then $v \in H$. Hence, $p_{v} = Q_{v} = \phi ( q_{v} )$. Let $v \in E^{0}$ be a regular vertex. Then for each $e \in s_{E}^{-1} ( v )$, we have that $s_{e} , s_{e}^{*} \in \mathrm{im} ( \phi )$. Therefore, \begin{align*} p_{v} = \sum_{ e \in s_{E}^{-1} ( v ) } s_{e} s_{e}^{*} \in \mathrm{im} ( \phi ). \end{align*} Since $\{ p_{v}, s_{e} \mid v \in E^{0} , e \in E^{1} \} \subseteq \mathrm{im} ( \phi )$ and $\{ p_{v}, s_{e} \mid v \in E^{0} , e \in E^{1} \}$ generates $C^{*} (E)$, we have that $\phi$ is surjective. Therefore, $\phi$ is an isomorphism. \end{proof} \begin{definition} Let $E$ be a graph, let $v_0\in E^0$ be a vertex in $E$, and let $n$ be a positive integer. Define a graph $E'(v_0,n)$ as follows: \begin{align*} E'(v_0,n)^{0} &= E^{0} \cup \{ v_{1} ,v_{2} , \dots, v_{n} \} \\ E'(v_0,n)^{1} &= E^{1} \cup \{ e_{1} , e_{2}, \dots, e_{n} \} \end{align*} where $r_{E'(v_0,n)}$ and $s_{E'(v_0,n)}$ extend $r_{E}$ and $s_{E}$ respectively, and $r_{E'(v_0,n)} ( e_{i} ) = v_{0}$ and $s_{E'(v_0,n)} ( e_{i} ) = v_{i}$ for all $i=1,\dots,n$.
\end{definition} \begin{corollary}\label{c:adding-sources} Let $E$ be a graph, let $v_0\in E^0$ be a vertex, and let $n$ be a positive integer. Then $C^*(E(v_0,n))\cong C^*(E'(v_0,n))$. \end{corollary} \begin{proof} Note that $E^{0}$ is a hereditary subset of $E(v_0,n)^{0}$. Moreover $(E(v_0,n)^{0} \setminus E^{0} , r_{E(v_0,n)}^{-1} (E(v_0,n)^{0} \setminus E^{0} ) , r_{E(v_0,n)} , s_{E(v_0,n)} )$ is a finite acyclic graph and for each $v \in E(v_0,n)^{0} \setminus E^{0}$, there exists a path in $E(v_0,n)$ from $v$ to $E^{0}$. Finally, $s_{E(v_0,n)}^{-1}(E(v_0,n)^0\setminus E^0)\cap r_{E(v_0,n)}^{-1}(E^0)$ is finite. Thus, by Theorem~\ref{t:adding-sources}, $C^{*}(E(v_0,n)) \cong C^{*} (E(v_0,n)( E^{0} ) )$. One can verify that the graph $E(v_0,n)( E^{0} )$ is isomorphic to the graph $E'(v_0,n)$. Thus, $C^{*} (E(v_0,n)( E^{0} ) ) \cong C^{*} (E'(v_0,n))$. \end{proof} \begin{example}\label{e:graphsconstruction} We give an example to illustrate the proof of Corollary~\ref{c:adding-sources}. Consider the graph $E$ of Example~\ref{example}. Then $E(v_0,2)$ is the graph \begin{align*} \xymatrix{ v_{2} \ar[r]^{e_{2}} & v_{1}\ar[r]^{ e_{1} } & v_{0} \ar@(ul, ur)[]^{e_{0}} \ar@(ur,dr)[]^{f} } \end{align*} and thereby $E(v_0,2)(\{v_0\})$ is the graph \begin{align*} \xymatrix{ e_{1} \ar[rd]^-{ \overline{e_{1}} } & \\ & v_{0} \ar@(ul, ur)[]^{e_{0}} \ar@(ur,dr)[]^{f} \\ e_{2} e_{1} \ar[ru]_-{ \overline{ e_{2} e_{1} } } & } \end{align*} which is isomorphic to the graph $E'(v_0,2)$ \begin{align*} \xymatrix{ v_{1} \ar[rd]^-{ e_{1} } & \\ & v_{0} \ar@(ul, ur)[]^{e_{0}} \ar@(ur,dr)[]^{f} \\ v_{2} \ar[ru]_-{ e_{2} } & . } \end{align*} \end{example} \begin{theorem}\label{t:phantom-ck-algebras} Let $E$ be a graph. Then the following are equivalent: \begin{itemize} \item[(1)] $E$ is a finite graph with no sinks. \item[(2)] $C^{*} (E)$ is isomorphic to a Cuntz-Krieger algebra.
\item[(3)] $C^{*} (E)$ is unital and \begin{align*} \mathrm{rank} ( K_{0} ( C^{*} (E) ) ) = \mathrm{rank} ( K_{1} ( C^{*} ( E ) ) ). \end{align*} \end{itemize} \end{theorem} \begin{proof} We first show (1) implies (2). Suppose $E$ is a finite graph with no sinks. Remove the sources from $E$, and remove the vertices that then become sources; repeat this procedure finitely many times to get a subgraph $F$ of $E$ that has no sinks and no sources. Notice that $F^{0}$ is a hereditary subset of $E^{0}$, that \begin{align*} ( E^{0} \setminus F^{0} , r_{E}^{-1} ( E^{0} \setminus F^{0} ) , r_{E} , s_{E} ) \end{align*} is a finite acyclic graph, and that for each $v \in E^{0} \setminus F^{0}$, there exists a path in $E$ from $v$ to $F^{0}$. Therefore, by Theorem~\ref{t:adding-sources}, $C^{*} ( E ) \cong C^{*} ( E( F^{0} ) )$. We can apply Corollary~\ref{c:adding-sources} and Proposition~\ref{p:removing-sources} as many times as needed (but finitely many times) to get a finite graph $E_{1}$ with no sinks and no sources such that $C^{*} ( E( F^{0} ) ) \cong C^{*} (E_{1})$. Note that $C^{*} (E_{1})$ is a Cuntz-Krieger algebra and $C^{*} (E) \cong C^{*} (E_{1})$. We next show (2) implies (3). Suppose $C^{*} (E)$ is isomorphic to a Cuntz-Krieger algebra. Then $C^{*} (E)$ is unital. Moreover, by the $K$-theory computation (Theorem~3.1 of \cite{ddmt_kthygraph}), \begin{align*} \mathrm{rank} ( K_{0} ( C^{*} (E) ) ) = \mathrm{rank} ( K_{1} ( C^{*} ( E ) ) ). \end{align*} We now show (3) implies (1). Suppose $C^{*} (E)$ is unital. Then $E^{0}$ is a finite set. Since \begin{align*} \mathrm{rank} ( K_{0} ( C^{*} (E) ) ) = \mathrm{rank} ( K_{1} ( C^{*} ( E ) ) ), \end{align*} by the $K$-theory computation (Theorem~3.1 of \cite{ddmt_kthygraph}), $E$ has no singular vertices. Hence, $E$ is a finite graph with no sinks. \end{proof} \begin{definition} \label{d:stabilization} Let $E$ be a graph and let $SE$ be the graph obtained by adding an infinite head to every vertex of $E$.
\begin{align*} \xymatrix{ E: \ & & & \tikz \shade[ball color=black] (0,0) circle (1mm);& & \tikz \shade[ball color=black] (0,0) circle (1mm); \\ & & & & \tikz \shade[ball color=black] (0,0) circle (1mm); \ar[ru] \ar[lu] & \\ SE : \ & \cdots \tikz \shade[ball color=black] (0,0) circle (1mm); \ar[r] & \tikz \shade[ball color=black] (0,0) circle (1mm); \ar[r] & \tikz \shade[ball color=black] (0,0) circle (1mm); & & \tikz \shade[ball color=black] (0,0) circle (1mm); & \ar[l] \tikz \shade[ball color=black] (0,0) circle (1mm); & \tikz \shade[ball color=black] (0,0) circle (1mm); \ar[l] \cdots \\ & & & & \tikz \shade[ball color=black] (0,0) circle (1mm); \ar[ru] \ar[lu] & \ar[l] \tikz \shade[ball color=black] (0,0) circle (1mm); & \tikz \shade[ball color=black] (0,0) circle (1mm); \ar[l] \cdots & \\ } \end{align*} We call $SE$ the \emph{stabilization} of $E$. \end{definition} \begin{theorem}\label{t:fullcorners-stablized} Let $E$ be a graph with finitely many vertices and let $T$ be a finite hereditary subset of $(SE)^{0}$ such that $E^{0} \subseteq T$. Set \begin{align*} p_{T} = \sum_{ v \in T } p_{v} \end{align*} where $\{ s_{e} , p_{v} \mid e \in (SE)^{1} , v \in ( SE )^{0} \}$ is a universal Cuntz-Krieger $SE$-family generating $C^{*} (SE)$. Then $p_{T}$ is a full projection in $C^{*} (SE)$ and there exists a subgraph $F$ of $SE$ such that $C^{*} (F) \cong p_{T}C^{*} ( SE )p_{T}$. If in addition $C^{*} (E)$ is a Cuntz-Krieger algebra, then $p_{T} C^{*} ( SE ) p_{T}$ is a Cuntz-Krieger algebra. \end{theorem} \begin{proof} The smallest saturated subset of $(SE)^{0}$ containing $T$ is $(SE)^{0}$. Hence, $p_{T}$ is a full projection. Let $F = ( T , s_{SE}^{-1} (T) , r_{ SE} , s_{SE} )$. We claim that $F$ is a CK-subgraph of $SE$. It is clear that $F$ is a subgraph of $SE$. We will show that $s_{SE}^{-1} ( v ) = s_{F}^{-1} ( v )$ for all $v \in F^{0}$. Let $v \in F^{0}$. Suppose $v \in E^{0}$. Then \begin{align*} s_{SE}^{-1} ( v ) = s_{E}^{-1} ( v ) = s_{ F }^{-1} ( v ).
\end{align*} Suppose $v \in T \setminus E^0$. Then $s_{SE}^{-1} ( v ) = \{ e \} = s_{F}^{-1} ( v )$ for some $e$. Since $F$ is a CK-subgraph of $SE$, we have by Lemma~\ref{l:ck-subgraph} that $C^{*} (F)$ is isomorphic to the subalgebra of $C^{*} (SE)$ generated by \begin{align*} \{ p_{v} , s_{e} \mid v \in T ,\ s_{SE} ( e ) \in T \}, \end{align*} which we denote by $B$. We claim that $p_{T} C^{*} (SE ) p_{T} = B$. Note that $B$ is unital with unit $p_{T}$. Note that if $e \in s_{SE}^{-1} ( T )$, then $s_{SE} ( e )$ and $r_{SE} ( e )$ are elements of $T$. Therefore, for all $v\in T$ and all $e\in s_{SE}^{-1}(T)$, \begin{align*} p_{v} = p_{T} p_{v} p_{T} \in p_{T} C^{*} (SE ) p_{T} \end{align*} and \begin{align*} s_{e} = p_{s_{SE} (e) } s_{e} p_{r_{SE} (e) } = p_{T} p_{s_{SE} (e) } s_{e} p_{r_{SE} (e) } p_{T} \in p_{T} C^{*} (SE ) p_{T}. \end{align*} Hence, $B$ is a subalgebra of $p_{T} C^{*} ( SE ) p_{T}$. Let $\alpha$ be a finite path in $SE$. Suppose $s_{SE} (\alpha )$ is not an element of $T$. Then \begin{align*} p_{T} p_{ s_{SE} ( \alpha ) } = 0. \end{align*} If $s_{SE} (\alpha ) \in T$, then \begin{align*} p_{T} p_{ s_{SE} ( \alpha ) } = p_{ s_{SE} ( \alpha ) }. \end{align*} From these observations, we get that \begin{align*} p_{T} s_{\alpha} s_{ \beta }^{*} p_{T} &= \begin{cases} s_{ \alpha } s_{ \beta }^{*}, &\text{if $s_{SE} ( \alpha ), s_{SE} ( \beta ) \in T$} \\ 0, &\text{otherwise}. \end{cases} \end{align*} Since $e \in s_{SE}^{-1} ( T )$ implies that $s_{SE} ( e )$ and $r_{SE} ( e )$ are elements of $T$, we have that $\alpha$ is a path in $F$ if $s_{SE} ( \alpha ) \in T$. Therefore, if $s_{SE} ( \alpha ), s_{SE} ( \beta ) \in T$, then $s_{\alpha} , s_{\beta}^{*} \in B$. Hence, \begin{align*} p_{T} s_{\alpha} s_{ \beta }^{*} p_{T} \end{align*} is an element of $B$ for all paths $\alpha$ and $\beta$ in $SE$. We have just shown that $B = p_{T} C^{*} ( SE ) p_{T}$ which implies that $C^{*} (F) \cong B = p_{T} C^{*} ( SE ) p_{T}$.
Assume that $C^*(E)$ is isomorphic to a Cuntz-Krieger algebra. Then by Theorem~\ref{t:phantom-ck-algebras}, the graph $E$ is finite and has no sinks. Since $F$ is a graph obtained from the graph $E$ by adding a finite head to some vertices of $E$, the graph $F$ is also finite and has no sinks. By Theorem~\ref{t:phantom-ck-algebras}, $C^*(F)$ is a Cuntz-Krieger algebra. \end{proof} \section{Unital $C^*$-algebras that are stably isomorphic to Cuntz-Krieger algebras} \begin{definition} For a $C^*$-algebra $A$, and projections $p \in \mathsf M_n(A)$ and $q \in \mathsf M_m(A)$, we say $p$ is \emph{Murray-von Neumann equivalent} to $q$, denoted $p \sim q$, if there exists $v \in \mathsf M_{m,n} (A)$ with $p=v^*v$ and $q = vv^*$. For a projection $p$ in a $C^{*}$-algebra $A$ and $n \in \mathbb{N}$, $n p$ will denote the projection \begin{align*} \underbrace{p \oplus p \oplus \cdots \oplus p}_{ \text{$n$-times} } \in \mathsf{M}_{n} (A). \end{align*} \end{definition} \begin{lemma}\label{l:projects} Let $E$ be a graph and let $\{p_v,s_e \mid v\in E^0, e\in E^1 \}$ denote a universal Cuntz-Krieger $E$-family generating $C^*(E)$. Let $v\in E^0$ and assume that $v$ is a regular vertex. Then \[p_v\sim \sum_{e\in s^{-1}(v)}p_{r_E(e)}.\] \end{lemma} \begin{proof} The result follows directly from the Cuntz-Krieger relations, see Definition~\ref{def:graph}. \end{proof} \begin{lemma} \label{basiclemma} Let $E$ be a row-finite graph and let $\{p_v,s_e \mid v\in E^0, e\in E^1 \}$ denote a universal Cuntz-Krieger $E$-family generating $C^*(E)$. Let $v,w\in E^0$ with $v \neq w$. If there is a path from $v$ to $w$ in $E$, then there exists a family $(m_u(v,w))_{u\in E^0}$ of non-negative integers satisfying \[ p_v\sim p_w + \sum_{u\in E^0} m_u(v,w)p_u \] with all but finitely many $m_u(v,w)$ equal to zero. Moreover, $m_{v} (v,w)$ can be chosen such that \begin{align*} m_{v} ( v, w ) \geq | \{ e \in E^{1} \mid s_{E} (e) = r_{E} (e) = v \} |.
\end{align*} \end{lemma} \begin{proof} Let $e_1\cdots e_n$ denote a path in $E$ from $v$ to $w$, so that $e_1,\ldots,e_n\in E^1$ with $r_E(e_i)=s_E(e_{i+1})$ for all $i\in\{1,\ldots, n-1\}$, $s_E(e_1)=v$, and $r_E(e_n)=w$. Define $v_i=r_E(e_i)$ for $i\in\{1,\ldots,n\}$, and $v_0=v$. Then by Lemma~\ref{l:projects}, \begin{align*} p_v &\sim p_{v_1} + \sum_{e\in s^{-1}(v)\setminus\{e_1\}}p_{r_E(e)} \\ &\sim p_{v_2} + \sum_{e\in s^{-1}(v_1)\setminus\{e_2\}}p_{r_E(e)} + \sum_{e\in s^{-1}(v_0)\setminus\{e_1\}}p_{r_E(e)} \\ &\ \vdots \\ &\sim p_w + \sum_{i=1}^{n} \left( \sum_{e\in s^{-1}(v_{i-1})\setminus\{e_i\}}p_{r_E(e)} \right) . \end{align*} Define $(m_u(v,w))_{u\in E^0}$ as the non-negative integer scalars in the above linear combination of $(p_u)_{u\in E^0}$, i.e., such that \[ \sum_{i=1}^{n} \left( \sum_{e\in s^{-1}(v_{i-1})\setminus\{e_i\}}p_{r_E(e)} \right) = \sum_{u\in E^0} m_u(v,w)p_u . \] This defines $(m_u(v,w))_{u\in E^0}$ for any pair $v,w\in E^0$ for which there is a path from $v$ to $w$. The last statement is clear from the construction of $m_{u} ( v, w )$. \end{proof} \begin{theorem}[Theorem~3.5 of \cite{amp:nonstablekthy}] \label{amp:nonstablekthy} Let $E$ be a row-finite graph and let $\{p_v,s_e \mid v\in E^0, e\in E^1 \}$ denote a universal Cuntz-Krieger $E$-family generating $C^*(E)$. Any projection in $C^{*} (E) \otimes \mathbb{K}$ is Murray-von Neumann equivalent to a projection of the form $\sum_{u\in E^0} m_up_u$ with all but finitely many $m_u$ equal to zero. \end{theorem} \begin{lemma}\label{l:supportloops} Suppose $E$ is a row-finite graph in which every vertex is the base point of at least one cycle of length one. Then every hereditary subset in $E^0$ is saturated. \end{lemma} \begin{proof} Let $H$ be a hereditary subset in $E^{0}$. Since every vertex in $E$ is the base point of at least one cycle of length one, $E$ is a graph with no sinks.
This fact and the fact that $E$ is row-finite imply that every vertex in $E$ is a regular vertex. To show that $H$ is saturated, we must show that $r_{E} ( s_{E}^{-1} ( v ) ) \subseteq H$ implies $v \in H$ for all $v \in E^{0}$. Let $v$ be a vertex in $E$ such that $r_{E} ( s_{E}^{-1} ( v ) ) \subseteq H$. By assumption there exists $e \in s_{E}^{-1} ( v )$ such that $v = r_{E} (e) = s_{E} (e)$. Hence, $v \in r_{E} ( s_{E}^{-1} (v) )$, which implies that $v \in H$. \end{proof} \begin{lemma}\label{l:fullprojections} Let $E$ be a finite graph and let $\{p_v,s_e \mid v\in E^0, e\in E^1 \}$ denote a universal Cuntz-Krieger $E$-family generating $C^*(E)$. Assume that $E$ has no sinks and no sources, and that every vertex of $E$ is a base point of at least one cycle of length one. Let $p$ be a norm-full projection in $C^*(E)\otimes\mathbb K$. Then there exists a family $(m_u)_{u\in E^0}$ of integers satisfying \[ p\sim \sum_{u\in E^0} m_up_u \] and $m_u\geq 1$ for all $u\in E^0$. \end{lemma} \begin{proof} By Theorem~\ref{amp:nonstablekthy}, there exists a family $(n_u)_{u\in E^0}$ of non-negative integers satisfying \[ p\sim\sum_{u\in E^0}n_up_u .\] Set $S_{0} = \{ u \in E^{0} \mid n_{u} \neq 0 \}$ and let $H_{0}$ be the smallest hereditary subset of $E^{0}$ that contains $S_{0}$. By Lemma~\ref{l:supportloops}, $H_{0}$ is saturated. Set $q = \sum_{ v \in H_{0} } p_{v} \in I_{H_{0}}$. Note that the ideal generated by $q \otimes e_{11}$ is equal to the ideal generated by $\sum_{u\in E^0}n_up_u$, where $\{ e_{ij} \}_{i,j}$ is a system of matrix units for $\mathbb{K}$. Since $p\sim\sum_{u\in E^0}n_up_u$, we have that the ideal generated by $q \otimes e_{11}$ is equal to the ideal generated by $p$. Thus, $q \otimes e_{11}$ is a norm-full projection in $C^{*} (E) \otimes \mathbb{K}$, which implies that $q$ is a norm-full projection in $C^{*} (E)$. Hence, $I_{H_{0}} = C^{*} (E)$, which implies that $H_{0} = E^{0}$.
Therefore, for every $w \in E^{0}$, there exists $v \in S_0$ such that $v \geq w$. Set $E^{0} \setminus S_{0} = \{ w_{0} , w_{1} , \dots, w_{m} \}$. Let $v \in S_0$ be such that $v \geq w_{0}$. By Lemma~\ref{basiclemma}, \begin{align*} p_v\sim p_{w_{0}} + \sum_{u\in E^0} m_u(v,w_{0})p_u \end{align*} where $m_{u} ( v, w_{0} ) \geq 0$ and \begin{align*} m_{v} ( v, w_{0} ) \geq | \{ e \in E^{1} \mid s_{E} (e) = r_{E} (e) = v \} | \geq 1. \end{align*} Therefore, \begin{align*} p \sim \sum_{ u \in E^{0} } n_{u}' p_{u} \end{align*} where $n_{u}' \geq 0$ for all $u \in E^{0}$. Moreover, \begin{align*} S_{0} \subsetneq \{ u \in E^{0} \mid n_{u} ' \neq 0 \} = S_{1} \end{align*} since $n_{w_{0}}' \neq 0$ but $n_{w_{0}} = 0$. Therefore, $| E^{0} \setminus S_{1} | < | E^{0} \setminus S_{0} |$. Let $H_{1}$ be the smallest hereditary subset of $E^{0}$ that contains $S_{1}$. Note that $E^{0} = H_{0} \subseteq H_{1} \subseteq E^{0}$. Hence, $H_{1} = E^{0}$, and so for each $w \in E^{0}$, there exists a $v \in S_{1}$ such that $v \geq w$. Therefore, we may continue this process to get a family $(m_u)_{u\in E^0}$ of non-negative integers satisfying \[ p\sim \sum_{u\in E^0} m_up_u \] and $m_u\geq 1$ for all $u\in E^0$. \end{proof} \begin{proposition}\label{p:full-corner-CK-algebras} Let $E$ be a finite graph with no sinks and no sources, and assume that every vertex of $E$ is a base point of at least one cycle of length one. Let $p$ be a norm-full projection in $C^{*} (E) \otimes \mathbb{K}$. Then there exists a finite graph $F$ that has no sinks and no sources such that $C^*(F)\cong p ( C^{*} (E) \otimes \mathbb{K} ) p$. \end{proposition} \begin{proof} Let $SE$ be the stabilization of $E$, as defined in Definition~\ref{d:stabilization}. Let $\{ e_{ij} \}_{ i ,j}$ be a system of matrix units for $\mathbb{K}$.
By Proposition~9.8 of \cite{gamt:isomorita} and its proof, there exists an isomorphism $\phi \colon C^{*} (E) \otimes \mathbb{K} \rightarrow C^{*} ( SE ) $ such that \begin{align*} K_{0} ( \phi ) ( [ p_{v}\otimes e_{11} ] ) = [ p_{v} ] \end{align*} for all $v \in E^{0}$. Let $p$ be a norm-full projection in $C^{*} (E) \otimes \mathbb{K}$. By Lemma~\ref{l:fullprojections}, $p$ is Murray-von Neumann equivalent to $\sum_{u\in E^0} m_up_u$ with $m_u\geq 1$ for all $u\in E^0$. Therefore, since $C^*(SE)$ has weak cancellation by Corollary~7.2 of~\cite{amp:nonstablekthy}, $\phi(p)$ is Murray-von Neumann equivalent to $p_{T} \in C^{*} (SE)$, where $T$ is a finite hereditary subset of $(SE)^{0}$ with $E^{0} \subseteq T$. By Theorems~\ref{t:fullcorners-stablized} and~\ref{t:phantom-ck-algebras}, $p_{T} C^{*} (SE) p_{T} \cong C^{*} (F)$ for some finite graph $F$ with no sinks and no sources. Note that $p ( C^{*} (E) \otimes \mathbb{K} ) p \cong \phi ( p ) C^{*} (SE) \phi ( p ) \cong p_{T} C^{*} (SE) p_{T}$. Therefore, $p ( C^{*} (E) \otimes \mathbb{K} ) p \cong C^{*} (F)$. \end{proof} The following theorem answers a question asked by George A.~Elliott at the NordForsk Closing Conference in the Faroe Islands, May 2012. \begin{theorem}\label{t:morita-ck-algebras} Let $A$ be a unital $C^*$-algebra. \begin{itemize} \item[(1)] If $A$ is stably isomorphic to a Cuntz-Krieger algebra, then $A$ is isomorphic to a Cuntz-Krieger algebra. \item[(2)] Let $A$ be a unital, nuclear, separable $C^{*}$-algebra with finitely many ideals and let $X = \mathrm{Prim} ( A )$. If $A \otimes \mathcal{O}_{\infty}$ is $KK_{X}$-equivalent to a Cuntz-Krieger algebra with real rank zero and primitive ideal space $X$, then $A \otimes \mathcal{O}_{\infty}$ is isomorphic to a Cuntz-Krieger algebra of real rank zero. \end{itemize} \end{theorem} \begin{proof} We first prove (1). Let $B$ be a Cuntz-Krieger algebra such that $A \otimes \mathbb{K} \cong B \otimes \mathbb{K}$.
Note that $B = C^{*} ( F )$, where $F$ is a finite graph with no sinks and no sources. By Theorem~5.2 of \cite{as:geometric-class}, collapsing a regular vertex that is not a base point of a cycle of length one preserves stable isomorphism classes. Therefore, since $F$ is a finite graph with no sinks and no sources, we can apply Theorem~5.2 of \cite{as:geometric-class} a finite number of times to get a finite graph $E$ with no sinks and no sources in which every vertex is a base point of at least one cycle of length one, such that $C^{*} (F) \otimes \mathbb{K} \cong C^{*} ( E ) \otimes \mathbb{K}$. Hence, $A \otimes \mathbb{K} \cong C^{*} (E) \otimes \mathbb{K}$. Let $\phi\colon A \otimes \mathbb{K} \to C^{*} (E) \otimes \mathbb{K}$ be an isomorphism. Let $\{ e_{ij} \}_{ i ,j }$ be a system of matrix units for $\mathbb{K}$. Since $1_{A} \otimes e_{11}$ is a norm-full projection in $A \otimes \mathbb{K}$, $p = \phi ( 1_{A} \otimes e_{11} )$ is a norm-full projection in $C^{*} (E) \otimes \mathbb{K}$. By Proposition~\ref{p:full-corner-CK-algebras}, $p ( C^{*} (E) \otimes \mathbb{K} ) p$ is isomorphic to a Cuntz-Krieger algebra. Note $( 1_{A} \otimes e_{11} )( A \otimes \mathbb{K} ) ( 1_{A} \otimes e_{11} ) \cong p ( C^{*} (E) \otimes \mathbb{K} ) p$ and $A \cong ( 1_{A} \otimes e_{11} )( A \otimes \mathbb{K} ) ( 1_{A} \otimes e_{11} )$. Therefore, $A$ is isomorphic to a Cuntz-Krieger algebra. \medskip We will now use (1) to prove (2). Let $B$ be a Cuntz-Krieger algebra with real rank zero such that $A \otimes \mathcal{O}_{\infty}$ is $KK_{X}$-equivalent to $B$ and $\mathrm{Prim}(B)\cong X$. By Folgerung 4.3 of \cite{kirchberg}, $A \otimes \mathcal{O}_{\infty} \otimes \mathbb{K} \cong B \otimes \mathbb{K}$. Therefore, $A \otimes \mathcal{O}_{\infty}$ is a unital $C^{*}$-algebra stably isomorphic to a Cuntz-Krieger algebra with real rank zero. By (1), we have that $A \otimes \mathcal{O}_{\infty}$ is isomorphic to a Cuntz-Krieger algebra.
Since $A \otimes \mathcal{O}_{\infty}$ is stably isomorphic to a $C^{*}$-algebra with real rank zero, $A \otimes \mathcal{O}_{\infty}$ has real rank zero. Therefore, $A \otimes \mathcal{O}_{\infty}$ is isomorphic to a Cuntz-Krieger algebra with real rank zero. \end{proof} \begin{corollary} \label{cor:matrices} Let $A$ be a $C^{*}$-algebra. Then the following are equivalent. \begin{itemize} \item[(1)] $A$ is a Cuntz-Krieger algebra. \item[(2)] $\mathsf M_{n}(A)$ is a Cuntz-Krieger algebra for all $n\in\mathbb{N}$. \item[(3)] $\mathsf{M}_{n} (A)$ is a Cuntz-Krieger algebra for some $n \in \mathbb{N}$. \end{itemize} \end{corollary} \begin{proof} (1) implies (2) follows from Theorem~\ref{t:morita-ck-algebras}. (2) implies (3) is obvious. Suppose $\mathsf{M}_{n} (A)$ is a Cuntz-Krieger algebra for some $n \in \mathbb{N}$. In particular, $\mathsf{M}_{n} (A)$ is a unital $C^{*}$-algebra with $1_{\mathsf{M}_{n} (A)} = [ x_{ij} ]$. A computation shows that $x_{11}$ is a multiplicative identity for $A$. Therefore, $A$ is a unital $C^{*}$-algebra. Since $A \otimes \mathbb{K} \cong \mathsf{M}_{n} ( A ) \otimes \mathbb{K}$ and since $\mathsf{M}_{n} (A)$ is a Cuntz-Krieger algebra, by Theorem~\ref{t:morita-ck-algebras}, $A$ is a Cuntz-Krieger algebra. \end{proof} \begin{corollary} \label{cor} Let $A$ be a Cuntz-Krieger algebra. \begin{itemize} \item[(1)] If $p$ is a nonzero projection in $A$, then $p A p$ is isomorphic to a Cuntz-Krieger algebra. \item[(2)] If $p$ is a nonzero projection in $A \otimes \mathbb{K}$, then $p( A \otimes \mathbb{K} ) p$ is isomorphic to a Cuntz-Krieger algebra. \end{itemize} \end{corollary} \begin{proof} We first prove (1) in the case when $p$ is a norm-full projection. Suppose $p$ is a norm-full projection. By Corollary~2.6 of \cite{lb-her-algs}, $pAp \otimes \mathbb{K} \cong A \otimes \mathbb{K}$. Therefore, $pAp$ is a unital $C^{*}$-algebra that is stably isomorphic to a Cuntz-Krieger algebra.
By Theorem~\ref{t:morita-ck-algebras}, $p A p$ is isomorphic to a Cuntz-Krieger algebra. We now prove the general case in (1). Let $A = C^{*} (E)$ where $E$ is a finite graph with no sinks and no sources. Let $p$ be a nonzero projection of $A$. Set \begin{align*} I = \text{ the ideal in $C^{*} (E)$ generated by $p$}. \end{align*} Note that $p A p \subseteq I$ which implies that $p A p \subseteq p I p$. Since $p I p \subseteq p A p$, we have that $p A p = p I p$. Thus, $p I p$ is a norm-full hereditary subalgebra of $I$. By Corollary~2.6 of \cite{lb-her-algs}, $p I p \otimes \mathbb{K} \cong I \otimes \mathbb{K}$. Since $I$ is generated by the projection $p$, by Theorem~7.3 and the proof of Theorem~5.3 of \cite{amp:nonstablekthy}, $I$ is a gauge-invariant ideal of $C^{*} (E)$. Thus, by Theorem~3.7 of \cite{bhrs:iccig}, there exists a hereditary saturated subset $H$ of $E^{0}$ such that $I_{H} = I$; see Definition~\ref{d:hereditary}. Let $E_{H} = ( H , s_{E}^{-1} ( H ), r_{E} , s_{E} )$. By Proposition~3.4 of \cite{bhrs:iccig}, $I_{H} \otimes \mathbb{K} \cong C^{*} ( E_{H} ) \otimes \mathbb{K}$. Note that $E_{H}$ is a finite graph with no sinks. By Proposition~3.1 of \cite{as:geometric-class}, we may continue to remove sources to obtain a finite graph $F$ with no sinks and no sources such that $C^{*} ( E_{H} ) \otimes \mathbb{K} \cong C^{*} (F ) \otimes \mathbb{K}$. Hence, $C^{*} (F)$ is a Cuntz-Krieger algebra and $p A p = p I p$ is a unital $C^{*}$-algebra that is stably isomorphic to $C^{*} (F)$. By Theorem~\ref{t:morita-ck-algebras}, $p A p$ is isomorphic to a Cuntz-Krieger algebra. We now prove (2). Let $p$ be a nonzero projection in $A \otimes \mathbb{K}$. Recall that $A = C^{*} (E)$, where $E$ is a finite graph with no sinks and no sources.
By Theorem~\ref{amp:nonstablekthy}, there exists a non-empty subset $S$ of $E^{0}$ and a collection of positive integers $\{ m_{v} \}_{v \in S}$ such that $p$ is Murray-von Neumann equivalent to $\sum_{ v \in S } m_{v} p_{v}$. Set $q = \sum_{ v \in S } p_{v}$. Then $q$ is a nonzero projection in $A$ and by (1), we have that $q A q \cong C^{*} (F)$ for some finite graph $F$ with no sinks and no sources. By Theorem~5.3 of \cite{amp:nonstablekthy}, $p$ and $q \otimes e_{11}$ generate the same ideal of $A \otimes \mathbb{K}$. Hence, $q A q \otimes \mathbb{K} \cong ( q \otimes e_{11} )( A \otimes \mathbb{K} )( q \otimes e_{11} ) \otimes \mathbb{K} \cong p ( A \otimes \mathbb{K} ) p \otimes \mathbb{K}$. Therefore, $p ( A \otimes \mathbb{K} ) p$ is stably isomorphic to a Cuntz-Krieger algebra. By Theorem~\ref{t:morita-ck-algebras}, $p ( A \otimes \mathbb{K} ) p$ is isomorphic to a Cuntz-Krieger algebra. \end{proof} \begin{corollary} \label{c:sem} Let $A$ be a Cuntz-Krieger algebra. If $p$ is a projection in $A \otimes \mathbb{K}$, then $p( A \otimes \mathbb{K} ) p$ is semiprojective. If $p$ is a projection in $A$, then $p A p$ is semiprojective. \end{corollary} \begin{proof} This follows from Corollary~\ref{cor} since by Corollary~2.24 of~\cite{blackadar} all Cuntz-Krieger algebras are semiprojective and by Corollary~2.29 of~\cite{blackadar} all stabilized Cuntz-Krieger algebras are semiprojective. \end{proof} \section{Acknowledgements} The authors are grateful to George A.~Elliott for asking such inspiring questions. The authors also wish to thank S{\o}ren Eilers, Adam S{\o}rensen, and Mark Tomforde for helpful conversations that have led to the improvement of our results. The second named author is grateful to S{\o}ren Eilers and the Department of Mathematical Sciences at the University of Copenhagen for providing the dynamic research environment where this work was initiated during the Spring of 2012.
This research was supported by the Danish National Research Foundation through the Centre for Symmetry and Deformation (DNRF92) at the University of Copenhagen, and by the NordForsk research network ``Operator Algebras and Dynamics'' (grant \#11580).
\section{Attentive Pooling Networks for Answer Selection} \label{ap_networks} \emph{Attentive pooling} is an approach that enables the pooling layer to be aware of the current input pair, so that information from the question $q$ can directly influence the computation of the answer representation $r^a$, and vice versa. The main idea consists of learning a similarity measure over the projected segments in the input pairs, and using the similarity scores between the segments to compute attention vectors. When AP is applied to CNN, which we call AP-CNN, the network learns the similarity measure over the convolved input sequences. When AP is applied to biLSTM, which we call AP-biLSTM, the network learns the similarity measure over the hidden states produced by the biLSTM when processing the two input sequences. We use a similarity measure that has a bilinear form followed by a non-linearity. \begin{figure*}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.52\textwidth]{apcnn_bilstm}} \caption{Attentive Pooling Networks for Answer Selection.} \label{apcnn} \end{center} \vskip -0.2in \end{figure*} In Fig. \ref{apcnn}, we illustrate the application of AP over the output of the convolution or the biLSTM to construct the representations $r^q$ and $r^a$. Consider the input pair ($q$, $a$) where the question has size $M$ and the answer has size $L$\footnote{In Fig. \ref{apcnn}, $q$ has a size of five and $a$ has a size of seven.}. After we compute the matrices $Q \in \mathbb{R}^{c \times M}$ and $A \in \mathbb{R}^{c \times L}$, either by convolution or biLSTM, we compute the matrix $G \in \mathbb{R}^{M \times L}$ as follows: \begin{equation} \label{computing_g} G = \tanh \left( Q^{T}UA \right) \end{equation} where $U \in \mathbb{R}^{c \times c}$ is a matrix of parameters to be learned by the NN.
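As a concrete illustration of Eq.~(\ref{computing_g}), the following NumPy sketch computes the soft-alignment matrix $G$ for toy inputs. This is our own minimal reimplementation, not the authors' Theano code; the sizes $c=4$, $M=5$, $L=7$, the random parameters, and the function name are assumptions for illustration only.

```python
import numpy as np

def soft_alignment(Q, U, A):
    # Eq. (1): G = tanh(Q^T U A)
    # Q: (c, M) question representation, A: (c, L) answer representation,
    # U: (c, c) learned parameter matrix -> G: (M, L)
    return np.tanh(Q.T @ U @ A)

rng = np.random.default_rng(0)
c, M, L = 4, 5, 7  # toy sizes matching Fig. 1: 5 question / 7 answer positions
Q = rng.standard_normal((c, M))
A = rng.standard_normal((c, L))
U = rng.standard_normal((c, c))

G = soft_alignment(Q, U, A)
print(G.shape)  # (5, 7); every entry lies in (-1, 1) because of the tanh
```

In the actual model, $Q$ and $A$ come from the convolution or biLSTM layers and $U$ is learned jointly with the rest of the network parameters.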
When the convolution is used to compute $Q$ and $A$, the matrix $G$ contains the scores of a \emph{soft alignment} between the convolved $k$-size context windows of $q$ and $a$. When the biLSTM is used to compute $Q$ and $A$, the matrix $G$ contains the scores of a \emph{soft alignment} between the hidden vectors of each token in $q$ and $a$. Next, we apply column-wise and row-wise max-poolings over $G$ to generate the vectors $g^q \in \mathbb{R}^{M}$ and $g^a \in \mathbb{R}^{L}$, respectively. Formally, the $j$-th elements of the vectors $g^q$ and $g^a$ are computed as follows: \begin{equation} [g^{q}]_j =\max_{1 \leq l \leq L}\left[G_{j,l}\right] \end{equation} \begin{equation} [g^{a}]_j =\max_{1 \leq m \leq M}\left[G_{m,j}\right] \end{equation} We can interpret each element $j$ of the vector $g^a$ as an \emph{importance score} for the context around the $j$-th word in the candidate answer $a$ with regard to the question $q$. Likewise, each element $j$ of the vector $g^q$ can be interpreted as the importance score for the context around the $j$-th word in the question $q$ with regard to the candidate answer $a$. Next, we apply the softmax function to the vectors $g^q$ and $g^a$ to create attention vectors $\sigma^{q}$ and $\sigma^{a}$. For instance, the $j$-th element of the vector $\sigma^{q}$ is computed as follows: \begin{equation} [\sigma^{q}]_j = \dfrac{e^{[g^{q}]_j}}{\displaystyle \sum_{1 \leq l \leq M}{e^{[g^{q}]_l}}} \end{equation} Finally, the representations $r^q$ and $r^a$ are computed as the product of the output of the convolution (or biLSTM) over $q$ and $a$ with the attention vectors $\sigma^{q}$ and $\sigma^{a}$, respectively: \begin{equation} \label{sigmaq} r^q=Q\sigma^{q} \end{equation} \begin{equation} \label{sigmaa} r^a=A\sigma^{a} \end{equation} Like in QA-CNN and QA-biLSTM, the final score is also computed using the cosine similarity between $r^q$ and $r^a$.
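Putting the steps above together, a minimal NumPy sketch of the full attentive-pooling computation reads as follows. Again this is our own simplification with random toy inputs; in the actual model $Q$ and $A$ come from the convolution or biLSTM and $U$ is trained.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift by max for numerical stability
    return e / e.sum()

def attentive_pooling(Q, A, U):
    """Return (r_q, r_a, score) for Q: (c, M), A: (c, L), U: (c, c)."""
    G = np.tanh(Q.T @ U @ A)               # (M, L) soft-alignment matrix
    g_q = G.max(axis=1)                    # importance scores, question positions
    g_a = G.max(axis=0)                    # importance scores, answer positions
    sigma_q, sigma_a = softmax(g_q), softmax(g_a)  # attention vectors
    r_q, r_a = Q @ sigma_q, A @ sigma_a    # attention-weighted representations
    # final score: cosine similarity between r_q and r_a
    score = float(r_q @ r_a / (np.linalg.norm(r_q) * np.linalg.norm(r_a)))
    return r_q, r_a, score

rng = np.random.default_rng(1)
c, M, L = 4, 5, 7
Q = rng.standard_normal((c, M))
A = rng.standard_normal((c, L))
U = rng.standard_normal((c, c))
r_q, r_a, score = attentive_pooling(Q, A, U)
print(r_q.shape, r_a.shape)  # (4,) (4,)
```

Note how each summary vector depends on the other member of the pair through $G$, which is what makes the pooling attentive in both directions.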
We use SGD to train AP-CNN and AP-biLSTM by minimizing the same pairwise loss function used in QA-CNN and QA-biLSTM. \section{Conclusions} \label{conclusions} We present attentive pooling, a two-way attention mechanism for discriminative model training. The main contributions of the paper are: (1) AP is more general than recently proposed two-way attention mechanisms because: (a) it learns how to compute interactions between the items in the input pair; and (b) it can be applied to both CNNs and RNNs; (2) we demonstrate that AP can be effectively used with CNNs and biLSTMs in the context of the answer selection task, using three different benchmark datasets; (3) our experimental results demonstrate that AP helps the CNN to cope with large input texts; (4) we present new state-of-the-art results for the InsuranceQA and TREC-QA datasets; and (5) for the WikiQA dataset, our results are the best reported so far among methods that do not use handcrafted features. \section{Experimental Results} \label{experimental_results} \subsection{InsuranceQA} In Table \ref{tab:insuranceqa}, we present the experimental results of the four NNs for the InsuranceQA dataset. The results are in terms of accuracy, which is equivalent to precision at top one. On the bottom part of this table, we can see that AP-CNN outperforms QA-CNN by a large margin in both test sets, as well as in the dev set. AP-biLSTM also outperforms QA-biLSTM in all three sets. AP-CNN and AP-biLSTM have similar performance. On the top part of Table \ref{tab:insuranceqa} we present the results of two state-of-the-art systems for this dataset. In \cite{feng2015applying}, the authors present a CNN architecture that is similar to QA-CNN, but that uses a different similarity metric instead of cosine similarity. In \cite{tan:Arxiv15}, the authors use a biLSTM architecture that employs unidirectional attention. Both AP-CNN and AP-biLSTM outperform these state-of-the-art systems. \begin{table}[ht!]
\caption{Accuracy of different systems for InsuranceQA} \label{tab:insuranceqa} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lrrr} \hline \abovespace\belowspace \bf System & \bf Dev & \bf Test1 & \bf Test2 \\ \hline \cite{feng2015applying} & 65.4 & 65.3 & 61.0 \\ \cite{tan:Arxiv15} & 68.4 & 68.1 & 62.2 \\ \hline QA-CNN & 61.6 & 60.2 & 56.1\\ QA-biLSTM & 66.6 & 66.6 & 63.7 \\ AP-CNN & \bf 68.8 & 69.8 & 66.3 \\ AP-biLSTM & 68.4 & \bf 71.7 & \bf 66.4\\ \hline \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} One important characteristic of AP-CNN is that it requires fewer convolutional filters than QA-CNN. For the InsuranceQA dataset, AP-CNN uses 10x fewer filters (400) than QA-CNN (4000). Using 800 filters in AP-CNN produces very similar results to using 400. On the other hand, as also found in \cite{feng2015applying}, QA-CNN requires at least 2000 filters to achieve more than 60\% accuracy on InsuranceQA. AP-CNN needs fewer filters because it does not rely only on the final vector representation to capture interactions between the input question and answer. As a result, although AP-CNN has a more complex architecture, its training is two times faster than that of QA-CNN. Using a Tesla K20Xm, our Theano implementation of AP-CNN takes about 16 minutes to complete one epoch (training + inference over the validation set) for InsuranceQA, which consists of processing 1.5 million text segments. In Figures \ref{apcnn_vs_qacnn_t1} and \ref{apcnn_vs_qacnn_t2}, we plot the aggregated accuracy of AP-CNN and QA-CNN for answers up to a certain length for the Test1 and Test2 sets, respectively. We can see in both plots that the performance of both systems is better for shorter answers. However, while the performance of QA-CNN continues to drop as longer answers are considered, the performance of AP-CNN remains stable after a length of $\sim$90 tokens.
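The aggregated accuracy plotted in these figures can be computed as in the following sketch; the toy result list below is purely illustrative:

```python
def aggregated_accuracy(results, max_len):
    """results: (answer_length, correct) pairs, one per test question, where
    `correct` is 1 if the top-ranked candidate was a ground-truth answer.
    Returns precision at top one over the questions whose answer length
    is at most max_len tokens."""
    kept = [correct for length, correct in results if length <= max_len]
    return sum(kept) / len(kept) if kept else 0.0

# Hypothetical toy results: (length of the answer in tokens, correct?)
toy = [(10, 1), (40, 1), (80, 0), (120, 1), (200, 0)]
print(aggregated_accuracy(toy, 100))   # 2 of the 3 answers up to 100 tokens
```

Sweeping `max_len` over a range of lengths produces one curve per system, as in the two plots.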
These results support our hypothesis that attentive pooling helps the CNN become robust to larger input texts. \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{plot_insuranceqa_test1}} \caption{Aggregated accuracy for answers up to a certain length in the InsuranceQA Test1 set} \label{apcnn_vs_qacnn_t1} \end{center} \vskip -0.2in \end{figure} \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{plot_insuranceqa_test2}} \caption{Aggregated accuracy for answers up to a certain length in the InsuranceQA Test2 set} \label{apcnn_vs_qacnn_t2} \end{center} \vskip -0.2in \end{figure} \subsection{TREC-QA} In Table \ref{tab:trecqa}, we present the experimental results of the four NNs for the TREC-QA dataset. The results are in terms of mean average precision (MAP) and mean reciprocal rank (MRR), which are the metrics normally used in previous work with the same dataset. We use the official \emph{trec\_eval} scorer to compute MAP and MRR. We can see in Table \ref{tab:trecqa} that AP-CNN outperforms QA-CNN by a large margin in both metrics. AP-biLSTM outperforms QA-biLSTM, but its performance is not as good as that of AP-CNN. On the top part of Table \ref{tab:trecqa} we present the results of three recent works that use TREC-QA as a benchmark. In \cite{wang2015}, the authors present an LSTM architecture for answer selection. Their best result consists of a combination of LSTM and the BM25 algorithm. In \cite{severyn2015}, the authors propose an NN architecture where the representations created by a convolutional layer are the input to similarity measure learning. Wang \& Ittycheriah \yrcite{wang_Ittycheriah2015} propose a word-alignment-based method that is suitable for the FAQ-based QA task. AP-CNN outperforms the state-of-the-art systems in both metrics, MAP and MRR. \begin{table}[ht!]
\caption{Performance of different systems for TREC-QA} \label{tab:trecqa} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lrr} \hline \abovespace\belowspace \bf System & \bf MAP & \bf MRR \\ \hline \citet{wang2015} & 0.7134 & 0.7913 \\ \citet{severyn2015} & 0.7460 & 0.8080 \\ \citet{wang_Ittycheriah2015} & 0.7460 & 0.8200 \\ \hline QA-biLSTM & 0.6750 & 0.7723 \\ QA-CNN & 0.7147 & 0.8070 \\ AP-biLSTM & 0.7132 & 0.8032 \\ AP-CNN & \bf 0.7530 & \bf 0.8511 \\ \hline \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} \subsection{WikiQA} Table \ref{tab:wikiqa} shows the experimental results of the four NNs for the WikiQA dataset. As in the other two datasets, AP-CNN outperforms QA-CNN, and AP-biLSTM outperforms QA-biLSTM. The difference in performance between AP-CNN and QA-CNN is smaller than the one for the InsuranceQA dataset. We believe this is because the average size of the answers in WikiQA (25) is much smaller than in InsuranceQA (95). Attentive pooling is expected to have more impact on datasets with larger answer/question lengths. In Table \ref{tab:wikiqa} we also present the results of two recent works that use WikiQA as a benchmark. Yang et al. \yrcite{yang2015} present a bigram CNN model with average pooling. In \cite{yin2015}, the authors propose an attention-based CNN. In order to make a fair comparison, in Table \ref{tab:wikiqa} we include Yin et al.'s result that uses word embeddings only\footnote{Yin et al. \cite{yin2015} report 0.6921 (MAP) and 0.7108 (MRR) when using handcrafted features in addition to word embeddings.}. AP-CNN outperforms these two systems in both metrics. \begin{table}[ht!]
\caption{Performance of different systems for WikiQA} \label{tab:wikiqa} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lrr} \hline \abovespace\belowspace \bf System & \bf MAP & \bf MRR \\ \hline \citet{yang2015} & 0.6520 & 0.6652 \\ \citet{yin2015} & 0.6600 & 0.6770 \\ \hline QA-biLSTM & 0.6557 & 0.6695 \\ QA-CNN & 0.6701 & 0.6822 \\ AP-biLSTM & 0.6705 & 0.6842 \\ AP-CNN & \bf 0.6886 & \bf 0.6957 \\ \hline \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} \subsection{Attentive Pooling Visualization} Figures \ref{example1} and \ref{example2} depict heat maps of two test questions from InsuranceQA that were correctly answered by AP-CNN and whose answers are more than 100 words long. The stronger the color of a word in the question (answer), the larger the attention weight in $\sigma^q$ ($\sigma^a$) of the trigram centered at that word. As we can see in the figures, the attentive pooling mechanism is indeed putting more focus on the segments of the answer that have some interaction with the question, and vice versa. \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{example2_red}} \caption{Attention heat map from AP-CNN for a correctly selected answer.} \label{example1} \end{center} \vskip -0.2in \end{figure} \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{example1_red}} \caption{Attention heat map from AP-CNN for a correctly selected answer.} \label{example2} \end{center} \vskip -0.2in \end{figure} \section{Experimental Setup} \label{experimental_setup} \subsection{Datasets} We apply AP-CNN, AP-biLSTM, QA-CNN and QA-biLSTM to three different answer selection datasets: InsuranceQA, TREC-QA and WikiQA. These datasets contain text of different domains and have different characteristics.
Table \ref{as:datasets} presents some statistics about the datasets, including the number of questions in each set, the average length of questions (M) and answers (L), the average number of candidate answers in the dev/test sets, and the average ratio between the lengths of the ground-truth answers and their questions. \begin{table*}[ht] \caption{Answer Selection Datasets.} \label{as:datasets} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lrrrcccc} \hline \abovespace\belowspace Dataset & Train & Dev & Test & Avg. M & Avg. L & Avg. \# Cand. Ans. & Avg. L/M \\ \hline \abovespace InsuranceQA & 12887 & 1000 & 1800x2 & 7 & 95 & 500 & 13.8 \\ TREC-QA & 1162 & 65 & 68 & 8 & 28 & 38 & 4.2 \\ WikiQA & 873 & 126 & 243 & 6 & 25 & 9 & 5.0 \\ \hline \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table*} InsuranceQA\footnote{git clone https://github.com/shuzi/insuranceQA.git} is a recently released large-scale non-factoid QA dataset from the insurance domain. This dataset provides a training set, a validation set, and two test sets. We do not see obvious categorical differentiation between the questions of the two test sets. For each question in the dev/test sets, there is a set of 500 candidate answers, which include the ground-truth answers and randomly selected negative answers. More details can be found in \cite{feng2015applying}. TREC-QA\footnote{The data is obtained from \cite{yao2013} \url{http://cs.jhu.edu/~xuchen/packages/jacana-qa-naacl2013-data-results.tar.bz2}} was created by Wang et al. \yrcite{wangmengqiu2007} based on Text REtrieval Conference (TREC) QA track (8-13) data. We follow the exact approach of train/dev/test question selection in \cite{wang2015}, in which all questions with only positive or only negative answers are removed. Finally, we have 1162 training questions, 65 development questions and 68 test questions. WikiQA\footnote{The data is obtained from \cite{yang2015}} is an open-domain question-answering dataset.
We use the subtask that assumes that there is at least one correct answer for each question. The corresponding dataset consists of 20,360 question/candidate pairs in train, 1,130 pairs in dev and 2,352 pairs in test. We adopt the standard setup of only considering questions that have correct answers for evaluation. \subsection{Word Embeddings} In order to fairly compare our results with those of previous work, we use two different sets of pre-trained word embeddings. For the InsuranceQA dataset, we use the 100-dimensional vectors that were trained by Feng et al. \yrcite{feng2015applying} using word2vec \cite{word2vec2013}. Following Wang \& Nyberg \yrcite{wang2015}, Tan et al. \yrcite{tan:Arxiv15} and Yin et al. \yrcite{yin2015}, for the TREC-QA and WikiQA datasets we use the 300-dimensional vectors that were trained using word2vec and are publicly available on the website of this tool\footnote{https://code.google.com/p/word2vec/}. \subsection{Neural Networks Setup} In Table \ref{tab:nn_hyperparams}, we show the selected hyperparameter values, which were tuned using the validation sets. We try to use the same hyperparameters for all three datasets as much as possible. The size of the word embeddings is different due to the different pre-trained versions that we used for InsuranceQA and the other two datasets. We use a context window of size 3 for InsuranceQA, while we set this parameter to 4 for TREC-QA and WikiQA. Using the selected hyperparameters, the best results are normally achieved using between 15 and 25 training epochs. For AP-CNN, AP-biLSTM and QA-biLSTM, we also use a learning rate schedule that decreases the learning rate $\lambda$ according to the training epoch $t$. Following \citet{santos2014}, we set the learning rate for epoch $t$, $\lambda_t$, using the equation: $\lambda_t = \dfrac{\lambda}{t}$. \begin{table*}[ht!]
\caption{Neural Network Hyper-Parameters} \label{tab:nn_hyperparams} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{llrrrr} \hline \abovespace\belowspace \bf Hyp. & \bf Hyperpar. Name & \bf AP-CNN & \bf QA-CNN & \bf AP-biLSTM & \bf QA-biLSTM \\ \hline $d$ & Word Emb. size & 100/300 & 100/300 & 100/300 & 100/300 \\ $c$ & Conv. Filters / Hid.Vec. Size & 400 & 4000 & 141x2 & 141x2 \\ $k$ & Context Window size & 3/4 & 2 & 1 & 1 \\ $mbs$ & Minibatch size & 20 & 1 & 20 & 20 \\ $m$ & Loss margin & 0.5 & 0.009 & 0.2 & 0.1 \\ $\lambda$ & Init. Learning Rate & 1.1 & 0.05 & 1.1 & 1.1 \\ \hline \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table*} In our experiments, the four NN architectures QA-CNN, AP-CNN, QA-biLSTM and AP-biLSTM are implemented using Theano \cite{bergstra:scipy2010}. \section*{Acknowledgements} The authors would like to thank Piero Molino for creating the script used to produce the text heat maps presented in this work. \section{Introduction} \label{introduction} Neural networks (NN) with attention mechanisms have recently proven to be successful at different computer vision (CV) and natural language processing (NLP) tasks, such as image captioning \cite{xu:icml2015}, machine translation \cite{bahdanau2015:ICLR} and factoid question answering \cite{moritz:NIPS2015}. However, most recent work on neural attention models has focused on one-way attention mechanisms based on recurrent neural networks designed for generation tasks. Another important family of machine learning tasks is centered around pair-wise ranking or classification, which has a broad set of applications, including but not limited to question answering, entailment, paraphrasing and other pair-wise matching problems. The current state-of-the-art models usually include an NN-based representation for the input pair, followed by discriminative ranking or classification models.
For example, a convolution (or an RNN) and max-pooling are used to independently construct distributed vector representations of the input pair, followed by large-margin training \cite{hu2014,weston2014,Shen@CIKM2014,dos2015learning}. The key contribution of this work is that we propose \emph{Attentive Pooling} (AP), a two-way attention mechanism that significantly improves such discriminative models' performance on pair-wise ranking or classification by enabling joint learning of the representations of both inputs as well as their similarity measurement. Specifically, AP enables the pooling layer to be aware of the current input pair, in a way that information from the two input items can directly influence the computation of each other's representations. The main idea in AP consists of learning a similarity measure over projected segments (e.g., trigrams) of the two items in the input pair, and using the similarity scores between the segments to compute attention vectors in both directions. Next, the attention vectors are used to perform pooling. There are a few key benefits of our model. \begin{itemize} \item Thanks to the two-way attention, our model projects the paired inputs, even though they may not always be semantically comparable for some applications (e.g., questions and answers in question answering), into a common representation space in which they can be compared in a more plausible way. \item Our model is effective in matching pairs of inputs with significant length variations. \item The two-way attention mechanism is independent of the underlying representation learning. For example, AP can be applied to both CNNs and RNNs, in contrast to the one-way attention used in generation models, which is mostly based on recurrent nets. \end{itemize} In this work, we perform an extensive number of experiments applying attentive pooling CNNs (AP-CNN) and biLSTMs (AP-biLSTM) to the answer selection task.
In this task, given a question $q$ and a candidate answer pool $P=\{a_1, a_2, \cdots , a_p\}$ for this question, the goal is to search for and select the candidate answer $a \in P$ that correctly answers $q$. We perform experiments with three publicly available benchmark datasets, which vary in data scale, complexity and length ratios between questions and answers: InsuranceQA, TREC-QA and WikiQA. For all three datasets, AP-CNN and AP-biLSTM respectively outperform the CNN and the biLSTM that do not use attention. Additionally, AP-CNN achieves state-of-the-art results for the three datasets. Our experimental results also demonstrate that attentive pooling makes the CNN more robust to large input texts. This is an important finding, since recent work has demonstrated that, in the context of semantically equivalent question retrieval, CNN-based representations do not scale well with the size of the input text \cite{dos2015learning}. Additionally, as AP-CNN does not rely only on the final vector representation to capture interactions between the input question and answer, it requires far fewer convolutional filters than the regular CNN. This means that AP-CNN-based representations are more compact, which can help speed up the training process. Although we demonstrate experimental results for NLP tasks only, AP is a general method that can also be applied to other types of NNs that perform matching of two inputs. Therefore, we believe that AP can be useful for different applications, such as computer vision and bioinformatics. This paper is organized as follows. In Section \ref{neural_nets}, we describe two NN architectures for answer selection that have been recently proposed in the literature. In Section \ref{ap_networks}, we detail the attentive pooling approach. In Section \ref{related_work}, we discuss some related work. Sections \ref{experimental_setup} and \ref{experimental_results} detail our experimental setup and results, respectively.
In Section \ref{conclusions} we present our final remarks. \section{Neural Networks for Answer Selection} \label{neural_nets} Different neural network architectures have recently been proposed to perform matching of semantically related text segments \cite{yu2014,hu2014,dos2015learning,wang2015,severyn2015,tan:Arxiv15}. In this section we briefly review two NN architectures that have previously been applied to the answer selection task: QA-CNN \cite{feng2015applying} and QA-biLSTM \cite{tan:Arxiv15}. Given a pair ($q$, $a$) consisting of a question $q$ and a candidate answer $a$, both networks score the pair by first computing fixed-length independent continuous vector representations $r^q$ and $r^a$, and then computing the cosine similarity between these two vectors. In Figure \ref{qacnn} we present a joint illustration of these two neural networks. The first layer in both QA-CNN and QA-biLSTM transforms each input word $w$ into a fixed-size real-valued word embedding $r^{w} \in \mathbb{R}^{d}$. Word embeddings (WEs) are encoded by column vectors in an embedding matrix $W^{0}\in\mathbb{R}^{d\times|V|}$, where $V$ is a fixed-size vocabulary and $d$ is the dimension of the word embeddings. Given the input pair ($q$, $a$), where the question $q$ contains $M$ tokens and the candidate answer $a$ contains $L$ tokens, the output of the first layer consists of two sequences of word embeddings $q^{emb}=\{r^{w_{1}}, ..., r^{w_{M}}\}$ and $a^{emb}=\{r^{w_{1}}, ..., r^{w_{L}}\}$. Next, QA-CNN and QA-biLSTM use different approaches to process these sequences. While QA-CNN processes both $q^{emb}$ and $a^{emb}$ using a convolution, QA-biLSTM uses a bidirectional Long Short-Term Memory RNN \cite{lstm1997} to process these sequences.
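The embedding lookup performed by this first layer can be sketched as follows; the embedding matrix and the token ids are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
d, V = 4, 10                         # embedding size and vocabulary size (toy values)
W0 = rng.standard_normal((d, V))     # embedding matrix W^0: one column per word

def embed(token_ids, W0):
    """Map a sequence of token ids to the matrix of their word embeddings,
    returned as a (d, sequence length) matrix of columns of W^0."""
    return W0[:, token_ids]

q_emb = embed([3, 1, 4, 1, 5], W0)   # a hypothetical 5-token question
```

Repeated tokens (id 1 above) map to identical columns, since the lookup simply selects columns of $W^{0}$.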
\begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{qacnn_bilstm}} \caption{Joint illustration of QA-CNN and QA-biLSTM.} \label{qacnn} \end{center} \vskip -0.2in \end{figure} \subsection{Convolution} Given the sequence $q^{emb}=\{r^{w_{1}}, ..., r^{w_{M}}\}$, let us define the matrix $Z^q = [z_1,...,z_M]$, where each column contains a vector $z_m\in\mathbb{R}^{dk}$ that is the concatenation of a sequence of $k$ word embeddings centered at the $m$-th word of the question. The output of the convolution with $c$ filters over the question $q$ is computed as follows: \begin{equation} \label{conv_layer_apcnn_Q} Q = W^{1}Z^q+ b^{1} \end{equation} where each column $m$ in $Q \in \mathbb{R}^{c \times M}$ contains features extracted in a context window around the $m$-th word of $q$. The matrix $W^{1}$ and the vector $b^{1}$ are parameters to be learned. The number of convolutional filters $c$ and the size of the word context window $k$ are hyper-parameters to be chosen by the user. In a similar manner, and using the same NN parameters $W^{1}$ and $b^{1}$, we compute $A \in \mathbb{R}^{c \times L}$, the output of the convolution over the candidate answer $a$: \begin{equation} \label{conv_layer_apcnn_A} A = W^{1}Z^a+ b^{1} \end{equation} \subsection{Bidirectional LSTM (biLSTM)} Our LSTM implementation is similar to the one in \cite{graves2013} with minor modifications.
Given the sequence $q^{emb}=\{r^{w_{1}}, ..., r^{w_{M}}\}$, the hidden vector $\mathbf{h}_{t}$ (with size $H$) at time step $t$ is updated as follows: \begin{eqnarray} i_{t} & = & \sigma(\mathbf{W}_{i}r^{w_{t}}+\mathbf{U}_{i}\mathbf{h}_{t-1}+\mathbf{b}_{i})\\ f_{t} & = & \sigma(\mathbf{W}_{f}r^{w_{t}}+\mathbf{U}_{f}\mathbf{h}_{t-1}+\mathbf{b}_{f})\\ o_{t} & = & \sigma(\mathbf{W}_{o}r^{w_{t}}+\mathbf{U}_{o}\mathbf{h}_{t-1}+\mathbf{b}_{o})\\ \tilde{C}_{t} & = & \tanh(\mathbf{W}_{m}r^{w_{t}}+\mathbf{U}_{m}\mathbf{h}_{t-1}+\mathbf{b}_{m})\\ C_{t} & = & i_{t}*\tilde{C}_{t}+f_{t}*C_{t-1}\\ \mathbf{h}_{t} & = & o_{t}*\tanh(C_{t}) \end{eqnarray} In the LSTM architecture, there are three gates (input $i$, forget $f$ and output $o$) and a cell memory vector $C$. $\sigma$ is the sigmoid function. The input gate determines how incoming vectors $r^{w_{t}}$ alter the state of the memory cell. The output gate allows the memory cell to have an effect on the outputs. Finally, the forget gate allows the cell to remember or forget its previous state. $\mathbf{W} \in \mathbb{R}^{H \times d}$, $\mathbf{U} \in \mathbb{R}^{H \times H}$ and $\mathbf{b} \in \mathbb{R}^{H \times 1}$ are the network parameters. Single-direction LSTMs suffer from the weakness of not utilizing contextual information from future tokens. A bidirectional LSTM utilizes both the previous and future context by processing the sequence in two directions, generating two independent sequences of LSTM output vectors. One processes the input sequence in the forward direction, while the other processes the input in the reverse direction. The output at each time step is the concatenation of the two output vectors from both directions, i.e., $h_t = \overrightarrow{h_{t}} \parallel \overleftarrow{h_{t}}$. We define $c=2 \times H$ for notational consistency with the previous subsection.
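The update equations above can be sketched step by step in NumPy; the parameters below are randomly initialized for illustration only, and in the biLSTM this step is run over the sequence in both directions before the two hidden states are concatenated:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, C_prev, p):
    """One LSTM time step. x: (d,) input embedding; h_prev, C_prev: (H,)
    previous hidden and cell states; p: dict of parameters with
    W_* of shape (H, d), U_* of shape (H, H), b_* of shape (H,)."""
    i = sigmoid(p['W_i'] @ x + p['U_i'] @ h_prev + p['b_i'])        # input gate
    f = sigmoid(p['W_f'] @ x + p['U_f'] @ h_prev + p['b_f'])        # forget gate
    o = sigmoid(p['W_o'] @ x + p['U_o'] @ h_prev + p['b_o'])        # output gate
    C_tilde = np.tanh(p['W_m'] @ x + p['U_m'] @ h_prev + p['b_m'])  # candidate cell
    C = i * C_tilde + f * C_prev                                    # new cell state
    h = o * np.tanh(C)                                              # new hidden state
    return h, C

rng = np.random.default_rng(0)
d, H = 4, 6
p = {}
for g in 'ifom':
    p[f'W_{g}'] = 0.1 * rng.standard_normal((H, d))
    p[f'U_{g}'] = 0.1 * rng.standard_normal((H, H))
    p[f'b_{g}'] = np.zeros(H)
h, C = lstm_step(rng.standard_normal(d), np.zeros(H), np.zeros(H), p)
```

Because the gates are sigmoids and the output passes through $\tanh$, every component of $h$ stays in $(-1, 1)$.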
After computing the hidden state $h_t$ for each time step $t$, we generate the matrices $Q \in \mathbb{R}^{c \times M}$ and $A \in \mathbb{R}^{c \times L}$, where the $j$-th column in $Q$ ($A$) corresponds to the $j$-th hidden state $h_j$ computed by the biLSTM when processing $q$ ($a$). The same network parameters are used to process both questions and candidate answers. \subsection{Scoring and Training Procedure} Given the matrices $Q$ and $A$, we compute the vector representations $r^{q} \in \mathbb{R}^{c}$ and $r^{a} \in \mathbb{R}^{c}$ by applying a column-wise max-pooling over $Q$ and $A$, followed by a non-linearity. Formally, the $j$-th elements of the vectors $r^q$ and $r^a$ are computed as follows: \begin{equation} [r^{q}]_j =\tanh\left(\max_{1 \leq m \leq M}\left[Q_{j,m}\right]\right) \end{equation} \begin{equation} [r^{a}]_j =\tanh\left(\max_{1 \leq l \leq L}\left[A_{j,l}\right]\right) \end{equation} The last layer in QA-CNN and QA-biLSTM scores the input pair ($q$,$a$) by computing the cosine similarity between the two representations: \begin{equation} \label{cosine_sim} s(q,a)=\frac{r^q \cdot r^a}{\|r^q\| \|r^a\|} \end{equation} Both networks are trained by minimizing a pairwise ranking loss function over the training set $D$. The input in each round consists of two pairs ($q$, $a^{+}$) and ($q$, $a^{-}$), where $a^{+}$ is a ground-truth answer for $q$, and $a^{-}$ is an incorrect answer. As in \cite{weston2014,hu2014}, we define the training objective as a hinge loss: \begin{equation} L = \max \{ 0, m - s_{\theta}(q, a^+) + s_{\theta}(q, a^-) \} \end{equation} where $m$ is a constant margin, and $s_{\theta}(q, a^+)$ and $s_{\theta}(q, a^-)$ are scores generated by the network with parameter set $\theta$. During training, for each question we randomly sample 50 negative answers from the entire answer set, but only use the one with the highest score to update the model. We use stochastic gradient descent (SGD) to minimize the loss function with respect to $\theta$.
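The hinge loss and the negative-sampling strategy described above can be sketched as follows; random vectors stand in for the learned representations $r^q$ and $r^a$, so the example is illustrative only:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def hinge_loss(s_pos, s_neg, margin=0.5):
    """Pairwise ranking loss: L = max(0, m - s(q, a+) + s(q, a-))."""
    return max(0.0, margin - s_pos + s_neg)

def hardest_negative(q, negatives, k=50, rng=None):
    """Sample k negatives and keep only the highest-scoring one for the update."""
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.choice(len(negatives), size=min(k, len(negatives)), replace=False)
    return max((negatives[i] for i in idx), key=lambda a: cosine(q, a))

rng = np.random.default_rng(0)
q = rng.standard_normal(8)                    # placeholder for r^q
a_pos = q + 0.1 * rng.standard_normal(8)      # a "correct" answer: close to q
pool = [rng.standard_normal(8) for _ in range(100)]
a_neg = hardest_negative(q, pool, rng=rng)
loss = hinge_loss(cosine(q, a_pos), cosine(q, a_neg))
```

The loss is zero once the correct answer outscores the hardest sampled negative by at least the margin, so such pairs contribute no gradient.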
The backpropagation algorithm is used to compute the gradients of the network. \section{Related Work} \label{related_work} Traditional work on answer selection has normally used feature engineering, linguistic tools, or external resources \cite{yih2013,wangmengqiu2010,wangmengqiu2007}. Recently, deep learning (DL) approaches have been exploited for this task and have significantly outperformed traditional non-DL methods. For example, in \cite{yu2014,feng2015applying,severyn2015}, the authors generate the representations of questions and answers separately, and score a QA pair using a similarity metric on top of these representations. In \citet{wang2015}, a joint feature vector is first learned from a joint long short-term memory (LSTM) model connecting questions and answers, and the task is then converted into a learning-to-rank problem. At the same time, attention-based systems have shown very promising results on a variety of NLP tasks, such as machine translation \cite{bahdanau2015:ICLR,sutskever2014}, caption generation \cite{xu:icml2015} and factoid question answering \cite{moritz:NIPS2015}. Such models learn to focus their ``attention'' on specific parts of their input. Some recently proposed approaches introduce attention mechanisms into the answer selection task. Tan et al. \yrcite{tan:Arxiv15} developed an attentive reader based on bidirectional long short-term memory, which emphasizes certain parts of the answer according to the question embedding. Unlike \cite{tan:Arxiv15}, in which attention is imposed only on answer embedding generation, AP-CNN and AP-biLSTM consider the interdependence between questions and answers. In the context of two-way attention, two very recent works are related to ours. Rockt{\"{a}}schel et al. \yrcite{Rocktaschel:reasoning15} propose a two-way attention method that is inspired by bidirectional LSTMs that read a sequence and its reverse for improved encoding.
Their approach, which is designed for RNNs only, differs in many aspects from the approach described in this work, which can easily be applied to both CNNs and RNNs. Yin et al. \yrcite{yin2015} present a two-way attention mechanism that is tailored to CNNs. Some of the main differences between their approach and this work are: (1) they use a simple Euclidean distance to compute the interdependence between the two input texts, while in this work we apply similarity metric learning, which has the potential to learn better ways to measure the interaction between segments of the input items; and (2) the models in \cite{yin2015} compute the attention vector using sum-pooling over the alignment matrix and use the convolutional outputs, updated by the attention, as the input to another convolutional layer. In this work we use max-pooling over the alignment matrix followed by a softmax, in order to explicitly create an attention vector that is used to perform the pooling. Experimental results show that this difference yields a substantial performance improvement on the WikiQA dataset.
\section{Introduction} With the rapid increase in the number of user-generated videos shared on the Internet, it is becoming increasingly advantageous to explore new ways of retrieving them---for example, by automatically detecting events occurring in them. One less explored approach is to analyze the soundtracks. But while the analysis of visual content is widely studied in disciplines such as image processing and computer vision, analysis of video soundtracks has largely been restricted to specific, inherently audio-focused tasks such as speech processing and music retrieval. However, when visual information cannot reliably identify content (e.g., due to poor lighting conditions), audio may still furnish vivid information. In other words, audio content is frequently complementary to visual content---and in addition, it offers more tractable processing. Multimodal approaches that use both visual and audio cues have recently gained traction. However, there has not been much in-depth exploration of how best to leverage the audio information. On the audio side, due to a historical focus on carefully-curated speech and music processing corpora, fewer audio researchers consider the problems posed by unfiltered generic audio with varied background noises---but these problems must be addressed to build audio classifiers that can handle user-generated video. In addition, past work on video analysis has often fed audio ``blindly'' into a machine learner, without much consideration of how audio information is structured. In the last few years, the focus has been changing in both event detection and audio analysis. For example, sound event detection was included in the 2013 Detection and Classification of Acoustic Scenes and Events (DCASE) challenge at IEEE AASP \cite{SdGdBeLmPm15}.
More recently, the YLI-MED annotated video corpus became available \cite{BjBdEbFgGhGlJaKsTjWj15}; YLI-MED is targeted toward multimedia event detection, and so provides a good platform for studying audio-based event detection (see Section \ref{sec:dataset}). Major areas of audio-based event detection research include audio data representation and learning methodologies. In this work, we focus on the first aspect, audio data representation, which aims to extract specific features that can refine an enormous amount of raw audio data into higher-level information about the audio signals. Section \ref{sec:related} gives an overview of current representation approaches and discusses their limitations in detail. To offer a brief summary: current approaches do not effectively capture signal variance within audio tracks or local structure (for example, between Gaussian components); they risk losing information about geometric manifold structure and hidden structure within the data; they often require a lot of storage space; and they rarely leverage available information from labels. In this paper, we address these issues by introducing a Discriminative and Compact Audio Representation (DCAR) to model audio information. This method is implemented in two phases. First, each audio track is modeled using a Gaussian mixture model (GMM) with several mixture components to describe its statistical distribution. This is beneficial for capturing the variability within each audio track and for reducing the storage space required, relative to the full original number of frames. Second, by integrating the labels for the audio tracks and the local structure among the Gaussian components, we identify an embedding to reduce the dimensionality of the mixture components and render them more discriminative. In this phase, the dimensionality reduction task is formulated as an optimization problem on a Grassmannian manifold and solved via the conjugate gradient method.
Then a new audio track can be represented with the aid of the learned embedding, which further compacts the audio information. For classification, we adopt the kernel ridge regression (KRR) method, which is compatible with the manifold structure of the data. As we argue in detail in Section~\ref{SecDGMMwhole}, DCAR represents a considerable advancement of the state-of-the-art in audio-based event detection. In a nutshell, the novelty of DCAR lies in its being a \textit{compact representation} of an audio signal that \textit{captures variability} and has \textit{better discriminative ability} than other representations. Our claim is supported by a series of experiments, described in Section \ref{sec:experiments}, conducted on the YLI-MED dataset. We first built binary classifiers for each pair of events in the dataset, and found that the proposed DCAR performed better than an i-vector strategy on pairwise discrimination. We then delved deeper, comparing multi-event detection results for DCAR with three existing methods (including simple GMMs and mean/variance vectors as well as i-vectors) for events that are difficult to distinguish vs.\ events that are easy to distinguish. We showed that DCAR can handle both easy and hard cases; Section~\ref{sec:easyhard} discusses how these results may follow from how each type of model leverages (or doesn't leverage) the intrinsic structure of the data. Finally, we conducted multi-event detection experiments on all ten events, again showing that DCAR is the most discriminative representation. In particular, DCAR shows notable accuracy gains on events where humans find it more difficult to classify the videos, i.e., events with lower average annotator confidence scores. The remainder of this paper is organized as follows: Section \ref{sec:related} surveys related audio work; Section \ref{SecDGMMwhole} presents the proposed DCAR model in detail; and Section \ref{sec:EDtask} describes the audio-based event detection process with KRR.
Section \ref{sec:datameth} describes our methods and the real-world video dataset YLI-MED; and Section \ref{sec:experiments} discusses a series of binary and multi-event detection experiments. The results demonstrate that DCAR significantly improves event-detection performance. Conclusions and future work are discussed in Section \ref{sec:conc}. \section{Related Work} \label{sec:related} Audio representations include low-level features (e.g., energy, cepstral, and harmonic features) and intermediate-level features obtained via further processing steps such as filtering, linear combination, unsupervised learning, and matrix factorization (see overview in Barchiesi et al.\ 2015 \cite{BdGdSdPm15}). A typical audio representation method for event detection is to model each audio file as a vector so that traditional classification methods can be easily applied. The most popular low-level features are Mel-frequency cepstral coefficients (MFCCs) \cite{EaTjKaFs03}, which describe the local spectral envelope of audio signals. However, MFCC is a short-term frame-level representation, so it does not capture the whole structure hidden in each audio signal. As one means to address this, some researchers have used end-to-end classification methods (e.g., neural networks), for example to simultaneously learn intermediate-level audio concepts and train an event classifier \cite{RmEbBjFg15}. Several approaches have used first-order statistics derived from the frames' MFCC features, which empirically improves performance on audio-based event detection. For example, Jin et al.\ adopted a codebook model to define audio concepts \cite{JqSpRsBsDd12}. This method uses first-order statistics to represent audio: it quantizes low-level features into discrete codewords, generated via clustering, and provides a histogram of codeword counts for each audio file (i.e., it uses the mean of the data in each cluster).
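The codeword-histogram idea can be sketched in a few lines. The snippet below is an illustrative toy rather than the exact pipeline of Jin et al.: it clusters synthetic frame features with $k$-means to form a codebook, then quantizes each file's frames and counts codeword occurrences; all sizes and names are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-in for low-level frame features (e.g., MFCCs):
# each row is one frame's feature vector.
train_frames = rng.normal(size=(500, 13))

# Learn a codebook of discrete codewords by clustering frames.
K = 8
codebook = KMeans(n_clusters=K, n_init=10, random_state=0).fit(train_frames)

def codeword_histogram(frames, codebook):
    """Quantize each frame to its nearest codeword and count occurrences,
    yielding one fixed-length histogram per audio file."""
    words = codebook.predict(frames)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()  # normalize so files of any length are comparable

file_frames = rng.normal(size=(120, 13))  # one synthetic "audio file"
h = codeword_histogram(file_frames, codebook)
```

Normalizing the histogram makes files of different durations directly comparable, which is why such first-order representations can be fed to standard vector classifiers.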
However, such methods do not capture the complexity of real-life audio recordings. For event detection, researchers have therefore modeled audio using the second-order statistical covariance matrix of the low-level MFCC features \cite{MrLhGlFgDa11,HzCyLkHvLc13,EbLhFg13,RgNwHp13}. There are two ways to compute the second-order statistics. The first assumes that each audio file can be characterized by the mean and variance of the MFCC features representing each audio frame, and models the file as a vector by concatenating the mean and variance \cite{RgNwHp13}; this representation can be referred to as a mean/variance vector or \textit{mv-vector}. The other method is to model all training audio via a Gaussian mixture model and then compute the Baum-Welch statistics of each audio file according to the mixture components, as in GMM-supervector representations \cite{MrLhGlFgDa11}. Again, each audio file is represented by stacking the means and covariance matrices. However, such a vectorization process will inevitably distort the geometric structure\footnote{By \textit{geometric structure}, we mean intrinsic structure within data such as affine structure, projective structure, etc.} of the data \cite{SbSa02}. An exciting area of recent work is the i-vector approach, which uses latent factor analysis to compensate for foreground and background variability \cite{DnKpDrDpOp11}. The i-vector approach can be seen as an extension of the GMM-supervector. It assumes that these high-dimensional supervectors can be confined to a low-dimensional subspace; this can be implemented by applying probabilistic principal component analysis (PCA) to the supervectors. The advantage of an i-vector is that the system learns the total variability from the training data and then uses it on new data, so that the representation of the new data has similar discriminativity to the representation of the training data.
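For concreteness, the mv-vector baseline amounts to two reductions over the frame axis. The sketch below uses synthetic frame features in place of real MFCCs; the 60-dimensional frame size mirrors the setup used later in this paper but is otherwise arbitrary.

```python
import numpy as np

def mv_vector(frames):
    """Represent one audio file by concatenating the per-dimension mean
    and variance of its frame-level features into a single vector."""
    return np.concatenate([frames.mean(axis=0), frames.var(axis=0)])

rng = np.random.default_rng(0)
frames = rng.normal(size=(300, 60))  # synthetic stand-in for 60-dim MFCC frames
v = mv_vector(frames)                # fixed-length 120-dim representation
```

The appeal of this representation is its fixed length regardless of file duration; its weakness, as discussed below, is that flattening second-order statistics into a vector discards the structure of the underlying distribution.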
I-vectors have shown promising performance in audio-based event detection \cite{EbLhFg13,HzCyLkHvLc13}. In fact, many of these representation methods have shown promising performance, but they have some limitations with regard to audio-based event detection. For example, the signal variance within a given audio track may be large; training a Gaussian mixture model on all of the audio tracks (as in the GMM-supervector or i-vector approaches) does not capture that variability, and thus may not characterize a given event well. The second limitation is that each mixture component consists of both a mean vector and a covariance matrix, which introduces many more variables and thus results in high computational complexity and large storage requirements. The third limitation is that the covariance matrices of the mixture components in these methods are usually flattened into one supervector, which may distort the geometric manifold structure within the data and lose information about hidden structure. Fourth, most audio representations are derived in an unsupervised manner, i.e., they do not make use of any existing label information. But in fact, label information has been very useful for representing data in classification tasks such as image classification \cite{JlZcMn12} and text classification \cite{LmTcSjLy09}. Last but not least, these methods do not explicitly consider the local structure between Gaussian components, which may be useful for distinguishing events. These drawbacks motivate us to propose a new audio representation method to capture the variability within each audio file and to characterize the distinct structures of events with the aid of valuable existing labels and local structure within the data; these characteristics of our method have significant benefits for event detection. \section{Discriminative and Compact Audio Representation}\label{SecDGMMwhole} In this section, we describe our proposed two-phase audio representation method.
The first phase, described in Subsection \ref{SecGMM}, aims to capture the variability within each audio file. The second phase, described in Subsection \ref{SecDGMM}, identifies a discriminative embedding. \subsection{Phase 1: Characterizing Per-Track Variability} \label{SecGMM} Given a set of audio tracks, we first extract their low-level features, in this case MFCC features. Let $\mathbf{X}=\{\mathbf{X}^i\}_{i=1}^n$ denote a set of $n$ audio files. Each file $\mathbf{X}^i$ is segmented into $m_i$ frames. Each frame is computed using a 100 ms Hamming window with a 10 ms frame shift, and its corresponding representation $x^i_j$ ($1\leq i \leq n$ and $1 \leq j \leq m_i$) is built with the first 20 MFCC features and their first-order and second-order derivatives. Each frame is thus modeled via a vector of $d$-dimensional MFCC features ($d=60$), i.e., $x^i_j \in \mathbb{R}^{60}$. Previous work has demonstrated that second-order statistics are much more appropriate for describing complicated multimedia data \cite{DnKpDrDpOp11,WwWrHzSsCx15}. Therefore, for each audio track \begin{equation}\label{eq:ExMax} \mathbf{X}^i = \{x_j^i\}_{j=1}^{m_i}\in \mathbb{R}^{d\times m_i}, \end{equation} we train a GMM with $P$ components using the Expectation-Maximization algorithm. The estimated GMM components are denoted as: \begin{equation}\label{eq:GMMcomps} G=\{g_i\}_{i=1}^N \end{equation} where $g_i=\{w_i,\mu_i,\Sigma_i\}$. When each audio file is modeled via $P$ components, $N=nP$. Each component has its corresponding weight $w_i$, mean $\mu_i$, and covariance matrix $\Sigma_i$. Generally, covariance matrices are positive semi-definite, and can be made strictly positive definite by adding a small constant to the diagonal elements of the matrix. For convenience, we use the notation $\Sigma$ to indicate a symmetric positive definite (SPD) matrix.
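Phase 1 can be approximated with scikit-learn's `GaussianMixture`, which runs EM internally. The snippet below fits a per-track GMM on synthetic frames; the `reg_covar` parameter adds the small diagonal constant mentioned above, keeping each covariance strictly positive definite. The frame count and random data are placeholders, not our actual features.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
d, P = 60, 3

# Synthetic stand-in for one track's MFCC(+derivative) frames, shape (m_i, d).
frames = rng.normal(size=(400, d))

# Fit a P-component GMM to this single track; scikit-learn runs EM
# internally, and reg_covar adds a small constant to the covariance
# diagonals so that each Sigma_p stays strictly positive definite.
gmm = GaussianMixture(n_components=P, covariance_type="full",
                      reg_covar=1e-3, random_state=0).fit(frames)

weights, means, covs = gmm.weights_, gmm.means_, gmm.covariances_

# Smallest eigenvalue across all components (should be strictly positive).
min_eig = min(np.linalg.eigvalsh(S).min() for S in covs)
```

Fitting one GMM per track, rather than one over all tracks, is what lets the representation capture within-track variability while shrinking hundreds of frames down to $P$ components.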
After GMM modeling, each audio file---typically containing hundreds to thousands of frames---is reduced to a smaller number of mixture components with prior probabilities. The covariance matrices provide a compact and informative feature descriptor, which lies on a specific manifold and naturally captures the (second-order) variability of the audio. \subsection{Phase 2: Identifying a Discriminative Embedding}\label{SecDGMM} In the second phase of the DCAR method, a discriminative embedding is identified by integrating the global and local structure of the training data, so that both training data and unseen new data can be re-represented in a discriminative and compact manner. \subsubsection{Overview of Phase 2} \label{sec:DCARovw} Although an audio file can be represented via the above mixture components, the model presented thus far ignores the global structure of the data (e.g., the valuable label information) and the local structure among the components (e.g., nearest neighbors). Meanwhile, the original feature representation is usually large (since there are 60 MFCC features, each mean vector has $60$ elements, and each covariance matrix contains $60\times 60$ elements), which can make later data processing time-consuming. Therefore, in this subsection, we propose a new method for generating a discriminative and compact representation from the high-dimensional mixture components. The DCAR method is summarized in Figure \ref{Fig:DLGMM}. \begin{figure}[!htb] \centering \includegraphics[width=0.4\textwidth]{DLRCM-framework3.jpg} \caption{Framework for generating the discriminative and compact audio representation (DCAR).
The left side shows the original $d$-dimensional GMM components ($\mu\in \mathbb{R}^d$ and $\Sigma\in \mathbb{R}^{d\times d}$); the right side shows the DCAR representation with $r$-dimensional ($r<d$) mixture components ($\hat{\mu}\in \mathbb{R}^r$ and $\hat{\Sigma}\in \mathbb{R}^{r\times r}$).} \label{Fig:DLGMM} \end{figure} Our main goal is to learn an embedding $\mathbf{W}\in\mathbb{R}^{d\times r}$ ($r<d$, where $d$ is the number of MFCC features and $r$ is the dimensionality of the embedding space) based on the GMM components of the labeled audio tracks ($G=\{g_i,\ell_i\}_{i=1}^N$, where $\ell_i$ is the label for component $g_i$, inherited from the audio file from which $g_i$ was generated). The resulting low-dimensional GMM components should preserve the important structure of the original GMM components as much as possible. To accomplish this, we introduce an embedding $\mathbf{W}$ and define the new GMM components with mean \begin{equation}\label{meanMap} \hat{\mu} = \mathbf{W}^T\mu \end{equation} and covariance matrix \begin{equation}\label{SigmaMap} \hat{\Sigma} = \mathbf{W}^T\Sigma \mathbf{W}. \end{equation} As mentioned above, the covariance matrix $\Sigma$ is SPD, i.e., $0\prec \Sigma \in \mathcal{S}ym_{d}^+$. To maintain this property, i.e., $0\prec \hat{\Sigma} \in \mathcal{S}ym_{r}^+$, the embedding $\mathbf{W}$ is constrained to be full rank. A simple way of enforcing this requirement is to impose orthonormality constraints on $\mathbf{W}$ (i.e., $\mathbf{W}^T\mathbf{W}=\mathbf{I}_r$), so that the embedding can be identified by solving an optimization problem on the Grassmannian manifold. For event detection, each training file has label information, which we also assign to its GMM components. This valuable information can be interpreted as global structure for those components. There is also intrinsic internal structure among the components, such as the affinity between each pair of components.
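The mappings in (\ref{meanMap}) and (\ref{SigmaMap}) can be checked numerically: with any orthonormal $\mathbf{W}$, the projected covariance stays SPD. In the sketch below, a random orthonormal matrix from a QR decomposition merely stands in for the learned embedding; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 60, 10

# Placeholder orthonormal embedding W (in DCAR, W is learned; here it is
# simply the Q factor of a QR decomposition of a random matrix).
W, _ = np.linalg.qr(rng.normal(size=(d, r)))

# One GMM component: mean mu and an SPD covariance Sigma.
mu = rng.normal(size=d)
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)  # construction guarantees SPD

# Project the component: mu_hat = W^T mu, Sigma_hat = W^T Sigma W.
mu_hat = W.T @ mu
Sigma_hat = W.T @ Sigma @ W

# Because W has orthonormal (hence full-rank) columns, Sigma_hat stays SPD.
min_eig = np.linalg.eigvalsh(Sigma_hat).min()
```

This is exactly why the full-rank (here, orthonormality) constraint matters: without it, $\hat{\Sigma}$ could become singular and fall off the SPD manifold.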
When reducing the dimensionality of GMM components, it is necessary to maintain these two types of structure. Motivated by the idea of linear discriminant analysis \cite{McLachlanG2004} and the Maximum Margin Criterion \cite{LhJtZk04}, DCAR aims to minimize the intra-class distance while simultaneously maximizing the inter-class distance. In the next subsection, we introduce an undirected graph defined by a real symmetric affinity matrix $\mathbf{A}\in \mathbb{R}^{N\times N}$ that we use to encode these structures. \subsubsection{Affinity Matrix Construction}\label{SecAMC} The affinity matrix $\mathbf{A}$ is defined by building an intra (within)-class similarity graph and an inter (between)-class similarity graph, as follows. \begin{equation}\label{AffinityGraph} \mathbf{A} = \mathbf{S}_w - \mathbf{S}_b \end{equation} $\mathbf{S}_w$ and $\mathbf{S}_b$ are two binary matrices describing the intra-class and inter-class similarity graphs respectively, formulated as: \begin{equation} \mathbf{S}_w(g_i,g_j)= \begin{cases} 1 & \text{ if } g_i \in \text{NN}_w(g_j) \text{ or } g_j \in \text{NN}_w(g_i) \\ 0 & \text{ otherwise} \end{cases} \end{equation} and \begin{equation} \mathbf{S}_b(g_i,g_j)= \begin{cases} 1 & \text{ if } g_i \in \text{NN}_b(g_j) \text{ or } g_j \in \text{NN}_b(g_i) \\ 0 & \text{ otherwise,}\\ \end{cases} \end{equation} where $\text{NN}_w(g_i)$ contains the $n_w$ nearest neighbors of component $g_i$ that share the label $\ell_i$, and $\text{NN}_b(g_i)$ is the set of $n_b$ nearest neighbors of $g_i$ that have different labels. Here, the nearest neighbors of each component can be identified via their similarity.
We use heat kernel weights with a self-tuning technique (for parameters $\sigma_{\mu}$ and $\sigma_{\Sigma}$) to measure the similarity between components: \begin{equation}\label{GaussianKernel} S(g_{i},g_{j}) = \lambda \exp\Big (\frac{-\delta_{\mu}^2(\mu_{i},\mu_{j})}{2\sigma_{\mu}^2} \Big) + \exp\Big (\frac{-\delta_{\Sigma}^2(\Sigma_{i},\Sigma_{j})}{2\sigma_{\Sigma}^2} \Big) \end{equation} where $\lambda$ is a trade-off parameter to control the contribution from the components' means and covariance matrices and $\delta_{\mu}$ indicates the distance measure for the means of the mixture components. Here we use the simple Euclidean distance \begin{equation}\label{DisMu} \delta_{\mu}^2(\mu_i,\mu_j)=\| \mu_i - \mu_j\|_2^2. \end{equation} $\delta_{\Sigma}$ indicates the distance measure for the covariance matrices of the components. A number of metrics have been used in previous research, including the Affine-Invariant Riemannian Metric (AIRM) \cite{PxFpAn06}, Stein Divergence \cite{CaSsBaPn11}, and the Log-Euclidean Metric (LEM) \cite{AvFpPxAn07}. AIRM imposes a high computational burden in practice, and we have observed experimentally that nearest neighbors selected according to LEM more often fall into the same event than nearest neighbors selected according to either AIRM or Stein (see Appendix \ref{app:comparison} for details). For these two reasons, we exploit LEM to compute $\delta_{\Sigma}$: \begin{equation}\label{DisSigma} \delta_{\Sigma}^2(\Sigma_i,\Sigma_j)=\| \log(\Sigma_i)-\log(\Sigma_j)\|_F^2. \end{equation} The constructed affinity matrix $\mathbf{A}$ thus effectively combines local structure, i.e., nearest neighbors, and global structure, i.e., label information---which is used to find the within-class nearest neighbors ($\text{NN}_w$) and the between-class nearest neighbors ($\text{NN}_b$).
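The similarity in (\ref{GaussianKernel}) with the LEM distance (\ref{DisSigma}) can be sketched as follows. The matrix logarithm is computed via an eigendecomposition (valid for SPD matrices), and the bandwidths are fixed illustrative values rather than the self-tuned ones used in our experiments; the affinity matrix would then be assembled from nearest-neighbor lists ranked by this similarity.

```python
import numpy as np

def spd_logm(S):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    vals, vecs = np.linalg.eigh(S)
    return (vecs * np.log(vals)) @ vecs.T

def lem_dist2(S1, S2):
    """Squared Log-Euclidean distance between two SPD matrices."""
    D = spd_logm(S1) - spd_logm(S2)
    return float(np.sum(D * D))

def component_similarity(mu_i, S_i, mu_j, S_j, lam=1.0, s_mu=1.0, s_Sig=1.0):
    """Heat-kernel similarity combining the mean and covariance distances;
    lam, s_mu, s_Sig play the roles of lambda, sigma_mu, sigma_Sigma."""
    d_mu2 = float(np.sum((mu_i - mu_j) ** 2))
    d_Sig2 = lem_dist2(S_i, S_j)
    return lam * np.exp(-d_mu2 / (2 * s_mu**2)) + np.exp(-d_Sig2 / (2 * s_Sig**2))

# Tiny synthetic check with two components sharing the same covariance.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
S1 = A @ A.T + 4 * np.eye(4)
S2 = S1.copy()
mu1, mu2 = rng.normal(size=4), rng.normal(size=4)

same = component_similarity(mu1, S1, mu1, S1)  # identical components
diff = component_similarity(mu1, S1, mu2, S2)  # means differ, covariances equal
```

Unlike AIRM, this distance needs only one matrix logarithm per component (which can be cached), which is one practical reason LEM scales better over many pairwise comparisons.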
\subsubsection{Embedding Optimization} Once we have $\mathbf{A}$, the next step is to learn an embedding such that the structure among the original GMM components $\{g_i\}_{i=1}^N = \{\mu_i,\Sigma_i\}_{i=1}^N$ is reflected by the low-dimensional mixture components $\{\hat{g}_i\}_{i=1}^N = \{\hat{\mu}_i,\hat{\Sigma}_i\}_{i=1}^N$. This process can be modeled using the following optimization problem: \begin{equation}\label{DLDGMM1} \mathbf{F}(\mathbf{W}) = \min\limits_{\mathbf{W}^T\mathbf{W}=I_{r}}\sum_{i,j}A_{ij}\Big (\lambda \delta_{\mu}^2(\hat{\mu}_i,\hat{\mu}_j) + \delta_{\Sigma}^2(\hat{\Sigma}_i,\hat{\Sigma}_j)\Big ) \end{equation} With the aid of the mapping functions in (\ref{meanMap}) and (\ref{SigmaMap}) and the distance metrics $\delta_{\mu}$ (\ref{DisMu}) and $\delta_{\Sigma}$ (\ref{DisSigma}), the optimization problem can be rewritten as: \begin{equation}\label{DLDGMM2} \begin{split} \mathbf{F}(\mathbf{W}) = & \min\limits_{\mathbf{W}^T\mathbf{W}=I_{r}}\sum_{i,j}A_{ij}\Big (\lambda \|\mathbf{W}^T(\mu_i-\mu_j)\|_F^2 \\ & + \|\log(\mathbf{W}^T\Sigma_i\mathbf{W}) - \log(\mathbf{W}^T\Sigma_j\mathbf{W})\|_F^2\Big ) \end{split} \end{equation} As in (\ref{GaussianKernel}), $\lambda$ is used to balance the effects of the two terms, and is tuned by cross-validation on the training data. Optimizing $\mathbf{F}(\mathbf{W})$ drives the low-dimensional components to be close whenever their corresponding original high-dimensional components are within-class neighbors, and as far apart as possible whenever they are between-class neighbors. In image processing, there are several lines of research where a mapping has been learned from a high-dimensional manifold to a low-dimensional manifold \cite{HmSmHr14,HzWrSsLxCx15}. However, Harandi et al.\ \cite{HmSmHr14} exploit AIRM and Stein Divergence to measure the distance, and as we noted in Subsection \ref{SecAMC}, these metrics are not appropriate for handling audio data.
Huang et al.\ \cite{HzWrSsLxCx15} identified an embedding on the logarithms of the SPD matrix, but our goal is to identify an embedding on GMM components, including both means and covariance matrices. The problem in (\ref{DLDGMM2}) is a typical optimization problem with orthogonality constraints; it can therefore be formulated as an unconstrained optimization problem on Grassmannian manifolds \cite{ApMrSr08}. Given that the objective function $\mathbf{F}(\mathbf{W})$ has the property that for any rotation matrix $\mathbf{R}\in SO(r)$ (i.e., $\mathbf{R}\mathbf{R}^T=\mathbf{R}^T\mathbf{R}=\mathbf{I}_r$), $\mathbf{F}(\mathbf{W})=\mathbf{F}(\mathbf{WR})$ (see Appendix \ref{app:invariance} for a detailed proof), this optimization problem is most compatible with a Grassmannian manifold. In other words, we can model the embedding $\mathbf{W}$ as a point on a Grassmannian manifold $\mathcal{G}(r,d)$, which consists of the set of all linear $r$-dimensional subspaces of $\mathbb{R}^d$. Here we employ the conjugate gradient (CG) technique to solve (\ref{DLDGMM2}), because CG is easy to implement, has low storage requirements, and provides superlinear convergence in the limit \cite{ApMrSr08}. On a Grassmannian manifold, the CG method performs minimization along geodesics with specific search directions. Here, the geodesic is the shortest path between two points on the manifold. For every point on the manifold $\mathcal{G}$, its tangent space is a vector space that contains the tangent vectors of all possible curves passing through that point. Unlike in flat spaces, on a manifold we cannot directly transport a tangent vector from one point to another by simple translation; instead, tangent vectors must be parallel-transported along geodesics. More specifically, on the Grassmannian manifold, let $\nabla_{\mathbf{W}}$ and $\mathcal{D}_{\mathbf{W}}$ be the Euclidean gradient and the gradient on the manifold of $\mathbf{F}(\mathbf{W})$ at the point $\mathbf{W}$, respectively.
The gradient on the manifold at the $\tau$-th iteration can be obtained by subtracting from the Euclidean gradient its normal component at $\mathbf{W}^{(\tau)}$: \begin{equation}\label{GradientWonM} \mathcal{D}_{\mathbf{W}}^{(\tau)} = \nabla_{\mathbf{W}}^{(\tau)} - \mathbf{W}^{(\tau)}(\mathbf{W}^{(\tau)})^T\nabla_{\mathbf{W}}^{(\tau)}. \end{equation} Then the search direction $\mathcal{H}_{\mathbf{W}}$ in the $(\tau+1)$-th iteration can be computed by parallel transporting the previous search direction and combining it with the negative gradient direction at the current solution: \begin{equation}\label{SearchDirectionWonM} \mathcal{H}_{\mathbf{W}}^{(\tau+1)} = -\mathcal{D}_{\mathbf{W}}^{(\tau+1)} + \gamma^{(\tau+1)}\triangle\mathcal{H}_{\mathbf{W}}^{(\tau)}. \end{equation} Here $\triangle\mathcal{H}_{\mathbf{W}}^{(\tau)}$ is the parallel translation of the vector $\mathcal{H}_{\mathbf{W}}^{(\tau)}$. According to Absil et al.\ \cite{ApMrSr08}, the geodesic going from point $\mathbf{W}$ in the direction $\mathcal{H}_{\mathbf{W}}^{(\tau)}$ can be represented by the geodesic equation \begin{equation}\label{WangleT} \mathbf{W}(t) = \left [ \begin{array}{cc} \mathbf{W}V &U \end{array}\right ] \left [ \begin{array}{c} \cos \Lambda t \\ \sin \Lambda t\end{array}\right ] V^T. \end{equation} Thus, the parallel translation can be obtained by \begin{equation}\label{SearchDirectionParTraWonM} \triangle\mathcal{H}_{\mathbf{W}}^{(\tau)} = \big( -\mathbf{W}^{(\tau)} V \sin \Lambda t^{(\tau)} + U \cos \Lambda t^{(\tau)} \big)\Lambda V^T \end{equation} where $U\Lambda V^T$ is the compact singular value decomposition of $\mathcal{H}_{\mathbf{W}}^{(\tau)}$.
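The geodesic equation (\ref{WangleT}) can be checked numerically: starting from an orthonormal $\mathbf{W}$ and a tangent direction $\mathbf{H}$ satisfying $\mathbf{W}^T\mathbf{H}=0$, the point $\mathbf{W}(t)$ remains orthonormal for every $t$, and $t=0$ recovers $\mathbf{W}$. The sketch below uses small random matrices purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 3

# A point W on the Grassmannian (orthonormal columns).
W, _ = np.linalg.qr(rng.normal(size=(d, r)))

# A tangent direction H at W must satisfy W^T H = 0; project to enforce it.
H = rng.normal(size=(d, r))
H = H - W @ (W.T @ H)

# Compact SVD of the direction: H = U diag(s) V^T.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
V = Vt.T

def geodesic(t):
    """Point reached after moving for time t from W along direction H,
    following the geodesic equation from the text."""
    return (W @ V) @ np.diag(np.cos(s * t)) @ V.T \
        + U @ np.diag(np.sin(s * t)) @ V.T

W_t = geodesic(0.3)
ortho_err = np.linalg.norm(W_t.T @ W_t - np.eye(r))  # stays orthonormal
start_err = np.linalg.norm(geodesic(0.0) - W)        # t = 0 recovers W
```

Because the columns of $U$ span a subspace orthogonal to that of $\mathbf{W}$, the cosine and sine terms combine so that $\mathbf{W}(t)^T\mathbf{W}(t)=\mathbf{I}_r$ exactly, which is why line minimization over $t$ never leaves the manifold.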
We use the exact conjugacy condition to adaptively determine the coefficient $\gamma^{(\tau+1)}$, as follows: \begin{equation}\label{StepSizeWonM} \gamma^{(\tau+1)} = \frac{< \mathcal{D}_{\mathbf{W}}^{(\tau+1)}- \triangle\mathcal{D}_{\mathbf{W}}^{(\tau)}, \mathcal{D}_{\mathbf{W}}^{(\tau+1)}>}{< \mathcal{D}_{\mathbf{W}}^{(\tau)}, \mathcal{D}_{\mathbf{W}}^{(\tau)}>} \end{equation} where $<A,B>=Tr(A^TB)$. Similar to $\triangle\mathcal{H}_{\mathbf{W}}^{(\tau)}$, $\triangle\mathcal{D}_{\mathbf{W}}^{(\tau)}$ is the parallel translation of the vector $\mathcal{D}_{\mathbf{W}}^{(\tau)}$ on the Grassmannian manifold, which can be calculated as \begin{equation}\label{TangetVectParTraWonM} \begin{small} \triangle\mathcal{D}_{\mathbf{W}}^{(\tau)} = \mathcal{D}_{\mathbf{W}}^{(\tau)} - \big( \mathbf{W}^{(\tau)} V \sin \Lambda t^{(\tau)} + U (\mathbf{I}-\cos \Lambda t^{(\tau)}) \big)U^T \mathcal{D}_{\mathbf{W}}^{(\tau)} \end{small} \end{equation} Going back to the objective function in (\ref{DLDGMM2}), by setting $\mathbf{F1}_{ij} = \|\mathbf{W}^T(\mu_i-\mu_j)\|_F^2$ and $\mathbf{F2}_{ij} = \|\log(\mathbf{W}^T\Sigma_i\mathbf{W}) - \log(\mathbf{W}^T\Sigma_j\mathbf{W})\|_F^2$, (\ref{DLDGMM2}) can be rewritten as \begin{equation}\label{eq:reopt} \mathbf{F}(\mathbf{W}) =\min\limits_{\mathbf{W}^T\mathbf{W}=I_{r}}\sum_{i,j}A_{ij}\big (\lambda \mathbf{F1}_{ij} + \mathbf{F2}_{ij} \big ).
\end{equation} Then the Euclidean gradient $\nabla_{\mathbf{W}}$ can be computed in three steps (see Appendix \ref{app:F2} for details): \begin{equation}\label{DerivJ1} \nabla_{\mathbf{W}}\mathbf{F1}_{ij} = 2 (\mu_i-\mu_j)(\mu_i-\mu_j)^T\mathbf{W} \end{equation} \begin{equation}\label{DerivJ2} \begin{split} \nabla_{\mathbf{W}}\mathbf{F2}_{ij} =4\Big (\Sigma_i\mathbf{W}(\mathbf{W}^T\Sigma_i\mathbf{W})^{-1}-\Sigma_j\mathbf{W}(\mathbf{W}^T\Sigma_j\mathbf{W})^{-1}\Big ) \\ \times \Big(\log(\mathbf{W}^T\Sigma_i\mathbf{W})-\log(\mathbf{W}^T\Sigma_j\mathbf{W})\Big ) \end{split} \end{equation} \begin{equation}\label{DerivF} \nabla_{\mathbf{W}}= \sum_{i,j}A_{ij}\big (\lambda \nabla_{\mathbf{W}}\mathbf{F1}_{ij} + \nabla_{\mathbf{W}}\mathbf{F2}_{ij} \big ) \end{equation} This conjugate gradient method for solving (\ref{DLDGMM2}) is summarized in Algorithm \ref{AlgorithmDGMM}. \begin{algorithm}[h] \caption{Solving (\ref{DLDGMM2}) via a Conjugate Gradient on a Grassmannian Manifold} \label{AlgorithmDGMM} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand\algorithmicensure {\textbf{Output:}} \begin{algorithmic}[1] \REQUIRE A set of labeled $d$-dimensional GMM components $\{g_i,\ell_i\}_{i=1}^N$ with means $\{\mu_i\}_{i=1}^N$ and covariance matrices $\{\Sigma_i\}_{i=1}^N$, reduced dimensionality $r$, and parameter $\lambda$.
\STATE Construct an affinity matrix $A$ using (\ref{AffinityGraph}) \STATE Initialize $\mathbf{W}^{(0)}$ such that $(\mathbf{W}^{(0)})^T\mathbf{W}^{(0)}=\mathbf{I}_r$ and $\tau=0$ \STATE Compute $\nabla_{\mathbf{W}}^{(0)}$ as in (\ref{DerivF}) and $\mathcal{D}_{\mathbf{W}}^{(0)}$ as in (\ref{GradientWonM}), and set $\mathcal{H}_{\mathbf{W}}^{(0)} = - \mathcal{D}_{\mathbf{W}}^{(0)}$ \FOR{$\tau=0,1,2,\cdots$} \STATE Perform compact singular value decomposition of $\mathcal{H}_{\mathbf{W}}^{(\tau)}$ \STATE Find the step size $t^{(\tau)}$ by minimizing (\ref{DLDGMM2}) over $t$ along the geodesic $\mathbf{W}^{(\tau)}(t)$ computed as in (\ref{WangleT}) \STATE Compute the parallel translations of $\mathcal{H}_{\mathbf{W}}^{(\tau)}$ and $\mathcal{D}_{\mathbf{W}}^{(\tau)}$, i.e., compute $\triangle\mathcal{H}_{\mathbf{W}}^{(\tau)}$ as in (\ref{SearchDirectionParTraWonM}) and $\triangle\mathcal{D}_{\mathbf{W}}^{(\tau)}$ as in (\ref{TangetVectParTraWonM}) \STATE Set $\mathbf{W}^{(\tau+1)} = \mathbf{W}^{(\tau)} (t^{(\tau)})$ \STATE Compute $\nabla_{\mathbf{W}}^{(\tau+1)}$ as in (\ref{DerivF}) and $\mathcal{D}_{\mathbf{W}}^{(\tau+1)}$ as in (\ref{GradientWonM}) \STATE Find the coefficient $\gamma^{(\tau+1)}$ via (\ref{StepSizeWonM}) \STATE Find the new search direction $\mathcal{H}_{\mathbf{W}}^{(\tau+1)}$ via (\ref{SearchDirectionWonM}) \ENDFOR \ENSURE $\mathbf{W}^{(\tau+1)}$ \end{algorithmic} \end{algorithm} Then, given a new audio file, we can extract its MFCC features, train $P$ GMM components, and re-represent these components with the embedding $\mathbf{W}$ to get its discriminative, low-dimensional mixture components, i.e., the proposed DCAR representation. \section{Event Detection with DCARs} \label{sec:EDtask} As described above, each audio file is represented via several mixture components, including mean vectors and covariance matrices.
It would be possible to flatten the matrices into vectors and then use traditional vector-based classification methods for event detection. However, the covariance matrices lie on the manifold of positive definite matrices, and such a vectorization process would ignore this manifold structure \cite{SbSa02}. Therefore, we use the Kernel Ridge Regression (KRR) method to build the event classifiers. Let $\hat{G}=\{\hat{g}_i\}_{i=1}^N$ and $\hat{g}_i= \{\hat{\mu}_i,\hat{\Sigma}_i\}$ be mixture components for the training audio tracks belonging to $L$ events. $\mathbf{Y}\in \mathbb{R}^{N\times L}$ indicates the label information for $\hat{G}$, where $\mathbf{Y}_{ij}=1$ if the $i^{\text{th}}$ component belongs to the $j^{\text{th}}$ event; otherwise $\mathbf{Y}_{ij}=0$. The KRR method aims to train a classifier by solving the following optimization problem: \begin{equation}\label{KernelDA} \min_{\mathbf{H}}\mathbf{J}(\mathbf{H}) =\|\phi(\hat{G})^T \mathbf{H}-\mathbf{Y}\|_F^2 + \alpha \|\mathbf{H}\|_F^2 \end{equation} where $\phi$ is a feature mapping from the original feature space to a high-dimensional space, and the kernel function can be written as $K=\phi(\hat{G})^T\phi(\hat{G})$. Since each component $\hat{g}_i$ has a mean $\hat{\mu}_i$ and a covariance matrix $\hat{\Sigma}_i$, we can define a combined kernel function to integrate these two parts, as follows: \begin{equation}\label{KernelIntegrated} K(\hat{g}_{i},\hat{g}_{j}) =\lambda K_{\mu}(\hat{\mu}_{i},\hat{\mu}_{j}) + K_{\Sigma}(\hat{\Sigma}_{i},\hat{\Sigma}_{j}) \end{equation} The trade-off parameter $\lambda$ (see (\ref{GaussianKernel})) can be tuned by cross-validation on the training data.
As described in Section \ref{SecAMC}, we use a Gaussian kernel to calculate $K_{\mu}$ and $K_{\Sigma}$ via \begin{equation}\label{eq:Kmu} K_{\mu}(\hat{\mu}_{i},\hat{\mu}_{j}) = \exp\Big (\frac{-\| \hat{\mu}_i - \hat{\mu}_j\|_2^2}{2\sigma_{\hat{\mu}}^2} \Big) \end{equation} and \begin{equation}\label{eq:KSigma} K_{\Sigma}(\hat{\Sigma}_{i},\hat{\Sigma}_{j}) = \exp\Big (\frac{-\| \log(\hat{\Sigma}_i)-\log(\hat{\Sigma}_j)\|_F^2}{2\sigma_{\hat{\Sigma}}^2} \Big). \end{equation} The problem in (\ref{KernelDA}), as a quadratic convex problem, can be optimized by setting its derivative with respect to $\mathbf{H}$ to zero, and then computing $\mathbf{H}$ in closed form: \begin{equation}\label{eq:Hcomp} \mathbf{H} = \phi(\hat{G})(K+\alpha \mathbf{I})^{-1}\mathbf{Y} \end{equation} Here $K_{ij} = K(\hat{g}_{i},\hat{g}_{j})$ as given in (\ref{KernelIntegrated}). Given a new test audio track, $P$ mixture components $\{g_p\}_{p=1}^P=\{w_p,\mu_p,\Sigma_p\}_{p=1}^P$ can be obtained via the methods described in Section \ref{SecGMM}. Then the corresponding discriminative, low-dimensional mixture components $\{\hat{g}_p\}_{p=1}^P$ can be generated as in (\ref{meanMap}) for $\hat{\mu}_p=\mathbf{W}^T\mu_p$, and as in (\ref{SigmaMap}) for $\hat{\Sigma}_p=\mathbf{W}^T\Sigma_p\mathbf{W}$, where the embedding $\mathbf{W}$ is learned from the training data. Next, the class membership matrix $M = \{M_p\}_{p=1}^P$ (where $M_p \in \mathbb{R}^{1\times L}$ is the event membership of the $p$-th component) can be calculated: \begin{equation}\label{eq:MbrMatrix} M_p = \phi(\hat{g}_p)^T\mathbf{H} = \phi(\hat{g}_p)^T\phi(\hat{G})(K+\alpha \mathbf{I})^{-1}\mathbf{Y} = K_p(K+\alpha \mathbf{I})^{-1}\mathbf{Y}. \end{equation} Here $K_p = [K(\hat{g}_{p},\hat{g}_{i})]_{i=1}^N$, indicating the similarity between $\hat{g}_p$ and all of the training mixture components in $\hat{G}$.
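The closed-form training rule (\ref{eq:Hcomp}) and per-component prediction (\ref{eq:MbrMatrix}) can be sketched with plain vectors standing in for mixture components, and a single Gaussian kernel standing in for the combined kernel (\ref{KernelIntegrated}); everything here is a toy illustration with synthetic, well-separated events.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(X, Y, sigma=1.0):
    """K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# Synthetic stand-ins for N training "components" (flattened to plain
# vectors for brevity) drawn from L = 2 well-separated events.
N, dim, L = 40, 5, 2
X = np.vstack([rng.normal(0.0, 1.0, size=(N // 2, dim)),
               rng.normal(3.0, 1.0, size=(N // 2, dim))])
Y = np.zeros((N, L))
Y[:N // 2, 0] = 1.0   # one-hot event labels
Y[N // 2:, 1] = 1.0

# Closed-form KRR: membership of a new point x is K_x (K + alpha I)^{-1} Y.
alpha = 0.1
K = gaussian_kernel(X, X)
coef = np.linalg.solve(K + alpha * np.eye(N), Y)

def membership(x):
    Kx = gaussian_kernel(x[None, :], X)  # similarities to the training set
    return (Kx @ coef)[0]

m0 = membership(np.zeros(dim))      # near event 0's cluster
m1 = membership(np.full(dim, 3.0))  # near event 1's cluster
```

In the full system, each test track contributes $P$ such membership rows, which are then combined across components by a weighted vote to yield the track-level event label.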
We can then make a final prediction about the event of the new audio track with $P$ components using an average voting scheme \begin{equation}\label{eq:AvgVote} \ell = \arg\max_j\sum_{p=1}^P w_p M_p(j) \end{equation} where $w_p$ is the weight of the $p^{\text{th}}$ component. \section{Data and Experimental Methods}\label{sec:datameth} We evaluated the event detection performance of our proposed representation against several baseline representations, using the recently released public dataset YLI-MED. \subsection{Dataset} \label{sec:dataset} YLI-MED \cite{BjBdEbFgGhGlJaKsTjWj15} is an open video corpus for multimedia event detection research (modeled on the TRECVID MED corpus \cite{OpAgFjAbMmSaKwQg12}, but publicly available); the videos in it are drawn from the YFCC100M dataset \cite{TbSdFgEbNkPdBdLl16}. YLI-MED includes about 2000 videos that contain examples of ten events, with standard training and test videos for each event. Since this work focuses on analyzing the acoustic environment of videos, we conducted a series of experiments using the audio tracks. Table \ref{dataS} describes the data we used, including the number of training and test audio files and the range in length for the training and testing sets for each event. The wide variation in length among the tracks makes the event detection task more challenging.
\begin{table*}[!thb] \centering \small \caption{Dataset Composition} \label{dataS} \begin{tabular}{c|l|cc|cc} \hline Event & & \multicolumn{2}{c|}{Training Data} & \multicolumn{2}{c}{Testing Data}\\\cline{3-6} ID &Event Name & \# of Videos & length (ms) & \# of Videos & length (ms) \\\hline Ev101 & Birthday Party & 99 &6850$\sim$248950 &131 &8380$\sim$328960\\ Ev102 & Flash Mob & 91 &8290$\sim$325630 &49 &11710$\sim$152560\\ Ev103 & Getting a Vehicle Unstuck & 89 &5590$\sim$591670 &39 &11170$\sim$157690\\ Ev104 & Parade &95 &7840$\sim$303850 &127 &5770$\sim$216460\\ Ev105 & Person Attempting a Board Trick &99 &5950$\sim$391150 &88 &5500$\sim$254980\\ Ev106 & Person Grooming an Animal &97 &5950$\sim$574300 &38 &7210$\sim$292870\\ Ev107 & Person Hand-Feeding an Animal &95 &6850$\sim$174880 &113 &7840$\sim$244450\\ Ev108 & Person Landing a Fish &99 &7930$\sim$363610 &41 &7480$\sim$250120\\ Ev109 & Wedding Ceremony &90 &9640$\sim$631630 &108 &9820$\sim$646300\\ Ev110 & Working on a Woodworking Project &98 &5590$\sim$373690 &44 &6760$\sim$281080\\ \hline \end{tabular} \end{table*} \subsection{Methodology} \label{sec:expmethods} To evaluate our proposed DCAR method, we compared it with three state-of-the-art audio representations used for event detection: mv-vector \cite{RgNwHp13}, i-vector \cite{EbLhFg13}, and GMM. By \textit{GMM}, we mean here the base GMMs obtained by extracting the GMM components from each audio file (as described in Section \ref{SecGMM}), but without the discriminative dimensional reduction step used in DCAR (described in Section \ref{SecDGMM}). As we mentioned in Section \ref{sec:related}, an i-vector system models all of the training audio frames jointly to obtain a GMM supervector for each file, then factorizes these supervectors to obtain the i-vector representation.
In contrast, an mv-vector models each audio file via the mean and variance of the MFCC features, then concatenates the mean and variance to obtain a vector representation.\footnote{As we want to evaluate the most comparable aspects of the representations, we do not consider the temporal information from the RQA features for the mv-vector method.} There are several parameters for each of the representations, which we tuned using cross-validation on the training data to obtain the best result. For GMM and DCAR, the number of components for each audio track is tuned from 1 to 10, with a step of 1. For i-vector, the number of components in all of the training data is tuned to one of the values in $\{2^7,2^8,2^9,2^{10}\}$ and the vector dimensionality is tuned to one of the values in $\{200,400,600,800,1000\}$. For DCAR, the number of nearest neighbors ($n_w$ and $n_b$) is set to 5 for affinity matrix construction, the embedding space size $r$ is tuned in $[L,60]$ with a step of 5 ($L$ is the number of events), and the trade-off parameter $\lambda$ is tuned to one of $\{10^k\}_{k=-2}^2$.\footnote{In addition, we tried normalizing each track, with each MFCC feature having a mean of 0 and a variance of 1. All GMM components then have 0 means, so the KRR classifier depends solely on the covariance matrix. However, for event detection, information about means appears to be important. When we tuned the trade-off parameter ($\lambda$ in Eq. (\ref{KernelIntegrated})), we found that we usually obtained the best results with both means and covariance matrices.} Because our focus is on comparing different audio representations, we describe here experiments that all used the same classification method, KRR.\footnote{To check the validity of this approach, we also tested several other classification techniques with mv-vector and i-vector representations, including SVM (support vector machines), KNN ($k$-nearest neighbor), and PLDA (probabilistic linear discriminant analysis).
The performance rankings between representations were parallel across all classification techniques.} For the $l$-th event in $t$ testing tracks, we compared the prediction result to the ground truth to determine the number of true positives ($TP_l$), false positives ($FP_l$), true negatives ($TN_l$), and false negatives ($FN_l$). We then evaluated event detection performance using four metrics, $Accuracy$, $FScore$, False Alarm Rate ($FAR$), and $MissRate$, defined (respectively) as: \begin{equation}\label{eq:AccuracyCalc} Accuracy = \frac{\sum_{l=1}^L TP_l}{t} \end{equation} \begin{equation}\label{eq:FScoreCalc} FScore_l = \frac{2\times TP_l}{2\times TP_l+FP_l+FN_l} \end{equation} \begin{equation}\label{eq:FARCalc} \quad FAR_l = \frac{FP_l}{TN_l+FP_l} \end{equation} \begin{equation}\label{eq:MissRateCalc} \quad MissRate_l= \frac{FN_l}{FN_l+TP_l}. \end{equation} $Accuracy$ is calculated on all $t$ testing tracks (i.e., we use combined or overall accuracy), and the other three metrics are calculated for each event and then averaged over $L$ events to evaluate performance. Larger $Accuracy$ and $FScore$ values indicate better performance, and smaller $FAR$ and $MissRate$ values indicate better performance. \begin{figure*}[!htb] \centering \subfigure[I-vector (lower triangle) vs.\ DCAR (upper triangle)]{ \includegraphics[height=0.37\textwidth, width=0.48\textwidth]{BC45_iV_DGMM_Compare_matrix_1.jpg}} \subfigure[Histogram of Relative Gain (DCAR vs.\ i-vector)]{ \includegraphics[height=0.37\textwidth,width=0.48\textwidth]{BC45_iV_DGMM_Improvement_hist_1.jpg}} \caption{Binary classification accuracy. a) Comparison between i-vector (in the lower left triangle) and DCAR (in the upper right triangle) for each pairwise classification, with darker color indicating higher accuracy. 
b) Histogram of the accuracy improvement obtained by DCAR relative to i-vector across the 45 classifications.} \label{Fig:BC45AccIvDGMM} \end{figure*} \section{Experimental Results} \label{SecExp} \label{sec:experiments} We evaluated the four representations under study in a combination of binary detection and multi-event detection tasks, described below. \subsection{Binary-Event Detection} \label{sec:binary} In the first experiment, we built 45 binary classifiers (one for each pair of events). We had two goals in conducting this experiment. The first was to compare two representation strategies: modeling GMMs on all training tracks, in this case as the first phase of the i-vector approach, vs.\ modeling GMMs on each training track, using DCAR. The second was to investigate \textit{how} the events are distinguished. As the graphs in Figure \ref{Fig:BC45AccIvDGMM} show, DCAR outperforms i-vector on most tasks. On average, DCAR achieves an accuracy improvement of $10.74\%$ over i-vector (0.8293 vs.\ 0.7489) across the binary detection tasks. The win-tie-loss value for pairwise tests at the 0.05 significance level for DCAR against i-vector is 35-7-3; the win-tie-loss value at the 0.01 significance level is 40-2-3. From these results, we can see that there are some event pairs that are particularly difficult to distinguish, such as Ev106--Ev107 and Ev107--Ev108. Considering the nature of the events involved, it could be argued that distinguishing between events with a \textit{Person Grooming an Animal} (Ev106), a \textit{Person Hand-Feeding an Animal} (Ev107), and a \textit{Person Landing a Fish} (Ev108) could be non-trivial even for humans. Nonetheless, compared with i-vector, our proposed DCAR increases binary classification accuracy even on these difficult pairs. 
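For reference, the four evaluation metrics defined in Section \ref{sec:expmethods} can be computed directly from the per-event confusion counts. The sketch below is our own illustrative plain-Python implementation of those formulas; the counts in the example are made up, not experimental results:

```python
def detection_metrics(counts, t):
    """Compute Accuracy, mean FScore, mean FAR, and mean MissRate.

    counts: list of (TP, FP, TN, FN) tuples, one per event.
    t: total number of test tracks.
    Accuracy is computed over all t tracks; the other three metrics
    are computed per event and then averaged over the events.
    """
    L = len(counts)
    accuracy = sum(tp for tp, fp, tn, fn in counts) / t
    fscore = sum(2 * tp / (2 * tp + fp + fn) for tp, fp, tn, fn in counts) / L
    far = sum(fp / (tn + fp) for tp, fp, tn, fn in counts) / L
    miss = sum(fn / (fn + tp) for tp, fp, tn, fn in counts) / L
    return accuracy, fscore, far, miss

# Made-up counts for two events over t = 20 test tracks.
acc, fs, far, miss = detection_metrics([(8, 2, 8, 2), (5, 1, 10, 4)], t=20)
```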
This result demonstrates that modeling each audio file via a Gaussian mixture model is more suitable for characterizing audio content, and that integrating label information and local structure is useful in generating discriminative representations. \subsection{Easy vs.\ Hard Cases} \label{sec:easyhard} To further explore how our proposed DCAR method performs on audio-based event detection under different difficulty levels, we extracted two subsets from YLI-MED. Subset EC5 (``EasyCase'') contains five events (Ev101, Ev104, Ev105, Ev108, and Ev109) that are generally easy to distinguish from (most of) the others. Subset HC4 (``HardCase'') contains four events (Ev103, Ev106, Ev107, and Ev110) that are more difficult to distinguish.\footnote{The division was made based on results from the experiments described in Subsections \ref{sec:binary} and \ref{sec:tendetect}, as well as \textit{a priori} understanding of the events' similarity from a human point of view. Because multiple criteria were used, Ev102 did not fall clearly into either category.} \subsubsection{Dimensionality Tuning} Before comparing DCAR with other representations, we conducted a set of multi-event detection experiments to study how the dimensionality parameter ($r$) affects DCAR under these two difficulty levels. Here, we used five-fold cross-validation on the training data to tune $r$. The parameter was tuned from 5 to 60 with a step of 5. Combined $Accuracy$ and average $MissRate$ on EC5 and HC4 for each step are given in Figure \ref{Fig:VarDC5OC4VarR}.
\begin{figure*}[!htb] \centering \subfigure[EC5]{ \includegraphics[height=0.3\textwidth,width=0.47\textwidth]{DC5-var-r-Acc-MR-1.jpg}} \subfigure[HC4]{ \includegraphics[height=0.3\textwidth,width=0.47\textwidth]{OC4-var-r-Acc-MR-1.jpg}} \caption{Effects of varying the parameter ($r$) in DCAR for the EasyCase subset (five-event detection), and for the HardCase subset (four-event detection) in terms of Accuracy and MissRate.} \label{Fig:VarDC5OC4VarR} \end{figure*} The results show that DCAR performs better as $r$ increases, reaches the best value at $r=25$ for both cases, and then again decreases in performance as $r$ grows larger. We believe this is because a smaller $r$ cannot optimally characterize the hidden structure of the data, while a larger $r$ may separate the structure into more dimensions until it is essentially equivalent to the original data, thus decreasing the efficacy of the representation. \subsubsection{Easy and Hard Results} Moving on to comparing DCAR with other state-of-the-art representations at these two difficulty levels, Table \ref{DC5OC4Compare4Ms} shows the multi-event detection performance of DCAR and three baseline representations---base GMM, mv-vector, and i-vector---in terms of $FScore$, $Accuracy$, $MissRate$ and $FAR$ on EC5 and HC4. \begin{table*}[!thb] \centering \caption{Comparison of four representations (mv-vector, i-vector, GMM, and DCAR) on multi-event detection for easy-to-distinguish events (EC5) and hard-to-distinguish events (HC4).
(Best results in boldface.)}\label{DC5OC4Compare4Ms} \begin{tabular}{l|cccc||cccc} \hline & \multicolumn{4}{c||}{Subset EC5} & \multicolumn{4}{c}{Subset HC4} \\ Evaluation & \multicolumn{4}{c||}{(Ev101,Ev104, Ev105, Ev108, Ev109)} & \multicolumn{4}{c}{(Ev103,Ev106, Ev107, Ev110)} \\ \cline{2-9} Metric &mv-vector &i-vector &GMM &DCAR &mv-vector &i-vector &GMM &DCAR \\\hline FScore($\uparrow$) &0.4773 &0.6415 &0.6670 &\bf{0.7067} &0.4278 &0.2795 &0.4821 &\bf{0.5282} \\Accuracy($\uparrow$) &0.5455 &0.6828 &0.7131 &\bf{0.7434} &0.4573 &0.2863 &0.5000 &\bf{0.5684}\\MissRate($\downarrow$) &0.5168 &0.3367 &0.3252 &\bf{0.2779} &0.5393 &0.6975 &0.4840 &\bf{0.4577}\\FAR($\downarrow$) &0.1136 &0.0785 &0.0730 &\bf{0.0647} &0.1788 &0.2409 &0.1684 &\bf{0.1496}\\ \hline \end{tabular} \end{table*} For both subsets, DCAR consistently achieves the best results (marked in bold) on each evaluation metric, in comparison with the three baselines. (For $Accuracy$, p = 0.01 or better for pairwise comparisons of DCAR vs.\ mv-vector and i-vector, and p = 0.05 or better for DCAR vs.\ the base GMM, for both subsets. Significance is assessed using McNemar's two-tailed test for correlated proportions.) However, interestingly, i-vector performs better than mv-vector on the EC5 data, but worse than mv-vector on the HC4 data (p = 0.0001 or better for $Accuracy$ comparisons for both subsets). \subsubsection{DCAR, Variance, and Structure} We can make a number of observations about these results. First, it seems that modeling GMM components for each audio track (as in the GMM, DCAR, and mv-vector representations\footnote{For these purposes, we can treat the mean and variance within the mv-vector as a GMM with one component.}) is more effective than modeling a GMM on all the training audio tracks together (as in i-vector) when the events are semantically related to each other (as in HC4). 
We believe this is due to the fact that, in real-world applications (e.g., with user-generated content), each audio track may have a large variance. The set of strategies that model each track via GMM capture the hidden structure within each audio track, while the i-vector strategy may smooth away that structure (even between events), leading to a less useful representation. Second, GMM and DCAR perform better than mv-vector on both subsets. We believe this indicates that one mixture component (as in mv-vector) may not sufficiently capture the full structure of the audio; in addition, vectorizing the mean and variance inevitably distorts the intrinsic geometrical structure among the data. Third, DCAR outperforms the base GMM. As we described in Section \ref{SecDGMMwhole}, DCAR begins by extracting such a GMM model, but it also takes into account the label information and the intrinsic nearest neighbor structure among the audio files when modeling the training data, and outputs a mapping function to effectively represent the test data. In sum, these experimental results further confirm that discriminative dimensionality reduction is beneficial for characterizing the distinguishing information for each audio file, leading to a better representation. \subsection{Ten-Event Detection} \label{sec:tendetect} Because event detection is a kind of supervised learning, learning becomes more difficult as the number of events increases. In the next experiment, we again compared the proposed DCAR model with the three baseline representations, this time on a ten-event detection task. As before, the parameters for each method were tuned by cross-validation on the training data. Table \ref{C10perECompare4Ms} gives the detection performance on each event and their average over the ten events in terms of $FScore$ and $MissRate$. 
\begin{table*}[!thb] \centering \caption{Per-event comparison of detection performance (as FScore and MissRate) using four representations: mv-vector, i-vector, GMM, and DCAR. (Best results in boldface; second-best underlined.)}\label{C10perECompare4Ms} \begin{tabular}{c|cccc||cccc} \hline & \multicolumn{4}{c||}{FScore ($\uparrow$)} & \multicolumn{4}{c}{MissRate ($\downarrow$)} \\\cline{2-9} &mv-vector &i-vector &GMM &DCAR &mv-vector &i-vector &GMM &DCAR \\\hline Ev101 &0.7259 &\bf{0.7842} &0.7303 &\underline{0.7835} &0.2824 &0.1679 &\underline{0.1527} &\bf{0.1298} \\ Ev102 &0.2837 &0.3396 &\underline{0.3651} &\bf{0.4603} &0.5918 &0.6327 &\underline{0.5306} &\bf{0.4082}\\ Ev103 &0.2178 &\underline{0.2569} &0.2410 &\bf{0.3820} &0.7179 &\underline{0.6410} &0.7436 &\bf{0.5641}\\ Ev104 &0.4274 &\underline{0.6206} &0.6000 &\bf{0.6207} &0.6063 &0.4331 &\underline{0.3622} &\bf{0.3621}\\ Ev105 &0.3354 &0.3899 &\bf{0.5714} &\underline{0.5178} &0.6932 &0.6477 &\bf{0.3864} &\underline{0.4205}\\ Ev106 &0.1964 &0.1835 &\underline{0.2963} &\bf{0.3750} &0.7105 &0.7368 &\underline{0.6842} &\bf{0.6053}\\ Ev107 &\underline{0.3850} &0.3298 &0.3250 &\bf{0.4024} &\bf{0.6814} &0.7257 &0.7699 &\underline{0.7080}\\ Ev108 &0.3191 &0.3853 &\underline{0.3878} &\bf{0.4231} &0.6341 &\underline{0.4878} &0.5366 &\bf{0.4634}\\ Ev109 &0.4211 &\underline{0.5028} &0.4286 &\bf{0.5176} &0.6667 &\bf{0.5833} &0.6667 &\underline{0.5926}\\ Ev110 &0.0833 &\underline{0.2299} &\bf{0.2857} &0.2162 &0.9091 &\underline{0.7727} &\bf{0.7500} &0.8182\\ \hline Average &0.3395 &0.4023 &0.4231 &\bf{0.4699} &0.6494 &0.5829 &0.5583 &\bf{0.5072}\\ \hline \end{tabular} \end{table*} For all of the individual events and on average, DCAR achieves superior or competitive performance. DCAR also performs better in terms of $Accuracy$ and $FAR$. 
The overall $Accuracy$ scores for mv-vector, i-vector, GMM, and DCAR are 0.3907, 0.4640, 0.4923, and {\bf{0.5321}}, respectively (p = 0.01 for DCAR vs.\ each baseline [McNemar's two-tailed]), and the average $FAR$ scores are 0.0674, 0.0593, 0.0570, and {\bf{0.0523}}. Although other representations may perform as well or better on some particular events, DCAR consistently outperforms the other representations for all evaluation metrics on the \textit{average} or \textit{overall} scores (an average of more than 8\% gain on all metrics relative to the second-best representation). These results further demonstrate that modeling each audio file via GMM and then integrating both label information and local structure are beneficial to constructing a discriminative audio representation for event detection. In addition to comparing results across the four methods for ten-event detection, we also experimented with applying feature reduction methods at the frame level before training the GMM, using PCA \cite{Ji05} and linear discriminant analysis (LDA) \cite{McLachlanG2004}, as an alternative to DCAR's approach to dimensionality reduction. The number of principal components ($r$) in PCA was tuned from 5 to 60 with a step of 5. For LDA, $r=L-1$, where $L$ is the number of events. Average or overall results are given in Table \ref{tab:redcomp}. The results with PCA are a little better than GMM without PCA, but the accuracy difference is not statistically significant (p = 0.7117, McNemar's two-tailed). Results with LDA are much worse than GMM without LDA. We hypothesize that the main reason for the poor performance of LDA+GMM is that LDA only considers $L-1$ components, which is usually too few to capture sufficient information for later GMM training.
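The significance values quoted in this section come from McNemar's two-tailed test for correlated proportions. For concreteness, an exact version of that test can be sketched as follows (our own illustrative implementation operating on hypothetical discordant counts, not the code used in our experiments):

```python
from math import comb

def mcnemar_exact_two_tailed(b, c):
    """Exact two-tailed McNemar test on paired classification outcomes.

    b: test tracks classified correctly by system 1 but not by system 2.
    c: the reverse. Under the null hypothesis that both systems have the
    same error rate, the smaller discordant count is Binomial(b + c, 1/2).
    Returns the two-tailed p-value.
    """
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical example: 1 vs. 9 discordant tracks.
p = mcnemar_exact_two_tailed(1, 9)  # p = 22/1024, below the 0.05 level
```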
\begin{table*}[!thb] \centering \caption{Comparison of detection performance for GMM representations with and without pre-training feature reduction.} \label{tab:redcomp} \begin{tabular}{c|ccc} \hline Evaluation & GMM & PCA+GMM & LDA+GMM \\ Metric & & $r=30$ & $r=9$ \\ \hline FScore ($\uparrow$) & 0.4231 & 0.4293 & 0.3278 \\ Accuracy ($\uparrow$) & 0.4923 & 0.4987 & 0.3419 \\ MissRate ($\downarrow$) & 0.5583 & 0.5508 & 0.6574 \\ FAR ($\downarrow$) & 0.0570 & 0.0562 & 0.0935 \\ \hline \end{tabular} \end{table*} \subsection{Intra-Event Variation} Delving deeper into how the effectiveness of a representation may depend on variable characteristics of audio tracks, we looked at the degree to which some test tracks in YLI-MED could be classified more accurately than others. \subsubsection{Variable Performance Within Events} We took the predicted result for each test audio track in the experiments with four representations described in Subsection \ref{sec:tendetect} and calculated how many of the representations made correct predictions for that track. \begin{figure}[!htb] \centering \includegraphics[width=0.7\textwidth]{PercentageAudiosDiffAcc-4Rs_v4.jpg} \caption{Per-event percentages of test tracks correctly classified by a given number of representations. \textit{Acc} is the proportion of representation types that correctly classify a given audio track. } \label{Fig:PUC4Ms} \end{figure} Figure \ref{Fig:PUC4Ms} shows the distribution of the number of representations making accurate predictions for each audio track, broken down by event. Generally, there is wide variation in accuracy among audio files belonging to the same event, with the exception of Ev101 (Birthday Party); this suggests that Ev101 may have distinctive audio characteristics that lead to more consistent classification. It is worth noting that there are some audio files that are never correctly classified by any of the representations (i.e., where \textit{Acc} = 0).
For example, more than 50\% of the audio tracks for Ev110 (Working on a Woodworking Project) could not be correctly classified by any of the four representations. This situation highlights a challenging property of the event detection task: some of the events are quite confusable due to their inherent characteristics. For example, Ev103 (Getting a Vehicle Unstuck) may have similar audio properties to Ev110 (Working on a Woodworking Project) in that both are likely to involve sounds generated by motors. This is also the reason we included Ev103 and Ev110 in the ``Hard Case'' HC4 dataset for the experiments described in Section \ref{sec:easyhard}. \subsubsection{Relationship to Annotator Confidence} When the YLI-MED videos were collected, three annotators were asked to give a confidence score for each video, chosen from \{1,2,3\}, with 1 being ``Not sure'' and 3 being ``Absolutely sure'' that the video contains an example of the event in question. The average of the three scores can be used as an indicator of how easily classifiable a given video is with respect to the event category, from the perspective of human beings. \begin{figure}[!htb] \centering \includegraphics[width=0.7\textwidth]{C10_Accuracy_HumanConfidence_v5.jpg} \caption{Accuracy of the four representations for audio from videos with varying annotator confidence.} \label{Fig:AccConfidence4Ms} \end{figure} Figure \ref{Fig:AccConfidence4Ms} shows the combined $Accuracy$ scores of different representations for audio tracks from videos in three confidence ranges (79 of the test audio tracks in the range [1,2), 169 in the range [2,3), and 530 with an average confidence of [3]). DCAR achieves the best performance in every range.
However, it is interesting that i-vector shows a notable improvement with each increment of increasing annotator confidence (+24.1\% between [1,2) and [2,3) and +22.9\% between [2,3) and [3]), while DCAR shows a dramatic improvement between low- and intermediate-confidence videos but performs similarly on intermediate- and high-confidence videos (+37.3\% for the first step, but -2.3\% for the second); GMM follows the same pattern (+45.6\% then -1.8\%). The mv-vector approach shows yet a different pattern, performing similarly poorly on all but the high-confidence videos (only +2.8\% improvement for the first step, but +29.8\% for the second). These differing patterns may indicate that the i-vector approach is more sensitive to particular audio cues associated with the characteristics of an event that humans find most important in categorizing it, while a lower threshold of cue distinctiveness overall is required for GMM and DCAR. On the other hand, the results in Sections \ref{sec:easyhard} and \ref{sec:tendetect} suggest that modeling audio files with only one mixture component, as in mv-vector, generally cannot sufficiently capture the full structure of the audio signal, but it may be that it can capture that structure at least somewhat more often when the signal is more distinctive (i.e., it has a particularly high distinctiveness threshold). In other words, in cases where humans do not consider the events occurring in videos to be clear or prototypical examples of the target category, and thus those videos are less likely to have plentiful audio cues distinctive to that event category, a more discriminative representation may be required to improve event detection performance.\footnote{The experiments here did not include any negative test tracks; the task was simply to identify which of the target events each video's contents are \textit{most} like. 
If negative examples were included, it might be less clear what constitutes the ``best'' performance in categorizing videos that are on the borders of their categories.} Fortunately, it appears that the proposed DCAR method can partially address this problem, as shown by its relatively high performance on lower-confidence videos. \section{Conclusions and Future Work} \label{sec:conc} In this article, we have presented a new audio representation, DCAR, and demonstrated its use in event detection. One distinguishing characteristic of the DCAR method is that it can capture the variability within each audio file. Another is that it achieves better discriminative ability by integrating label information and the graph of the components' nearest neighbors among the audio files, i.e., it can successfully characterize both global and local structure among audio files. Representing audio using the proposed DCAR notably improves performance on event detection, as compared with state-of-the-art representations (an average of more than 8\% relative gain for ten-event detection and more than 10\% gain for binary classification across all metrics). The proposed representation benefits from leveraging global and local structure within audio data; however, videos are of course \textit{multi}modal. Other data sources such as visual content, captions, and other metadata can provide valuable information for event detection; we therefore plan to extend the current model by incorporating such information. Within audio, we also hope to evaluate the use of DCAR for other related tasks, such as audio scene classification (for example, testing it with the DCASE acoustic scenes dataset \cite{SdGdBeLmPm15}). 
Related work in audio (e.g., Barchiesi et al.\ 2015 \cite{BdGdSdPm15}) has demonstrated that the temporal evolution of different events plays an important role in audio analysis; another possible direction for expanding DCAR is to take into consideration complex temporal information in modeling video events. Last but not least, we might explore extracting the information-rich segments from each audio track rather than modeling the whole track. \section*{Acknowledgments} This work was partially supported by the NSFC (61370129, 61375062), the PCSIRT (Grant IRT201206), and a collaborative Laboratory Directed Research \& Development grant led by Lawrence Livermore National Laboratory (U.S.\ Dept.\ of Energy contract DE-AC52-07NA27344). Any findings and conclusions are those of the authors, and do not necessarily reflect the views of the funders. \bibliographystyle{abbrv}
\section{Introduction} It has been known for forty years that quasi-cyclic codes are good \cite{CPW} in the asymptotic sense: the product of their rate by their relative distance does not vanish when the length goes to infinity. Thirteen years ago, it was shown that even the self-dual subclass was good \cite{LS2}. A decade ago, it was proved that binary dihedral codes of rate one half were good \cite{BM}, then that their self-dual doubly even subclass is also good \cite{W}. The last two papers used a non-constructive probabilistic argument, where the order of $2$ modulo the length is controlled but not determined. In the present article, we will consider so-called pure double circulant codes, that is, 2-quasi-cyclic codes with a systematic generator matrix consisting of two circulant matrices. These codes have been studied in a number of papers since the 1960's \cite{CPW,K,VR}. In particular, it is known that binary extended quadratic residue codes, which form one of the oldest and most studied families of self-dual codes, are double circulant in many lengths \cite{J,MM}. We will show that double circulant self-dual codes over an arbitrary finite field of order $q$ are either dihedral or consta-dihedral depending on the parity of $q.$ (A special case of the first statement is anticipated in \cite{MM}). While the notion of dihedral codes has been considered by several authors, the notion of consta-dihedral codes was introduced in \cite{VR} in terms of twisted group rings. We give an alternative definition in terms of group representations. We believe, but do not prove here, that the two definitions are related. Further, building on the Chinese Remainder Theorem (CRT) approach of \cite{LS}, we will give exact counting formulae for these codes. From there, we will give an alternative proof that dihedral codes are good, with codes of prime length as in \cite{CPW}.
Our proof depends on Artin's conjecture \cite{M}, proved under the Generalized Riemann Hypothesis (GRH) in \cite{H}. It is, however, conceptually clearer, and valid for more general alphabets than that of \cite{BM}. We also give a new family of good long self-dual quasi-cyclic codes. They differ from those of \cite{LS2} in the index, the power of the shift under which they are invariant. The material is organized as follows. Section 2 collects the definitions and notation that we need thereafter. Section 3 studies the automorphism group of self-dual double circulant codes, first for even then for odd characteristic. A general notion of consta-dihedral codes is introduced in the language of representation theory. Section 4 studies the asymptotics of double circulant self-dual codes by combining enumerative formulae with the expurgated random coding argument made familiar by the Gilbert-Varshamov bound. \section{Definitions and notation} Let $GF(q)$ denote a finite field of characteristic $p.$ In the following, we will consider codes over $GF(q)$ of length $2n$ with $n$ odd and coprime to $q.$ Their generator matrix $G$ will be of the form $G=(I,A)$ where $I$ is the identity matrix of order $n$ and $A$ is a circulant matrix of the same order. We will call these codes {\em double circulant}. These codes are sometimes called pure double circulant to distinguish them from bordered double circulant codes, which are not quasi-cyclic \cite{T+}. By a {\em dihedral} group $D_n,$ we will denote the group of order $2n$ with two generators $r$ and $s$ of respective orders $n$ and $2,$ satisfying the relation $srs=r^{-1}.$ A code of length $2n$ is called {\em dihedral} if it is invariant under $D_n$ acting transitively on its coordinate places.
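To make the definition concrete, the following plain-Python sketch (our own illustration, not part of the cited literature) builds a double circulant generator matrix $G=(I,A)$ from the first row of $A$, and checks the self-duality criterion used later in the paper, namely $AA^t=-I$, equivalently $GG^t=0$; over $GF(2)$ we have $-I=I$:

```python
def circulant(first_row):
    """Circulant matrix: row i is the first row cyclically shifted i places."""
    n = len(first_row)
    return [[first_row[(j - i) % n] for j in range(n)] for i in range(n)]

def double_circulant_generator(first_row):
    """Generator matrix G = (I, A) of a pure double circulant code."""
    n = len(first_row)
    A = circulant(first_row)
    return [[1 if i == j else 0 for j in range(n)] + A[i] for i in range(n)]

def is_self_dual_gf2(G):
    """Over GF(2), the code spanned by G is self-orthogonal iff G * G^t = 0;
    since G = (I, A) has rank n and length 2n, this makes it self-dual."""
    return all(sum(gi[m] * gj[m] for m in range(len(gi))) % 2 == 0
               for gi in G for gj in G)

# Toy example over GF(2): A is the cyclic shift matrix, so A * A^t = I.
G = double_circulant_generator([0, 1, 0])
```

With first row $(0,1,0)$ the block $A$ is a permutation (shift) matrix, so $AA^t=I=-I$ over $GF(2)$ and the resulting $[6,3]$ code is self-dual; a first row such as $(1,1,0)$ fails the criterion.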
If $C(n)$ is a family of codes of parameters $[n,k_n,d_n],$ the rate $R$ and relative distance $\delta$ are defined as $$R=\limsup_{n \rightarrow \infty}\frac{k_n}{n},$$ and $$\delta=\liminf_{n \rightarrow \infty}\frac{d_n}{n}.$$ Both limits are finite as limits of bounded quantities. Such a family of codes is said to be {\it good} if $R\delta \neq 0.$ \section{Symmetry} \subsection{Even $q$} Let $q$ be an even prime power. Let $M_n(q)$ denote the set of all $n\times n$ matrices over $GF(q)$. \begin{lem}\label{iso} If $A$ is a circulant matrix in $M_n(q),$ then there exists an $(n\times n)$-permutation matrix $P$ such that $PAP=A^{t},$ where $A^{t}$ denotes the transpose of $A$. \end{lem} \noindent {\bf Proof. } Assume, for simplicity, that $n$ is odd. Denote by $\pi$ the permutation $(1,n)(2,n-1)\cdots (\frac{n-1}{2},\frac{n+3}{2}).$ Permuting the columns of $A$ with respect to $\pi$ yields a symmetric back-circulant matrix. Let $P$ be the permutation matrix attached to $\pi.$ The preceding explanation shows that $AP=(AP)^t=P^tA^t,$ or $PAP=A^t.$ \hfill $\Box$ \\ \begin{thm}\label{dihedral} For $n\geq 3$ odd, and $q$ even, every self-dual double circulant code over $GF(q)$ of length $2n$ is dihedral. \end{thm} \noindent {\bf Proof. } Let $C$ be a self-dual double circulant code of length $2n$ with generator matrix $G=(I,A)$. The parity-check matrix $H=(A^{t},I)$ is also a generator matrix of $C$ due to self-duality. Let $P$ be the $(n\times n)$ permutation matrix such that $PAP=A^{t}$. Since left multiplication by $P$ amounts to changing the positions of some rows, $PH=(PA^{t},P)$ is also a generator matrix for $C$. On the other hand, right multiplication of $PH$ by $P$ is equivalent to multiplying $PH$ by the block diagonal matrix $\left( \begin{smallmatrix} P & 0 \\ 0 & P \end{smallmatrix} \right)$, yielding $PHP=(A,I)$.
This right multiplication corresponds to applying the following permutation in $S_{2n}$ $$\pi = (2,n)(3, n-1)\ldots (\frac{n+1}{2},\frac{n+3}{2})(n+2,2n)(n+3,2n-1)\ldots (\frac{3n+1}{2},\frac{3n+3}{2}),$$ i.e., $PHP=PH\pi$. Moreover, we can obtain the generator matrix $(I,A)$ from $(A,I)$ by applying the permutation $$\sigma= (1,n+1)(2,n+2)(3,n+3)\ldots (n,2n).$$ Hence $C$ is invariant under the following product $$\pi\sigma=(1, n+1)(2,2n)(3,2n-1)\ldots(n-1,n+3)(n,n+2).$$ Furthermore, since $I$ and $A$ are circulant matrices, $C$ is also invariant under the permutation $$\tau=(1,2,\ldots, n)(n+1,n+2,\ldots, 2n).$$ Therefore, $C$ is invariant under the subgroup $\langle \tau, \pi\sigma \rangle$ of $S_{2n}$. Since $\tau$ is a product of $n$-cycles and $\pi\sigma$ is a product of transpositions, we have $\tau^n=1$ and $(\pi\sigma)^{2}=1$. Observe that $$(\pi\sigma)\tau=(1,n+2)(2,n+1)(3,2n)(4,2n-1)\ldots(n-1,n+4)(n,n+3).$$ Then we can easily obtain the following equality $$(\pi\sigma)\tau(\pi\sigma)=(1,n,n-1,n-2,\ldots,2)(n+1,2n,2n-1,\ldots,n+2)=\tau^{-1}.$$ Therefore, $\langle \tau, \pi\sigma \rangle$ is isomorphic to the dihedral group $D_n$. \hfill $\Box$ \\ \subsection{Odd $q$} Recall that a {\em monomial} matrix over $GF(q)$ of order $g$ has exactly one nonzero element per row and per column. The monomial matrices form a group $M(g,q)$ of order $g!(q-1)^g$ under multiplication. This group is abstractly isomorphic to the wreath product $\mbox{\msbm Z}_{q-1}\wr S_g.$ By a {\em monomial representation} of a group $G$ over $GF(q)$ we shall mean a group morphism from $G$ into $M(g,q).$ A code of length $2n$ will be said to be {\em consta-dihedral} if it is held invariant under right multiplication by a monomial representation of the dihedral group $D_n.$ An alternative, but related, definition can be found in \cite{SR}. We can now state the main result of this subsection.
\begin{thm}\label{consta} For $n\geq 3$ odd, and $q$ odd, every self-dual double circulant code $C$ of length $2n$ over $GF(q)$ is consta-dihedral. \end{thm} \noindent {\bf Proof. } Keep the matrix notation of Theorem \ref{dihedral}. Let the generator matrix of $C$ be $G=(I,A)$ with $A$ circulant and $AA^t=-I.$ Computing $A^tG=(A^t,-I)$ and conjugating by $P$ of Lemma \ref{iso} we get $PA^tGP=(A,-I).$ Define the antiswap involution $a$ by the rule $a(x,y)=(y,-x),$ where $x,y$ are vectors of length $n$ over $GF(q).$ Note that $a^2=-1.$ Clearly $ a \in M(2n,q).$ Thus $\pi a \in M(2n,q)$ and it preserves $C.$ A monomial representation of $D_n$ is then $\langle \tau , \pi a\rangle.$ Thus $C$ is consta-dihedral. \hfill $\Box$ \\ \section{Asymptotics} \subsection{Enumeration} In this section we give enumerative results for self-dual double circulant codes. It is important to notice that there are 2-quasi-cyclic codes that are not double circulant. An example of length $168$ is given in \cite{J}. Thus, the formula of \cite[Prop. 6.2]{LS} does not apply. We will need the following counting formula. An alternative proof for $q$ prime can be found in \cite[Th 1.3, Th 1.3']{Mac}, where the number of orthogonal circulant matrices over $GF(q)$ is computed. Recall that $-1$ is a square in $GF(q)$, a field of characteristic $p$, if one of the following conditions holds: \begin{enumerate} \item $q$ is even \item $p \equiv 1 \pmod{4}$ \item $p \equiv 3 \pmod{4}$ and $q$ is a square. \end{enumerate} Note that by \cite[Prop. 6.2]{LS} we know that 2-quasi-cyclic self-dual codes, hence a fortiori self-dual double circulant codes, over $GF(q)$ exist only if $-1$ is a square in $GF(q).$ {\lem \label{count}Let $n$ denote a positive odd integer.
Assume that $-1$ is a square in $GF(q).$ If $x^n-1$ factors as a product of two irreducible polynomials over $GF(q),$ the number of self-dual double circulant codes of length $2n$ is $2(q^\frac{n-1}{2} +1)$ if $q$ is odd and $(q^\frac{n-1}{2} +1)$ if $q$ is even. } \noindent {\bf Proof. } By the CRT approach of \cite{LS}, any 2-quasi-cyclic code of length $2n$ over $GF(q)$ decomposes as the `CRT product' of a self-dual code ${\bf C}_1$ of length $2$ over $GF(q)$ and of a Hermitian self-dual code ${\bf C}_n$ of length $2$ over $GF(q^{n-1}).$ To obtain a double circulant code we must ensure that the leftmost entry of their generator matrix $G$ is $G_{1,1}=1.$ If $q$ is even, the only possibility for ${\bf C}_1$ is the code spanned by $[1,1].$ If $q$ is odd, there are two codes, $[1,a]$ and $[1,-a]$, where $a^2=-1.$ For ${\bf C}_n$ the generator matrix is $[1,b]$ with $b$ such that $1+b^{1+r}=0,$ with $q^{n-1}=r^2.$ By finite field theory, this equation in $b$ admits $1+r$ roots in $GF(r^2).$ Note that if $q$ is even, $b$ ranges over the elements of order dividing $1+r=\frac{r^2-1}{r-1},$ and that if $q$ is odd, $b$ ranges over the elements of order dividing $2(1+r)$ but not $1+r.$ In both cases, we use the fact that the multiplicative group of $GF(r^2)$ is cyclic of order $r^2-1.$ \hfill $\Box$ \\ The following, more general, result is an analogue for double circulant codes of \cite[Prop. 6.2]{LS} for 2-quasi-cyclic codes. It is of interest in its own right, but not needed for the asymptotic bounds of this section. {\prop Let $n$ be a positive integer, and $q$ a prime power coprime with $n.$ Suppose that $-1$ is a square in $GF(q).$ Assume that the factorization of $x^n-1$ into irreducible polynomials over $GF(q)$ is of the form $$x^n-1=\alpha (x-1)\prod_{j=2}^{s}g_j(x) \prod_{j=1}^{t}h_j(x)h_j^*(x),$$ with $\alpha$ a scalar of $GF(q),$ the factorization thus comprising $s+2t$ irreducible factors, each $g_j$ a self-reciprocal polynomial of degree $2d_j,$ each $h_j$ of degree $e_j,$ and $*$ denoting reciprocation.
For convenience, let $g_1=x-1$ and, in case of $n$ even, let $g_2=x+1.$ The number of self-dual 2-quasi-cyclic codes over $GF(q)$ is then $$4 \prod_{j=3}^s(1+q^{d_j}) \prod_{j=1}^t(q^{e_j}-1)$$ if $q$ is odd and $n$ is even, $$2\prod_{j=2}^s(1+q^{d_j}) \prod_{j=1}^t(q^{e_j}-1) $$ if $q$ is odd and $n$ is odd, and $$\prod_{j=2}^s(1+q^{d_j}) \prod_{j=1}^t(q^{e_j}-1) $$ if $q$ is even and $n$ is odd. } \noindent {\bf Proof. } (sketch). The part of the proof dealing with the self-reciprocal polynomials $g_j$ is analogous to that of the previous lemma. In the case of reciprocal pairs $(h_j,h_j^*)$, note that the number of linear codes of length $2$ over some $GF(Q)$ admitting, along with their duals, a systematic form is $Q-1,$ all of dimension $1.$ Indeed, their generator matrix is of the form $[1,u]$ with $u$ nonzero. We conclude by letting $Q=q^{e_j}.$ \hfill $\Box$ \\ \subsection{Arithmetic} In number theory, Artin's conjecture on primitive roots states that a given integer $q$ which is neither a perfect square nor $-1$ is a primitive root modulo infinitely many primes $\ell$ \cite{M}. It was proved conditionally under GRH by Hooley \cite{H}. In this case, by the correspondence between cyclotomic cosets and irreducible factors of $x^\ell-1$ \cite{HP}, the factorization of $x^\ell-1$ into irreducible polynomials over $GF(q)$ contains exactly two factors, one of which is $x-1$ \cite{CPW}. \subsection{Distance bound} We will need a $q$-ary version of a classical lemma from \cite{CPW}. Let $a(x)$ denote a polynomial of $GF(q)[x]$ coprime with $x^n-1,$ and let $C_a$ be the double circulant code with generator matrix $(1,a).$ Assume the factorization of $x^n-1$ into irreducible polynomials is $x^n-1=(x-1)h(x).$ We call {\it constant vectors} the codewords of the cyclic code of length $n$ generated by $h.$ {\lem \label{CPW}If $u$ is not a constant vector then there are at most $q-1$ polynomials $a$ such that $u\in C_a.$} \noindent {\bf Proof.
} Write $u=(v,w)$ with $v,w$ of length $n.$ The condition $u \in C_a$ is equivalent to the equation $w=av \pmod{x^n-1}.$ If $v$ is invertible $\pmod{x^n-1},$ then $a$ is uniquely determined by this equation. If not, and if $u$ is not a constant vector, the only possibility is that both $w$ and $v$ are multiples of $(x-1).$ Letting $v=(x-1)v',$ and $w=(x-1)w',$ yields $w'=av'\pmod{h(x)},$ which gives $a$ $\pmod{h(x)},$ since $v'$ is invertible $\pmod{h(x)}.$ Now $a \pmod{(x-1)}$ can take $q-1$ nonzero values. The result follows by the CRT applied to $a,$ since $a,$ being of degree at most $n-1,$ is completely determined by its residues $\pmod{(x-1)}$ and $\pmod{h(x)}.$ \hfill $\Box$ \\ Recall the $q$-ary entropy function, defined for $0<x< \frac{q-1}{q}$ by $$ H_q(x)=x\log_q(q-1)-x\log_q(x)-(1-x)\log_q(1-x).$$ We are now ready for the main result of this section. {\thm If $q$ is not a square, then there are infinite families of self-dual double circulant codes of relative distance $$\delta \ge H_q^{-1}(\frac{1}{4}).$$} \noindent {\bf Proof. } Let $q$ be fixed and $n$ a prime going to infinity that satisfies the Artin conjecture for $q$. The double circulant codes containing a vector of weight $d\sim \delta n$ or less are, by standard entropic estimates of \cite{HP} and Lemma \ref{CPW}, of the order of $(q-1) q^{2n H_q(\delta)},$ up to subexponential terms. This number will be less than the total number of self-dual double circulant codes, which is by Lemma \ref{count} of the order of $q^{n/2},$ as soon as $\delta$ is of the order of the stated bound. \hfill $\Box$ \\ \section{Conclusion and Open problems} In this paper, we have studied the class of double circulant self-dual codes over finite fields, under the aspects of symmetry, enumeration, and asymptotic performance.
The self-dual condition shows that, in odd dimension, these codes are held invariant by the dihedral group of order equal to the length of the code in the even characteristic case, and by a monomial representation of that group in the odd characteristic case. It is possible that a similar phenomenon occurs for $n$ even and, more generally, for quasi-cyclic codes of index higher than two. Further, we have derived an exact enumeration formula for this family of codes. This formula can be interpreted as an enumeration of circulant orthogonal matrices over finite fields, thus generalizing a result of MacWilliams \cite{Mac} from the prime field case to general finite fields. Our approach to asymptotic bounds on the minimum distance relies on some deep number-theoretic conjectures (Artin or GRH). It would be a worthwhile task to remove this dependency by looking at lengths where the factorization of $x^n-1$ into irreducible polynomials contains more than two elements. {\bf Acknowledgement:} The authors are indebted to Hatoon Shoaib for helpful discussions. The second author was supported by T\"{U}B\.{I}TAK 2214-International Doctoral Research Fellowship Programme.
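As a computational footnote (not part of the paper's arguments), Lemma \ref{count} can be verified by brute force in the smallest case: for $q=5$ and $n=3$, $x^3-1$ factors into two irreducibles over $GF(5)$, $-1$ is a square, and the lemma predicts $2(5+1)=12$ self-dual double circulant codes of length $6$, i.e. twelve circulant matrices $A$ over $GF(5)$ with $AA^t=-I$. A Python sketch:

```python
# Brute-force check of the counting lemma for q = 5, n = 3:
# count circulant 3x3 matrices A over GF(5) with A A^t = -I.
from itertools import product

q, n = 5, 3

def circulant(first_row):
    """Circulant matrix whose i-th row is the i-th cyclic right shift."""
    return [[first_row[(j - i) % n] for j in range(n)] for i in range(n)]

def is_antiorthogonal(A):
    """Check A A^t == -I over GF(q)."""
    for i in range(n):
        for j in range(n):
            dot = sum(A[i][k] * A[j][k] for k in range(n)) % q
            target = (-1) % q if i == j else 0
            if dot != target:
                return False
    return True

count = sum(is_antiorthogonal(circulant(row))
            for row in product(range(q), repeat=n))
print(count)  # 12, matching 2*(q**((n-1)//2) + 1)
```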
\section{Introduction} Almost two decades after the discovery of the high-temperature superconductors (high-$T_c$), the theoretical description of this phenomenon still represents a challenge for physicists. The study of the Fermi surface (FS) and of the band dispersions is essential to better understand the superconducting and the normal properties of the high-$T_c$'s \cite{borisov}. Furthermore, the investigation of the asymmetries between the hole- and electron-doped regimes may contribute to clarify the mechanisms of superconductivity in these materials \cite{Guo}. Due to the strong correlations at the Cu sites, the one-band Hubbard model has been largely used to describe such systems. Nevertheless, the fact that the oxygen sites may be occupied by holes when the system is doped suggests that a model which also takes the oxygen into account can be more adequate to treat these systems in the doped regime \cite{calegari2005EB}. In the present work, the two-pole approximation \cite{Roth} has been used to study the FS associated with an extended $d-p$ Hubbard model. Superconductivity with $d_{x^2-y^2}$-wave symmetry is considered by using the factorization procedure proposed by Beenen and Edwards \cite{beenen}. The Hamiltonian model is an improved version of the model studied in reference \cite{calegari2005EB}. It is given by: \begin{eqnarray} H&=&\sum_{\langle i\rangle j,\sigma }\left\lbrace \left[ ( \varepsilon_{d}-\mu)d_{i\sigma}^{\dag}d_{j\sigma } +(\varepsilon_{p}-\mu)p_{i\sigma }^{\dag}p_{j\sigma}\right]\delta_{ij}\right. \nonumber\\ & &\left.
~~~+t_{ij}^{d}d_{i\sigma}^{\dag}d_{j\sigma }+t_{ij}^{p}p_{i\sigma }^{\dag}p_{j\sigma }+t_{ij}^{pd}( d_{i\sigma}^{\dag}p_{j\sigma }+p_{i\sigma }^{\dag}d_{j\sigma })\right\rbrace \nonumber\\ \nonumber\\ & &~~~+U\sum_{i}n_{i\uparrow}^{d}n_{i\downarrow}^{d} +\sum_{\langle\langle i\rangle\rangle j,\sigma }(t_{ij}^{ld}d_{i\sigma }^{\dag}d_{j\sigma }+t_{ij}^{lp}p_{i\sigma }^{\dag}p_{j\sigma }) \nonumber\\ \label{eq1} \end{eqnarray} where $\mu$ is the chemical potential. The symbols $\langle ...\rangle$ $\left(\langle\langle ...\rangle\rangle\right)$ denote the sum over the first (second) nearest neighbors of $i$. The hopping to the second-nearest neighbors is necessary to describe correctly the FS topology, mainly in the electron-doped regime. The quantity $U$ stands for the local Coulomb interaction between two $d$-electrons with opposite spins. The Green's functions necessary to treat the problem are obtained following Roth's standard procedure \cite{Roth}. In order to include superconductivity and $d-p$ hybridization, in the present work, the resulting Green's functions consist of a five-pole approximation \cite{calegari2005EB}: \begin{equation} G_{\bf{k}\sigma }(\omega)=\sum_{s=1}^5\frac{Z_{\bf{k}\sigma }^{(s)}} {\omega-E_{s\bf{k}\sigma}} \label{G11} \end{equation} with each pole corresponding to a quasi-particle band $E_{s\bf{k}\sigma}$. The quantity $Z_{\bf{k}\sigma }^{(s)}$ stands for the spectral weight \cite{calegari2005EB}. The quantities $E_{s\bf{k}\sigma}$ and $Z_{\bf{k}\sigma }^{(s)}$ are obtained as in reference \cite{calegari2005EB}. \begin{figure}[t!]
\begin{center} \leavevmode \includegraphics[angle=-90,width=8cm]{calegari-1-2-000621-f1.eps} \end{center} \begin{center} \leavevmode \includegraphics[angle=-90,width=3.5cm]{calegari-1-2-000621-f2.eps} \leavevmode \includegraphics[angle=-90,width=3.5cm]{calegari-1-2-000621-f3.eps} \end{center} \caption{(a), (b) and (c) show the evolution of the Fermi surface for different occupations $n_T$ in the hole-doping regime with $k_BT=0.0011$eV. The model parameters are $U=3.5$eV, $V_0^{pd}=0.2$eV, $\varepsilon_{p}-\varepsilon_{d}=3.6$eV, $t^d=-0.5$eV, $t^p=-0.7$eV, $t^{ld}=0.04$eV and $t^{lp}=0$. The symbols $\odot$ show the experimental data for La$_{2-x}$Sr$_x$CuO$_4$ taken from Ref. \cite{Ino2}. (d) Fermi surface for $n_T=0.76$ and different values of hybridization. (e) Fermi surface for $n_T=0.76$ and different values of the Coulomb interaction $U$.} \label{figure:FSh} \end{figure} Figures \ref{figure:FSh}(a), \ref{figure:FSh}(b) and \ref{figure:FSh}(c) show Fermi surfaces for different hole dopings. The symbols $\odot$ show the experimental ARPES results for La$_{2-x}$Sr$_x$CuO$_4$ (LSCO) \cite{Ino2}. Figure \ref{figure:FSh}(a) shows that, in the hole-overdoped regime $(n_T=0.70$, with $n_T=n_{\sigma}^d+n_{-\sigma}^d )$, the Fermi surface is electron-like, centered at $(0,0)$, whereas in the underdoped regime $(n_T=0.90$ in figure \ref{figure:FSh}(c)) the Fermi surface is hole-like, centered at $(\pi,\pi)$. Hence, there is a critical doping $x_c$ $(x_c\simeq 1-n_T^{(c)})$ at which the Fermi surface changes its nature from electron- to hole-like. It has been verified experimentally that some thermodynamic properties, such as the entropy and the magnetic susceptibility, are greatly enhanced, with a peak at $x_c$ \cite{avella}. Moreover, also at $x_c$, the Hall coefficient $R_H$ reverses its sign \cite{avella}. Figure \ref{figure:FSh}(d) shows the FS for $n_T=0.76$ and different values of hybridization.
Here, the hybridization has been considered ${\bf k}$-independent \cite{calegari2005EB}, $(V_0^{pd})^2\equiv \langle V_{\bf k}^{dp}V_{\bf k}^{pd} \rangle$, where $\langle ...\rangle$ is the average over the Brillouin zone and $V_{\bf{k}}^{dp}(V_{\bf{k}}^{pd})$ are the Fourier transforms of $t_{ij}^{dp}(t_{ij}^{pd})$. It can be observed that the topology of the FS is directly associated with the hybridization $V_0^{pd}$. As a consequence, the value of $x_c$ depends on the hybridization. It is important to note that this dependence is pronounced at the points $(0,\pi)$ and $(\pi,0)$, where the intensity of the superconducting order parameter is maximum in the particular case of $d_{x^2-y^2}$-wave symmetry. In the present work, it has been found that this effect is related to a decrease of the order parameter as the hybridization increases. Figure \ref{figure:FSh}(e) shows that the Coulomb interaction is also related to a change in the FS topology. \begin{figure}[t!] \begin{center} \includegraphics[angle=-90,width=6.2cm]{calegari-1-2-000621-f4.eps} \end{center} \caption{(a) and (b) show the Fermi surface for different electron dopings. The full lines show the result for $U=4$eV, $V_0^{pd}=0.2$eV and $k_BT=0.0011$eV. The remaining parameters are $\varepsilon_{p}-\varepsilon_{d}=2.0$eV, $t^d=-0.3$eV, $t^p=-0.7$eV, $t^{ld}=0.02$eV and $t^{lp}=0$. The symbols $\odot$ correspond to the experimental data taken from Refs. \cite{markiewicz2,King}. (c) Fermi surface for $U=4$eV, $n_T=1.22$ and two different hybridizations. (d) Fermi surface for $V_0^{pd}=0.2$eV, $n_T=1.22$ and two different values of the Coulomb interaction. } \label{figure:SFe} \end{figure} The FS associated with the electron-doped case is shown in figure \ref{figure:SFe}. As can be noted, the change in the topology of the FS upon doping does not occur.
It should be observed in figure \ref{figure:SFe}(c) that, as in the hole-doped situation, the hybridization acts in the sense of changing the area enclosed by the FS. However, in contrast to the hole-doped case, the area enclosed by the FS decreases as the hybridization increases. It has been found that the effect of the Coulomb interaction on the Fermi surface is similar to the hybridization effect. It is important to highlight that, in hole-doped systems, experimental results indicate that above $x_c$, where the Hall coefficient $R_H$ is negative, the superconductivity disappears \cite{ono}. The peculiar evolution of the FS in the hole-doped regime and its relation with $R_H$ through $x_c$ suggest that, in cuprate systems, the effect of the hybridization is very important, at least in the hole-doped case.
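As a toy illustration of the electron- to hole-like crossover discussed above (deliberately simplified, and not the five-pole $d-p$ treatment of this work): for a single nearest-neighbor tight-binding band $\varepsilon({\bf k})=-2t(\cos k_x+\cos k_y)$, the Fermi contour is closed around $(0,0)$ when $\mu$ lies below the saddle-point energy $\varepsilon(\pi,0)$ and closed around $(\pi,\pi)$ when it lies above. All parameters in the sketch below are hypothetical:

```python
import numpy as np

t = 1.0  # hopping (arbitrary units); toy band, not the d-p model of the text

def band(kx, ky):
    return -2.0 * t * (np.cos(kx) + np.cos(ky))

def chemical_potential(filling, nk=200):
    """Chemical potential for a given band filling (0..1) on an nk x nk grid."""
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    kx, ky = np.meshgrid(k, k)
    energies = np.sort(band(kx, ky).ravel())
    index = min(int(filling * energies.size), energies.size - 1)
    return energies[index]

def fs_character(filling):
    """Electron-like if mu is below the saddle-point energy band(pi, 0)."""
    mu = chemical_potential(filling)
    return "electron-like" if mu < band(np.pi, 0.0) else "hole-like"

print(fs_character(0.25))  # electron-like
print(fs_character(0.75))  # hole-like
```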
\section{Introduction} Topological defects interpolating between distinct quantum ground states can host localized fermions, harboring rich physics (see, e.g., Refs. \cite{Jackiw76,Su79,Kitaev01,Nayak08}). In solids, nontrivial topology can emerge from an inversion of electronic bands at the interface of two materials \cite{Volkov85} or from an inverted band ordering in ${\bm k}$ space \cite{Kane05,Bernevig06,Murakami06}, leading in each case to localized boundary states. In particular, two-dimensional topological insulators (2DTIs) with strong spin-orbit coupling (SOC) \cite{Kane05,Bernevig06,Murakami06,Koenig07,Knez11} possess a pair of edge states related by time reversal and existing in the bandgap of the material. Recently, monolayer transition metal dichalcogenides in the distorted octahedral 1T$^\prime$ phase have been predicted to be 2DTIs with an intrinsic inverted band structure \cite{Qian14}. Experimentally, the edge states have been reported in single-layer tungsten ditelluride (WTe$_2$) \cite{Fei17,Wu18} and tungsten diselenide (WSe$_2$) \cite{Ugeda18}. In the latter case, the topological edge states come as boundary states at the crystallographically aligned interface between a 1T$^\prime$ phase domain and a semiconducting 1H domain of WSe$_2$. Crystalline phase boundaries in WSe$_2$ are well ordered, accessible to high-resolution scanned probe microscopy and offer other opportunities for testing predictions regarding topological edge states (see recent review in \cite{Culcer20}). In the ongoing quest for topological electronics, transport properties of WSe$_2$ phase interfaces deserve particular attention. There is also a general theoretical reason for taking a closer look at the boundary states in WSe$_2$. In typical 2DTIs \cite{Kane05,Bernevig06}, the edge states resemble massless Dirac fermions in the sense that they exhibit a linear level crossing over a substantial energy range.
In this energy range, the essential properties of the edge states, such as their electric transport, can be successfully explained by a Dirac-like model. In contrast, in WSe$_2$, the crossing of the boundary states is highly nonlinear \cite{Ugeda18}, rendering the picture of the Dirac fermions invalid in this case. Consequently, transport calculations based on Dirac-like models cannot be directly applied to the boundary states in WSe$_2$. This paper examines the boundary states in 2D WSe$_2$, using an effective Hamiltonian theory for 1T$^\prime$ - 1H phase boundaries. The effective Hamiltonian operates in a reduced Hilbert space spanned by the conduction and valence bands, including the spin, as adopted in \cite{Shi19}. We find a strongly nonlinear boundary spectrum reminiscent of a SO-split parabolic band in a 1D conductor. Its nonlinearity and particle-hole asymmetry are consistent with the ab initio calculations of Ugeda {\em et al} \cite{Ugeda18}. The solution for the boundary states is implemented to calculate their electric conductance and thermopower in the ballistic regime. A subtlety is that the ballistic transport depends on how the boundary spectrum merges into the bulk bands. This happens at special points on the bulk conduction and valence bands at which the bound state ceases to exist. The implications of such spectrum termination points for electron transport have not been fully understood yet. This question is clarified here for 1T$^\prime$ - WSe$_2$ in the context of the recent experimental and ab initio study \cite{Ugeda18}. Notably, the temperature and chemical potential dependences of both conductance and thermopower are found to be sensitive to the termination points of the boundary spectrum. Furthermore, through the spectrum termination points the thermoelectric coefficients depend on a structural inversion asymmetry, providing extra information on the material properties. 
These results establish a link between the bulk band structure of 1T$^\prime$ - WSe$_2$ and the boundary electron transport, and complement the earlier transport studies of 2DTI systems (see, e.g., Refs. \cite{Takahashi10,Ghaemi10,Takahashi12,Xu14,Xu17,Gusev19}). The following sections explain the details of the calculations and provide an extended discussion of the results. \section{Effective Hamiltonian description of mixed-phase 2D ${\rm WSe}_2$} \subsection{1T$^\prime$ phase. Intrinsic band inversion} \label{Model} To set the scene, we define the Hamiltonian for a plane-wave state with wave vector ${\bm k}=[k_x, k_y, 0]$ in a homogeneous 1T$^\prime$ phase, \begin{equation} H({\bm k}) = H_0({\bm k}) + H_{\rm SO}({\bm k}). \label{H_k} \end{equation} Here, the first term describes a monolayer without spin-orbit coupling (SOC), while the second term accounts for SOC due to a structural $z \to -z$ reflection asymmetry. As long as only the properties of the conduction and valence bands are of concern, we can work in the reduced Hilbert space in which a state vector $|{\bm k}\rangle$ has four components, $|{\bm k}\rangle = [ \Psi_{\rm c \uparrow }({\bm k}), \Psi_{\rm v \uparrow}({\bm k}), \Psi_{\rm c \downarrow}({\bm k}), \Psi_{\rm v \downarrow}({\bm k})]^T$, where subscripts ${\rm c}$ and ${\rm v}$ refer to the conduction and valence bands, while $\uparrow$ and $\downarrow$ to the spin states. A Hamiltonian acting in this reduced space -- an effective Hamiltonian -- can be represented by a $4\times 4$ matrix \cite{Qian14,Shi19,Kormanyos15}. 
In particular (see \cite{Shi19}), \begin{eqnarray} H_0({\bm k}) &=& \left[ \begin{array}{cccc} \epsilon_{\bm k} + m_{\bm k} & v_x k_x + iv_y k_y & 0 & 0 \\ v_x k_x - iv_y k_y & \epsilon_{\bm k} - m_{\bm k} & 0 & 0 \\ 0 & 0 & \epsilon_{\bm k} + m_{\bm k} & - v_x k_x + iv_y k_y \\ 0 & 0 & - v_x k_x - iv_y k_y & \epsilon_{\bm k} - m_{\bm k} \end{array} \right] \nonumber\\ &=& \epsilon_{\bm k}\sigma_0\tau_0 + v_x k_x \sigma_z\tau_1 - v_y k_y \sigma_0 \tau_2 + m_{\bm k} \sigma_0 \tau_3, \label{H_0} \end{eqnarray} where $\epsilon_{\bm k}$ and $m_{\bm k}$ are quadratic functions of the wave vector given by \begin{equation} \epsilon_{\bm k}= \epsilon_0 + \epsilon_x k^2_x +\epsilon_y k^2_y, \qquad m_{\bm k}= m_0 + m_x k^2_x +m_y k^2_y, \label{m_k} \end{equation} and $\epsilon_0$, $m_0$, $\epsilon_{x,y}$, $m_{x,y}$, and $v_{x,y}$ are the band structure constants of the effective model. In particular, $\epsilon_0 \pm m_0$ are the energies of the conduction and valence bands at the $\Gamma$ point; $\epsilon_{x,y}$ and $m_{x,y}$ characterize the band curvature, while $v_{x,y}$ the atomic SOC. We use the Pauli matrices in the band ($\tau_1, \tau_2$, and $\tau_3$) and spin ($\sigma_z$) subspaces along with the corresponding unit matrices $\tau_0$ and $\sigma_0$. The lack of the $z \to - z$ symmetry allows for $H_{\rm SO}({\bm k})$ of different types. We consider a particular one \begin{equation} H_{\rm SO}({\bm k}) = \left[ \begin{array}{cccc} \lambda k_y & i\delta & 0 & 0 \\ - i\delta & \lambda k_y & 0 & 0 \\ 0 & 0 & -\lambda k_y & -i\delta \\ 0 & 0 & i\delta & -\lambda k_y \end{array} \right] = (\lambda k_y \tau_0 - \delta \tau_2)\sigma_z, \label{H_SO} \end{equation} where $\lambda$ and $\delta$ are the structural SOC constants. The term specified by equation (\ref{H_SO}) can originate from an applied out-of-plane electric field which controls the coupling constants $\lambda$ and $\delta$ \cite{Shi19}. 
For the purpose of this study, it is sufficient that the block-diagonal $H_{\rm SO}({\bm k})$ (\ref{H_SO}) lifts the spin degeneracy of $H_0({\bm k})$ (\ref{H_0}). The inclusion of the off-diagonal SOC would result in a more involved boundary problem later on. \begin{figure}[t] \begin{center} \includegraphics[width=85mm]{E_WSe2.eps} \includegraphics[width=70mm]{Geo_Interface.eps} \caption{\label{E_WSe2} (a) Conduction and valence bands of single-layer 1T$^\prime$ - WSe$_2$ from the effective Hamiltonian model. The plot shows the band dispersion in equation (\ref{E_bulk}) along the $k_y$ direction for $\epsilon_0 = 0.2$ eV, $\epsilon_x= -10$ eV${\rm \AA}^2$, $\epsilon_y= -9$ eV${\rm \AA}^2$, $v_x = 0.8$ eV${\rm \AA}$, $v_y = 0.45$ eV${\rm \AA}$, $m_0 = 0.45$ eV, $m_x = -13$ eV${\rm \AA}^2$, $m_y = -12$ eV${\rm \AA}^2$, and $\lambda =\delta=0$. The effective model is tailored to qualitatively reproduce the ab initio band structure calculations along the $Y-\Gamma-Y$ direction in the Brillouin zone of 1T$^\prime$ - WSe$_2$ \cite{Ugeda18}, e.g. the bandgap $E_g \approx 120$ meV. Along the $k_x$ direction, the band dispersion is similar, with a somewhat larger bandgap. (b) Schematic of a 1T$^\prime$ - 1H phase boundary in the effective model of WSe$_2$ (see also equation (\ref{m(x)}) and Table \ref{Table}). } \end{center} \end{figure} The band structure of the effective model is given by the eigenvalues of the total Hamiltonian (\ref{H_k}). It is instructive to look at the dispersion of the conduction and valence bands: \begin{eqnarray} E^\pm_\sigma({\bm k}) &=& \epsilon_0 + \epsilon_x k^2_x + \epsilon_y k^2_y + \lambda \sigma k_y \nonumber\\ & \pm & \sqrt{ v^2_xk^2_x + ( v_yk_y + \delta\sigma )^2 + (m_0 + m_x k^2_x + m_y k^2_y)^2 }, \label{E_bulk} \end{eqnarray} where $\sigma = \pm 1$ is the eigenvalue of $\sigma_z$.
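Equation (\ref{E_bulk}) is straightforward to evaluate numerically. As a cross-check (a sketch, with the parameter values taken from the caption of figure \ref{E_WSe2}), a scan along the $k_y$ axis gives an indirect gap of about $0.115$ eV:

```python
import numpy as np

# Band-structure constants from the caption of figure 1(a)
# (energies in eV, masses in eV*Å^2, velocities in eV*Å):
eps0, eps_x, eps_y = 0.2, -10.0, -9.0
v_x, v_y = 0.8, 0.45
m0, m_x, m_y = 0.45, -13.0, -12.0
lam, delta = 0.0, 0.0

def bands(ky, sigma=+1, kx=0.0):
    """Conduction (E+) and valence (E-) dispersion of eq. (E_bulk) at kx = 0."""
    eps = eps0 + eps_x * kx**2 + eps_y * ky**2 + lam * sigma * ky
    root = np.sqrt((v_x * kx)**2 + (v_y * ky + delta * sigma)**2
                   + (m0 + m_x * kx**2 + m_y * ky**2)**2)
    return eps + root, eps - root

ky = np.linspace(-0.5, 0.5, 20001)
e_plus, e_minus = bands(ky)
gap = e_plus.min() - e_minus.max()
print(round(gap, 3))  # 0.115 eV, close to the quoted E_g ≈ 120 meV
```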
With appropriately chosen parameters, equation (\ref{E_bulk}) qualitatively reproduces the conduction and valence bands of single-layer 1T$^\prime$ - WSe$_2$ (see figure \ref{E_WSe2}(a)). The positions of the bands, their profiles and the energy gap between them overall agree with the ab initio calculations along the $Y-\Gamma-Y$ direction in the Brillouin zone (cf. \cite{Ugeda18}). The effective model is tailored to have the bandgap $E_g \approx 120$ meV as calculated in \cite{Ugeda18}. Notably, the band curvature at $k_y=0$ ($\Gamma$ point) indicates an inverted band ordering (cf. \cite{Qian14}), which is a necessary prerequisite for the occurrence of the topological boundary modes. As in the Bernevig-Hughes-Zhang (BHZ) model \cite{Bernevig06}, an intrinsic band inversion is realized under conditions $m_0m_x < 0$ and $m_0m_y < 0$ for the coefficients in the gap term $m_{\bm k}$ in equation (\ref{m_k}), see also the caption for figure \ref{E_WSe2}(a). \subsection{ 1T$^\prime$ - 1H phase interface. Topological boundary states} \label{Boundary states} The experiment of Ugeda {\em et al} \cite{Ugeda18} dealt with a crystallographically well-defined interface between a 1T$^\prime$ phase domain and a semiconducting 1H domain in contiguous single layers of WSe$_2$. We assume that the two phases are separated by a straight boundary, choosing the $x$ and $y$ axes perpendicular and parallel to it, as shown in figure \ref{E_WSe2}(b). In this geometry, $k_y$ remains a good quantum number, while $k_x$ needs to be replaced by the operator $-i\partial_x$. Compared to the 1T$^\prime$ phase, the 1H one has a larger bandgap and a normal band ordering. This difference can be accounted for by an appropriately generalized gap term $m_{\bm k}$. 
To model the 1T$^\prime$ - 1H interface, we use the position-dependent gap term \begin{equation} m_{\bm k} \to m_0(x) - m_x(x)\partial^2_x+ m_y(x) k^2_y, \label{m(x)} \end{equation} where the coefficients $m_0(x)$ and $m_{x,y}(x)$ coincide with $m_0$ and $m_{x,y}$ in the 1T$^\prime$ domain ($x\geq 0$), while taking different values $\overline{m}_0$ and $\overline{m}_{x,y}$ on the 1H side ($x\leq 0$). The relative sign of $\overline{m}_0$ and $\overline{m}_{x,y}$ is positive, meaning a normal band ordering in the 1H domain (see also Table \ref{Table} summarizing the effective interface model). \begin{table} \caption{Parametrization of the gap term (\ref{m(x)}) in the model of the 1T$^\prime$ - 1H WSe$_2$ interface. The relative sign of $m_0$ and $m_{x,y}$ (resp. $\overline{m}_0$ and $\overline{m}_{x,y}$) corresponds to an inverted (resp. normal) band ordering in the 1T$^\prime$ (resp. 1H) phase.} \begin{indented} \item[] \begin{tabular}{@{}ccccc} \br & $m_0(x)$ & $m_x(x)$ & $m_y(x)$ & Band structure ordering \\ \mr 1T$^\prime$ domain ($x\geq 0$) & $m_0$ & $m_x$ & $m_y$ & inverted ($m_0m_{x,y} < 0$) \\ 1H domain ($x\leq 0$) & $\overline{m}_0$ & $\overline{m}_x$ & $\overline{m}_y $ & normal ($\overline{m}_0\overline{m}_{x,y} > 0$) \\ \br \end{tabular} \end{indented} \label{Table} \end{table} Using equations (\ref{H_k}) -- (\ref{H_SO}) and (\ref{m(x)}), we can write the interface Hamiltonian as \begin{equation} H_{\sigma k_y}(x) = H^{^A}_{\sigma k_y}(x) + H^{^S}_{\sigma k_y}(x), \label{H_x} \end{equation} where $H^{^S}_{\sigma k_y}(x)$ and $H^{^A}_{\sigma k_y}(x)$ are the particle-hole symmetric and asymmetric parts of the Hamiltonian. For given spin direction $\sigma = \pm 1$ (resp. 
$\uparrow$ and $\downarrow$), the two terms in equation (\ref{H_x}) are $2\times 2$ matrices given by \begin{equation} H^{^A}_{\sigma k_y}(x) = (\epsilon_0 + \epsilon_y k^2_y + \lambda \sigma k_y - \epsilon_x \partial^2_x) \tau_0 \label{H_A} \end{equation} and \begin{equation} H^{^S}_{\sigma k_y}(x) = -i\sigma v_x \partial_x \tau_1 - (v_y k_y + \delta\sigma) \tau_2 + [m_0(x) + m_y(x) k^2_y - m_x(x)\partial^2_x] \tau_3. \label{H_S} \end{equation} This yields the eigenvalue equation \begin{equation} H^{^A}_{\sigma k_y}(x)|x\rangle + H^{^S}_{\sigma k_y}(x)|x\rangle =E |x\rangle \label{Eq_E} \end{equation} for energy $E$ and a real-space two-component wave function $|x\rangle$. The latter is assumed to vanish away from the interface: $|x\rangle \to 0$ for $x \to \pm\infty$, while being continuous at $x=0$. The above boundary problem is solved in \ref{Boundary solution}. The result for the wave function is \begin{equation} |x\rangle = |0\rangle \cases{ -\frac{\varkappa_3 + \varkappa_2}{\varkappa_1 - \varkappa_2} {\rm e}^{-\varkappa_1 x} + \frac{\varkappa_3 + \varkappa_1}{\varkappa_1 - \varkappa_2}{\rm e}^{-\varkappa_2 x}, & for $x \geq 0$,\\ {\rm e}^{\varkappa_3 x}, & for $x \leq 0$.\\} \label{Sol_x} \end{equation} It describes a bound state localized on the length-scales $\varkappa^{-1}_{1,2}$ and $\varkappa^{-1}_3$ in the 1T$^\prime$ and 1H domains, respectively, where \begin{equation} \varkappa_{1,2} = \left| \frac{ v_x }{2m_x }\right| \pm \sqrt{ \left( \frac{ v_x }{2m_x } \right)^2 + \frac{ m_0 + m_y k^2_y}{ m_x } } \label{kappa_12_top} \end{equation} and \begin{equation} \varkappa_3 = {\rm sgn}(m_0 \overline{m}_0) \left|\frac{ v_x }{2\overline{m}_x }\right| + \sqrt{ \left( \frac{ v_x }{2\overline{m}_x } \right)^2 + \frac{ \overline{m}_0 + \overline{m}_y k^2_y}{ \overline{m}_x } }. 
\label{kappa_3} \end{equation} The inverted 1T$^\prime$ band structure confines the bound state to $k_y$ values in the segment \begin{equation} -k_0 \leq k_y \leq k_0, \qquad k_0 = \sqrt{-m_0/m_y}, \label{k_0} \end{equation} where the endpoints $\pm k_0$ are the zeros of the gap term $m_0 + m_y k^2_y$ in equation (\ref{kappa_12_top}). Further, it can be shown that the wave function at the boundary, $|0\rangle$, is an eigenstate of the Pauli matrix $\tau_2$ defined by \begin{equation} \tau_2 |0\rangle = -\sigma \, {\rm sgn}(v_x m_x) |0\rangle, \end{equation} i.e. the choice of the eigenstate depends on the spin projection as well as on the relative sign of the band structure parameters $v_x$ and $m_x$. For $\sigma =\pm 1$, there are two orthogonal boundary modes. Their energy dispersion is given by \begin{equation} E_{\sigma k_y} = \epsilon + \epsilon_y k^2_y + v \sigma k_y, \label{Sol_E} \end{equation} with \begin{equation} v = v_y \, {\rm sgn}(v_x m_x) + \lambda, \qquad \epsilon = \epsilon_0 + \delta \, {\rm sgn}(v_x m_x). \label{v_epsilon} \end{equation} The parameters $v$ and $\epsilon$ absorb the structural SOC constants and account for the signs of other involved parameters (see \ref{Boundary solution}). Overall, the above boundary solution is analogous to the edge states of a 2DTI in the BHZ model. There are a few new details, though. The solution obtained for the BHZ model (see, e.g., \cite{Zhou08}) is a ``hard-wall'' one, i.e. the electronic wave function vanishes upon approaching the boundary of a 2DTI. In contrast, equation (\ref{Sol_x}) accounts for the leakage of the wave function into the semiconducting (1H) region, which was observed in the scanning tunneling experiment of Ugeda {\em et al} \cite{Ugeda18}. Further, the SOC (\ref{H_SO}) makes the bulk bands asymmetric with respect to $k_y \to -k_y$. In this case, we find a specific dependence of the boundary spectrum on the SOC constants $\lambda$ and $\delta$.
The implications of this finding for the thermoelectric coefficients will be discussed in the next section. Finally, the dependence on the signs of the model parameters is generic, allowing for different types of band structures. Figure \ref{E_Interface} shows the energies of the boundary states (see equation (\ref{Sol_E})) along with the bulk bands of 1T$^\prime$ - WSe$_2$. Two boundary modes with opposite spins $\uparrow$ and $\downarrow$ connect the bulk conduction and valence bands, crossing at the $\Gamma$ point. Their dispersion resembles an SOC-split parabolic band in a one-dimensional conductor. However, the boundary modes terminate on the conduction and valence bands, so only one Kramers pair occurs in the bandgap. The termination points of the boundary spectrum are the endpoints of the allowed $k_y$ segment in equation (\ref{k_0}). In figure \ref{E_Interface}, $\pm k_0$ are approximately $\pm 0.2$ ${\rm \AA}^{-1}$. At these points the bound state (\ref{Sol_x}) gets delocalized, spreading into the bulk of the 1T$^\prime$ domain. In energy, the spectrum termination points lie at \begin{equation} E_{\rm c, v} = \epsilon + \epsilon_y k^2_0 \mp v k_0 = \epsilon - \frac{\epsilon_y m_0}{m_y} \mp v \sqrt{ - \frac{m_0}{m_y} } \label{E_cv} \end{equation} in the conduction (``$-$'') and valence (``$+$'') band, respectively. In figure \ref{E_Interface}, these energies are $E_{\rm c} \approx -50$ meV and $E_{\rm v} \approx -225$ meV. The spectrum termination points $\pm k_0$ do not coincide with the positions of the local extrema of the bulk bands, so the energy difference $E_{\rm c} - E_{\rm v} \approx 175$ meV is somewhat larger than the bandgap $E_g \approx 120$ meV, although the scale is the same. \begin{figure}[t] \begin{center} \includegraphics[width=110mm]{E_Interface.eps} \end{center} \caption{ Energy dispersion of boundary modes with spin projections $\uparrow$ and $\downarrow$, see equation (\ref{Sol_E}), for the same parameters as in figure \ref{E_WSe2}(a).
The boundary modes disperse between the bulk conduction and valence bands of 1T$^\prime$ - WSe$_2$, forming a Kramers pair in the bandgap. $E_{\rm c}$ and $E_{\rm v}$ are the energies at the termination points ($\approx \pm 0.2$ ${\rm \AA}^{-1}$) of the boundary spectrum. The dashed line indicates the Fermi level adjusted in the bandgap of 1T$^\prime$ - WSe$_2$. } \label{E_Interface} \end{figure} Besides the energy spectrum, the above results provide an estimate for the distance over which the boundary states decay from the interface. In the 1T$^\prime$ domain, the decay length is of order $|2m_x/v_x|$ (see equation (\ref{kappa_12_top})), which for the chosen parameters is about $3$ nm. For comparison, the estimate of Ugeda {\em et al} \cite{Ugeda18} is $2$ nm. It is also worth mentioning that a matrix element between the Kramers partners $|k_y, \uparrow\rangle$ and $|-k_y, \downarrow\rangle$ satisfies the relation \begin{equation} \langle -k_y, \downarrow | V |k_y, \uparrow \rangle = - \langle -k_y, \downarrow | V^\dagger |k_y, \uparrow\rangle \label{Kramers} \end{equation} valid for a local operator $V$ commuting with the time-reversal operator $\mathbb{T} = -i\sigma_y K$ ($\dagger$ and $K$ denote Hermitian and complex conjugation). As a consequence, a Hermitian potential preserving the time-reversal symmetry causes no backscattering of the boundary modes in the bandgap because the matrix element $\langle -k_y, \downarrow | V |k_y, \uparrow\rangle$ vanishes identically in that case. The mean free path of the boundary states can be limited by elastic spin-flip scattering with a potential $V \not = \mathbb{T} V \mathbb{T}^{-1}$. In this case, spin-flip scattering is formally analogous to the intervalley scattering of edge states in spinless graphene \cite{GT12} and can be treated by the same methods.
For example, in the self-consistent Born approximation, the intervalley scattering determines the transport mean free path of the edge states, while intravalley scattering only contributes to the quasiparticle lifetime \cite{GT12}. A similar situation can be expected for a phase boundary in WSe$_2$ in the presence of elastic spin-flip scatterers. \section{Electric conductance and thermopower of the boundary states} When the Fermi level is adjusted in the bandgap of the 1T$^\prime$ domain (see also figure \ref{E_Interface}), the phase boundary acts as a quasi-1D conductor with a Kramers pair of propagating modes. We first discuss the equilibrium case. For one mode, say the $\uparrow$ one, the electric current can be calculated as the equilibrium expectation value in $k$ space: \begin{equation} j_\uparrow = e \int_{-k_0}^{k_0} \nu_{\uparrow}(k_y) f[E_\uparrow(k_y)] \frac{dk_y}{2\pi}= e \int_{E_{\rm v}}^{E_{\rm c}} N_{\uparrow}(E) \nu_{\uparrow}(E) f(E) dE, \label{j_up} \end{equation} where we trace over all $k_y$ values of the boundary mode (see equation (\ref{k_0})), $\nu_{\uparrow}(k_y) = \hbar^{-1} \partial E_{\uparrow}(k_y) / \partial k_y$ is the mode velocity, and $f(E) = \bigl[ {\rm e}^{(E-\mu)/(k_{_B}T)} + 1 \bigr]^{-1}$ is the Fermi occupation number ($\mu$ and $k_{_B}$ are the chemical potential and Boltzmann constant) \footnote{ A recent lattice-model study of equilibrium boundary currents has been reported in Wei Chen, Phys. Rev. B {\bf 101}, 195120 (2020). }. The $k_y$ integration is replaced by the energy integral with the 1D density of states (DOS) $N_{\uparrow}(E) = h^{-1} |\nu_{\uparrow}(E)|^{-1}$ and velocity \begin{equation} \nu_\uparrow(E) = \frac{ {\rm sgn}(\epsilon_y) }{\hbar} \sqrt{ v^2 - 4\epsilon_y (\epsilon - E) }, \label{nu_up} \end{equation} obtained from equation (\ref{Sol_E}).
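As a quick consistency check (our own illustration, not part of the original derivation), the velocity formula (\ref{nu_up}) can be verified symbolically against the dispersion (\ref{Sol_E}): squaring $\hbar \nu_\uparrow = \partial E_\uparrow / \partial k_y$ for the $\sigma = +1$ branch must reproduce the radicand $v^2 - 4\epsilon_y(\epsilon - E)$.

```python
import sympy as sp

# symbols for the boundary dispersion, equation (Sol_E), sigma = +1 branch
eps, ey, v, ky = sp.symbols('epsilon epsilon_y v k_y', real=True)
E = eps + ey * ky**2 + v * ky

# (hbar * velocity)^2 from the dispersion must equal the radicand of (nu_up)
dEdk_sq = sp.diff(E, ky)**2
assert sp.simplify(dEdk_sq - (v**2 - 4 * ey * (eps - E))) == 0
print("velocity formula consistent with the dispersion")
```

The same identity holds for the $\sigma = -1$ branch with $v \to -v$, since only $v^2$ enters the radicand.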
Only the energies between the spectrum termination points $E_{\rm v}$ and $E_{\rm c}$ (\ref{E_cv}) contribute to the current because this energy window corresponds to a chiral (one-way moving) state \footnote{ In the energy interval from $E_{\rm c}$ to the top of the boundary band (see figure \ref{E_Interface}), the boundary spectrum is symmetric with respect to the position of the maximum. Therefore, this energy interval does not contribute to the current in equation (\ref{j_up}). }. The sign of its velocity, ${\rm sgn}(\nu_{\uparrow}) = {\rm sgn}(\epsilon_y)$, determines the direction of the current: \begin{equation} j_\uparrow = \frac{e}{h} \int_{E_{\rm v}}^{E_{\rm c}} {\rm sgn}(\nu_{\uparrow}) f(E) dE = {\rm sgn}(\epsilon_y) \frac{e}{h} \int_{E_{\rm v}}^{E_{\rm c}} f(E) dE. \label{j_up1} \end{equation} At zero temperature, the current $j_\uparrow = {\rm sgn}(\epsilon_y) (e/h) (\mu - E_{\rm v})$ is carried by all occupied states from $E_{\rm v}$ to $\mu$. Likewise, the current $j_\downarrow$ depends on the sign of the velocity of the $\downarrow$ mode, ${\rm sgn}(\nu_{\downarrow}) = -{\rm sgn}(\epsilon_y)$, which is opposite to that in equation (\ref{j_up1}). At equilibrium, the two modes are equally occupied, rendering the net electric current $j = j_\uparrow + j_\downarrow$ null. We now turn to the non-equilibrium transport. It can be realized by attaching a boundary channel to two electronic reservoirs, each being in equilibrium with its own chemical potential and temperature. We assume a ballistic boundary channel, which is justified if it is shorter than both elastic and inelastic mean free paths. Now, the counter-propagating $\uparrow$ and $\downarrow$ states come from different reservoirs with unequal occupation numbers. 
Say, the $\uparrow$ occupation number is still $f(E)$, while that of the $\downarrow$ mode is $f^\prime(E) = \bigl[ {\rm e}^{(E-\mu^\prime)/(k_{_B}T^\prime)} + 1 \bigr]^{-1}$, with chemical potential $\mu^\prime \not = \mu$ and temperature $T^\prime \not = T$. The net electric current $j = j_\uparrow + j_\downarrow$ can be written as \begin{eqnarray} j &=& {\rm sgn}(\epsilon_y) \frac{e}{h} \int_{E_{\rm v}}^{E_{\rm c}} [f(E) - f^\prime(E)] dE \nonumber\\ &\approx& {\rm sgn}(\epsilon_y) \biggl[ G \,\, \frac{\mu -\mu^\prime}{e} + G S \,\, (T-T^\prime) \biggr], \label{j} \end{eqnarray} where we linearized $j$ with respect to the differences $\mu -\mu^\prime$ and $T-T^\prime$ (both assumed small enough), introducing the electric conductance, $G$, and the Seebeck coefficient (thermopower), $S$ \cite{Mahan00}: \begin{equation} G = \frac{e^2}{h} \int_{E_{\rm v}}^{E_{\rm c}} \biggl(- \frac{\partial f}{\partial E} \biggr) dE = \frac{e^2}{h} [f(E_{\rm v}) - f(E_{\rm c})], \label{G} \end{equation} \begin{eqnarray} S &=& \frac{ 1/e }{f(E_{\rm v}) - f(E_{\rm c})} \int_{E_{\rm v}}^{E_{\rm c}} \frac{E-\mu}{T} \biggl( -\frac{\partial f}{\partial E} \biggr) dE \nonumber\\ &=& \frac{k_{_B}/e}{f(E_{\rm v}) - f(E_{\rm c})} \int_{ \frac{E_{\rm v} - \mu}{2k_{_B}T} }^{ \frac{E_{\rm c} - \mu}{2k_{_B}T} } \frac{\eta d\eta}{\cosh^2\eta}. \label{S} \end{eqnarray} Here, the energy bounds $E_{\rm c}$ and $E_{\rm v}$ (\ref{E_cv}) contain the details of the band structure of 1T$^\prime$ - WSe$_2$. In particular, the transport coefficients reflect the particle-hole asymmetry as well as the atomic and structural SOC. The overall sign of the current in equation (\ref{j}) depends on the curvature of the particle-hole asymmetric dispersion along the $k_y$ direction (see equation (\ref{H_A})). This sign determines which of the two reservoirs acts as the electron source and which as the sink.
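For illustration, equations (\ref{G}) and (\ref{S}) are straightforward to evaluate numerically. The following sketch (our own, not part of the original analysis) uses the termination energies $E_{\rm c} \approx -50$ meV and $E_{\rm v} \approx -225$ meV estimated above; the quadrature grid size is an arbitrary choice:

```python
import math

kB = 8.617e-5             # Boltzmann constant in eV/K
Ec, Ev = -0.050, -0.225   # spectrum termination energies for 1T'-WSe2 (eV)

def fermi(E, mu, T):
    return 1.0 / (math.exp((E - mu) / (kB * T)) + 1.0)

def conductance(mu, T):
    """G in units of e^2/h, closed form of equation (G)."""
    return fermi(Ev, mu, T) - fermi(Ec, mu, T)

def thermopower(mu, T, n=2000):
    """S in units of k_B/e, equation (S), by midpoint quadrature."""
    lo, hi = (Ev - mu) / (2 * kB * T), (Ec - mu) / (2 * kB * T)
    d = (hi - lo) / n
    eta = [lo + (i + 0.5) * d for i in range(n)]
    integral = sum(e / math.cosh(e) ** 2 for e in eta) * d
    return integral / conductance(mu, T)

print(conductance(-0.1, 100.0))            # close to 1: the conductance plateau
print(thermopower((Ec + Ev) / 2, 100.0))   # ~0: sign reversal at the midpoint
```

At $\mu = -0.1$ eV and $T = 100$ K the conductance stays close to $e^2/h$, while the thermopower vanishes when $\mu$ is tuned to the midpoint of the energy bounds, in line with the behaviour discussed for figure \ref{G_S}.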
The calculation of the transport coefficients is restricted to the subgap states, implying $k_{_B}T < \frac{1}{2} E_g$, which holds well up to room temperature for 1T$^\prime$ - WSe$_2$ with $E_g \approx 120$ meV. \begin{figure}[t!] \begin{center} \includegraphics[width=75.25mm]{G_S_T.eps} \includegraphics[width=75.25mm]{G_S_mu.eps} \end{center} \caption{ Electric conductance $G$ (in units of $e^2/h$) and thermopower $S$ (in units of $k_{_B}/|e|$) along a 1T$^\prime$ -1H phase boundary in WSe$_2$ (see equations (\ref{G}) and (\ref{S})): (a) temperature dependence of $G$ and $S$ with the Fermi level in the bandgap, $\mu= -0.1$ eV, and (b) dependence of thermopower on chemical potential inside the bandgap for $T=100$ K. Solid and dashed curves are for vanishing and finite structural SOC, with $\lambda=\delta=0$ and $\lambda = 0.02$ eV${\rm \AA}$, $\delta = 0.002$ eV, respectively. Other band structure parameters are the same as in figure \ref{E_WSe2}(a). } \label{G_S} \end{figure} Figure \ref{G_S}(a) shows the temperature dependence of equations (\ref{G}) and (\ref{S}). The large bandgap of the 1T$^\prime$ - WSe$_2$ manifests itself as the conductance plateau at $e^2/h$ up to $\approx 100$ K. The deviation from $e^2/h$ remains less than 20$\%$ up to room temperature. The thermopower is exponentially suppressed, but grows faster than the conductance deviation from $e^2/h$. These observations are corroborated by the asymptotic formulae \begin{eqnarray} G &\approx& \frac{e^2}{h} \biggl( 1 - {\rm e}^{ - \frac{\mu - E_{\rm v}}{k_{_B}T} } - {\rm e}^{ - \frac{E_{\rm c} - \mu}{k_{_B}T} } \biggr), \label{G_low}\\ S &\approx& \frac{k_{_B}}{e} \biggl( \frac{\mu - E_{\rm v}}{k_{_B}T} {\rm e}^{ - \frac{\mu - E_{\rm v}}{k_{_B}T} } - \frac{E_{\rm c} - \mu}{k_{_B}T} {\rm e}^{ - \frac{E_{\rm c} - \mu}{k_{_B}T} } \biggr), \label{S_low} \end{eqnarray} for $k_{_B}T < |E_{\rm c, v} - \mu|$.
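Since equation (\ref{G}) is available in closed form, the quality of the low-temperature asymptotics (\ref{G_low}) can be checked directly; a minimal sketch (with the same termination-energy estimates as above):

```python
import math

kB = 8.617e-5             # Boltzmann constant in eV/K
Ec, Ev = -0.050, -0.225   # termination energies (eV), as before
mu, T = -0.1, 50.0        # Fermi level in the gap, low temperature

f = lambda E: 1.0 / (math.exp((E - mu) / (kB * T)) + 1.0)
G_exact = f(Ev) - f(Ec)   # equation (G), in units of e^2/h
G_asym = (1.0 - math.exp(-(mu - Ev) / (kB * T))
              - math.exp(-(Ec - mu) / (kB * T)))   # equation (G_low)

print(G_exact, G_asym)    # the two expressions agree at low temperature
```

The asymptotic thermopower (\ref{S_low}) behaves analogously, with corrections that are exponentially small in $|E_{\rm c,v}-\mu|/k_{_B}T$.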
As a function of the chemical potential, $G(\mu)$ has a plateau-like maximum close to $e^2/h$ \footnote{We note that $G$ does not include the contribution of the bulk states. The total conductance of the boundary and bulk states is expected to have a plateau-like minimum at $e^2/h$ \cite{Bernevig06}. } (see figure \ref{G_S}(b)). Away from the center of the plateau, $G(\mu)$ drops exponentially. This behaviour is band-structure-dependent, and the knowledge of the spectrum termination points $E_{\rm c}$ and $E_{\rm v}$ is the minimal information needed to understand it. A specific feature of the thermopower $S(\mu)$ is a sign reversal inside the bandgap of the material. The zero of $S(\mu)$ is given by the average of the energy bounds $E_{\rm c}$ and $E_{\rm v}$ \begin{equation} \mu_0 = \frac{E_{\rm c} + E_{\rm v}}{2} = \epsilon - \frac{\epsilon_y m_0}{m_y}, \label{mu_0} \end{equation} which is estimated to be about $-0.14$ eV for 1T$^\prime$ - WSe$_2$. The point of the sign reversal $\mu_0$ lies at the center of the conductance plateau and reflects the particle-hole asymmetry of the band structure. Away from the plateau center, the function $S(\mu)$ shows an exponential increase, depending on the spectrum termination points $E_{\rm c}$ and $E_{\rm v}$. Also noteworthy is the effect of the structural inversion asymmetry on the thermoelectric coefficients (compare solid and dashed curves in figure \ref{G_S}). It is caused by the shifts of the energy bounds \begin{equation} \Delta E_{\rm c, v} = \delta \, {\rm sgn}(v_x m_x) \mp \lambda k_0, \label{dE_cv} \end{equation} due to the structural SOC (see equations (\ref{v_epsilon}) and (\ref{E_cv})). The effect is clearly visible already for small values of the SOC constants $\lambda$ and $\delta$ such as in typical 2D semiconductor heterostructures. This can be explained by the exponential sensitivity of $G$ and $S$ to the changes in the energy bounds $E_{\rm c}$ and $E_{\rm v}$.
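The shifts (\ref{dE_cv}) follow from equations (\ref{v_epsilon}) and (\ref{E_cv}) by elementary algebra, which can be confirmed symbolically (a sketch of our own, with ${\rm sgn}(v_x m_x)$ treated as a free parameter $s = \pm 1$):

```python
import sympy as sp

# band-structure symbols; s stands for sgn(v_x m_x)
eps0, epsy, vy, lam, delta, s = sp.symbols(
    'epsilon_0 epsilon_y v_y lambda delta s', real=True)
k0 = sp.symbols('k_0', positive=True)

def E_cv(lam_, delta_, sign):
    """Equation (E_cv) with v, epsilon from (v_epsilon); sign=-1: E_c, +1: E_v."""
    v = vy * s + lam_
    eps = eps0 + delta_ * s
    return eps + epsy * k0**2 + sign * v * k0

for sign in (-1, +1):
    shift = E_cv(lam, delta, sign) - E_cv(0, 0, sign)
    # equation (dE_cv): delta*sgn(v_x m_x) -/+ lambda*k_0
    assert sp.simplify(shift - (delta * s + sign * lam * k0)) == 0
print("energy-bound shifts match equation (dE_cv)")
```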
It is worth recalling that we consider the SOC without mixing the spin states. The Rashba-like SOC \cite{Shi19} requires a separate treatment. It can be included in equation (\ref{H_A}), while the particle-hole symmetric part of the Hamiltonian (\ref{H_S}) remains unchanged. We can therefore still use the approach in \ref{Boundary solution} and expect similar results to those in figure \ref{G_S}. \section{Discussion and Conclusions} In fact, the temperature dependence of the boundary conductance has been measured for a related material, 1T$^\prime$ - WTe$_2$ \cite{Fei17,Wu18}. A conductance plateau followed by a decrease in $G(T)$ has been seen in both experiments \cite{Fei17,Wu18}; e.g., in \cite{Wu18} the conductance plateau persisted up to 100 K. The behaviour of $G(T)$ in figure \ref{G_S}(a) is quite similar, indicating that the proposed model captures the essential features of the boundary transport. In particular, the spectrum termination points of the boundary states have been found to be important for modelling the temperature and chemical potential dependences of the ballistic conductance and thermopower. Specifically, this work has found that the spectrum termination points determine the boundaries of the conductance plateau and the position of the zero of the electric thermopower. We have expressed these features in terms of the bulk band parameters, so the behaviour of the boundary transport can be predicted solely on the basis of the bulk band structure. These results distinguish the present study from the related previous work (see, e.g., Refs. \cite{Takahashi10,Ghaemi10,Takahashi12,Xu14,Xu17,Gusev19}). Because of their large bandgap, the 1T$^\prime$ materials should be particularly suitable for measurements of the boundary thermopower, at least conceptually.
In the model studied above, the thermopower is small for temperatures below the bandgap, but it can be detected by the sign reversal as an applied gate voltage shifts the Fermi level between the conduction and valence bands. These findings contrast with the theoretical prediction of \cite{Xu14}, which invokes energy-dependent scattering times inside and outside the bandgap of a 2DTI (see also the review \cite{Xu17}). On the other hand, Gusev {\em et al} \cite{Gusev19} have observed an ambipolar thermopower in a HgTe-based 2DTI system, but attributed their findings mainly to the bulk carriers. Regardless of the bulk contribution, the transport in 2DTIs typically involves two edges on opposite sides of the sample. A crystalline phase boundary, on the contrary, acts as a single topological channel, offering access to still unexplored regimes of topological phases of matter. \ack The author thanks Wei Chen for useful discussions. This work was supported by the German Research Foundation (DFG) through TRR 80.
\section{Introduction} A number of physical processes are modeled by generalizations of well-known equations of mathematical physics, such as the KdV and mKdV equations or the Kadomtsev--Petviashvili equation, that contain time-dependent coefficients. That is why these equations have attracted considerable attention of researchers over the last decade. A number of papers devoted to the study of KdV or mKdV equations with time-dependent coefficients were reviewed in~\cite{Popovych&Vaneeva2010}. In the majority of these papers the results were obtained mainly for equations that are reducible to the standard KdV or mKdV equations by point transformations. Unfortunately, equivalence properties are usually neglected, and the search for exact solutions is reduced to cumbersome calculations with systems involving a number of unknown functions using computer algebra packages. It is shown in~\cite{Popovych&Vaneeva2010,Vaneeva2012} that the usage of equivalence transformations allows one to obtain the results in a much simpler way. In this paper this fact is reaffirmed via the presentation of the correct group classification of a class of variable coefficient KdV equations using the equivalence-based approach. Namely, we investigate Lie symmetry properties and exact solutions of variable coefficient KdV equations of the form \begin{equation}\label{vc_mKdV} u_t+uu_x+g(t)u_{xxx}+h(t)u=0, \end{equation} where $g$ and $h$ are arbitrary smooth functions of the variable $t$, $g\neq0.$ It is shown in Section~2 that using equivalence transformations the function $h$ can always be set to zero, and therefore the form of $h$ does not affect the results of the group classification. The group classification of class~(\ref{vc_mKdV}) with $h=0$ is carried out in~\cite{Popovych&Vaneeva2010}. So, using the known classification list and equivalence transformations, we present the group classification of the initial class~(\ref{vc_mKdV}) without direct calculations.
An interesting property of class~(\ref{vc_mKdV}) is that it is normalized, i.e., all admissible point transformations within this class are generated by transformations from the corresponding equivalence groups. Therefore, there are no additional equivalence transformations between cases of the classification list, which is constructed using the equivalence relations associated with the corresponding equivalence group. In other words, the same list represents the group classification result for the corresponding class up to general equivalence with respect to point transformations. Recently the authors of~\cite{john10b} obtained a partial group classification of class~(\ref{vc_mKdV}) (the notation $a$ and $b$ was used there instead of $h$ and $g$, respectively). The reason for the incompleteness of that classification was the neglect of equivalence transformations. This is why only some cases of Lie symmetry extensions were found, namely the cases with $h={\rm const}$, $h=1/t$ and $h=2/t$. In fact, the group classification problem for class~(\ref{vc_mKdV}) up to its equivalence group is already solved, since this class is reducible to class~(\ref{vc_mKdV}) with $h=0$, whose group classification is carried out in~\cite{Popovych&Vaneeva2010}. Using the known classification list and equivalence transformations, we present the group classification of class~(\ref{vc_mKdV}) in which neither the equations admitting extensions of Lie symmetry algebras nor these algebras themselves are simplified by equivalence transformations. The extended classification list can be useful for applications and is convenient for comparison with the results of~\cite{john10b}. Note that in~\cite{Gungor&Lahno&Zhdanov2004,Magadeev1993} group classifications for more general classes that include class~(\ref{vc_mKdV}) were carried out. Nevertheless, those results, obtained up to a very wide equivalence group, are inconvenient for deriving the group classification of class~(\ref{vc_mKdV}).
\section{Equivalence transformations} An important step in solving a group classification problem is the construction of the equivalence group of the class of differential equations under consideration. The usage of transformations from the related equivalence group often allows one to essentially simplify a group classification problem and to present the final results in a closed and concise form. Moreover, sometimes this appears to be a crucial point in the exhaustive solution of such problems~\cite{IPS2007a,Vaneeva2012,VJPS2007,VPS_2009}. There exist several kinds of equivalence groups. The \emph{usual equivalence group} of a class of differential equations consists of the nondegenerate point transformations in the space of independent and dependent variables and arbitrary elements of the class such that the transformation components for the variables do not depend on arbitrary elements and each equation from the class is mapped by these transformations to equations from the same class. If any point transformation between two fixed equations from the class belongs to its (usual) equivalence group, then this class is called \emph{normalized}. See theoretical background on normalized classes in~\cite{Popovych2006c,Popovych&Kunzinger&Eshraghi2010}. We find the equivalence group $G^\sim_{1}$ of class~(\ref{vc_mKdV}) using the results obtained in~\cite{Popovych&Vaneeva2010} for the more general class of variable coefficient KdV-like equations. Namely, in~\cite{Popovych&Vaneeva2010} a hierarchy of normalized subclasses of the general third-order evolution equations was constructed. The equivalence group for the normalized class of variable coefficient KdV equations \begin{equation}\label{EqvcmKdV} u_t+f(t)uu_x+g(t)u_{xxx}+h(t)u+(p(t)+q(t)x)u_x+k(t)x+l(t)=0, \end{equation} as well as a criterion of reducibility of equations from this class to the standard KdV equation, were found therein.
The equivalence group $G^\sim$ of class~(\ref{EqvcmKdV}) consists of the transformations \begin{equation}\label{EqvcKdVEquivGroup} \tilde t=\alpha(t),\quad \tilde x=\beta(t)x+\gamma(t),\quad \tilde u=\theta(t)u+\varphi(t)x+\psi(t),\quad \end{equation} where $\alpha$, $\beta$, $\gamma$, $\theta$, $\varphi$ and $\psi$ run through the set of smooth functions of~$t$, $\alpha_t\beta\theta\ne0$. The arbitrary elements of~(\ref{EqvcmKdV}) are transformed as follows \begin{eqnarray}\label{EqvcKdVEquivGroupArbitraryElementTrans1} &\displaystyle\tilde f=\frac{\beta}{\alpha_t\theta}f, \quad \tilde g=\frac{\beta^3}{\alpha_t}g, \quad \tilde h=\frac1{\alpha_t}\left(h-\frac\varphi\theta f-\frac{\theta_t}\theta\right), \\ &\displaystyle\tilde q=\frac1{\alpha_t}\left(q-\frac\varphi\theta f+\frac{\beta_t}\beta\right),\quad \tilde p=\frac1{\alpha_t}\left(\beta p-\gamma q+\frac{\gamma\varphi-\beta\psi}\theta f+\gamma_t-\gamma\frac{\beta_t}\beta\right), \\ &\displaystyle\label{EqvcKdVEquivGroupArbitraryElementTrans3}\tilde k=\frac1{\alpha_t\beta}\left(\theta k-\varphi\alpha_t\tilde h-\varphi_t\right), \quad \tilde l=\frac1{\alpha_t}\left(\theta l-{\gamma}{\alpha_t}\tilde k-\psi\alpha_t\tilde h-\varphi p-\psi_t\right). \end{eqnarray} We also adduce the criterion of reducibility of~(\ref{EqvcmKdV}) to the standard KdV equation. \begin{proposition}[\cite{Popovych&Vaneeva2010}] An equation of form~(\ref{EqvcmKdV}) is similar to the standard (constant coefficient) KdV equation if and only if its coefficients satisfy the condition \begin{equation}\label{EqvcKdVEquivToKdV} s_t=2gs^2-3qs+\frac fgk, \quad\mbox{where}\quad s:=\frac{2q-h}g+\frac{f_tg-fg_t}{fg^2}. \end{equation} \end{proposition} Class~(\ref{vc_mKdV}) is a subclass of class~(\ref{EqvcmKdV}) singled out by the conditions $f=1$ and $p=q=k=l=0.$ Substituting these values of the functions $f, p, q, k$ and $l$ to~(\ref{EqvcKdVEquivToKdV}) we obtain the following assertion. 
\begin{corollary} An equation from class~(\ref{vc_mKdV}) is reduced to the standard KdV equation by a point transformation if and only if there exists a constant $c_0$ and $\varepsilon\in\{0,1\}$ such that \begin{equation}\label{EqvcKdVEquivToKdV2} h=\frac{\varepsilon}2\frac{g}{\int\!g\, dt+c_0}-\frac{g_t}g. \end{equation} \end{corollary} As class~(\ref{EqvcmKdV}) is normalized~\cite{Popovych&Vaneeva2010}, its equivalence group $G^\sim$ generates the entire set of admissible (form-preserving) transformations for this class. Therefore, to describe the set of admissible transformations for class~(\ref{vc_mKdV}) we should set $\tilde f=f=1,$ $\tilde p=p=\tilde q=q=\tilde k=k=\tilde l=l=0$ in~(\ref{EqvcKdVEquivGroupArbitraryElementTrans1})--(\ref{EqvcKdVEquivGroupArbitraryElementTrans3}) and solve the resulting equations with respect to transformation parameters. It appears that class~(\ref{vc_mKdV}) admits a generalized extended equivalence group and is normalized only in the generalized sense. Summing up the above considerations, we formulate the following theorem. \begin{theorem} The generalized extended equivalence group~$\hat G^{\sim}_1$ of class~(\ref{vc_mKdV}) consists of the transformations \[ \displaystyle\tilde t=\alpha,\ \ \tilde x=\beta x+\gamma,\ \ \tilde u=\lambda(\beta u+\beta_t x+\gamma_t),\ \ \tilde h=\lambda\, h-2\lambda\frac{\beta_t}{\beta}-\lambda_t, \ \ \tilde g=\beta^3\lambda\, g. \] Here $\alpha$ is an arbitrary smooth function of $t$ with $\alpha_t\neq0,$ $\beta=(\delta_1\int e^{-\int h dt} dt+\delta_2)^{-1}$, $\gamma= \delta_3\int \beta^2e^{-\int h dt} dt+\delta_4$; $\delta_1,\dots,\delta_4$ are arbitrary constants, $(\delta_1,\delta_2)\neq(0,0)$ and $\lambda=1/\alpha_t$. \end{theorem} The usual equivalence group~$G^{\sim}_1$ of class~(\ref{vc_mKdV}) is the subgroup of the generalized extended equivalence group~$\hat G^{\sim}_1$ singled out by the condition $\delta_1=\delta_3=0$.
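The reducibility condition of the corollary above can be verified symbolically: substituting $h$ from equation (\ref{EqvcKdVEquivToKdV2}) (with $\varepsilon=1$) into $s=-h/g-g_t/g^2$, which is the function $s$ of the proposition for $f=1$, $p=q=k=l=0$, must satisfy the Riccati equation $s_t=2gs^2$, i.e. condition (\ref{EqvcKdVEquivToKdV}) in this special case. A short sympy sketch (our own illustration):

```python
import sympy as sp

t, c0 = sp.symbols('t c_0')
g = sp.Function('g')(t)
G = sp.Function('G')(t)          # stands for the antiderivative: G_t = g

# h from equation (EqvcKdVEquivToKdV2) with epsilon = 1
h = sp.Rational(1, 2) * g / (G + c0) - sp.diff(g, t) / g

# for f = 1, p = q = k = l = 0 the reducibility function reduces to
s = -h / g - sp.diff(g, t) / g**2
riccati = (sp.diff(s, t) - 2 * g * s**2).subs(sp.Derivative(G, t), g).doit()
assert sp.simplify(riccati) == 0
print("condition (EqvcKdVEquivToKdV) holds")
```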
The parameterization of transformations from~$\hat G^{\sim}_1$ by the arbitrary function $\alpha(t)$ allows us to simplify the group classification problem for class~(\ref{vc_mKdV}) via reducing the number of arbitrary elements. For example, we can gauge arbitrary elements via setting either $h=0$ or $g=1$. Thus, the gauge $h=0$ can be made by the equivalence transformation \begin{equation}\label{gauge_h=0} \hat t=\int e^{-\int h(t)\, dt}dt,\quad \hat x=x, \quad \hat u=e^{\int h(t)\, dt}u, \end{equation} that connects equation~(\ref{vc_mKdV}) with the equation $\hat u_{\hat t}+\hat u\hat u_{\hat x}+\hat g(\hat t){\hat u}_{\hat x\hat x\hat x}=0.$ The new arbitrary element $\hat g$ is expressed via $g$ and $h$ in the following way: \[ \hat g(\hat t)=e^{\int h(t)\, dt}g(t). \] This is why without loss of generality we can restrict the study to the class \begin{equation}\label{vc_mKdV_h=0} u_t+uu_{x}+g(t)u_{xxx}=0, \end{equation} since all results on symmetries and exact solutions for this class can be extended to class~(\ref{vc_mKdV}) with transformations of the form~(\ref{gauge_h=0}). The equivalence group for class~(\ref{vc_mKdV_h=0}) can be obtained from Theorem 1 by setting $\tilde h=h=0$. Note that class~(\ref{vc_mKdV_h=0}) is normalized in the usual sense. \begin{theorem}[\cite{Popovych&Vaneeva2010}] The equivalence group~$G^{\sim}_0$ of class~(\ref{vc_mKdV_h=0}) is formed by the transformations \begin{eqnarray*} &\displaystyle\tilde t=\frac{at+b}{ct+d},\quad \tilde x=\frac{e_2x+e_1t+e_0}{ct+d},\\ &\displaystyle\tilde u=\frac{e_2(ct+d)u-e_2cx-e_0c+e_1d}\varepsilon,\quad \tilde g=\frac{e_2{}^3}{ct+d}\frac g\varepsilon, \end{eqnarray*} where $a$, $b$, $c$, $d$, $e_0$, $e_1$ and $e_2$ are arbitrary constants with $\varepsilon=ad-bc\ne0$ and $e_2\ne0$, the tuple $(a,b,c,d,e_0,e_1,e_2)$ is defined up to nonzero multiplier and hence without loss of generality we can assume that $\varepsilon=\pm1$. 
\end{theorem} \section{Lie symmetries} The group classification of class~(\ref{vc_mKdV_h=0}) up to $G_0^\sim$-equivalence is carried out in~\cite{Popovych&Vaneeva2010} in the framework of classical approach~\cite{Olver1986,Ovsiannikov1982}. The result reads as follows. The kernel of the maximal Lie invariance algebras of equations from class~(\ref{vc_mKdV_h=0}) coincides with the two-dimensional algebra $\langle\partial_x,\, t\partial_x+\partial_u\rangle$. All possible $G_0^\sim$-inequiva\-lent cases of extension of the maximal Lie invariance algebras are exhausted by the cases 1--4 of Table~1. \begin{table}\centering \caption{The group classification of the class $u_t+uu_{x}+g\,u_{xxx}=0$, $g\neq0$} \label{Vaneeva:table1} \begin{tabular}{@{\,\,}c@{\,\,}@{\,\,}c@{\,\,}@{\,\,}l@{\,\,}} \hline\noalign{\smallskip} N&$g(t)$&\hfil Basis of $A^{\max}$ \\ \noalign{\smallskip}\svhline\noalign{\smallskip} 0&$\forall$&$\partial_x,\quad t\partial_x+\partial_u$\\[0.5ex] 1& $ t^n$&$\partial_x,\quad t\partial_x+\partial_u,\quad 3t\partial_t+(n+1)x\partial_x+(n-2)u\partial_u$\\[0.5ex] 2&$ e^{t}$& $\partial_x,\quad t\partial_x+\partial_u,\quad 3\partial_t+x\partial_x+u\partial_u$\\[0.5ex] 3&$ e^{\delta\arctan t}\sqrt{t^2+1}$& $\partial_x,\quad t\partial_x+\partial_u,\quad 3(t^2+1)\partial_t+(3t+\delta)x\partial_x+((-3t+\delta)u+3x)\partial_u$\\[0.5ex] 4&$1$& $\partial_x,\quad t\partial_x+\partial_u,\quad 3t\partial_t+x\partial_x-2u\partial_u,\quad \partial_t$\\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} Here $n, \delta$ are arbitrary constants, $n\geq1/2$, $n\neq1$, $\delta\geq 0\ {\rm mod}\ G^\sim_0.$ \end{table} For any equation from class~(\ref{vc_mKdV}) there exists an imaged equation in class~(\ref{vc_mKdV_h=0}) with respect to transformation~(\ref{gauge_h=0}). 
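The gauge transformation (\ref{gauge_h=0}) can be verified by a direct chain-rule computation; the following sympy sketch (an illustration, not part of the original text) checks that it maps equation (\ref{vc_mKdV}) to $\hat u_{\hat t}+\hat u\hat u_{\hat x}+\hat g\hat u_{\hat x\hat x\hat x}=0$ with $\hat g=e^{\int h\,dt}g$:

```python
import sympy as sp

t, x, Ts = sp.symbols('t x hat_t')
H = sp.Function('H')(t)          # H = int h dt, so h = H_t and d(hat_t)/dt = exp(-H)
g = sp.Function('g')(t)
U = sp.Function('U')(Ts, x)      # transformed field hat_u(hat_t, hat_x), hat_x = x

h = sp.diff(H, t)
u = sp.exp(-H) * U               # inverse of hat_u = exp(int h dt) * u

# chain rule for u_t, using d(hat_t)/dt = exp(-H); x-derivatives are unchanged
u_t = -h * sp.exp(-H) * U + sp.exp(-2 * H) * sp.diff(U, Ts)
lhs = (u_t + u * sp.exp(-H) * sp.diff(U, x)
       + g * sp.exp(-H) * sp.diff(U, x, 3) + h * u)   # equation (vc_mKdV)

ghat = sp.exp(H) * g             # transformed coefficient hat_g
target = sp.diff(U, Ts) + U * sp.diff(U, x) + ghat * sp.diff(U, x, 3)
assert sp.simplify(lhs * sp.exp(2 * H) - target) == 0
print("gauge h -> 0 verified")
```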
The equivalence group $G_0^\sim$ of class~(\ref{vc_mKdV_h=0}) is induced by the equivalence group~$\hat G^\sim_1$ of class~(\ref{vc_mKdV}) which, in turn, is induced by the equivalence group~$G^\sim$ of class~(\ref{EqvcmKdV}). This guarantees that Table~\ref{Vaneeva:table1} also presents the group classification list for class~(\ref{vc_mKdV}) up to $\hat G^\sim_1$-equivalence (resp. for the class~(\ref{EqvcmKdV}) up to $G^\sim$-equivalence). As all of the above classes are normalized, we can state that we obtain Lie symmetry classifications of these classes up to general point equivalence. This leads to the following assertion. \begin{corollary} An equation from class~(\ref{vc_mKdV}) (resp.~(\ref{EqvcmKdV})) admits a four-dimensional Lie invariance algebra if and only if it is reduced by a point transformation to the constant coefficient KdV equation, i.e., if and only if condition~(\ref{EqvcKdVEquivToKdV2}) (resp.~(\ref{EqvcKdVEquivToKdV})) holds. \end{corollary} To derive the group classification of class~(\ref{vc_mKdV}) which is not simplified by equivalence transformations, we first apply transformations from the group $G_0^\sim$ to the classification list presented in Table~\ref{Vaneeva:table1} and obtain the following extended list: \medskip 0. arbitrary $\hat g\colon$ $\langle\partial_{\hat x},\ \hat t\partial_{\hat x}+\partial_{\hat u}\rangle$; \smallskip 1. $\displaystyle \hat g=c_0(a\,\hat t+b)^n(c\,\hat t+d)^{1-n}$, $n\ne0,1$:\quad$\langle \partial_{\hat x},\ \hat t\partial_{\hat x}+ \partial_{\hat u},\ X_3\rangle$,\quad where \begin{eqnarray*}&X_3= 3(a\,\hat t+b)(c\,\hat t+d)\partial_{\hat t}+ \left(3ac\hat t+ad(n+1)+bc(2-n)\right)\hat x\partial_{\hat x}+\\ &\left[3ac\hat x-(3ac\hat t+ad(2-n)+bc(n+1))\hat u\right]\partial_{\hat u}; \end{eqnarray*} 2.
$\displaystyle \hat g=c_0(c\,\hat t+d)\exp\left(\frac{a\,\hat t+b}{c\,\hat t+d}\right)$:\quad $\langle \partial_{\hat x},\ \hat t\partial_{\hat x}+ \partial_{\hat u},\ X_3\rangle$,\quad where \[X_3=3(c\,\hat t+d)^2\partial_{\hat t}+\left(3c(c\hat t+d)+\varepsilon\right)\hat x\partial_{\hat x}+\\ \left[3c^2\hat x+ (\varepsilon-3c(c\hat t+d))\hat u\right]\partial_{\hat u}; \] 3. $\displaystyle \hat g=c_0e^{\delta\arctan\left(\frac{a\,\hat t+b}{c\,\hat t+d}\right)} \sqrt{(a\,\hat t+b)^2+(c\,\hat t+d)^2}$:\quad $\langle \partial_{\hat x},\ \hat t\partial_{\hat x}+ \partial_{\hat u},\ X_3\rangle$,\quad where \begin{eqnarray*}&X_3= 3\left((a\,\hat t+b)^2+(c\,\hat t+d)^2\right)\partial_{\hat t}+ \left(3a(a\hat t+b)+3c(c\hat t+d)+\varepsilon\delta\right)\hat x\partial_{\hat x}+\\ &\left(3(a^2+c^2)\hat x-(3a(a\hat t+b)+3c(c\hat t+d)-\varepsilon\delta)\hat u\right)\partial_{\hat u}; \end{eqnarray*} 4a. $\hat g=c_0$: \quad $\langle\partial_{\hat x},\ \hat t\partial_{\hat x}+ \partial_{\hat u},\ \partial_{\hat t},\,3\hat t\partial_{\hat t}+\hat x\partial_{\hat x}-2\hat u\partial_{\hat u}\rangle;$ \smallskip 4b. $\hat g=c\hat t+d$, $c\ne0$: \quad $\langle\partial_{\hat x},\ \hat t\partial_{\hat x}+ \partial_{\hat u},\ 3(c\hat t+d)\partial_{\hat t}+2c\hat x\partial_{\hat x}-c\hat u\partial_{\hat u},\ X_4\rangle$,\quad where \[ X_4= (c\hat t+d)^2\partial_{\hat t}+c(c\hat t+d)\hat x\partial_{\hat x}+ c(c\hat x-(c\hat t+d)\hat u)\partial_{\hat u}. \] Here $c_0$, $a$, $b$, $c$, $d$ and $\delta$ are arbitrary constants, $(a^2+b^2)(c^2+d^2)\ne0$, $\varepsilon=ad-bc,$ $c_0\neq0$. Then we find preimages of equations from the class $\hat u_{\hat t}+\hat u\hat u_{\hat x}+\hat g(\hat t){\hat u}_{\hat x\hat x\hat x}=0$ with arbitrary elements collected in the above list with respect to transformation~(\ref{gauge_h=0}). The last step is to transform basis operators of the corresponding Lie symmetry algebras. The results are presented in Table~2. 
\begin{table} \caption{The group classification of the class $u_t+uu_{x}+gu_{xxx}+hu=0$, $g\neq0$} \label{Vaneeva:table2} \begin{tabular}{@{\,\,}c@{\,\,}@{\,\,}c@{\,\,}@{\,\,}c@{\,\,}@{\,\,}l@{\,\,}} \hline\noalign{\medskip} N&$h(t)$&$g(t)$&\hfil Basis of $A^{\max}$ \\ \noalign{\smallskip}\svhline\noalign{\medskip} 0&$\forall$&$\forall$&$\partial_x,\ T\partial_x+T_t\partial_u$\\[1ex] 1&$\forall$&$c_0T_t(aT+b)^n(cT+d)^{1-n}$ &$ \partial_x,\ T\partial_x+T_t\partial_u,\ 3T_t^{-1}(aT+b)(cT+d)\partial_t+\bigl[3acT+$\\[0.5ex] &&&$ad(n+1)+bc(2-n)\bigr]x\partial_x+\Bigl(3acxT_t-\bigl[3acT+$\\[0.5ex] &&&$3hT_t^{-1}(aT+b)(cT+d)+bc(n+1)+ad(2-n)\bigr]u\Bigr)\partial_u$\\[1ex] 2&$\forall$&$c_0T_t(cT+d)\exp\left(\frac{aT+b}{cT+d}\right)$& $ \partial_x,\ T\partial_x+T_t\partial_u,\ 3T_t^{-1}(cT+d)^2\partial_t+(3c(cT+d)+\varepsilon)x\partial_x+$\\[0.5ex] &&&$\left[3c^2xT_t+\left(\varepsilon-3(cT+d)(c+h(cT+d)T_t^{-1})\right)u\right]\partial_u$\\[1ex] 3&$\forall$&$c_0T_te^{\delta\arctan\left(\frac{aT+b}{cT+d}\right)}G(t)$& $ \partial_x,\ T\partial_x+T_t\partial_u,\ 3T_t^{-1}G^2\partial_t+$\\[0.5ex] &&&$\bigl[3a(aT+b)+3c(cT+d)+\varepsilon\delta\bigr] x\partial_{ x}+\bigl[3(a^2+c^2)xT_t-$\\[0.5ex] &&&$\bigl(3a(aT+b)+3c(cT+d)-\varepsilon\delta+3hT_t^{-1}G^2\bigr)u\bigr]\partial_u$\\[1ex] 4a&$\forall$&$c_0T_t$& $ \partial_x,\ T\partial_x+T_t\partial_u,\ T_t^{-1}(\partial_t-hu\partial_u),$\\[0.5ex] &&&$ 3TT_t^{-1}\partial_t+x\partial_x-(2+3TT_t^{-1}h)u\partial_u$\\[1ex] 4b&$\forall$&$(cT+d)T_t$& $ \partial_x,\ T\partial_x+T_t\partial_u,\ T_t^{-1}(cT+d)^2\partial_t+c(cT+d)x\partial_x+$\\[0.5ex] &&&$[c^2xT_t-(cT+d)(c+T_t^{-1}(cT+d)h)u]\partial_u,$\\[0.5ex] &&&$ 3T_t^{-1}(cT+d)\partial_t+2cx\partial_x-(c+3T_t^{-1}(cT+d)h)u\partial_u$\\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} Here $T=\int e^{-\int h(t)\, dt}dt$, $T_t=e^{-\int h(t)\, dt}$, $G=\sqrt{(aT+b)^2+(cT+d)^2}$; $n$, $c_0$, $a$, $b$, $c$, $d$ and $\delta$ are arbitrary constants, $(a^2+b^2)(c^2+d^2)\ne0$,
$\varepsilon=ad-bc,$ $c_0\neq0$, $n\neq0,1$. In case 4b, $c\neq0$. \end{table} It is easy to see that Table 2 includes all cases presented in~\cite{john10b} as particular cases. \section{Generation of exact solutions} A number of recent papers concern the construction of exact solutions to different classes of KdV- or mKdV-like equations using, e.g., methods such as the ``generalized $(G'/G)$-expansion method'', the ``Exp-function method'', the ``Jacobi elliptic function expansion method'', etc. A number of references are presented in~\cite{Popovych&Vaneeva2010}. In almost all cases exact solutions were constructed only for equations which are reducible to the standard KdV or mKdV equations by point transformations, and usually these were only solutions similar to the well-known one-soliton solution. In this section we show that the usage of equivalence transformations allows one to obtain more results in a simpler way. This approach is also used in~\cite{Tang&Zhao&Huang&Lou2009}. The $N$-soliton solution of the KdV equation in the canonical form \begin{equation}\label{canonical_KdV} U_t-6UU_{x}+U_{xxx}=0 \end{equation} was constructed by Hirota~\cite{Polyanin&Zaitsev} as early as the 1970s. The two-soliton solution of equation~(\ref{canonical_KdV}) has the form \begin{equation}\label{sol2soliton} U=-2\frac{\partial^2}{\partial x^2}\ln\left(1+b_1e^{\theta_1}+b_2e^{\theta_2}+Ab_1b_2e^{\theta_1+\theta_2}\right), \end{equation} where $a_i, b_i$ are arbitrary constants, $\theta_i=a_ix-a_i^3t,$ $i=1,2;$ $ A=\left(\frac{a_1-a_2}{a_1+a_2}\right)^2$.
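As a quick consistency check, formula~(\ref{sol2soliton}) can be substituted into~(\ref{canonical_KdV}) symbolically. The sketch below does this with SymPy for the illustrative parameter choice $a_1=1$, $a_2=2$, $b_1=b_2=1$ (the $a_i$, $b_i$ are arbitrary in the formula) and evaluates the residual $U_t-6UU_x+U_{xxx}$ at a sample point; it vanishes up to rounding.

```python
import sympy as sp

x, t = sp.symbols('x t')

# illustrative parameter values; a_i and b_i are arbitrary in the two-soliton formula
a1, a2, b1, b2 = 1, 2, 1, 1
A = sp.Rational(a1 - a2, a1 + a2)**2
theta1 = a1*x - a1**3*t
theta2 = a2*x - a2**3*t

# two-soliton solution U = -2 (ln F)_{xx}
F = 1 + b1*sp.exp(theta1) + b2*sp.exp(theta2) + A*b1*b2*sp.exp(theta1 + theta2)
U = -2*sp.diff(sp.log(F), x, 2)

# residual of the canonical KdV equation U_t - 6 U U_x + U_{xxx} = 0
residual = sp.diff(U, t) - 6*U*sp.diff(U, x) + sp.diff(U, x, 3)

print(abs(float(residual.subs({x: 0.3, t: 0.7}))))  # numerically zero
```

The same substitution with symbolic $a_i$, $b_i$ also simplifies to zero, but the numerical spot check above is much faster.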
Combining the simple transformation $\hat u=-6 U$ that connects~(\ref{canonical_KdV}) with the KdV equation of the form \begin{equation}\label{KdV_canonical} \hat u_{\hat t}+{\hat u}\hat u_{\hat x}+\hat u_{\hat x\hat x\hat x}=0 \end{equation} and transformation~(\ref{gauge_h=0}), we obtain the formula \[\textstyle u=-6e^{-\int h(t)dt}\,U\left(\int e^{-\int h(t)\, dt}dt,\,x\right)\] for generating exact solutions of equations of the general form \begin{equation}\label{KdV_canonical_preimage} u_t+uu_{x}+e^{-\int h(t)\, dt}u_{xxx}+h(t)u=0. \end{equation} These equations are preimages of~(\ref{KdV_canonical}) with respect to transformation~(\ref{gauge_h=0}). Here $h$ is an arbitrary nonvanishing smooth function of the variable~$t$. The two-soliton solution~(\ref{sol2soliton}) leads to the following solution of~(\ref{KdV_canonical_preimage}): \begin{equation} u=12e^{-\int h(t)dt}\frac{\partial^2}{\partial x^2}\ln\left(1+b_1e^{\theta_1}+b_2e^{\theta_2}+Ab_1b_2e^{\theta_1+\theta_2}\right), \end{equation} where $a_i, b_i$ are arbitrary constants, $\theta_i=a_ix-a_i^3\int e^{-\int h(t)\, dt}dt,$ $i=1,2;$ $ A=\left(\frac{a_1-a_2}{a_1+a_2}\right)^2$. In a similar way one can construct $N$-soliton, rational and other types of solutions for equations from class~(\ref{KdV_canonical_preimage}) using known solutions of the classical KdV equation. \section{Conclusion} In this paper the group classification problem for class~(\ref{vc_mKdV}) is solved with respect to the corresponding equivalence group using the equivalence-based approach. Using the normalization property, it is proved that this classification coincides with the one carried out up to general point equivalence. The classification list extended by equivalence transformations is also presented. Such a list is convenient for further applications. It is shown that the usage of equivalence groups is a crucial point for the exhaustive solution of the problem.
Moreover, equivalence transformations allow one to construct exact solutions of different types in a much easier way than by solving the equations directly. These transformations can also be utilized to obtain conservation laws, Lax pairs and other related objects for equations reducible to well-known equations of mathematical physics by point transformations, without direct calculations. \begin{acknowledgement} The author thanks the Organizing Committee, and especially Prof. Vladimir Dobrev, for their hospitality and for the opportunity to give a talk. Her participation in the Workshop was supported by the Abdus Salam International Centre for Theoretical Physics. The author is also grateful to Prof. Roman Popovych for useful discussions and valuable comments. \end{acknowledgement}
\section{Introduction and Data} WLM (Wolf-Lundmark-Melotte) is a dwarf irregular galaxy and a member of the Local Group, at a distance of $932\pm33$ kpc \citep{mcconnachie05}. In \citeyear{valcheva07} \citeauthor{valcheva07} carried out a photometric study of part of WLM and found a C/M ratio of $0.56\pm0.12$, which differed greatly (by a factor of 20) from previous values. In \citeyear{leaman09} \citeauthor{leaman09} used Ca-II triplet spectroscopy on $78$ red giant stars and found a mean [Fe/H] of $-1.27\pm0.04$ dex; they also found that stars closer to the centre of the galaxy were more metal rich by $0.30\pm0.06$ dex. The data used here are near-infrared (NIR) $JHK$ observations made on October $16^\mathrm{th}$ $2007$ using WFCAM on UKIRT in Hawaii. \section{Results} \subsection{C/M ratio and Metallicity} The C/M ratio is the number ratio of carbon-rich (C-type) to oxygen-rich (M-type) asymptotic giant branch (AGB) stars. Using a histogram of $J-K$ colour we find a C to M split at $J-K=1.05\pm0.05$ mag, which gives ratios of $0.27$ to $0.89$ for the inner half square degree of data, depending on the foreground-removal method. In the central area dominated by the galaxy (an ellipse with RA$\,=\pm0.07^{\circ}$, Dec$\,=\pm0.15^{\circ}$) we obtain ratios between $0.4$ and $0.8$. When we correct our C/M ratio using the C-type catalogue of \citet{battinelli03} (to account for flaws in using $J-K$ colour as the cut-off), these ratios range from $0.36$ to $1.43$ and from $0.55$ to $1.24$ for the inner and central regions respectively. The lower ratio of the central region compared to the inner region could be due to a metal-rich star-forming region within the central part of WLM. Using the same sky area as \citet{valcheva07}, we on the whole find that our reduced data agree more with their unreduced data and vice versa, which could be due to their adopted foreground containing a large number of C-type stars. The C/M ratio is calibrated to [Fe/H] using equation B.1 from \citet{cioni09}.
By applying this equation to our data we obtain [Fe/H] values from $-1.12$ to $-1.37$ dex for the original C/M ratios and from $-1.18$ to $-1.43$ dex for the corrected ones for the inner field. For the central field these values are from $-1.20$ to $-1.34$ dex and from $-1.27$ to $-1.40$ dex for the original and corrected C/M ratios. \subsection{Distance modulus (m-M)} The tip of the RGB (TRGB) marks the split between the RGB and AGB populations. The TRGB is found at $K=18.7\pm0.1$ mag and is used to calculate the distance modulus once a value for its absolute magnitude is known. We explored two methods of obtaining this value: evolutionary tracks and [Fe/H]. With the tracks of \citet{marigo08} we performed two calculations, one for a range of ages and the other for a range of metallicities, giving distance moduli of (m-M)$=24.37$ mag at constant metallicity and varying age, and (m-M)$=24.39$ mag at constant age and varying metallicity. For the calculated [Fe/H] we make use of the relation between [Fe/H] and the absolute magnitudes of RGB stars by \citet{ferraro00}. Here, it is assumed that the overall metallicity of the galaxy is the same for both the AGB and RGB populations. The mean of the values for the different methods explored is (m-M)$=24.89\pm0.25$ mag ($\sim951$ kpc). This value agrees with previous measurements. \subsection{Spectral Energy Distribution (SED)} By combining our NIR data with optical data from \citet{mcconnachie05} and mid-infrared data from \citet{boyer09} we can investigate the SED of AGB stars in WLM. The SED allows us to obtain bolometric fluxes (and bolometric magnitudes from the distance modulus). We found $1281$ matches between all the datasets after applying some small systematic shifts. For a source to be usable in an SED there needed to be a magnitude present in every band; in total $52$ sources met this criterion. When deriving the bolometric flux we found that the most luminous stars were not the C-type AGBs but supergiants.
The bolometric fluxes were also converted into bolometric magnitudes, giving us additional data with which to confirm stellar types.
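For reference, the two scalar conversions used in this section can be sketched as follows. The distance-modulus relation is the standard $m-M=5\log_{10}(d/10\,\mathrm{pc})$; the C/M-to-[Fe/H] calibration is assumed here to have a log-linear form, and the coefficients below are illustrative placeholders rather than the exact coefficients of equation B.1 of \citet{cioni09}.

```python
import math

def feh_from_cm(cm_ratio, a=-1.39, b=-0.47):
    """[Fe/H] from the C/M ratio, assuming a log-linear calibration
    [Fe/H] = a + b*log10(C/M); a and b are illustrative placeholders."""
    return a + b * math.log10(cm_ratio)

def distance_kpc(distance_modulus):
    """Distance in kpc from m - M = 5*log10(d / 10 pc)."""
    return 10.0 ** ((distance_modulus + 5.0) / 5.0) / 1000.0

# a higher C/M ratio (relatively more C-type stars) maps to a lower metallicity
print(feh_from_cm(0.27), feh_from_cm(0.89))
# (m-M) = 24.89 mag corresponds to roughly 951 kpc
print(distance_kpc(24.89))
```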
\setcounter{equation}{0} \section{Introduction} Let us consider smooth solutions to the stationary Navier-Stokes system \begin{equation}\label{nse} u\cdot \nabla u-\Delta u=\nabla p, \qquad {\rm div }u=0 \end{equation} in $\mathbb R^3$ with the additional condition at infinity: $u(x)\to 0$ as $|x|\to \infty$. The question is then whether or not $u$ must be identically zero. The point addressed in this short note is under which additional assumptions the answer to the above question is positive. In the monograph \cite{Galdi-book}, it has been shown that the condition \begin{equation}\label{9/2} u\in L_\frac 92(\mathbb R^3) \end{equation} implies $u=0$. A plausible conjecture is that a sufficient condition for the positive answer could be as follows: \begin{equation}\label{finite dissipation} \int\limits_{\mathbb R^3}|\nabla u|^2dx<\infty. \end{equation} At the moment of writing this note, a proof of whether or not (\ref{finite dissipation}) is a sufficient condition for $u$ to be identically zero is not known yet. We are going to show, however, that it is true under an additional condition. To describe that condition, we need the following definition. We say that a divergence free vector-valued field $u$ belongs to the space $BMO^{-1}(\mathbb R^3)$ if there exists a skew-symmetric tensor $d$ in $BMO(\mathbb R^3)$ such that $$ u={\rm div}\,d=(d_{ij,j}).$$ It is known, see for instance \cite{Stein1970}, that if $d\in BMO(\mathbb R^3)$, then $$ \Gamma(s):=\sup\limits_{x_0\in\mathbb R^3,0<r}\Big(\frac 1{|B(r)|}\int\limits_{B(x_0,r)}|d-[d]_{x_0,r}|^sdx \Big)^\frac 1s<\infty$$ for each $1\leq s<\infty$. Here, we denote by $B(x_0,r)$ the ball of radius $r$ centred at the point $x_0$, and $[d]_{x_0,r}$ is the mean value of $d$ over the ball $B(x_0,r)$. Our aim is to prove the following result.
\begin{theorem}\label{main result} The following statements are true: \noindent (A) any smooth divergence free vector-valued field $u\in BMO^{-1}(\mathbb R^3)$, satisfying condition (\ref{finite dissipation}) and the system (\ref{nse}), is identically equal to zero; \noindent (B) any smooth divergence free vector-valued field $u\in BMO^{-1}(\mathbb R^3)$, satisfying the condition \begin{equation}\label{6} u\in L_6(\mathbb R^3) \end{equation} and the system (\ref{nse}), is identically equal to zero. \end{theorem} By the known inequality $$\|v\|_{6,\Omega}\leq c\|\nabla v\|_{2,\Omega},$$ which is valid for any $v\in C^\infty_0(\mathbb R^3)$, statement (A) follows from statement (B). Before proving Theorem \ref{main result}, let us show that the assumptions in (B) do not follow immediately from condition (\ref{9/2}) of \cite{Galdi-book}. Indeed, we can let $$w=\sin \Big((|x|^2+1)^{\frac 14-\varepsilon }\Big)(1,1,1)$$ and $$v= {\rm rot} \,w.$$ Direct calculations show that $v\in BMO^{-1}(\mathbb R^3)\cap L_6(\mathbb R^3)$ but $v \notin L_\frac 92(\mathbb R^3)$. Finally, we would like to mention that there are many interesting papers devoted to the above or related questions, see, for example, \cite{GilWein1978}, \cite{KNSS2009}, \cite{ChaeYoneda2013}, and \cite{Chae2014}. \setcounter{equation}{0} \section{Proof of Main Result} \subsection{Caccioppoli Type Inequality} This is the main technical part of the proof. We take an arbitrary ball $B(x_0,R)\subset \mathbb R^3$ and a non-negative cut-off function $\varphi\in C^\infty_0(B(x_0,R))$ with the following properties: $\varphi(x)=1$ in $B(x_0,\varrho )$, $\varphi(x)=0$ outside $B(x_0,r)$, and $|\nabla \varphi(x)|\leq c/(r-\varrho)$ for any $R/2\leq\varrho<r\leq R$. We let $$ \overline{u}=u-u_0,\qquad \overline{d}=d-[d]_{x_0,R},$$ where $u_0$ is an arbitrary constant.
We also know that, for any $2<s<\infty$, there exist a constant $c_0=c_0(s)>0$ and a function $w\in W^1_s(B(x_0,r))$, vanishing on $\partial B(x_0,r)$, such that ${\rm div}\,w=\nabla \varphi\cdot \overline u$ and \begin{equation}\label{Bogovskii} \int\limits_{B(x_0,r)}|\nabla w|^sdx\leq c_0\int\limits_{B(x_0,r)}|\nabla \varphi\cdot \overline u|^sdx\leq \frac {c_0} {(r-\varrho)^s}\int\limits_{B(x_0,R)}| \overline u|^sdx.\end{equation} Now, we can test the Navier-Stokes equations (\ref{nse}) with the function $\varphi \overline u-w$, integrate by parts in $B(x_0,r)$, and find the following identity: $$\int\limits_{B(x_0,r)}\varphi |\nabla u|^2dx= -\int\limits_{B(x_0,r)}\nabla u :(\nabla \varphi\otimes\overline u) dx+\int\limits_{B(x_0,r)}\nabla w :\nabla u dx$$ $$-\int\limits_{B(x_0,r)}(u\cdot\nabla u)\cdot\varphi\overline u dx+\int\limits_{B(x_0,r)}(u\cdot\nabla u)\cdot wdx=I_1+I_2+I_3+I_4.$$ $I_1$ and $I_2$ can be estimated easily. As a result, we find $$|I_1|+|I_2|\leq c\,\Big(\int\limits_{B(x_0,r)}|\nabla u|^2dx\Big)^\frac 12\frac{R^{3\frac {s-2}{2s}}}{r-\varrho}\Big(\int\limits_{B(x_0,R)}|\overline u|^sdx\Big)^\frac 1s. $$ To estimate $I_3$ and $I_4$, we are going to use the skew symmetry of the matrix $d$.
We have $$|I_3|=\Big| \int\limits_{B(x_0,r)} \overline d_{jm,m}\overline u_{i,j}\overline u_i\varphi dx\Big|=\Big| \int\limits_{B(x_0,r)} \overline d_{jm}\overline u_{i,j}\overline u_i\varphi_{,m} dx\Big|\leq $$ $$\leq \frac 1{r-\varrho}\Big(\int\limits_{B(x_0,r)}|\nabla u|^2dx\Big)^\frac 12 \Big(\int\limits_{B(x_0,r)}|\overline d|^2|\overline u|^2dx\Big)^\frac 12\leq $$ $$\leq \frac 1{r-\varrho}\Big(\int\limits_{B(x_0,r)}|\nabla u|^2dx\Big)^\frac 12 \Big(\int\limits_{B(x_0,r)}|\overline u|^sdx\Big)^\frac 1s\Big(\int\limits_{B(x_0,r)}|\overline d|^\frac {2s}{s-2}dx\Big)^\frac {s-2}{2s}\leq $$ $$\leq c\frac {R^{3\frac {s-2}{2s}}}{r-\varrho}\Big(\int\limits_{B(x_0,r)}|\nabla u|^2dx\Big)^\frac 12 \Big(\int\limits_{B(x_0,R)}|\overline u|^sdx\Big)^\frac 1s\Gamma(2s/(s-2)).$$ It remains to evaluate $I_4$: $$|I_4|=\Big| \int\limits_{B(x_0,r)} \overline d_{jm,m}\overline u_{i,j} w_idx\Big|=\Big| \int\limits_{B(x_0,r)} \overline d_{jm}\overline u_{i,j} w_{i,m} dx\Big|\leq $$ $$\leq \frac 1{r-\varrho}\Big(\int\limits_{B(x_0,r)}|\nabla u|^2dx\Big)^\frac 12 \Big(\int\limits_{B(x_0,r)}|\overline d|^2|\nabla w|^2dx\Big)^\frac 12\leq $$ $$\leq \frac 1{r-\varrho}\Big(\int\limits_{B(x_0,r)}|\nabla u|^2dx\Big)^\frac 12 \Big(\int\limits_{B(x_0,r)}|\nabla w|^sdx\Big)^\frac 1s\Big(\int\limits_{B(x_0,r)}|\overline d|^\frac {2s}{s-2}dx\Big)^\frac {s-2}{2s}\leq $$ $$\leq c(s)\frac {R^{3\frac {s-2}{2s}}}{r-\varrho}\Big(\int\limits_{B(x_0,r)}|\nabla u|^2dx\Big)^\frac 12 \Big(\int\limits_{B(x_0,R)}|\overline u|^sdx\Big)^\frac 1s\Gamma(2s/(s-2)).$$ So, we have the following inequality $$\int\limits_{B(x_0,\varrho )}|\nabla u|^2dx \leq c(s)\frac {R^{3\frac {s-2}{2s}}}{r-\varrho}\Big(\int\limits_{B(x_0,r)}|\nabla u|^2dx\Big)^\frac 12 \Big(\int\limits_{B(x_0,R)}|\overline u|^sdx\Big)^\frac 1s.$$ We can apply Young's inequality and conclude $$\int\limits_{B(x_0,\varrho )}|\nabla u|^2dx\leq \frac 14 \int\limits_{B(x_0,r)}|\nabla u|^2dx +c(s)\frac 1 {(r-\varrho)^2}R^{3\frac {s-2}{s}}
\Big(\int\limits_{B(x_0,R)}|\overline u|^sdx\Big)^\frac 2s.$$ The latter inequality is valid for any $R/2\leq \varrho<r\leq R$. It is known, see for instance \cite{Giaquinta1983} (this is just a matter of suitable iterations), that such an inequality implies the following Caccioppoli type inequality \begin{equation}\label{Caccioppoli} \int\limits_{B(x_0,R/2 )}|\nabla u|^2dx\leq c(s)R^{3\frac {s-2}{s}-2} \Big(\int\limits_{B(x_0,R)}|\overline u|^sdx\Big)^\frac 2s \end{equation} which holds for any $B(x_0,R)$ in $\mathbb R^3$. \subsection{(A) implies (B)} Indeed, if we let $s=6$ and $u_0=0$, then (\ref{Caccioppoli}) takes the form $$\int\limits_{B(x_0,R/2 )}|\nabla u|^2dx\leq c(s)\Big(\int\limits_{B(x_0,R)}| u|^6dx\Big)^\frac 13\leq c(s)\Big(\int\limits_{\mathbb R^3}| u|^6dx\Big)^\frac 13.$$ Passing $R\to \infty$, we conclude that (\ref{finite dissipation}) holds. \subsection{Proof of (A)} Now, we let $s=3$ and $u_0=[u]_{x_0,R}$ and use the Gagliardo-Nirenberg type inequality $$ \Big(\int\limits_{B(x_0,R)}|\overline u|^3dx\Big)^\frac 13\leq c\Big(\int\limits_{B(x_0,R )}|\nabla u|^\frac 32dx\Big)^\frac 23 $$ with a universal positive constant $c$. Now, (\ref{Caccioppoli}) can be reduced to the following reverse H\"older inequality $$\frac 1{|B(R/2)|}\int\limits_{B(x_0,R/2 )}|\nabla u|^2dx\leq c\Big(\frac 1{|B(R)|}\int\limits_{B(x_0,R )}|\nabla u|^\frac 32dx\Big)^\frac 43 $$ with a constant $c$ that is independent of $x_0$ and $R$.
We let $h:=|\nabla u|^\frac 32\in L_\frac 43(\mathbb R^3)$ and let $M_h$ be the maximal function of $h$, i.e., $$M_h(x_0)=\sup_{R>0}\frac 1{|B(x_0,R)|}\int\limits_{B(x_0,R )}h(x)dx.$$ Then from the above inequality, it follows that $$M_{h^\frac 43}(x_0)\leq cM^\frac 43_h(x_0).$$ It is known, see \cite{Stein1970}, that the right hand side of the latter inequality is integrable in $\mathbb R^3$ and the corresponding integral is bounded from above by the quantity $$\int\limits_{\mathbb R^3}h^\frac 43dx= \int\limits_{\mathbb R^3}|\nabla u|^2dx$$ times a universal constant. So, this means that $M_{h^\frac 43}\in L_1(\mathbb R^3)$. Since $h^\frac 43\in L_1(\mathbb R^3)$, this is possible only if $h$ is identically equal to zero, see \cite{Stein1970}. So, $\nabla u$ is identically equal to zero.
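The last step relies on the classical fact that the maximal function of a nontrivial integrable function is never integrable: if $h\geq0$ and $\int h>0$, then $M_h(x_0)$ decays no faster than $|x_0|^{-3}$ for large $|x_0|$, and $|x|^{-3}$ just fails to be integrable in $\mathbb R^3$. A numerical sketch of this borderline divergence, with the illustrative choice of $h$ equal to the indicator of the unit ball (so that $M_h(x_0)\sim(1+|x_0|)^{-3}$ up to constants), is the following; the truncated radial integral grows like $\log R$ and has no finite limit.

```python
import numpy as np

def truncated_integral(R_max, n=200_000):
    """Integral of M_h over the shell 2 < |x| < R_max for h = indicator of the
    unit ball, using M_h(x_0) ~ 1/(1 + |x_0|)**3 (take R = |x_0| + 1 in the sup)."""
    r = np.logspace(np.log10(2.0), np.log10(R_max), n)
    integrand = 4.0 * np.pi * r**2 / (1.0 + r)**3
    # trapezoidal rule on the log-spaced radial grid
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))

I3, I6 = truncated_integral(1e3), truncated_integral(1e6)
print(I3, I6)  # keeps growing ~ 4*pi*log(R_max): M_h is not in L_1
```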
\section{Introduction}\label{s:introduction} Since fusion configurations involve very hot plasmas, they typically require a careful design to maintain fast-moving particles inside the core of the device over sufficiently long times. In the magnetic confinement approach \cite{bellan_2006_fundamentals,chen_introduction,freidberg2008plasma,haz_mei_03,miyamoto_2006_plasma, piel2010plasma}, in particular in tokamak plasmas, a strong external field is applied to confine the plasma by enforcing the oscillatory nature of the fast motions. Various models are in use to describe such phenomena. In the kinetic modeling, the unknowns are the number densities of particles, $f\equiv f(t,\bfx,\bfv)$, depending on time $t\geq 0$, position $\bfx\in\Omega\subset \R^3$ and velocity $\bfv\in\R^3$. Such kinetic models provide an appropriate description of turbulent transport in a fairly general context, but in fusion configurations their numerical simulation requires solving a stiff six-dimensional problem, leading to a huge computational cost. To bypass this obstacle, it is classical --- see for instance \cite{Garbet-et-al_2010} --- to use reduced asymptotic models that describe only the slowest part of the plasma dynamics, hence effectively reducing both the stiffness of the problem and the number of variables (since the fastest variables are omitted). Over the years, due to its rich and fundamental nature, the physically-based derivation of such models has grown into a --- still very active --- field of its own, often referred to as gyrokinetics. Besides the already mentioned general monographs \cite{bellan_2006_fundamentals,chen_introduction,freidberg2008plasma,haz_mei_03,miyamoto_2006_plasma, piel2010plasma}, the reader may consult \cite{Krommes,bri_hahm_07,Matteo-PhD,Scott_gyrokinetic,PDFF} and references therein as more specialized entry points to the field.
Despite considerable efforts in recent years concerning mathematically rigorous derivations from collisionless\footnote{See for instance \cite{herda_2016_massless,herda_2016_anisotropic} and references therein for an introduction to the corresponding collisional issues.} kinetic equations, the state of the art is such that one must choose between linear models that neglect couplings due to self-consistent fields and nonlinear ones set in a deceptively simple geometry. See for instance the introductions and bibliographies of \cite{HanKwan_PhD,Lutz_PhD,Herda_PhD} for relatively recent panoramas on the question. For instance, for the kind of problem considered here, on the nonlinear side of the literature the most significant mathematical result --- which requires a careful analysis --- is restricted to a two-dimensional setting with a constant magnetic field and interactions described through the Poisson equation, and yet validates only half\footnote{The nontrivial half, however. This is possible there only because a very specific geometric cancellation uncouples part of the slow dynamics from the remaining one, which is expected to be slaved to it. See however the recent \cite{Bostan_2D-VP} for a more complete model, derived under more stringent assumptions.} of the slow dynamics; see \cite{laure0}, building on \cite{gol_lsr_99} and recently revisited in \cite{Miot-2D-gyrokinetic}. We consider here a plasma confined by a strong unsteady inhomogeneous magnetic field without any a priori geometric constraint but, in order to allow for such generality, in most of the present paper\footnote{See however Section~\ref{s:nl} where we analyze a smoothed Vlasov-Poisson system.} we do neglect effects of self-consistent fields. The plasma is thus entirely modeled with a scalar linear kinetic equation, where the unknown is one of the number densities of particles. The approach that we follow focuses on the characteristic equations associated with the kinetic conservation law.
By itself the study of those equations may follow the classical roadmap of the averaging of ordinary differential equations, as expounded in \cite{Bogoliubov-Mitropolsky_oscillations,Sanders-Verhulst-Murdock_averaging}. Yet, here, beyond the body of work already required to follow this road in usual ODE problems, careful tracking of the dependence of the averaging estimates on the initial data, which live here in an unbounded phase space, is necessary in order to derive asymptotics for the solutions of the original partial differential equations problem. To be more specific, the Lorentz force term in our original nondimensionalized kinetic equation is scaled by a large parameter, $1/\eps$, where $\eps$ stands for the typical cyclotron period, {\it i.e.} the typical rotation period of particles about a magnetic field line (or Larmor rotation). The dynamical time scales we focus on are in any case much larger than the cyclotron period, and we establish asymptotic descriptions in the limit $\eps\to0$. As is classical in the field, we distinguish between short-time scales that are $\cO(1)$ with respect to $\eps$, and long time scales that are $\sim1/\eps$ in the limit $\eps\to0$. Correspondingly, slow dynamics refer to dynamics where typical time derivatives are at most of order $\cO(1)$ on short-time scales, and at most of order $\cO(\eps)$ on long-time scales, so that on long time scales two kinds of fast dynamics may co-exist, principal ones at typical speed of order $1/\eps$ and subprincipal ones at typical speed of order $1$; see for instance \cite{cheve2} for a description of those various oscillations in a specific class of axi-symmetric geometries, without electric field and with a magnetic field nowhere toroidal and whose angle to the toroidal direction is also independent of the poloidal angle.
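As a toy illustration of this two-time-scale structure, consider the characteristics $\dot{\bfx}=\bfv$, $\dot{\bfv}=(E+\bfv\times B)/\eps$ with constant fields $B=e_z$ and $E=e_x$ (a purely illustrative choice, not one of the configurations studied in this paper). The exact trajectory starting from rest is a cycloid: the guiding centre moves at the slow drift velocity $E\times B/|B|^2=(0,-1,0)$, while the particle gyrates at frequency $1/\eps$ with radius $\cO(\eps)$. A numerical sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-2  # dimensionless cyclotron period, the stiff parameter

def rhs(t, y):
    """Characteristics dx/dt = v, dv/dt = (E + v x B)/eps, constant fields."""
    v = y[3:]
    E = np.array([1.0, 0.0, 0.0])
    B = np.array([0.0, 0.0, 1.0])
    return np.concatenate([v, (E + np.cross(v, B)) / eps])

y0 = np.zeros(6)  # particle initially at rest at the origin
sol = solve_ivp(rhs, (0.0, 1.0), y0, rtol=1e-10, atol=1e-12, max_step=eps/4)

# slow part: guiding centre drifts at E x B/|B|^2 = (0, -1, 0);
# fast part: gyration of radius O(eps) superimposed on the drift
print(sol.y[:3, -1])  # close to (0, -1, 0) up to O(eps)
```

Refining $\eps$ shrinks the gyroradius and leaves the drift unchanged, which is exactly the slow dynamics that averaging isolates.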
With this terminology in hand, our results may be roughly stated as the identification and mathematical proof of \begin{enumerate} \item a second-order --- that is, up to $\cO(\eps^2)$ --- description of the slow dynamics on short time scales but in arbitrary geometry; \item a first-order description of the slow dynamics on long time scales but in an axi-symmetric geometry with a magnetic field everywhere poloidal and an electric field everywhere orthogonal to the magnetic field. \end{enumerate} The geometry of the latter is very specific, and the proof of such a description is mostly carried out here to illustrate that the short-time second-order description contains all the ingredients needed to analyze long-time dynamics at first order. Note that in any case, on long-time scales some restrictions are indeed necessary to ensure that sub-principally fast dynamics do not prevent long-time confinement and are of oscillatory type, so that the issue of the identification of a long-time slow dynamics becomes meaningful. In Section~\ref{s:nl} we also prove a second-order description of the dynamics driven by a smoothed Vlasov-Poisson system, hence allowing for both nonlinear self-consistent effects and arbitrary geometry, but we restrict ourselves there to initial data that are well-prepared in the sense that their initial dependence on fast angles is weak. A key feature of our analysis that underpins a treatment of essentially arbitrary fields is that we make no explicit use of any geometric structure, neither Hamiltonian (see for instance the pioneering work of R. G. Littlejohn \cite{littleJ1, littleJ2, littleJ3} and later \cite{Benettin-Sempio,FrenodLutz_geometrical_gyro-kinetic}) nor Lagrangian (see \cite{Possanner}). The main role of these structures in the averaging process is to ease the identification of terms that are asymptotically irrelevant as time-derivatives of small terms.
Instead, in the present contribution this explicit identification hinges heavily on the linearity of the principal oscillations. As an upshot, besides generality, we gain the freedom to use changes of variables that are also arbitrary and to focus on slow variables instead of carrying geometric constraints all along. A key motivation for our methodology is that in the design of well-adapted numerical schemes, which capture the slow part of the dynamics even with discretization meshes too rough to compute stiff scales, one might correspondingly aim at large classes of schemes of arbitrary order; see for instance \cite{Lee,FR1,FR2}. Likewise, our choice of studying first the characteristics instead of using directly partial differential equations techniques, and our aim of proving error estimates, echo the particle-in-cell methodology and its numerical analysis. Alternative PDE-based methods include most notably two-scale convergence analyses \cite{fre_son_97,fre_son_98} and filtering techniques hinging on the ergodic von Neumann theorem \cite{Bostan_transport,bostan_10}. Two main advantages of going through the characteristics are that the limiting partial differential equation is by construction a conservation law for a density distribution, and that increasing the order of description may be carried out merely by continuing the argument used to identify the leading order. We benefit from the latter to \emph{prove} for the first time a second-order description in full generality.
\section{Introduction}\label{s1} Synchronization is a phenomenon in which two or more systems coordinate their dynamics and act at the same time with similar behavior. Synchronization underlies phenomena such as the chorusing of crickets, the flashing of fireflies, the swinging of pendulum clocks, and even the life cycles of living creatures~\cite{Pikovsky,RBrown,Glass}, and has been extensively observed in daily life. In particular, synchronization, in which the rhythms of two or more different objects are adjusted into unison, is a qualitative transition and thus motivates wide applications in various fields, such as data communication, timekeeping, navigation, cryptography, and neuroscience~\cite{Winfree,Goldbeter,Taylor,Strogatz,Manrubia,Bregni}. Benefiting from current advanced nano-fabrication techniques, especially those for high-quality-factor on-chip optomechanical resonators~\cite{Aspelmeyer}, it is possible to demonstrate the synchronization of resonators on on-chip nano-scale platforms~\cite{Holmes,Li,Zhangmian,Bagheri,Shah,YangNan}. For example, a pair of closely placed optomechanical resonators with different mechanical frequencies were synchronized by indirect coupling through the coupled optical fields~\cite{Zhangmian}. More recently, two nanomechanical oscillators separated by about $80$ $\mu$m were synchronized through the same optical field in an optical racetrack~\cite{Bagheri}. In this paper, we show that mechanical oscillations can be synchronized by optomechanical couplings to two coupled optical modes, of which one is active and the other passive. With balanced gain and loss, such systems are called parity-time~($\mathcal{PT}$)-symmetric optomechanical systems, and they have attracted great attention in recent years~\cite{HXu,Jinghui,Schonleber,Liuzhongpeng,Jinghui2,Zhangjing1,Lvxinyou}.
Various appealing phenomena and important applications have been proposed in systems with a $\mathcal{PT}$-symmetric structure~\cite{HXu,Jinghui,Schonleber,Liuzhongpeng,Jinghui2,Zhangjing1,Lvxinyou,Bender1,Bender2,Agarwal,Mostafazadeh,Pengbo,Feng1,Hodaei,Guo,Ruter,Ramezani,Lin,Feng2,Regensburger,Changlong,Pengbo2,Schindler,West,JWiersig,WChen,HHodaei,JDoppler}. Although the optomechanical interaction influences our $\mathcal{PT}$-symmetric system, this influence is negligibly small in the parameter regime we consider~\cite{Jinghui,Schonleber,Liuzhongpeng,Jinghui2,Lvxinyou}. By introducing the $\mathcal{PT}$-symmetric structure, we observe an interesting phenomenon: the two mechanical modes of the coupled optomechanical resonators tend to oscillate in unison as the optical coupling strength between them is decreased. This observation conflicts with the usual intuition that the stronger the coupling between two systems, the more easily synchronization can be realized. Another counterintuitive phenomenon appears as the enhancement of synchronization between the two mechanical modes when the noises acting on the optomechanical resonators are taken into account. \section{Coupled-optomechanical resonators with optical $\mathcal{PT}$-symmetry} The system we consider consists of two coupled whispering-gallery-mode (WGM) resonators, as depicted in Fig.~\ref{Fig1}(a). The left WGM resonator~($\mu C_1$) is an active one, which can be realized, e.g., by an $\rm{Er}^{3+}$-doped silica disk, and the right one~($\mu C_2$) is a passive resonator. Each resonator supports an optical mode $\alpha_i$ and a mechanical mode $\beta_i$ ($i=1,2$), and the inter-cavity optical coupling strength $\kappa$ between $\alpha_1$ and $\alpha_2$ is related to the distance between the two resonators.
As is well known, although the two mechanical modes $\beta_1$ and $\beta_2$, located in two different resonators, are not directly coupled, they can be indirectly coupled through the inter-cavity optical coupling and the intra-cavity optomechanical coupling. We elaborate on this indirect mechanical coupling in Fig.~\ref{Fig1}(b). Each WGM resonator is equivalent to a Fabry-Perot cavity with one fixed mirror and one movable one. The optical modes $\alpha_1$ and $\alpha_2$ represent the optical fields in the Fabry-Perot cavities, and the mechanical modes $\beta_1$ and $\beta_2$ indicate the motions of the movable mirrors. In each equivalent Fabry-Perot cavity, the movable mirror experiences a radiation-pressure force induced by the optical mode $\alpha_i$ ($i=1,2$). Such a force is proportional to the circulating optical intensity $|\alpha_i|^2$ in the cavity, and it drives the mechanical motion $\beta_i$. In the meantime, the movable mirror induces a frequency shift of the optical mode in the cavity, which influences the dynamics of $\alpha_i$. In Fig.~\ref{Fig1}(b), $\alpha_1$ ($\alpha_2$) and $\beta_1$ ($\beta_2$) interact with each other through this kind of radiation-pressure coupling, and $\alpha_1$ and $\alpha_2$ are directly coupled through the inter-cavity evanescent optical fields. Therefore, the mechanical modes $\beta_1$ and $\beta_2$ are coupled indirectly by the evanescent optical coupling between $\alpha_1$ and $\alpha_2$. \begin{figure}[h] \centerline{ \includegraphics[width=8.4 cm, clip]{Fig1.eps}} \caption{(color online) Schematic diagram of the optically-coupled $\mathcal{PT}$ optomechanical system. (a) $\mu C_1$ denotes an active WGM resonator with gain medium and $\mu C_2$ is a passive one. (b) Equivalent diagram of the $\mathcal{PT}$ optomechanical system, where the WGM resonators are replaced by Fabry-Perot cavities with a moveable end mirror and a fixed one.
The two cavities are directly coupled through the inter-cavity evanescent optical fields, and the optical coupling strength $\kappa$ depends on the distance between the two Fabry-Perot cavities~\cite{Zhangmian}. }\label{Fig1} \end{figure} The $\mathcal{PT}$-optomechanical system we consider can be represented by the following equations: \begin{eqnarray}\label{Dynamical Equations of the optomechanical system} \dot{\alpha}_1&=&-\Gamma_{\rm op1}\alpha_1-i \kappa \alpha_2 -i g_{om}\alpha_1(\beta_1+\beta^*_1) +\sqrt{2\gamma_{1ex}}\epsilon_1,\nonumber\\ \dot{\alpha}_2&=&-\Gamma_{\rm op2}\alpha_2-i \kappa \alpha_1 -i g_{om}\alpha_2(\beta_2+\beta^*_2) +\sqrt{2\gamma_{2ex}}\epsilon_2,\nonumber\\ \dot{\beta}_1&=&-(\Gamma_{m1}+i\Omega_1)\beta_1-i g_{om} |\alpha_1|^2,\nonumber\\ \dot{\beta}_2&=&-(\Gamma_{m2}+i\Omega_2)\beta_2-i g_{om} |\alpha_2|^2, \end{eqnarray} where $\Gamma_{\rm op1}=-\gamma_1+i\Delta_1$ and $\Gamma_{\rm op2}=\gamma_2+i\Delta_2$. $\gamma_i$, $\gamma_{iex}$, $\Delta_i=\omega_{ci}-\omega_{L}$, and $\epsilon_i$ ($i=1,2$) denote the gain (loss) rate of the resonator $\mu C_i$, the external damping rate induced by the coupling between the resonator and the input/output fiber-taper, the detuning frequency between the resonance frequency ($\omega_{ci}$) of the cavity mode and the frequency ($\omega_{L}$) of the driving field, and the amplitude of the driving field, respectively. Without loss of generality, here we assume that $\Omega_2 \ge \Omega_1$. $\Omega_i$ and $\Gamma_{mi}$ represent the frequency and damping rate of the mechanical mode $\beta_i$. To simplify our discussion, we assume that the gain cavity $\mu C_1$ and the lossy cavity $\mu C_2$ have the same vacuum optomechanical coupling strength $g_{om}$ which quantifies the interaction between a single photon and a single phonon. 
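As a rough illustration of how Eq.~(\ref{Dynamical Equations of the optomechanical system}) can be explored numerically, the sketch below integrates the four coupled equations with an explicit Euler step. The parameter values follow those quoted later in the text, while the time step, integration length, and initial conditions are illustrative assumptions rather than the settings actually used for the figures.

```python
import numpy as np

# Explicit Euler integration of the coupled dynamical equations.
# Parameter values follow those quoted in the text (units: MHz);
# the time step and integration length are illustrative assumptions.
gamma = 30.0                  # balanced gain/loss rate gamma_1 = gamma_2
kappa = 40.0                  # inter-cavity optical coupling (PT regime)
g_om = 3e-3                   # vacuum optomechanical coupling
Delta1, Delta2 = 4.2, 5.0     # driving detunings
Omega1, Omega2 = 5.0, 15.0    # mechanical frequencies
Gm = 8e-3                     # mechanical damping Gamma_m1 = Gamma_m2
gex = gamma / 2.0             # critical coupling gamma_iex = gamma/2
eps = 70.0                    # driving amplitude, MHz^(1/2)

def rhs(s):
    a1, a2, b1, b2 = s
    # d alpha_1/dt = -Gamma_op1 alpha_1 - ..., with Gamma_op1 = -gamma + i Delta_1
    da1 = (gamma - 1j*Delta1)*a1 - 1j*kappa*a2 \
          - 1j*g_om*a1*(b1 + b1.conjugate()) + np.sqrt(2*gex)*eps
    da2 = (-gamma - 1j*Delta2)*a2 - 1j*kappa*a1 \
          - 1j*g_om*a2*(b2 + b2.conjugate()) + np.sqrt(2*gex)*eps
    db1 = -(Gm + 1j*Omega1)*b1 - 1j*g_om*abs(a1)**2
    db2 = -(Gm + 1j*Omega2)*b2 - 1j*g_om*abs(a2)**2
    return np.array([da1, da2, db1, db2])

state = np.zeros(4, dtype=complex)   # (alpha_1, alpha_2, beta_1, beta_2)
dt = 1e-4                            # time step in microseconds
for _ in range(20000):
    state = state + dt * rhs(state)
```

A production simulation would use an adaptive stiff integrator rather than a fixed Euler step; the sketch only shows how the equations map onto code.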
We also assume that the gain rate of $\mu C_1$ is equal to the damping rate of $\mu C_2$, i.e., $\gamma_2=\gamma_1\equiv\gamma$, which means that the gain and loss in the system are well balanced. Additionally, we consider the case of critical coupling such that $\gamma_{1ex}=\gamma_{2ex}=\gamma/2$. In general, the vacuum optomechanical coupling strength $g_{om}$ of typical optical cavities is very small~\cite{Aspelmeyer}, and thus the influence of the optomechanical interaction on the optical structure of our system can be ignored. Under the condition of symmetric optical driving detunings ($\Delta_{-}=\Delta_2-\Delta_1=0$), there exists a phase transition point, called the exceptional point (EP)~\cite{Jinghui,Schonleber,Liuzhongpeng,Jinghui2,Lvxinyou}, corresponding to a critical inter-cavity coupling strength $\kappa_{\rm EP}=\gamma$. When $\kappa>\kappa_{\rm EP}$, which corresponds to the so-called $\mathcal{PT}$-symmetric regime, there exist two non-degenerate optical supermodes with the same damping rate. When $\kappa\le\kappa_{\rm EP}$, which corresponds to the so-called broken-$\mathcal{PT}$-symmetric regime, the two optical supermodes are degenerate but with different damping rates. When the system is far away from the EP, the interaction between the optical supermodes and mechanical modes, i.e., the effective radiation-pressure coupling in the supermode picture, is weak. This kind of interaction is greatly enhanced as $\kappa$ approaches $\kappa_{\rm EP}$. This results from the topological-singularity-induced amplification of the optomechanical nonlinearity in the vicinity of the exceptional point~\cite{Jinghui,Schonleber,Liuzhongpeng,Jinghui2,Zhangjing1}. However, slightly different from Refs.~\cite{Jinghui,Schonleber,Liuzhongpeng,Jinghui2,Lvxinyou}, in this work we consider asymmetric optical driving detunings, i.e., $\Delta_{-}=\Delta_2-\Delta_1\ne 0$, in order to synchronize the two mechanical modes, as will be discussed in the following section.
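The supermode picture described above can be illustrated by diagonalizing the linear optical block of the dynamical equations for balanced gain and loss and zero detunings; this is a minimal sketch of the standard $\mathcal{PT}$-dimer calculation, not the derivation used in the paper.

```python
import numpy as np

# Eigenvalues of the linear optical part of the dynamical equations,
# d(alpha)/dt = A alpha, with balanced gain/loss and Delta_1 = Delta_2 = 0.
# For kappa < gamma (broken regime) the eigenvalues are real and opposite
# in sign (degenerate frequencies, different gain/loss rates); for
# kappa > gamma (PT-symmetric regime) they are purely imaginary (split
# frequencies, equal damping). The EP sits at kappa_EP = gamma.
gamma = 30.0  # MHz

def supermode_eigs(kappa):
    A = np.array([[ gamma,     -1j*kappa],
                  [-1j*kappa,  -gamma   ]])
    return np.linalg.eigvals(A)   # positive real part = net gain

e_broken = supermode_eigs(15.0)   # broken-PT regime: +/- sqrt(gamma^2 - kappa^2)
e_pt = supermode_eigs(45.0)       # PT regime: +/- i sqrt(kappa^2 - gamma^2)
e_ep = supermode_eigs(30.0)       # exceptional point: degenerate at zero
```

This reproduces the qualitative structure of the two regimes discussed above; the full system also contains the optomechanical terms, which are neglected here.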
The difference between the two optical driving detunings $\Delta_{-}$ is small enough that the $\mathcal{PT}$-symmetric structure of our system is still preserved, i.e., the optomechanical interaction can still be greatly amplified near the exceptional point. Here, we consider the condition (see Appendix \ref{Weaker condition of PT-symmetry}) \begin{equation}\label{Weaker condition of PT-symmetric optomechanical system} g_{om} \ll \Delta_{-}\ll \sqrt[3]{\frac{2}{3}\gamma\left(g_{om}^{2}\frac{\Omega_2+\Omega_1}{\Omega_1\Omega_2}\gamma\epsilon^2 \right)^{2}} \ll \gamma, \kappa, \end{equation} under which the non-degeneracy between the optical supermodes at the exceptional point can be approximately given by \begin{equation} \frac{\Delta_{\rm{split}}}{\gamma}\approx \sqrt{\frac{\Delta_{-}^3}{\frac{2}{3}\gamma \left( g_{om}^2\frac{\Omega_2+\Omega_1}{\Omega_1\Omega_2} \gamma\epsilon^2\right)^{2/3}}},\nonumber \end{equation} where $\Delta_{\rm{split}}=\rm{Im}[\omega_{o+}-\omega_{o-}]=\rm{Re}[\omega_{o+}-\omega_{o-}]$, and $\omega_{o\pm}$ are the eigenvalues of the optical supermodes. This non-degeneracy $\Delta_{\rm{split}}$ is so small that the $\mathcal{PT}$-symmetric structure of our system is still preserved. Given the system parameters $\gamma=30$~MHz, $\Delta_1=4.2$~MHz, $\Delta_2=5$~MHz, $\Omega_1=5$~MHz, $\Omega_2=15$~MHz, $\Gamma_{m1}=8$~kHz, $\Gamma_{m2}=8$~kHz, $g_{om}=3$~kHz, and $\epsilon=70$~MHz$^{1/2}$, the simulation results for the mode splitting and linewidth of the optical supermodes are shown in Figs.~\ref{Fig2}(a) and (b). \begin{figure}[h] \centerline{ \includegraphics[width=8.6 cm,clip]{Fig2.eps}} \caption{(Color online) (a) Linewidth of the supermodes, i.e., the real parts of the eigenfrequencies, (b) mode splitting of the supermodes, i.e., the imaginary parts of the eigenfrequencies.
The green region is the broken-$\mathcal{PT}$-symmetric regime, and the pink region corresponds to the $\mathcal{PT}$-symmetric regime.}\label{Fig2} \end{figure} It is obvious that the non-degeneracy at the EP in Fig.~\ref{Fig2} is negligibly small, and the broken-$\mathcal{PT}$-symmetric and $\mathcal{PT}$-symmetric regimes can be clearly observed. It should be noted that although one eigenfrequency of the optical supermodes has a positive real component in the broken-$\mathcal{PT}$-symmetric regime (Fig.~\ref{Fig2}(a)), the saturation nonlinearity induced by the optomechanical coupling will suppress the divergence induced by this positive rate~\cite{XinZhou, Hassan}. \section{Frequency synchronization via $\mathcal{PT}$-symmetry}\label{s3} When the degrees of freedom of the optical modes are adiabatically eliminated, under the condition that the optical decay rates are much larger than the mechanical decay rates, the enhanced optomechanical coupling, induced by the topological-singularity-induced amplification of the optomechanical nonlinearity, will lead to significant effective frequency shifts $\delta\Omega_1$ and $\delta\Omega_2$ for the mechanical modes $\beta_1$ and $\beta_2$ in the vicinity of the EP. In fact, under the condition given in Eq.~(\ref{Weaker condition of PT-symmetric optomechanical system}) and with $\epsilon_1=\epsilon_2\equiv\epsilon$, $\delta\Omega_1$ and $\delta\Omega_2$ near the EP can be written as (see Appendix~\ref{Appendix: effective frequency shifts and coupling} for a detailed derivation) \begin{equation}\label{Mechanical frquency shift in the vicinity of exceptional point} \delta\Omega_{1}=-\delta\Omega_2\approx\frac{g_{om}^2\Delta_{-}(\gamma^2+\kappa^2)^2\gamma\epsilon^2} {\left[(\kappa^2-\gamma^2)^2+\gamma^2\Delta_{-}^{2} \right]^2}.
\end{equation} Here, in order to synchronize the two mechanical oscillators, we require that $\Delta_{1}$ and $\Delta_2$ have a small difference, which ensures that $\delta\Omega_{1}$ and $\delta\Omega_{2}$ are opposite in sign while the influence on the $\mathcal{PT}$-symmetric structure remains very small. \begin{figure}[h] \centerline{ \includegraphics[width=8.6 cm,clip]{Fig3.eps}} \caption{(Color online) (a) Optomechanics-induced mechanical frequency shifts $\delta\Omega_{1,2}$ of the two optomechanical resonators versus the optical coupling strength $\kappa$, in both the broken-$\mathcal{PT}$-symmetric regime and the $\mathcal{PT}$-symmetric regime. (b) Effective coupling strength $\kappa_{\rm{mech}}$ between the two mechanical modes versus the optical coupling strength $\kappa$. }\label{Fig3} \end{figure} We show in Fig.~\ref{Fig3}(a) the optomechanics-induced mechanical frequency shifts $\delta\Omega_1$ (red-solid curve) and $\delta\Omega_2$ (blue-dashed curve) of the two resonators versus the optical coupling strength $\kappa$, in both the broken-$\mathcal{PT}$-symmetric and $\mathcal{PT}$-symmetric regimes. When the system is far away from the exceptional point, the optomechanics-induced mechanical frequency shift $\delta\Omega_i$ is negligibly small. However, $\delta\Omega_i$ is greatly enhanced, becoming comparable with or even larger than $\Omega_i$, when $\kappa$ approaches $\kappa_{\rm{EP}}$. As these two enhanced frequency shifts of the mechanical modes are opposite in sign, they lead to significant modifications of the mechanical frequencies $\Omega_1$ and $\Omega_2$ and make the two mechanical frequencies approach each other. Thus the two oscillators tend to become resonant with each other, and synchronization occurs.
Moreover, the enhanced optomechanical coupling can also induce an enhancement of the effective mechanical interaction between the mechanical modes $\beta_1$ and $\beta_2$ in the vicinity of the EP. In fact, by adiabatically eliminating the degrees of freedom of the optical modes, we obtain the effective coupling strength $\kappa_{\rm{mech}}$ between the two mechanical modes $\beta_1$ and $\beta_2$ as \begin{equation}\label{Effective mechanical coupling in the vicinity of exceptional point} \kappa_{\rm mech}\approx\frac{4g_{om}^2\Delta_{-}\kappa^2\gamma^3\epsilon^2} {\left[ (\kappa^2-\gamma^2)^2+\gamma^2\Delta_{-}^2\right]^2}. \end{equation} In Fig.~\ref{Fig3}(b) the effective mechanical coupling strength $\kappa_{\rm{mech}}$ versus the optical coupling strength $\kappa$ is plotted, in both the broken-$\mathcal{PT}$-symmetric and $\mathcal{PT}$-symmetric regimes. It can be clearly seen that the effective mechanical coupling strength $\kappa_{\rm{mech}}$ is negligibly small when the system is far away from the exceptional point, but is significantly enhanced when $\kappa$ approaches $\kappa_{\rm{EP}}$. This enhanced effective mechanical interaction in the vicinity of the EP can also contribute to synchronization between the two mechanical modes $\beta_1$ and $\beta_2$, since the enhanced $\kappa_{\rm{mech}}$ can greatly change the mechanical frequencies $\Omega_1$ and $\Omega_2$ and bring the two mechanical frequencies close to each other (a detailed discussion can be found in Appendix~\ref{The influence of the effective mechanical coupling on synchronization}).
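The enhancement described by Eqs.~(\ref{Mechanical frquency shift in the vicinity of exceptional point}) and (\ref{Effective mechanical coupling in the vicinity of exceptional point}) can be checked by evaluating the two expressions directly; the sketch below uses the parameter values quoted earlier in the text and compares a coupling strength far from the EP with one close to it.

```python
import numpy as np

# Direct evaluation of the frequency-shift and effective-coupling
# expressions with the parameter values quoted in the text, showing the
# enhancement of both quantities near kappa_EP = gamma. Units: MHz.
gamma, g_om, eps = 30.0, 3e-3, 70.0
Delta_m = 5.0 - 4.2            # Delta_- = Delta_2 - Delta_1

def denom(kappa):
    return ((kappa**2 - gamma**2)**2 + gamma**2 * Delta_m**2)**2

def d_omega1(kappa):           # frequency shift; delta Omega_2 = -delta Omega_1
    return (g_om**2 * Delta_m * (gamma**2 + kappa**2)**2
            * gamma * eps**2) / denom(kappa)

def kappa_mech(kappa):         # effective mechanical coupling
    return (4 * g_om**2 * Delta_m * kappa**2
            * gamma**3 * eps**2) / denom(kappa)

far = d_omega1(2.0 * gamma)    # far from the EP: negligible shift
near = d_omega1(1.01 * gamma)  # near the EP: comparable to Omega_1 = 5 MHz
```

With these numbers the shift near the EP comes out at a few MHz, i.e., of the same order as the bare mechanical frequencies, consistent with Fig.~\ref{Fig3}.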
Actually, the effective mechanical frequencies of the two mechanical oscillators in the vicinity of the EP can be expressed as $\Omega_{\rm{1,eff}}=\Omega_1+\delta\Omega_1+\delta\Omega_{\rm{coup}}$ and $\Omega_{\rm{2,eff}}=\Omega_2+\delta\Omega_2-\delta\Omega_{\rm{coup}}$, respectively, where $\delta\Omega_{\rm{coup}}$ is induced by the effective mechanical coupling strength $\kappa_{\rm{mech}}$ (see Appendix~\ref{The influence of the effective mechanical coupling on synchronization}). This means that the enhanced optomechanics-induced mechanical frequency shifts $\delta\Omega_1 / \delta\Omega_2$ and the effective mechanical coupling strength $\kappa_{\rm{mech}}$ together result in significant modifications of the mechanical frequencies $\Omega_1 / \Omega_2$, and thus jointly contribute to the synchronization between the two mechanical oscillators, i.e., $\Omega_{\rm{1,eff}}=\Omega_{\rm{2,eff}}$. We show in Fig.~\ref{Fig4}(a) the effective mechanical frequencies $\Omega_{\rm{1,eff}}$ (red-solid curve) and $\Omega_{\rm{2,eff}}$ (blue-dashed curve) of the two resonators versus the optical coupling strength $\kappa$, in both the broken-$\mathcal{PT}$-symmetric and $\mathcal{PT}$-symmetric regimes. It is clear that the two mechanical oscillators tend to become resonant with each other, i.e., $\Omega_{\rm{1,eff}}=\Omega_{\rm{2,eff}}$, and thus synchronize, when $\kappa$ approaches $\kappa_{\rm{EP}}$. As is well known, the frequency mismatch between two synchronized oscillators should be very small in traditional lossy systems~\cite{Li,Zhangmian}, i.e., $|\Omega_1-\Omega_2|\ll \Omega_{1},\Omega_2$. However, as shown in Fig.~\ref{Fig4}, our $\mathcal{PT}$-symmetric system can perfectly synchronize two far-off-resonant mechanical oscillators. Indeed, as shown in Fig.~\ref{Fig4}(a), the effective mechanical frequencies of the two optomechanical resonators $\Omega_{\rm{1,eff}}$ and $\Omega_{\rm{2,eff}}$ coincide with each other when $\kappa$ approaches $\kappa_{\rm{EP}}$.
\begin{figure}[h] \centerline{ \includegraphics[width=9 cm, clip]{Fig4.eps}} \caption{(color online) (a) Effective mechanical frequencies $\Omega_{\rm 1,eff}$ and $\Omega_{\rm 2,eff}$ versus the optical coupling strength $\kappa$, where the red solid (blue dashed) curve represents the frequency of $\beta_1$ ($\beta_2$), and the light green (pink) area is the broken-$\mathcal{PT}$-symmetric ($\mathcal{PT}$-symmetric) regime. (b) Numerical results of the cross-correlation $M_{cc}$ for different values of $\kappa$ in the broken-$\mathcal{PT}$-symmetric and $\mathcal{PT}$-symmetric regimes. (c) Spectrograms of the mechanical modes $x_1$ and $x_2$ with increasing optical coupling strength $\kappa$ in the broken-$\mathcal{PT}$-symmetric regime. Here, $\kappa\uparrow$ and $\kappa\downarrow$ denote the increase and decrease of $\kappa$. (d) Spectrograms of the mechanical modes $x_1$ and $x_2$ with decreasing optical coupling strength $\kappa$ in the $\mathcal{PT}$-symmetric regime, in which a weaker coupling strength $\kappa$ makes the two resonators easier to synchronize.}\label{Fig4} \end{figure} In addition, we find a counterintuitive phenomenon: {\it weaker} coupling between the two optomechanical resonators may be {\it helpful} for synchronization in our $\mathcal PT$ optomechanical system. In fact, as shown in Fig.~\ref{Fig4}(a), in the $\mathcal PT$-symmetric regime (the pink region), when the coupling strength $\kappa$ between the two resonators is {\it decreased}, the effective mechanical frequencies of the two resonators tend to coincide with each other, which means that $\beta_1$ and $\beta_2$ are inclined to oscillate in unison with weaker coupling strength $\kappa$ in the $\mathcal PT$-symmetric regime. The broken-$\mathcal PT$-symmetric regime is the normal regime, where stronger coupling between the two optomechanical resonators makes the two mechanical modes $\beta_1$ and $\beta_2$ more inclined to synchronize.
We can see this phenomenon more easily by plotting the spectra of the normalized mechanical displacements of the two optomechanical resonators, $x_1=(\beta_1+\beta_1^*)/2$ (the red solid curve) and $x_2=(\beta_2+\beta_2^*)/2$ (the blue dashed curve), in Figs.~\ref{Fig4}(c) and (d), where $\kappa$ is increased from $2$ MHz to $29.86$ MHz in Fig.~\ref{Fig4}(c), and is decreased from $50$ MHz to $30.81$ MHz in Fig.~\ref{Fig4}(d). To give more insight into these phenomena, we plot in Fig.~\ref{Fig4}(b) the cross-correlation function $M_{cc}$ between the two mechanical displacements $x_1$ and $x_2$ for different inter-cavity optical coupling strengths $\kappa$, where $M_{cc}$ is defined as~\cite{RNBracewell,Rabiner,Anstey,White,Heel,Lewis} \begin{eqnarray} M_{cc}&=&\mathop{\max}\limits_{0<t<+\infty}\frac{1}{\sqrt{\phi_1\phi_2}}{\int_{0}^{+\infty} {x_1(\tau-t)x_2(\tau)d\tau}},\nonumber\\ \phi_i&=&\int_{0}^{+\infty}{x_i^2(\tau)d\tau}. \end{eqnarray} This normalized cross-correlation function varies between 0 and 1. The maximum value $M_{cc}=1$ indicates that the two time series of the mechanical displacements $x_1$ and $x_2$ have exactly the same shape, even though their amplitudes may be different, which implies that the two self-sustained oscillators have the same frequency, that is, the onset of synchronization. As shown in Fig.~\ref{Fig4}(b), in the $\mathcal{PT}$-symmetric regime, a smaller $\kappa$ induces a higher value of $M_{cc}$ (the red solid curve), and $M_{cc}$ reaches its maximum value (unity) as $\kappa$ decreases and approaches the EP, which means that the two mechanical displacements $x_1$ and $x_2$ tend to be synchronized with the decrease of the inter-cavity coupling strength.
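A simple discrete-time stand-in for this cross-correlation measure (not the paper's actual numerics) can be written as follows: it maximizes the lag-shifted overlap of two sampled signals and normalizes by their energies, so that two sinusoids of equal frequency yield a value close to one regardless of their amplitudes and phases, while far-off-resonant signals yield a small value.

```python
import numpy as np

# Discrete-time sketch of the normalized cross-correlation M_cc:
# maximize the lag-shifted overlap of x1 and x2, normalized by their
# energies. Note: this scans both positive and negative lags, a slight
# simplification of the one-sided definition in the text.
def m_cc(x1, x2):
    x1 = np.asarray(x1, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    corr = np.correlate(x2, x1, mode="full")
    norm = np.sqrt(np.sum(x1**2) * np.sum(x2**2))
    return np.max(corr) / norm

t = np.linspace(0.0, 100.0, 4000)
sync = m_cc(np.sin(5*t), 3.0*np.sin(5*t + 1.0))   # same frequency
desync = m_cc(np.sin(5*t), np.sin(15*t))          # far-off-resonant
```

The equal-frequency pair gives a value near unity despite the amplitude and phase mismatch, while the detuned pair gives a much smaller value, mirroring the behavior of $M_{cc}$ in Fig.~\ref{Fig4}(b).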
However, in the broken-$\mathcal{PT}$-symmetric regime (the blue dashed curve), the cross-correlation function increases and tends to unity with the increase of $\kappa$, which means that a stronger inter-cavity coupling strength is helpful for synchronization, as we expect. \section{Noise-enhanced synchronization in $\mathcal{PT}$-symmetric optomechanical system}\label{s4} \subsection{Stochastic noises in the optical modes} We now study the effects of stochastic noises on our $\mathcal{PT}$-symmetric system. Two independent and identically distributed Gaussian white noises $\xi_{1,2}$ are introduced for the two optical modes $\alpha_{1,2}$, such that $\left<\xi_i(t)\,\xi_j(t+\tau)\right>=2D\delta_{ij}\delta(\tau)$, where $D$ is the intensity of the noises. Here, we have included the shifts of the damping rates induced by the stochastic noises into the gain ($\gamma_1$) and loss ($\gamma_2$) rates of our optomechanical system. Thus the dynamical equations of our $\mathcal{PT}$-symmetric system can be reexpressed as \begin{eqnarray}\label{Dynamical Equations with noises} \dot{\alpha}_1&=&i\left(\Delta_1+g_{om}x_1\right)\alpha_1+\gamma_1\alpha_1-i\kappa \alpha_2 +\sqrt{2\gamma_{1ex}}\epsilon_1 \nonumber\\ &&+\xi_1(t),\nonumber\\ \dot{\alpha}_2&=&i\left(\Delta_2+g_{om}x_2\right)\alpha_2-\gamma_2\alpha_2-i\kappa \alpha_1 +\sqrt{2\gamma_{2ex}}\epsilon_2 \nonumber\\ &&+\xi_2(t),\nonumber\\ \ddot{x}_1&=&-2\Gamma_{m1}\dot{x}_1-\Omega_1^2x_1-g_{om}\left|\alpha_1\right|^2,\nonumber\\ \ddot{x}_2&=&-2\Gamma_{m2}\dot{x}_2-\Omega_2^2x_2-g_{om}\left|\alpha_2\right|^2. \end{eqnarray} \begin{figure}[ptb] \centerline{ \includegraphics[width=8.5 cm, clip]{Fig5.eps}} \caption{(color online) (a) Effects of the stochastic noises on $M_{cc}$ with respect to different stochastic noise intensities $D$ in the broken-$\mathcal{PT}$-symmetric regime with $\kappa=27.76$ MHz. (b) Variance of $M_{cc}$ versus the noise level $D$ in (a).
(c) Effects of the stochastic noises on $M_{cc}$ versus different $D$ in the $\mathcal{PT}$-symmetric regime with $\kappa=32.19$ MHz. The variance of $M_{cc}$ is presented in (d). }\label{Fig5} \end{figure} We present the numerical results for the cross-correlation function $M_{cc}$ between the two mechanical oscillators in Figs.~\ref{Fig5}(a) and (c), obtained by changing the noise strength $D$ and fixing the other parameters, in both the broken-$\mathcal{PT}$-symmetric and $\mathcal{PT}$-symmetric regimes. It can be seen that $M_{cc}$ is enhanced with increasing noise intensity $D$ in both the broken-$\mathcal{PT}$-symmetric and $\mathcal{PT}$-symmetric regimes, reaches a maximum at a particular noise level, and then decreases at higher noise intensities. This means that the synchronization process may benefit from noises~\cite{Neiman1,Han,Neiman2,Nakao,Nagai,Lai,Zhou,Daihai} in our optomechanical $\mathcal{PT}$-symmetric system. To interpret what we observe, note that the noise randomly shifts the frequencies of the mechanical modes, especially when we approach the EP, where the effects of noise are enhanced~\cite{HSchomerus2,SYLee,GYoo,JZhang}. Since the frequencies of the two mechanical modes are far separated, these random frequency shifts may, with a certain probability, decrease the difference between the frequencies of the two mechanical modes as the noise strength $D$ increases, and thus increase the cross-correlation function $M_{cc}$. When we increase the noise strength $D$ further, the noise becomes strong enough to destroy the periodic oscillation of each mechanical oscillator and the $\mathcal{PT}$-symmetric structure of the optomechanical system, and thus decreases the degree of synchronization between the two mechanical oscillators. This interpretation can also be confirmed by checking the variance of $M_{cc}$ versus the noise strength $D$ (Figs.~\ref{Fig5}(b) and (d)).
The variance of $M_{cc}$ first increases with increasing noise strength $D$ (note that $M_{cc}$ increases at the same time), which means that more noise enters the system although $M_{cc}$ is increased. The variance of $M_{cc}$ then decreases when we increase $D$ further, because the value of $M_{cc}$ is too small in this case and the noise-induced fluctuations in $M_{cc}$ are suppressed. To give more insight into synchronization with optical stochastic noises in our $\mathcal{PT}$-symmetric optomechanical system, we present an additional analysis of another index of synchronization---the Kramers rate, which is more suitable for describing noisy synchronized systems. The Kramers rates of two subsystems are alternative indices showing the correlation between the two subsystems. When the Kramers rates of two subsystems coincide with each other, the two subsystems are well correlated~\cite{Neiman1}. We then calculate the Kramers rates $r_1$ and $r_2$ of the mechanical displacements $x_1$ and $x_2$, respectively. The Kramers rate was originally defined as the transition rate of a particle between neighboring potential wells caused by stochastic forces, first proposed by Kramers in 1940~\cite{Kramers}. Here, we use the mean first passage time \cite{Klein,Hofmann}, i.e., the average time that the particle takes to move from one potential well to the other, to evaluate the Kramers rates $r_1$ and $r_2$ of the mechanical displacements $x_1$ and $x_2$. We first obtain the histograms of $x_{1,2}$ through numerical simulation, and then find the locations with the maximum probability of $x_{1,2}$, i.e., the potential wells of $x_{1,2}$, based on the distribution of the histograms, from which we can obtain the mean first passage times $\tau_{1,2}$, i.e., the average values of the time intervals between the two potential wells for each mechanical displacement.
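The procedure just described can be illustrated on a toy bistable trajectory. The overdamped double-well Langevin model below is purely an assumed stand-in for the mechanical displacements, used to show the histogram-based well location and the mean-first-passage-time estimate; the reciprocal of the mean first-passage time then gives the Kramers rate.

```python
import numpy as np

# Toy illustration of the Kramers-rate estimate: generate a noisy
# double-well trajectory, locate the two wells from its histogram, and
# take the reciprocal of the mean first-passage time between them.
rng = np.random.default_rng(1)
D, dt, n = 0.5, 0.01, 200_000
x = np.empty(n)
x[0] = -1.0
for i in range(1, n):   # Euler-Maruyama for U(x) = (x^2 - 1)^2 / 4
    x[i] = x[i-1] - (x[i-1]**3 - x[i-1])*dt \
           + np.sqrt(2*D*dt)*rng.standard_normal()
t = np.arange(n) * dt

# locate the wells: the most probable value on each side of the mean
hist, edges = np.histogram(x, bins=60)
centers = 0.5*(edges[:-1] + edges[1:])
mask = centers < x.mean()
well_a = centers[mask][np.argmax(hist[mask])]
well_b = centers[~mask][np.argmax(hist[~mask])]

# mean first-passage time between alternating visits to the two wells
tol = 0.2 * (well_b - well_a)
target, t0, passages = well_a, None, []
for xi, ti in zip(x, t):
    if abs(xi - target) < tol:
        if t0 is not None:
            passages.append(ti - t0)
        t0, target = ti, (well_b if target == well_a else well_a)
kramers_rate = 1.0 / np.mean(passages)
```

For this potential the wells sit near $\pm 1$ and the estimated rate is of the same order as the analytical Kramers prediction; the same recipe, applied to $x_1$ and $x_2$, yields the rates $r_1$ and $r_2$ discussed below.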
The Kramers rates $r_1$ and $r_2$ can then be calculated as the reciprocals of the mean first passage times $\tau_{1,2}$, i.e., $r_{i}=1/\tau_i$ ($i=1,2$). The simulation results for $r_1$ and $r_2$ are presented in Fig.~\ref{Fig6}. It can be seen that, in both the broken-$\mathcal{PT}$-symmetric (Fig.~\ref{Fig6}(a)) and $\mathcal{PT}$-symmetric (Fig.~\ref{Fig6}(b)) regimes, the Kramers rates $r_1$ and $r_2$ get closer with the increase of the noise intensity, which means that the partial frequencies of the mechanical displacements $x_1$ and $x_2$ get closer when the noise intensity $D$ is increased. This means that the optical stochastic noises can improve the correlation between $x_1$ and $x_2$. \begin{figure}[h] \centerline{ \includegraphics[width=8.6 cm,clip]{Fig6.eps}} \caption{(Color online) The Kramers rates $r_1$ and $r_2$ of the mechanical displacements $x_1$ and $x_2$ versus the noise intensity $D$ in the broken-$\mathcal{PT}$-symmetric and $\mathcal{PT}$-symmetric regimes. (a) The red solid curve (blue dashed curve) represents the Kramers rate $r_1$ ($r_2$) versus the noise intensity $D$ in the broken-$\mathcal{PT}$-symmetric regime. Here the optical coupling strength $\kappa=27.76$ MHz is fixed. (b) The Kramers rates $r_1$ and $r_2$ for different stochastic noise intensities $D$ in the $\mathcal{PT}$-symmetric regime, where the optical coupling strength is fixed at $\kappa=32.19$ MHz.}\label{Fig6} \end{figure} \subsection{Thermal noises in the mechanical modes} In the above analysis we did not consider the effects of the thermal noises in the mechanical modes. Actually, these thermal noises can also benefit the synchronization between the two mechanical modes in our $\mathcal{PT}$-symmetric optomechanical system.
In order to simplify our discussion, we only consider the thermal noises in the mechanical modes in this section, and assume that they are white noises, under which the Langevin equations of the mechanical modes can be expressed as \begin{eqnarray}\label{The reduced Langevin equation with Brownian noises} \ddot{x}_1&=&-2\Gamma_{m}\dot{x}_1-\tilde{\Omega}_1^2x_1-\kappa_{\rm{mech}} x_2 +\Gamma_{\rm{noise}1}(t),\nonumber\\ \ddot{x}_2&=&-2\Gamma_{m}\dot{x}_2-\tilde{\Omega}_2^2x_2-\kappa_{\rm{mech}} x_1 +\Gamma_{\rm{noise}2}(t), \end{eqnarray} where the constant driving terms induced by the optical modes have been absorbed into $x_{1,2}$ by a coordinate transformation for simplicity. The mechanical damping rate $\Gamma_{m}$ includes the damping-rate shift $\delta\Gamma_m$ induced by the corresponding thermal noise, i.e., $\Gamma_m=\Gamma_{mo}+\delta\Gamma_m$, where $\Gamma_{mo}$ is the original mechanical damping rate without thermal noise. The mechanical thermal noises $\Gamma_{\rm{noise1}}$ and $\Gamma_{\rm{noise2}}$ are diffusion terms with a $\delta$-correlated Gaussian distribution \begin{eqnarray} \left\langle \Gamma_{\rm{noise}\ i}(t) \right\rangle &=& 0,\nonumber\\ \left\langle \Gamma_{\rm{noise}\ i}(t)\Gamma_{\rm{noise}\ j}(t') \right\rangle &=& 4\Gamma_m k T\delta(t-t'), \end{eqnarray} where $k$ is the Boltzmann constant and $T$ is the temperature. To show the positive influence of the thermal noises on the synchronization, we present the numerical results for the normalized correlation function $R$~\cite{Risken} between the two mechanical oscillators in Figs.~\ref{Fig7}(a) and (b), obtained by changing the temperature $T$ and fixing the other parameters, in both the broken-$\mathcal{PT}$-symmetric and $\mathcal{PT}$-symmetric regimes, where $T_r$ is the room temperature.
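A minimal Euler-Maruyama integration of Eq.~(\ref{The reduced Langevin equation with Brownian noises}) can be sketched as follows; all parameter values, and the unit choice $kT=1$, are illustrative assumptions rather than the settings used for Figs.~\ref{Fig7} and \ref{Fig8}.

```python
import numpy as np

# Euler-Maruyama integration of the thermal Langevin equations.
# All parameter values are illustrative; kT is set to 1 so that the noise
# intensity is q = 4*Gamma_m*k*T = 4*Gamma_m.
rng = np.random.default_rng(0)
Gm = 0.01                  # mechanical damping Gamma_m
W1, W2 = 1.0, 1.1          # effective frequencies Omega~_1, Omega~_2
k_mech = 0.05              # effective mechanical coupling kappa_mech
q = 4.0 * Gm * 1.0         # noise intensity 4*Gamma_m*k*T with k*T = 1
dt, n = 1e-3, 100_000

x = np.zeros((n, 2))       # displacements x_1, x_2
v = np.zeros(2)            # velocities
for i in range(1, n):
    x1, x2 = x[i-1]
    acc = np.array([-2*Gm*v[0] - W1**2*x1 - k_mech*x2,
                    -2*Gm*v[1] - W2**2*x2 - k_mech*x1])
    v = v + acc*dt + np.sqrt(q*dt)*rng.standard_normal(2)
    x[i] = x[i-1] + v*dt   # semi-implicit (symplectic) Euler step
```

Correlation measures such as $R$ can then be evaluated on the sampled trajectories `x[:, 0]` and `x[:, 1]`.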
In the broken-$\mathcal{PT}$-symmetric regime with optical coupling strength $\kappa=27.76$ MHz, $R$ (blue-dashed curve) is enhanced with increasing temperature $T$, and reaches 0.61 at the room temperature $T_r$, which is larger than the value of 0.48 obtained when the thermal noises are ignored. Similarly, in the $\mathcal{PT}$-symmetric regime with optical coupling strength $\kappa=32.19$ MHz, $R$ (red-solid curve) increases with the temperature $T$, and reaches 0.65 at the room temperature, which is larger than the value of 0.51 obtained when the thermal noises are ignored. This means that the thermal noises in the mechanical modes can also benefit the synchronization between the two mechanical modes in our optomechanical $\mathcal{PT}$-symmetric system. \begin{figure}[h] \centerline{ \includegraphics[width=8.6 cm,clip]{Fig7.eps}} \caption{(Color online) Numerical results of the normalized correlation function $R$ for different values of the temperature $T$ in the broken-$\mathcal{PT}$-symmetric and $\mathcal{PT}$-symmetric regimes, where $T_r$ denotes the room temperature. (a) Effects of the thermal noises on $R$ with respect to different temperatures $T$ in the broken-$\mathcal{PT}$-symmetric regime with $\kappa=27.76$ MHz. (b) Effects of the thermal noises on $R$ versus different $T$ in the $\mathcal{PT}$-symmetric regime with $\kappa=32.19$ MHz. } \label{Fig7} \end{figure} To give more insight into the phenomenon presented, we calculate the Kramers rates $r_1$ and $r_2$ of the mechanical displacements $x_1$ and $x_2$. The simulation results for the Kramers rates $r_1$ and $r_2$ are shown in Figs.~\ref{Fig8}(a) and (b). In Fig.~\ref{Fig8}(a), the red solid curve denotes the Kramers rate $r_1$ for different values of the temperature $T$ in the broken-$\mathcal{PT}$-symmetric regime with optical coupling strength $\kappa=27.76$ MHz, and the blue dashed curve corresponds to the Kramers rate $r_2$.
We can see in Fig.~\ref{Fig8}(a) that the Kramers rates $r_1$ and $r_2$ tend to get closer to each other as the temperature $T$ increases to the room temperature $T_r$. A similar phenomenon can be observed in the $\mathcal{PT}$-symmetric regime, as shown in Fig.~\ref{Fig8}(b): the mechanical thermal noises tend to decrease the difference between the Kramers rates $r_1$ and $r_2$ as the temperature increases to the room temperature, where the optical coupling strength is fixed at $\kappa=32.19$ MHz. These simulation results indicate that more mechanical thermal noise brings the partial frequencies of the two mechanical displacements $x_1$ and $x_2$ closer to each other, and thus benefits the synchronization in our $\mathcal{PT}$-symmetric optomechanical system. \begin{figure}[h] \centerline{ \includegraphics[width=8.6 cm,clip]{Fig8.eps}} \caption{(Color online) The Kramers rates $r_1$ and $r_2$ of the mechanical displacements $x_1$ and $x_2$ versus the temperature $T$ in both the broken-$\mathcal{PT}$-symmetric and $\mathcal{PT}$-symmetric regimes, where $T_r$ is the room temperature. (a) The red solid curve (blue dashed curve) denotes the Kramers rate $r_1$ ($r_2$) with increasing temperature $T$ in the broken-$\mathcal{PT}$-symmetric regime, where the optical coupling strength $\kappa=27.76$ MHz is fixed. (b) The Kramers rates $r_1$ and $r_2$ versus the temperature $T$ in the $\mathcal{PT}$-symmetric regime ($\kappa=32.19$ MHz).}\label{Fig8} \end{figure} Furthermore, we can also observe the beneficial effect of the mechanical thermal noises on the synchronization by theoretically analyzing the correlation function between the two mechanical modes at small times $t$.
Actually, in the small-time limit \cite{Risken}, the normalized correlation function between the two mechanical modes can be approximated as (see the derivations in Appendix \ref{Derivation of the normalized correlation function}) \begin{eqnarray}\label{Normalized correlation function} R(\tau,t)&\approx& 1-2\tilde\Omega_1^2\tau t + \frac{q}{2} \kappa_{\rm{mech}}\tilde\Omega_1^2\tau t^2 +\frac{q}{3} \kappa_{\rm{mech}}\tilde\Omega_1^2 \tau t^3\nonumber\\ &=&1-2\tilde\Omega_1^2\tau t+2\Gamma_m k T\kappa_{\rm{mech}}\tilde\Omega_1^2\tau t^2\nonumber\\ &&+\frac{4}{3}\Gamma_m k T\kappa_{\rm{mech}}\tilde\Omega_1^2 \tau t^3, \end{eqnarray} where $q$ is the intensity of the mechanical thermal noises, i.e., $q=4\Gamma_m k T$. Equation~(\ref{Normalized correlation function}) shows that the normalized correlation function $R$ can be enhanced by increasing the intensity of the thermal noises, which is consistent with the simulation results shown in Figs.~\ref{Fig7} and \ref{Fig8}. This confirms that the thermal noises in the mechanical modes can benefit the synchronization in our $\mathcal{PT}$-symmetric synchronization system. \section{Conclusion and discussion}\label{s5} We have shown that the mechanical motions of two coupled $\mathcal{PT}$-symmetric optomechanical resonators with far-off-resonant mechanical frequencies can be synchronized when the system approaches the EP. In particular, in the $\mathcal{PT}$-symmetric regime, the two mechanical modes are more easily synchronized with a weaker optical coupling strength between the two optomechanical resonators. Additionally, it is shown that noises are enhanced in the vicinity of the EP in our system, and the enhanced noises benefit the synchronization process provided that they are not too strong. Our study opens up a new dimension of research on $\mathcal{PT}$-symmetric optomechanical systems for possible applications such as metrology, cooling, and communication.
It also gives new perspectives for synchronization in optomechanical systems. \section{Acknowledgments}\label{s6} JZ is supported by the NSFC under Grant Nos. 61622306, 11674194. YXL and JZ are supported by the National Basic Research Program of China (973 Program) under Grant No. 2014CB921401, the Tsinghua University Initiative Scientific Research Program, and the Tsinghua National Laboratory for Information Science and Technology (TNList) Cross-discipline Foundation. JZ is also supported by the Youth Innovation Fund of Beijing National Research Center for Information Science and Technology (BNRist). LY is supported by the NSF grant No. EFMA1641109, ARO grant No. W911NF1210026 and ARO grant No. W911NF1710189.
\section{Introduction} Finsler geometry is a well-known mathematical framework that is the nearest generalization of Riemannian metric geometry~\cite{Rund:1959,Asanov:1985,Matsumoto:1986}. There are various approaches to a Finsler formulation of gravity theory. In Ref. \cite{Chang:2008yv} it was found that a certain Finsler structure makes modified Newtonian gravity equivalent to modified Newtonian dynamics. In Ref. \cite{Huang:2007en} a theory of gravitation in spacetime with Finsler structure is constructed. In Ref. \cite{Pfeifer:2011xi} gravitational dynamics for Finsler spacetimes is constructed in terms of an action integral on the unit tangent bundle. In Ref. \cite{Gibbons:2007iu} it was shown that a deformation of very special relativity leads in a natural way to Finsler geometry. Earlier, Finslerian post-Riemannian corrections to the equations of geodesics were analyzed in Ref.~\cite{Aringazin-Asanov:1985}. In Ref.~\cite{Aringazin-Asanov:1987} a class of static even-power Finsler metric functions was introduced, for which bounds on the characteristic Finslerian parameters coming from classical gravitational tests were obtained. A general setup for a Finslerian theory of gravitation and a Finslerian parameterized post-Newtonian formalism confronted with observable gravitational effects in the Solar system, as well as a Finslerian approach to gauge field theory, were presented in Ref.~\cite{Aringazin-Asanov:1988}; Maxwell equations in Finslerian spacetime were obtained from a variational principle, and it was shown that in the eikonal approximation these reduce to the equations of null geodesics in Finsler spacetime. In Ref.~\cite{Aringazin:1989} generalized non-regular Finslerian metrics were introduced for which all orbital gravitational effects coincide with those implied by Riemannian general relativity, while the calculated gyroscope precession effect differs. In Ref.
\cite{Lammerzahl:2012kw} a class of spherically symmetric and static Finsler spacetimes that are small perturbations of the Schwarzschild spacetime was considered. The authors derived equations of motion for freely falling particles and equations for light rays, and discussed the bounds put on the perturbation functions by observations in the Solar system. In Ref. \cite{Girelli:2006fw} it was shown that the notion of a Finsler metric is the geometric structure encoding the notion of a ``rainbow metric''. In Ref. \cite{Vacaru:2010fi} some generalizations of and alternatives to Einstein gravity, including modifications with broken local Lorentz invariance, were considered. It was also shown how such theories (and general relativity) can be equivalently reformulated in Finsler-like variables. In this paper, we present an approach to Finsler gravity equations that is different from those mentioned above. With the general even-power decomposition ansatz for Finsler metrics, i.e., classes of Finsler metrics that allow decomposition into products of even numbers of Riemannian metrics, introduced in Ref.~\cite{Aringazin-Asanov:1987}, we consider a special class of Finsler metrics: those which can be decomposed as the product of two general Riemannian metrics. In this case one can write a Lagrangian involving the two metrics. Below, we give definitions (Secs. \ref{finsler}, \ref{special}) and then elaborate various procedures to implement this framework (Secs. \ref{pure}, \ref{bimetric}, \ref{general}). \section{Finsler geometry} \label{finsler} Let $M$ be a 4D manifold. The tangent space of $M$ is denoted by $T M$. Suppose $(x^\alpha)$ are local coordinates around $x \in M$. Then we denote the standard basis vectors for $T_x M$ by $y_\alpha = \frac{\partial}{\partial x^\alpha}$, and the standard basis vectors for the cotangent space $T^*_x M$ by $dx^\alpha$. \textbf{Definition of Finsler metric space}.
\textit{A Finsler metric space is a manifold $M$ equipped with a function $F : T M \to [0, \infty)$ (called the Finsler metric function) such that \begin{enumerate} \item $F$ is smooth on $T M \backslash \{0\}$ ; \item $F|_{T_xM} : T_x M \rightarrow [0, \infty)$ is a Minkowski norm for all $x \in M$ ; \item $F$ is homogeneous of degree one with respect to $y_\alpha$, i.e. $F(x,\lambda y) = \lambda F(x,y)$, $\lambda \not= 0$; \item the Hessian of its square in the fibre coordinates, $\dfrac{\partial^2 F^2}{\partial y^\alpha \partial y^\beta}$, has constant rank and is non-degenerate for all $(x,y) \in TM$. \end{enumerate} } The Finsler metric is defined as \begin{equation} \label{0-15} ds = F(x,dx). \end{equation} \textbf{Definition of Finsler metric tensor.} \textit{Let $F$ be a Finsler metric function on $T M$ and consider $T M$ in the standard induced coordinates. The Finsler metric tensor is the symmetric (0,2) $d-$tensor $g(x, y)$ with components \begin{equation} \label{0-10} g_{\alpha \beta}(x,y) = \frac{1}{2} \dfrac{\partial^2 F^2}{\partial y^\alpha \partial y^\beta}. \end{equation} } In local coordinates, the Finsler metric can be written as \begin{equation} \label{0-20} ds^n = G_{\alpha_1 \alpha_2 \cdots \alpha_n} dx^{\alpha_1} dx^{\alpha_2} \cdots dx^{\alpha_n} = g_{\mu \nu} dx^\mu dx^\nu, \end{equation} where $g_{\mu \nu} = G_{\mu \nu \alpha_1 \alpha_2 \cdots \alpha_{n-2}} dx^{\alpha_1} dx^{\alpha_2} \cdots dx^{\alpha_{n-2}}$. \section{Special class of Finsler metrics} \label{special} Let us consider the special class of Finsler metrics \begin{equation} ds^4 = G_{\alpha \beta\gamma \delta} dx^\alpha dx^\beta dx^\gamma dx^\delta = ds_1^2 ds_2^2, \label{2-10} \end{equation} where \begin{eqnarray} ds^2_1 &=& g_{1; \alpha \beta} dx^\alpha dx^\beta, \label{2-20}\\ ds^2_2 &=& g_{2; \alpha \beta} dx^\alpha dx^\beta, \label{2-30} \end{eqnarray} i.e.
\begin{equation} G_{\alpha \beta\gamma \delta} = g_{1; \{\alpha \beta} g_{2; \gamma \delta\}}, \label{2-40} \end{equation} where $\{ \}$ denotes full symmetrization over the indices $\alpha, \beta, \gamma, \delta$. It is well known that there are difficulties in defining dynamical equations for the Finsler metric (analogous to the Einstein equations for a standard metric \eqref{2-20}). In the present paper, we show that for the special choice of Finsler metric \eqref{2-10} (which we call a decomposed Finsler metric) it is possible to introduce well-defined gravitational Finsler equations. The main idea is to use the two Riemannian metrics $ds^2_{1,2}$ in the definition of the Lagrangian and the corresponding field equations. After that, we can use the well-known bimetric formalism for constructing the corresponding Lagrangian and field equations. \section{Pure geometrical gravity for decomposed Finsler metric} \label{pure} One can construct many different purely geometrical Finsler gravities. Below, we present local and nonlocal models. \subsection{Local pure geometrical decomposed Finsler gravity} In this approach we construct the purely geometrical Lagrangian $\mathcal L$ as follows: \begin{equation}\label{3e-10} \mathcal L = - \dfrac{1}{2 \varkappa} \left( k_1 R_1 \sqrt{-g_2} + k_2 R_2 \sqrt{-g_1} \right) + \mathcal L_m \end{equation} where $\varkappa$ is the Einstein gravitational constant; $k_{1,2}$ are constants; $R_{1,2}$ are the scalar curvatures for the metrics $g_{1,2}$; and $\mathcal L_m$ is the Lagrangian density for matter. Using the Palatini formalism, we obtain the following equations.
Varying with respect to the metric components $g_{1,2; \mu \nu}$ gives \begin{eqnarray} \label{3e-20} k_1 R_{1; \mu \nu} - k_2 R_2 \frac{1}{2} g_{1; \mu \nu} &=& \varkappa T_{1; \mu \nu} , \\ \label{3e-30} k_2 R_{2; \mu \nu} - k_1 R_1 \frac{1}{2} g_{2; \mu \nu} &=& \varkappa T_{2; \mu \nu} \end{eqnarray} where $T_{1,2; \mu \nu}$ are the energy-momentum tensors obtained by varying with respect to the metrics $g_{1,2}$. Varying with respect to the affine connection $\Gamma^\alpha_{\beta \gamma}$ and using the standard manipulations of the Palatini approach yields the equation determining the connection in terms of the metric tensors, \begin{equation} \label{3e-40} \Gamma_{i; \lambda \sigma}^\mu = \dfrac{1}{2} g_i^{\gamma \mu} \left( \Delta_{i; \gamma \lambda \sigma} + \Delta_{i; \gamma \sigma \lambda} - \Delta_{i; \sigma \lambda \gamma} \right), \quad i=1,2, \end{equation} where $\Delta_{i; \lambda \gamma \sigma}$ is \begin{equation}\label{3e-50} \begin{split} \Delta_{i; \lambda \gamma \sigma} = & \dfrac{\partial g_{i; \lambda \gamma}}{\partial x^\sigma} + g_{i; \lambda \gamma} \left[ \frac{1}{\sqrt{-g_j}} \left(\sqrt{-g_j}\right)_{, \sigma} - \frac{1}{\sqrt{-g_i}} \left(\sqrt{-g_i}\right)_{, \sigma} \right] = \\ & \dfrac{\partial g_{i; \lambda \gamma}}{\partial x^\sigma} + g_{i; \lambda \gamma} \left( \ln \sqrt{\dfrac{g_j}{g_i}} \right)_{, \sigma} , \quad i+j=3. \end{split} \end{equation} We see that such an interpretation of Finsler geometry gives rise to an affine connection that is not compatible with the metric. \subsection{Nonlocal pure geometrical Finsler gravity} \label{nonlocal} A nonlocal version of purely geometrical Finsler gravity can be defined in the following way: \begin{equation}\label{4b-10} S = \int R_1(x_1) G(x_1, x_2) R_2(x_2) \sqrt{-g_1} \sqrt{-g_2} \; dx_1 dx_2 \end{equation} where $S$ is the action; $R_{1,2}$ are the same as in equation \eqref{3e-10}; and $G(x_1, x_2)$ is some nonlocal function.
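Though not needed for the derivations above, the basic decomposition \eqref{2-40} is easy to sanity-check numerically. The following sketch (with arbitrary symmetric matrices standing in for the two metrics, and the averaged symmetrization convention assumed) verifies that the fully symmetrized product of $g_1$ and $g_2$ contracts with $dx^\alpha dx^\beta dx^\gamma dx^\delta$ to $ds_1^2\, ds_2^2$, as in \eqref{2-10}:

```python
from itertools import permutations

import numpy as np

rng = np.random.default_rng(0)

# Two arbitrary symmetric 4x4 "metrics" g1, g2 (illustrative values only).
def random_symmetric(rng):
    a = rng.normal(size=(4, 4))
    return (a + a.T) / 2

g1 = random_symmetric(rng)
g2 = random_symmetric(rng)

# Eq. (2-40): G_{abcd} is the (averaged) full symmetrization of g1_{ab} g2_{cd}.
T = np.einsum('ab,cd->abcd', g1, g2)
G = sum(np.transpose(T, p) for p in permutations(range(4))) / 24

# Contracting G with dx^a dx^b dx^c dx^d reproduces ds1^2 * ds2^2, Eq. (2-10):
dx = rng.normal(size=4)
ds4 = np.einsum('abcd,a,b,c,d->', G, dx, dx, dx, dx)
ds1sq = dx @ g1 @ dx
ds2sq = dx @ g2 @ dx
assert np.isclose(ds4, ds1sq * ds2sq)
```

The check works because $dx^\alpha dx^\beta dx^\gamma dx^\delta$ is itself totally symmetric, so only the symmetric part of $g_1 \otimes g_2$ contributes to the contraction.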
\section{Bimetric interpretation of decomposed Finsler metric} \label{bimetric} In this section, we use the bimetric formalism to derive dynamical equations for the metric geometry. \subsection{$ds_1^2$ as the metric for gravity and $ds_2^2$ as the metric for matter} In this subsection we use the following interpretation of the decomposed Finsler metric: the first metric $ds_1^2$ is used as the metric for gravity and the second metric $ds_2^2$ as the metric for matter (for the corresponding bimetric formalism see Ref. \cite{Drummond:2001rj}). Let us introduce two vierbein bundles on the spacetime manifold. Each bundle supports its own metric: one is associated with the underlying gravity and the other with matter. The vierbein appropriate to gravity is $e_{\mu a}$, with the associated metric \begin{equation} ds_1^2 = e_{\mu a} e_\nu^{\phantom{\nu} a} dx^\mu dx^\nu \label{3a-10} \end{equation} where raising and lowering of the vierbein indices $a$ are carried out with the standard Minkowski metric $\eta_{ab} = \mathrm{diag}\left( +1, -1, -1, -1 \right)$. The vierbein associated with matter is $\bar e_{\mu \bar a}$, and the raising and lowering of $\bar a$-indices are performed by means of the Minkowski metric $\bar \eta_{\bar a \bar b} = \eta_{ab}$. The associated metric is \begin{equation} ds_2^2 = \bar e_{\mu \bar a} \bar e_\nu^{\phantom{\nu} \bar a} dx^\mu dx^\nu . \label{3a-20} \end{equation} The two vierbein bundles are related to each other by a local transformation $M^a_{\phantom{a} \bar a} \in SL(4,R)$ together with a local scaling factor $e^\phi$. The dynamics is derived from an action consisting of three parts: the gravitational term $S_g$ corresponding to the metric $ds_1^2$, the matter term $S_M$ corresponding to the matter metric $ds_2^2$, and the linking action $S_L$, which depends on the variables that determine the relationship between the two vierbein bundles. The full action $S$ is the sum of the three terms, \begin{equation} S = S_g + S_M + S_L .
\label{3a-30} \end{equation} The gravitational action has the standard form \begin{equation} S_g = - \frac{1}{4 \pi G} \int R \sqrt{-g} \ d^4 x \label{3a-40} \end{equation} where $G$ is a coupling constant having the same dimensionality as Newton's constant $G_N$. The linking action is \begin{equation} S_L = \frac{1}{4 \pi F} \int g^{\mu \nu} \mathrm{Tr} \left( j_\mu j_\nu \right) \sqrt{-g} \ d^4 x + \frac{1}{4 \pi F'} \int g^{\mu \nu} \left( \partial_\mu \phi \partial_\nu \phi \right) \sqrt{-g} \ d^4 x \label{3a-50} \end{equation} where $F$ and $F'$ are new gravitational constants. The matrix-valued current $j_\mu$ is given by \begin{equation} j_\mu = \left( D_\mu M \right) M^{-1}, \label{3a-60} \end{equation} where $D_\mu$ is the covariant derivative. The matter action is given by \begin{equation} S_M = \int L_M(\psi_A; \bar e_{\mu \bar a}) \sqrt{-g} \ d^4 x, \label{3a-70} \end{equation} where $\psi_A$ denotes the matter fields, $A$ is a collective index, and $\bar e_{\mu \bar a}$ is the vierbein describing matter. \subsection{Finsler MOND gravity} In this subsection, we apply the two parts \eqref{2-20} and \eqref{2-30} of the Finsler metric \eqref{2-10} to the problem of describing dark matter. We follow the approach developed in Ref. \cite{Milgrom:2009gv}. Following this approach, one can form nontrivial tensors and scalars using the difference of the two corresponding Levi-Civita connections $\Gamma^\alpha_{\beta \gamma}$ and $\hat \Gamma^\alpha_{\beta \gamma}$, \begin{equation} C^\alpha_{\beta \gamma} = \Gamma^\alpha_{\beta \gamma} - \hat \Gamma^\alpha_{\beta \gamma}.
\label{3b-10} \end{equation} Introducing two covariant derivatives denoted by $(;)$ with respect to the connection $\Gamma^\alpha_{\beta \gamma}$ and $(:)$ with respect to the connection $\hat \Gamma^\alpha_{\beta \gamma}$, we obtain the following relationships between metrics and connections: \begin{eqnarray} g_{\mu \nu : \lambda} &=& g_{\alpha \nu} C^\alpha_{\mu \lambda} + g_{\alpha \mu} C^\alpha_{\nu \lambda} , \quad \hat g_{\mu \nu ; \lambda} = - \hat g_{\alpha \nu} C^\alpha_{\mu \lambda} - \hat g_{\alpha \mu} C^\alpha_{\nu \lambda} , \label{3b-20}\\ C^\lambda_{\alpha \beta} &=& \frac{1}{2} g^{\lambda \rho} \left( g_{\alpha \rho : \beta} + g_{\beta \rho : \alpha} - g_{\alpha \beta : \rho} \right) = - \frac{1}{2} \hat g^{\lambda \rho} \left( \hat g_{\alpha \rho ; \beta} + \hat g_{\beta \rho ; \alpha} - \hat g_{\alpha \beta ; \rho} \right). \label{3b-30} \end{eqnarray} We introduce the tensor \begin{equation} \Upsilon_{\mu \nu} = C^\gamma_{\mu \lambda} C^\lambda_{\nu \gamma} - C^\gamma_{\mu \nu} C^\lambda_{\lambda \gamma}. \label{3b-40} \end{equation} and the following scalars and tensors: \begin{eqnarray} R_{\mu \nu} &=& \Gamma^\alpha_{\mu \alpha, \nu} - \Gamma^\alpha_{\mu \nu, \alpha} + \Gamma^\gamma_{\mu \lambda} \Gamma^\lambda_{\nu \gamma} - \Gamma^\gamma_{\mu \nu} \Gamma^\lambda_{\lambda \gamma}, \label{3b-50}\\ \hat R_{\mu \nu} &=& \hat \Gamma^\alpha_{\mu \alpha, \nu} - \hat \Gamma^\alpha_{\mu \nu, \alpha} + \hat \Gamma^\gamma_{\mu \lambda} \hat \Gamma^\lambda_{\nu \gamma} - \hat \Gamma^\gamma_{\mu \nu} \hat \Gamma^\lambda_{\lambda \gamma}, \label{3b-60}\\ R &=& g^{\mu \nu} R_{\mu \nu}, \label{3b-70}\\ \hat R &=& \hat g^{\mu \nu} \hat R_{\mu \nu}, \label{3b-80}\\ \Upsilon &=& g^{\mu \nu} \Upsilon_{\mu \nu}, \label{3b-90}\\ \hat R_m &=& g^{\mu \nu} \hat R_{\mu \nu}, \label{3b-100}\\ R_m &=& \hat g^{\mu \nu} R_{\mu \nu}. 
\label{3b-110} \end{eqnarray} Now one can construct gravitational Lagrangian densities using the scalars $R$, $\hat R$, $\Upsilon$, $\hat R_m$, and $R_m$, as well as scalars constructed by contracting $C^\lambda_{\mu \nu}$ with the two metrics and their reciprocals. For completeness, we note that the entities $\hat g/g$, $\bar \omega = g^{\mu \nu} \hat g_{\mu \nu}$, etc.\ can be used for this purpose as well. In Ref. \cite{Milgrom:2009gv} it was pointed out that there exists a class of relativistic theories that reduce to MOND-like theories in the non-relativistic limit and produce enhanced, MOND-like gravitational lensing. \subsection{Finsler geometry and variable speed of light cosmology} One can relate the two metrics \eqref{2-20} and \eqref{2-30} of spacetime in the following way \cite{Moffat:2004qs}: \begin{equation} \hat g_{\mu \nu} = g_{\mu \nu} + B \partial_\mu \phi \partial_\nu \phi . \label{3c-10} \end{equation} Here, the metric $\hat g_{\mu \nu}$ is called the ``matter'' metric and $g_{\mu \nu}$ is the gravitational metric. The corresponding Lagrangian consists of a scalar field coupled to matter through the matter metric $\hat g_{\mu \nu}$, with the action \begin{equation} S = S_{\text{grav}} + S_{\phi} + \hat{S}_m, \label{3c-20} \end{equation} where \begin{equation} S_{\text{grav}} = - \frac{1}{\kappa} \int \left( R[g] + 2 \Lambda \right) \sqrt{-g} \ d\Omega. \label{3c-30} \end{equation} Here, $\Lambda$ is the cosmological constant and $c$ denotes the currently measured speed of light. In bimetric gravity one can choose either ${\hat g}_{\mu\nu}$ or $g_{\mu\nu}$ to be the comoving metric frame in the FRW universe (here we follow Ref. \cite{Moffat:2004qs}). There are two characteristic speeds: the speed of light $c_\gamma$ and the speed of gravitational waves $c_g$.
If we choose ${\hat g}_{\mu\nu}$ as the comoving metric, then \begin{equation} d{\hat s}^2 \equiv {\hat g}_{\mu\nu}dx^\mu dx^\nu = c^2 dt^2 - R^2(t) \left[ \frac{dr^2}{1-kr^2} + r^2(d\theta^2 + \sin^2\theta d\phi^2) \right], \label{3c-40} \end{equation} and \begin{equation} ds^2 \equiv g_{\mu\nu}dx^\mu dx^\nu = c^2 \left( 1-\frac{B}{c^2}\dot\phi^2 \right) dt^2 - R^2(t) \left[ \frac{dr^2}{1-kr^2} + r^2(d\theta^2 + \sin^2\theta d\phi^2) \right]. \label{3c-50} \end{equation} From \eqref{3c-50} we see that the speed of gravitational waves is given by $c_g(t)=c\left( 1-\frac{B}{c^2}\dot\phi^2 \right)^{1/2}$, while the speed of light is constant. Alternatively, if we choose $g_{\mu\nu}$ as the comoving metric, we obtain \begin{equation} ds^2 \equiv g_{\mu\nu}dx^\mu dx^\nu = c_g^2 dt^2 - R^2(t) \left[ \frac{dr^2}{1 - kr^2} + r^2 \left( d\theta^2 + \sin^2\theta d\phi^2 \right) \right] \label{3c-60} \end{equation} and \begin{equation} d{\hat s}^2 \equiv {\hat g}_{\mu\nu}dx^\mu dx^\nu = c^2 \left(1+\frac{B}{c^2}\dot\phi^2 \right)dt^2 - R^2(t) \left[ \frac{dr^2}{1-kr^2} + r^2 \left( d\theta^2 + \sin^2\theta d\phi^2 \right) \right]. \label{3c-70} \end{equation} Now the speed of light is given by $c_\gamma(t) = c \left( 1+\frac{B}{c^2}\dot\phi^2 \right)^{1/2}$, while the speed of gravitational waves $c_g=c$ is constant. One can show that by regauging clocks, we cannot make both $c_\gamma$ and $c_g$ simultaneously constant. This makes the time dependence of either $c_\gamma$ or $c_g$ a non-trivial feature of the bimetric gravity. In Ref. \cite{Moffat:2004qs} it was shown that the variable speed of light mechanism in the bimetric gravity model can solve the flatness and horizon problems and it is possible that one can avoid the initial fine-tuning problems of generic inflationary models. 
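The two frame choices can be illustrated with a small numerical sketch. All concrete values below — the coupling $B$, the scalar-field history $\dot\phi(t)$, and units with $c=1$ — are assumptions for illustration only; the point is that in each frame exactly one of the two speeds inherits a time dependence from $\dot\phi$:

```python
import numpy as np

# Illustrative numbers only: units with c = 1, assumed coupling B,
# and an assumed scalar-field velocity history phi_dot(t).
c = 1.0
B = 0.1
t = np.linspace(0.0, 10.0, 201)
phi_dot = 0.5 * np.exp(-0.3 * t)
x = B * phi_dot**2 / c**2

# hat-g comoving, Eqs. (3c-40)/(3c-50): light travels at c, gravity slower.
c_gamma_1 = np.full_like(t, c)
c_g_1 = c * np.sqrt(1.0 - x)

# g comoving, Eqs. (3c-60)/(3c-70): gravity travels at c, light faster.
c_g_2 = np.full_like(t, c)
c_gamma_2 = c * np.sqrt(1.0 + x)

# In each frame exactly one of the two speeds is time dependent, so no
# single regauging of clocks makes both constant simultaneously.
assert np.all(c_g_1 < c) and np.all(c_gamma_2 > c)
assert c_g_1.std() > 0 and c_gamma_2.std() > 0
```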
\subsection{Finsler geometry and massive gravity} Dynamics with two metrics can be defined by the action \cite{Visser:1997hd} \begin{equation} S = \int d^4 x \left\{ \frac{1}{16 \pi} \left[ \sqrt{-g} R + \sqrt{-b} L_{mass}(g, g_0) + \sqrt{-g} L_{matter} (g, X) \right] \right\} \label{3d-10} \end{equation} where $g_{0,\mu \nu}$ represents the background metric tensor (the $ds_1^2$ metric in our formalism), $g_{\mu \nu}$ is the dynamical metric tensor (the $ds_2^2$ metric in our formalism), and $X$ stands for any nongravitational field. A theory with such a Lagrangian has been used to explain the acceleration of the Universe in the modern epoch \cite{Visser:1997hd}. The modified Einstein equations are thus \begin{equation} R^{\mu\nu} - \frac{1}{2} g^{\mu \nu} R = 8\pi G \left( T^{\mu\nu}_{mass} + T^{\mu\nu} \right), \label{3d-20} \end{equation} where $T^{\mu\nu}_{mass}$ is an extra contribution to the stress-energy tensor, \begin{equation} T_{mass}^{\mu\nu} = - \dfrac{m_g^2 c^2}{8\pi G \hbar^2} \left\{ (g_0^{-1})^{\mu\sigma} \; \left[ (g- g_0)_{\sigma\rho} - {1\over2} (g_0)_{\sigma\rho} \; (g_0^{-1})^{\alpha\beta} (g- g_0)_{\alpha\beta} \right] (g_0^{-1})^{\rho\nu} \right\}. \label{3d-30} \end{equation} This kind of gravity has the following features: \begin{itemize} \item black holes (of the usual type) do not exist in this theory; \item the expansion of the universe can be completely divorced from the cosmological distribution of matter. \end{itemize} \section{Non-bimetric case} \label{general} In this section, we discuss a more general case. Let us consider the Finsler metric \eqref{2-10} in which $G_{\alpha \beta \gamma \delta}$ does not allow the decomposition into two metrics as in \eqref{2-40}.
We consider the case where it can be decomposed using four metrics in the following way: \begin{equation} ds^4 = G_{\alpha \beta\gamma \delta} dx^\alpha dx^\beta dx^\gamma dx^\delta = \sum\limits_{i,j=1}^4 ds_i^2 ds_j^2 \label{5-10} \end{equation} where each metric $ds_i^2$, $i=1,2,3,4$, is represented as \begin{equation} ds_i^2 = g_{i; \mu \nu} dx^\mu dx^\nu . \label{5-20} \end{equation} First of all, it is necessary to solve the equations \begin{equation} G_{\alpha \beta \gamma \delta} = \sum\limits_{i,j=1}^4 g_{i; \{ \alpha \beta} g_{j; \gamma \delta \}} \label{5-30} \end{equation} where $\{ \alpha \beta \gamma \delta \}$ denotes symmetrization over $\alpha,\beta,\gamma,\delta$. In order to consider the solvability of this equation, we count the number of independent components of $G_{\alpha \beta \gamma \delta}$ and of the four matrices $g_{i; \mu \nu}$. Recall that $G_{\alpha \beta \gamma \delta}$ is symmetric over all indices. The independent components of $G_{\alpha \beta \gamma \delta}$ fall into the following classes: \begin{itemize} \item all indices $\alpha \beta \gamma \delta$ different: 1; \item one pair of equal indices, $G_{\alpha \alpha \beta \gamma}$: 12; \item two pairs of equal indices, $G_{\alpha \alpha \beta \beta}$: 6; \item three equal indices: 12; \item all indices equal: 4. \end{itemize} Thus we have 35 independent components of $G_{\alpha \beta \gamma \delta}$. The four metrics $g_{i; \mu \nu}$ have in total 40 independent components. Consequently, 5 degrees of freedom can be chosen in an arbitrary way. For each of the metrics $g_{i; \mu \nu}$ one can introduce the corresponding covariant derivative $\stackrel{i}{\nabla}_\mu, i=1,2,3,4$.
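The component count can be verified by direct enumeration: a totally symmetric rank-4 tensor in four dimensions has one independent component per unordered multiset of four indices, i.e. $\binom{4+4-1}{4} = 35$ in total. A short sketch:

```python
from collections import Counter
from itertools import combinations_with_replacement
from math import comb

# One independent component per unordered multiset of four indices in {0,1,2,3}.
multisets = list(combinations_with_replacement(range(4), 4))
assert len(multisets) == comb(7, 4) == 35

# Classify each multiset by the repetition pattern of its indices.
patterns = Counter(tuple(sorted(Counter(m).values(), reverse=True))
                   for m in multisets)
assert patterns[(1, 1, 1, 1)] == 1   # all four indices different
assert patterns[(2, 1, 1)] == 12     # one pair of equal indices
assert patterns[(2, 2)] == 6         # two pairs of equal indices
assert patterns[(3, 1)] == 12        # three equal indices
assert patterns[(4,)] == 4           # all indices equal

# Four symmetric 4x4 metrics carry 4 * 10 = 40 components,
# leaving 40 - 35 = 5 freely choosable degrees of freedom.
```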
Then one can introduce the following associated geometric entities: \begin{eqnarray} (R_i)_{\mu \nu} &=& (\Gamma_i)^\alpha_{\mu \alpha, \nu} - (\Gamma_i)^\alpha_{\mu \nu, \alpha} + (\Gamma_i)^\gamma_{\mu \lambda} (\Gamma_i)^\lambda_{\nu \gamma} - (\Gamma_i)^\gamma_{\mu \nu} (\Gamma_i)^\lambda_{\lambda \gamma}, \label{5-40}\\ R_i &=& g_i^{\mu \nu} (R_i)_{\mu \nu}, \label{5-50}\\ (C_{ij})^\lambda_{\alpha \beta} &=& \frac{1}{2} g_i^{\lambda \rho} \left( \stackrel{j}{\nabla_\beta} g_{i; \alpha \rho} + \stackrel{j}{\nabla_\alpha} g_{i; \beta \rho} - \stackrel{j}{\nabla_\rho} g_{i; \alpha \beta} \right), \label{5-60}\\ (\Upsilon_{ij})_{\mu \nu} &=& (C_{ij})^\gamma_{\mu \lambda} (C_{ij})^\lambda_{\nu \gamma} - (C_{ij})^\gamma_{\mu \nu} (C_{ij})^\lambda_{\lambda \gamma}, \label{5-80}\\ \Upsilon_{ijk} &=& g_k^{\mu \nu} (\Upsilon_{ij})_{\mu \nu}, \label{5-90}\\ (R_{ij}) &=& g_i^{\mu \nu} (R_j)_{\mu \nu}, \label{5-100}\\ \omega_{ij} &=& g_i^{\mu \nu} g_{j; \mu \nu} \label{5-110} \end{eqnarray} where $(R_i)_{\mu \nu}$ is the Ricci tensor for the metric $g_{i; \mu \nu}$. Using the scalars $\Upsilon_{ijk}$, $R_{ij}$, etc., one can write a Lagrangian. Then we define the action with terms in which the integration involves the volume factors $\sqrt{-g_i}$, with $g_i = \det g_{i; \mu \nu}$. Below, we present examples of purely geometrical Finsler gravity models. \subsection{Local pure geometrical decomposed Finsler gravities} First, we can define a Lagrangian in the following local form: \begin{equation}\label{5b-10} \mathcal L = \sum \limits_{i,j} k_{i} R_{i} \sqrt{-g_j} + \sum \limits_{i,j,k} k_{ij} R_{ij} \sqrt{-g_k} + (\text{various types of scalars from \eqref{5-80}--\eqref{5-110}}), \end{equation} where $k_i$ and $k_{ij}$ are constants. \subsection{Nonlocal pure geometrical decomposed Finsler gravities} Second, we can also define an action in the following nonlocal form: \begin{equation}\label{5c-10} \begin{split} S = & \int \left( R_i(x_i) G(x_i, x_j) R_j(x_j) \sqrt{-g_i} \sqrt{-g_j} \; dx_i dx_j + \right. \\ & \left.
R_i(x_i) G(x_i, x_j, x_k) R_j(x_j) R_k(x_k) \sqrt{-g_i} \sqrt{-g_j} \sqrt{-g_k} \; dx_i dx_j dx_k + \cdots \right), \end{split} \end{equation} where $G(x_i, x_j), G(x_i, x_j, x_k), \ldots$ are nonlocal functions. \section{Conclusions} We have shown that for a special choice of Finsler metric one can obtain a new particular formulation of Finsler gravity. The main point of this formulation is that we specify the class of Finsler metrics defined by the condition that they allow decomposition into the product of two general Riemannian metrics. In this case the Finsler metric is thought of as equivalent to two Riemannian metrics (Sec. \ref{special}). In section \ref{pure} we present a new formulation of Finsler gravity for this class of Finsler metrics. In section \ref{bimetric} we present and discuss various physical applications of this special form of Finsler gravity. One of the results is that in this case the affine connection becomes incompatible with the metric, with the difference given by the gradient $(\ln \sqrt{g_1/g_2})_{, \mu}$. Also, in section \ref{general} we constructed local and nonlocal Finsler gravities arising from the more general case, namely, when the Finsler metric allows decomposition into four Riemannian metrics. \section*{Acknowledgments} This work was partially supported by a grant in fundamental research in natural sciences by the Science Committee of the Ministry of Education and Science of Kazakhstan and by a grant of VolkswagenStiftung.
\section{Introduction} One of the fundamental observations of data science is that high-dimensional data often exhibits low-dimensional structure. Detecting and utilizing structures such as sparsity, union of subspaces, or low-dimensional manifolds has been the driving force of innovation and success for many modern algorithms pertaining to image and video processing, clustering, and pattern recognition, and has led to better understanding of the success of neural network classifiers and other machine learning models. In particular, a common assumption in machine learning is the \textit{manifold hypothesis} \cite{chen2012nonlinear,fefferman2016testing,jones2008manifold,lee2007nonlinear}, which is that data lies on or near a low-dimensional embedded manifold in the high-dimensional ambient space. Myriad manifold learning algorithms have been proposed for elucidating the structure of these manifolds by embedding the data into a significantly lower-dimensional space, e.g., \cite{belkin2003laplacian,coifman2006diffusion,donoho2003hessian,hinton2006reducing,maaten2008visualizing,tenenbaum2000global} among many others. Such methods have been applied to data as diverse as stock prices \cite{huang2017nonlinear}, medical images \cite{wolz2012nonlinear} and single-cell sequencing data \cite{becht2019dimensionality}. \subsection{Challenges in image manifold learning} Many applications result in Euclidean data in $\mathbb{R}^n$ or $\mathbb{C}^n$ for large $n$. However, in imaging applications in which data is obtained through photography, video recording, hyperspectral imaging, MRI, or related methods, the resulting Euclidean vectors, matrices, or tensors are better modeled as functional data, since images correspond to objects that are naturally thought of as \textit{prima facie} infinite-dimensional. 
That is, one obtains \begin{equation}\label{EQN:Imaging} x = \mathcal{H}[f]+\eta,\end{equation} where $x$ is the (discrete) image, $\mathcal{H}:X\to\mathbb{R}^n$ is an imaging (or discretization) operator defined on a Banach space $X$ (often $L_p(\mathbb{R}^m)$ for some $p$ and $m$, or more commonly $L_p(\Omega)$ for some compact $\Omega\subset\mathbb{R}^m$), and $\eta$ is some noise (often treated as stochastic, and depending on the imaging operator as well as other external factors). The noise $\eta$ can come from multiple sources including background noise, e.g., randomly occurring features such as non-diseased tissue that are not of primary interest for the task \cite{rolland1992effect}; such noise typically has much higher intrinsic dimension than the signal of interest. Image data can also be corrupted by electronic and quantum noise, which is particularly prevalent in scientific and clinical medical imaging where, for instance, radiation dose may prevent the usage of strong light sources \cite{barrett2015task}. Many dimensionality reduction methods operate in the following sequence: \begin{enumerate} \item[Step 1:] Given data $\{x_i\}_{i=1}^N\subset\mathbb{R}^n$, form an $\varepsilon$--neighborhood or $k$--nearest neighbor graph, $G$, over $\{x_i\}$ whose edge weights are the Euclidean distances between connected nodes; \item[Step 2:] Compute graph-theoretic (shortest path) distances for all pairs of vertices $x_i,x_j$ in $G$; \item[Step 3:] Embed the graph into $\mathbb{R}^d$ for some $d\ll n$. \end{enumerate} Examples that use variations of this procedure are ISOMAP \cite{tenenbaum2000global}, Local Linear Embedding \cite{roweis2000nonlinear}, Laplacian Eigenmaps \cite{belkin2003laplacian}, UMAP \cite{mcinnes2018umap}, and Diffusion Maps \cite{coifman2006diffusion}. Each step above has been an avenue of substantial research. While these methods have enjoyed great success in many areas, there are a few drawbacks, especially for imaging applications.
First, the graph formation step is typically done in a heuristic fashion and can be problematic in that it is very sensitive to parameter tuning, i.e., choosing $\varepsilon$ or $k$ and how to weight the edges. Second, the most common framework above assumes that $\{x_i\}$ comes from a Riemannian manifold embedded in Euclidean space. Under this assumption, variants of the above procedure are designed so that (hopefully) the graph-theoretic geodesics closely approximate the manifold geodesics between data points. Bernstein et al.~\cite{bernstein2000graph} prove that if the points $\{x_i\}$ sample the manifold sufficiently densely with respect to its minimum radius of curvature and a prescribed tolerance, the graph geodesics of an $\varepsilon$--neighborhood graph can approximate the manifold geodesics between all pairs $(x_i,x_j)$ within the prescribed tolerance. Their results show that success typically requires dense sampling of the manifold, which is often unrealistic in imaging applications due to sparsity of sampling. Additionally, images of the same objects can have different dimensions ($n$) in the imaging domain under different imaging systems, which may lead to quite different results in the dimensionality reduction procedure. Many algorithms will downsample images to alleviate this issue, but this can lead to information loss and is not necessary in our proposed framework. Finally, such models tacitly assume that Euclidean distances between data vectors are semantically meaningful. However, this assumption may be invalid in many applications. Indeed, a small spatial variation in pixel intensities can result in large Euclidean distances even though the images are semantically the same. For instance, in object recognition, one would expect a model to understand that two images of a car show the same object even if the car is translated in the frame of one of the images.
These two images can have large Euclidean distance, even though they are semantically identical. \subsection{Functional image manifolds in Wasserstein space} We propose the following paradigm shift from the previous discussion. In contrast to many imaging techniques which assume that imaged data is on a manifold without reference to the function space underlying them, we assume a \textit{functional manifold hypothesis} that $\{x_i\}\subset\mathbb{R}^n$ is obtained from imaging a functional manifold $\mathcal{M}$. A natural question arises: what function space naturally represents image data? Our analysis below assumes that images correspond to probability measures with finite second moment; i.e., that $x = \mathcal{H}(\mu)$ as in \eqref{EQN:Imaging} where $\mu\in\mathbb{W}_2(\mathbb{R}^m)$, the 2--Wasserstein space of probability measures with finite second moment $M_2(\mu):=\int_{\mathbb{R}^m}|x|^2d\mu(x)<\infty$. The 2--Wasserstein space is equipped with the Wasserstein metric arising from Optimal Transport Theory \cite{villani2008optimal}. Given two measures $\mu,\nu\in\mathbb{W}_2(\mathbb{R}^m)$, define the set of couplings $\Gamma(\mu,\nu):=\{\gamma\in\mathcal{P}(\mathbb{R}^{2m}):\pi_1\gamma = \mu, \pi_2\gamma = \nu\}$ where $\mathcal{P}(\mathbb{R}^{2m})$ is the set of all probability measures on $\mathbb{R}^{2m}$, $\pi_1$ is the projection onto the first $m$ coordinates, and $\pi_2$ onto the last $m$ coordinates. Thus $\Gamma(\mu,\nu)$ is the set of all joint probability measures on $\mathbb{R}^{2m}$ whose marginals are $\mu$ and $\nu$ \cite{santambrogio2015optimal}. Then we may define \begin{equation}\label{EQN:Wasserstein} W_2(\mu,\nu) = \inf_{\gamma\in\Gamma(\mu,\nu)}\left(\int_{\mathbb{R}^{2m}}|x-y|^2d\gamma(x,y)\right)^\frac12. \end{equation} Our initial assumption will be that measures are absolutely continuous; however, we will provide a bridge to transfer results from these to arbitrary measures in \cref{SEC:DiscreteWassmap}. 
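For uniform discrete measures supported on finitely many points, the infimum in \eqref{EQN:Wasserstein} reduces to an optimal assignment over atoms, which gives a quick way to compute $W_2$ numerically. The following sketch (the point clouds and the helper name `w2_discrete` are illustrative, not part of the method developed later) checks the translation property $W_2(\mu, \mu(\cdot - v)) = |v|$:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def w2_discrete(X, Y):
    """W_2 between uniform discrete measures on the rows of X and Y.

    With equal numbers of equal-mass atoms, the optimal coupling in the
    Wasserstein definition is supported on a permutation, so the infimum
    becomes an optimal assignment over the squared-distance cost matrix.
    """
    cost = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)  # |x_i - y_j|^2
    rows, cols = linear_sum_assignment(cost)
    return np.sqrt(cost[rows, cols].mean())

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
shift = np.array([3.0, 0.0])

# Translating a measure by a vector v gives W_2 = |v|.
assert np.isclose(w2_discrete(X, X + shift), np.linalg.norm(shift))
```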
Treating images as probability measures and considering their 2--Wasserstein distances mitigates issues of geodesic blowup inherent in assuming $L_2$ as the ambient function space (see \cite{donoho2005image}), as $\mathbb{W}_2$ is a \textit{geodesic space}, meaning any two measures in it are joined by a geodesic of finite length \cite{sturm2006geometry}. Additionally, the Wasserstein distance carries more semantic meaning: the distance between images captures how much energy is required to morph one image into another, and the displacement interpolant (the geodesic from one measure to another in $\mathbb{W}_2$) provides a more natural nonlinear path between measures (see \cite{kolouri2017optimal}, for instance). Our initial theoretical and experimental results indicate that Wasserstein distances have significant advantages over other choices in terms of recovering image manifold parametrizations and providing good low-dimensional embeddings of image manifolds (\cref{SEC:Experiments}). Additionally, use of Wasserstein distance and optimal transport theory provides a powerful theoretical framework, owing to the substantial body of work on optimal transport related to PDEs and other fields (e.g., \cite{peyre2019computational,santambrogio2015optimal,villani2003topics,villani2008optimal}). In addition to this body of theory, the use of optimal transport in the past few years has yielded a plethora of advances to the state-of-the-art in many subfields of Machine Learning (ML). For example, use of Wasserstein distances in training of Generative Adversarial Networks (GANs) leads to substantial improvements in the stability of such networks \cite{WassersteinGAN,WassersteinProximal}, and in image processing, use of optimal transport ideas has enabled linearization of nonlinear classification problems \cite{kolouri2016radon,kolouri2017optimal}.
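As an aside on the displacement interpolant mentioned above: in one dimension it is particularly transparent, since the $\mathbb{W}_2$ geodesic between two measures linearly interpolates their quantile functions, $F_t^{-1} = (1-t)F_\mu^{-1} + tF_\nu^{-1}$. A small sketch under the simplifying assumption of equally weighted quantile samples (illustrative only; the function name is ours):

```python
import numpy as np

def displacement_interpolant(q0, q1, t):
    """Quantile samples of the measure at time t along the W2 geodesic
    between two 1-D measures given by equally weighted samples q0 and q1:
    the geodesic interpolates sorted quantile samples linearly."""
    q0 = np.sort(np.asarray(q0, dtype=float))
    q1 = np.sort(np.asarray(q1, dtype=float))
    return (1.0 - t) * q0 + t * q1

# Halfway between a measure and its translate by 2, the interpolant
# is the translate by 1, as displacement interpolation predicts.
mid = displacement_interpolant([0.0, 1.0], [2.0, 3.0], 0.5)
print(mid)  # -> [1. 2.]
```

In contrast, the $L_2$ (pixelwise) midpoint of two translated images is a superposition of both, which is the kind of semantically unnatural average the discussion above seeks to avoid.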
Note that the Wasserstein manifold assumption is general, and subsumes a setting which is natural in imaging applications. In many cases, we may readily assume that objects being imaged are compactly supported, nonnegative, and integrable (e.g., we may consider a car to be an element of $L_1^{\geq0}(\Omega)$ for a compact set $\Omega\subset\mathbb{R}^3$, where the function value at a given point is the density of the car in that location). Thus, one could consider the data $\{x_i\}\subset\mathbb{R}^n$ to be obtained from imaging a functional manifold $\mathcal{M}\subset L_1^{\geq0}(\Omega)$ for some compact set $\Omega\subset\mathbb{R}^m$. Assuming the images have unit $L_1$--norm implies that we may view this manifold as a subset of $\mathcal{P}(\Omega)$, the set of probability measures supported on $\Omega$, via the mapping $f_i\mapsto \mu_i$ such that $d\mu_i=f_idx$ (with $dx$ being the Lebesgue measure on $\mathbb{R}^m$). These measures have finite second moment since \[\int_\Omega |x|^2d\mu_i(x) \leq \max_{x\in\Omega}|x|^2 \int_\Omega d\mu_i(x) <\infty.\] Therefore, each $\mu_i$ is an element of the 2--Wasserstein space $\mathbb{W}_2(\mathbb{R}^m)$. \subsection{Main results} We briefly summarize our main results here. For some definitions and background on optimal transport and Wasserstein distance, see \cref{SEC:OTP}. We propose a variant of ISOMAP called Wassmap which uses Wasserstein distances instead of Euclidean distances. In this work, we treat the case in which the graph geodesic computation is excluded. This algorithm and the assumption that images correspond to elements of $\mathbb{W}_2$ are used to explore settings in which image manifold parametrizations can be exactly recovered up to rigid transformation. Our first result is the following.
\begin{thmx}\label{THM:WassmapRecoveryIntro} Let $\Theta\subset\mathbb{R}^d$ be a parameter set that generates a smooth submanifold $\mc{M}(\Theta)\subset\mathbb{W}_2(\mathbb{R}^m)$ such that $(\mc{M},W_2)$ is isometric up to a constant to $(\Theta,|\cdot|_{\mathbb{R}^d})$. If $\{\theta_i\}_{i=1}^N\subset\Theta$, and $\{\mu_{\theta_i}\}_{i=1}^N\subset\mc{M}$ are the corresponding measures on the manifold, then the Functional Wassmap Algorithm (\cref{ALG:Wassmap}) with embedding dimension $d$ recovers $\{\theta_i\}$ up to rigid transformation and global scaling. \end{thmx} As a special case of \cref{THM:WassmapRecoveryIntro}, we find that Functional Wassmap recovers the translation set for manifolds generated by translation of a fixed measure. This result utilizes the known fact that the Wasserstein distance between a measure and its translate is the Euclidean distance between the translation vectors. We also find that dilation sets can be recovered up to a scaling factor dependent upon the generating measure. Given a dilation vector $\theta\in\mathbb{R}^m_+$ with entries $\vartheta_1,\dots,\vartheta_m$ and associated dilation matrix $D_\theta:=\textnormal{diag}(\vartheta_1^{-1},\dots,\vartheta_m^{-1})$, define the dilation manifold with generator $\mu_0\in\mathbb{W}_2(\mathbb{R}^m)$ via \[\mc{M}^{\textnormal{dil}}(\mu_0,\Theta):=\left\{\det(D_\theta)\mu_0(D_\theta \cdot):\theta\in\Theta\right\}.\] Let $P_i\mu$ denote the $i$--th marginal of $\mu$ defined by \begin{equation}\label{EQN:ithMoment}P_i\mu(E) := \int_{\mathbb{R}\times\dots\times\mathbb{R}\times E\times\mathbb{R}\times\dots\times\mathbb{R}}d\mu(x), \qquad E\subset\mathbb{R}.\end{equation} In the results below, the second moment of the $i$--th marginal is thus $M_2(P_i\mu):=\int_{\mathbb{R}^m}|x_i|^2d\mu(x)$. This choice of notation rather than $\mu_i$ is to avoid confusion, as subscripts $0$ and $\theta$ will be used frequently in the sequel. \begin{thmx}\label{THM:WassmapDilationGeneralIntro} Suppose $\mu_0\in\mathbb{W}_2(\mathbb{R}^m)$ is absolutely continuous.
Given $\{\theta_i\}_{i=1}^N\subset\mathbb{R}^m_+$, and corresponding measures $\{\mu_{\theta_i}\}_{i=1}^N\subset\mc{M}^{\textnormal{dil}}(\mu_0,\Theta),$ the Functional Wassmap Algorithm (\cref{ALG:Wassmap}) with embedding dimension $m$ recovers $\{S\theta_i\}_{i=1}^N\subset \mathbb{R}^m$ up to rigid transformation, where $S$ is the diagonal matrix \[ S= \textnormal{diag}\big(M^{\frac{1}{2}}_2(P_1\mu_0),\cdots, M^{\frac{1}{2}}_2(P_m\mu_0)\big). \] \end{thmx} \cref{THM:WassmapDilationGeneralIntro} requires a general computation of Wasserstein distances between dilates of a fixed measure. This result indicates that both the dilation parameter set $\Theta$ as well as the underlying structure of the generating measure $\mu_0$ influence the set that is recovered via Wassmap. Our third main result is the following, which allows one to transfer results that hold for absolutely continuous generating measures to general measures as long as Wasserstein distances between the measures are solely functions of the generating parameters. \begin{thmx}\label{THM:DiscreteContinuousIntro} Suppose that for all absolutely continuous $\mu_0\in\mathbb{W}_2(\mathbb{R}^m)$, $\mathcal{M}(\mu_0,\Theta)=\{T_{\theta\sharp}\mu_0:\theta\in\Theta\}$ is a smooth submanifold of $\mathbb{W}_2(\mathbb{R}^m)$, that $T_\theta$ is Lipschitz for all $\theta$, and that for all $\theta,\theta'\in\Theta$ and all absolutely continuous $\mu_0$, $W_2(T_{\theta\sharp}\mu_0,T_{\theta'\sharp}\mu_{0}) = f(\theta,\theta')$ for some function $f$ only dependent upon $\theta$ and $\theta'$. Then for any $\nu_0\in\mathbb{W}_2(\mathbb{R}^m)$, $W_2(T_{\theta\sharp}\nu_0,T_{\theta'\sharp}\nu_{0})= f(\theta,\theta')$. \end{thmx} This theorem provides a bridge to transfer results for absolutely continuous measures to discrete measures which arise in practice in imaging applications, e.g., as obtained via \eqref{EQN:Imaging}. 
More specifically, \cref{THM:DiscreteContinuousIntro} implies that in certain cases, if Functional Wassmap (\cref{ALG:Wassmap}) recovers an image manifold parametrization for absolutely continuous measures, then it recovers the parametrization for arbitrary measures, and Discrete Wassmap (\cref{ALG:DiscreteWassmap}) recovers the parametrization for discrete measures. After discussing prior art in the next subsection, the rest of the paper is organized as follows: \cref{SEC:OTP} describes in brief the background of Wasserstein distances and other details from optimal transport theory, \cref{SEC:Theory} describes the Functional Wassmap algorithm and contains the main results and proofs related to it, and \cref{SEC:DiscreteWassmap} describes the Discrete Wassmap algorithm and gives a restatement and proof of \cref{THM:DiscreteContinuousIntro}. We end with \cref{SEC:Experiments}, which contains experiments, and a brief conclusion section. \subsection{Prior art} Most nonlinear dimensionality reduction methods assume that data lies on or near a low-dimensional manifold in Euclidean space and utilize Euclidean distances between points to estimate manifold geodesics. ISOMAP, described by Tenenbaum et al.~\cite{tenenbaum2000global}, is one of the most classical of these algorithms and is the inspiration for this work. Bernstein et al.~\cite{bernstein2000graph} showed that dense sampling is required to approximate geodesics well in the ISOMAP procedure, still under the assumption of Euclidean manifolds. Zha and Zhang \cite{zha2003isometric} proposed continuum ISOMAP, assuming continuous sampling of the manifold, and utilizing an integral operator formulation of ISOMAP and Multidimensional Scaling (MDS) \cite{mardia1979multivariate}. Continuum ISOMAP illuminates the theory of classical ISOMAP while maintaining the Euclidean manifold assumption, but it is not a practical algorithm.
Donoho and Grimes \cite{donoho2005image} utilized the functional manifold hypothesis that data lives in a submanifold of $L_2(\mathbb{R}^m)$ as a theoretical tool to study the performance of ISOMAP. Because geodesic distances induced by the $L_2$ metric can blow up, e.g., for translates of indicator functions, the authors require convolution of the input images with Gaussians. They also utilize normalization with respect to a reference geodesic, which our framework does not require. The works of Kolouri et al.~\cite{kolouri2019generalized,kolouri2016radon,kolouri2017optimal} and others \cite{aldroubi2021partitioning,khurana2022supervised,moosmuller2020linear} consider absolutely continuous measures, but instead of working with the Wasserstein distances directly, they work with the optimal transport maps (see \eqref{EQN:MP} below). This approach can speed up computations \cite{khurana2022supervised} compared to our approach, but the theory does not transfer to arbitrary measures as is done here. As with the Donoho and Grimes framework, these works also require a reference image to define the transport distance, whereas our method of utilizing Wasserstein distances directly avoids this. Additionally, exact recovery results for image manifold parametrizations appear to be easier to obtain with our framework than with these approaches. Recently, Kileel et al.~\cite{kileel2021manifold} study the problem of manifold learning with arbitrary norms. Their assumption remains that the data manifold is embedded in Euclidean space, but they construct an analogue of the graph Laplacian which utilizes an arbitrary norm as opposed to the standard Euclidean norm. As in our work, Kileel et al.~are motivated by the fact that Euclidean distances may lack semantic meaning or may lead to inflated computational load compared with other norms.
Additionally, in their experiments they employ an approximation of the $W_1$ distance (replace the cost $|x-y|^2$ with $|x-y|$ in \eqref{EQN:Wasserstein}) using wavelet expansions for sparse representations of image data. This is computationally faster than $W_2$ approximations; however, the results presented herein are primarily aimed at understanding exact recovery of image manifold parametrizations up to rigid motion. Our algorithms could be employed with any Wasserstein metric (replace $2$ with any $1\leq p\leq\infty$), but we leave this for future study. \section{Basics of Wasserstein Distances and Optimal Transport}\label{SEC:OTP} Given a measure space $X$, we denote the space of all finite measures on $X$ by $\mc{M}(X)$. Given a measure space $Y$, and a continuous map $T:X\to Y$, the pushforward of a measure $\mu\in\mc{M}(X)$ via the map $T$, denoted $T_\#\mu\in\mc{M}(Y)$, is the measure which satisfies \[T_\#\mu(E) = \mu(T^{-1}(E)),\quad \textnormal{for all measurable } E\subset Y. \] The \textit{Monge Problem}: suppose $X$ and $Y$ are measure spaces, $c:X\times Y\to\mathbb{R}_+$ is a cost function, and $\mu\in\mc{M}(X)$ and $\nu\in\mc{M}(Y)$; then the Monge Problem is to find the \textit{optimal transport map} $T:X\to Y$ which minimizes \begin{equation}\label{EQN:MP}\tag{MP} \min_T \left\{\int_X c(x,T(x))d\mu(x): T_\#\mu = \nu \right\}. \end{equation} The Monge problem does not always admit a solution, even in seemingly innocuous cases such as discrete measures. To get around this difficulty, Kantorovich proposed the following relaxation, which we call the \textit{Kantorovich Problem}: \begin{equation}\label{EQN:KP}\tag{KP} \min_\pi \left\{ \int_{X\times Y} c(x,y)d\pi(x,y):\pi\in \Pi(\mu,\nu)\right\} \end{equation} where $\Pi(\mu,\nu)$ is the class of transport plans, or couplings: \begin{equation*} \Pi(\mu,\nu) = \{\pi\in\mc{P}(X\times Y): (\pi_x)_\sharp\pi = \mu, (\pi_y)_\sharp\pi = \nu\}.
\end{equation*} Here, $\pi_x, \pi_y$ are the projections of $X\times Y$ onto $X$ and $Y$, respectively, so that $(\pi_x)_\sharp\pi$ and $(\pi_y)_\sharp\pi$ are the marginals of $\pi$ on $X$ and $Y$. Of use to us will also be the \textit{Dual Problem} to the Kantorovich Problem: \begin{equation}\label{EQN:DP}\tag{DP} \sup \left\{\int_{X}\phi d\mu + \int_Y\psi d\nu:\phi\in L_1(\mu), \psi\in L_1(\nu), \phi(x)+\psi(y)\leq c(x,y) \right\}. \end{equation} Finally, intimately tied to all of these problems is the \textit{Wasserstein distance}. Here, we specialize to the concrete case $X=Y=\mathbb{R}^m$ and utilize the $\ell_2$--norm (quadratic) cost function, i.e., $c(x,y):=|x-y|^2$ (here and throughout, $|\cdot|$ denotes the Euclidean norm on $\mathbb{R}^m$ where $m$ may be determined from context). For $\mu,\nu\in\mathcal{P}(\mathbb{R}^m)$ with finite second moment, the $2$--Wasserstein distance is defined by \[W_2(\mu,\nu):= \min_\pi \left(\int_{\mathbb{R}^{2m}}|x-y|^2d\pi(x,y):\pi\in\Pi(\mu,\nu)\right)^\frac12.\] Evidently, $W_2(\mu,\nu)^2 = (\min\textnormal{--KP})$, but in this setting much more is true. The following is a combination of several results in \cite[Chapter 1]{santambrogio2015optimal} and Brenier's Theorem \cite{brenier1991polar} (see also \cite{peyre2019computational}). \begin{theorem}\label{THM:ProbEquiv} Let $c(x,y)=|x-y|^2$. Suppose $\mu,\nu\in\mathbb{W}_2(\mathbb{R}^m)$ with $\mu$ absolutely continuous. Then there exists an optimal transport map $T$ from $\mu$ to $\nu$ and a unique optimal transport plan $\pi\in\Pi(\mu,\nu)$. Additionally, \[(\min\textnormal{--MP}) = (\min\textnormal{--KP}) = (\max\textnormal{--DP}) = W_2(\mu,\nu)^2.\] \end{theorem} \section{Functional Wassmap: Algorithm and Theory}\label{SEC:Theory} In this section, we consider the problem of when an image manifold, treated as a submanifold of the quadratic Wasserstein space, is isometric to Euclidean space.
First, we consider the case when the geodesics on the manifold $\mc{M}\subset\mathbb{W}_2(\mathbb{R}^m)$ are given by the $W_2$ distance between measures and consider when the metric space $(\mc{M},W_2)$ is isometric up to a constant to a subset of Euclidean space $(\Theta,|\cdot|_{\mathbb{R}^d})$. \begin{algorithm}[h!] \caption{Functional Wasserstein Isometric Mapping (Functional Wassmap)}\label{ALG:Wassmap} \begin{algorithmic}[1] \STATE \textbf{Input: }{Probability measures $\{\mu_i\}_{i=1}^N\subset \mathbb{W}_2(\mathbb{R}^m)$; embedding dimension $d$ } \STATE \textbf{Output: }{Low-dimensional embedding points $\{z_i\}_{i=1}^N\subset\mathbb{R}^d$} \STATE{Compute the matrix of pairwise squared Wasserstein distances $W_{ij} = W_2^2(\mu_i,\mu_j)$} \STATE{$B = -\frac12 HWH$, where $H=I-\frac{1}{N}\mathbbm{1}_N$} \STATE{(Truncated SVD): $B_d=V_d\Lambda_d V_d^T$} \STATE{$z_i = (V_d\Lambda_d^\frac{1}{2})(i,:),$ for $i=1,\dots,N$} \STATE \textbf{Return:}{$\{z_i\}_{i=1}^N$} \end{algorithmic} \end{algorithm} \subsection{Multidimensional scaling} Steps 4--6 of \cref{ALG:Wassmap} are the \textit{classical Multidimensional Scaling} (MDS) algorithm. An important result for MDS is the following. \begin{definition} A matrix $D\in\mathbb{R}^{N\times N}$ is a \textit{distance matrix} provided $D=D^T$, $D_{ii}=0$ for all $i$, and $D_{ij}\geq0$ for all $i\neq j$. A distance matrix is \textit{Euclidean} provided there exists a point configuration $\{z_i\}_{i=1}^N\subset\mathbb{R}^d$ for some $d$ such that $D_{ij} = |z_i-z_j|.$ \end{definition} \begin{theorem}[\cite{young1938discussion}]\label{THM:MDS} Let $D$ be a distance matrix, and let $B, V_d,$ and $\Lambda_d$ be as in \cref{ALG:Wassmap} with $W_{ij} = D_{ij}^2$. Then $D$ is Euclidean if and only if $B$ is symmetric positive semidefinite.
Moreover, if $D$ is Euclidean, then the points $\{z_1,\dots,z_N\}$ are unique up to rigid transformation and are given by $(V_d\Lambda_d^\frac12)(i,:),$ $i=1,\dots,N.$ \end{theorem} \begin{corollary}\label{COR:WassmapRecovery} Let $\Theta\subset\mathbb{R}^d$ be a parameter set that generates a smooth submanifold $\mc{M}(\Theta)\subset\mathbb{W}_2(\mathbb{R}^m)$ such that $(\mc{M},W_2)$ is isometric up to a constant to $(\Theta,|\cdot|_{\mathbb{R}^d})$. If $\{\theta_i\}_{i=1}^N\subset\Theta$, and $\{\mu_{\theta_i}\}_{i=1}^N\subset\mc{M}$ are the corresponding measures on the manifold, then the Functional Wassmap Algorithm (\cref{ALG:Wassmap}) with embedding dimension $d$ recovers $\{\theta_i\}$ up to rigid transformation and global scaling. \end{corollary} \begin{proof} The isometry condition implies existence of a global constant $c>0$ such that $W_2(\mu_{\theta_i},\mu_{\theta_j}) = c|\theta_i-\theta_j|$ for all $i,j$. Hence the matrix $W$ in \cref{ALG:Wassmap} consists of the squared entries of a Euclidean distance matrix with point configuration $\{c\theta_i\}_{i=1}^N\subset\mathbb{R}^d$, and uniqueness up to rigid transformation is given by \cref{THM:MDS}. \end{proof} In subsequent results, we will compute the global scaling factor $c$ for some image manifolds, in which case we obtain recovery of $\{c\theta_i\}$ up to rigid transformation. \subsection{Comparison to other techniques} Donoho and Grimes \cite{donoho2005image} developed a theoretical framework for understanding the behavior of ISOMAP on image manifolds. They study whether a normalized version of the geodesic distance is equivalent to the Euclidean distance, in which case ISOMAP recovers the underlying parametrization of image manifolds. They show several positive cases, including translation, pivoting, and morphing boundaries of black objects on white backgrounds. They also show that ISOMAP may fail when the parameter space is not convex or the image manifold is not flat (for example, the dilation manifold of rectangles or ellipses).
In comparison, Wassmap does not require normalization when computing distances between images. For translation manifolds, Wassmap retrieves the underlying parameters without requiring the parameter space to be convex. Wassmap also recovers translation and dilation manifolds generated by a base measure whose density is nonsmooth, such as the indicator function of a domain, whereas ISOMAP fails in this case due to geodesic blowup. \subsection{Translation manifolds} Given a fixed generating measure $\mu_0\in\mathbb{W}_2(\mathbb{R}^m)$ and translation set $\Theta\subset\mathbb{R}^m$, define \begin{equation} \mathcal{M}^{\textnormal{trans}}(\mu_0,\Theta):=\{\mu_0(\cdot-\theta):\theta\in\Theta\}.\label{eqn:trans_manifold_def} \end{equation} This simple translation manifold satisfies $\dim(\mathcal{M}) = \dim(\textnormal{span}(\Theta))$. We show the following: \begin{theorem}\label{THM:WassmapTranslation} Let $\mu_0\in\mathbb{W}_2(\mathbb{R}^m)$ be absolutely continuous. Given $\{\theta_i\}_{i=1}^N\subset\mathbb{R}^m$ and corresponding measures $\{\mu_{\theta_i}\}_{i=1}^N\subset\mathcal{M}^{\textnormal{trans}}(\mu_0,\Theta)$, the Functional Wassmap algorithm (\cref{ALG:Wassmap}) with embedding dimension $m$ recovers $\{\theta_i\}_{i=1}^N$ up to rigid transformation. \end{theorem} The crux of the proof of this theorem is the following lemma. This lemma is known \cite[Remark 2.19]{peyre2019computational}, but for completeness we present the full proof. \begin{lemma}\label{LEM:Translation} Let $\mu_0\in\mathbb{W}_2(\mathbb{R}^m)$ be absolutely continuous, and $\theta,\theta'\in\mathbb{R}^m$. Then, \[W_2(\mu_0(\cdot-\theta),\mu_0(\cdot-\theta')) = |\theta-\theta'|.\] \end{lemma} \begin{proof} First, notice that $T(x) = x+\theta'-\theta$ is such that $T_\#(\mu_0(\cdot-\theta)) = \mu_0(\cdot-\theta')$.
Using the Monge formulation, we have that if $\mu_0$ has density $f$, then \begin{align*} W_2(\mu_0(\cdot-\theta),\mu_0(\cdot-\theta'))^2 & \leq \int_{\mathbb{R}^m}|x-(x+\theta'-\theta)|^2f(x-\theta)dx\\ & = |\theta-\theta'|^2\int_{\mathbb{R}^m}f(x)dx\\ & = |\theta-\theta'|^2. \end{align*} With the upper bound in hand, we turn to the proof of the lower bound. Let $S$ be the optimal transport map from $\mu_0(\cdot-\theta)$ to $\mu_0(\cdot-\theta')$, which exists since $\mu_0$ is absolutely continuous, and let $T$ be defined by $S(x) = T(x-\theta)+\theta'$. Note that $T_\# \mu_0 = \mu_0$. Then we have by \eqref{EQN:MP}, and the substitution $x \mapsto x-\theta$, \begin{align*} W_2(\mu_0(\cdot-\theta),\mu_0(\cdot-\theta'))^2 & = \int_{\mathbb{R}^m}|S(x)-x|^2f(x-\theta)dx\\ & = \int_{\mathbb{R}^m}|T(x-\theta)-(x-\theta)-(\theta-\theta')|^2f(x-\theta)dx\\ & = \int_{\mathbb{R}^m}\left[|T(x)-x|^2-2\bracket{T(x)-x,\theta-\theta'}+|\theta-\theta'|^2\right]f(x)dx\\ & = |\theta-\theta'|^2, \end{align*} where the final equality holds because $\int_{\mathbb{R}^m}(T(x)-x)d\mu_0 = 0$, since $T_\#\mu_0 = \mu_0$ implies $\int_{\mathbb{R}^m}T(x)d\mu_0 = \int_{\mathbb{R}^m}x\,d\mu_0$, and because $\int_{\mathbb{R}^m}|T(x)-x|^2d\mu_0 = 0$, since $T$ inherits optimality from $S$ (the quadratic cost is unchanged by the translations relating them) and is therefore the unique optimal transport map from $\mu_0$ to itself, namely the identity $\mu_0$--almost everywhere. The upper and lower bounds match, hence the result is proved. \end{proof} \begin{proof}[Proof of \cref{THM:WassmapTranslation}] Combine \cref{LEM:Translation} and \cref{COR:WassmapRecovery}.
\end{proof} \subsection{Dilation manifolds} \label{subsec:dilationthy} Here we will consider dilation manifolds, where we are given a dilation set $\Theta\subset\mathbb{R}^m_{+}$ ($\theta\in\Theta$ has strictly positive entries $\vartheta_1,\dots,\vartheta_m$), and we define the corresponding manifold with a fixed generator $\mu_0\in \mathbb{W}_2(\mathbb{R}^m)$ via \[\mc{M}^{\textnormal{dil}}(\mu_0,\Theta):=\left\{\det(D_\theta)\mu_0(D_\theta \cdot):\theta\in\Theta\right\},\] where the dilation matrix is defined by \[D_\theta := \textnormal{diag}\left(\frac{1}{\vartheta_1},\dots,\frac{1}{\vartheta_m}\right).\] Recall that $M_2(\mu):=\int_{\mathbb{R}^m}|x|^2d\mu(x)$ is the second moment of a measure $\mu\in\mathcal{P}(\mathbb{R}^m)$, and $P_i\mu$ is the $i$--th marginal of $\mu$ defined as in \eqref{EQN:ithMoment}. \begin{theorem}\label{THM:WassmapDilationGeneral} Let $\mu_0\in\mathbb{W}_2(\mathbb{R}^m)$ be absolutely continuous. Given $\{\theta_i\}_{i=1}^N\subset\mathbb{R}^m_+$, and corresponding measures $\{\mu_{\theta_i}\}_{i=1}^N\subset\mc{M}^{\textnormal{dil}}(\mu_0,\Theta),$ the Functional Wassmap Algorithm (\cref{ALG:Wassmap}) with embedding dimension $m$ recovers $\{S\theta_i\}_{i=1}^N\subset \mathbb{R}^m$ up to rigid transformation, where $S$ is the diagonal matrix \[ S= \textnormal{diag}\big(M^{\frac{1}{2}}_2(P_1\mu_0),\cdots, M^{\frac{1}{2}}_2(P_m\mu_0)\big). \] \end{theorem} This theorem can be derived from the following lemma. \begin{lemma}\label{LEM:Dilation} Let $\mu_0\in\mathbb{W}_2(\mathbb{R}^m)$ be absolutely continuous with density $f$. Let $\theta,\theta'\in\Theta\subset\mathbb{R}^m_+$, and let $\mu_\theta$ be defined by $d\mu_{\theta} = \det(D_\theta)f(D_\theta\cdot)dx$, and similarly for $d\mu_{\theta'}$. 
Then \[W_2(\mu_\theta,\mu_{\theta'})^2 = \sum_{i=1}^m|\vartheta_i-\vartheta'_i|^2\int_{\mathbb{R}^m}|x_i|^2d\mu_0.\] \end{lemma} \begin{proof} The proof proceeds by using \eqref{EQN:MP} to find an upper bound for the Wasserstein distance in question, and \eqref{EQN:DP} to find a lower bound. These being the same, we use \cref{THM:ProbEquiv} to conclude the result. To show the upper bound, we use the fact that \eqref{EQN:MP} has a solution, and note that the map $T = D_{\theta'}^{-1}D_{\theta}$ satisfies $T_\#\mu_\theta = \mu_{\theta'}$. Indeed, for any measurable $E\subset\mathbb{R}^m$, we have, via the substitution $x = D_{\theta'}^{-1}D_\theta y$, \begin{align*} \mu_{\theta'}(E) & = \int_E \det(D_{\theta'})f(D_{\theta'}x)dx\\ & = \int_{D_\theta^{-1}D_{\theta'}(E)} \det(D_\theta)f(D_\theta y)dy\\ & = \mu_\theta(D_\theta^{-1}D_{\theta'}E)\\ & = \mu_\theta(T^{-1}(E)). \end{align*} Hence, $T$ is the pushforward from $\mu_\theta$ to $\mu_{\theta'}$. By \eqref{EQN:MP} and \cref{THM:ProbEquiv}, \begin{align*} W_2(\mu_\theta,\mu_{\theta'})^2 & \leq \int_{\mathbb{R}^m}|x-D_{\theta'}^{-1}D_\theta x|^2d\mu_\theta(x)\\ & = \int_{\mathbb{R}^m}\sum_{i=1}^m \left|\left(1-\frac{\vartheta'_i}{\vartheta_i}\right)x_i\right|^2\det(D_\theta)f(D_\theta x)dx\\ & = \sum_{i=1}^m \frac{1}{|\vartheta_i|^2}|\vartheta_i-\vartheta_i'|^2\int_{\mathbb{R}^m}|x_i|^2\det(D_\theta)f(D_\theta x)dx\\ & = \sum_{i=1}^m |\vartheta_i-\vartheta_i'|^2\int_{\mathbb{R}^m}|x_i|^2f(x)dx\\ & = \sum_{i=1}^m |\vartheta_i-\vartheta_i'|^2\int_{\mathbb{R}^m}|x_i|^2d\mu_0(x). \end{align*} The penultimate equality follows from substituting $x\mapsto D_\theta x$. Now we use \eqref{EQN:DP} to find a lower bound for the Wasserstein distance by setting \[\phi(x) = \sum_{i=1}^m\left(1-\frac{\vartheta_i'}{\vartheta_i}\right)x_i^2,\qquad \psi(y) = \sum_{i=1}^m\left(1-\frac{\vartheta_i}{\vartheta_i'}\right)y_i^2.\] These are easily seen to be in $L_1(\mu_\theta)$ and $L_1(\mu_{\theta'})$, respectively. 
Additionally, \[|x-y|^2-\phi(x)- \psi(y) = \sum_{i=1}^m\left(\frac{\vartheta_i'}{\vartheta_i}x_i^2+\frac{\vartheta_i}{\vartheta_i'}y_i^2-2x_iy_i\right) = \sum_{i=1}^m\left(\sqrt{\frac{\vartheta_i'}{\vartheta_i}}x_i-\sqrt{\frac{\vartheta_i}{\vartheta_i'}}y_i\right)^2\geq0,\] hence $\phi$ and $\psi$ are feasible solutions to \eqref{EQN:DP}. Finally, by \eqref{EQN:DP} and \cref{THM:ProbEquiv}, \begin{align*} W^2_2(\mu_\theta,\mu_{\theta'}) & \geq \int_{\mathbb{R}^m}\sum_{i=1}^m\left(1-\frac{\vartheta_i'}{\vartheta_i}\right)x_i^2\det(D_\theta)f(D_\theta x)dx\\ & \qquad\quad + \int_{\mathbb{R}^m}\sum_{i=1}^m\left(1-\frac{\vartheta_i}{\vartheta_i'}\right)y_i^2\det(D_{\theta'})f(D_{\theta'}y)dy\\ & = \int_{\mathbb{R}^m}\sum_{i=1}^m(\vartheta_i^2-\vartheta_i\vartheta_i')x_i^2f(x)dx + \int_{\mathbb{R}^m}\sum_{i=1}^m((\vartheta_i')^2-\vartheta_i\vartheta_i')y_i^2f(y)dy\\ & = \sum_{i=1}^m|\vartheta_i-\vartheta_i'|^2\int_{\mathbb{R}^m}|x_i|^2d\mu_0(x), \end{align*} and the lemma is proved. \end{proof} \begin{proof}[Proof of \cref{THM:WassmapDilationGeneral}] \cref{LEM:Dilation} implies that $W_2(\mu_\theta,\mu_{\theta'}) = |S\theta-S\theta'|$, so the Wasserstein distance matrix arising from $\{\mu_{\theta_i}\}$ is a Euclidean distance matrix with point configuration $\{S\theta_i\}_{i=1}^N\subset\mathbb{R}^m$. The uniqueness of recovery of this set up to rigid transformation is given by \cref{THM:MDS}. \end{proof} Note that the matrix $S$ is determined by the generator $\mu_0$ of the manifold. Thus, in order to retrieve the parameters $\{\theta_i\}_{i=1}^N$, information on $\mu_0$ is required. Under certain conditions, \cref{ALG:Wassmap} recovers $\{\theta_i\}_{i=1}^N$ up to a constant. \begin{corollary}\label{COR:WassmapDilation} Suppose $\mu_0\in\mathbb{W}_2(\mathbb{R}^m)$ is such that $\int_{\mathbb{R}^m}|x_i|^2d\mu_0 = c^2$ for some constant $c>0$ and for all $i$.
Given $\{\theta_i\}_{i=1}^N\subset\mathbb{R}^m_+$, and corresponding measures $\{\mu_{\theta_i}\}_{i=1}^N\subset\mc{M}^{\textnormal{dil}}(\mu_0,\Theta),$ the Functional Wassmap Algorithm (\cref{ALG:Wassmap}) with embedding dimension $m$ recovers $\{c\theta_i\}_{i=1}^N$ up to rigid transformation. \end{corollary} \begin{remark} Note that if the dilations occur only along certain coordinates, i.e., $\Theta$ is supported on a $d$--dimensional coordinate plane for some $1\leq d<m$, then one can specify the embedding dimension in \cref{COR:WassmapDilation} to be $d$ rather than $m$. In this case, one recovers the isometric projection of $\Theta$ into $\mathbb{R}^d$ that ignores the undilated coordinates. For example, if the elements of $\Theta$ differ from $1$ only in coordinates $\{i_1,\dots,i_d\}$, then Functional Wassmap will recover (up to rigid transformation) $P(\Theta)\subset\mathbb{R}^d$ where $P(\vartheta_1,\dots,\vartheta_m) := (\vartheta_{i_1},\dots,\vartheta_{i_d})$. \end{remark} \begin{corollary}[Isotropic Dilations]\label{THM:WassmapIsotropicDilation} Suppose $\mu_0\in\mathbb{W}_2(\mathbb{R}^m)$ is absolutely continuous and $\{\theta_i\}_{i=1}^N\subset\Theta\subset\{c(1,\dots,1):c\in\mathbb{R}_+\}\subset\mathbb{R}^m$. Given the corresponding measures $\{\mu_{\theta_i}\}_{i=1}^N\subset\mc{M}^{\textnormal{dil}}(\mu_0,\Theta)$, the Functional Wassmap Algorithm (\cref{ALG:Wassmap}) with embedding dimension $m$ recovers $\{(\frac{M_2(\mu_0)}{m})^\frac12\theta_i\}_{i=1}^N$ up to rigid transformation. \end{corollary} \begin{proof}[Proof of \cref{COR:WassmapDilation}] Combine \cref{LEM:Dilation} with \cref{COR:WassmapRecovery}.
\end{proof} \begin{proof}[Proof of \cref{THM:WassmapIsotropicDilation}] According to \cref{LEM:Dilation}, if $\theta = (c,\dots,c)$ and likewise $\theta' = (c',\dots,c')$, then \[W_2(\mu_{\theta},\mu_{\theta'})^2 = |c-c'|^2\int_{\mathbb{R}^m}\sum_{i=1}^m|x_i|^2d\mu_0 = |c-c'|^2M_2(\mu_0) = |\theta-\theta'|^2\frac{M_2(\mu_0)}{m},\] and the conclusion follows from \cref{COR:WassmapRecovery}. \end{proof} Note that $\{(\frac{M_2(\mu_0)}{m})^\frac12\theta_i\}$ is equivalent up to rigid transformation to $\{S\theta_i\}$ where $S$ is as in \cref{COR:WassmapDilation}, so the conclusion of \cref{THM:WassmapIsotropicDilation} is not contradictory. We end this subsection by giving some concrete examples. The first is for the simple case when the density function of $\mu_0$ is symmetric, giving a concrete example of \cref{COR:WassmapDilation}. \begin{proposition}\label{PROP:DilateSymmetry} Let $d\mu_0=f(x)dx$ be a probability measure on $\mathbb{R}^2$ that is symmetric about the line $x_1=x_2$, and let $d\mu_{\theta} =\det(D_\theta)f(D_\theta x)dx$, where $D_\theta$ is as above. Then \[W_2(\mu_\theta,\mu_{\theta'})^2 = [(\vartheta_1-\vartheta'_1)^2+(\vartheta_2-\vartheta_2')^2]\int_{x_2\geq x_1} (x_1^2+x_2^2)f(x)dx.\] \end{proposition} The proof of this proposition follows from direct calculation of the moments in \cref{COR:WassmapDilation} and so is omitted. The converse of \cref{PROP:DilateSymmetry} is not necessarily true. That is, the condition $W_2(\mu_\theta,\mu_{\theta'})^2 = c[(\vartheta_1-\vartheta_1')^2+(\vartheta_2-\vartheta_2')^2]$ for some $c\in \mathbb{R}$ does not imply that $\mu_0$ is symmetric across $x_1=x_2$. Indeed, consider the following example: suppose $d\mu_0 = \frac{1}{|A|}\mathbbm{1}_Adx$, where $A$ is a rectangle with range $(1,2)$ on the $x_1$ axis and $(-1,3)$ on the $x_2$ axis.
Then \[W_2(\mu_\theta,\mu_{\theta'})^2 = \frac{7}{3}[(\vartheta_1-\vartheta_1')^2+(\vartheta_2-\vartheta_2')^2].\] For further illustration, the following corollary, easily obtained by computing the relevant second moments from \cref{THM:WassmapDilationGeneral}, shows what one recovers for a dilation manifold when the generating measure is the indicator function of a domain suitably normalized. \begin{corollary}\label{COR:Rectangle} Let $A$ be a rectangle in $\mathbb{R}^m$ with endpoints $a_{1,i},a_{2,i}$ on the $i$--th coordinate axis, and let $d\mu_0=\frac{1}{|A|}\mathbbm{1}_{A}dx$. Then if $\theta,\theta'\in\mathbb{R}^m_+$ are dilation vectors, \[W_2(\mu_\theta,\mu_{\theta'})^2 = \frac13 \sum_{i=1}^m|\vartheta_i-\vartheta_i'|^2(a_{2,i}^2+a_{2,i}a_{1,i}+a_{1,i}^2).\] Consequently, \cref{ALG:Wassmap} recovers $\{S_A\theta_i\}_{i=1}^N$, where $S_A$ is the diagonal matrix whose diagonal entries are defined as \[(S_A)_{i,i}= \sqrt{\frac{1}{3}(a_{2,i}^2+a_{2,i}a_{1,i}+a_{1,i}^2)}. \] \end{corollary} Note that if the parameter set is a lattice in $\mathbb{R}^m$, i.e., $\Theta = \alpha_1\mathbb{Z}\times\cdots\times\alpha_m\mathbb{Z}$, then Functional Wassmap will recover the set $\alpha_1M_2^\frac12(P_1\mu_0)\mathbb{Z}\times\cdots\times\alpha_mM_2^\frac12(P_m\mu_0)\mathbb{Z}$ up to rigid transformation. \subsection{Rotation manifolds}\label{SEC:Rotation} We will show in subsequent experiments that the discrete Wassmap algorithm is capable of recovering the underlying circle governing a rotational manifold. However, at present, the authors do not have a proof analogous to the above results for this case. Let a rotation manifold be defined as follows: \[\mc{M}^{\textnormal{rot}}(\mu_0,\Theta) := \{\mu_0(R_\theta\cdot):\theta\in\Theta,\ R_\theta\in \textnormal{SO}(m)\}.\] Consider the following: \begin{theorem}[{\cite[Theorem 1.22]{santambrogio2015optimal}}]\label{thm:KPexist} Suppose $\mu,\nu\in \mathbb{W}_2(\mathbb{R}^m)$ and $\mu$ gives no mass to $(m-1)$--surfaces of class $C^2$.
Then there exists a unique optimal transport map $T$ from $\mu$ to $\nu$, and it is of the form $T=\nabla u$ for a convex function $u$. \end{theorem} A direct consequence of \cref{thm:KPexist} is that a rotation matrix $R_\theta$ is not the optimal transport map from a measure to its pushforward under $R_\theta$, as a nontrivial rotation is not the gradient of a convex function. Consequently, exactly computing $W_2(\mu_0(R_\theta\cdot),\mu_0(R_{\theta'}\cdot))$ is nontrivial. On the other hand, we can give an upper bound for the Wasserstein distance of a rotated version of a fixed measure with itself. Restrict to clockwise rotation in $\mathbb{R}^2$ by angle $\theta\in(0,2\pi)$, and let $R_\theta$ be the resulting rotation matrix. One can verify that \begin{equation*} W_2(\mu_0,\mu_0(R_\theta\cdot))^2\leq \int_{\mathbb{R}^2}|R_\theta(x)-x|^2d\mu_0 = 4\sin^2\left(\frac{\theta}{2}\right)M_2(\mu_0). \end{equation*} \section{Discrete Wassmap: Algorithm and Theory}\label{SEC:DiscreteWassmap} In imaging practice, one obtains discrete vectors rather than continuous distributions, so a practical version of \cref{ALG:Wassmap} must take this into account. To do this, one must consider how to form a probability measure from a given image. Given a two-dimensional (planar) or multidimensional (e.g., volumetric) image in pixel/voxel representation, that is $g = [g_1,\ldots,g_D]\in\mathbb{R}^D$ where $D$ is the total number of pixels or voxels, we will assign a discrete measure $P(g)\in \bb{W}_2(\mathbb{R}^m)$ by selecting a set of $D$ locations $x_n\in\mathbb{R}^m$, assigning mass $g_n>0$ to the corresponding physical location $x_n$, and normalizing: \begin{equation} P(g) = \frac{1}{\|g\|_1}\sum_{n=1}^D g_n\delta_{x_n}, \label{eqn:image_to_discrete_measure} \end{equation}where $\delta_{x_n}$ is a Dirac mass at location $x_n$. The locations $x_n$ are most conveniently assumed, at least initially, to lie on a regular grid in the ambient space $\mathbb{R}^m$.
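The construction \eqref{eqn:image_to_discrete_measure} is straightforward to implement. The following is a minimal sketch, not the authors' code: the function name and the convention that pixel $(r,c)$ sits at grid location $(c,r)$ are our own choices.

```python
import numpy as np

def image_to_measure(g):
    """Form the discrete measure P(g): grid locations x_n carrying
    normalized masses g_n / ||g||_1 (zero pixels carry no mass)."""
    g = np.asarray(g, dtype=float)
    rows, cols = np.nonzero(g > 0)
    locations = np.column_stack([cols, rows]).astype(float)  # pixel -> (x_1, x_2)
    masses = g[rows, cols]
    return locations, masses / masses.sum()

# toy 2x2 image with two nonzero pixels
locs, w = image_to_measure([[0.0, 1.0], [3.0, 0.0]])
```

The pixel with intensity $3$ receives mass $3/4$ and the pixel with intensity $1$ receives mass $1/4$, in accordance with the $\|g\|_1$ normalization.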
Given two images (with common ambient dimension $m$ but not necessarily the same $D$), the problem of computing the Wasserstein distance between $\mu_i = P(g_i)$ and $\mu_j = P(g_j)$ reduces to a discrete optimization problem for which many algorithms exist \cite{gerber2017multiscale,peyre2019computational}. Below we summarize the Discrete Wassmap algorithm, which mimics the procedure described above. Note that we state the algorithm for image input, but one could equally well state it for discrete probability measure input, in which case one simply skips the measure construction step. \begin{algorithm}[h!] \caption{Discrete Wasserstein Isometric Mapping (Discrete Wassmap)}\label{ALG:DiscreteWassmap} \begin{algorithmic}[1] \STATE \textbf{Input: }{Image data $\{g_i\}_{i=1}^N\subset \mathbb{R}^D$; embedding dimension $d$ } \STATE \textbf{Output: }{Low-dimensional embedding points $\{z_i\}_{i=1}^N\subset\mathbb{R}^d$} \STATE {(Measure Construction): $\mu_i = P(g_i)$} \STATE {Compute pairwise squared Wasserstein distance matrix $W_{ij} = W_2^2(\mu_i,\mu_j)$} \STATE{$B = -\frac12 HWH$, where $H=I-\frac{1}{N}\mathbbm{1}_N$ is the centering matrix} \STATE {(Truncated SVD): $B_d=V_d\Lambda_dV_d^T$} \STATE{$z_i = (V_d\Lambda_d^\frac{1}{2})(i,:) $} \STATE \textbf{Return: }{$\{z_i\}$} \end{algorithmic} \end{algorithm} \subsection{Transferring Wasserstein computations to arbitrary measures} An important consideration for the theory of exactness of Discrete Wassmap is to understand how (or even if) any of the Wasserstein distance computations in \cref{SEC:Theory} carry over to the setting of discrete measures. For instance, if one translates a discrete measure, is the Wasserstein distance the same as in the absolutely continuous case (the magnitude of the translation)? Here we show that this is the case for a wide variety of discrete measures and transformations of them. We will state our results in terms of the pushforward operators defining the transformation of a base measure.
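Steps 5--7 of \cref{ALG:DiscreteWassmap} are classical multidimensional scaling applied to the squared-distance matrix. A minimal sketch (our own, assuming $W$ has already been computed; an eigendecomposition of the symmetric matrix $B$ plays the role of the truncated SVD):

```python
import numpy as np

def mds_embed(W, d):
    """Steps 5-7 of Discrete Wassmap: double-center the squared-distance
    matrix, take the top-d eigenpairs of the symmetric matrix B, and set
    z_i to the rows of V_d Lambda_d^{1/2}."""
    N = W.shape[0]
    H = np.eye(N) - np.ones((N, N)) / N          # centering matrix
    B = -0.5 * H @ W @ H
    vals, vecs = np.linalg.eigh(B)               # eigh since B is symmetric
    idx = np.argsort(vals)[::-1][:d]             # top-d eigenvalues
    lam = np.clip(vals[idx], 0.0, None)          # guard tiny negative values
    return vecs[:, idx] * np.sqrt(lam)

# three collinear "measures" at positions 0, 1, 3: W_ij = |t_i - t_j|^2
t = np.array([0.0, 1.0, 3.0])
W = (t[:, None] - t[None, :]) ** 2
Z = mds_embed(W, 1)   # recovers t up to rigid motion (centering and sign)
```

When $W$ consists of exact squared Euclidean distances, the pairwise distances of the rows of $Z$ reproduce the original ones, which is the sense in which the parameter set is recovered up to rigid transformation.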
The following theorem provides a bridge which allows one to transfer results on recovery of Wasserstein image manifold parametrizations from manifolds generated by absolutely continuous measures to those generated by arbitrary measures. Note that there is no requirement on the generating measure $\mu_0$ aside from the fact that it lies in $\mathbb{W}_2$; it may have a mix of absolutely continuous and discrete parts, and need not have compact support. \Cref{THM:DiscreteContinuous} shows that if Wasserstein distances between absolutely continuous measures generated by a given parameter set depend only on the parameters, then the Wasserstein distances between arbitrary measures likewise depend only on the parameters. Thus, if Functional Wassmap recovers a parameter set for an absolutely continuous generating measure $\mu_0$, then Discrete Wassmap recovers the same parameter set for the manifold generated analogously from a discrete measure $\mu_0$. Additionally, since \cref{THM:DiscreteContinuous} holds for arbitrary measures, we find that if Functional Wassmap recovers a parameter set for an absolutely continuous generating measure, then in fact it also recovers the parameter set for an arbitrary generating measure. \begin{theorem}\label{THM:DiscreteContinuous} Suppose that for all absolutely continuous $\mu_0\in\mathbb{W}_2(\mathbb{R}^m)$, $\mathcal{M}(\mu_0,\Theta)=\{T_{\theta\sharp}\mu_0:\theta\in\Theta\}$ is a smooth submanifold of $\mathbb{W}_2(\mathbb{R}^m)$, that $T_\theta$ is Lipschitz for all $\theta$, and that for all $\theta,\theta'\in\Theta$ and all absolutely continuous $\mu_0$, $W_2(T_{\theta\sharp}\mu_0,T_{\theta'\sharp}\mu_{0}) = f(\theta,\theta')$ for some function $f$ dependent only upon $\theta$ and $\theta'$. Then for any $\nu_0\in\mathbb{W}_2(\mathbb{R}^m)$, $W_2(T_{\theta\sharp}\nu_0,T_{\theta'\sharp}\nu_{0})= f(\theta,\theta')$. \end{theorem} The crux of the proof of this theorem is the lemma below.
We let $g_\sigma(x)=\frac{1}{(\sqrt{2\pi}\sigma)^m} e^{\frac{-|x|^2}{2\sigma^2}}$ be the multivariate Gaussian kernel on $\mathbb{R}^m$, and below $\ast$ represents convolution of measures. We take $\nu_\sigma\rightharpoonup\nu$ to mean weak convergence of measures, which by the Portmanteau Theorem is equivalent to the statement $\nu_\sigma(A)\to\nu(A)$ for all continuity sets $A$ of $\nu$ (i.e., sets with $\nu(\partial A)=0$). \begin{lemma}\label{LEM:Discrete} Let $\mu\in\mathbb{W}_2(\mathbb{R}^m)$. Suppose that $T_\theta,T_{\theta'}:\mathbb{R}^m\to\mathbb{R}^m$ are Lipschitz with Lipschitz constant at most $L$. Then \[W_2(T_{\theta\sharp}\mu,T_{\theta'\sharp}\mu)=\lim\limits_{\sigma\rightarrow 0}W_2(T_{\theta\sharp}(\mu*g_\sigma), T_{\theta'\sharp}(\mu*g_\sigma)).\] \end{lemma} \begin{proof} By \cite[Lemma 5.2]{santambrogio2015optimal}, $W_2(T_{\theta\sharp}\mu,T_{\theta'\sharp}\mu)=\lim\limits_{\sigma\rightarrow 0}W_2(T_{\theta\sharp}\mu*g_\sigma, T_{\theta'\sharp}\mu*g_\sigma)$. While it is not true in general that $W_2(T_{\theta\sharp}\mu\ast g_\sigma,T_{\theta'\sharp}\mu\ast g_\sigma) = W_2(T_{\theta\sharp}(\mu*g_\sigma), T_{\theta'\sharp}(\mu*g_\sigma))$, we will show that the limits of these two expressions are the same. Considering their difference and utilizing the triangle inequality in both steps below, we have \begin{multline*} |W_2(T_{\theta\sharp}\mu*g_\sigma, T_{\theta'\sharp}\mu*g_\sigma)- W_2(T_{\theta\sharp}(\mu*g_\sigma),T_{\theta'\sharp}(\mu*g_\sigma))| \\ \leq W_2(T_{\theta\sharp}\mu*g_\sigma,T_{\theta\sharp}(\mu*g_\sigma)) + W_2(T_{\theta'\sharp}(\mu*g_\sigma), T_{\theta'\sharp}\mu*g_\sigma) \\ \leq W_2(T_{\theta\sharp}\mu,T_{\theta\sharp}(\mu*g_\sigma))+W_2(T_{\theta\sharp}\mu,T_{\theta\sharp}\mu*g_\sigma) \\ + W_2(T_{\theta'\sharp}\mu,T_{\theta'\sharp}(\mu*g_\sigma))+W_2(T_{\theta'\sharp}\mu,T_{\theta'\sharp}\mu*g_\sigma).
\end{multline*} We claim that $\lim\limits_{\sigma\rightarrow0}W_2(T_{\theta\sharp}\mu,T_{\theta\sharp}(\mu*g_\sigma))=\lim\limits_{\sigma\rightarrow0}W_2(T_{\theta\sharp}\mu,T_{\theta\sharp}\mu*g_\sigma)=0$ for any $\theta$. By \cite[Lemma 5.11]{santambrogio2015optimal}, it is sufficient to prove the following: \begin{enumerate} \item $T_{\theta\sharp}(\mu*g_\sigma)\rightharpoonup T_{\theta\sharp}\mu$ \item $T_{\theta\sharp}\mu*g_\sigma \rightharpoonup T_{\theta\sharp}\mu$. \item $\int|x|^2dT_{\theta\sharp}(\mu*g_\sigma)\rightarrow \int |x|^2dT_{\theta\sharp}\mu$ \item $\int|x|^2dT_{\theta\sharp}\mu*g_\sigma\rightarrow \int |x|^2dT_{\theta\sharp}\mu$. \end{enumerate} Here and subsequently all integrals are over $\mathbb{R}^m$. Proof of 1) Suppose $A$ is a continuity set of $T_{\theta\sharp}\mu$. Since $T$ is continuous, $\partial(T^{-1}(A))\subseteq T^{-1}(\partial A)$, so $\mu(\partial(T^{-1}(A)))\leq T_{\theta\sharp}\mu(\partial A)=0$; that is, $T^{-1}(A)$ is a continuity set of $\mu$. Then, \[T_{\theta\sharp}(\mu*g_\sigma)(A) = \mu*g_\sigma(T^{-1}(A)) \to \mu(T^{-1}(A)) = T_{\theta\sharp}\mu(A),\] where the convergence follows from the fact that $\mu\ast g_\sigma\rightharpoonup \mu$ for any $\mu$. Item 2 is well-known and follows from a simple computation, so its proof is omitted. Proof of 3) First, we note that by direct computation, \[\int|x|^2dT_{\theta\sharp}(\mu\ast g_\sigma) = \int\int|T(x+y)|^2g_\sigma(y)dy d\mu(x)\] and \[\int|x|^2dT_{\theta\sharp}\mu = \int|T(x)|^2d\mu(x) = \int\int|T(x)|^2g_\sigma(y)dy d\mu(x),\] where the second equality follows from $g_\sigma$ being a probability density function. With these observations in hand, we have \begin{align*} & \bigg|\int |x|^2 d(T_{\theta\sharp}(\mu*g_\sigma)-T_{\theta\sharp}\mu)\bigg| \\ & \quad \leq \int\int\big||T(x+y)|^2-|T(x)|^2\big|g_\sigma(y)dy d\mu(x)\\ & \quad \leq \int\int(|T(x+y)|+|T(x)|)|T(x+y)-T(x)|g_\sigma(y)dy d\mu(x)\\ & \quad \leq L^2\int\int (2|x|+|y|)|y|g_\sigma(y)dy d\mu(x). \end{align*} The final inequality follows from twice utilizing the fact that $T$ is Lipschitz.
That this quantity goes to $0$ as $\sigma\to0$ follows from the fact that every positive-order moment of $g_\sigma$ tends to $0$ with $\sigma$. Indeed, by substitution, \[\int_{\mathbb{R}^m}|y|^pg_\sigma(y)dy = \sigma^p\int_{\mathbb{R}^m}|y|^pg_1(y)dy.\] The integral above is a constant depending only upon $p$ and $m$, so the conclusion follows by application of the Dominated Convergence Theorem. Proof of 4) By a similar argument, but noting that \[\int|x|^2d(T_{\theta\sharp}\mu\ast g_\sigma) = \int\int|T(x)+y|^2g_\sigma(y)dy d\mu(x),\] we see that \begin{align*} \bigg|\int |x|^2 d(T_{\theta\sharp}\mu*g_\sigma-T_{\theta\sharp}\mu)\bigg| & \leq \int\int\big||T(x)+y|^2-|T(x)|^2\big|g_\sigma(y)dy d\mu(x)\\ & \leq \int\int(|y|^2+2|\langle T(x),y \rangle|)g_\sigma(y)dy d\mu(x)\\ & = \int\int |y|^2 g_\sigma(y) dy d\mu(x) + 2\int\int |\langle T(x),y \rangle| g_\sigma(y)dyd\mu(x)\\ & \leq \int |y|^2 g_\sigma(y) dy + 2L\int |x|d\mu(x)\int |y|g_\sigma(y) dy. \end{align*} The first and second moments of $g_\sigma$ tend to $0$ as before, while the first moment of $\mu$ is finite since $\mu$ is in $\mathbb{W}_2$. \end{proof} With this lemma we are now in a position to finish the proof of the main theorem of this section. \begin{proof}[Proof of \cref{THM:DiscreteContinuous}] Let $g_\sigma$ be the multivariate Gaussian with parameter $\sigma$ as before. Then by \cref{LEM:Discrete}, we have \[ W_2(T_{\theta\sharp}\nu_0,T_{\theta'\sharp}\nu_0) = \lim_{\sigma\to0} W_2(T_{\theta\sharp}(\nu_0\ast g_\sigma),T_{\theta'\sharp}(\nu_0\ast g_\sigma)) = \lim_{\sigma\to0} f(\theta,\theta') = f(\theta,\theta'), \] where the second equality uses the hypothesis, which applies since $\nu_0\ast g_\sigma$ is absolutely continuous for every $\sigma>0$. \end{proof} By combining the above results, we readily see that Discrete Wassmap recovers the translation set for translation of discrete measures, and the scaled dilation set for dilation image manifolds. \begin{corollary} Suppose $\mu_0\in\mathbb{W}_2(\mathbb{R}^m)$ is discrete.
Then given $\{\theta_i\}_{i=1}^N\subset\Theta\subset\mathbb{R}^m$ and corresponding measures $\{\mu_{\theta_i}\}_{i=1}^N\subset\mathcal{M}^{\text{trans}}(\mu_0,\Theta)$, the Discrete Wassmap algorithm (\cref{ALG:DiscreteWassmap}) with embedding dimension $m$ recovers $\{\theta_i\}_{i=1}^N$ up to rigid transformation. Similarly, if $\mu_0\in\mathbb{W}_2(\mathbb{R}^m)$ is discrete, then given $\{\theta_i\}_{i=1}^N\subset\Theta\subset\mathbb{R}^m_+$ and corresponding measures $\{\mu_{\theta_i}\}_{i=1}^N\subset\mathcal{M}^{\textnormal{dil}}(\mu_0,\Theta)$, the Discrete Wassmap algorithm with embedding dimension $m$ recovers $\{S\theta_i\}_{i=1}^N$ up to rigid transformation, where $S$ is as in \cref{THM:WassmapDilationGeneral}. \end{corollary} \begin{proof} Combine \cref{THM:DiscreteContinuous} with \cref{THM:WassmapTranslation,THM:WassmapDilationGeneral}. \end{proof} \begin{remark} Similarly, if $\mu_0 \in\mathbb{W}_2(\mathbb{R}^m)$ is arbitrary, then \cref{THM:DiscreteContinuous,THM:WassmapTranslation,THM:WassmapDilationGeneral} imply that the Functional Wassmap algorithm (\cref{ALG:Wassmap}) recovers the underlying translation or scaled dilation sets, respectively. \end{remark} \section{Experiments}\label{SEC:Experiments} To demonstrate our theoretical results, we provide several experiments\footnote{Code for this work may be found at \url{https://github.com/keatonhamm/Wassmap}.} using both synthetically generated two-dimensional image data and the standard MNIST digits dataset \cite{lecun1998mnist}. For each synthetic experiment, a fixed absolutely continuous base measure $\mu_0 \in \bb{W}_2(\mathbb{R}^2)$ with density $f_0(x)$ is selected; then a manifold $\mc{M}(\mu_0,\Theta)$ is sampled by applying the parametric transformation $\mc{T}_\theta$ to $\mu_0$ for a finite number of $\theta$ values $\{\theta_1,\ldots,\theta_N\}\subset\Theta$, resulting in the measures $\mu_{\theta_i}$ and corresponding densities $f_{\theta_i}$.
These (continuum) images are subsequently discretized by performing a spatial sampling, selecting $\{x_1,\ldots,x_D\}\subset\mathbb{R}^2$, evaluating each density $f_{\theta_i}(x)$ at these points, then forming the discrete measure \eqref{eqn:image_to_discrete_measure}. Comparisons are shown to traditional ISOMAP embeddings. ISOMAP is the nearest, most faithful comparison to our method as it is also a global algorithm. Note that ISOMAP assumes a `pixel' representation of images, that is, each image is treated as an element of $\mathbb{R}^D$ for some fixed $D$. One can obtain such a representation by following the steps outlined above but keeping points of zero density. Future work will address local manifold learning algorithms utilizing the Wasserstein manifold framework proposed here. \subsection{Translation manifold} In this set of experiments, we take the base measure $\mu_0$ to be the indicator function of a disc of radius $1$, that is $d\mu_0 = \frac{1}{\pi}\mathbbm{1}_{D}(x)dx$. For a given translation set $\Theta\subset\mathbb{R}^2$, the translation manifold is then generated via \eqref{eqn:trans_manifold_def}. We consider two translation sets: $\Theta_0 = [-1,1]^2$ and $\Theta_1 = [-2,-1]\times[-1,1]\cup [1,2]\times[-1,1]$. \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth]{figures/trans1_paramset.pdf} \hspace{1.6em} \includegraphics[trim = 0 -2em 0 0, clip, width=0.32\textwidth]{figures/trans1_images_PC.pdf} \hspace{2em}\includegraphics[width=0.37\textwidth]{figures/trans1_wassmap_embed.pdf}\hspace{1em} \includegraphics[width=0.39\textwidth]{figures/TranslationISOMAPEmbedding.pdf} \caption{Translation manifold generated by the characteristic function of the unit disk with parameter set $\Theta_0 = [-1,1]^2$. We consider a uniform $4\times 4$ grid in the parameter space to generate $\{\theta_i\}$. 
Shown are the original translation grid, the point cloud images $\mu_{\theta_i}$, the Wassmap embedding, and the ISOMAP embedding.} \label{FIG:Translation} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.43\textwidth]{figures/TranslationNonconvexGridOverlap.pdf} \includegraphics[width=0.43\textwidth]{figures/TranslationNonconvexWassmapEmbeddingOverlap.pdf} \includegraphics[scale=0.22]{figures/TranslationNonconvexISOMAPEmbeddingOverlap.pdf} \caption{Translation manifold generated by the characteristic function of the unit disk with parameter set $\Theta_1$. We consider a uniform $6\times 6$ grid in each disjoint piece of the parameter space to generate $\{\theta_i\}$. Shown are the original translation grid, Wassmap embedding, and ISOMAP embedding.} \label{FIG:TranslationNonconvex} \end{figure} Both \cref{FIG:Translation,FIG:TranslationNonconvex} show that Wassmap recovers the underlying translation grid up to rigid motion as predicted by \cref{THM:WassmapTranslation}; note that in \cref{FIG:Translation} a rotation appears, but the side-lengths of the embedded grid are 2 as in the original parameter set $\Theta_0$. In both experiments, the translated discs overlap; consequently, ISOMAP produces embeddings that appear coherent despite failing to recover the parameter set. In particular, the ISOMAP embedding of $\Theta_0$ appears to be a morphed grid, but the scale is dilated in a way that the Wassmap embedding is not. We note that in cases where the translated images do not overlap, the ISOMAP embedding is meaningless because all pairs of distinct images are equidistant; nonetheless, in such cases Wassmap still recovers the translation set. \Cref{FIG:TranslationNonconvex} illustrates the fact that Wassmap is capable of recovering nonconvex translational parameter sets, in contrast to both discrete and continuum ISOMAP \cite{donoho2005image}.
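The translation case can also be checked numerically without running the full pipeline. For uniform discrete measures on equal-size point clouds, the optimal transport plan is a permutation, so $W_2$ reduces to a linear assignment problem; the sketch below (our own helper, not the experiment code) confirms that the distance between two translates of a random point cloud equals $|\theta-\theta'|$, consistent with \cref{THM:WassmapTranslation}.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def w2_uniform(X, Y):
    """Exact W2 between uniform discrete measures on equal-size point
    clouds: with uniform weights the optimal plan is a permutation, so
    optimal transport reduces to a linear assignment on squared costs."""
    C = cdist(X, Y, metric="sqeuclidean")
    rows, cols = linear_sum_assignment(C)
    return np.sqrt(C[rows, cols].mean())

rng = np.random.default_rng(0)
X = rng.random((200, 2))                  # sample of a base measure
theta = np.array([0.5, -1.0])
theta_p = np.array([2.0, 1.0])
d = w2_uniform(X + theta, X + theta_p)    # equals |theta - theta_p| = 2.5
```

The identity matching attains the lower bound given by the distance between the means of the two measures, so the assignment solver returns exactly the translation magnitude.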
\subsection{Dilation manifold} To illustrate the case of dilations, we consider the same base measure as in the translation case (the normalized indicator of the disc centered at $(0,0)$), but now apply the dilation transformation $D_\theta$ as discussed in \cref{subsec:dilationthy}, where the parameters $\vartheta_1,\vartheta_2$ come from a regular $4\times 4$ subgrid of $[0.5,2]\times[0.5,4]$. The dilation parameter grid, the resulting sample images, the Wassmap embedding, and the ISOMAP embedding are shown in \cref{FIG:Dilation}. Note that the ISOMAP embedding is poor, whereas the Wassmap embedding recovers the structure of the parameter set faithfully. The dilation grid has size $1.5\times3.5$, and the Wassmap embedding has size approximately $1.75\times 0.75$. One can compute the projected second moment of the base measure $d\mu_0 = \frac{1}{\pi}\mathbbm{1}_D dx$ as $(M_2(P_i\mu_0))^\frac12 = \frac{1}{2}$. \Cref{COR:WassmapDilation} states that Wassmap recovers the dilation grid up to this factor and a global rotation. Thus we see that the Wassmap embedding does recover the original dilation grid multiplied by $0.5$ (the moment term) and rotated by $\pi/2$, as predicted by \cref{COR:WassmapDilation}. \begin{figure}[h!] \centering \includegraphics[trim = 0 0 0 2em, clip, width=0.4\textwidth]{figures/DilationGrid.pdf} \hspace{.04\textwidth} \includegraphics[trim = 0 -3em 0 -3em, clip, scale = .21]{figures/DilationImages.pdf}\\ \hspace{2em}\includegraphics[trim = 0 2em 0 0, clip, width=0.38\textwidth]{figures/DilationWassmapEmbedding.pdf} \hspace{1em}\includegraphics[trim = -3em -2em -2em -3em, clip,width=0.38\textwidth]{figures/DilationISOMAPEmbedding.pdf} \caption{Dilation manifold generated by the characteristic function of the unit disk with parameter set $\Theta=[0.5,2]\times[0.5,4]$. We consider a uniform $4\times 4$ grid to generate $\{\theta_i\}$. Shown are the original dilation grid, the images $\mu_{\theta_i}$, the Wassmap embedding, and the ISOMAP embedding.
}\label{FIG:Dilation} \end{figure} \subsection{Rotation manifold} To illustrate the case of rotational manifolds, we consider two possibilities. First, the base measure $\mu_0$ is the indicator of an origin-centered ellipse with major radius 1 and minor radius 0.5. This measure will be rotated about the origin to obtain the sampled manifold at uniform angles $\theta_i\in[0,2\pi]$; results are shown in \cref{FIG:RotationCentered}. Second, we consider the same base measure but centered at $(3,2)$, applying the same origin-centered rotation; results are shown in \cref{FIG:RotationUncentered}. We see in both cases that the Wassmap embeddings recover the circle from which the rotation angles are generated. This provides evidence that Wassmap is capable of recovering rotational manifolds, though at present we are not able to prove this as discussed in \cref{SEC:Rotation}. ISOMAP appears capable of finding the circle for the rotation of the centered ellipse for certain choices of parameters. The uncentered ellipse results in an incoherent ISOMAP embedding due to the nonoverlapping support of the rotated images. If one samples the circle more densely so that the supports overlap, ISOMAP can recover the circle if one uses a 2-nearest neighbor graph; however, other graph constructions fail to find the structure. This illustrates two advantages of Wassmap: it can work when the manifold is sampled sparsely, and it is not sensitive to the parameter choice, whereas ISOMAP requires dense sampling of the manifold and can be extremely sensitive to parameter choices. We highlight one facet of \cref{FIG:RotationCentered}: the Wassmap embedding appears to have fewer points than the original set of rotation angles. This is due to the periodic symmetry of the base measure under rotations. We have chosen the rotation set so that $\theta=0$ and $\theta=\pi$ are in the set; hence each point in the Wassmap embedding actually represents two rotation angles whose images coincide by the symmetry of the ellipse.
If one chooses angles for which no two rotated images coincide under the symmetry of the ellipse, then the Wassmap embedding will recover all of the points; we do not show these experiments here due to space limitations. \begin{figure} \centering \hspace{0.1\textwidth}\includegraphics[trim = 0 -2em 0 -2em, clip, width=0.36\textwidth]{figures/RotationAngles.pdf} \hspace{0.1\textwidth} \includegraphics[trim = 0 3em 0 2em, clip, width=0.4\textwidth]{figures/RotationWassmapEmbedding.pdf} \includegraphics[trim=0 20em 0 18em, clip,width=0.55\textwidth]{figures/RotationImages.pdf} \includegraphics[width=0.35\textwidth]{figures/RotationISOMAPEmbedding.pdf} \caption{Rotation manifold generated by the characteristic function of an ellipse with major radius 1 and minor radius 0.5 centered at the origin. Rotation angles are uniformly sampled between $0$ and $2\pi$. Shown are the original points on the circle $(\cos\theta_i,\sin\theta_i)$, the images $\mu_{\theta_i}$, the Wassmap embedding, and the ISOMAP embedding.}\label{FIG:RotationCentered} \end{figure} \begin{figure} \centering \includegraphics[width=0.3\textwidth]{figures/RotationNoncenteredAngles.pdf} \includegraphics[width=0.3\textwidth]{figures/RotationNoncenteredImages.pdf} \includegraphics[width=0.3\textwidth]{figures/RotationNoncenteredWassmapEmbedding.pdf} \caption{Rotation manifold generated by the characteristic function of an ellipse with major radius 1 and minor radius 0.5 centered at $(3,2)$. Rotation angles are uniformly sampled between $0$ and $2\pi$. Shown are the original points on the circle $(\cos\theta_i,\sin\theta_i)$, the images $\mu_{\theta_i}$ plotted on the same figure, and the Wassmap embedding.}\label{FIG:RotationUncentered} \end{figure} \subsection{Mimicking MNIST digits via diffeomorphisms} Here we will investigate a more general family of diffeomorphisms than those considered previously.
Our motivation is to mimic the MNIST handwritten digit dataset \cite{lecun1998mnist} by creating a standard handwritten $0$ represented by an elliptic annulus. To capture variations in the writing, we consider a parametrized family of diffeomorphisms of the form $f_\theta(x) = f_0(T_\theta(x))$, where $f_0$ is the base $0$ and the family of diffeomorphisms $T_\theta$ is defined via smoothly morphing a coordinate grid via a global shear and local rotation as follows: \begin{align} T_\theta(x) = \left[\begin{array}{cc} \cos(\alpha_\theta(x)) & -\sin(\alpha_\theta(x)) \\ \sin(\alpha_\theta(x)) & \cos(\alpha_\theta(x)) \end{array}\right]x \end{align} where $\alpha_\theta(x_1,x_2) = \theta_1\cos(x_1+\theta_2 x_2)\cos(x_2)$. \Cref{fig:deformation_family} illustrates the base image and a sample grid deformation of it. \Cref{fig:diffeo_2d_3d_comparison} shows the Wassmap and ISOMAP embeddings of $\{f_{\theta_i}\}$ as defined above for a uniform $16\times 16$ grid of $\theta$ values with $0\leq \theta_1\leq \pi/2 $ and $0.1\leq \theta_2\leq 1$. Both embeddings into $\mathbb{R}^2$ and $\mathbb{R}^3$ are shown. In this case, the 2-dimensional ISOMAP embedding is similar to the Wassmap embedding, but the 3-dimensional Wassmap embedding is significantly more structured than the ISOMAP one. This evidence suggests that Wassmap may be capable of finding the structure of manifolds generated by a restricted family of diffeomorphisms, but more exploration is needed in the future. \begin{figure}[h!] \centering \includegraphics[width=.6\textwidth]{figures/grid_morph_overlay.pdf} \caption{Example image generated with the grid deformation family. The base object function $f_0(x)$ is evaluated on the standard grid (left) and a morphed grid with $\theta_1 = 0.4$ and $\theta_2 =\pi/4$ (right).} \label{fig:deformation_family} \end{figure} \begin{figure*}[h!]
\centering \includegraphics[width=\textwidth]{figures/wassmap_diffeo_comparison_2d_3d.pdf} \caption{Comparison of the two- and three-dimensional embeddings generated with Wassmap and ISOMAP for the grid deformation family. The Wassmap technique produces a much smoother embedding with clear geometric structure, while the ISOMAP method produces a less coherent embedding. The surfaces were produced using a surface reconstruction method \cite{mycrust}. } \label{fig:diffeo_2d_3d_comparison} \end{figure*} \subsection{Embedding MNIST} To conclude, we show the effect of Wassmap on embedding MNIST. As a prelude, we randomly sample 100 handwritten 0s and 100 1s from MNIST and compute the 2-dimensional Wassmap and ISOMAP embeddings. \Cref{FIG:MNIST01} shows the resulting embeddings including three ISOMAP embeddings corresponding to different choices of $\varepsilon$ when forming the $\varepsilon$--neighborhood graph. Wassmap produces an embedding in which the classes are easily separated by a kernel SVM or nearest neighbor classifier, whereas the ISOMAP embedding is sensitive to the choice of $\varepsilon$, and in some instances results in nontrivial class overlaps. \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{figures/WassmapMNIST01.pdf} \includegraphics[width=0.45\textwidth]{figures/ISOMAPMNIST01_eps1000.pdf} \includegraphics[width=0.45\textwidth]{figures/ISOMAPMNIST01_eps2000.pdf} \includegraphics[width=0.45\textwidth]{figures/ISOMAPMNIST01_eps2500.pdf} \caption{Random sample of 0s (red) and 1s (blue) from MNIST. 
Shown are the Wassmap embedding (top left) and ISOMAP embeddings for varying choices of $\varepsilon$ used to form the $\varepsilon$--neighborhood graph: (top right) $\varepsilon = 1000$, (bottom left) $\varepsilon=2000$, (bottom right) $\varepsilon=2500$.} \label{FIG:MNIST01} \end{figure} For a larger-scale test, we randomly subsample all MNIST classes, compute Wassmap and ISOMAP embeddings into $\mathbb{R}^4$, and display scatter plots of projections onto each $2$--dimensional coordinate plane of the embeddings in \cref{fig:pairwise_mnist}. While these are somewhat challenging to interpret, we note that the Wassmap embedding appears to show better clustering of each class and separation of classes. \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{figures/pairwise_scatter_wass_4x4.pdf} \includegraphics[width=0.45\textwidth]{figures/pairwise_scatter_L2_4x4.pdf} \caption{Pairwise scatter plots for the Wassmap (left) and ISOMAP (right) embedding of MNIST digits into $\mathbb{R}^4$. The $(i,j)$th image displays a scatter plot of the projection of the embedding onto the $(i,j)$ coordinate plane; along the diagonal, a histogram of the $i$th embedding coordinate is shown. It is observed that Wassmap does a better job of separating the MNIST classes in several dimensions, whereas -- with the exception of the dark green `ones' class -- the ISOMAP embedding shows no clear separation among the classes.} \label{fig:pairwise_mnist} \end{figure} \section{Conclusion and Future Outlook} This paper proposed the use of Wasserstein distances in the ISOMAP algorithm as a more suitable measure of distance between images. The resulting Wassmap algorithm and its variants were shown to recover (up to rigid transformation) several parametrizations of image manifolds, including translation and dilation sets.
We provided a bridge which transfers functional manifold recovery results to discrete recovery, illustrating that the Discrete Wassmap algorithm recovers parametrizations of image manifolds generated by discrete measures. The practical experiments illustrate the effectiveness of the proposed framework on various synthetic and benchmark data. There is more to be explored regarding Wassmap, including its potential to recover rotation manifolds, those generated by composition of different operations (e.g., translation plus dilation or rotation), and manifolds generated by some class of parametrized diffeomorphisms acting on one or multiple generators. It also remains to explore the effects of additive noise ($\eta$ in \eqref{EQN:Imaging}) and the structure of the imaging operator $\mathcal{H}$. Future work will also explore the use of Wasserstein distances in other manifold learning paradigms, including local methods such as LLE and t-SNE, as well as use of $W_p$ for other $p\in[1,\infty]$ (for example, Kileel et al.~\cite{kileel2021manifold} approximate the classic Earth Mover's Distance $W_1$ in the Laplacian eigenmap setting). \bibliographystyle{siamplain}
\section{Introduction} Cosmic inflation \cite{inflation} has been a paradigm beyond the standard Big Bang cosmology in which the flatness, isotropy, homogeneity, horizon and relic problems are solved. The inflaton generates a scale-invariant and Gaussian spectrum of density fluctuations. Furthermore, quantum fluctuations during inflation provide a seed for the large-scale structure formation that we observe now. An economical proposal to utilize the Higgs doublet in the Standard Model (SM) as the inflaton has recently drawn some attention \cite{higgsinf}. This is the so-called Higgs inflation. In this scenario, chaotic inflation can be realized due to a large non-minimal coupling of the Higgs doublet to gravity \cite{nonminimal}, instead of having a tiny Higgs quartic coupling, which would be in contradiction with the Higgs mass bound. However, some time after the proposal, it was shown by the power-counting formalism that the Hubble scale during inflation is close to the unitarity bound on the new-physics scale associated with the breakdown of the semi-classical approximation in the effective theory \cite{boundH}. This result is independent of the frame and the background to which the power counting is applied \cite{frameindep}. Nonetheless, a singlet field with non-minimal coupling could be a viable inflaton candidate for a small singlet quartic self-coupling, for which the Hubble scale can be smaller than the unitarity bound. Weak-scale supersymmetry \cite{susy} is a solution to the hierarchy problem in the SM and has been one of the main topics in the search for new physics at the Large Hadron Collider (LHC). In the Minimal Supersymmetric Standard Model (MSSM), there are two Higgs doublets and the Higgs quartic coupling is given in terms of the electroweak gauge couplings. Then, one may ask whether SUSY can help address the naturalness issue of the Higgs inflation in the context of the MSSM.
However, apart from the unitarity problem in the Higgs inflation, it has been shown that in the MSSM the Higgs inflation cannot be realized, due to the instability along the $\beta$ field, which is the ratio of the two Higgs VEVs \cite{higgsnmssm}. On the other hand, in the Next-to-Minimal Supersymmetric Standard Model (NMSSM) \cite{nmssm}, an additional Higgs self-coupling can be introduced by the superpotential term coupling the Higgs doublets to a singlet chiral superfield, and it can provide the vacuum energy needed for inflation \cite{higgsnmssm}. Since this new Higgs self-coupling can be made small without violating the LEP bound on the Higgs mass, there is a possibility for the Higgs inflation to work within the semi-classical approximation. However, even in this case, the singlet field, which would be a non-inflaton field, gets a tachyonic mass during inflation and would spoil the slow-roll inflation of the Higgs fields \cite{jsugra}. In this paper, we revisit the tachyonic mass problem in the NMSSM in 4D Jordan frame supergravity \cite{jsugra}. For this purpose, we consider a simple toy model with two singlet chiral superfields\footnote{Single-field inflation with non-minimal coupling cannot be realized in supergravity.} to capture the main difficulty of inflation with non-minimal coupling in supergravity and provide a solution to the problem. Thus, we introduce two singlet fields: one becomes the inflaton and the other provides a nonzero F-term potential through its coupling to the inflaton field. Then, we find that there appears a tachyonic instability along the non-inflaton singlet for the minimal form of the frame function, due to the negative supergravity mass correction. As a solution to the tachyonic mass problem, we add a higher order correction for the non-inflaton field in the frame function.
In this case, for an appropriate value of the coefficient of the new term, we show that it is possible to make the non-inflaton singlet field get a positive squared mass and remain stable during inflation while the inflaton dynamics is unchanged. We give an example where heavy fields coupled only to the non-inflaton field generate such a higher order correction with the necessary coefficient in the one-loop effective frame function. Our result can be applied directly to the Higgs inflation with zero D-term in the NMSSM. For a successful Higgs inflation, the Higgs sector parameters of the NMSSM are constrained by the required inflationary parameters. In particular, the non-minimal Higgs coupling gives rise to the effective $\mu$ term by the Giudice-Masiero mechanism \cite{gmmech}. As a result, for a large non-minimal coupling, the gravitino mass is much smaller than the effective $\mu$ term, which is of order the soft mass parameters. Thus, the gravitino can be the LSP and become a dark matter candidate. The paper is organized as follows. We first explain a general framework for 4D Jordan frame supergravity where non-minimal couplings for scalar fields are suitably introduced. Then we take a minimal inflationary model with two singlet chiral superfields and point out the tachyonic mass problem. Consequently, we propose a solution to the tachyonic instability problem and find a necessary condition for satisfying the slow-roll inflation and the unitarity bound on the heavy field mass. Next we discuss the implications of the Higgs inflation for NMSSM phenomenology in a later section. Finally, a conclusion is drawn. There are two appendices dealing with the K\"ahler metric in Jordan frame supergravity and containing an example where the one-loop correction to the frame function is calculated in the presence of heavy fields.
\section{Jordan frame supergravity} We start with the general Einstein-frame action in 4D ${\cal N}=1$ supergravity \cite{esugra}, \begin{equation} S_E=\int d^4 x \sqrt{-g_E}\Big(\frac{1}{2}R -K_{i{\bar j}}D_\mu\phi^i D^\mu {\bar\phi}^{\bar j}-V_E\Big) \end{equation} where the covariant derivatives for scalar fields $\phi^i$ are given by $D_\mu\phi^i=\partial_\mu\phi^i-A^a_\mu \eta^i_a$. Here the Einstein-frame scalar potential is given in terms of the K\"ahler potential $K$, the superpotential $W$ and the gauge kinetic function $f_{ab}$ by \begin{equation} V_E = V_F + V_D \label{spot} \end{equation} where \begin{eqnarray} V_F &=& e^K \Big((K^{-1})^{i{\bar j}}(D_iW)(D_{\bar j} W^\dagger)-3|W|^2\Big), \\ V_D &=& \frac{1}{2} {\rm Re}f^{-1}_{ab} \Big(-i\eta^i_a\partial_i K +3ir_a\Big)\Big(-i\eta^i_b\partial_i K +3ir_b\Big) \end{eqnarray} with $G\equiv K+\ln |W|^2$ and the gauge transformations of the K\"ahler potential and the superpotential being $\delta_a K=3(r_a+{\bar r}_a)$ and $\delta_a W=-3r_a W$, respectively. We note that $r_a$ is nonzero only for the gauged $U(1)_R$ symmetry for which the superpotential transforms with $r_R=-\frac{2}{3}ig_R$. Performing a Weyl transformation of the metric with $g^E_{\mu\nu}=(-\Omega/3)g^J_{\mu\nu}$, we obtain the general Jordan-frame supergravity action from the above Einstein-frame action as follows, \begin{equation} S_J=\int d^4x \sqrt{-g_J}\Big(-\frac{1}{6}\Omega R-\frac{1}{4\Omega}(\partial_\mu\Omega)(\partial^\mu\Omega) +\frac{1}{3}\Omega K_{i{\bar j}}D_\mu\phi^i D^\mu {\bar\phi}^{\bar j}-V_J \Big) \end{equation} where the Jordan-frame scalar potential is related to the Einstein-frame one as \begin{equation} V_J = \frac{\Omega^2}{9} V_E. \label{jspot} \end{equation} The complete Jordan-frame supergravity action including fermions and gauge bosons can be found in Ref.~\cite{jsugra}. 
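The relation (\ref{jspot}) can be checked in one line from the behavior of the metric determinant under the Weyl rescaling; in four dimensions, $\sqrt{-g_E}=(-\Omega/3)^2\sqrt{-g_J}$, so matching the potential terms of the two actions gives

```latex
\sqrt{-g_E}\,V_E=\frac{\Omega^2}{9}\sqrt{-g_J}\,V_E
\equiv\sqrt{-g_J}\,V_J
\quad\Longrightarrow\quad
V_J=\frac{\Omega^2}{9}\,V_E .
```

This is a consistency check of the potential terms only; the kinetic and curvature terms require the full Weyl-transformation identities.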
Now relating the frame function $\Omega$ to the K\"ahler potential as \begin{equation} \Omega = - 3 M^2_P e^{-K/(3M^2_P)}, \label{framefunc} \end{equation} we simplify the Jordan-frame action \cite{jsugra} as \begin{equation} S_J=\int d^4x \sqrt{-g_J}\Big(-\frac{1}{6}\Omega R-\Omega_{i{\bar j}}D_\mu\phi^i D^\mu {\bar\phi}^{\bar j}+\Omega b^2_\mu-V_J \Big) \end{equation} where the auxiliary vector field $b_\mu$ gets the following form, \begin{equation} b_\mu =\frac{1}{2}iA^a_\mu(r_a -{\bar r}_a) -\frac{i}{2\Omega}\Big(D_\mu\phi^i\partial_i\Omega - D_\mu {\bar\phi}^{\bar i}\partial_{\bar i}\Omega\Big). \label{auxvec} \end{equation} Therefore, the kinetic terms for the scalar fields depend on the analogue of the K\"ahler metric, with $\Omega$ playing the role of $K$. However, the geometry of the non-linear sigma model of the scalar fields is not of the K\"ahler type, because of the additional term proportional to $b^2_\mu$. In order to get the canonical scalar kinetic terms in the Jordan frame, we need $\Omega_{i{\bar j}}=\delta_{i{\bar j}}$ and $b_\mu=0$. The most general frame function giving $\Omega_{i{\bar j}}=\delta_{i{\bar j}}$ is the following \cite{jsugra}, \begin{equation} \Omega= - 3M^2_P + \delta_{i{\bar j}} \phi^i {\bar\phi}^{\bar j} -\frac{3}{2}(F(\phi)+{\rm h.c.}). \end{equation} Then, from the relation (\ref{framefunc}), the corresponding K\"ahler potential takes the following form, \begin{equation} K= - 3M^2_P \ln \Big(1-\frac{1}{3M^2_P}\delta_{i{\bar j}} \phi^i {\bar\phi}^{\bar j} +\frac{1}{2M^2_P}(F(\phi)+{\rm h.c.})\Big). \end{equation} Even with this choice of the frame function, we note that the auxiliary vector field $b_\mu$ is nonzero due to the angular modes of complex scalar fields. During the cosmological evolution, however, when only the moduli $|\phi^i|$ dominate the dynamics, the scalar kinetic terms can be of canonical form.
When $F=0$, the non-minimal coupling of the scalar fields is fixed as ${\cal L}=-\sqrt{-g}\,\sum_i\xi_i|\phi_i|^2R$ with $\xi_i=\frac{1}{6}$, so the scalar fields are conformally coupled to gravity. However, by choosing an appropriate holomorphic function $F$, we can break the conformal symmetry explicitly and include a nontrivial non-minimal coupling to gravity. Thus, it is possible to get a supergravity realization of the inflation model with non-minimal coupling. Henceforth we set the Planck scale to $M^2_P=1$ but we will recover $M_P$ whenever needed. \section{The tachyonic mass problem in Jordan-frame supergravity inflation} We consider an inflation model with two singlets $S$ and $X$ in the Jordan frame supergravity. For the canonical scalar kinetic terms with two singlets, the general frame function in the Jordan frame is \begin{equation} \Omega=-3+S^\dagger S + X^\dagger X -\frac{3}{2}(F(S,X)+{\rm h.c.}). \label{framefunc1} \end{equation} Now we choose the non-minimal coupling to be $F=\chi S^2$ with $\chi$ being a dimensionless constant. Then, the K\"ahler potential becomes \begin{equation} K= - 3\ln \Big(1-\frac{1}{3}S^\dagger S -\frac{1}{3}X^\dagger X +\frac{1}{2}(\chi S^2+{\rm h.c.})\Big). \end{equation} Moreover, by imposing a $U(1)_G$ global symmetry\footnote{An R-symmetry alone would not restrict the superpotential to the form considered in this paper; it would also allow a tadpole term $W=f^2 X$, which would affect the slow-roll inflation unless $|f|\ll \frac{|\lambda|}{\sqrt{2}}|S|$. We allow only the dimensionless coupling by imposing the non-R $U(1)_G$ symmetry, which is the analogue of the PQ symmetry in the NMSSM.} with charges $G[X]=-2$ and $G[S]=+1$, we find the following unique superpotential at the renormalizable level, \begin{eqnarray} W= \frac{1}{2}\lambda X S^2, \end{eqnarray} where $\lambda$ is a dimensionless coupling.
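To make the non-minimal coupling explicit, one can expand the gravitational term $-\frac{1}{6}\Omega R$ of the Jordan-frame action with the frame function (\ref{framefunc1}) and $F=\chi S^2$ (in units $M_P=1$):

```latex
-\frac{1}{6}\Omega R
=\frac{1}{2}R
-\frac{1}{6}\big(S^\dagger S+X^\dagger X\big)R
+\frac{\chi}{4}\big(S^2+S^{\dagger 2}\big)R ,
```

so along the real direction of $S$ the coefficient of $|S|^2R$ is $\frac{\chi}{2}-\frac{1}{6}$, i.e. a large non-minimal coupling for $\chi\gg 1$, in analogy with the $\xi\phi^2R$ term of the non-supersymmetric Higgs inflation.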
The first attempt for the single field inflation with $W=\frac{1}{3}\lambda S^3$ was unsuccessful due to the negative vacuum energy coming from the supergravity correction \cite{higgsnmssm}. This is in a similar spirit to the general problem in supergravity chaotic inflation models with a single field \cite{chaoticsugra}. We note that the $U(1)_G$ global symmetry is broken explicitly in the K\"ahler potential only by the non-minimal coupling. With no gauged $U(1)_R$ symmetry, the scalar potential for the singlets comes only from the F-terms and it is given in the Jordan frame by using eq.~(\ref{jspot}) with eq.~(\ref{spot}), \begin{eqnarray} V_J&=&(1-k)^{-1}\Big[K^{SS^\dagger}|D_SW|^2+K^{XX^\dagger}|D_X W|^2 \nonumber \\ &&+(K^{SX^\dagger} (D_S W) (D_X W)^\dagger +{\rm h.c.})-3|W|^2\Big] \end{eqnarray} where $k\equiv \frac{1}{3}|S|^2+\frac{1}{3}|X|^2-\frac{1}{2}(\chi S^2+{\rm h.c.})$ and \begin{eqnarray} D_S W&=& \lambda X S\bigg(1+\frac{|S|^2-3\chi S^2}{2(1-k_0-\frac{1}{3}|X|^2)}\bigg),\\ D_X W&=& \frac{1}{2}\lambda S^2 \bigg(1+\frac{|X|^2}{1-k_0-\frac{1}{3}|X|^2}\bigg). \end{eqnarray} By using the K\"ahler metric with $\gamma=0$ given in eqs.~(\ref{kahlerm}) and (\ref{inversekahlerm}), the Jordan-frame scalar potential becomes \begin{eqnarray} V_J&=&\frac{1-k_0}{1-k_0+\frac{1}{3}|S^\dagger-3\chi S|^2}\,|D_S W|^2+\frac{1-k+\frac{1}{3}|S^\dagger-3\chi S|^2}{1-k_0+\frac{1}{3}|S^\dagger-3\chi S|^2}\,|D_X W|^2 \nonumber \\ &&-\frac{1}{1-k_0+\frac{1}{3}|S^\dagger-3\chi S|^2}\cdot \frac{1}{3}X^\dagger (S-3\chi^\dagger S^\dagger)(D_S W)(D_X W)^\dagger+{\rm h.c.} \nonumber \\ &&-3(1-k)^{-1}|W|^2 \end{eqnarray} with $k_0\equiv \frac{1}{3}|S|^2-\frac{1}{2}(\chi S^2+{\rm h.c.})$. For $|X|\ll 1$ and $\chi|S|^2\gg 1$, we obtain the Jordan-frame scalar potential as \begin{equation} V_J\simeq \frac{1}{4}|\lambda|^2 |S|^4 -\frac{|\lambda|^2}{6\chi}|X|^2 (S^2 +S^{\dagger 2})+{\cal O}\Big(\frac{|\lambda|^2}{\chi^2}|X|^4\Big). 
\end{equation} Thus, the Jordan-frame scalar potential is unstable along the $X$ direction when the real part of $S^2$ is positive. Then, from eq.~(\ref{jspot}), with $S\equiv |S|e^{i\theta}$, the Einstein-frame scalar potential is \begin{eqnarray} V_E&=&(1-k)^{-2} V_J \nonumber \\ &\simeq & \frac{|\lambda|^2M^4_P}{4\chi^2\cos^2(2\theta)}\bigg[1-\frac{2M^2_P}{\chi|S|^2\cos 2\theta} +\frac{2}{3}\Big(\frac{1}{\cos 2\theta}-2\cos 2\theta\Big)\frac{|X|^2}{\chi|S|^2}\bigg]. \label{espot} \end{eqnarray} Thus, we find that the F-term contribution of the $X$ singlet approaches a positive constant for $\chi |S|^2\gg 1$ while the other terms are suppressed, so the vacuum energy required for the inflation appears. Here we have recovered the Planck scale, $M_P$. Then, minimizing the potential with respect to the angle at $\theta\simeq 0$, from the first term in Eq.~(\ref{espot}), the scalar potential becomes \begin{equation} V_E\simeq \frac{|\lambda|^2M^4_P}{4\chi^2}\bigg[1-\frac{2M^2_P}{\chi|S|^2}-\frac{2|X|^2}{3\chi|S|^2}\bigg]. \end{equation} Therefore, the slow-roll inflation along the $S$ singlet is possible for $\chi|S|^2\gg 1$, so the Hubble scale during the inflation is given by $H^2\simeq \frac{V_E}{3M^2_P}\simeq \frac{|\lambda|^2M^2_P}{12\chi^2}$. However, even in the Einstein frame, the $X$ singlet gets a tachyonic effective mass, ending up with an instability along this singlet direction. On the other hand, the kinetic terms for the singlets in the Einstein frame are given by \begin{equation} {\cal L}_{\rm kin}\simeq -\frac{3M^2_P}{|S|^2}|\partial_\mu S|^2 -\frac{M^2_P}{\chi |S|^2}|\partial_\mu X|^2-\Big(\frac{M^2_P XS}{\chi|S|^4}\partial_\mu S \partial^\mu X^\dagger +{\rm h.c.}\Big).
\label{einskin} \end{equation} Then, at the minimum with $\theta\simeq 0$, in terms of the canonical inflaton field, we obtain the Lagrangian density as \begin{eqnarray} {\cal L}_{\varphi,X}&\simeq& -\frac{1}{2}(\partial_\mu\varphi)^2-e^{-2\varphi/(\sqrt{6}M_P)}|\partial_\mu X|^2 -\frac{1}{\sqrt{6}M_P}e^{-2\varphi/(\sqrt{6}M_P)}\partial_\mu\varphi \partial^\mu |X|^2 \nonumber \\ &&-\frac{|\lambda|^2M^4_P}{4\chi^2}\bigg[1-2 e^{-2\varphi/(\sqrt{6}M_P)}-\frac{2}{3}e^{-2\varphi/(\sqrt{6}M_P)}\frac{|X|^2}{M^2_P}\bigg] \end{eqnarray} with $\varphi\equiv\frac{\sqrt{6}}{2}M_P\ln(\chi|S|^2/M^2_P)$. Consequently, we find that in the canonical field basis, the $X$ singlet has a tachyonic effective mass of order the Hubble scale\footnote{For the approximate dS background with a slow-rolling $\varphi$ and $|X|\ll 1$, the equation of motion for $X$ is $\ddot{X}+3H{\dot X}\simeq -m^2_X X$. So, for $m^2_X\simeq-2H^2$, a small perturbation of the $X$ singlet would grow exponentially as $X\propto {\rm exp}((\sqrt{17}-3)Ht/2)$ during inflation. Since the time scale of the exponential growth is $t_X\simeq \frac{1.78}{H}$, it would make the slow-roll inflation with 60 e-foldings impossible. I would like to thank M. Giovannini for pointing this out.} \begin{equation} m^2_X\simeq K^{X X^\dagger} V_{E,X X^\dagger} \simeq -\frac{|\lambda|^2M^2_P}{6\chi^2}\simeq -2H^2. \end{equation} Therefore, the $X$ singlet would roll down fast into a minimum of the full scalar potential, dominating the scalar field dynamics and spoiling the slow-roll inflation along the $S$ singlet. This problem seems generic for the canonical scalar kinetic terms in the Jordan frame supergravity\footnote{One can compare this case to the generalized chaotic inflation for the minimal K\"ahler potential and the superpotential $W=XS^n$ with $n$ being a natural number in Refs.~\cite{chaoticsugra,westp} where the accompanying singlet has a vanishing mass at the origin.}.
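The growth rate quoted in the footnote above follows from the linearized equation of motion in the quasi-de Sitter background; inserting the ansatz $X\propto e^{\mu t}$ with $m^2_X=-2H^2$ gives

```latex
\ddot{X}+3H\dot{X}+m^2_X X=0
\quad\Longrightarrow\quad
\mu^2+3H\mu-2H^2=0
\quad\Longrightarrow\quad
\mu=\frac{\sqrt{17}-3}{2}\,H\simeq 0.56\,H ,
```

so the unstable mode grows on the time scale $t_X=\mu^{-1}\simeq 1.78/H$, far shorter than the $\sim 60$ Hubble times needed for inflation.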
\section{A solution to the tachyonic mass problem} In this section, we propose a simple solution to the tachyonic mass problem encountered in the inflation model of Jordan frame supergravity. For this, we introduce a higher order correction with the $X$ singlet to the frame function as follows, \begin{equation} \Delta \Omega = -\gamma (X^\dagger X)^2 \label{add} \end{equation} where $\gamma=c\frac{M^2_P}{M^2}$ with $c$ being a dimensionless parameter and $M$ being the mass of heavy fields that are integrated out. Here the coefficient $\gamma$ is dimensionless in units of $M_P=1$. In the presence of the higher order correction (\ref{add}), the Jordan-frame kinetic term for the $X$ singlet becomes non-canonical. However, we keep the Jordan-frame kinetic term for the $S$ singlet canonical. In this case, the additional non-canonical kinetic terms coming from $b_\mu$ in eq.~(\ref{auxvec}) still vanish for the frozen angular modes. Generically, however, both $(X^\dagger X)(S^\dagger S)$ and $(S^\dagger S)^2$ terms could also be generated after the heavy fields are integrated out. The $(X^\dagger X)(S^\dagger S)$ term corresponds to an effective wave function renormalization of the $X$ singlet during the inflation driven at a nonzero $|S|$. So, it does not significantly affect either the inflaton dynamics or the $X$ singlet. However, in order to maintain the behavior of the chaotic inflation with the non-minimal coupling at a large $|S|$, the $(S^\dagger S)^2$ term must be suppressed compared to the non-minimal coupling term. That is, for $\gamma_s|S|^4\ll \chi |S|^2$ with $\gamma_s$ being the coefficient of the $(S^\dagger S)^2$ term, together with the chaotic inflation condition, $\chi |S|^2\gg 1$, we need $\frac{1}{\chi}\ll |S|^2\ll \frac{\chi}{\gamma_s}$. Then, the resultant upper bound on the coupling is $\gamma_s\ll \chi^2$.
For instance, integrating out heavy fields of mass $M$, we would generate $\gamma_s= c_s\frac{M^2_P}{M^2}$, becoming $\gamma_s\sim c_s\chi^2$ for the heavy field mass saturating the unitarity bound, as will be discussed in the next section. Then, we would need $c_s\ll 1$ for the slow-roll inflation. As shown in Appendix B, the smallness of $c_s$ is guaranteed when the tree-level coupling of heavy fields to the $S$ singlet is forbidden by a $Z_2$ discrete symmetry. Even higher order non-holomorphic interactions for the $S$ singlet generated by the heavy fields would be suppressed by the same $Z_2$ symmetry. On the other hand, higher order corrections to the holomorphic part of the frame function will not be generated, due to the non-renormalization theorem \cite{higgsnmssm}. Now we discuss the effect of the $(X^\dagger X)^2$ term on the tachyonic mass problem. Due to the correction term in the frame function, for $|X|\ll 1$ and $\chi|S|^2\gg 1$, the Jordan-frame scalar potential is modified to \begin{eqnarray} V_J&\simeq& \frac{1}{4}|\lambda|^2(1+4\gamma|X|^2)|S|^4-\frac{|\lambda|^2}{6\chi}|X|^2(S^2+S^{\dagger 2}) \nonumber \\ &&+\frac{\gamma |\lambda|^2}{\chi}|X|^4\bigg(\frac{4|S|^4}{S^2+S^{\dagger 2}}+\frac{1}{3}(S^2+S^{\dagger 2})\bigg)+{\cal O}\Big(\frac{|\lambda|^2}{\chi^2}|X|^4\Big). \end{eqnarray} Thus, the higher order correction in the frame function leads to a higher dimensional interaction term, $|X|^2|S|^4$, which gives rise to an additional effective mass for the $X$ singlet during inflation and overcomes the tachyonic instability. Then, the Einstein-frame scalar potential is modified to \begin{eqnarray} V_E\simeq\frac{|\lambda|^2M^4_P}{4\chi^2\cos^2(2\theta)}\bigg[1-\frac{2M^2_P}{\chi|S|^2\cos 2\theta} +4\gamma \frac{|X|^2}{M^2_P}+\frac{2}{3}\Big(\frac{1}{\cos 2\theta}-2\cos 2\theta\Big)\frac{|X|^2}{\chi|S|^2}\bigg].
\end{eqnarray} Therefore, at the minimum with $\theta\simeq 0$, we get the resultant scalar potential in the Einstein frame, \begin{equation} V_E\simeq \frac{|\lambda|^2M^4_P}{4\chi^2}\bigg[1-\frac{2M^2_P}{\chi |S|^2}+4\gamma \frac{|X|^2}{M^2_P}-\frac{2|X|^2}{3\chi|S|^2}\bigg]. \end{equation} On the other hand, for $|X|\ll M_P$, the kinetic terms in the Einstein frame are the same as in eq.~(\ref{einskin}). Thus, with $\varphi=\frac{\sqrt{6}}{2}M_P\ln(\chi|S|^2/M^2_P)$, the Lagrangian density of the singlets is \begin{eqnarray} {\cal L}_{\varphi,X}&\simeq& -\frac{1}{2}(\partial_\mu\varphi)^2-e^{-2\varphi/(\sqrt{6}M_P)}|\partial_\mu X|^2 -\frac{1}{\sqrt{6}M_P}e^{-2\varphi/(\sqrt{6}M_P)}\partial_\mu\varphi\partial^\mu |X|^2 \nonumber \\ &&-\frac{|\lambda|^2M^4_P}{4\chi^2}\bigg[1-2 e^{-2\varphi/(\sqrt{6}M_P)}+\frac{2}{3}\Big(6\gamma -e^{-2\varphi/(\sqrt{6}M_P)}\Big)\frac{|X|^2}{M^2_P}\bigg]. \label{effaction} \end{eqnarray} So, the effective mass of the $X$ singlet becomes \begin{equation} m^2_X\simeq K^{XX^\dagger}V_{E,XX^\dagger}\simeq \Big(12\gamma e^{2\varphi/(\sqrt{6}M_P)}-2\Big)H^2. \end{equation} Consequently, we find that for $6\gamma e^{2\varphi/(\sqrt{6}M_P)}> 1$, the $X$ singlet can have a positive squared mass of order the Hubble scale during inflation. Then, the $X$ singlet can be stabilized at the origin and the slow-roll inflation is driven by the $S$ singlet. Using $e^{-2\varphi/(\sqrt{6}M_P)}\sim 0.02$ for the correct spectral index, as will be discussed in the next section, the tachyon-free condition becomes $\gamma> 0.003$. As shown in Appendix B, when heavy fields are coupled to the $X$ singlet but not to the $S$ singlet, they can be integrated out, generating the one-loop correction to the frame function.
The resulting one-loop effective frame function has no higher order correction for the $S$ singlet, but it contains the leading higher order term for the $X$ singlet, $\Delta\Omega=-\gamma(X^\dagger X)^2$ where $\gamma=c\frac{M^2_P}{M^2}$ with $c=\frac{|\kappa|^4}{192\pi^2}$. Here $\kappa$ is a dimensionless coupling of the $X$ singlet to the heavy field. In this case, we need $c=\frac{|\kappa|^4}{192\pi^2}> 0.003\frac{M^2}{M^2_P}$ for the $X$ singlet to be stable during inflation. As will be discussed in the next section, imposing the unitarity bound $M\sim M_P/\chi$, the singlet coupling is constrained to be $|\kappa|\gtrsim\frac{1.5}{\sqrt{\chi}}$. \section{Observational constraints versus unitarity bound} For the inflation model discussed in the previous section, we consider the observational consequences and the unitarity bound on the new physics scale. First, from the Einstein-frame potential (\ref{effaction}) at $X=0$, the slow-roll parameters are determined as follows, \begin{eqnarray} \epsilon&\simeq& \frac{1}{2}\bigg(\frac{\frac{\partial V_E}{\partial \varphi}}{V_E}\bigg)^2\simeq \frac{4}{3} e^{-4\varphi_i/\sqrt{6}}, \\ \eta&\simeq& \frac{\frac{\partial^2 V_E}{\partial\varphi^2}}{V_E}\simeq -\frac{4}{3} e^{-2\varphi_i/\sqrt{6}} \end{eqnarray} where $\varphi_i\gg \sqrt{6}$ (or $|S_i|\gg \frac{1}{\sqrt{\chi}}$) is the value of the inflaton field during inflation, in units of $M_P=1$. From the number of e-foldings, $N\simeq 60$, we get the spectral index and the tensor-to-scalar ratio, \begin{equation} n_s\simeq 0.968, \quad r\simeq 3.0\times 10^{-3}. \end{equation} On the other hand, the density perturbation at horizon exit is given by \begin{equation} \Delta^2_{\cal R}=\frac{V_E}{24\pi^2 M^4_P\epsilon}\simeq \frac{|\lambda|^2 N^2}{72\pi^2\chi^2}.
\end{equation} Thus, from the COBE normalization, $\delta_H=\frac{2}{5}\Delta_{\cal R}=(1.91\pm 0.17)\cdot 10^{-5}$, we get a constraint on the ratio of the dimensionless inflation parameters, \begin{equation} \frac{\chi}{|\lambda|}\simeq 5\times 10^4.\label{constraintonchi} \end{equation} The non-minimal coupling to gravity induces a new effective interaction between the graviton and the scalar field. Then, power counting for the scalar scattering amplitude involving the effective interaction gives rise to the unitarity bound on the maximum energy scale. In particular, the Hubble scale, which is the inflation energy scale, must be much smaller than the unitarity bound such that the semi-classical approximation for inflation is justified. In our case, the non-minimal coupling, $F=\chi S^2$, gives rise to the effective interaction term in the Jordan frame, \begin{equation} {\cal L}_{\rm eff}\simeq \Big(\frac{\chi}{M_P} S^2 + {\rm h.c.}\Big)\Box h^\mu_\mu \label{effint} \end{equation} where $h^\mu_\mu$ is the trace part of the graviton. Thus, from the power counting on scalar scattering \cite{unitarity,boundH,frameindep}, the upper bound allowed by unitarity on the new-physics scale is given by $\Lambda\simeq \frac{M_P}{\chi}$. On the other hand, the Hubble scale during inflation is approximately given by $H\simeq \frac{|\lambda|M_P}{\chi}$. For the semi-classical approximation for the inflation dynamics to be justified \cite{boundH}, we must have $H\ll \Lambda$, resulting in $|\lambda|\ll 1$. Suppose that $|\lambda|=0.01$. Then, from eq.~(\ref{constraintonchi}), we need to take the non-minimal coupling to be $\chi\simeq 5\times 10^2$. In this case, the quantum gravity scale becomes $\Lambda\simeq 2\times 10^{-3}\, M_P\sim 5\times 10^{15}$ GeV, being close to the GUT scale.
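For completeness, the quoted observables follow by trading the field value for the number of e-foldings (units $M_P=1$):

```latex
N=\int\frac{d\varphi}{\sqrt{2\epsilon}}
\simeq\sqrt{\frac{3}{8}}\int e^{2\varphi/\sqrt{6}}\,d\varphi
=\frac{3}{4}\,e^{2\varphi_i/\sqrt{6}}
\;\;\Longrightarrow\;\;
e^{-2\varphi_i/\sqrt{6}}=\frac{3}{4N},
\qquad
n_s\simeq 1+2\eta-6\epsilon\simeq 1-\frac{2}{N},
\qquad
r=16\epsilon=\frac{12}{N^2},
```

giving $n_s\simeq 0.967$ and $r\simeq 3.3\times 10^{-3}$ for $N\simeq 60$, while inserting $\Delta_{\cal R}=\frac{5}{2}\delta_H\simeq 4.8\times 10^{-5}$ into $\Delta^2_{\cal R}=\frac{|\lambda|^2N^2}{72\pi^2\chi^2}$ reproduces $\frac{\chi}{|\lambda|}=\frac{N}{\sqrt{72}\,\pi\,\Delta_{\cal R}}\simeq 5\times 10^{4}$.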
\section{Implications for the Higgs inflation in NMSSM} The SM Higgs inflation with non-minimal coupling has been recently generalized to the supersymmetric case \cite{higgsnmssm}, where two Higgs doublets are required for anomaly cancellation. In the MSSM, the Higgs quartic self-interaction comes from the gauge interactions, i.e. the D-term. However, it turns out that the Higgs inflation does not work in the MSSM\footnote{The MSSM inflation with a flat direction such as $\phi=LLe$ or $udd$ can occur at the inflection point, which requires a fine-tuning between soft mass parameters for the flat direction \cite{mazumdar}.} because the slow-roll conditions are not satisfied along the $\tan\beta$ direction for a nonzero D-term \cite{higgsnmssm}. Therefore, we need an additional quartic self-interaction for the Higgs from the F-term. The extension of the MSSM with a gauge singlet has been considered for solving the $\mu$ problem \cite{nmssm}. In the NMSSM, the same term that gives rise to the $\mu$ term in the superpotential leads to an additional quartic self-interaction for the Higgs in the scalar potential. The NMSSM extension of the Higgs inflation \cite{higgsnmssm} has been proposed with the following frame function and superpotential, \begin{eqnarray} \Omega&=&-3+H^\dagger_u H_u + H^\dagger_d H_d + X^\dagger X +\frac{3}{2}(\chi H_u H_d +{\rm h.c.}), \label{framenmssm} \\ W&=& \frac{1}{2}\lambda X H_u H_d +\frac{1}{3}\rho X^3 \end{eqnarray} where $H_u,H_d$ are the Higgs doublets, $X$ is the SM singlet and $\chi,\lambda,\rho$ are dimensionless parameters. In this case, the frame function, or the K\"ahler potential, has a particular Higgs-dependent structure due to the non-minimal coupling, so the $\mu$ term of order the SUSY breaking scale can actually be generated within supergravity \cite{gmmech}, as will be discussed later in this section.
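The role of the D-term in obstructing inflation away from the D-flat direction can be seen from the neutral-component D-term potential of the two Higgs doublets (standard MSSM normalization assumed),

```latex
V_D=\frac{g^2+g'^2}{8}\left(|H^0_u|^2-|H^0_d|^2\right)^2 ,
```

which vanishes only for $|H^0_u|=|H^0_d|$, i.e. $\tan\beta=1$; away from this direction the D-term lifts the potential steeply in $\beta$ and the slow-roll conditions fail, as found in \cite{higgsnmssm}.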
This model at $\beta=\frac{\pi}{4}$, where $\tan\beta\equiv \frac{\langle H_u\rangle}{\langle H_d\rangle}$, i.e. along the D-flat direction, is similar to our model with two singlets in Jordan frame supergravity proposed in this paper. That is, we can identify the inflaton $S$ in our toy model with the Higgs doublets satisfying the D-flat condition. In this NMSSM extension, it has been shown \cite{jsugra} that the inflationary trajectory with $X=0$ has a tachyonic instability, in the same way as analyzed in section 3. In order to solve the problem with a tachyonic mass of order the Hubble scale, we can add the same higher order term, $\Delta\Omega=-\gamma (X^\dagger X)^2$, for the $X$ singlet in the frame function (\ref{framenmssm}) as in Section 4. In the NMSSM, however, there is an additional tachyonic mass for the $X$ singlet coming from the cubic term in the superpotential \cite{jsugra}: $\Delta m^2_X\simeq -\frac{|\lambda\rho|M^2_P}{\chi}\simeq -\frac{12\chi|\rho|}{|\lambda|}H^2$. Thus, for $\Delta m^2_X\gtrsim -H^2$, from $\frac{\chi}{|\lambda|}\simeq 5\times 10^4$, we need to choose a very small cubic coupling, $|\rho|\lesssim 10^{-5}$. Therefore, if $\gamma\gtrsim 0.003$, the higher order term in the frame function gives rise to a positive squared mass for the $X$ singlet of order the Hubble scale, while the singlet cubic coupling in the superpotential, being necessarily tiny, may as well be forbidden by the PQ symmetry. In this case, since the non-minimal coupling breaks the PQ symmetry explicitly, there is no problem with a dangerous PQ axion. If we choose a larger value for the higher order term, $\gamma\gg 0.003$, then it is possible to allow for a sizable singlet cubic coupling. Now we are in a position to address the question of the fate of the trajectory $\beta=\frac{\pi}{4}$ at the end of inflation. During inflation, it has been shown \cite{jsugra} that the field $\beta$ rapidly approaches $\frac{\pi}{4}$ and stays there for $\chi (g^2+g^{'2})\gg \lambda^2$.
For a large non-minimal coupling $\chi$ and a small $\lambda$, this condition is always satisfied. However, at the end of inflation, the stability of the trajectory $\beta=\frac{\pi}{4}$ depends on whether $g^2,g^{'2}>2\lambda^2$ or not \cite{jsugra}. If the tachyonic instability is present at the end of inflation, the tachyonic preheating would lead to large fluctuations of the field $\beta$ and spontaneous symmetry breaking. If $g^2\simeq g^{'2}\sim \frac{1}{2}$ is given by the GUT-scale values at the end of inflation and the unitarity bound is satisfied for $|\lambda|\ll 1$, we get $g^2,g^{'2}\gg 2\lambda^2$, so there is no tachyonic instability of the $\beta$ field at the end of inflation. In the NMSSM, $\lambda$ can be small enough for the Hubble scale ($H\simeq \frac{\lambda M_P}{\chi}$) during inflation to be much lower than the unitarity cutoff ($\Lambda\simeq \frac{M_P}{\chi}$) without leading to a phenomenologically unacceptable light Higgs mass, unlike in the SM Higgs inflation. Therefore, it will be interesting to investigate the phenomenological consequences of a reliable Higgs inflation for the parameter space of the NMSSM. In order to compare to the low-energy data, we need to consider the running of the coupling constants. Here, however, we give a qualitative discussion, assuming that the running couplings do not differ significantly from their values during inflation. If the Higgs doublets are the inflaton, a large non-minimal coupling must be introduced in the frame function, generating an additional contribution to the $\mu$ term by a supergravity effect. That is, the effective $\mu$ term is given by the sum of the non-minimal coupling and superpotential contributions as follows, \begin{equation} \mu = \frac{3}{2}\chi m_{3/2} + \frac{1}{2}\lambda \langle X\rangle \end{equation} where $m_{3/2}=|\langle e^{K/2} W\rangle|$ is the gravitino mass.
In order for the $\mu$ term to be of the soft mass scale for electroweak symmetry breaking, we need $m_{3/2}\sim \frac{m_{\rm soft}}{\chi}$ and $\langle X\rangle\leq\frac{m_{\rm soft}}{\lambda}$. Suppose that $\lambda\sim 0.01$ and $\chi\sim 10^2$, satisfying the constraints coming from the COBE normalization (\ref{constraintonchi}) and the unitarity bound on the Hubble scale discussed below Eq.~(\ref{effint}). Then, for $m_{\rm soft}\sim 1$ TeV, we would need the gravitino mass to be $m_{3/2}\sim 10$ GeV, while the $X$ singlet VEV must satisfy $\langle X\rangle \leq 100$ TeV. In order to get such a small gravitino mass, gauge mediation must be dominant over gravity mediation. In this case, when $R$-parity is conserved, the gravitino is the LSP and can be either a non-thermal dark matter candidate with a neutralino NLSP \cite{feng} or a thermal dark matter candidate for a reheating temperature $T_R\sim 10^8\,{\rm GeV}$ \cite{steffen}. Here a comment on the Higgs physics is in order. Since the $\lambda$ coupling is so small, the tree-level contribution to the Higgs mass coming from $\lambda$ is suppressed. Furthermore, the mixing between the singlet and the neutral components of the MSSM Higgs doublets is small, so the lightest neutral Higgs in this NMSSM should be of the MSSM type. \section{Conclusion} We have reconsidered the inflationary model with a large non-minimal coupling in supergravity and have shown that, for the minimal Jordan frame function, the non-inflaton field gets a tachyonic mass of order the Hubble scale during inflation, destabilizing the slow-roll inflation. We have shown that the tachyonic mass problem can be solved by introducing a higher order correction in the frame function. The necessary correction can be obtained after the heavy fields coupled only to the non-inflaton field are integrated out.
This result sheds light on the Higgs inflation in the NMSSM, as the same tachyonic mass problem of the singlet is solved by a similar higher order correction in the frame function. Moreover, when the singlet coupling to the Higgs doublets in the NMSSM is made small, the Higgs inflation is a viable possibility within the semi-classical approximation in the effective theory, even with a large non-minimal coupling. Thus, combining the observational constraints on the inflation with the unitarity bound on the new physics, we have found some interesting consequences for the NMSSM phenomenology. First, the large non-minimal coupling generates the $\mu$ term, which is much larger than the gravitino mass. Thus, the gravitino becomes a dark matter candidate. In this case, one has to explain how the soft mass parameters of order the $\mu$ term can be much larger than the gravitino mass for electroweak symmetry breaking. When gauge mediation is dominant over gravity mediation, it is possible to have $m_{3/2}\ll m_{\rm soft}$. Second, due to the necessarily small singlet coupling to the Higgs doublets, the NMSSM Higgs looks more like the MSSM Higgs. In order to make sure of the naturalness of the Higgs inflation with a large non-minimal coupling in supergravity, one should also take into account the loop corrections to the inflaton potential due to the spontaneous SUSY breaking during inflation. We leave this important question for future work. \section*{Acknowledgments} The author thanks Jim Cline for his interest and encouragement on the work and Jose Espinosa, Massimo Giovannini and Graham Ross for comments and discussions.
\def\theequation{A.\arabic{equation}} \setcounter{equation}{0} \vskip0.8cm \noindent {\Large \bf Appendix A: The K\"ahler metric} \vskip0.4cm \noindent The K\"ahler metric $K_{i{\bar j}}=\partial_i\partial_{\bar j}K$ ($\phi^i=S,X$) for $K=-3\ln(1-k(S,X,S^\dagger,X^\dagger))$ is given by \begin{equation} K_{i{\bar j}}=\frac{3}{(1-k)^2}\left( \begin{array}{ll} (1-k)k_{S{\bar S}}+|k_S|^2 & (1-k)k_{S{\bar X}}+k_S k_{\bar X} \\ (1-k)k_{{\bar S}X}+k_{\bar S}k_X & (1-k)k_{X{\bar X}}+|k_{X}|^2 \end{array}\right). \end{equation} On the other hand, the first derivatives of the K\"ahler potential are $K_i=\frac{3k_i}{1-k}$. The inverse K\"ahler metric is given by \begin{equation} K^{i{\bar j}}=\frac{(1-k)^2}{3D}\left( \begin{array}{ll} (1-k)k_{X{\bar X}}+|k_X|^2 & -(1-k)k_{{\bar S}X}-k_{\bar S} k_{X} \\ -(1-k)k_{S{\bar X}}-k_{S}k_{\bar X} & (1-k)k_{S{\bar S}}+|k_{S}|^2 \end{array}\right) \end{equation} with \begin{equation} D\equiv [(1-k)k_{S{\bar S}}+|k_{S}|^2][(1-k)k_{X{\bar X}}+|k_X|^2] -|(1-k)k_{S{\bar X}}+k_S k_{\bar X}|^2. \end{equation} For $k=\frac{1}{3}|S|^2+\frac{1}{3}(1-\gamma |X|^2)|X|^2-\frac{1}{2}(\chi S^2+{\rm h.c.})$ used in the text, we get $k_{S{\bar X}}=k_{{\bar S}X}=0$.
In this case, the K\"ahler metric and the inverse K\"ahler metric are given as follows, \begin{equation} K_{i{\bar j}}=\frac{1}{(1-k)^2}\left( \begin{array}{ll} 1-k+\frac{1}{3}|S^\dagger-3\chi S|^2 & \,\,\,\,\frac{1}{3}X(1-2\gamma|X|^2)(S^\dagger-3\chi S) \\ \frac{1}{3}X^\dagger (1-2\gamma |X|^2)(S-3\chi^\dagger S^\dagger) & \,\,(1-k)(1-4\gamma|X|^2)+\frac{1}{3}|X|^2(1-2\gamma|X|^2)^2 \end{array}\right),\label{kahlerm} \end{equation} \begin{equation} K^{i{\bar j}}=\frac{(1-k)^2}{9D}\left( \begin{array}{ll} (1-k)(1-4\gamma|X|^2)+\frac{1}{3}|X|^2(1-2\gamma|X|^2)^2 & -\frac{1}{3}X^\dagger(1-2\gamma|X|^2)(S-3\chi^\dagger S^\dagger) \\ -\frac{1}{3}X (1-2\gamma |X|^2)(S^\dagger-3\chi S) & 1-k+\frac{1}{3}|S^\dagger-3\chi S|^2 \end{array}\right) \label{inversekahlerm} \end{equation} with \begin{eqnarray} \frac{(1-k)^2}{9D}=(1-k)\bigg[(1-4\gamma |X|^2)\Big(1-k+\frac{1}{3}|S^\dagger -3\chi S|^2\Big) +\frac{1}{3}|X|^2(1-2\gamma|X|^2)^2\bigg]^{-1}. \end{eqnarray} The first derivatives of the K\"ahler metric are given by \begin{equation} K_S=\frac{S^\dagger-3\chi S}{1-k}, \quad K_X=\frac{X^\dagger(1-2\gamma|X|^2)}{1-k}. \end{equation} \defB.\arabic{equation}{B.\arabic{equation}} \setcounter{equation}{0} \vskip0.8cm \noindent {\Large \bf Appendix B: One-loop frame function due to massive fields} \vskip0.4cm \noindent We consider the one-loop K\"ahler correction coming from heavy fields coupled to the $X$ singlet. We take a toy model providing the necessary higher order corrections to solve the tachyonic mass problem in Jordan frame supergravity. We introduce two heavy chiral superfields, $\Phi_1$ and $\Phi_2$, which have $U(1)_G$ charges, $G[\Phi_1]=+1$ and $G[\Phi_2]=-1$. We impose a $Z_2$ symmetry to forbid the unwanted coupling to the $S$ singlet as follows, \begin{equation} Z_2:\,\, \Phi_1\rightarrow -\Phi_1, \quad \Phi_2\rightarrow -\Phi_2, \quad S\rightarrow S, \quad X\rightarrow X. 
\end{equation} Then, the additional superpotential for the heavy fields is given by \begin{equation} W'=\frac{1}{2}\kappa X\Phi^2_1+M\Phi_1\Phi_2. \label{superpadd} \end{equation} We assume that the additional heavy fields have canonical kinetic terms in the Jordan frame, i.e. \begin{equation} \Omega_{\rm tree}=\Omega_0+\Phi^\dagger_1\Phi_1+\Phi^\dagger_2\Phi_2 \label{fullframe} \end{equation} where $\Omega_0$ is the frame function given in Eq.~(\ref{framefunc}). The general formula for the one-loop correction to the frame function in dimensional regularization (DR) \cite{1loopkahler} is \begin{equation} \Delta\Omega = -\frac{\Gamma(1-\frac{d}{2})}{2(4\pi)^{d/2}\mu^{d-4}}\sum_i\bigg[(1-6\xi_i)m^{d-2}_{B,i}+m^{d-2}_{F,i}-4m^{d-2}_{V,i}\bigg] \end{equation} where $d=4-\epsilon$, $\mu$ is the renormalization scale in DR, and $m_{B,i},m_{F,i},m_{V,i}$ are the masses of real scalars, Weyl fermions and gauge bosons in the Jordan frame. Here $\xi_i$ are the non-minimal couplings of real scalars from ${\cal L}=-\sqrt{-g}\frac{1}{2}\sum_i\xi_i\phi^2_i R$. We note that if the tree-level frame function does not contain a holomorphic non-minimal coupling, its expansion should give rise to $\Phi^\dagger_i\Phi_i$ as the only leading term in the K\"ahler potential, so that scalar fields have a conformal coupling in the Jordan frame \cite{1loopkahler}. In our case, since the heavy scalar fields are conformally coupled to gravity with $\xi_1=\xi_2=\frac{1}{6}$, they do not contribute to the one-loop effective frame function. So, the one-loop frame function is given by the fermionic contribution only, as follows, \begin{equation} \Delta\Omega=-\frac{1}{32\pi^2}\sum_{i=1,2}\bigg(-\frac{2}{\epsilon}\,m^2_{F,i}+m^2_{F,i}\ln\Big(\frac{m^2_{F,i}}{\mu^2}\Big)\bigg). \label{1loopframe} \end{equation} Since we are interested in the loop corrections to the $X$ singlet potential, we take $\langle \Phi_1\rangle=\langle \Phi_2\rangle=0$.
So, from the superpotential (\ref{superpadd}), $\Phi_1,\Phi_2$ have no mixing with the $X$ singlet, so they are decoupled from it. But the mass eigenvalues of the fermionic partners of $\Phi_1,\Phi_2$ depend on the VEV of the $X$ singlet and are given by \begin{eqnarray} M^2_{F,1,2}=M^2\Big(1+a\pm \sqrt{2a+a^2}\Big), \quad a\equiv \frac{|\kappa X|^2}{2M^2}.\label{fmass} \end{eqnarray} Therefore, after subtracting the divergences in DR, we obtain the renormalized one-loop frame function as \begin{eqnarray} \Delta\Omega&=&-\frac{1}{32\pi^2}\bigg[2M^2\ln\Big(\frac{M^2}{\mu^2}\Big)+\Big\{\ln\Big(\frac{M^2}{\mu^2}\Big)+2\Big\}|\kappa X|^2 +\frac{|\kappa X|^4}{6M^2}\bigg]. \end{eqnarray} The first term corresponds to the renormalization of the Newton constant and the second term to the wave function renormalization of the $X$ singlet, while the last term is the higher order interaction term.
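The quoted result is easy to check numerically. The sketch below (our own illustration, with arbitrary sample values) verifies that the mass eigenvalues in Eq.~(\ref{fmass}) multiply to the determinant $M^4$ fixed by the superpotential, and that the exact sum $\sum_i m^2_{F,i}\ln(m^2_{F,i}/\mu^2)$ reproduces the bracketed combination in $\Delta\Omega$ up to higher orders in $|\kappa X|^2/M^2$:

```python
# Numerical sanity check of Appendix B (all sample values are ours).
# Fermion mass eigenvalues: M_F^2 = M^2 (1 + a ± sqrt(2a + a^2)),
# with a = |kappa X|^2 / (2 M^2); their product must equal M^4,
# and sum_i m_i^2 ln(m_i^2/mu^2) must match the bracket in Delta Omega
# up to terms of higher order in |kappa X|^2 / M^2.
import math

M, mu, kX = 10.0, 3.0, 0.1     # M, DR scale mu, |kappa X| with kX << M

a = kX**2 / (2*M**2)
m2 = [M**2*(1 + a + s*math.sqrt(2*a + a**2)) for s in (+1, -1)]

# determinant of the squared fermion mass matrix is exactly M^4
assert math.isclose(m2[0]*m2[1], M**4)

exact = sum(m*math.log(m/mu**2) for m in m2)
bracket = (2*M**2*math.log(M**2/mu**2)
           + (math.log(M**2/mu**2) + 2)*kX**2
           + kX**4/(6*M**2))

assert math.isclose(exact, bracket, rel_tol=1e-9)
```

Note that the determinant check fails if a relative factor $\frac{1}{2}$ is inserted in front of the square root, which is how the coefficient in Eq.~(\ref{fmass}) can be pinned down.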
\section{Introduction} The time-evolution of a closed quantum system is usually described as the conjugation by a group of unitary operators on the Hilbert space representing the state space of the system. When the system is open, that is, exchanges energy with its surroundings, the situation is more complicated and a rigorous treatment usually requires approximations. The most standard approach was put on solid mathematical ground by Davies in the seventies (in \cite{Dav}, see also \cite{DF}), and leads one to describe the system's evolution by a semigroup $(\Phi_t)_{t\in \mathbb{R}_+}$ of linear maps on the set of states (\textit{i.e.} positive, normalized functionals acting on the set of operators on the Hilbert space) with specific algebraic properties (see section~\ref{section StatesQChannels}). Many features of these continuous-parameter semigroups are already contained in the case of discrete semigroups $(\Phi^n)_{n\in \mathbb{N}}$. In addition, the interest in the discrete case was renewed by quantum computation theory (where the maps $\Phi$ model quantum gates, see \cite{NieChu}) and by quantum repeated interaction systems (see \cite{BJM}). We therefore restrict ourselves to the discrete case, and focus on the study of $\Phi=\Phi^1$, a linear map which is completely positive and trace-preserving. Such a map is called a quantum channel. The study of ergodic properties of an open quantum system is therefore related to the study of invariants of $\Phi$, and of the associated spectrum. Analogies with operators associated with Markov chains (see Example \ref{example_MarkovChain}) inspired the development of a notion of irreducible quantum channel by various authors in the seventies and eighties (\hspace{1sp}\cite{AHK}, \cite{WE}, \cite{EHK}, \cite{Gro}), with different (and sometimes conflicting) definitions and implications.
A vision of irreducibility as related to an intuitive notion of trajectories (as for Markov chains), however, was not developed explicitly before the work of Baumgartner and Narnhofer in \cite{BN}, where it is done in the case of a finite-dimensional Hilbert space. This vision allows one to describe the decomposition of a reducible quantum channel into a sum of irreducible ones. In addition, a fine study of these decompositions leads to a description of the full structure of invariant states of a general quantum channel. In \cite{CP1}, we studied open quantum random walks, a special class of evolutions of the above type. This led us to restate and extend the results of \cite{BN} to the case of open quantum random walks, which required in particular an extension to the infinite-dimensional case. Our proofs, however, apply to a wider class of evolutions than just quantum random walks. We therefore describe our results in full generality here. The structure of this article is as follows. In section \ref{section StatesQChannels}, we describe our framework and in particular the evolutions $\Phi$ of interest, the so-called quantum channels. In section \ref{section_irreducibility}, we recall the different notions of irreducibility. In section~\ref{section_enclosures}, we define enclosures, our key tool, which originate in \cite{BN}. In section~\ref{section_invariantstates}, we describe the relation between enclosures and supports of invariant states. In section \ref{section_decomposition}, we discuss the structure of invariant states of a simple reducible evolution. In section \ref{section_irreducibledecompositions}, we state our general decomposition theorem, which describes irreducible decompositions of evolutions and the general structure of the set of invariant states. In section \ref{section_examples}, we apply these results to a number of examples.
\paragraph{Acknowledgements.} RC gratefully acknowledges the support of PRIN project 2010MXMAJR and GNAMPA project ``Semigruppi markoviani su algebre non commutative'', and YP the support of ANR project n${}^\circ$ANR-14-CE25-0003. YP wishes to thank Julien Deschamps for enlightening discussions. \section{States and Quantum Channels} \label{section StatesQChannels} In this section we give a short summary of the theory of quantum channels, \textit{i.e.} completely positive, trace-preserving maps on the ideal of trace-class operators. We fix a separable Hilbert space $\mathcal H$, which is supposed to play the role of a state space for a quantum system. We denote by $\mathcal I_1({\mathcal H})$ the set of trace-class operators on ${\mathcal H}$ (see \cite{RS1}), and equip it with the topology induced by the trace norm. We recall that the topological dual $\mathcal I_1(\mathcal H)^*$ can be identified with the algebra $\mathcal B(\mathcal H)$ of bounded linear operators through the Schatten duality $(\rho,X)\mapsto \mathrm{Tr}(\rho\, X)$. Therefore, the topology of $\mathcal I_1({\mathcal H})$ is the same as the weak topology induced by~$\mathcal B(\mathcal H)$. We also recall that an operator $X$ on $\mathcal H$ is called nonnegative (respectively positive or positive definite), denoted $X\geq0$ (resp. $X>0$), if for $\varphi\in \mathcal H\setminus\{0\}$, one has $\langle \varphi, X\, \varphi\rangle \geq 0$ (resp. $\langle\varphi, X\, \varphi\rangle>0$). The states of a system will be represented by an operator belonging to a specific class: \begin{defi} \label{defi_state} An operator $\rho$ is called a state if it is self-adjoint (\textit{i.e.} $\rho=\rho^*$), nonnegative, and trace-class with trace one. We denote by ${\mathcal S}({\mathcal H})$ the set of states on ${\mathcal H}$. A state is called faithful if it is positive definite.
\end{defi} \begin{remark} \label{remark_normalornot} In the literature, a state is sometimes defined as a positive linear form on $\mathcal B(\mathcal H)$ mapping $\mathrm{Id}$ to $1$, \textit{i.e.} as an element of the set \[\mathcal B({\mathcal H})^*_{+,1}= \{ \eta\in B({\mathcal H})^* \ \mbox{s.t.}\ \eta(X)\geq 0 \ \mbox{for}\ X\geq 0 \ \mbox{and}\ \eta(\mathrm{Id})=1\}\] equipped with the weak-* topology. The objects defined in Definition \ref{defi_state} are then called normal states. Obviously $\mathcal S({\mathcal H})$ is homeomorphic to a subset of~$\mathcal B({\mathcal H})^*_{+,1}$. \end{remark} Consider now a linear map $\Phi$ on $\mathcal I_1({\mathcal H})$. We say that this map is positive if it maps nonnegative elements of $\mathcal I_1({\mathcal H})$ to nonnegative elements of $\mathcal I_1({\mathcal H})$. We say that it is $n$-positive, for $n\in \mathbb{N}$, if the map $\Phi\otimes \mathrm{Id}_{\mathcal M_n(\mathbb{C})}$ is positive as a map on $\mathcal I_1({\mathcal H}\otimes \mathbb{C}^n)$; and completely positive if it is $n$-positive for any $n$ in $\mathbb{N}$. We say that it is trace-preserving if, for any $\rho\in \mathcal I_1({\mathcal H})$, one has $\mathrm{Tr}(\Phi(\rho))=\mathrm{Tr}(\rho)$; in particular a positive trace-preserving map induces a map on ${\mathcal S}({\mathcal H})$. Our main objects of interest will be maps that are completely positive and trace-preserving: \begin{defi} A completely positive, trace-preserving map on a space $\mathcal I_1({\mathcal H})$ is called a quantum channel on ${\mathcal H}$. \end{defi} \begin{remark} A positive linear map on $\mathcal I_1({\mathcal H})$ is automatically bounded (see Lemma 2.2 in \cite{Sch}), so that it is weak-continuous. 
\end{remark} The following theorem states a well-known fact about quantum channels (see \cite{Kraus}, \cite{NieChu}): \begin{theo} A linear map $\Phi$ on $\mathcal I_1({\mathcal H})$ is completely positive if and only if there exists a family $(V_i)_{i\in I}$ of operators on ${\mathcal H}$ such that for any $\rho$ in $\mathcal I_1({\mathcal H})$, \begin{equation}\label{eq_KrausForm} \Phi(\rho)=\sum_{i\in I} V_i \rho V_i^*. \end{equation} If in addition $\Phi$ is trace-preserving, then the operators $V_i$ satisfy the relation \[\sum_{i\in I} V_i^* V_i = \mathrm{Id}_{\mathcal H}.\] \end{theo} The decomposition \eqref{eq_KrausForm} is called a Kraus form of $\Phi$, and the family $(V_i)_{i\in I}$ an unravelling. Note that an unravelling of $\Phi$ is not unique (see \cite{NieChu} for more details). We have mentioned that a source of inspiration is the analogy between quantum channels and Markov chains. In the following example we point out that Markov chains are a special case of quantum channels. Note that, for any two vectors~$x$ and $y$ in a Hilbert space ${\mathcal H}$ with scalar product $\braket{\cdot}{\cdot}$ (which we assume is antilinear in the left variable), we denote by~$\ketbra xy$ the map $z \mapsto \braket y z\ x$. \begin{example} \label{example_MarkovChain} Consider a Markov chain $(X_n)_n$ on a countable set $E$ with transitions $p_{i,j}=\mathbb{P}(X_{n+1}=i\,|\,X_n=j)$. If we let ${\mathcal H}$ be $\ell^2(E)$, the set of (complex valued) square-summable sequences indexed by $E$, denote by $(e_i)_{i\in E}$ the canonical orthonormal basis, and consider $V_{i,j} =\sqrt{p_{i,j}\vphantom{1}}\, \ketbra{e_i}{e_j}$ for $i,j$ in~$E$, then \eqref{eq_KrausForm} defines a quantum channel, and any invariant state of $\Phi$ is of the form $\rho=\sum_{i\in E} \pi_i \ketbra{e_i}{e_i}$ with $(\pi_i)_{i\in E}$ an invariant probability measure for the Markov chain. 
\end{example} \begin{remark}\label{remark_TPnormone} Trace-preservation of a map $\Phi$ is equivalent to $\Phi^*(\mathrm{Id})=\mathrm{Id}$. The adjoint $\Phi^*$ is then a positive, unital (i.e. $\Phi^*(\mathrm{Id})=\mathrm{Id}$) map on $\mathcal B (\mathcal H)$, and by the Russo-Dye theorem (\hspace{1sp}\cite{RD}) one has $\|\Phi^*\|=\|\Phi^*(\mathrm{Id})\|$ so that $\|\Phi\|=\|\Phi^*\|=1$. \end{remark} A quantum channel represents the (discrete) dynamics of an open quantum system in the Schr\"odinger picture (see \cite{NieChu} for more details). We denote by~$\mathcal F(\Phi)$ the subset of $\mathcal I_1({\mathcal H})$ of invariant elements of $\Phi$ and we will be specifically interested in the set ${\mathcal S}({\mathcal H})\cap \mathcal F(\Phi)$ of invariant states, \textit{i.e.} elements of ${\mathcal S}({\mathcal H})$ that are invariant by $\Phi$. For $\rho$ a state we will consider its support, which is defined as the range of the projection $\mathrm{Id}-P_0(\rho)$, where \[P_0(\rho)=\sup\{P\ \mathrm{orthogonal\ projection\ s.t.}\, \mathrm{Tr}(\rho\, P)=0\}.\] The supremum taken above is considered with respect to the order induced by the relation $\geq$ for operators, and always exists in the present situation. Following \cite{FV}, we denote: \[ \mathcal R = \mathrm{sup}\{\mathrm{supp}\,\rho\, |\, \rho \mbox{ an invariant state}\} \] so that by definition, $\mathrm{supp}\, \rho\subset \mathcal R$ for any invariant state $\rho$. This space is often called the fast recurrent space, in parallel with the classical case, where the fast recurrent configurations are the ones which support the invariant probability laws.
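Example \ref{example_MarkovChain} can be made concrete with a short numerical sketch (our own illustration, using numpy and a hypothetical two-state chain), building the channel from its unravelling and checking trace preservation and the correspondence between invariant probability measures and invariant states:

```python
# The quantum channel of a 2-state Markov chain with
# p_{i,j} = P(X_{n+1} = i | X_n = j), via the Kraus operators
# V_{i,j} = sqrt(p_{i,j}) |e_i><e_j|.  Sample values are ours.
import numpy as np

P = np.array([[0.7, 0.4],
              [0.3, 0.6]])   # each column j is a probability vector

def e(i, n=2):
    v = np.zeros((n, 1))
    v[i] = 1.0
    return v

kraus = [np.sqrt(P[i, j]) * e(i) @ e(j).T
         for i in range(2) for j in range(2)]

def Phi(rho):
    return sum(V @ rho @ V.conj().T for V in kraus)

# trace preservation: sum_i V_i^* V_i = Id
assert np.allclose(sum(V.conj().T @ V for V in kraus), np.eye(2))

# an invariant probability of the chain (P pi = pi) gives an invariant state
pi = np.array([4/7, 3/7])    # solves P pi = pi for this P
rho = np.diag(pi)
assert np.allclose(Phi(rho), rho)
```

Since every output of this channel is diagonal in the canonical basis, its invariant states are exactly the diagonal states built from invariant probability measures, as claimed in the example.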
The orthogonal of $\mathcal R$ is \[\mathcal D = \{x \in \mathcal H \, |\, \langle x,\rho\,x\rangle =0 \ \mbox{ for any invariant state }\rho \}.\] \begin{remark} In \cite{BN}, the spaces $\mathcal R$ and $\mathcal D$ are defined without reference to the set of invariant states, as \[\mathcal D = \{x\in \mathcal H\, |\,\langle x,\Phi^n(\rho)\, x\rangle\underset{n\to\infty}{\longrightarrow}0 \mbox{ for any state }\rho\}\] and $\mathcal R=\mathcal D^\perp$. These different definitions of $\mathcal R$ and $\mathcal D$ are equivalent in finite dimension. \end{remark} \begin{remark} The space $\mathcal D$ is the sum of the transient and slow recurrent subspaces, as defined in \cite{Um}. \end{remark} \section{Irreducibility}\label{section_irreducibility} Before we discuss decompositions of quantum channels, we need to discuss the relevant components of such decompositions, \textit{i.e.} irreducible quantum channels. As we will see in Proposition \ref{prop_Schrader} and Remark \ref{remark_remarksirreducibility}, irreducibility is strongly connected with the uniqueness of the invariant state. As we already mentioned in the introduction, however, different definitions of irreducibility of quantum channels can be found in the literature. We briefly recall them here. First we need to define some relevant concepts: \begin{defi}\label{def-irreducibility} Let $\Phi$ be a quantum channel on $\mathcal I_1({\mathcal H})$. We say that an orthogonal projection $P$: \begin{itemize} \item reduces $\Phi$ if we have $\Phi\big(P\mathcal I_1(\mathcal H)P\big) \subset P\mathcal I_1(\mathcal H)P$, \item is subharmonic for $\Phi^*$ if $\Phi^*(P)\geq P$. \end{itemize} \end{defi} The complete proof of the following proposition is given in \cite{CP1}: \begin{prop} \label{prop_defirreducibility} Let $\Phi$ be a quantum channel on $\mathcal I_1({\mathcal H})$.
The following properties are equivalent: \begin{itemize} \item $\Phi$ is Davies-irreducible: the only orthogonal projections reducing $\Phi$ are $P=0$ and $\mathrm{Id}$; \item the only orthogonal projections that are subharmonic for $\Phi^*$ are $P=0$ and~$\mathrm{Id}$; \item ergodicity: for any state $\rho$, the operator $(\exp t\Phi) (\rho)$ is positive definite for any $t>0$. \end{itemize} \end{prop} We say that $\Phi$ is irreducible if and only if any of the properties in Proposition~\ref{prop_defirreducibility} holds. \begin{remark} \label{remark_remarksirreducibility} Regarding the above concepts and their interrelations: \begin{enumerate} \item the equivalence between the first two properties follows from the simple observation that an orthogonal projection reduces $\Phi$ if and only if it is subharmonic for $\Phi^*$ (see \cite[Proposition 3.3]{CP1}); \item the definition of ergodicity given here originates in \cite{Sch}, and extends the definition given in \cite{EHK} to infinite-dimensional ${\mathcal H}$; \item there exists yet another notion of irreducibility: one says that $\Phi$ is Evans-irreducible if the only orthogonal projections that are harmonic for $\Phi^*$, \textit{i.e.} such that $\Phi^*(P)=P$, are $P=0$ and $\mathrm{Id}$. Clearly Davies-irreducibility implies Evans-irreducibility, but the converse is not true in general. \end{enumerate} \end{remark} In the same fashion as for Markov semigroups, there exists a Perron-Frobenius theorem related to the property of irreducibility. We state it in the next proposition, in a form essentially due to Schrader in \cite{Sch}: \begin{prop}\label{prop_Schrader} Let $\Phi$ be a quantum channel on $\mathcal I_1(\mathcal H)$, and assume it has an eigenvalue $\lambda$ of modulus $1$, with eigenvector $\rho$. Then: \begin{itemize} \item $1$ is also an eigenvalue, with eigenvector $|\rho|=(\rho^*\rho)^{1/2}$, \item if $\Phi$ is irreducible, then $\lambda$ is a simple eigenvalue and $|\rho|>0$.
\end{itemize} \end{prop} \begin{remark} \label{remark_PFHeisenberg} Proposition \ref{prop_Schrader} still holds if $\Phi$ is not completely positive and trace-preserving, but simply 2-positive and trace-preserving. For this reason, the same statement holds when the map $\Phi$ on $\mathcal I_1(\mathcal H)$ is replaced with the map $\Phi^*$ on $\mathcal B(\mathcal H)$, and all subsequent results about quantum channels will hold for 2-positive and trace-preserving maps on $\mathcal I_1({\mathcal H})$, as long as they do not involve the Kraus form or unravelling of $\Phi$. \end{remark} An immediate consequence of this proposition is that an irreducible quantum channel on $\mathcal I_1({\mathcal H})$ has at most one invariant state. In sections~\ref{section_decomposition} and \ref{section_irreducibledecompositions} we will study the relations between the invariant states of a reducible quantum channel and the invariant states of its irreducible components. \section{Enclosures and communicating classes} \label{section_enclosures} For Markov chains, it is well-known that irreducibility is related with the notion of communication within the induced graph. In addition, communicating classes have an explicit description as orbits of points, and are the relevant objects to break down a reducible Markov chain into irreducible ones. In this section we introduce the notion of enclosure, which will parallel the notion of closed set for Markov chains, and allow us to study irreducible decompositions of quantum channels. \begin{defi} \label{defi_enclosures} Let $\Phi$ be a quantum channel. A closed subspace ${\mathcal V}$ is an enclosure for $\Phi$ if, for any state $\rho$, $\mathrm{supp}\, \rho \subset {\mathcal V}$ implies $\mathrm{supp}\,\Phi(\rho)\subset{\mathcal V}$. \end{defi} We will call nontrivial any enclosure which is neither $\{0\}$ nor ${\mathcal H}$. Clearly, a subspace ${\mathcal V}$ is an enclosure if and only if it is the range of a reducing orthogonal projector.
Therefore, a quantum channel $\Phi$ is irreducible if and only if it has no nontrivial enclosures. This shows that enclosures are relevant to the notion of irreducibility. \smallskip We now prove a simpler characterization of enclosures: \begin{lemme} \label{lemme_enclosure} A vector subspace ${\mathcal V}$ of ${\mathcal H}$ is an enclosure if and only if, for any~$x$ in ${\mathcal V}$ with $\|x\|=1$, the state $\Phi(\ketbra xx)$ has support in ${\mathcal V}$. \end{lemme} \noindent{\bf Proof: } Let $\rho$ be a state with support in ${\mathcal V}$. The spectral decomposition of $\rho$ is of the form $\sum_{i\in I} \lambda_i \ketbra{e_i}{e_i}$ with $\lambda_i>0$, $\sum_{i\in I} \lambda_i=1$ and $e_i\in {\mathcal V}$. Therefore, $\mathrm{supp} \, \Phi(\ketbra{e_i}{e_i})\subset \mathrm{supp}\,\Phi(\rho)$, which shows the direct implication; in addition, the support of $\Phi(\rho)$ is the supremum of the projectors on the ranges of $\Phi(\ketbra{e_i}{e_i})$ and this shows the converse. $\Box$ This has the following useful corollary. Note that, for $({\mathcal V}_i)_{i\in I}$ a family of closed subspaces of ${\mathcal H}$, we denote by \textit{e.g.} ${\mathcal V}_1+{\mathcal V}_2+\ldots$ or $\sum_{i\in I} {\mathcal V}_i$ the closed vector space generated by $\bigcup_{i\in I}{\mathcal V}_i$. \begin{coro} \label{coro_sumsenclosures} Let ${\mathcal V}_1$ and ${\mathcal V}_2$ be two enclosures. The closed subspace ${\mathcal V}_1+{\mathcal V}_2$ is also an enclosure. \end{coro} \noindent{\bf Proof: } By a direct computation, $\ketbra {x_1+x_2}{x_1+x_2}\leq 2\,\ketbra{x_1}{x_1}+2\,\ketbra{x_2}{x_2}$ for $x_1,x_2$ in ${\mathcal V}_1,{\mathcal V}_2$ respectively. Applying Lemma \ref{lemme_enclosure} shows that ${\mathcal V}_1+{\mathcal V}_2$ is an enclosure. $\Box$ \smallskip This allows us to obtain an explicit characterization of enclosures in terms of unravellings of $\Phi$, and connect them to a notion of orbit under the action of possible transitions of $\Phi$. 
\begin{prop}\label{prop_enclosures} Consider a quantum channel $\Phi$ with unravelling $(V_i)_{i\in I}$. A subspace ${\mathcal V}$ of ${\mathcal H}$ is an enclosure if and only if $V_i\, {\mathcal V} \subset {\mathcal V}$ for any $i$. \end{prop} \noindent{\bf Proof: } The proposition follows from Lemma \ref{lemme_enclosure} and the fact that, by the trace norm continuity of $\Phi$, one has for any $x \in{\mathcal V}$, \begin{equation}\label{eq_pippo} \Phi(\ketbra xx) = \sum_{i \in \, I} \ketbra{V_ix}{V_ix}. \quad\Box \end{equation} Our goal is to consider enclosures defined as the set of points accessible from a given initial $x\in {\mathcal H}$. Proposition \ref{prop_enclosures} suggests a natural definition. \begin{prop}\label{prop_enclosures2} Let $\Phi$ be a quantum channel on $\mathcal I_1({\mathcal H})$. Let $(V_i)_{i\in I}$ be an unravelling of $\Phi$. For $x$ in $\mathcal H\setminus\{0\}$, we call enclosure generated by $x$ the closed vector space \begin{equation} \label{eq_EnclosureunRavelling} \mathrm{Enc}(x)=\mathbb{C} x\,+\,\overline{\mbox{\rm span} \{V_{i_1}\cdots V_{i_n}\, x \,|\, n\in \mathbb{N}^*,\,i_1,\ldots, i_n\in I \}}. \end{equation} With this definition, the space $\mathrm{Enc}(x)$ is the smallest enclosure containing $x$. \end{prop} \noindent{\bf Proof: } It follows from \eqref{eq_pippo} that definition \eqref{eq_EnclosureunRavelling} also satisfies \begin{equation} \label{eq_EnclosureEqDefinition} \mathrm{Enc}(x)=\overline{\mathrm{span}\{\mathrm{supp}\, \Phi^n(\ketbra xx),\, n\ge 0\}}. \end{equation} This shows that definition \eqref{eq_EnclosureunRavelling} is independent of the choice of unravelling. The fact that $\mathrm{Enc}(x)$ is an enclosure then follows from Proposition \ref{prop_enclosures}. $\Box$ \begin{remark} This implies in particular that a quantum channel $\Phi$ is irreducible if and only if $\mathcal H=\mathrm{Enc}(x)$ for any $x$ in $\mathcal H\setminus\{0\}$.
\end{remark} We can define a notion of accessibility among vectors in ${\mathcal H}$, related to the notion of enclosure, and consider an equivalence relation. We will argue, however, that this will not immediately provide us with an interesting decomposition of a quantum channel. \begin{defi} For $x$, $y$ in $\mathcal H$, we say that: \begin{itemize} \item $y$ is accessible from $x$ (and denote it by $x {\rightarrow} y$) if $y\in\mathrm{Enc}(x)$; \item $y$ and $x$ communicate (and denote it by $x {\leftrightarrow} y$) if~$\mathrm{Enc}(x)~=~\mathrm{Enc}(y)$. \end{itemize} \end{defi} One can immediately observe that accessibility is a transitive relation, and communication is an equivalence relation. We denote by ${\mathcal C}(x)$ the equivalence class of a vector $x$ in $\mathcal H$ for the relation ${\leftrightarrow}$, $$ {\mathcal C}(x) = \{ y\in \mathrm{Enc}(x)\ \mathrm{ s.t. }\ x\in\mathrm{Enc}(y)\}. $$ An equivalence class of a vector $x $ by ${\leftrightarrow}$ is a subset of $\mathrm{Enc} (x)$ but it is not a vector space since, for $x\neq 0$, ${\mathcal C}(x)$ cannot contain $0$. Even adding the point $0$ may fail to make ${\mathcal C}(x)$ a vector space, as the next example shows. \begin{example} \label{example_23} Take ${\mathcal H}=\mathbb{C}^2$ and denote by $e_1,e_2$ its canonical basis. Consider a quantum channel $\Phi$ on $\mathcal I_1({\mathcal H})$ with unravelling $(V_1,V_2)$ given by $ V_1=\sqrt p \left(\!\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}\!\right)$ and $ V_2=\left(\!\begin{smallmatrix} 1 & 0 \\ 0 & \sqrt{1-p} \end{smallmatrix}\!\right)$ for some $p\in(0,1)$ so that, for $\rho=\left(\!\begin{smallmatrix} \rho_{1,1} & \rho_{1,2} \\ \rho_{2,1} & \rho_{2,2}\end{smallmatrix}\!\right)$ in $\mathcal I_1({\mathcal H})$, we have $$ \Phi(\rho)= \begin{pmatrix} p\rho_{2,2}+\rho_{1,1} & \sqrt{1-p}\; \rho_{1,2} \\ \sqrt{1-p}\; \rho_{2,1} & (1-p)\rho_{2,2} \end{pmatrix}. 
$$ By an immediate direct computation, the state $\ketbra{e_1}{e_1}$ is the only invariant state of this map. We want to describe the equivalence classes and the enclosures of the map $\Phi$. We notice that, for any vector $u={}^t(u_1,u_2)$ in $\mathbb{C}^2$, $$ \ketbra uu= \begin{pmatrix} |u_1|^2 & u_1 \bar u_2 \\ \bar u_1 u_2 & |u_2|^2 \end{pmatrix} \ \mbox{so that}\ \Phi(\ketbra uu)= \begin{pmatrix} p|u_2|^2+|u_1|^2& \sqrt{1-p}\; u_1 \bar u_2 \\ \sqrt{1-p}\; \bar u_1 u_2 & (1-p)|u_2|^2 \end{pmatrix}. $$ It is immediate that $\Phi(\ketbra uu)$ is a positive definite matrix whenever $u_2\neq 0$, so that \begin{itemize} \item $\mathrm {supp} \,\Phi^n(\ketbra {e_1}{e_1})=\mathbb{C}\,e_1$ for all $n\ge 0$, \item for $u_2\neq 0$, $\mathrm {supp} \,\Phi^n(\ketbra uu)=\mathbb{C}^2$ for all $n\ge 1$. \end{itemize} Identity \eqref{eq_EnclosureEqDefinition} allows us to determine all the enclosures and equivalence classes: \begin{itemize} \item $\mathrm{Enc}(0)={\mathcal C}(0)=\{0\}$, \item $\mathrm{Enc}(e_1)=\mathbb{C}\,e_1$ and ${\mathcal C}(e_1)=\mathrm{Enc}(e_1)\setminus\{0\}$, \item for all $u\in\mathbb{C}^2\setminus \mathbb{C}\,e_1$, $\mathrm{Enc}(u)=\mathbb{C}^2$ and ${\mathcal C}(u)=\mathbb{C}^2\setminus \mathbb{C}\, e_1$. \end{itemize} \end{example} \smallskip Supports of invariant states, on the other hand, are always vector spaces. Therefore, the naive approach of considering the partition of ${\mathcal H}$ induced by the relation ${\leftrightarrow}$ to obtain a relevant decomposition of a quantum channel into irreducible such maps fails, as it does not take the vector space structure into account. A natural idea, derived from the study of Markov chains, is to consider specifically minimal objects. We therefore give the following definition of a minimal enclosure: \begin{defi} Let ${\mathcal V}$ be an enclosure.
We say that ${\mathcal V}$ is a minimal enclosure if any enclosure ${\mathcal V}'$ satisfying ${\mathcal V}'\subset {\mathcal V}$ is either $\{0\}$ or ${\mathcal V}$. We say that ${\mathcal V}$ is a minimal nontrivial enclosure if in addition ${\mathcal V}\neq\{0\}$. \end{defi} The following easy proposition shows that this notion is indeed relevant: \begin{prop} ${\mathcal C}(x)=\mathrm{Enc}(x)\setminus \{0\}$ if and only if $\mathrm{Enc}(x)$ is a minimal nontrivial enclosure. \end{prop} \noindent{\bf Proof: } If ${\mathcal C}(x)=\mathrm{Enc}(x)\setminus \{0\}$, then, for all $y$ in $\mathrm{Enc}(x)\setminus \{0\}$, we have $\mathrm{Enc}(x)=\mathrm{Enc}(y)$ and consequently $\mathrm{Enc}(x)$ is minimal. Conversely, if ${\mathcal V}=\mathrm{Enc}(x)$ is a minimal enclosure, for any $y$ in~${\mathcal V}\setminus\{0\}$, $\mathrm{Enc}(y)$ is a nontrivial enclosure contained in ${\mathcal V}$ so that $\mathrm{Enc}(y)={\mathcal V}$. Therefore $x{\leftrightarrow} y$ and ${\mathcal V}={\mathcal C}(x)$. $\Box$ \section{Enclosures and invariant states} \label{section_invariantstates} Baumgartner and Narnhofer (in \cite{BN}) studied a decomposition of a quantum channel related to the supports of extremal invariant states, in the case of a finite dimensional space ${\mathcal H}$. In the present paper, we extend this analysis to the infinite dimensional case. For this we will need to relate extremal invariant states to minimal enclosures. We will see that the form of invariant states for the quantum channel is dictated by the uniqueness or non-uniqueness of the decompositions into minimal enclosures and that this is related to the existence of mutually non-orthogonal minimal enclosures. The first result is: \begin{prop}\label{prop_SuppInvSt} Let $\Phi$ be a quantum channel on ${\mathcal H}$. \begin{enumerate} \item The support of an invariant state is an enclosure. \item The fast recurrent subspace $\mathcal R$ is an enclosure. 
\end{enumerate} \end{prop} \noindent{\bf Proof: } To prove the first point, fix an invariant state $\rho_0$, and let $\rho$ be another state with support contained in $\mathrm{supp}\,\rho_0$. Fix an orthonormal family of eigenvectors for $\rho_0$ generating $\mathrm{supp}\,\rho_0$, and let $X_0$ be the set of finite linear combinations of these vectors. This set $X_0$ is dense in $\mathrm{supp}\,\rho_0$ and for every $x$ in $X_0$ there exists $\lambda$ such that $\ketbra xx \leq \lambda \rho_0$. Therefore there exists an approximation of~$\rho$ in the $\mathcal I_1(\mathcal H)$ norm sense by an increasing sequence of finite-rank operators $(\rho_p)_p$ such that for every $p$ there exists a $\lambda_p$ with $\rho_p\leq \lambda_p\rho_0$, so that $\Phi(\rho_p)\leq \lambda_p \Phi(\rho_0)$ and therefore $\mathrm{supp}\,{\Phi}(\rho_p)\subset\mathrm{supp}\,\rho_0$. The sequence $\Phi(\rho_p)$ is increasing and weakly convergent to $\Phi(\rho)$ so that $\mathrm{supp}\,\Phi(\rho)\subset\mathrm{supp}\, \rho_0$, which proves that $\mathrm{supp}\, \rho_0$ is an enclosure. To prove the second point, associate with every invariant state $\rho$ the orthogonal projector $P_\rho$ on its support. Then the orthogonal projector $P$ on $\mathcal R$ is the supremum of the family $(P_\rho)_{\rho}$. By the first point, every $P_\rho$ is subharmonic, \textit{i.e.} $\Phi^*(P_\rho)\ge P_\rho$ for any invariant state $\rho$. Moreover, $\Phi^*(P)\ge \Phi^*(P_\rho)\ge P_\rho$ for any invariant $\rho$, so that $\Phi^*(P)\ge P$ and the conclusion follows. $\Box$ \begin{remark} \label{remark_Umanita} The first point of the previous proposition has already been proven in \cite{FR} and \cite{Um} in the dual setting, {\it i.e.} considering reducing projections for $\Phi^*$. If ${\mathcal H}$ is separable, the second point can also be derived from a result from \cite{Um} which proves that there exists an invariant state with support equal to $\mathcal R$.
\end{remark} \begin{remark} The converse of point 1 of Proposition \ref{prop_SuppInvSt} is not true. Consider Example \ref{example_MarkovChain} associated with the symmetric random walk on $\mathbb{Z}$. Then ${\mathcal H}=\ell^2(\mathbb{Z})$ is an enclosure but the quantum channel $\Phi$ has no invariant state. \end{remark} \begin{prop}\label{prop_coherence} Let $\mathcal V$ be an enclosure, $\mathcal W$ be a subspace of $\mathcal H$ which is in direct sum with $\mathcal V$, and $P_{\mathcal V}$ and $P_{\mathcal W}$ be the respective orthogonal projections. Consider a state $\rho$ with support in $\mathcal V \oplus \mathcal W$ and denote \[\rho_{\mathcal V}= P_{\mathcal V}\, \rho \,P_{\mathcal V},\quad \rho_{\mathcal W}= P_{\mathcal W}\, \rho \,P_{\mathcal W}, \quad \rho_{\mathcal C}= P_{\mathcal V}\, \rho\, P_{\mathcal W}, \quad \rho_{\mathcal C}'=P_{\mathcal W}\,\rho \,P_{\mathcal V}; \] similarly, decompose $\Phi(\rho)$ into $\Phi(\rho)_{\mathcal V}+\Phi(\rho)_{\mathcal W}+\Phi(\rho)_{\mathcal C}+\Phi(\rho)_{\mathcal C}'$. Then \begin{enumerate} \item $P_{\mathcal W}\,( \Phi (\rho_{\mathcal C})+ \Phi (\rho_{\mathcal C}'))\, P_{\mathcal W}=0$; \item if $\mathcal Z$ is another enclosure with ${\mathcal V}\subset \mathcal Z\subset \mathcal R$, then $\mathcal Z \cap \mathcal V^\perp$ is an enclosure; \item if $\mathcal W$ is also an enclosure, then \[ \Phi (\rho)_{\mathcal V} =\Phi(\rho_{\mathcal V}), \quad \Phi (\rho)_{\mathcal W} =\Phi(\rho_{\mathcal W}), \quad \Phi (\rho)_{\mathcal C} =\Phi(\rho_{\mathcal C}), \quad \Phi (\rho)_{\mathcal C}' =\Phi(\rho_{\mathcal C}').
\] \end{enumerate} \end{prop} \noindent{\bf Proof: } \begin{enumerate} \item Let $\kappa_{\pm \varepsilon}=\frac1\varepsilon \,\rho_{\mathcal V}\pm \big(\rho_{\mathcal C}+\rho_{\mathcal C}'\big)+\varepsilon \, \rho_{\mathcal W}.$ We have $\kappa_{\pm\varepsilon}\geq 0$ (as can be checked from $\langle u, \kappa_{\pm\varepsilon}\, u \rangle = \langle u_{\pm\varepsilon}, \rho\, u_{\pm\varepsilon} \rangle$, where $ u_{\pm\varepsilon}=\frac1{\sqrt \varepsilon}\,P_{\mathcal V}u \pm \sqrt \varepsilon\, P_{\mathcal W}u$), so that $\Phi(\kappa_{\pm\varepsilon})\geq 0$, and, because $\mathcal V$ is an enclosure, the support of $\Phi(\rho_{\mathcal V})$ is contained in $\mathcal V$, so that \[ P_{\mathcal W}\,\Phi(\kappa_{\pm \varepsilon})\, P_{\mathcal W} = \pm P_{\mathcal W}\big(\Phi(\rho_{\mathcal C})+\Phi(\rho_{\mathcal C}')\big)P_{\mathcal W}+ \varepsilon \,P_{\mathcal W}\, \Phi (\rho_{\mathcal W}) \, P_{\mathcal W}\geq 0,\] and letting $\varepsilon$ tend to $0$ forces $ P_{\mathcal W}\,(\Phi(\rho_{\mathcal C})+\Phi(\rho_{\mathcal C}'))\,P_{\mathcal W}=0$. \item Consider $\mathcal W=\mathcal Z\cap\mathcal V^\perp$ and let $\rho$ be any invariant state with support in $\mathcal Z$; then \[ \rho_{\mathcal V}+\rho_{\mathcal W}+\rho_{\mathcal C} +\rho_{\mathcal C}' = \Phi(\rho_{\mathcal V}) + \Phi(\rho_{\mathcal W})+\Phi(\rho_{\mathcal C})+\Phi(\rho_{\mathcal C}'). \] Projecting with $P_{\mathcal W}$ on both sides, this yields $\rho_{\mathcal W}= P_{\mathcal W} \Phi(\rho_{\mathcal W})P_{\mathcal W} $, so that $P_{\mathcal V}\,\Phi(\rho_{\mathcal W})\, P_{\mathcal V}$ is positive with zero trace. Therefore $P_{\mathcal V}\,\Phi(\rho_{\mathcal W})\, P_{\mathcal V}=0$, which implies $P_{\mathcal V}\,\Phi(\rho_{\mathcal W})=\Phi(\rho_{\mathcal W})\, P_{\mathcal V}=0$ and so $\rho_{\mathcal W}=\Phi(\rho_{\mathcal W})$. As the support of a stationary state, $\mathrm{supp}\,\rho_{\mathcal W}= \mathrm{supp}\,\rho\cap\mathcal Z\cap\mathcal V^\perp$ is an enclosure.
By point 2 of Proposition \ref{prop_SuppInvSt}, taking the supremum over all possible invariant states~$\rho$ we deduce that $\mathcal Z \cap \mathcal V^\perp$ is also an enclosure. \item If $\mathcal V$ and $\mathcal W$ are enclosures, then $\mathrm{supp}\,\Phi(\rho_{\mathcal V})\subset \mathcal V$ and $\mathrm{supp}\,\Phi(\rho_{\mathcal W})~\subset~\mathcal W$. The conclusion follows from the previous points. $\Box$ \end{enumerate} We will now discuss the connection between minimal enclosures and extremal invariant states, \textit{i.e.} states $\rho$ such that $ \rho = t\,\rho_1 + (1-t)\, \rho_2$, with $\rho_1$, $\rho_2$ in $\mathcal S({\mathcal H}) \cap \mathcal F(\Phi)$ and $t\in(0,1)$, implies $\rho_1=\rho_2=\rho$. \begin{remark} The distinction between states and normal states mentioned in Remark \ref{remark_normalornot} does not lead to an ambiguity: by Example 4.1.35 in \cite{BR1}, the set $\mathcal S({\mathcal H})$, when viewed as a subspace of $\mathcal B({\mathcal H})^*_{+,1}$, is a face, so that $\rho \in \mathcal S({\mathcal H})$ is extremal regarding convex decompositions in $\mathcal S({\mathcal H})\cap \mathcal F(\Phi)$ if and only if it is extremal regarding convex decompositions in $\mathcal B({\mathcal H})^*_{+,1} \cap \mathcal F(\Phi)$. \end{remark} \begin{coro} \label{coro_subinvariantstate} For any enclosure ${\mathcal V}$ contained in $\mathcal R$, there exists an invariant state $\rho$ such that $\mathrm{supp}\, \rho \subset {\mathcal V}$. \end{coro} \noindent{\bf Proof: } By definition of $\mathcal R$, there exists an invariant state $\rho$ with $\mathrm{supp}\,\rho\cap \mathcal V \neq \{0\}$. By Proposition \ref{prop_coherence}, $P_{\mathcal V} \,\rho\, P_{\mathcal V}$ is (up to normalization) an invariant state with support in ${\mathcal V}$. 
$\Box$ \smallskip The following Proposition is the main result in this section: \begin{prop}\label{prop_minimal_enclosures} A subspace of $\mathcal R$ is a minimal enclosure if and only if it is the support of an extremal invariant state. Moreover, any enclosure included in~$\mathcal R$ contains a (nontrivial) minimal enclosure. Equivalently, for any invariant state $\rho$, there exists an extremal invariant state $\rho_{\mathrm{ex}}$ with $\mathrm{supp}\,\rho_{\mathrm{ex}} \subset \mathrm{supp}\, \rho$. \end{prop} \noindent{\bf Proof: } If $\mathcal V$ is a minimal enclosure contained in $\mathcal R$, then by Corollary \ref{coro_subinvariantstate}, there exists a $\Phi$-invariant state $\rho_{\mathcal V}$ with support in ${\mathcal V}$. By the discussion following Definition~\ref{defi_enclosures}, the restriction of $\Phi$ to ${\mathcal I_1({\mathcal V})}$ is irreducible. Proposition \ref{prop_Schrader} shows that $\rho_{\mathcal V}$ is the unique $\Phi$-invariant state with support in ${\mathcal V}$, and $\mathrm{supp}\,\rho_{\mathcal V}={\mathcal V}$. This $\rho_{\mathcal V}$ must be extremal since $\rho_{\mathcal V}=t\, \rho_1 + (1-t)\, \rho_2$ with $\rho_1$, $\rho_2$ invariant states and $t\in(0,1)$ would imply that $\rho_1$, $\rho_2$ are invariant states with support in $\mathcal V$ but then by uniqueness, $\rho_{\mathcal V}=\rho_1=\rho_2$. Conversely, if $\mathcal V= \mathrm{supp}\, \rho$ with $\rho$ an extremal invariant state, then by Proposition \ref{prop_SuppInvSt}, $\mathcal V$ is an enclosure. If we suppose, by contradiction, that it is not minimal, then there exists an enclosure $\mathcal W$ with $\mathcal W \subsetneq\mathcal V\subset \mathcal R$ and, by Corollary~\ref{coro_subinvariantstate}, an invariant state $\rho'$ with $\mathrm{supp}\,\rho'\subset \mathcal W$. 
Since $\rho$ is faithful on $\mathcal V$, by the same argument as in the proof of Proposition \ref{prop_SuppInvSt}, we can approximate $\rho'$ in the $\mathcal I_1(\mathcal V)$ norm sense by a sequence $(\rho'_p)_p$ of finite-rank operators such that for every $p$, there exists $\lambda_p$ with $\rho'_p\leq\lambda_p \rho$. If we let $\Psi_n=\frac 1n \sum_{k=0}^{n-1} \Phi^k$ then by a standard compactness argument, $(\Psi_n(\rho'_p))_n$ converges weakly to a $\Phi$-invariant nonnegative trace-class operator $\rho^{\mathrm{inv}}_p$ which therefore satisfies $\rho^{\mathrm{inv}}_p \leq \lambda_p \,\rho$. The extremality of $\rho$ implies that $\rho^{\mathrm{inv}}_p$ is proportional to $\rho$. This in turn implies that $(\Psi_n(\rho'))_n$ converges weakly to $\rho$, but $\Psi_n(\rho')=\rho'$ by the $\Phi$-invariance of $\rho'$. Therefore, $\rho'=\rho$, a contradiction. By Proposition \ref{prop_SuppInvSt} and Corollary \ref{coro_subinvariantstate}, the second and third claims are equivalent. To prove the second one, consider the map $\Phi^*_{\mathcal R}$ on the set $\mathcal B(\mathcal R)$ of bounded operators acting on ${\mathcal R}$ defined by $$ \Phi^*_{\mathcal R}(P_{\mathcal R} x P_{\mathcal R}) = P_{\mathcal R} \Phi^*(x) P_{\mathcal R},$$ and denote by ${\mathcal F}(\Phi^*_{\mathcal R})$ the vector space of fixed points of $\Phi^*_{\mathcal R}$, i.e. ${\mathcal F}(\Phi^*_{\mathcal R}) =\{ X\in P_{\mathcal R}{\mathcal B}({\mathcal H})P_{\mathcal R}: \Phi^*_{\mathcal R}(X)=X\}.$ We know that ${\mathcal F}(\Phi^*_{\mathcal R})$ is the image of a normal conditional expectation by Theorem 2.1 of \cite{FV}. The proof of Theorem~5 of \cite{Tom} then shows that ${\mathcal F}(\Phi^*_{\mathcal R})$ is an atomic subalgebra. It is straightforward to verify that the projections contained in ${\mathcal F}(\Phi^*_{\mathcal R})$ are exactly the projections on enclosures contained in ${\mathcal R}$.
So, for any enclosure $\mathcal V$ contained in ${\mathcal R}$, we consider the corresponding projection $P_{\mathcal V}\in {\mathcal F}(\Phi^*_{\mathcal R})$; since ${\mathcal F}(\Phi^*_{\mathcal R})$ is atomic, it contains a minimal projection $P'\le P_{\mathcal V}$ and the range of $P'$ is then a minimal enclosure contained in ${\mathcal V}$. $\Box$ \begin{remark} The proof of the third claim of Proposition \ref{prop_minimal_enclosures} can be given in a more constructive way: consider an invariant state $\rho$, which by restriction one can assume is faithful, i.e. with support ${\mathcal H}$. By the Banach-Alaoglu theorem, the set $\mathcal B(\mathcal H)^*_{+,1}\cap \mathcal F(\Phi)$ is a compact, convex, metrizable subset of the locally convex space $\mathcal B(\mathcal H)^*$ equipped with the weak-* topology. By Theorem 4.1.11 and Proposition 4.1.3 in \cite{BR1}, and the fact that affine maps on $\mathcal B(\mathcal H)^*$ are exactly the maps $\eta \mapsto \eta(X)$ for $X \in \mathcal B(\mathcal H)$, there exists a Borel probability measure $\mu$ on~$\mathcal B(\mathcal H)^*$, such that $\rho(X)= \int\eta(X)\, \mathrm{d} \mu(\eta)$ for any $X$, and $\mu$ has support in the set of extremal states of $\mathcal B(\mathcal H)^*_{+,1}\cap \mathcal F(\Phi)$. Since in addition the set $\mathcal S({\mathcal H}) \cap \mathcal F(\Phi)$ is a face, $\mu$ has support in the set of extremal states of $\mathcal S({\mathcal H})\cap \mathcal F(\Phi)$. For any Borel set $B$ of $\mathcal B(\mathcal H)^*$ with $\mu(B)>0$ one can define a state $\rho_B$ by $\rho_B(X)=\frac1{\mu(B)}\int_B \eta(X)\, \mathrm{d} \mu(\eta)$. This $\rho_B$ satisfies $\mathrm{supp}\, \rho_B \subset \mathrm{supp}\, \rho$.
By considering a sequence of Borel sets that are balls $B(\rho_0,\frac1n)$ for the metric compatible with the weak-* topology restricted to the unit sphere of $\mathcal B(\mathcal H)^*$, one has for $\mu$-almost all $\rho_0$ that~$\rho_{B(\rho_0,\frac1n)} \to \rho_0$ in the topology of $\mathcal S({\mathcal H})$, so that $\mathrm{supp}\, \rho_0 \subset \mathrm{supp}\,\rho$. \end{remark} For any quantum channel $\Phi$, point 2 of Proposition \ref{prop_coherence}, together with Proposition \ref{prop_minimal_enclosures}, will allow us to decompose the space $\mathcal R$ associated with $\Phi$ into a direct sum of minimal enclosures, each of which is the support of an extremal invariant state. We complement the two results quoted above with the following lemma, which essentially shows that the procedure of taking orthogonal complements is effective for producing decompositions into minimal enclosures: \begin{lemme} \label{lemme_makeitorthogonal} Let $\mathcal V = {\mathcal V}_1+\ldots+ {\mathcal V}_n+{\mathcal V}_{n+1}$, where the ${\mathcal V}_i$, $i=1,\ldots, n+1$, are distinct minimal enclosures contained in $\mathcal R$, and ${\mathcal V}_i\perp {\mathcal V}_j$ for $i\neq j$ in $1,\ldots, n$. Then there exists a minimal enclosure ${\mathcal V}_{n+1}'$, orthogonal to ${\mathcal V}_1,\ldots, {\mathcal V}_n$ and such that ${\mathcal V}= {\mathcal V}_1+\ldots+{\mathcal V}_n+{\mathcal V}_{n+1}'$. If $n=1$ then one can take ${\mathcal V}_2'= {\mathcal V}\cap {\mathcal V}_1^\perp$. In particular, if a subspace of $\mathcal R$ can be written as a sum of minimal enclosures, then it can be written as a sum of mutually orthogonal minimal enclosures. \end{lemme} \noindent{\bf Proof: } Let us first prove the claim for $n=1$. We know that ${\mathcal V}$ is an enclosure as a direct sum of two enclosures, and so by Proposition \ref{prop_coherence}, ${\mathcal V}_2'$ is an enclosure. If ${\mathcal V}_2\perp {\mathcal V}_1$ then ${\mathcal V}_2'={\mathcal V}_2$ and there is nothing to prove.
Assume therefore that ${\mathcal V}_2\not\perp {\mathcal V}_1$. Proposition \ref{prop_minimal_enclosures} provides us with a nontrivial minimal enclosure ${\mathcal W}\subseteq {\mathcal V}_2'$. Then $\mathcal W \not\subset {\mathcal V}_2$ for otherwise $\mathcal W ={\mathcal V}_2\subset {\mathcal V}_2'$ and ${\mathcal V}_2\perp {\mathcal V}_1$, a contradiction. Since ${\mathcal W}$ is contained in ${\mathcal V}_1+{\mathcal V}_2$, there exists $w\in {\mathcal W}$ such that $w=v_1+v_2$ for some $v_1\in {\mathcal V}_1\setminus \{0\}$ and $v_2\in {\mathcal V}_2$. Then $v_1=w-v_2\in {\mathcal V}_1\cap({\mathcal W}+{\mathcal V}_2)$. By Corollary \ref{coro_sumsenclosures}, this means that ${\mathcal V}_1\cap({\mathcal W}+{\mathcal V}_2)$ is a nontrivial enclosure contained in the minimal enclosure ${\mathcal V}_1$. Consequently ${\mathcal V}_1\subset {\mathcal W}+{\mathcal V}_2$, so that ${\mathcal V}_1+{\mathcal V}_2\subset{\mathcal W}+{\mathcal V}_2$ and necessarily ${\mathcal W}={\mathcal V}_2'$. This proves the minimality of ${\mathcal V}_2'$. Now if $n>1$, define ${\mathcal V}_{n+1,1}'=({\mathcal V}_1+{\mathcal V}_{n+1})\cap {\mathcal V}_1^\perp$. By the preceding discussion, ${\mathcal V}_{n+1,1}'$ is orthogonal to ${\mathcal V}_1$ and ${\mathcal V}_1+{\mathcal V}_{n+1} = {\mathcal V}_1+{\mathcal V}_{n+1,1}'$. Then define ${\mathcal V}_{n+1,2}'=({\mathcal V}_2+{\mathcal V}_{n+1,1}')\cap {\mathcal V}_2^\perp$. This ${\mathcal V}_{n+1,2}'$ is now orthogonal to ${\mathcal V}_1$ and ${\mathcal V}_2$ and ${\mathcal V}_2+{\mathcal V}_{n+1,1}'={\mathcal V}_2+{\mathcal V}_{n+1,2}'$ so that ${\mathcal V}_1+{\mathcal V}_2+{\mathcal V}_{n+1}={\mathcal V}_1+{\mathcal V}_2+{\mathcal V}_{n+1,2}'$. Iterating this process gives the desired ${\mathcal V}_{n+1}'$ in the form of ${\mathcal V}_{n+1,n}'$. $\Box$ \smallskip We therefore have our main tool for decompositions of quantum channels into irreducible ones. 
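The orthogonalization mechanism in the $n=1$ case of Lemma \ref{lemme_makeitorthogonal} is purely linear-algebraic. The following NumPy sketch (an illustration only: the subspaces of $\mathbb{C}^3$ are ad hoc, and no quantum channel or enclosure structure is involved) replaces ${\mathcal V}_2$ by ${\mathcal V}_2'=({\mathcal V}_1+{\mathcal V}_2)\cap {\mathcal V}_1^\perp$ and checks that ${\mathcal V}_2'\perp{\mathcal V}_1$ while ${\mathcal V}_1+{\mathcal V}_2'={\mathcal V}_1+{\mathcal V}_2$; the intersection is computed as the image of ${\mathcal V}_1+{\mathcal V}_2$ under the projection onto ${\mathcal V}_1^\perp$, which coincides with it because ${\mathcal V}_1\subset{\mathcal V}_1+{\mathcal V}_2$.

```python
import numpy as np

def orth_basis(cols):
    # Orthonormal basis of the column span; SVD is robust to
    # rank-deficient inputs, unlike a plain QR factorization.
    u, s, _ = np.linalg.svd(cols, full_matrices=False)
    return u[:, s > 1e-10]

# Two non-orthogonal one-dimensional subspaces of C^3 (ad hoc choice).
v1 = orth_basis(np.array([[1.0], [0.0], [0.0]]))   # V_1 = span(e_1)
v2 = orth_basis(np.array([[1.0], [1.0], [1.0]]))   # V_2, not orthogonal to V_1

# V = V_1 + V_2, and V_2' = V cap V_1^perp, computed as (Id - P_{V_1}) V.
v_basis = orth_basis(np.hstack([v1, v2]))
p1 = v1 @ v1.conj().T                         # orthogonal projection onto V_1
v2p = orth_basis((np.eye(3) - p1) @ v_basis)  # basis of V_2'

# V_2' is orthogonal to V_1 ...
assert np.allclose(v1.conj().T @ v2p, 0)
# ... and V_1 + V_2' = V_1 + V_2 (compare the two orthogonal projections).
p_old = v_basis @ v_basis.conj().T
w_basis = orth_basis(np.hstack([v1, v2p]))
p_new = w_basis @ w_basis.conj().T
assert np.allclose(p_old, p_new)
```

The same replacement, iterated as in the proof above, orthogonalizes a finite family of subspaces while preserving their sum.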
We wish to relate these decompositions to the structure of invariant states of $\Phi$. In the case of Markov chains, it is well known that these are all convex combinations of the extremal invariant states associated with irreducible parts of the decomposition. We will see in the next section, however, that this is not the case for general quantum channels. \section{Invariant states of non-irreducible quantum channels} \label{section_decomposition} In this section we study the last ingredient of our decomposition, that is, how the invariant states of a quantum channel on a sum ${\mathcal V}_1+{\mathcal V}_2$ of two minimal enclosures relate to the extremal invariant states associated with these two minimal enclosures. We will see that this relation depends on the uniqueness of the decomposition ${\mathcal V}_1+{\mathcal V}_2$. Let us define what we mean by this uniqueness. We say that the decomposition of a subspace $\mathcal Z$ of $\mathcal R$ in a direct sum of minimal enclosures is unique if, whenever $(\mathcal V_\alpha)_{\alpha\in A}$ and $(\mathcal W_\beta)_{\beta\in B}$ are two families of minimal enclosures with \[ {\mathcal V}_\alpha\cap {\mathcal V}_{\alpha'}=\{0\} \mbox{ for any } \alpha\neq \alpha',\qquad \mathcal W_\beta\cap \mathcal W_{\beta'}=\{0\} \ \mbox{ for any } \beta\neq \beta',\] and $\mathcal Z=\sum_{\alpha\in A}\mathcal V_\alpha = \sum_{\beta\in B} \mathcal W_\beta,$ then the sets $\{\mathcal V_\alpha, \, \alpha\in A\}$ and $\{\mathcal W_\beta,\, \beta\in B\}$ coincide, and in particular $A$ and~$B$ have the same cardinality. \smallskip The following lemma characterizes the situations in which the decomposition of a subspace as the direct sum of two enclosures is unique. First remark that, by point $2$ in Proposition \ref{prop_coherence}, if $x$ and $y$ are in $\mathcal R$ then \begin{itemize} \item either $\mathrm{Enc}(x )\perp \mathrm{Enc}(y)$, \item or $ x\not\in\mathrm{Enc}(y)^\perp$ and $y\not\in\mathrm{Enc}(x)^\perp$.
\end{itemize} Indeed, if $y \in \mathrm{Enc}(x)^\perp\cap \mathcal R$ then $\mathrm{Enc}(y)\perp \mathrm{Enc}(x)$. \begin{lemme}\label{lemma_uniquenessdec} Let $\mathcal V = {\mathcal V}_1 + {\mathcal V}_2$, where ${\mathcal V}_1$ and ${\mathcal V}_2$ are minimal enclosures contained in $\mathcal R$. The decomposition of $\mathcal V$ in a direct sum of minimal enclosures is unique if and only if any enclosure $\mathcal W$ such that $\mathcal W\not\perp {\mathcal V}_1$ and $\mathcal W\not\perp {\mathcal V}_2$ satisfies $\mathcal W \cap \mathcal V = \{0\}$. If this condition holds, then ${\mathcal V}_1$ and ${\mathcal V}_2$ are orthogonal. \end{lemme} \noindent{\bf Proof: } Assume the decomposition of $\mathcal V$ as a direct sum of minimal enclosures is unique. Then ${\mathcal V}_1 \perp {\mathcal V}_2$: otherwise, by Proposition \ref{prop_coherence}, $\mathcal V \cap {\mathcal V}_1^\perp$ would be an enclosure that does not contain ${\mathcal V}_2$, leading to a different decomposition of $\mathcal V$. Now consider a minimal enclosure $\mathcal W$ with $\mathcal W\not\perp {\mathcal V}_1$ and $\mathcal W\not\perp {\mathcal V}_2$. This implies $\mathcal W\neq {\mathcal V}_1$, so by Proposition \ref{prop_enclosures}, $\mathcal W \cap {\mathcal V}_1=\{0\}$. If $\mathcal W\cap \mathcal V\neq \{0\}$ then it is a nontrivial enclosure contained in $\mathcal W$, so by minimality, $\mathcal W\subset \mathcal V$. Then $\mathcal W \oplus {\mathcal V}_1$ is a direct sum of minimal enclosures contained in $\mathcal V$, so, by Proposition \ref{prop_minimal_enclosures}, one can complete this as a decomposition of~$\mathcal V$ into a direct sum of minimal enclosures. This is a contradiction, so $\mathcal W\cap \mathcal V= \{0\}$. Now assume that any enclosure $\mathcal W$ such that $\mathcal W\not\perp {\mathcal V}_1$ and $\mathcal W\not\perp {\mathcal V}_2$ satisfies $\mathcal W \cap \mathcal V = \{0\}$.
Taking first $\mathcal W= {\mathcal V}_2$, which obviously has a nontrivial intersection with $\mathcal V$, we obtain that ${\mathcal V}_1\perp {\mathcal V}_2$. Now consider some minimal enclosure ${\mathcal V}_3$ contained in $\mathcal V$. Then, by assumption, one has \textit{e.g.} ${\mathcal V}_3\perp {\mathcal V}_1$ and ${\mathcal V}_3\not\perp {\mathcal V}_2$ and so ${\mathcal V}_3\subset {\mathcal V}_1^\perp \cap \mathcal V$, which, as proved above, is ${\mathcal V}_2$. This proves the uniqueness of the decomposition. $\Box$ Next we need to strengthen Proposition \ref{prop_coherence} to distinguish between the situations where the decomposition into minimal enclosures is unique or not. The first result treats the situation where the decomposition is unique. To simplify the notation, from now on, when $\mathcal V$ is an enclosure, we will denote by $\Phi_{|\mathcal V}$ (instead of $\Phi_{|\mathcal I_1(\mathcal V)}$) the restriction of $\Phi$ to $\mathcal I_1(\mathcal V)$. \begin{prop} \label{prop_enclosures_unique} If $\rho$ is $\Phi$-invariant and $\mathcal V$ and $\mathcal W$ are two minimal enclosures contained in $\mathcal R$, such that the decomposition of $\mathcal V + \mathcal W$ into a sum of minimal enclosures is unique, then $P_{\mathcal V}\, \rho\, P_{\mathcal W}=P_{\mathcal W}\,\rho \,P_{\mathcal V}=0$, \textit{i.e.} with the notation of Proposition~\ref{prop_coherence} one has $\rho_{\mathcal C}=\rho_{\mathcal C}'=0$. \end{prop} \noindent{\bf Proof: } If $\mathcal V$ and $\mathcal W$ are minimal enclosures in $\mathcal R$, then, by Proposition \ref{prop_minimal_enclosures}, they are the supports of extremal invariant states $\rho_{\mathcal V}$ and $\rho_{\mathcal W}$. Because the decomposition of $\mathcal V+ \mathcal W$ into minimal enclosures is unique, $\rho_{\mathcal V}$ and~$\rho_{\mathcal W}$ are the unique extremal invariant states of $\Phi_{|(\mathcal V +\mathcal W)}$. 
Since the set of invariant states is convex, by the Krein-Milman theorem $\rho$ is a convex combination of $\rho_{\mathcal V}$ and $\rho_{\mathcal W}$, so $\rho_{\mathcal C}$ and $\rho_{\mathcal C}'$ must be zero. $\Box$ \begin{remark} \label{remark_MarkovChainCase} Consider the quantum channel $\Phi$ associated with a Markov chain as in Example \ref{example_MarkovChain}. It is a simple observation that a minimal enclosure for $\Phi$ is necessarily of the form ${\mathcal V}=\ell^2(C)$ for $C$ a minimal communication class for the Markov chain (where $\ell^2(C)$ is viewed as a subspace of $\ell^2(E)$). Therefore, two distinct minimal enclosures ${\mathcal V}_1=\ell^2(C_1)$ and ${\mathcal V}_2=\ell^2(C_2)$ are necessarily orthogonal, decompositions into sums of minimal enclosures are unique, and any invariant state on ${\mathcal H}=\ell^2(C_1\cup C_2)$ is a convex combination of the extremal invariant states $\rho_1,~\rho_2$ with supports $\ell^2(C_1)$, $\ell^2(C_2)$ respectively. \end{remark} \smallskip A second result will allow us to describe more explicitly the situation where the decomposition into minimal enclosures is not unique, and describe the associated invariant states: \begin{prop}\label{prop_partialisom} Let ${\mathcal V}_1$ and ${\mathcal V}_2$ be two minimal enclosures contained in $\mathcal R$. Assume that the decomposition of $\mathcal V = {\mathcal V}_1+ {\mathcal V}_2$ in a direct sum of minimal enclosures is not unique.
Then $\dim\,{\mathcal V}_1= \dim \,{\mathcal V}_2.$ If, in addition, ${\mathcal V}_1\perp {\mathcal V}_2$ (as can be arranged by Lemma \ref{lemme_makeitorthogonal}) then there exists a partial isometry $Q$ from ${\mathcal V}_1$ to~${\mathcal V}_2$ satisfying \begin{equation}\label{eq_partialisometry} Q^* Q = \mathrm{Id}_{|{\mathcal V}_1}\qquad Q\,Q^* = \mathrm{Id}_{|{\mathcal V}_2} \end{equation} and for any $\rho$ in $\mathcal I_1(\mathcal H)$, and $R=QP_{{\mathcal V}_1}+Q^*P_{{\mathcal V}_2}$: \begin{equation}\label{eq_commpartialisom} R\, \Phi(\rho)\, P_{{\mathcal V}_i} + P_{{\mathcal V}_i}\, \Phi(\rho)\, R= \Phi\big(R\,\rho\, P_{{\mathcal V}_i} + P_{{\mathcal V}_i}\,\rho\, R\big) \quad \mbox{for }i=1,2. \end{equation} \end{prop} \noindent{\bf Proof: } By Lemma \ref{lemma_uniquenessdec}, there exists a minimal enclosure $\mathcal W$ in ${\mathcal V}_1+{\mathcal V}_2$ such that $\mathcal W \not\perp {\mathcal V}_i$, $i=1,2$. Then, \textit{e.g.}, ${\mathcal V}_1\cap \mathcal W^\perp$ is an enclosure contained in the minimal enclosure ${\mathcal V}_1$; it is not ${\mathcal V}_1$ itself, since $\mathcal W\not\perp {\mathcal V}_1$, so by minimality it is $\{0\}$. The restriction of $P_{\mathcal W}$ to ${\mathcal V}_1$ is therefore injective, so that $\dim {\mathcal V}_1 \leq \dim\mathcal W$, and by symmetry one has the equality $\dim\,{\mathcal V}_1 = \dim\,\mathcal W$. Similarly one has $\dim\,{\mathcal V}_2 = \dim\,\mathcal W$. Assume now that ${\mathcal V}_1\perp {\mathcal V}_2$. Define the map $\Phi_{\mathcal R}^*$ as in the proof of Proposition~\ref{prop_minimal_enclosures}. By Remark \ref{remark_PFHeisenberg}, if $E={\mathcal V}_1$, ${\mathcal V}_2$ or $\mathcal W$, then $P_{E}$ is (up to a multiplicative constant) the unique fixed point of the restriction $\Phi_{E}^*$ of $\Phi_{\mathcal R}^*$ to $\mathcal B(E)$. Consider the decomposition of $P_{\mathcal W}=\begin{pmatrix}A& B^*\\B&C\end{pmatrix}$ in the splitting $\mathcal V={\mathcal V}_1\oplus {\mathcal V}_2$, where necessarily~$B\neq 0$.
A simple consequence of Proposition \ref{prop_coherence} is that in the same decomposition, $\Phi_{\mathcal R}^*(P_{\mathcal W})=\begin{pmatrix}{\Phi_{\mathcal R}^*}(A)& {\Phi_{\mathcal R}^*}(B)^*\\ {\Phi_{\mathcal R}^*}(B)&{\Phi_{\mathcal R}^*}(C)\end{pmatrix}$. Therefore $A$ is proportional to $P_{{\mathcal V}_1}$ and $C$ to $P_{{\mathcal V}_2}$. Writing relations $P=P^*=P^2$ satisfied by $P_{\mathcal W}$, one sees that $B$ must be proportional to an operator $Q$ satisfying relations~\eqref{eq_partialisometry}. Fix $Q$; for $\theta\in[0,\pi]$, the operator defined by \[P_{\theta}=\begin{pmatrix}\cos^2 \theta & \sin \theta\cos\theta\, Q^*\\ \sin \theta\cos\theta\, Q & \sin^2\theta \end{pmatrix} \] is an orthogonal projection preserved by the map $\Phi_{\mathcal R}^*$. So its range is an enclosure and, by point $3$ of Proposition \ref{prop_coherence}, $P_\theta$ will satisfy the relation \[\Phi(P_{\theta} \, \rho \, P_{\theta})= P_{\theta} \,\Phi(\rho)\,P_{\theta},\] for any $\rho$ in $\mathcal I_1(\mathcal H)$. Differentiating this relation with respect to $\theta$, we have \[ \Phi\big(\frac{\mathrm d P_\theta}{\mathrm d\theta} \, \rho \, P_{\theta}+ P_{\theta} \, \rho \, \frac{\mathrm d P_\theta}{\mathrm d\theta}\big)=\frac{\mathrm d P_\theta}{\mathrm d\theta} \, \Phi(\rho) \, P_{\theta}+ P_{\theta} \, \Phi(\rho) \, \frac{\mathrm d P_\theta}{\mathrm d\theta}. \] Computing the derivatives at $\theta=0$ and $\theta=\pi/2$, we obtain relations \eqref{eq_commpartialisom}. $\Box$ \begin{coro}\label{coro_partialisom2} Assume that $\mathcal V = {\mathcal V}_1+ {\mathcal V}_2$ where ${\mathcal V}_1$ and ${\mathcal V}_2$ are mutually orthogonal minimal enclosures, contained in $\mathcal R$, but that the decomposition of ${\mathcal V}$ into a direct sum of minimal enclosures is non-unique. For $i=1,2$ let $\rho^{\mathrm{inv}}_i$ be the unique invariant state with support in ${\mathcal V}_i$, if it exists, and $\rho^{\mathrm{inv}}_i=0$ otherwise. 
Let $Q$ be the partial isometry defined in Proposition \ref{prop_partialisom}. Then $\rho^{\mathrm{inv}}_2=Q\, \rho^{\mathrm{inv}}_1 \, Q^*$. If $\rho$ is an invariant state with support in $\mathcal V$, then: \begin{itemize} \item $P_{{\mathcal V}_1}\,\rho\,P_{{\mathcal V}_1} $ is proportional to $\rho^{\mathrm{inv}}_1$, \item $P_{{\mathcal V}_2}\,\rho\,P_{{\mathcal V}_2}$ is proportional to $\rho^{\mathrm{inv}}_2$, \item $P_{{\mathcal V}_1}\,\rho\,P_{{\mathcal V}_2}$ is proportional to $\rho^{\mathrm{inv}}_1\, Q^*=Q^*\rho^{\mathrm{inv}}_2$, \item $P_{{\mathcal V}_2}\,\rho\,P_{{\mathcal V}_1}$ is proportional to $\rho^{\mathrm{inv}}_2\, Q=Q\rho^{\mathrm{inv}}_1$. \end{itemize} \end{coro} \noindent{\bf Proof: } The first identity is obtained by applying relation \eqref{eq_commpartialisom} to $\rho=\rho^{\mathrm{inv}}_1$ with~$P_{{\mathcal V}_1}$, then applying it again to the resulting relation, this time with $P_{{\mathcal V}_2}$. That each $\rho_{i,j}=P_{{\mathcal V}_i}\,\rho\, P_{{\mathcal V}_j}$ is invariant is an immediate consequence of Proposition \ref{prop_coherence}. The relations satisfied by $\rho_{1,2}$ and $\rho_{2,1}$ are then obtained by applying relation~\eqref{eq_commpartialisom} to e.g. $\rho_{1,2}$, with $P_{{\mathcal V}_1}$ or $P_{{\mathcal V}_2}$. $\Box$ \section{Irreducible decompositions of quantum channels and invariant states} \label{section_irreducibledecompositions} We are now in a position to state the relevant decomposition associated with $\Phi$. \begin{prop}\label{prop_finaldec} Let $\Phi$ be a quantum channel on a separable Hilbert space ${\mathcal H}$.
There exists a decomposition of ${\mathcal H}$ in the form \begin{equation}\label{eq_finaldec} {\mathcal H} = \mathcal D + \sum_{\alpha \in A}{\mathcal V}_\alpha + \sum_{\beta \in B}\sum_{\gamma \in C_\beta} {\mathcal V}_{\beta,\gamma}, \end{equation} where any set $A,B,C_\beta$ is at most countable, $A$ and $B$ can be empty (but not simultaneously), any $C_\beta$ has cardinality at least two, and: \begin{itemize} \item every ${\mathcal V}_\alpha$ or ${\mathcal V}_{\beta,\gamma}$ in this decomposition is a minimal enclosure, \item for $\beta$ in $B$, any minimal enclosure that is not orthogonal to $\sum_{\gamma \in C_\beta}{\mathcal V}_{\beta,\gamma}$ is contained in $\sum_{\gamma \in C_\beta} {\mathcal V}_{\beta,\gamma}$, \item any two distinct subspaces $\mathcal D$, ${\mathcal V}_\alpha$, ${\mathcal V}_{\beta,\gamma}$ are mutually orthogonal. \end{itemize} \end{prop} \noindent{\bf Proof: } We start with the orthogonal decomposition $\mathcal H = \mathcal D +\mathcal R$, and proceed to decompose $\mathcal R$. Consider the set of all minimal enclosures ${\mathcal V}$ with the property that any minimal enclosure different from ${\mathcal V}$ is orthogonal to ${\mathcal V}$. By separability, this set is at most countable. Then we can denote all such minimal enclosures by ${\mathcal V}_\alpha$, with $\alpha$ in a (countable) set of indices $A$. Let $\mathcal O$ be the direct sum of all these enclosures, $\mathcal O=\sum_{\alpha\in A}{\mathcal V}_\alpha$. Then $\mathcal O$ is an enclosure, and, by point $2$ of Proposition \ref{prop_coherence}, $\mathcal R \cap \mathcal O^\perp$ is also an enclosure. Assume that $\mathcal R \cap \mathcal O^\perp$ is nontrivial; we proceed to decompose it. Let $\beta(1)=1$ and consider a minimal enclosure ${\mathcal V}_{\beta(1),1}\subset \mathcal R \cap \mathcal O^\perp$. 
Since ${\mathcal V}_{\beta(1),1}$ is not one of the ${\mathcal V}_\alpha$, by the definition of $\mathcal O$ there exists a minimal enclosure ${\mathcal V}_2$ in $\mathcal R \cap \mathcal O^\perp$, distinct from ${\mathcal V}_{\beta(1),1}$ and not orthogonal to it, and by Lemma \ref{lemme_makeitorthogonal} we can choose ${\mathcal V}_{\beta(1),2}$ minimal, orthogonal to ${\mathcal V}_{\beta(1),1}$, and such that ${\mathcal V}_{\beta(1),1}+{\mathcal V}_{\beta(1),2}={\mathcal V}_{\beta(1),1}+{\mathcal V}_2$. If all minimal enclosures are either included in ${\mathcal V}_{\beta(1),1}+ {\mathcal V}_{\beta(1),2}$ or orthogonal to ${\mathcal V}_{\beta(1),1}+ {\mathcal V}_{\beta(1),2}$, we set $C_{\beta(1)}=\{1,2\}$. Otherwise, we call ${\mathcal V}_{3}$ a minimal enclosure not included in and not orthogonal to ${\mathcal V}_{\beta(1),1}+{\mathcal V}_{\beta(1),2}$. By Lemma \ref{lemme_makeitorthogonal} we can choose ${\mathcal V}_{\beta(1),3}$ minimal, orthogonal to ${\mathcal V}_{\beta(1),1}+{\mathcal V}_{\beta(1),2}$ and such that $${\mathcal V}_{\beta(1),1}+{\mathcal V}_{\beta(1),2}+{\mathcal V}_{\beta(1),3}={\mathcal V}_{\beta(1),1}+{\mathcal V}_{\beta(1),2}+{\mathcal V}_{3}$$ and we proceed again with the same method, for an at most countable number of steps, so as to construct $C_{\beta(1)}$. If $\mathcal R\cap \mathcal O^\perp \cap \big(\sum_{\gamma\in C_{\beta(1)}}{\mathcal V}_{\beta(1),\gamma}\big)^\perp\neq \{0\}$, we can iterate the procedure. $\Box$ \medskip Before we state our next result, let us give some notation. We fix a decomposition \eqref{eq_finaldec} as considered in Proposition \ref{prop_finaldec}. We define \[P_0 = P_{\mathcal R^\perp},\qquad P_i = P_{{\mathcal V}_i}\; \mbox{ for $i\in A$ or $i\in \bigcup_{\beta\in B}\{\beta\}\times C_\beta$}, \] and, for a state $\rho$, and $i$, $j$ taking the values $0$, $\alpha \in A$ or $(\beta,\gamma)\in \bigcup_{\beta\in B}\, \{\beta\}\times C_\beta$ \begin{equation} \label{eq_decomprho} \rho_i=P_i \, \rho \, P_i,\qquad \rho_{i,j}= P_i \, \rho \, P_j.
\end{equation} In addition, we denote by $\rho^{\mathrm{inv}}_i$ the unique invariant state of $\Phi_{|{\mathcal V}_i}$ if it exists, and $\rho^{\mathrm{inv}}_i=0$ otherwise. \smallskip We can now state: \begin{theo}\label{theo_invariantstates} Let $\rho$ be a $\Phi$-invariant state and consider a related orthogonal decomposition of the form \eqref{eq_finaldec}. With the notation~\eqref{eq_decomprho}, we have \begin{enumerate} \item $\rho_0=0$, \item every $\rho_i$ is proportional to $\rho^{\mathrm{inv}}_i$, for all indices $i\in A\cup\bigcup_{\beta\in B}\{\beta\}\times C_\beta$, \item for $\gamma\neq \gamma'$ in $C_\beta$, the off-diagonal term $\rho_{((\beta,\gamma),(\beta,\gamma'))}$, which we simply denote by $\rho_{(\beta,\gamma,\gamma')}$, may be non-zero, and is $\Phi$-invariant. In addition, there exists a partial isometry $Q_{(\beta,\gamma,\gamma')}$ from ${\mathcal V}_{\beta,\gamma}$ to ${\mathcal V}_{\beta,\gamma'}$ such that: \begin{itemize} \item $\rho^{\mathrm{inv}}_{(\beta,\gamma')}=Q_{(\beta,\gamma,\gamma')}\, \rho^{\mathrm{inv}}_{(\beta,\gamma)}\, Q_{(\beta,\gamma,\gamma')}^*$ \item $\rho_{(\beta,\gamma,\gamma')}$ is proportional to $Q^*_{(\beta,\gamma,\gamma')}\,\rho^{\mathrm{inv}}_{(\beta,\gamma')}=\rho^{\mathrm{inv}}_{(\beta,\gamma)}\, Q^*_{(\beta,\gamma,\gamma')}$, \end{itemize} \item all other $\rho_{i,j}$ (for $i,j$ taking all possible values in $\{0\}\cup A\cup\bigcup_{\beta\in B}\{\beta\}\times C_\beta$) are zero. \end{enumerate} \end{theo} \noindent{\bf Proof: } This follows from a repeated application of Propositions \ref{prop_coherence} and \ref{prop_finaldec}, and Corollary \ref{coro_partialisom2}. $\Box$ \begin{remark} The decomposition of an invariant state $\rho$ given by Theorem \ref{theo_invariantstates} can be rewritten in the same form as in formula $(12)$ of Theorem $7$ in \cite{BN}, or as in Theorem 22 of \cite{DFSU}, by simple algebraic manipulations. 
The key object is an isomorphism between ${\mathcal V}_{\beta,1}\otimes \mathbb{C}^{C_\beta}$ and $\sum_{\gamma \in C_\beta} {\mathcal V}_{\beta,\gamma}$ for each $\beta$, given by \[\mathcal E (u\otimes x) = \sum_{\gamma \in C_\beta} u_\gamma \, Q_{(\beta,1,\gamma)} x \ \mbox{ for }\ u=(u_\gamma)_{\gamma\in C_\beta}.\] \end{remark} \begin{remark} The representation of invariant states appearing in Theorem \ref{theo_invariantstates} has recently been studied in \cite{DFSU}, where an analogous result is proven in infinite dimension (and in the continuous time setting, but this point is not crucial). Our techniques and starting point are completely different, and essentially follow the approach used in \cite{BN} and \cite{CP1}. Concerning the orthogonal decomposition and the representation of invariant states, however, our result is more general than the one in \cite[Theorem 2.1]{DFSU}, since we do not need to assume the atomicity of the decoherence free algebra (notice that the existence of a faithful normal invariant state assumed in \cite{DFSU} is not a restriction, since our decomposition only concerns the fast recurrent subspace $\mathcal R$, and by Remark \ref{remark_Umanita}, the restriction of~$\Phi$ to~$\mathcal R$ has a faithful invariant state). The key step which allows us to avoid this additional assumption is that we can prove that the fixed point algebra~${\mathcal F}(\Phi^*_{\mathcal R})$ is atomic. When there exists a faithful invariant state, this means that ${\mathcal F}(\Phi^*)$ is atomic. However, we do not know whether the decoherence free algebra (see \cite{DFSU}), usually denoted by ${\mathcal N}(\Phi^*)$, is atomic, nor can we so far deduce other generalizations of the results on the structure of this algebra studied in \cite{DFSU}. \end{remark} \section{Examples}\label{section_examples} \begin{example} (classical Markov chains) Consider as in Example \ref{example_MarkovChain} a Markov chain on a countable set $E$.
Denote by $(C_\alpha)_{\alpha\in A}$ the family of minimal communication classes $C_\alpha$ such that the Markov chain has an invariant probability~$\pi^{(\alpha)}$ with support $C_\alpha$, by $R=\cup_{\alpha\in A} C_\alpha$ the (disjoint) union of these classes, and by~$D$ the complement $D=E\setminus R$. Then, according to the discussion in Remark \ref{remark_MarkovChainCase}, the decomposition \eqref{eq_finaldec} of ${\mathcal H}=\ell^2(E)$ is given by \[ {\mathcal H} = \mathcal D + \sum_{\alpha\in A} {\mathcal V}_\alpha \quad \mbox{where}\ \mathcal D = \ell^2(D), \ {\mathcal V}_\alpha = \ell^2(C_\alpha),\] and any invariant state on ${\mathcal H}$ is a convex combination of the extremal invariant states, which are of the form $\sum_{i\in C_\alpha} \pi^{(\alpha)}_i \ketbra{e_i}{e_i}$. \end{example} \begin{example} Consider the quantum channel defined in Example \ref{example_23}. From the computations in Example \ref{example_23}, one has $\mathcal R=\mathbb{C}\, e_1$ and therefore $\mathcal D = \mathbb{C}\, e_2$. \end{example} \begin{example} ($2\times 2$ matrices) Consider ${\mathcal H}=\mathbb{C}^2$ and $\Phi$ a positive quantum map on the algebra $\mathcal B(\mathbb{C}^2)$, which we identify with the set $M_2(\mathbb{C})$ of $2\times 2$ matrices and equip with the scalar product $ \langle x , y \rangle_{M_2} = \mathrm{tr} (x^*y)$. The normalized Pauli matrices \[ \sigma_0=\frac{1}{\sqrt{2\,}}\, \mathrm{Id}_{\mathbb{C}^2}, \quad \sigma_1 =\frac{1}{\sqrt{2\,}}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_2 = \frac{1}{\sqrt{2\,}}\begin{pmatrix} 0 & \llap{-}i \\ i & 0 \end{pmatrix}, \quad \sigma_3 = \frac{1}{\sqrt{2\,}}\begin{pmatrix} 1 & 0 \\ 0 & \llap{-}1 \end{pmatrix} \] form an orthonormal basis of $M_2(\mathbb{C})$ and satisfy \[ \sigma_k^2=\sigma_0^2,\qquad \sigma_k\sigma_j = - \sigma_j\sigma_k \ \ (j\neq k), \qquad \sigma_j\sigma_k = \frac{\mathrm{i}}{\sqrt{2\,}}\, \sigma_\ell \] if $(j,k,\ell)\in\{(1,2,3),(2,3,1),(3,1,2)\}$.
It is easy to see that, since $\Phi$ is trace preserving and positive, its matrix in the basis $\{\sigma_0,\sigma_1,\sigma_2,\sigma_3\}$ is of the form \begin{eqnarray}\label{generator} \Phi= \begin{pmatrix} 1 & {}^t 0\\ b & A \end{pmatrix} \end{eqnarray} where $b\in {\mathbb R}^3$, ${}^t 0 = (0,0,0)$, and $A$ is a $3\times 3$ matrix with real coefficients. The map~$\Phi$ is positive if and only if $\|b+Ax\|\le 1$ for all $x$ such that $\|x\|\le 1$ (see \cite{C} for more details, albeit in the continuous time setting). \smallskip It is well-known that states on $\mathbb{C}^2$ are exactly the operators of the form $\rho=\frac{1}{\sqrt{2\,}}(\sigma_0+u\cdot\sigma)$ with $u$ in ${\mathbb R}^3$, $\|u\|\leq 1$ (here we use the standard notation $u\cdot\sigma= \sum_{i=1,2,3}u_i\, \sigma_i$). This is called the Bloch sphere representation. In addition, it is easy to see that a state $\rho=\frac{1}{\sqrt{2\,}}(\sigma_0+u\cdot\sigma)$ is invariant for $\Phi$ if and only if $b+Au-u=0$. Essentially, the problem of decomposing $\cal R$ into minimal enclosures reduces to solving the linear system $b+Au-u=0$, and then determining whether there exist solutions with $\|u\|=1$ and how many there are. Note that, by the Markov--Kakutani Theorem, an invariant state (i.e.\ a solution with $\|u\|\le 1$) always exists. For the decomposition of the fast recurrent space $\cal R$, only $3$ different cases are possible. \begin{itemize} \item There exists a unique invariant state $\rho$. Then the only minimal enclosure in $\cal R$ is $\cal R$ itself, and it has dimension $2$ when $\rho$ is faithful and $1$ otherwise. \item There exist infinitely many invariant states, and they are given by all convex combinations of two extremal invariant states $\rho_1$ and $\rho_2$. Then $\cal R$ can be written in a unique way as the direct sum of two minimal enclosures, which are the supports of $\rho_1$ and $\rho_2$. \item There exist infinitely many extremal invariant states.
Then any state is invariant, any one-dimensional subspace is an enclosure, and $\cal R$ can be written as ${\cal R}=\mathbb{C}\, e_1\oplus \mathbb{C}\, e_2$ for any two linearly independent vectors $e_1$, $e_2$ in $\mathbb{C}^2$. This third case occurs if and only if $\Phi$ is the identity operator. \end{itemize} \end{example} \begin{example} Define $V=\mathbb{N}\cup\{0\}$, ${\mathfrak h}={\mathbb C}^3$, ${\mathcal H} = {\mathfrak h}\otimes \ell^2(V)$ and fix a canonical basis $(e_k)_{k=1,2,3}$ of ${\mathfrak h}$, in which we represent all matrices and vectors. Choose $p,q>0$ such that $p<1/2$, $p+q<1$, and a family of operators $(L_{i,j})_{i,j \in V}$ on ${\mathfrak h}$ such that $L_{i,j}=0$ when $|i-j|\ge 2$, $L_{0,0}=\sqrt{1-p\,}\, \mathrm{Id}_{\mathfrak h}$, $L_{j+1,j}= \sqrt{p\,} \,\mathrm{Id}_{\mathfrak h}$ for $j\ge0$ and \begin{eqnarray*} L_{j-1,j}&=& \begin{pmatrix} \sqrt{1-p\,} &0&0\\ 0&\sqrt{1-p\,}&0 \\ 0&0&\ \sqrt{q\,}\ \end{pmatrix} \quad \mbox{for } j\ge1, \\ L_{j,j}&=&\sqrt{\frac{1-p-q}2\,} \begin{pmatrix} 0 &0&1\\ 0&0&1\\ 0&0&0 \end{pmatrix} \quad \mbox{for } j\ge1. \end{eqnarray*} We have $\sum_{i\in V}L_{i,j}^*L_{i,j}=\mathrm{Id}$ for all $j$ in $V$, so that the map $\Phi$ acting on ${\mathcal I}_1(\mathcal H)$ defined by $$ \Phi(\rho) = \sum_{i,j\in V} (L_{i,j}\otimes \ketbra ij) \, \rho \, (L_{i,j}^*\otimes \ketbra ji) $$ is a quantum channel. This map $\Phi$ is an open quantum random walk with transition operators $(L_{i,j})_{i,j \in V}$ as defined in \cite{APSS}. Denote by $(\ket j)_{j\in V}$ the canonical basis of $\ell^2(V)$. It was proved in \cite{CP1} that minimal enclosures for open quantum random walks are generated by vectors of the form $u\otimes \ket i$.
Consider therefore $u={}^t(u_1,u_2,u_3)$ in ${\mathfrak h}$. Then $$ \mathrm{Enc}(u\otimes \ket i) =\left\{\begin{array}{ll} \mathrm{span}\{u\otimes \ket j, j\ge 0\} & \mbox{if } u_3=0,\\ \mathrm{span}\{e_3\otimes \ket j,(e_1+e_2)\otimes \ket j, j\ge 0\} & \mbox{if } u_3\neq0, u_1=u_2,\\ {\mathcal H} & \mbox{if } u_3\neq0, u_1\neq u_2. \end{array}\right. $$ The enclosures described in the first case ($u_3=0$) are the minimal ones, and so they support the extremal invariant states of the evolution. Using finite difference equations as for similar classical Markov chains, one can compute these extremal invariant states, $$ \rho(u)=c\,\sum_{j\ge 0} \left(\frac{p}{1-p}\right)^j \ketbra uu \otimes \ketbra jj, $$ for $u={}^t(u_1,u_2,0)\neq 0$ and a normalizing constant $c$. Then we have $\mathcal R=\mbox{span }\{e_1,e_2\}\otimes \ell^2(V)$, $\mathcal D=\mbox{span }\{e_3\}\otimes \ell^2(V)$, and the decomposition \eqref{eq_finaldec} can be written with $A$ empty, $B$ consisting of only one element~$\beta$, $C_{\beta}=\{1,2\}$, $$ {\mathcal V}_{\beta,1} = \mathrm{span}\{v^1\otimes\ket j, j\ge 0\}, \qquad {\mathcal V}_{\beta,2} = \mathrm{span}\{v^2\otimes \ket j, j\ge 0\} , $$ for any linearly independent vectors $v^1$ and $v^2$ orthogonal to $e_3$. We observe that $\rho^{\mathrm{inv}}_{\beta,1}=\rho(e_1)$ is the only invariant state with support $E=\mathrm{span}\{e_1\otimes \ket j, j\ge 0\}$ and, defining $Q$ as $\ketbra {e_2}{e_1}$, that any invariant state has a decomposition $$ \rho=\begin{pmatrix} \quad t\, \rho(e_1) \quad & \lambda \,\rho(e_1) Q^* \\ \bar\lambda \,Q\rho(e_1) & (1-t)\, Q\rho(e_1)Q^*\end{pmatrix} \qquad\mbox{with } t\in [0,1],\ \lambda\in\mathbb{C},\ |\lambda|^2\le t(1-t), $$ where the last condition ensures positivity. Using the previous expressions for enclosures, one can also deduce the communication classes, in particular for the vectors of the form $u\otimes \ket j $, which are the most interesting in the special case of open quantum random walks. \end{example}
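Restricted to the span of $e_1,e_2$ and diagonal in position, the open quantum random walk above acts as a classical nearest-neighbour chain on $\{0,1,2,\dots\}$ that jumps right with probability $p$ and left (or stays, at the origin) with probability $1-p$; the geometric weights $(p/(1-p))^j$ appearing in $\rho(u)$ are its stationary distribution. The following minimal numerical check of this reduction is ours (truncation size and all names are illustrative choices, not from the paper):

```python
p = 0.3          # right-jump probability; p < 1/2 as in the example
N = 200          # truncate the half-line to sites {0, 1, ..., N}

def step(pi):
    """One step of the reduced classical walk: from 0 stay with prob 1-p,
    from j >= 1 jump left with prob 1-p; from any j jump right with prob p
    (reflected at the truncation edge N)."""
    out = [0.0] * (N + 1)
    for j, w in enumerate(pi):
        out[max(j - 1, 0)] += (1 - p) * w   # left move, or stay at 0
        out[min(j + 1, N)] += p * w         # right move, or stay at N
    return out

# candidate stationary weights pi_j proportional to (p/(1-p))^j, normalised
r = p / (1 - p)
weights = [r ** j for j in range(N + 1)]
Z = sum(weights)
pi = [w / Z for w in weights]

# invariance check: one step of the walk leaves pi unchanged
err = max(abs(a - b) for a, b in zip(pi, step(pi)))
```

With the reflecting truncation chosen here the geometric weights are stationary exactly (detailed balance $\pi_j\,p = \pi_{j+1}(1-p)$ holds site by site), so `err` is at machine precision; this matches the weights in $\rho(u)$ above.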
\subsection*{Examples}The following examples show that Theorem 1 and Corollary 3 essentially cannot be improved upon. \begin{itemize}\renewcommand{\labelitemi}{\scalebox{.5}{{$\blacksquare$}}} \item[(A)] There is a Suslinian chainable continuum that contains a totally disconnected Polish space of positive dimension. This example was constructed by Howard Cook and Andrew Lelek \cite[Example 4.2] {iv}. By \cite[Theorem 3.2] {iv} it contains a totally disconnected set $P$ with countable complement, and by the statement of \cite[Example 4.2] {iv} the set $P$ is not zero-dimensional.\footnote{Note that in \cite{iv}, the term \textit{hereditarily disconnected} is used instead of totally disconnected, and \textit{totally disconnected} is defined to be a stronger condition which is still satisfied by every zero-dimensional space.} \item[(B)] There is a hereditarily decomposable chainable continuum which homeomorphically contains $\mathfrak E_{\mathrm{c}}$ (e.g.\ the Cantor organ \cite[p.\;191]{kur}). \item[(C)] There is a Suslinian dendroid whose endpoint set is homeomorphic to $\mathfrak E_{\mathrm{c}}$. This example can be obtained as a quotient of the Lelek fan and is due to Piotr Minc and Ed Tymchatyn (personal communication). \end{itemize} \begin{figure}[h]\includegraphics[scale=0.042]{weee.png} \caption{Illustration of Example A. The set $K_1$ is constructed so that the points where two triangles intersect accumulate onto the middle third of the left-most segment $S$. This principle is repeated in the smaller triangles to form $K_2, K_3$, etc. The set $X=\bigcap _{n=0}^\infty K_n$ is a Suslinian chainable continuum. If $Q$ is any countable set which intersects each arc of $X$, then $P=X\setminus Q$ is totally disconnected. But every clopen subset of $P$ meeting $S$ must contain all of $P\cap S$ (proved by Cook and Lelek). Hence $P$ is not zero-dimensional.} \end{figure} \subsection*{Proof of Theorem 1} Let $X$ be a Suslinian chainable continuum. 
Let $E$ be a totally disconnected $G_{\delta}$-subset of $X$. We will produce a point $e$ at which $E$ is zero-dimensional. To that end, let $\mathcal K$ be the set of all non-degenerate connected components of $\overline E$. Note that $\mathcal K$ is countable by the Suslinian property. So by Baire's theorem either $\bigcup \mathcal K$ has empty interior in $\overline E$, or there exists $K\in \mathcal K$ which contains a non-empty open subset of $\overline E$. \medskip \textbf{Case 1:} $\bigcup \mathcal K$ has empty interior in $\overline E$. \medskip Then $\overline E\setminus \bigcup \mathcal K$ is a dense $G_{\delta}$-set in $\overline E$. Since $E$ is also dense and $G_{\delta}$ in $\overline E$, there exists $e\in E\cap (\overline E\setminus \bigcup \mathcal K)$. Then $\{e\}$ is a connected component of the compactum $\overline E$. So $\overline E$ is zero-dimensional at $e$ (cf.\ \cite[Section 1.4]{eng}). Therefore $E$ is zero-dimensional at $e$. \medskip \textbf{Case 2:} There exists $K\in \mathcal K$ which contains a non-empty open subset of $\overline E$. \medskip Let $U$ be a non-empty relatively open subset of $\overline E$ that is contained in $K$. The continuum $K$ is Suslinian and chainable, as these properties are inherited from $X$. Therefore $K$ is hereditarily decomposable \cite[Theorem 1.1]{lel} and irreducible \cite[Theorem 12.5]{nad}. By Kuratowski's theory of tranches (see \cite[\S 48]{kur} or \cite[p.15]{es}), there is a mapping $\varphi:K\to [0,1]$ such that $\varphi^{-1}\{t\}$ is a nowhere dense subcontinuum of $K$ for every $t\in [0,1]$. By the Suslinian property, the set $$Q=\big\{t\in [0,1]:|\varphi^{-1}\{t\}|>1\big\}$$ is countable. Therefore $K\setminus \varphi^{-1}(Q)$ is dense $G_{\delta}$ in $K$. So there exists $$e\in E\cap U\setminus \varphi^{-1}(Q).$$ By compactness of $K$ there exist $a,b\in [0,1]\setminus Q$ such that $$a<\varphi(e)<b$$ and $\varphi^{-1}[a,b]\subset U$.
We claim that there is a relatively clopen $C\subset E\cap \varphi^{-1}[a,\varphi(e)]$ which contains $e$ and misses the point $\varphi^{-1}(a)$. To see this, note that $E\cap \varphi^{-1}[a,\varphi(e)]$ is non-degenerate because $E$ is dense in $\varphi^{-1}(a,\varphi(e))$. Since $E$ is totally disconnected, $E\cap \varphi^{-1}[a,\varphi(e)]$ is not connected. Let $W$ be a proper clopen subset of the space $E\cap \varphi^{-1}[a,\varphi(e)]$ such that $e\in W$. Since $K\setminus \varphi^{-1}(Q)$ is dense in $\varphi^{-1} (a,\varphi(e))$, and $W$ is a proper closed subset of $E\cap \varphi^{-1}[a,\varphi(e)]$, there exists $c\in [a,\varphi(e))\setminus Q$ such that $\varphi^{-1}(c)\notin W$. Then $C=W\cap \varphi^{-1}[c,\varphi(e)]$ is as desired. Likewise there is a relatively clopen $D\subset E\cap \varphi^{-1}[\varphi(e),b]$ which contains $e$ and misses the point $\varphi^{-1}(b)$. Then $C\cup D$ is a clopen subset of $E$ that contains $e$ and lies inside of $U$. If $U'$ is any smaller open subset of $\overline E$ containing $e$, then the same argument will produce a clopen subset of $E$ which contains $e$ and lies inside of $U'$. Therefore $E$ is zero-dimensional at $e$. This concludes the proof of Theorem 1. \hfill $\blacksquare$ \subsection*{Remarks} Corollaries 2 and 3 follow from Theorem 1 because $E(X)$ and $\mathfrak E_{\mathrm{c}}$ are totally disconnected Polish spaces, and $\mathfrak E_{\mathrm{c}}$ is nowhere zero-dimensional. By applying Theorem 1 locally, it can be seen that if $E$ is any totally disconnected $G_{\delta}$-subset of $X$ (a Suslinian chainable continuum), then the set of points at which $E$ is zero-dimensional is dense in $E$. The set of points at which a separable metrizable space is zero-dimensional is always $G_{\delta}$, so we get the following strengthening of Corollary 2: \textit{The set of points at which $E(X)$ is zero-dimensional is dense $G_{\delta}$ in $E(X)$.} Thus $E(X)$ is zero-dimensional ``almost everywhere''.
\section{Introduction} Within the standard model (SM), the double charm decays of $B_{u,d}$ and $B_{s}$ mesons considered here are dominated by the color-favored ``tree'' transition $b\to c\bar{c}d (s)$, while the color-suppressed ``penguin'' transition is generally small. If the penguin contribution were absent, the mixing-induced CP asymmetry (CPA), denoted as $S_f$, would be proportional to $\sin(2\beta)$, while the direct CPA, denoted as $C_f$, would be zero. In some new physics models beyond the SM, the penguin contributions can be large and may change the SM predictions for the branching ratios and the CP asymmetries significantly. The study of these double charm $B/B_s$ meson decays therefore plays an important role in testing the SM as well as in searching for signals of new physics (NP). Experimentally, the BaBar and Belle Collaborations have reported measurements of the direct CPA in the $B^0\to D^{+}D^{-}$ decay \begin{eqnarray} \mathcal{C}(B^0 \to D^+D^-)&=&\left\{\begin{array}{ll} -0.91\pm0.23\pm0.06 &\mbox{(Belle \cite{Fratina:2007zk})},\\ -0.07\pm0.23\pm0.03 &\mbox{(BaBar \cite{:2008aw})}. \end{array}\right. \label{eq:DDdircp} \end{eqnarray} Belle thus found evidence of CP violation in $B^0 \to D^+ D^-$ at the $4.1\sigma$ level \cite{Fratina:2007zk}, but BaBar did not \cite{:2008aw}. On the other hand, such a large direct CPA has not been observed in measurements of other similar decay modes, such as $\bar{B}^0 \to D^{^{(*)+}}D^{^{(*)-}}$, $B^-\to D^{^{(*)0}}D^{^{(*)-}}$ and $\bar{B}^0_{s}\to D^{^{(*)+}}_s D^{^{(*)-}}$ \cite{:2008aw,Miyake:2005qb,Aushev:2004uc,Aubert:2007rr,Vervink:2008dv, Aubert:2006ia,Abe:2007sk,Majumder:2005gy}, although these modes have the same flavor structure as $B^0\to D^+D^-$ at the quark level. In the SM, the direct CPAs should naturally be very small in size because the penguin contributions are small.
If the large CP violation in $B^0 \to D^+D^-$ reported by Belle is confirmed, it would establish the presence of new physics. Up to now, by using the low-energy effective Hamiltonian and various factorization hypotheses, many investigations of the decays of $B$ mesons to double-charm states have been carried out in the framework of the SM \cite{LuCaidian10,Liying} or of some popular new physics models \cite{Zwicky:2007vv,Fleischer:2007zn,Gronau:2008ed,ruminwang:SUSY2009}. In this paper, we present a systematic calculation of the branching ratios and CP violations for the double charm decays $B/B_s \to D^{(*)}_{(s)} D^{(*)}_{(s)}$ in the minimal supergravity (mSUGRA) model \cite{nills1984}. In the framework of the mSUGRA model, the new physics contributions to the semileptonic, leptonic and radiative rare $B$ decays and to the charmless two-body $B$-meson decays have been investigated in previous works~\cite{hmdx98,tyy96,tyy97,huang03,zw04}. For the two-body $B \to M_1 M_2$ decays, the new physics part of the Wilson coefficients $C_k (k=3,\cdots,6)$, $C_{7\gamma}$ and $C_{8g}$ in the mSUGRA model can be found in Ref.~\cite{zw04}. The usual route to calculating the decay amplitude for non-leptonic two-body $B$ decays is to start from the low energy effective Hamiltonian for $\Delta B=1$ decays. With the operator product expansion method, the relevant $\Delta B=1$ effective Hamiltonian can be factorized into the Wilson coefficients $C_i(\mu)$ times the four-quark operators $Q_i(\mu)$. The Wilson coefficients $C_i(\mu)$ have been evaluated to next-to-leading order using perturbation theory and renormalization group methods. The remaining, and also the most intractable, problem is the calculation of the hadronic matrix elements of these four-quark operators. Up to now, many methods have been put forward to address this problem, such as the naive or generalized factorization approach \cite{NF,gfa}, the QCD factorization approach (QCDF) \cite{qcdf,bn03a} and the perturbative QCD (PQCD) approach \cite{pqcd}.
The strong phase, which is crucial for predicting CP violation, is quite sensitive to the choice among these approaches, and different approaches may lead to quite different results. In this paper, we use the naive factorization method, which is expected to be reliable for the color-allowed amplitudes that dominate these double charm decays. This paper is organized as follows. In the next section we give a brief review of the mSUGRA model. In Sec.~\ref{sec:framework}, we introduce the basic formulas for calculating the branching ratios, the polarization fractions and the CP violation in the considered $B/B_s \to D^{(*)}_{(s)} D^{(*)}_{(s)}$ decays. In Sec.~\ref{sec:results}, we present the numerical results for the double charm decays of $B$ mesons in both the SM and the mSUGRA model. The conclusions are given in the final section. \section{Outline of the mSUGRA model} \label{sec:msugra} In the minimal supersymmetric standard model (MSSM), the most general superpotential takes the form \cite{nills1984,msugra} \begin{eqnarray} {\cal W}=\varepsilon_{\alpha\beta}\left [f_{Uij}Q_{i}^{\alpha}H_{2}^{\beta}U_{j} +f_{Dij}H_{1}^{\alpha}Q_{i}^{\beta}D_{j} +f_{Eij}H_{1}^{\alpha}L_{i}^{\beta}E_{j} -\mu H_{1}^{\alpha}H_{2}^{\beta} \right ]. \end{eqnarray} In addition, a set of terms which explicitly but softly break SUSY must be added to the supersymmetric Lagrangian.
A general form of the soft SUSY-breaking terms is given as \begin{eqnarray} -{\cal L}_{soft}&=& \left (m^{2}_{Q}\right )_{ij}\tilde{q}^{+}_{Li}\tilde{q}_{Lj} +\left(m^{2}_U\right )_{ij}\tilde{u}^{*}_{Ri}\tilde{u}_{Rj} +\left(m^{2}_D\right )_{ij}\tilde{d}^{*}_{Ri}\tilde{d}_{Rj} +\left(m^{2}_L\right )_{ij}\tilde{l}^{+}_{Li}\tilde{l}_{Lj}\nonumber\\ &\ \ & +\left(m^{2}_E\right )_{ij}\tilde{e}^{*}_{Ri}\tilde{e}_{Rj} +\Delta^{2}_{1}h_{1}^{+}h_{1}+\Delta^{2}_{2}h_{2}^{+}h_{2} \nonumber\\ &\ \ & +\varepsilon_{\alpha\beta} \left [A_{Uij}\tilde{q}^{\alpha}_{Li}h^{\beta}_{2}\tilde{u}^{*}_{Rj} +A_{Dij}h^{\alpha}_{1}\tilde{q}^{\beta}_{Li}\tilde{d}^{*}_{Rj} +A_{Eij}h^{\alpha}_{1}\tilde{l}^{\beta}_{Li}\tilde{e}^{*}_{Rj} +B\mu h^{\alpha}_{1}h^{\beta}_{2}\right ]\nonumber\\ &\ \ & +\frac{1}{2}m_{\tilde{B}}\tilde{B}\tilde{B} +\frac{1}{2}m_{\tilde{W}}\tilde{W}\tilde{W} +\frac{1}{2}m_{\tilde{G}}\tilde{G}\tilde{G} + H.C. \label{eq:lsoft} \end{eqnarray} where $\tilde{q}_{Li}$, $\tilde{u}^{*}_{Ri}$, $\tilde{d}^{*}_{Ri}$, $\tilde{l}_{Li}$, $\tilde{e}^{*}_{Ri}$, $h_1$ and $h_2$ are scalar components of chiral superfields $Q_i$, $U_i$, $ D_{i}$, $L_{i}$, $E_{i}$, $H_1$ and $H_2$ respectively, and $\tilde{B}$, $\tilde{W}$ and $\tilde{G}$ are $ U(1)_Y$, $SU(2)_L$, and $ SU(3)_C $ gauge fermions. In order to avoid severe phenomenological problems, such as large flavor changing neutral currents (FCNC), unacceptable amount of additional CP violation and so on, a set of assumptions are added to the unconstrained MSSM in the mSUGRA model. One underlying assumption is that SUSY-breaking occurs in a hidden sector which communicates with the visible sector only through gravitational interactions. 
The free parameters in the MSSM are assumed to obey a set of boundary conditions at the Grand Unification scale $M_{X}$ \cite{nills1984,msugra} \begin{eqnarray} \alpha_{1}&=&\alpha_{2}=\alpha_{3}=\alpha_{X}, \nonumber\\ (m^{2}_{Q})_{ij}&=& (m^{2}_{U})_{ij}=(m^{2}_{D})_{ij}=(m^{2}_{L})_{ij} =(m^{2}_{E})_{ij}=(m^{2}_{0})\delta_{ij}, \nonumber\\ \Delta^{2}_{1}&=&\Delta^{2}_{2}=m^{2}_{0}, \nonumber\\ A_{Uij}&=&f_{Uij}A_{0},\ \ A_{Dij}=f_{Dij}A_{0}, \ \ A_{Eij}=f_{Eij}A_{0}, \nonumber\\ m_{\tilde{B}}&=& m_{\tilde{W}}=m_{\tilde{G}}=m_{\frac{1}{2}} \end{eqnarray} where $\alpha_{i}=g^2_i/(4\pi)$ and $g_{i}$ ($i=1,2,3$) denotes the coupling constant of the $U(1)_Y$, $SU(2)_L$ and $SU(3)_C$ gauge groups, respectively. Besides the three parameters $ m_{\frac{1}{2}}$, $m_{0}$ and $A_{0}$, the bilinear coupling $B$ and the supersymmetric Higgs(ino) mass parameter $\mu$ in the supersymmetric sector must also be determined. By requiring that radiative electroweak symmetry breaking (EWSB) take place at the low energy scale, both are fixed up to the sign of $\mu$. At this stage, only four continuous free parameters and an unknown sign are left in the mSUGRA model \begin{eqnarray} \tan\beta,\ m_{\frac{1}{2}},\ m_{0},\ A_{0},\ \mathrm{sign}(\mu). \end{eqnarray} According to previous studies of the constraints on the parameter space of the mSUGRA model \cite{lepa,lepb,spa,sps1,zw04,ali02}, we choose two typical sets of mSUGRA parameters, as listed in Table \ref{tab1}.
\begin{table}[tbp] \caption{Two typical sets of SUSY parameters used in the numerical calculation.} \label{tab1} \begin{center} \begin{tabular}{c|ccccc|c} \hline \hline CASE & $m_0$ & $m_{\frac{1}{2}}$ & $A_0$ & $\tan\beta$ & $\mathrm{sign}(\mu)$ & $R_7$ \\ \hline A & $300$ & $300$ & $0$ & $2$ & $-$ & $1.10$ \\ B & $369$ & $150$ & $-400$ & $40$ & $+$ & $-0.93$ \\ \hline \hline \end{tabular} \end{center} \end{table} \section{Effective Hamiltonian and observables} \label{sec:framework} In this section, we give a brief review of the theoretical framework: the low energy effective Hamiltonian, the factorized matrix elements and the decay amplitudes for $\Delta B=1$ decays. \subsection{Effective Hamiltonian in the SM and the mSUGRA model} In the SM, the low energy effective Hamiltonian for $\Delta B=1$ transitions at a scale $\mu$ is given by \cite{Buchalla:1995vs} \begin{eqnarray} \mathcal{H}^{\rm SM}_{\rm eff}&=&\frac{G_F}{\sqrt{2}}\sum_{p=u, c} \lambda_p \Biggl\{C_1Q_1^p+C_2Q_2^p +\sum_{i=3}^{10}C_iQ_i+C_{7\gamma}Q_{7\gamma} +C_{8g}Q_{8g} \Biggr\}+ h.c., \label{HeffSM} \end{eqnarray} where $\lambda_p=V_{pb}V_{pq}^* $ for the $b \to q$ transition $(p\in \{u,c\},\ q\in \{d,s\})$. The detailed definitions of the operators can be found in Ref.~\cite{Buchalla:1995vs}. Within the SM and at the scale $M_W$, the Wilson coefficients $C_1(M_W), \cdot\cdot\cdot ,C_{10}(M_W)$, $C_{7\gamma}(M_W)$ and $C_{8g}(M_W)$ have been given, for example, in Ref.~\cite{Buchalla:1995vs}.
By using the QCD renormalization group equations, it is straightforward to run the Wilson coefficients $C_i(M_W)$ from the scale $\mu\sim O(M_W)$ down to the lower scale $\mu\sim O(m_b)$. In the mSUGRA model, there are four kinds of SUSY contributions to the $b\to d(s)$ transition at the one-loop level, depending on the virtual particles running in the penguin diagrams: \begin{itemize} \item[] (i) the charged Higgs bosons $H^{\pm}$ and the up-type quarks $u,c,t$; \item[] (ii) the charginos $\tilde{\chi}^{\pm}_{1,2}$ and the up-type squarks $\tilde{u}, \tilde{c},\tilde{t}$; \item[] (iii) the neutralinos $\tilde{\chi}^{0}_{1,2,3,4}$ and the down-type squarks $\tilde{d}, \tilde{s},\tilde{b}$; \item[] (iv) the gluinos $\tilde{g}$ and the down-type squarks $\tilde{d}, \tilde{s},\tilde{b}$. \end{itemize} In general, the Wilson coefficients after the inclusion of these various contributions can be expressed as \begin{eqnarray} C_i(\mu_W) = C_{i}^{SM} + C_{i}^{H^-} +C_{i}^{\tilde{\chi}^-} +C_{i}^{\tilde{\chi}^0} +C_{i}^{\tilde{g}}, \label{eq:cimuw} \end{eqnarray} where $C_{i}^{H^-}, C_{i}^{\tilde{\chi}^-}, C_{i}^{\tilde{\chi}^0}$ and $C_{i}^{\tilde{g}}$ denote the Wilson coefficients induced by the penguin diagrams with the exchanges of the charged Higgs $H^\pm$, the charginos $\tilde{\chi}^{\pm}_{1,2}$, the neutralinos $\tilde{\chi}^{0}_{1,2,3,4}$ and the gluino $\tilde{g}$, respectively. The detailed expressions for these Wilson coefficients can be found in Ref.~\cite{zw04}.
\subsection{Decay amplitudes in naive factorization} The decay amplitudes of $B\to D^{^{(*)}}D^{^{(*)}}_{q}$ in the SM within the naive factorization can be written as \cite{NF} \begin{eqnarray} \mathcal{M}^{\rm SM}(B\to D^{^{(*)}}D^{^{(*)}}_{q})=\frac{G_F}{\sqrt{2}}\left(\lambda_c a_1^c +\sum_{p=u,c}\lambda_p\left[a_4^p+a_{10}^p+\xi(a_6^p+a_8^p)\right] \right) A_{[BD^{^{(*)}},D^{^{(*)}}_{q}]},\label{AM} \end{eqnarray} where the coefficients $a^{p}_i=\left(C_i+\frac{C_{i\pm1}}{N_c}\right)+P^{p}_i$ with the upper (lower) sign applied when $i$ is odd (even), and $P^p_i$ account for penguin contributions. The factorization parameter $\xi$ in Eq. (\ref{AM}) arises from the transformation of $(V-A)(V+A)$ currents into $(V-A)(V-A)$ ones for the penguin operators. It depends on properties of the final-state mesons involved and is defined as \begin{eqnarray} \xi&=&\left\{\begin{array}{cl} +\frac{2m^2_{D_q}}{(\bar{m}_c+\bar{m}_q)(\bar{m}_b-\bar{m}_c)} &~~\mbox{($DD_q$)},\\ 0 &~~\mbox{($DD^*_q$)},\\ -\frac{2m^2_{D_q}}{(\bar{m}_c+\bar{m}_q)(\bar{m}_b+\bar{m}_c)} &~~\mbox{($D^*D_q$)},\\ 0 &~~\mbox{($D^*D^*_q$)}.\\ \end{array}\right. \end{eqnarray} The term $A_{[BD^{^{(*)}},D^{^{(*)}}_{q}]}$ in Eq.~(\ref{AM}) is the factorized matrix element. For $B\to D^{^{(*)}}D^{^{(*)}}_{q}$ decay mode, it can be written as \begin{eqnarray} A_{[BD^{^{(*)}},D^{^{(*)}}_{q}]}\equiv\left<D^{^{(*)}}_{q} |\bar{q}\gamma^\mu(1-\gamma_5)c|0\right>\left<D^{^{(*)}}| \bar{c}\gamma_\mu(1-\gamma_5)b|B\right>. 
\end{eqnarray} The decay constants and form factors \cite{NF,Neubert:1991xw} are usually defined as \begin{eqnarray} \langle D_q(p_{_{D_q}})|\bar{q}\gamma^\mu\gamma_5c|0\rangle &=& -if_{_{D_q}}p^\mu_{_{D_q}},\\ \langle D^*_q(p_{_{D^*_q}})|\bar{q}\gamma^\mu c|0\rangle &=& f_{_{D^*_q}}p^\mu_{_{D^*_q}},\\ \langle D(p_{_{D}})|\bar{c}\gamma_{\mu}b|B (p_{_{B}})\rangle &=&\frac{m^2_B-m^2_{_{D}}}{q^2}q_\mu F_0(q^2)+\left[(p_{_{B}}+p_{_{D}})_\mu-\frac{m^2_B-m^2_{_{D}}}{q^2}q_\mu\right] F_1(q^2),\ \ \ \end{eqnarray} \begin{eqnarray} \langle D^*(p_{_{D^*}},\varepsilon^{\ast})|\bar{c}\gamma_{\mu}b|B (p_{_{B}})\rangle &=& \frac{2V(q^2)}{m_B+m_{_{D^*}}} \epsilon_{\mu\nu\alpha\beta}\varepsilon^{\ast\nu}p_{_B}^{\alpha}p_{_{D^*}}^{\beta},\\ \langle D^*(p_{_{D^*}},\varepsilon^{\ast})|\bar{c}\gamma_{\mu}\gamma_5b|B (p_{_{B}})\rangle &=&i\left[\varepsilon_{\mu}^\ast(m_B+m_{_{D^*}})A_1(q^2) -(p_{_B}+p_{_{D^*}})_{\mu}({\varepsilon^\ast}\cdot{p_{_B}})\frac{A_2(q^2)} {m_B+m_{_{D^*}}}\right]\nonumber \\ &&-iq_{\mu}({\varepsilon^\ast}\cdot{p_{_B}})\frac{2m_{_{D^*}}}{q^2} [A_3(q^2)-A_0(q^2)], \end{eqnarray} where $q=p_B-p_{D^{^{(*)}}}$. 
In terms of decay constants and form factors, the matrix element $A_{[BD^{^{(*)}},D^{^{(*)}}_{q}]}$ can be written as follows \begin{eqnarray} A_{[BD^{^{(*)}},D^{^{(*)}}_{q}]}=\left \{\begin{array}{ll}if_{D_q}(m_B^2-m^2_{D})F_0(m^2_{D_q}), & (DD_{q}), \\2f_{D^{^{*}}_{_{q}}}m_B|p_c|F_1(m^2_{D^{^{*}}_{q}}), & (DD^{*}_{q}), \\-2f_{D_{q}}m_B|p_c|A_0(m^2_{D_{q}}), &(D^{*}D_{q}), \\ -i f_{D^{^{*}}_{_{q}}}m_{D^{^{*}}_{q}}\biggl[ (\varepsilon_{D^{^{*}}}^{\ast}\cdot\varepsilon_{D^{^{*}}_{q}}^{\ast}) (m_{B}+m_{D^{^{*}}})A_1(m_{D^{^{*}}_{q}}^2)\\ \hspace{2cm}-(\varepsilon_{D^{^{*}}}^{\ast}\cdot p_{D^{^{*}}_{q}})(\varepsilon_{D^{^{*}}_{q}}^{\ast}\cdot p_{D^{^{*}}})\frac{2A_2(m^2_{D^{^{*}}_{q}})}{m_{B}+m_{D^{^{*}}}}\biggr.\\ \hspace{2cm} \left.+i\epsilon_{\mu\nu\alpha\beta}\varepsilon_{D^{^{*}}_{q}}^{\ast\mu} \varepsilon_{D^{^{*}}}^{\ast\nu}p_{D^{^{*}}_{q}}^{\alpha}p_{D^{^{*}}}^{\beta} \frac{2V(m^2_{D^{^{*}}_{q}})}{m_{B}+m_{D^{^{*}}}}\right], &(D^*D^*_{q}). \end{array} \right. \end{eqnarray} For the penguin contributions, we will consider not only QCD and electroweak penguin operator contributions but also the contributions from the electromagnetic and chromomagnetic dipole operators $Q_{7\gamma}$ and $Q_{8g}$, as defined by the factor $P^p_i$\cite{NF}: \begin{eqnarray} P_1^c &=&0, \nonumber\\ P_4^p&=&\frac{\alpha_s}{9\pi}\left\{C_1\left[ \frac{10}{9}-G_{D^{(*)}_q}(m_p)\right]-2F_1C^{eff}_{8g}\right\},\nonumber\\ P_6^p&=&\frac{\alpha_s}{9\pi}\left\{C_1\left[ \frac{10}{9}-G_{D^{(*)}_q}(m_p)\right]-2F_2C^{eff}_{8g}\right\},\nonumber\\ P_8^p&=&\frac{\alpha_e}{9\pi}\frac{1}{N_c}\left\{(C_1+N_cC_2)\left[ \frac{10}{9}-G_{D^{(*)}_q}(m_p)\right]-3F_2C^{eff}_{7\gamma}\right\},\nonumber\\ P_{10}^p&=&\frac{\alpha_e}{9\pi}\frac{1}{N_c}\left\{(C_1+N_cC_2)\left[ \frac{10}{9}-G_{D^{(*)}_q}(m_p)\right]-3F_1C^{eff}_{7\gamma}\right\},\label{PiF} \end{eqnarray} with the penguin loop-integral function $G_{D^{(*)}_q}(m_p)$ defined as \begin{eqnarray} G_{D^{(*)}_q}(m_p)&=&\int^1_0 du G(m_p,k) 
\Phi_{D^{(*)}_q}(u),\\ G(m_p,k)&=&-4\int^1_0 dx\, x(1-x)\,\mbox{ln}\left[\frac{m_p^2-k^2x(1-x)}{m^2_b} -i \epsilon\right], \end{eqnarray} where $k^2=m^2_c+\bar{u}(m^2_b-m^2_c-m^2_{M_2})+\bar{u}^2m^2_{M_2}$ is the penguin momentum transfer, with $\bar u \equiv 1-u$. In the function $G_{D^{(*)}_q}(m_p)$, we have used a $D^{(*)}_q$ meson-emitting distribution amplitude $\Phi_{D^{(*)}_q}(u)=6u(1-u)[1+a_{D^{(*)}_q}(1-2u)]$, instead of keeping $k^2$ as a free parameter, as is often done. The constants $F_1$ and $F_2$ in Eq.~(\ref{PiF}) are defined by \cite{NF} \begin{eqnarray} F_1&=&\left\{\begin{array}{ll} \int^1_0 du \Phi_{D_q}(u)\frac{m_b}{m_b-m_c}\frac{m^2_b-um^2_{D_q}-2m^2_c+m_bm_c}{k^2} &~~\mbox{($DD_q$)},\\ \int^1_0 du \Phi_{D^*_q}(u)\frac{m_b}{k^2}\left(\bar{u}m_b+\frac{2um_{D^*_q}}{m_b-m_c}\epsilon_2^*\cdot p_{1} -um_c\right) &~~\mbox{($DD^*_q$)},\\ \int^1_0 du \Phi_{D_q}(u)\frac{m_b}{m_b+m_c}\frac{m^2_b-um^2_{D_q}-2m^2_c-m_bm_c}{k^2} &~~\mbox{($D^*D_q$)},\\ \int^1_0 du \Phi_{D^*_q}(u)\frac{m_b}{k^2}\left(\bar{u}m_b+\frac{2um_{D^*_q}}{m_b+m_c}\epsilon_2^*\cdot p_{1} +um_c\right) &~~\mbox{($D^*D^*_q$)},\\ \end{array}\right.\\ F_2&=&\left\{\begin{array}{ll} \int^1_0 du \Phi_{D_q}(u)\frac{m_b}{k^2}[\bar{u}(m_b-m_c)+m_c] &~~\mbox{($DD_q$)},\\ 0 &~~\mbox{($DD^*_q$)},\\ \int^1_0 du \Phi_{D_q}(u)\frac{m_b}{k^2}[\bar{u}(m_b+m_c)-m_c] &~~\mbox{($D^*D_q$)},\\ 0 &~~\mbox{($D^*D^*_q$)},\\ \end{array}\right. \end{eqnarray} where $\epsilon_{2L}^*\cdot p_{1}\approx (m_b^2-m^2_{M^*_q}-m_c^2)/(2m_{M^*_q})$ and $\epsilon_{2T}^*\cdot p_{1}=0$ for $B\to D^*D^*_q$ decays.
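The one-loop function $G(m_p,k)$ defined above can be evaluated directly by numerical quadrature; the $-i\epsilon$ prescription selects the physical branch of the logarithm and generates the absorptive (imaginary) part when the argument of the logarithm becomes negative. The following sketch is ours (function name, midpoint rule and sample size are illustrative choices, not from the paper):

```python
import cmath

def G_loop(mp2, k2, mb2, n=20000):
    """Midpoint-rule evaluation of
    G(m_p, k) = -4 * int_0^1 dx x(1-x) ln[(m_p^2 - k^2 x(1-x))/m_b^2 - i*eps],
    with all mass-squared arguments (mp2, k2, mb2) in the same units."""
    eps = 1e-12
    acc = 0j
    for i in range(n):
        x = (i + 0.5) / n
        acc += x * (1 - x) * cmath.log((mp2 - k2 * x * (1 - x)) / mb2 - 1j * eps)
    return -4 * acc / n
```

A convenient sanity check: for a massless quark in the loop ($m_p=0$) and $k^2>0$ the imaginary part is $2\pi/3$, independent of $k^2$, and for $k^2=m_b^2$ the real part reduces to $10/9$.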
\subsection{Observables of $B \to M_1 M_2$ decays} In the $B$ meson rest frame, the branching ratios of two-body $B$ meson decays can be written as \begin{eqnarray} \mathcal{B}(B\to D^{^{(*)}}D^{^{(*)}}_{_{q}})=\frac{\tau_B }{8\pi }\frac{|p_c |}{m_B^2}\left|\mathcal{M}(B\to D^{^{(*)}}D^{^{(*)}}_{_{q}})\right|^2, \end{eqnarray} where $\tau_B$ is the $B$ meson lifetime, and $|p_c|$ is the magnitude of the momentum of $M_1$ and $M_2$ in the $B$ rest frame, given by \begin{eqnarray} |p_c|=\frac{\sqrt{[m_B^2-(m_{D^{^{(*)}}}+m_{D^{(*)}_{q}})^2][m_B^2-(m_{D^{^{(*)}}}-m_{D^{(*)}_{q}})^2]}}{2m_B}. \end{eqnarray} In $B\to D^*D^*_q$ decays, one should generally evaluate three amplitudes, $\mathcal{M}_{0,\pm}$ in the helicity basis or $\mathcal{M}_{L,\parallel,\perp}$ in the transversity basis, which are related by $\mathcal{M}_{L}=\mathcal{M}_{0}$ and $\mathcal{M}_{\parallel,\perp}=\frac{\mathcal{M}_+\pm\mathcal{M}_-}{\sqrt{2}}$. Then we have \begin{eqnarray} \left|\mathcal{M}(B\to D^*D^*_q)\right|^2=|\mathcal{M}_0|^2+|\mathcal{M}_+|^2+|\mathcal{M}_-|^2 =|\mathcal{M}_L|^2+|\mathcal{M}_\parallel|^2+|\mathcal{M}_\perp|^2. \end{eqnarray} The longitudinal polarization fraction $f_L$ and the transverse polarization fraction $f_\perp$ are defined by \begin{eqnarray} f_{L,\perp}(B\to D^*D^*_q)&=&\frac{\Gamma_{L,\perp}}{\Gamma}=\frac{|\mathcal{M}_{L,\perp}|^2} {|\mathcal{M}_L|^2+|\mathcal{M}_\parallel|^2+|\mathcal{M}_\perp|^2}. \end{eqnarray} In charged $B$ meson decays, where mixing effects are absent, the only possible source of CP asymmetries (CPAs) is \begin{eqnarray} \mathcal{A}_{\rm CP}^{k,{\rm dir}}=\frac{\left|\mathcal{M}_k(B^-\rightarrow \overline{f})/\mathcal{M}_k(B^+\rightarrow f)\right|^2-1}{\left|\mathcal{M}_k(B^-\rightarrow \overline{f})/\mathcal{M}_k(B^+\rightarrow f)\right|^2+1},\label{Eq:APdir1} \end{eqnarray} where $k=L,\parallel,\perp$ for $B^-\to D^*D^*_q$ decays and $k=L$ for $B^-_u\to DD_q,DD^*_q,D^*D_q$ decays.
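The two-body kinematics and the helicity-to-transversity basis change above are simple to verify numerically. The following Python sketch implements $|p_c|$, the basis conversion, and the polarization fractions; the masses in the example are the illustrative values listed among the input parameters:

```python
import math

def p_c(m_B, m1, m2):
    """Daughter momentum |p_c| in the B rest frame (all masses in GeV)."""
    return math.sqrt((m_B**2 - (m1 + m2)**2) * (m_B**2 - (m1 - m2)**2)) / (2.0 * m_B)

def transversity(M0, Mp, Mm):
    """Helicity amplitudes (M_0, M_+, M_-) -> transversity (M_L, M_par, M_perp)."""
    return M0, (Mp + Mm) / math.sqrt(2.0), (Mp - Mm) / math.sqrt(2.0)

def fractions(ML, Mpar, Mperp):
    """Longitudinal and perpendicular polarization fractions (f_L, f_perp)."""
    tot = abs(ML)**2 + abs(Mpar)**2 + abs(Mperp)**2
    return abs(ML)**2 / tot, abs(Mperp)**2 / tot

# Example: |p_c| for B_d -> D*+ D*- with the meson masses used in this paper.
pc = p_c(5.280, 2.010, 2.010)
```

The basis change is unitary, so the summed rate $|\mathcal{M}_0|^2+|\mathcal{M}_+|^2+|\mathcal{M}_-|^2$ is unchanged, as the equation above states.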
Then for $B^-_u\to D^*D^*_q$ decays, we have \begin{eqnarray} \mathcal{A}_{\rm CP}^{+,{\rm dir}}(B\to D^*D^*_q)&=&\frac{\mathcal{A}_{\rm CP}^{\parallel,{\rm dir}}|\mathcal{M}_\parallel|^2+\mathcal{A}_{\rm CP}^{L,{\rm dir}}|\mathcal{M}_L|^2} {|\mathcal{M}_\parallel|^2+|\mathcal{M}_L|^2}.\label{Eq:APdir2} \end{eqnarray} For neutral $B_q$ meson decays, the situation is more complicated because of $B^0_q-\bar{B}^0_q$ mixing and has been studied by many authors. We do not repeat the lengthy discussion here; see Refs.~\cite{Gronau:1989zb,Soto:1988hf,Palmer:1994ec,Ali:1998gb} for details. \section{Numerical calculations} \label{sec:results} \subsection{Input parameters} \begin{itemize} \item CKM matrix elements: In the numerical calculation, we will use the following values \cite{CKMfit}: \begin{eqnarray} |V_{ud}|&=&0.9743,\quad |V_{us}|=0.2252,\quad |V_{ub}|=0.0035,\nonumber\\ |V_{cd}|&=&0.2251,\quad |V_{cs}|=0.9735,\quad |V_{cb}|=0.0412,\nonumber\\ |V_{td}|&=&0.0086,\quad |V_{ts}|=0.0404,\quad |V_{tb}|=0.9991,\nonumber\\ \beta&=&(21.58^{+0.91}_{-0.81})^\circ,\quad \gamma=(67.8_{-3.9}^{+4.2})^\circ. \end{eqnarray} \item Quark masses: When calculating the decay amplitudes, both the pole and the current quark masses will be used. For the former, we will use $$m_u=4.2{\rm MeV},\ \ m_c=1.5{\rm GeV},\ \ m_t=175{\rm GeV},$$ $$m_d=7.6{\rm MeV},\ \ m_s=0.122{\rm GeV},\ \ m_b=4.62{\rm GeV}.$$ The current quark masses depend on the renormalization scale.
In the $\overline{MS}$ scheme and at a scale of 2 GeV, we fix $$\overline {m }_u (2{\rm GeV})=2.4{\rm MeV}, \ \ \overline {m}_d (2{\rm GeV})=6{\rm MeV}, $$ $$ \overline {m}_s (2{\rm GeV}) = 105{\rm MeV}, \ \ \overline {m}_b (\overline {m}_b ) = 4.26{\rm GeV},$$ and then employ the formula of Ref.~\cite{Buchalla:1995vs}, \begin{eqnarray} \overline m (\mu ) = \overline m (\mu _0 )\left [\frac{{\alpha _s (\mu )}}{{\alpha _s (\mu _0 )}} \right]^{\frac{{\gamma _m^{(0)} }}{{2\beta _0 }}} \left [1 + \left ( \frac{{\gamma _m^{(1)} }}{{2\beta _0 }} - \frac{{\beta _1 \gamma _m^{(0)} }}{{2\beta _0^2 }} \right ) \frac{{\alpha _s (\mu ) - \alpha _s (\mu _0 )}}{{4\pi }} \right ], \end{eqnarray} to obtain the current quark masses at any scale. The definitions of $\alpha_s$, $\gamma _m^{(0)}$, $\gamma _m^{(1)}$, $\beta_0$, and $\beta_1$ can be found in Ref.~\cite{Buchalla:1995vs}. \item Decay constants: The decay constants of $D^*_q$ mesons have not been directly measured in experiments so far. In the heavy-quark limit $(m_c \to\infty )$, spin symmetry predicts that $f_{D^*_{q}}=f_{D_{q}}$, and most theoretical predictions indicate that symmetry-breaking corrections enhance the ratio $f_{D^*_{q}}/f_{D_{q}}$ by $10\% - 20\%$ \cite{Neubert:1993mb,Neubert:1996qg}. In this paper, we will take $f_{D}=0.201 \pm0.017{\rm GeV}$, $f_{D_{s}}=0.249\pm0.016{\rm GeV}$ and $f_{D^*_{q}}=f_{D_{q}}$ as our input values. \item Distribution amplitudes: The distribution amplitudes of $D^{(*)}_q$ mesons are less constrained, and we use the shape parameters $a_{D^{(*)}}=0.7\pm0.2$ and $a_{D^{(*)}_s}=0.3\pm0.2$. \item Form factors: For the form factors involving $B\to D^{(*)}$ transitions, we take expressions which include perturbative QCD corrections induced by hard gluon vertex corrections of $b\to c$ transitions and power corrections in orders of $1/m_{b,c}$ \cite{Neubert:1991xw,Neubert:1992tg}.
As for the Isgur-Wise function $\xi(\omega)$, we use the fit result $\xi(\omega)=1-1.22(\omega-1)+0.85(\omega-1)^2$ from Ref.~\cite{Cheng:2003sm}. \item Masses and lifetimes: For the $B$ and $D$ meson masses and lifetimes, we use the following input parameters \cite{pdg2008}: \begin{eqnarray} m_{_{B_u}}&=&5.279{\rm GeV},\;\;\;m_{_{B_d}}=5.280{\rm GeV},\;\;\;m_{_{B_s}}=5.366{\rm GeV},\nonumber\\ M_{D^{0}}&=&1.865{\rm GeV}, \;\;\; M_{D^{+}}=1.870{\rm GeV}, \;\;\; M_{D^{+}_s}=1.969{\rm GeV}, \nonumber\\ M_{D^{*0}}&=&2.007{\rm GeV}, \;\;\; M_{D^{*+}}=2.010{\rm GeV}, \;\;\; M_{D^{*+}_s}=2.107{\rm GeV}, \nonumber\\ \tau_{_{B_u}}&=&(1.638){\rm ps}, \;\;\; \tau_{_{B_d}}=(1.530){\rm ps},\nonumber\\ \tau_{_{B_s}}&=&(1.425^{+0.041}_{-0.041}){\rm ps}. \end{eqnarray} \end{itemize} Using the input parameters given above, we now present the numerical results and some theoretical analysis for the double-charm $B_{u,d}$ and $B_{s}$ decay processes. \subsection{Data and theoretical predictions} \subsubsection{$b\to c\bar{c} d$ decays } In the SM, the $\bar{B}^0_{d}\to D^{^{(*)+}}D^{^{(*)-}}$, $B^-_{u}\to D^{^{(*)0}}D^{^{(*)-}}$ and $\bar{B}^0_{s}\to D^{^{(*)+}}_sD^{^{(*)-}}$ decays are dominated by the tree $b\to c\bar{c}d$ transition, and receive additional $b\to c\bar{c} d$ penguin diagram contributions. In Table \ref{Table:btoccdBRFL}, we show the theoretical predictions for the $CP$-averaged branching ratios and the polarization fractions in the SM and the mSUGRA model. The weighted averages of the relevant experimental data \cite{pdg2008} are given in the last column of both Table \ref{Table:btoccdBRFL} and Table \ref{Table:btoccdACP}. Entries marked with one star in the top right corner denote BaBar measurements only, while those with two stars are Belle measurements only. The central values of the theoretical predictions are obtained at the scale $\mu=m_{b}$, while the two errors are induced by the uncertainties of $f_D=0.201 \pm 0.017{\rm GeV}$ and $\gamma=67.8^{\circ}\pm 20^{\circ}$.
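The NLO running-mass formula quoted above can be turned into a small numerical routine. The Python sketch below assumes $n_f=5$ with the standard coefficients $\beta_0=23/3$, $\beta_1=116/3$, $\gamma_m^{(0)}=8$, $\gamma_m^{(1)}=1012/9$, and a two-loop $\alpha_s$ with an illustrative $\Lambda^{(5)}_{\overline{MS}}=0.225$ GeV; the value of $\Lambda$ is an assumption of this sketch, not taken from the text:

```python
import math

# n_f = 5 coefficients in the standard conventions:
B0, B1 = 23.0 / 3.0, 116.0 / 3.0      # beta_0, beta_1
G0, G1 = 8.0, 1012.0 / 9.0            # gamma_m^(0), gamma_m^(1)
LAM5 = 0.225                          # illustrative Lambda_MSbar^(5) in GeV (assumption)

def alpha_s(mu):
    """Two-loop strong coupling for n_f = 5."""
    L = math.log(mu**2 / LAM5**2)
    return 4.0 * math.pi / (B0 * L) * (1.0 - (B1 / B0**2) * math.log(L) / L)

def run_mass(m0, mu0, mu):
    """NLO running of an MSbar mass from scale mu0 to mu."""
    a, a0 = alpha_s(mu), alpha_s(mu0)
    lo = (a / a0) ** (G0 / (2.0 * B0))
    nlo = 1.0 + (G1 / (2.0 * B0) - B1 * G0 / (2.0 * B0**2)) * (a - a0) / (4.0 * math.pi)
    return m0 * lo * nlo

# e.g. run_mass(4.26, 4.26, 2 * 4.62) gives mbar_b at the scale 2 m_b.
```

As expected, the mass decreases toward higher scales and is left unchanged for $\mu=\mu_0$.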
From the numerical results and the data as given in Table \ref{Table:btoccdBRFL}, we have the following remarks on the branching ratios and the polarization fractions of the $b\to c\bar{c}d$ double-charm decays: \begin{enumerate} \item[]{(i)} The SUSY contributions to the branching ratios of the considered decays are indeed very small, less than $5\% $, which is consistent with the general expectation, since these decays are all ``tree''-dominated processes. \item[]{(ii)} The theoretical predictions for the Br's in both the SM and the mSUGRA model are consistent with the experimental measurements within $\pm 2\sigma$ errors. The central value of the theoretical prediction for $Br(\bar{B}^0_{d}\to D^{+}D^{-})$ ($Br(B^-_{u}\to D^{*0}D^-)$) is, however, much larger (smaller) than that of the corresponding measurement. This point will be clarified by the forthcoming LHC experiments. \item[]{(iii)} The SUSY contributions to the polarization fractions of these decays in the mSUGRA model are very small, less than $2\%$, and can be neglected safely. Only the central values are presented here, since they are not sensitive to the variations of the form factors and the weak phase $\gamma$, as can be seen from the definition of the polarization fractions. \end{enumerate} \begin{table}[thb] \caption{Theoretical predictions for CP-averaged branching ratios (in units of $10^{-4}$) and polarization fractions (in percent) for $b\to c\bar{c}d$ decays in the SM and the mSUGRA model.
The last column shows currently available data \cite{pdg2008}.} \label{Table:btoccdBRFL} \begin{center} \begin{tabular} {l|c|cc|c} \hline \hline {Observables} & \multicolumn{1}{|c|}{SM}& \multicolumn{2}{|c|}{mSUGRA} & \multicolumn{1}{|c} {Data} \\ \cline{3-4} \ \ & &(A) &(B) & \\ \hline $\mathcal{B}(\bar{B}^0_d\to D^+D^-)$&$3.26^{+0.57 +0.10}_{-0.53 -0.12}$&$3.27^{+0.58 +0.10}_{-0.53 -0.11}$&$3.15^{+0.55 +0.08}_{-0.51 -0.13}$&$2.1\pm0.3$ \\ $\mathcal{B}(\bar{B}^0_d\to D^{*\pm}D^\mp)$&$5.92^{+1.05 +0.01}_{-0.95 -0.01}$&$5.93^{+1.04 +0.01}_{-0.96 -0.01}$&$5.91^{+1.04 +0.01}_{-0.96 -0.01}$&$6.1\pm1.5$ \\ $\mathcal{B}(\bar{B}^0_d\to D^{*+}D^{*-})$&$7.24^{+1.28 +0.06}_{-1.17 -0.06}$&$7.25^{+1.28 +0.06}_{-1.17 -0.06}$&$7.19^{+1.26 +0.06}_{-1.17 -0.06}$&$8.2\pm0.9$ \\\hline $\mathcal{B}(B^-_u\to D^{0}D^{-})$&$3.48^{+0.61 +0.11}_{-1.20 -0.78}$&$3.50^{+0.62 +0.10}_{-0.57 -0.12}$&$3.37^{+0.59 +0.11}_{-0.55 -0.14}$&$3.8\pm0.4$ \\ $\mathcal{B}(B^-_u\to D^{*0}D^{-})$&$3.43^{+0.60 +0.03}_{-0.51 -0.07}$&$3.43^{+0.60 +0.02}_{-0.56 -0.02}$&$3.44^{+0.61 +0.03}_{-0.55 -0.02}$&$6.3\pm1.4\pm1.0^{*}$ \\ $\mathcal{B}(B^-_u\to D^{0}D^{*-})$&$2.92^{+0.51 +0.02}_{-0.15 -0.03}$&$2.92^{+0.52 +0.02}_{-0.47 -0.02}$&$2.89^{+0.51 +0.03}_{-0.47 -0.03}$& $3.9\pm0.5$ \\ $\mathcal{B}(B^-_u\to D^{*0}D^{*-})$&$7.75^{+1.36 +0.05}_{-1.16 -0.07}$&$7.76^{+1.36 +0.06}_{-1.26 -0.07}$&$7.68^{+1.36 +0.07}_{-1.22 -0.07}$& $8.1\pm1.2\pm1.2^{*}$ \\\hline $\mathcal{B}(\bar{B}^0_s\to D^+_sD^-)$&$3.22^{+0.51 +0.10}_{-0.52 -0.11}$&$3.24^{+0.57 +0.09}_{-0.53 -0.12}$&$3.11^{+0.55 +0.11}_{-0.50 -0.13}$&$-$ \\ $\mathcal{B}(\bar{B}^0_s\to D^{*+}_sD^-)$&$3.13^{+0.55 +0.02}_{-0.51 -0.02}$&$3.13^{+0.55 +0.02}_{-0.51 -0.02}$&$3.14^{+0.55 +0.02}_{-0.51 -0.02}$&$-$ \\ $\mathcal{B}(\bar{B}^0_s\to D^+_sD^{*-})$&$2.67^{+0.48 +0.03}_{-0.43 -0.02}$&$2.68^{+0.47 +0.02}_{-0.44 -0.03}$&$2.65^{+0.47 +0.02}_{-0.45 -0.03}$&$-$ \\ $\mathcal{B}(\bar{B}^0_s\to D^{*+}_sD^{*-})$&$7.12^{+1.26 +0.07}_{-1.15 -0.06}$&$7.13^{+1.26 +0.06}_{-1.15 
-0.06}$&$7.07^{+1.24 +0.06}_{-1.15 -0.08}$&$-$ \\\hline $f_L(\bar{B}^0_d\to D^{*+}D^{*-})$&$53.86$&$53.87$&$53.79$&$57.0\pm8.0\pm2.0^{**}$ \\ $f_L(B^-_u\to D^{*0}D^{*-})$&$53.88$&$53.89$&$53.81$&$-$ \\ $f_L(\bar{B}^0_s\to D^{*+}_sD^{*-})$&$53.88$&$53.89$&$53.81$&$-$ \\\hline $f_\perp(\bar{B}^0_d\to D^{*+}D^{*-})$&$5.51$&$5.50$&$5.51$&$15.0\pm2.5$ \\ $f_\perp(B^-_u\to D^{*0}D^{*-})$&$5.52$&$5.52$&$5.53$&$-$ \\ $f_\perp(\bar{B}^0_s\to D^{*+}_sD^{*-})$&$5.20$&$5.20$&$5.21$&$-$ \\\hline \hline \end{tabular} \end{center} \end{table} \begin{table}[thb] \caption{\small Theoretical predictions of CPAs (in percent) for the exclusive color-allowed $b\to c\bar{c}d$ decays. The last column shows the world averages \cite{pdg2008}.} \label{Table:btoccdACP} \begin{center} \begin{tabular} {l|c|cc|c} \hline \hline {Observables} & \multicolumn{1}{|c|}{SM}& \multicolumn{2}{|c|}{mSUGRA} & \multicolumn{1}{|c} {Data} \\ \cline{3-4} \ \ & &(A) &(B) & \\ \hline $\mathcal{S}(B^0_d,\bar{B}^0_d\to D^+D^-)$&$-75.3^{+1.4 +1.4}_{-1.5 -0.6}$&$-75.1^{+1.3 +1.3}_{-1.3 -0.6}$&$-76.3^{+1.3 +1.6}_{-1.2 -0.7}$&$-87\pm26$ \\ $\mathcal{S}(B^0_d,\bar{B}^0_d\to D^{*+}D^-)$&$-68.4^{+0.2 +0.3}_{-0.3 -0.2}$&$-68.4^{+0.2 +0.3}_{-0.3 -0.2}$&$-68.5^{+0.2 +0.3}_{-0.3 -0.2}$&$-61\pm19$ \\ $\mathcal{S}(B^0_d,\bar{B}^0_d\to D^+D^{*-})$&$-68.4^{+0.1 +0.2}_{-0.4 -0.2}$&$-68.4^{+0.1 +0.2}_{-0.4 -0.2}$&$-68.5^{+0.2 +0.2}_{-0.4 -0.2}$&$-78\pm21$ \\ $\mathcal{S}^+(B^0_d,\bar{B}^0_d\to D^{*+}D^{*-})$&$-70.2^{+0.4 +0.4}_{-0.6 -0.1}$&$-70.1^{+0.5 +0.3}_{-0.6 -0.2}$&$-70.4^{+0.4 +0.4}_{-0.7 -0.2}$&$-81\pm14$ \\\hline $\mathcal{C}(B^0_d,\bar{B}^0_d\to D^+D^-)$&$-4.4^{+0.3 +1.0}_{-0.4 -0.5}$&$-4.4^{+0.3 +1.0}_{-0.4 -0.5}$&$-4.5^{+0.3 +1.0}_{-0.4 -0.6}$&$-48\pm42$ \\ $\mathcal{C}(B^0_d,\bar{B}^0_d\to D^{*+}D^-)$&$7.8^{+0.3 +0.7}_{-0.6 -0.6}$&$7.7^{+0.3 +0.7}_{-0.6 -0.6}$&$8.3^{+0.3 +0.8}_{-0.6 -0.7}$&$-9\pm22$ \\ $\mathcal{C}(B^0_d,\bar{B}^0_d\to D^+D^{*-})$&$-8.4^{+1.1 +0.7}_{-1.1 -0.8}$&$-8.3^{+1.1 +0.7}_{-1.1
-0.8}$&$-8.9^{+1.0 +0.8}_{-1.0 -0.9}$&$7\pm14$ \\ $\mathcal{C}^+(B^0_d,\bar{B}^0_d\to D^{*+}D^{*-})$&$-1.2^{+0.2 +0.2}_{-0.4 -0.1}$&$-1.2^{+0.2 +0.2}_{-0.4 -0.1}$&$-1.2^{+0.2 +0.2}_{-0.4 -0.1}$&$-7\pm 9$ \\\hline $\mathcal{A}^{\rm dir}_{\rm CP}(B^-_u\to D^0D^-)$&$4.4^{+0.4 +1.0}_{-0.3 -0.2}$&$4.4^{+0.4 +0.5}_{-0.3 -1.0}$&$4.5^{+0.4 +0.6}_{-0.3 -1.0}$&$-3\pm7$ \\ $\mathcal{A}^{\rm dir}_{\rm CP}(B^-_u\to D^{*0}D^-)$&$-0.6^{+0.4 +0.1}_{-0.2 -0.1}$&$-0.6^{+0.4 +0.1}_{-0.2 -0.1}$&$-0.6^{+0.4 +0.1}_{-0.2 -0.1}$&$13\pm18\pm4^{*}$ \\ $\mathcal{A}^{\rm dir}_{\rm CP}(B^-_u\to D^0D^{*-})$&$1.2^{+0.4 +0.1}_{-0.2 -0.2}$&$1.2^{+0.4 +0.1}_{-0.2 -0.2}$&$1.2^{+0.4 +0.1}_{-0.2 -0.2}$&$3\pm10$ \\ $\mathcal{A}^{+,dir}_{CP}(B^-_u\to D^{*0}D^{*-})$&$1.2^{+0.4 +0.1}_{-0.2 -0.2}$&$1.2^{+0.4 +0.1}_{-0.2 -0.2}$&$1.2^{+0.4 +0.1}_{-0.2 -0.2}$&$-15\pm11\pm2^{*}$ \\\hline $\mathcal{A}^{\rm dir}_{\rm CP}(\bar{B}^0_s\to D^+_sD^-)$&$4.4^{+0.4 +0.5}_{-0.3 -1.0}$&$4.4^{+0.4 +0.5}_{-0.3 -0.6}$&$4.5^{+0.4 +0.6}_{-0.3 -1.0}$&$-$ \\ $\mathcal{A}^{\rm dir}_{\rm CP}(\bar{B}^0_s\to D^{*+}_sD^-)$&$-0.6^{+0.4 +0.1}_{-0.2 -0.2}$&$-0.6^{+0.4 +0.1}_{-0.2 -0.1}$&$-0.6^{+0.4 +0.1}_{-0.2 -0.1}$&$-$ \\ $\mathcal{A}^{\rm dir}_{\rm CP}(\bar{B}^0_s\to D^+_sD^{*-})$&$1.2^{+0.4 +0.1}_{-0.2 -0.2}$&$1.2^{+0.4 +0.1}_{-0.2 -0.2}$&$1.2^{+0.4 +0.1}_{-0.2 -0.2}$&$-$ \\ $\mathcal{A}^{+,{\rm dir}}_{\rm CP}(\bar{B}^0_s\to D^{*+}_sD^{*-})$&$1.2^{+0.4 +0.1}_{-0.2 -0.2}$&$1.2^{+0.4 +0.1}_{-0.2 -0.2}$&$1.2^{+0.4 +0.1}_{-0.2 -0.2}$&$-$ \\\hline \hline \end{tabular} \end{center} \end{table} In Table \ref{Table:btoccdACP}, we present the theoretical predictions for the CPAs in the framework of the SM and the mSUGRA model. The currently available data are also listed in the last column. The uncertainties come from the scale $m_b/2 \leq \mu \leq 2m_b$ and the weak angle $\gamma=67.8^{\circ}\pm 20^{\circ}$. 
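For orientation, the CKM angle $\beta$ from the input list fixes the expected magnitude of the mixing-induced asymmetries; a one-line numerical check (Python):

```python
import math

beta_deg = 21.58                       # CKM angle beta from the input parameters
sin2beta = math.sin(math.radians(2.0 * beta_deg))
# sin(2*beta) is about 0.68, matching in magnitude the mixing-induced
# CPAs S of about -0.7 listed above for the b -> c cbar d modes.
```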
From the numerical results and the data, we find that \begin{enumerate} \item[]{(i)} Just as generally expected in the SM, the direct CPAs $C_f$ are indeed quite small, while the mixing-induced CPAs of all considered decays are close to $-0.7$, i.e. $S_f\approx -\sin(2\beta) \approx -0.7$. \item[]{(ii)} The SUSY contributions to all considered decays are less than $7\%$. The new physics contributions are not sensitive to the variation of the scale $\mu$ and the weak angle $\gamma$. \item[]{(iii)} The theoretical predictions in the SM and the mSUGRA model are all consistent with the experimental measurements within $\pm 1\sigma$ errors. Of course, the errors of the currently available data are still very large. \end{enumerate} \subsubsection{$b\to c\bar{c}s$ decays } The twelve decay modes $\bar{B}^0_{d}\to D^{^{(*)+}}D^{^{(*)-}}_s$, $B^-_{u}\to D^{^{(*)0}}D^{^{(*)-}}_s$ and $\bar{B}^0_{s}\to D^{^{(*)+}}_sD^{^{(*)-}}_s$ are tree-dominated processes, and also receive additional $b\to c\bar{c}s$ penguin contributions. \begin{table}[thb] \caption{\small Theoretical predictions for CP-averaged branching ratios (in units of $10^{-3}$) and polarization fractions (in units of $10^{-2}$) of exclusive color-allowed $b\to c\bar{c}s$ decays in the SM and the mSUGRA model.
The last column corresponds to the world averages \cite{pdg2008}.} \label{Table:btoccsBRFL} \begin{center} \begin{tabular} {l|c|cc|c} \hline \hline {Observables} & \multicolumn{1}{|c|}{SM}& \multicolumn{2}{|c|}{mSUGRA} & \multicolumn{1}{|c} {Data} \\ \cline{3-4} \ \ & &(A) &(B) & \\ \hline $\mathcal{B}(\bar{B}^0_d\to D^+D^-_s)$&$8.77^{+1.16 +0.02}_{-1.09 -0.02}$&$8.83^{+1.17 +0.02}_{-1.10 -0.02}$ &$8.39^{+1.11 +0.02}_{-1.36 -0.02}$&$7.4 \pm 0.7$\\ $\mathcal{B}(\bar{B}^0_d\to D^{*+}D^-_s)$&$8.78^{+1.16 +0.01}_{-1.10 -0.01}$ &$8.77^{+1.17 +0.01}_{-1.09 -0.01}$&$8.78^{+1.17 +0.01}_{-1.09 -0.01}$&$8.2\pm1.1$\\ $\mathcal{B}(\bar{B}^0_d\to D^+D^{*-}_s)$&$7.30^{+0.97 +0.01}_{-0.91 -0.01}$&$7.31^{+0.97 +0.01}_{-0.91 -0.01}$ &$7.22^{+0.96 +0.01}_{-0.90 -0.01}$&$7.5\pm1.6$\\ $\mathcal{B}(\bar{B}^0_d\to D^{*+}D^{*-}_s)$& $21.2^{+2.8}_{-2.6}\pm 0.0$& $21.2^{+2.8 }_{-2.6}\pm 0.0$& $20.9^{+2.8 }_{-2.6 }\pm 0.0$ &$17.8\pm1.4$\\\hline $\mathcal{B}(B^-_u\to D^{0}D^{-}_s)$&$9.38^{+1.24 +0.01}_{-1.17 -0.02}$&$9.44^{+1.25 +0.01}_{-1.18 -0.02}$ &$8.97^{+1.19 +0.02}_{-1.12 -0.02}$&$10.2\pm1.7$\\ $\mathcal{B}(B^-_u\to D^{*0}D^{-}_s)$&$9.40^{+1.24 +0.01}_{-1.17 -0.01}$&$9.39^{+1.25 +0.01}_{-1.17 -0.01}$ &$9.40^{+1.25 +0.01}_{-1.17 -0.01}$&$8.4\pm1.7$\\ $\mathcal{B}(B^-_u\to D^{0}D^{*-}_s)$&$7.82^{+1.04 +0.01}_{-0.97 -0.01}$&$7.83^{+1.04 +0.01}_{-0.97 -0.01}$ &$7.73^{+1.03 +0.01}_{-0.96 -0.01}$&$7.8\pm1.6$\\ $\mathcal{B}(B^-_u\to D^{*0}D^{*-}_s)$& $22.6^{+3.0 }_{-2.8}\pm 0.0 $& $22.7^{+3.0 }_{-2.8}\pm 0.0$& $22.4^{+3.0}_{-2.8}\pm 0.0 $ &$17.4\pm2.3$\\\hline $\mathcal{B}(\bar{B}^0_s\to D^+_sD^-_s)$&$8.68^{+1.15 +0.02}_{-1.08 -0.02}$&$8.73^{+1.16 +0.02}_{-1.08 -0.02}$ &$8.30^{+1.10 +0.02}_{-1.03 -0.02}$&$11\pm4$\\ $\mathcal{B}(\bar{B}^0_s\to D^{*+}_sD^-_s)$&$8.74^{+1.16 +0.01}_{-1.09 -0.01}$&$8.73^{+1.16 +0.01}_{-1.08 -0.01}$&$8.75^{+1.16 +0.01}_{-1.09 -0.01}$&$-$\\ $\mathcal{B}(\bar{B}^0_s\to D^+_sD^{*-}_s)$&$7.16^{+0.95 +0.01}_{-0.89 -0.01}$&$7.17^{+0.98 +0.01}_{-0.88 -0.01}$&$7.08^{+0.94 
+0.01}_{-0.88 -0.01}$&$<121$\\ $\mathcal{B}(\bar{B}^0_s\to D^{*+}_sD^{*-}_s)$& $20.8^{+2.8}_{-2.6 }\pm 0.0$& $20.8^{+2.8}_{-2.6}\pm 0.0$& $20.6^{+2.7}_{-2.6}\pm 0.0$& $<257$\\\hline $f_L(\bar{B}^0_d\to D^{*+}D^{*-}_s)$&$51.68$&$51.70$&$51.58$&$52\pm5$\\ $f_L(B^-_u\to D^{*0}D^{*-}_s)$&$51.70$&$51.72$&$51.61$&$-$\\ $f_L(\bar{B}^0_s\to D^{*+}_sD^{*-}_s)$&$51.70$&$51.71$&$51.60$&$-$\\\hline $f_\perp(\bar{B}^0_d\to D^{*+}D^{*-}_s)$&$5.50$&$5.50$&$5.51$&$-$\\ $f_\perp(B^-_u\to D^{*0}D^{*-}_s)$&$5.51$&$5.51$&$5.52$&$-$\\ $f_\perp(\bar{B}^0_s\to D^{*+}_sD^{*-}_s)$&$5.19$&$5.18$&$5.20$&$-$\\ \hline \hline \end{tabular} \end{center} \end{table} \begin{table}[thb] \caption{\small Theoretical predictions for CPAs (in percent) of exclusive color-allowed $b\to c \bar c s$ decays in the SM and the mSUGRA model.}\label{Table:btoccsACP} \begin{center} \begin{tabular} {l|c|cc|c} \hline \hline {Observables} & \multicolumn{1}{|c|}{SM}& \multicolumn{2}{|c|}{mSUGRA} & \multicolumn{1}{|c} {Data} \\ \cline{3-4} \ \ & &(A) &(B) & \\ \hline $\mathcal{A}^{\rm dir}_{\rm CP}(\bar{B}^0_d\to D^+D^-_s)$&$-0.26^{+0.02 +0.05}_{-0.03 -0.02}$&$-0.26^{+0.02 +0.05}_{-0.01 -0.02}$&$-0.27^{+0.02 +0.06}_{-0.02 -0.02}$&$-$\\ $\mathcal{A}^{\rm dir}_{\rm CP}(\bar{B}^0_d\to D^{*+}D^-_s)$&$0.03^{+0.02 +0.01}_{-0.02 -0.01}$&$0.03^{+0.02 +0.01}_{-0.02 -0.01}$&$0.03^{+0.02 +0.01}_{-0.02 -0.01}$&$-$\\ $\mathcal{A}^{\rm dir}_{\rm CP}(\bar{B}^0_d\to D^+D^{*-}_s)$&$-0.07^{+0.02 +0.02}_{-0.02 -0.01}$&$-0.07^{+0.02 +0.02}_{-0.01 -0.01}$&$-0.07^{+0.02 +0.02}_{-0.02 -0.01}$&$-$\\ $\mathcal{A}^{+,{\rm dir}}_{\rm CP}(\bar{B}^0_d\to D^{*+}D^{*-}_s)$&$-0.07^{+0.02 +0.02}_{-0.02 -0.01}$&$-0.07^{+0.02 +0.02}_{-0.01 -0.01}$&$-0.07^{+0.02 +0.02}_{-0.02 -0.01}$&$-$\\\hline $\mathcal{A}^{\rm dir}_{\rm CP}(B^-_u\to D^0D^-_s)$&$-0.26^{+0.02 +0.05}_{-0.03 -0.02}$&$-0.26^{+0.02 +0.05}_{-0.01 -0.02}$&$-0.27^{+0.02 +0.06}_{-0.02 -0.02}$&$-$\\ $\mathcal{A}^{\rm dir}_{\rm CP}(B^-_u\to D^{*0}D^-_s)$&$0.03^{+0.02 +0.01}_{-0.02 
-0.01}$&$0.03^{+0.02 +0.01}_{-0.02 -0.01}$&$0.03^{+0.02 +0.01}_{-0.02 -0.01}$&$-$\\ $\mathcal{A}^{\rm dir}_{\rm CP}(B^-_u\to D^0D^{*-}_s)$&$-0.07^{+0.02 +0.02}_{-0.02 -0.01}$&$-0.07^{+0.02 +0.02}_{-0.01 -0.01}$&$-0.07^{+0.02 +0.02}_{-0.02 -0.01}$&$-$\\ $\mathcal{A}^{+,{\rm dir}}_{\rm CP}(B^-_u\to D^{*0}D^{*-}_s)$&$-0.07^{+0.02 +0.02}_{-0.02 -0.01}$&$-0.07^{+0.02 +0.02}_{-0.01 -0.01}$&$-0.07^{+0.02 +0.02}_{-0.02 -0.01}$&$-$\\\hline $\mathcal{S}(B^0_s,\bar{B}^0_s\to D^+_sD^-_s)$&$0.53^{+0.11 +0.11}_{-0.12 -0.12}$&$0.51^{+0.06 +0.04}_{-0.11 -0.10}$&$0.62^{+0.11 +0.06}_{-0.11 -0.12}$&$-$\\ $\mathcal{S}(B^0_s,\bar{B}^0_s\to D^{*+}_sD^-_s)$&$0.93^{+0.02 +0.02}_{-0.01 -0.02}$&$0.93^{+0.01 +0.02}_{-0.06 -0.02}$&$0.94^{+0.02 +0.02}_{-0.01 -0.02}$&$-$\\ $\mathcal{S}(B^0_s,\bar{B}^0_s\to D^+_sD^{*-}_s)$&$-0.94^{+0.03 +0.02}_{-0.01 -0.01}$&$-0.94^{+0.10 +0.02}_{-0.01 -0.02}$&$-0.93^{+0.02 +0.02}_{-0.03 -0.02}$&$-$\\ $\mathcal{S}^+(B^0_s,\bar{B}^0_s\to D^{*+}_sD^{*-}_s)$&$0.13^{+0.04 +0.01}_{-0.04 -0.03}$&$0.12^{+0.03 +0.01}_{-0.03 -0.02}$&$0.14^{+0.05 +0.02}_{-0.03 -0.02}$&$-$\\\hline $\mathcal{C}(B^0_s,\bar{B}^0_s\to D^+_sD^-_s)$&$0.26^{+0.03 +0.05}_{-0.02 -0.02}$&$0.26^{+0.01 +0.02}_{-0.02 -0.05}$&$0.27^{+0.02 +0.02}_{-0.02 -0.06}$&$-$\\ $\mathcal{C}(B^0_s,\bar{B}^0_s\to D^{*+}_sD^-_s)$&$9.91^{+0.91 +0.05}_{-1.14 -0.04}$&$9.82^{+0.21 +0.04}_{-1.15 -0.05}$&$10.52^{+0.89 +0.05}_{-1.12 -0.05}$&$-$\\ $\mathcal{C}(B^0_s,\bar{B}^0_s\to D^+_sD^{*-}_s)$&$-9.93^{+1.16 +0.01}_{-0.95 -0.04}$&$-9.84^{+1.18 +0.05}_{-0.25 -0.03}$&$-10.54^{+1.14 +0.05}_{-0.93 -0.05}$&$-$\\ $\mathcal{C}^+(B^0_s,\bar{B}^0_s\to D^{*+}_sD^{*-}_s)$&$0.07^{+0.01 +0.01}_{-0.02 -0.02}$&$0.07^{+0.02 +0.01}_{-0.01 -0.02}$&$0.07^{+0.01 +0.01}_{-0.02 -0.02}$&$-$\\\hline\hline \end{tabular} \end{center} \end{table} In Table \ref{Table:btoccsBRFL}, we present the theoretical predictions for the CP-averaged branching ratios and the polarization fractions in the framework of the SM and the mSUGRA model. 
The last column in Table \ref{Table:btoccsBRFL} corresponds to the world averages \cite{pdg2008}. The theoretical predictions for the CP asymmetries of the considered decays are given in Table \ref{Table:btoccsACP}, although they have not been measured yet. The central values of the theoretical predictions are obtained at the scale $\mu=m_{b}$, while the two errors are induced by the uncertainties of $f_D=0.201 \pm 0.017{\rm GeV}$ and $\gamma=67.8^{\circ}\pm 20^{\circ}$. From the numerical results and the currently available data, one can see that \begin{itemize} \item[]{(i)} For the Br's and CPAs, the SUSY contributions are again very small for all considered decays, less than $3\%$ numerically. The theoretical predictions in both the SM and the mSUGRA model are all consistent with the currently available data within one or two standard deviations. \item[]{(ii)} The direct CP violations $\mathcal{C}(B^0_s\to D^{*+}_sD^{-}_s)$ and $\mathcal{C}(B^0_s\to D^{+}_sD^{*-}_s)$ are at the $\pm 10\%$ level and can be tested by the LHC experiments. The CPAs of the remaining ten decays are very small, about $10^{-3}$ or $10^{-4}$ numerically, since the penguin effects are doubly Cabibbo-suppressed for the color-allowed $b\to c\bar{c}s$ decays. \end{itemize} \section{Summary} In this paper, we have investigated the new physics contributions to the branching ratios, polarization fractions and CP asymmetries of the twenty-three double-charm decays $B/B_s \to D^{(*)}_{(s)} D^{(*)}_{(s)}$ in the SM and the mSUGRA model by employing the effective Hamiltonian for $\Delta B=1$ transitions and the naive factorization approach. From the numerical results and the phenomenological analysis, the following conclusions can be reached: \begin{enumerate} \item[]{(i)} For the exclusive double-charm decays $B/B_s \to D^{(*)}_{(s)} D^{(*)}_{(s)}$ studied in this paper, the SUSY contributions in the mSUGRA model are very small, less than $7\%$ numerically.
It may be difficult to observe such small SUSY contributions even at the LHC. \item[]{(ii)} All the theoretical predictions in the SM and the mSUGRA model are consistent with the experimental measurements within $\pm 2\sigma$ errors. \item[]{(iii)} The theoretical predictions in both the SM and the mSUGRA model still have large theoretical uncertainties. The dominant errors are induced by the uncertainties of the decay constants $f_{D}$ and $f_{D_s}$. \end{enumerate} \begin{acknowledgments} We are grateful to Wen-juan Zou for valuable help. This work is partially supported by the National Natural Science Foundation of China under Grant No.~10947020, by the Foundation of the Henan Educational Committee for Youth Backbone Scholars in Colleges and Universities, and by the Natural Science Foundation of the Education Department of Henan Province under Grant No.~2010A140012. \end{acknowledgments}
\section{Introduction and main result} Let $P$ be a $d$-dimensional polytope with $n$ vertices and let $f_i(P)$ be the number of $i$-dimensional faces of $P$. It is known that $f_j(P) \leq \left(\begin{smallmatrix} n\\ j+1 \end{smallmatrix}\right)$ for $j \leq \lfloor \frac{d}{2}\rfloor$ and equality holds when $P$ is the cyclic polytope. However, the situation for centrally symmetric polytopes is different. For example, the largest number of edges, $\textrm{fmax}(d,n;1)$, that a $d$-dimensional centrally symmetric polytope on $n$ vertices can have is unknown even for $d=4$. In [BN08], for fixed even dimension $d=2k$ and an integer $1\leq j <k$, Barvinok and Novik proved that $\textrm{fmax}(d,n;j)$, the maximum number of $j$-dimensional faces of a centrally symmetric $d$-dimensional polytope with $n$ vertices, is at least $(c_j(d)+o(1)) \left(\begin{smallmatrix} n\\ j+1 \end{smallmatrix}\right)$ for some $c_j(d)>0$ and at most $(1-2^{-d}+o(1)) \left(\begin{smallmatrix} n\\ j+1 \end{smallmatrix}\right)$ as $n$ grows. The authors also proved that $c_1(d)\geq 1-\frac{1}{d-1}$ and $c_j(d)>0$ for any $j\leq k-1$. To get a lower bound, one needs a centrally symmetric analog of cyclic polytopes, the bicyclic polytopes. \bigskip As in [BN08], we consider the convex hull of the \emph{symmetric moment curve} \\ \[ SM_{2k}(t)=(\cos t, \sin t, \cos 3t, \sin 3t, \ldots, \cos(2k-1)t, \sin(2k-1)t).\] This curve is centrally symmetric. We define the Barvinok-Novik orbitope \[{\cal{B}}_{2k} = \textrm{conv}(SM_{2k}(t): 0 \leq t \leq 2\pi ).\] In [BN08], it is proven that ${\cal B}_{2k}$ is locally $k$-neighborly. \begin{thm} For every positive integer $ k $ there exists a number $\psi_k > 0$ such that if $t_1,\ldots,t_k \in S^1$ are distinct points that lie on an arc of length less than $\psi_k$, then \\ \centerline{\textrm{conv}$(SM_{2k}(t_1), \ldots , SM_{2k}(t_{k}))$} is a $(k-1)$-dimensional face of ${\cal B} _{2k}$.
\end{thm} In this paper, a \emph{face} of a convex body is an exposed face, i.e., the intersection of the body with a supporting hyperplane. Let $\phi_k$ be the supremum of all possible values of $\psi_k$ in Theorem 1.1. Then $\phi_k$ also satisfies Theorem 1.1, because if $t_1,\ldots,t_k$ are distinct points in $\mathbb{S}^1$ lying on an arc of length less than $\phi_k$, then they also lie on an arc of length less than some $\psi$ satisfying Theorem 1.1. \bigskip The goal of this paper is to find a lower bound for $\phi_k$. \begin{thm} Let $\phi_k$ be the supremum of all possible values of $\psi_k$ in Theorem 1.1. Then we have $\phi_k>\sqrt{6}k^{-3/2}$. \end{thm} The idea of the proof of Theorem 1.2 is as follows. First, we find a lower bound for the distance between the origin and the boundary of the Barvinok-Novik orbitope by studying the minimum volume ellipsoid of the orbitope. Then we show that for any $k$ points lying on an arc of length less than $\sqrt{6}k^{-3/2}$, there is no new intersection point between the Barvinok-Novik orbitope and the affine hyperplane that is tangent to the symmetric moment curve at these points. More precisely, if the arc length is too small, then there is no new point on the opposite arc, because the line segment joining such a point and one of the $k$ points would pass through the interior of the Barvinok-Novik orbitope. However, such a new point may appear only on the opposite arc, as shown in [BN08], because the hyperplane is a supporting hyperplane of ${\cal B}_{2k}$.\\ In Section 2 we discuss the minimum volume ellipsoid of the Barvinok-Novik orbitope and find a lower bound for the distance between the origin and the boundary of the orbitope. In Section 3 we prove Theorem 1.2. \section{The minimum volume ellipsoid of the Barvinok-Novik orbitope} \smallskip In this section, we prove that ${\cal{B}}_{2k}$ contains the sphere of radius $\frac{1}{\sqrt{2}}$ centered at the origin.
To prove this result, we need the notion of the minimum volume ellipsoid and two theorems related to it. \begin{definition} Given a convex body (a compact convex set with non-empty interior) $B \subset \mathbb{R}^d$, there is a unique ellipsoid $E_{\min} \supset B$ of minimum volume, called the minimum volume ellipsoid of $B$. (See, for example, $[\textrm{B}97]$.) \end{definition} \begin{thm} $[\textrm{BB} 05]$ Let $G$ be a compact group acting on the Euclidean space $V$ with a $G$-invariant inner product $\langle\cdot,\cdot\rangle$, and let $v$ be a nonzero vector in $V$. Let $B$ be the convex hull of the orbit of the vector $v\in V$: \[ B= \textrm{conv}(gv:g\in G).\] Suppose that the affine hull of $B$ is $V$. Then there exists a decomposition \[V=\bigoplus _i V_i\] of $V$ into the direct sum of pairwise orthogonal irreducible components such that the following holds.\\ The minimum volume ellipsoid $E_{\min}$ of $B$ is defined by the inequality \[E_{\min} =\left\{x :\quad \sum_i \frac{\dim V_i}{\dim V}\cdot \frac{\langle x_i,x_i\rangle }{\langle v_i,v_i \rangle} \leq 1\right\},\] where $x_i$ (resp. $v_i$) is the orthogonal projection of $x$ (resp. $v$) onto $V_i$. \end{thm} \begin{thm} If a convex body $B$ is symmetric about the origin, then $(\dim B)^{-1/2} E_{\min} \subset B \subset E_{\min}$. (See, for example, $[\textrm{B}97]$.) \end{thm} \bigskip In our situation, we have the following corollary. \begin{cor} The orbitope ${\cal B}_{2k}$ contains the sphere of radius $\frac{1}{\sqrt{2}}$ centered at the origin. \end{cor} \noindent{\bf Proof of Corollary 2.4}.
In our case, we have $V=\mathbb R^{2k}$, $v=(1,0,1,0,\ldots,1,0)$, $G=\mathbb{S}^1$, and the action of $G$ is given by the matrix \[ \left(\begin{matrix} \cos(t) & -\sin(t) & 0& 0& \cdots & 0 & 0 \\ \sin(t)&\cos(t) & 0& 0& \cdots & 0 & 0 \\ 0 & 0 & \cos(3t) &-\sin(3t) & \cdots & 0 & 0 \\ 0& 0 & \sin(3t) & \cos(3t) & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots &\vdots &\ddots & \vdots & \vdots \\ 0& 0 & 0& 0 &\cdots & \cos((2k-1)t) & -\sin((2k-1)t) \\ 0& 0 & 0& 0 &\cdots & \sin((2k-1)t) & \cos((2k-1)t) \end{matrix}\right).\] In particular, the decomposition $V=\bigoplus_i V_i$ is multiplicity-free, because the $V_i$'s are copies of $\mathbb R^2$ with the action of $G$ by multiplication by \[ \left(\begin{matrix} \cos((2j-1)t) & -\sin((2j-1)t)\\ \sin((2j-1)t)&\cos((2j-1)t) \end{matrix}\right),\] so the $V_i$'s are not isomorphic to each other. Since $v=(1,0,1,0,\ldots,1,0)$, we have $\langle v_i,v_i \rangle=1$ for any $i$. Therefore, by Theorem 2.2, $E_{\min}$ is the ball of radius $\sqrt{k}$ centered at the origin. By Theorem 2.3, ${\cal B}_{2k}$ contains the sphere of radius $\frac{1}{\sqrt{2}}$. $\qed$ \section{Proof of Theorem 1.2.} In this section, we prove that $\phi_k$ is bigger than $\sqrt{6} k^{-3/2}$. For simplicity, we write $x(t)$ instead of $SM_{2k}(t)$. \smallskip \begin{thm} Suppose that there exist a positive number $\psi$ less than $\pi$, positive even integers $m_i$ for $i=1,\ldots,l$ satisfying $\sum_{i=1}^{l}m_i=2k$, and $l$ points $t_1,\ldots,t_l$ on an arc of length $\psi$ such that the affine hyperplane $H$ tangent to $x(t)$ at each point $t_i$ with multiplicity $m_i$ is a supporting hyperplane of ${\cal B}_{2k}$ and intersects the opposite arc at another point $x(s)$. Then \[ |s-\pi-t_i| > \sqrt{3/2}k^{-3/2} \textrm{ for any } i.\] \end{thm} \smallskip {\bf Proof of Theorem 3.1}. Suppose that $-\frac{\psi}{2} \leq t_i \leq \frac{\psi}{2}$ for any $i$.
It is known that the new point $x(s)$ must lie in the opposite arc (see [BN08], Lemma 6.3), so we can assume that $\pi-\frac{\psi}{2} \leq s \leq \pi + \frac{\psi}{2}$. In this situation, we prove that the distance between $x(s)$ and the opposite point of any $x(t_i)$ cannot be too small. Since $x(s)$ and $x(t_i)$ are vertices of the face defined by the $x(t_i)$'s, the midpoint $\frac{x(s)+x(t_i)}{2}$ lies on the face. Therefore, we have $\big|\frac{x(s)+x(t_i)}{2}\big|^2 \geq \frac{1}{2}$ because ${\cal{B}}_{2k}$ contains a sphere of radius $\frac{1}{\sqrt{2}}$ centered at the origin. \\ If $|s-\pi-t_i|=\epsilon$, we have \[ \Big| \frac{x(s)+x(t_i)}{2}\Big|^2-\frac{1}{2}=-\frac{1}{2}+\frac{1}{4}\sum_{j=1}^k\left(\left(\cos((2j-1)s)+\cos((2j-1)t_i)\right)^2+\left(\sin((2j-1)s)+\sin((2j-1)t_i)\right)^2\right)\] \[=-\frac{1}{2} +\frac{1}{2}\sum_{j=1}^k\left( 1+ \cos((2j-1)(s-t_i))\right )=\frac{1}{2}\left( k-1 + \frac{\sin(2k(s-t_i))}{2\sin(s-t_i)} \right) \] \[=\frac{1}{2}\left( k-1 - \frac{\sin(2k\epsilon)}{2\sin\epsilon}\right ).\] We now use the following well-known inequalities: \[x-x^3/6<\sin(x)<x \textrm{ \hspace{0.5cm} for } x>0.\] Therefore, we have \[\frac{1}{2}\left( k-1 - \frac{\sin(2k\epsilon)}{2\sin\epsilon}\right )< \frac{1}{2}\left( k-1- \frac{2k\epsilon-8k^3\epsilon^3/6}{2\epsilon}\right )= -1/2 + k^3\epsilon^2/3\] for $\epsilon>0$. Since the left-hand side is greater than or equal to zero, we obtain $-1/2+k^3\epsilon^2/3>0$, that is, $\epsilon>\sqrt{3/2}\,k^{-3/2}$. \qed \smallskip We need one more lemma to prove Theorem 1.2. \\ \smallskip \begin{lem} Let $\Gamma \subset \mathbb{S}^1$ be an arc of length less than $\pi$.
Let $t_i\in \Gamma$, $i=1,\ldots,l$, be distinct points and let $m_i>1$, $i=1,\ldots,l$, be integers such that \[\sum_{i=1}^l m_i=2k.\] Then the following $2k$ vectors \[x(t_i)-x(t_l) \textrm{ for } i=1,\ldots, l-1,\] \[\frac{d^n}{dt^n} x(t)\Big|_{t=t_i} \textrm{ for } n=1,\ldots,m_i -1 \textrm{ and } i=1,\ldots,l, \] \[\frac{d^{m_1}}{dt^{m_1}} x(t)\Big|_{t=t_1}\] are linearly independent in $\mathbb{R}^{2k}$. In particular, there exists a unique affine hyperplane $H\subset \mathbb{R}^{2k}$ that is tangent to $x(t)$ at each point $t_i$ with multiplicity $m_i$. \end{lem} {\bf Proof.} Assume that the vectors are not linearly independent. Then there exists a non-zero vector $a\in \mathbb{R}^{2k}$ which is orthogonal to all of them. Let us define a trigonometric polynomial \[p(t)=\langle a, x(t)-x(t_l)\rangle.\] Then $p(t)$ is not identically zero, has zeros at $t_i$ with multiplicity $m_i$ for $i=2,\ldots, l$, and has a zero at $t_1$ with multiplicity $m_1+1$. Therefore the total number of roots of $p(t)$ on $\Gamma$, counting multiplicities, is at least $2k+1$. By Rolle's Theorem, the number of roots of the derivative $p'(t)$ on $\Gamma$ is at least $2k$, counting multiplicities. Moreover, since $x(t)$ contains only odd harmonics, so does $p'(t)$; hence $p'(t+\pi)=-p'(t)$ and the total number of roots of $p'$ on the circle is at least $4k$, counting multiplicities. However, a trigonometric polynomial of degree $2k-1$ that is not identically zero has at most $4k-2$ roots. Hence $p'(t)\equiv 0$ and $p(t)$ is a constant, which is a contradiction. \qed \\ \smallskip {\bf Proof of Theorem 1.2.} Let $t_1\leq t_2\leq\ldots\leq t_k$ be $k$ points on an arc $\Gamma$ of length $\phi_k$ of $\mathbb{S}^1$.\\ Let us define a function \[p(t_1,\ldots,t_k)=\textrm{dist}(H_{t_1,t_2,\ldots,t_k},x(\Gamma+\pi)),\] where $H_{t_1,t_2,\ldots,t_k}$ is the affine hyperplane that is tangent to $x(t)$ at each point $t_i$ with even multiplicity $m_i$.
Note that we use multiset notation: the number of times $t_i$ appears in the multiset equals $m_i/2$, so the sum of all the $m_i$'s is $2k$. Note that $H_{t_1,t_2,\ldots,t_k}$ is well-defined because of Lemma 3.2. \\ Now we take the infimum of $p(t_1,\ldots,t_k)$ over all $t_i$'s lying on an arc of length at most $\phi_k$ centered at 0. By the definition of $\phi_k$, the infimum is nonnegative. If the infimum were strictly positive, by continuity of $p$ we could enlarge the arc to a length greater than $\phi_k$ such that the infimum of $p$ for $k$ points lying on the bigger arc is still positive. However, this would mean that for any $k$ distinct points lying on the bigger arc the affine hyperplane tangent at these points is a supporting hyperplane, contradicting the definition of $\phi_k$.\\ Hence the infimum is 0. Since the domain of the function $p$ is compact, we can find $l$ points $t_1,\ldots,t_l$ on $\Gamma$ and positive even integers $m_i$ for $i=1,\ldots,l$ satisfying $\sum_{i=1}^{l}m_i=2k$ such that the affine hyperplane $H$ is a supporting hyperplane of ${\cal B}_{2k}$ tangent to $x(t)$ at each point $t_i$ with multiplicity $m_i$ and intersects the opposite arc $x(\Gamma+\pi)$ at some point, say $x(s)$. If $\phi_k$ were less than or equal to $\sqrt{6}k^{-3/2}$, there would exist a point $t_i$ such that $ |s-\pi-t_i| \leq \sqrt{3/2}\,k^{-3/2}$, contradicting Theorem 3.1. \qed \section{Concluding remarks} \begin{rmk} The estimate of Corollary 2.2 can be improved from $\Omega(k^{-3/2})$ to $\Omega(k^{-5/4})$. This can be done by considering a linear combination of $-x(s)$ and two points among the $x(t_i)$'s close to $-x(s)$ (say, $x(t_i)$ and $x(t_j)$). To be more precise, we can get a better bound by considering \[ \left|\frac{x(s)}{2}+ \frac{(s-t_j)x(t_i)}{2(t_i-t_j)} + \frac{(t_i-s)x(t_j)}{2(t_i-t_j)} \right|.\] \end{rmk} \begin{rmk} The Barvinok-Novik orbitope ${\cal B}_{2k}$ does not contain a sphere of radius bigger than 1.
In fact, the hyperplane $x_{2k-1}=1$ defines a $(2k-2)$-dimensional face of ${\cal B}_{2k}$, and the distance between the hyperplane and the origin is 1. \end{rmk} {\bf Acknowledgements} The author thanks Alexander Barvinok for many helpful discussions. This research was partially supported by NSF grant DMS 0856640.
\section*{Abstract} {\bf We consider discrete random fractal surfaces with negative Hurst exponent $H<0$. A random colouring of the lattice is provided by activating the sites at which the surface height is greater than a given level $h$. The set of activated sites is usually referred to as the excursion set. The connected components of this set, the level clusters, define a one-parameter ($H$) family of percolation models with long-range correlations in the site occupation. The level clusters percolate at a finite value $h=h_c$, and for $H\leq-\frac{3}{4}$ the phase transition is expected to remain in the same universality class as pure (i.e.\ uncorrelated) percolation. For $-\frac{3}{4}<H< 0$, instead, there is a line of critical points with continuously varying exponents. The universality class of these points, in particular concerning the conformal invariance of the level clusters, is poorly understood. By combining Conformal Field Theory with a numerical approach, we provide new insights into these phases. We focus on the connectivity function, defined as the probability that two sites belong to the same level cluster. In our simulations, the surfaces are defined on a lattice torus of size $M\times N$. We show that the topological effects on the connectivity function make conformal invariance manifest along the whole critical line $H<0$. In particular, exploiting the anisotropy of the rectangular torus ($M\neq N$), we directly test the presence of the two components of the traceless stress-energy tensor. Moreover, we probe the spectrum and the structure constants of the underlying Conformal Field Theory. Finally, we observe that the corrections to scaling clearly point to a breaking of integrability when moving from the pure percolation point to the long-range correlated one.
} \vspace{10pt} \noindent\rule{\textwidth}{1pt} \tableofcontents\thispagestyle{fancy} \noindent\rule{\textwidth}{1pt} \vspace{10pt} \section{Introduction} The percolative properties of random fractal surfaces have been studied for a long time \cite{efros,Molchanov1983,Kalda91_I,stanley92}. The universality class of their critical points remains a very active subject of research in the mathematical \cite{beffara2016percolation,hugo2017,riveraphd} and theoretical physics \cite{Saberi_2015} communities, mainly because they challenge our understanding both of the emergence of conformal symmetry and of the way this symmetry is implemented. \noindent Let us consider a random stationary function $u({\bf x})$ on a lattice, $u({\bf x}): \mathbb{Z}^2\to \mathbb{R}$, which satisfies: \begin{equation} \label{cov} \mathbb{E} \left[ u({\bf x})\right]=0,\quad \mathbb{E} \left[ (u({\bf x})- u({\bf y}))^2\right] \sim C(H)\;|{\bf x}-{\bf y}|^{2\;H} \quad (|{\bf x}-{\bf y}|\gg 1), \end{equation} where $\mathbb{E}\left[\cdots\right]$ is the average over the instances of $u({\bf x})$, the symbol $\sim$ stands for asymptotic equivalence and $C(H)$ is some constant depending on $H$. The number $H\in \mathbb{R}$ is the surface roughness exponent \cite{fractalbook}, also known as the Hurst exponent. The fractional Gaussian surfaces \cite{riveraphd} that we consider here, see (\ref{def:genugen}) below, are a class of random surfaces which satisfy the above properties. For positive $H>0$, the function $u({\bf x})$ is a fractional Brownian surface with unbounded height fluctuations, $\mathbb{E}\left[ u({\bf x})^2\right]=\infty$. The fluctuations remain unbounded also for $H=0$, in which case the covariance decreases logarithmically, $\mathbb{E}\left[ u({\bf x})u({\bf 0})\right]\sim -\log |{\bf x}|$. For negative exponent $H< 0$, $u({\bf x})$ is a long-range correlated surface with bounded fluctuations, $\mathbb{E}\left[ u({\bf x})^2\right]<\infty$.
\noindent A random partition of the lattice is obtained by setting a level $h\in \mathbb{R}$ and by declaring that a site ${\bf x}$ is activated (not activated) if $\theta_h({\bf x})=1$ ($\theta_h({\bf x})=0$), where $\theta_{h}({\bf x}): \mathbb{Z}^2 \to \{0,1\}$: \begin{equation} \label{theta} \theta_h({\bf x})= \begin{cases} & 0,\quad u(\mathbf{x}) < h\\ &1,\quad u(\mathbf{x}) \geq h. \end{cases} \end{equation} A site is therefore activated with probability $p(h)$: \begin{equation} p(h)=\mathbb{E}\left[ \theta_h({\bf x})\right], \end{equation} where we use the translational invariance in law. The set of activated points is usually known as the excursion set \cite{adler2007}. The study of the connected components of the excursion set, hereafter referred to as level clusters, defines a site percolation model \cite{SA92, Saberi_2015}. For general values of $H$ there is a finite value $h=h_c>-\infty$ below which a level cluster of infinite size is found with probability one \cite{Schmittbuhl_1993}. This is the percolation critical point. Note that the characterisation of the class of random fields which permit percolation has been given in \cite{Molchanov1983,Molchanov1983_II, Molchanov1986_III}. Close to the critical point, the main scaling behaviours are described by two critical exponents: the correlation length exponent $\nu$ and the order parameter exponent $\beta$ \cite{SA92}. In particular, they determine the scaling of the $h_c$ width distribution with the size of the system, see (\ref{hscaling}), and the Hausdorff dimension $D_f$ of the level clusters, $D_f=2-\beta/\nu$. For $H> 0$, due to the unbounded fluctuations of $u({\bf x})$ and to the strong correlations, the level clusters are compact (i.e.\ without holes) regions with fractal dimension $D_f=2$. The exponent $\nu$ is infinite ($\nu=\infty$), as one can see from the fact that the $h_c$ width distribution remains finite in the thermodynamic limit (self-averaging is broken) \cite{Schmittbuhl_1993}.
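As a concrete illustration of the definitions above, the excursion set $\{u\geq h\}$ and its level clusters can be extracted from a sampled surface with a short breadth-first search. The sketch below is our own (it is not the code used for the simulations); it uses nearest-neighbour (4-)connectivity and periodic boundary conditions, consistent with the torus geometry:

```python
import numpy as np

def level_clusters(u, h):
    """Label the level clusters of the excursion set {u >= h} on a torus.
    BFS with 4-connectivity and periodic wrapping (our own sketch)."""
    theta = u >= h                       # excursion set; theta.mean() estimates p(h)
    M, N = theta.shape
    labels = np.zeros((M, N), dtype=int)
    current = 0
    for i in range(M):
        for j in range(N):
            if theta[i, j] and labels[i, j] == 0:
                current += 1             # start a new cluster
                stack = [(i, j)]
                labels[i, j] = current
                while stack:
                    a, b = stack.pop()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = (a + da) % M, (b + db) % N   # wrap around
                        if theta[na, nb] and labels[na, nb] == 0:
                            labels[na, nb] = current
                            stack.append((na, nb))
    return labels
```

Sites that touch only across the periodic seams are correctly assigned to the same cluster, which matters when detecting wrapping (percolating) clusters.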
At $H>0$ the transition is not critical. At the point $H=0$, the fluctuations of the surface remain unbounded and the fractal dimension remains $D_f=2$, as argued in \cite{Saleur_Lebowitz_86} and recently proven in \cite{aru2017passage,sepulveda2019} for the Gaussian free field. For negative roughness exponent, instead, the surface fluctuations are bounded, the correlation length exponent $\nu$ is finite ($\nu <\infty$) and a genuine continuous transition of percolation type occurs. Correspondingly, the level clusters have a richer fractal structure with $D_f<2$. \noindent In this paper we consider random surfaces with negative roughness exponent. Unless stated otherwise, we take $H<0$ henceforth. We generate a fractional Gaussian process on a flat torus of size $M\times N$. The surface $u({\bf x})$ takes the form \begin{equation} \label{def:genugen} u({\bf x})\propto\sum_{{\bf k}} \lambda_{{\bf k}}^{-\frac{H+1}{2}}\; \hat{w}({\bf k})\; e^{i \;{\bf k} \;{\bf x}}. \end{equation} In the above equation $\lambda_{{\bf k}}$ and $e^{i \;{\bf k}\; {\bf x}}$ are respectively the eigenvalues and the eigenvectors of the discrete Laplacian operator $\Delta_{{\bf x}} u({\bf x})=\sum_{{\bf y},|{\bf y}- {\bf x}|=1}\left(u({\bf y})-u({\bf x})\right)$ on the flat torus, and the $ \hat{w}({\bf k})$ are independent normally distributed random variables. The basic idea is to obtain correlated variables by convolving uncorrelated ones. For $H=0$ the function $u({\bf x})$ is the discrete two-dimensional Gaussian free field on the torus. The role of open boundary conditions in one-dimensional fractional Gaussian processes is discussed in \cite{Rosso_2005, Santachiara_2007,Zoia_2007}. We also generate a second type of long-range correlated random surface in which the $ \hat{w}({\bf k})$ are drawn from a different distribution. Full details on how we generate the surfaces are given in Appendix \ref{sec:generatingu}.
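A minimal implementation of (\ref{def:genugen}) filters Gaussian white noise in Fourier space with the eigenvalues of the discrete Laplacian, $\lambda_{\bf k}=4-2\cos(2\pi k_1/M)-2\cos(2\pi k_2/N)$. The sketch below reflects our own conventions (the overall normalisation and the removal of the zero mode are choices not fixed by the formula):

```python
import numpy as np

def fractional_gaussian_surface(M, N, H, seed=None):
    """Sample u on an M x N torus by filtering white noise with
    lambda_k^{-(H+1)/2}, lambda_k being the discrete Laplacian eigenvalues."""
    rng = np.random.default_rng(seed)
    k1 = 2 * np.pi * np.fft.fftfreq(M)
    k2 = 2 * np.pi * np.fft.fftfreq(N)
    lam = 4 - 2 * np.cos(k1)[:, None] - 2 * np.cos(k2)[None, :]
    weight = np.zeros_like(lam)
    weight[lam > 0] = lam[lam > 0] ** (-(H + 1) / 2)   # zero mode dropped
    w_hat = np.fft.fft2(rng.standard_normal((M, N)))    # white noise in k-space
    u = np.fft.ifft2(weight * w_hat).real
    return u - u.mean()
```

For $H=-1$ the spectral filter is flat, so the construction reduces to uncorrelated white noise, consistent with the identification of $H=-1$ with pure percolation.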
\noindent The probability of activating two distant sites inherits the long-range correlation of the random surfaces: \begin{equation} \mathbb{E}\left[ \theta_h({\bf x})\theta_h({\bf y})\right]-p(h)^2\sim C'(H)|{\bf x}-{\bf y}|^{2 H}\quad (|{\bf x}-{\bf y}|\to \infty), \end{equation} \noindent where $C'(H)$ is some constant depending on $H$ and on the chosen distribution. For $H=-1$ the surfaces we generate are an instance of two-dimensional white noise, for which the probabilities of activating two different sites are uncorrelated ($C'(-1)=0$ in the above equation). The point $H=-1$ therefore corresponds to the pure percolation point. In Figure \ref{fig:instances} we show instances of the surfaces (\ref{def:genugen}) and the corresponding excursion sets and level clusters at the critical point. \begin{figure}[H] \centering \begin{tikzpicture} \draw (0,0) node[]{\includegraphics[scale=0.83]{pics_surf}}; \end{tikzpicture}\caption{Instances of the fractional Gaussian surfaces (\ref{def:genugen}) for $H \in \{-7/8,-5/8,-2/8 \}$, generated on a $M\times N$ square lattice with $M=2N,\; N = 2^6$. The excursion sets (white points) corresponding to the level $h=h_c$ from Table \ref{tab:hc1} are shown in the second column, while the third column shows the level clusters. The yellow points in the third column are the points belonging to the percolating level cluster. Note that by increasing $H$, i.e.\ the correlation, the level clusters have fewer holes. This is consistent with the prediction that the fractal dimension $D_f\to 2$ for $H\to 0^{-}$.} \label{fig:instances} \end{figure} \noindent The common understanding is that the universal percolation properties depend only on the asymptotic behaviour of the covariance (\ref{cov}) and therefore on $H$. In \cite{weinrb84} an extended Harris criterion was proposed, according to which the universality class remains that of pure percolation for $H<-3/4$.
Recent new arguments, based on the fractal dimension of the pivotal points, support this prediction \cite{hugoprivate,beliaev2018covariance}. The exponents $\nu$ and $D_f$ are expected to be \begin{equation} \label{nupure} \nu = \nu^{\text{pure}}=\frac43, \quad D_f = D_f^{\text{pure}}=\frac{91}{48}, \quad \text{for}\; H\leq -\frac34, \end{equation} where $\nu^{\text{pure}}$ and $D_f^{\text{pure}}$ are the pure percolation ($H=-1$) exponents. The fact that the system behaves as pure percolation for $H<-2$ was put on more rigorous grounds in \cite{beffara2016percolation,alej2017quasiindependence}. For $-3/4<H<0$, instead, the slower decay allows the correlation to change the large-distance behaviour of the system, as was also argued in \cite{Kalda91_I}. In particular, it was shown in \cite{weinrb84} that there is a new line of critical points with an exponent $\nu=\nu^{\text{long}}$ which varies continuously with $H$: \begin{equation} \label{nulong} \nu=\nu^{\text{long}}=-\frac{1}{H},\quad -\frac{3}{4}<H < 0. \end{equation} The above prediction was supported by many numerical works, see for instance \cite{Kalda91_I, stanley92, Schmittbuhl_1993, Janke_2017, de_Castro_2018}. There is no theoretical prediction for $D_f$ in the range $-3/4<H<0$. In Figure \ref{fig:instances} the level clusters become visibly more compact as $H$ increases. One can then expect $D_f$ to increase when $H\to 0^{-}$. Even if the numerical results are not conclusive about the value of $D^{\text{long}}_f$, there is strong evidence that \cite{Schmittbuhl_1993,kalda2002oceanic,Janke_2017, de_Castro_2018}: \begin{equation} \label{dflong} D_f =D^{\text{pure}}_f\quad \text{for}\; H\leq -\frac{1}{2},\quad \text{and}\quad D^{\text{pure}}_f <D_f <2 \quad \text{for}\; -\frac12<H<0. \end{equation} In Appendix \ref{sec:critlevel}, we numerically compute $D_f$. The results, summarised in Table \ref{tab:df}, support the above scenario.
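The predictions (\ref{nupure})-(\ref{dflong}) can be collected in a small helper (our own convenience function, not part of the analysis code; for $-1/2<H<0$ only the bounds $D_f^{\text{pure}}<D_f<2$ are known, so no value is returned there):

```python
def predicted_exponents(H):
    """(nu, D_f) predicted along the critical line h = h_c, as summarised above."""
    if H <= -0.75:
        return 4/3, 91/48          # pure percolation values
    nu = -1.0 / H                  # nu^long = -1/H on the new critical line
    Df = 91/48 if H <= -0.5 else None   # only bounds known for -1/2 < H < 0
    return nu, Df
```

For example, $H=-5/8$ gives $\nu^{\text{long}}=8/5$ with $D_f$ still at the pure percolation value.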
The following diagram summarises the current state of the art: \begin{figure}[H] \begin{center} \begin{tikzpicture}[scale=0.8] \draw[thick, ->] (-12, 0) -- (8,0) node[below]{$H$}; \draw[thick, ->] (-6, 0) -- (-6,8) node[left]{$\nu, D_f$}; \draw[](-12,4)--(2,4); \draw[](6,5)--(8,5); \draw[, dashed] (2,4)--(6,5); \draw[] (-12,2)--(0,2); \draw[dotted] (0,0)--(0,8); \draw[dotted] (2,0)--(2,8); \draw[dotted] (6,0)--(6,8); \draw plot [smooth] coordinates {(0, 2)(0.5, 2.18182)(1, 2.4)(1.5, 2.66667)(2, 3)(3, 4)(4, 6)}; \draw[dashed] plot [smooth] coordinates {(4., 6.) (4.25, 6.85714) (4.5, 8.)}; \draw (-3,2) node[above]{$\nu=\nu^{\text{pure}}=\frac43$}; \draw (2,2.5) node[right]{$\nu=\nu^{\text{long}}=-\frac{1}{H}$}; \draw (7,6.5) node[above]{$\nu=\infty$}; \draw (-3,4) node[above]{$D_f=D_f^{\text{pure}}=\frac{91}{48}$}; \draw (7,5) node[above]{$D_f=2$}; \fill[color=blue!90, opacity = .3] (-12,-.1) rectangle (0, .1); \draw [decorate,decoration={brace,mirror,amplitude=10pt},xshift=-4pt,yshift=0pt] (-12,-2.5) -- (0,-2.5) node [below,midway,black,yshift=-0.5cm] {\footnotesize Pure percolation, RG arguments \cite{weinrb84}}; \draw [decorate,decoration={brace,mirror,amplitude=10pt},xshift=-4pt,yshift=0pt] (0,-1) -- (6,-1) node [below,midway,black,yshift=-0.5cm] {\footnotesize Line of new critical points \cite{weinrb84}}; \fill [red, opacity = .3] (0,-.1) rectangle (6,.1); \draw[thick] (-10, -.1) node[below]{$-2$} -- (-10, .1); \draw[thick] (-2, -.1) node[below]{$-1$} -- (-2, .1); \draw[thick] (0, -.1) node[below]{$-\frac34$} -- (0, .1); \draw[thick] (6, -.1) node[below]{$0$} -- (6, .1); \draw[thick] (2, -.1) node[below]{$-\frac12$} -- (2, .1); \end{tikzpicture} \end{center}\caption{State of the art for the exponents $\nu$ and $D_f$ as functions of the roughness exponent $H$.}\label{fig:stateofart} \end{figure} We stress the fact that the results mentioned above are based on the assumption that the kernel has a definite sign at large distances. For other important classes of random functions, this is not true anymore.
This is the case, for instance, of the random plane wave \cite{Berry_1977}: this random function has an oscillating kernel which decays with an exponent $H=-1/4$, and the universality class of its percolation transition is conjectured to be that of pure percolation \cite{Bogo07}. \noindent Most of the results on critical pure percolation have been discovered by using conformal invariance \cite{cardy2001conformal}, whose emergence has been rigorously proven in \cite{smirnov2001critical}. The values (\ref{nupure}) have been predicted by the conformal field theory (CFT) approach \cite{dofa_npb84,saleur87}, which also allowed the computation of the full partition function \cite{fsz87} and the derivation of exact formulas for cluster crossing probabilities \cite{Caperco92}. Contrary to statistical models with local and positive Boltzmann weights, whose critical points are described by the unitary minimal models \cite{Mathieu_2008}, the critical point of pure percolation is described by a non-unitary and logarithmic CFT \cite{Rongvoram_2020, granssamuelsson_2020}. This CFT is not fully known, but very recent results have paved the way to its complete solution \cite{prs16,js18,prs19,he2020fourpoint,saleur2020,Rongvoram_2020,granssamuelsson_2020}. The line of new critical points shown in Figure \ref{fig:stateofart} remains far less understood. As we will discuss below, even the emergence of conformal invariance is debated. Moreover, if these points are conformally invariant, we expect that the corresponding CFT does not coincide with any of the known solutions, due to the highly non-local nature of the lattice model. This will indeed be confirmed by the results presented in this paper. Recent numerical results have shown the emergence of conformal invariance \cite{herrmann13}, while in \cite{bbcf06}, where a random surface with $H=-2/3$ was considered, conformal symmetry has been ruled out.
These papers check whether the boundary of the percolating level cluster is described by a Stochastic Loewner Evolution (SLE) process \cite{BaBe}. The SLE numerical tests are in general very subtle and, in some cases, not conclusive, as argued for instance in \cite{Cao_2015_1}. Moreover, we observe that, in the case of a positive SLE test as in \cite{herrmann13}, one expects the boundaries of the level clusters to be described by the loops of the $O(n)$ models, either in their dense or critical phases \cite{CLE_sh_wer}. In these models, the fractal dimensions of the loops $D_{b}$ and of their interior $D_f$ vary with $n$ \cite{Saleur_89_interloop}. For instance, in the $O(n)$ dense phase, they are related by $D_f=D_b(2-3 D_b)/(4(1-D_b))$. This scenario is not consistent with the numerical findings for the level clusters of long-range correlated random surfaces, as can be seen directly from the fact that $D_f$ does not show significant variation for $-3/4<H<-1/2$ while $D_b$ does \cite{herrmann13}. Moreover, we provide further evidence that the line $-3/4< H<0$ is not the one of the $O(n)$ models. This point illustrates the fact that many fundamental questions remain open. Our objective is to test conformal invariance and to extract new information about these critical points. We use a completely different protocol based on the study of the level clusters and their connectivity function. This is the probability that two sites belong to the same level cluster, see (\ref{def:2conn}). Because the random surfaces have double periodicity, the level clusters live on a torus. For pure percolation, signatures of conformal invariance were shown to be encoded in toric boundary condition effects on the connectivity function \cite{jps19two}. These effects depend on a non-trivial combination of the two exponents $\nu$ and $D_f$, fixed by conformal invariance. Moreover, when the lattice is rectangular, $M\neq N$, a soft breaking of rotational symmetry is introduced.
Using this anisotropy, we show that the connectivity function directly probes the existence of the two components of a traceless stress-energy tensor. The existence of this pair of fields is the most basic manifestation of conformal symmetry. Finally, we provide the first numerical measurements of quantities related to the conformal spectrum and structure constants of these new conformal critical points. In Section \ref{sec:2ptconnth} we define the connectivity function and give the theoretical predictions for the toric effects. We discuss the main ideas behind the CFT approach on which these predictions are based. In Section \ref{sec:numericalresults} we provide the numerical evidence on the connectivity function. In Appendix \ref{sec:generatingu} we give full details on how we generate the random surfaces and, in Appendix \ref{sec:critlevel}, on how we locate the critical percolation point and compute the exponents $\nu$ and $D_f$. \section{Critical two-point connectivity of level clusters} \label{sec:2ptconnth} In this section we consider the two-point connectivity $p_{12}({\bf x_1},{\bf x_2})$, referred to simply as the correlation function in \cite{SA92}. Introducing the event: \begin{equation} \text{Conn}({\bf x_1},{\bf x_2})= \text{${\bf x_1}$ and ${\bf x_2}$ belong to the same level cluster}, \end{equation} we define: \begin{equation} \label{def:2conn} p_{12}({\bf x_1}-{\bf x_2})= \mathbb{E}\left[ \text{Conn}({\bf x_1},{\bf x_2})\right], \end{equation} where translational invariance in law has been taken into account. A study of the two-point connectivity for general Gaussian random surfaces can be found in \cite{Adler_2014}, where the large-$h$ asymptotic behaviour of (\ref{def:2conn}) has been considered. Here we are interested in the behaviour of this probability at the critical point $h=h_c$. \subsection{Scaling limit in the infinite plane $M,N=\infty$} Let us first consider the regime in which toric size effects are negligible.
It corresponds to $M,N=\infty$, i.e.\ the infinite plane limit. \noindent At the critical point, $h=h_c$, we have $p_{12}({\bf x})\sim |{\bf x}|^{-\eta}$, where $\eta$ is the standard notation for the anomalous dimension of the two-point function \cite{SA92}. Percolation theory tells us that $\eta$ is directly related to the level cluster dimension $D_f$ via the scaling relation $\eta= 4-2 D_f$ \cite{Kapitulnik_84}. One has therefore: \begin{equation} \label{2connplane} p_{12}({\bf x_1}-{\bf x_2}) = \frac{d_0}{|{\bf x_1}-{\bf x_2}|^{2(2-D_f)}}\quad \left(|{\bf x_1}-{\bf x_2}|\gg 1, \; M,N=\infty \right), \end{equation} where $d_0$ is a non-universal constant which we evaluate numerically, see Table \ref{tab:d0}. We can use (\ref{2connplane}) to determine $D_f$. The corresponding values are denoted as $D_{f}^{(2)}$ in Table \ref{tab:df}. The good agreement with the values $D^{(1)}_f$, obtained using the scaling of the average mass of the percolating level cluster (see Appendix \ref{app:hDf}), confirms that we are sitting sufficiently close to the critical value $h_c$. \noindent In Figure \ref{fig:p1} we show the behaviour of $p_{12}({\bf x_1}-{\bf x_2})$ for $H=-5/8$. One easily notices a region $|{\bf x_1}-{\bf x_2}|\in [10,100]$ where the form (\ref{2connplane}) is well satisfied. \begin{figure}[H] \centering \begin{tikzpicture} \begin{loglogaxis}[ legend cell align=center, yticklabel style={/pgf/number format/.cd,fixed zerofill,precision=1}, xlabel={$|{\bf x_1}-{\bf x_2}|$}, ylabel={$p_{12}({\bf x_1}-{\bf x_2})$}, legend pos=north east] \addplot+[gray, forget plot,mark=none,domain=1:512] {0.351113/x^0.21623}; \addplot+[orange,mark=o,only marks,mark size=1pt] table [y=p, x=r]{./plots/p-0.75-10.dat}; \legend{$N=M=2^{10}$}; \end{loglogaxis} \end{tikzpicture} \caption{Two-point connectivity (\ref{def:2conn}) for $H=-5/8$ and $N=M=2^{10}$.
The data points were obtained by averaging over $10^5$ instances of the surface and over the $N^2$ locations of the point ${\bf x_1}$ (cf. Section \ref{sec:numericalresults}). According to Table \ref{tab:hc1}, the level $h$ has been set to $h_c=-0.1985$. The continuous line shows the prediction (\ref{2connplane}) with $D_{f}=D_{f}^{(2)}=1.892$, see Table \ref{tab:df}. For distances $6<|{\bf x_1}-{\bf x_2}|<100$ the data matches the infinite plane prediction very well. For larger distances, the effect of the toric boundary conditions becomes visible. } \label{fig:p1} \end{figure} \subsection{Scaling limit in the torus: $M,N<\infty$. } \label{sec:thtorus} As can be seen in Figure \ref{fig:p1}, when the distance between the points approaches $N/2$, the data points start to deviate from the power-law behaviour: the contributions of the paths connecting the two points around the other side of the torus become non-negligible. We say that the topological corrections become visible. We expect these corrections to provide sub-leading $|{\bf x}|/N$ terms in (\ref{2connplane}) of universal nature. These effects have been studied in \cite{jps19two} for pure percolation ($H=-1$). \noindent In the scaling limit, our system lives on a flat torus $\mathbb{T}_{q}$ of periods $M$ and $N$ and nome $q$: \begin{equation} \label{def:toruspar} \mathbb{T}_{q}: \quad q= e^{-2 \pi \frac{M}{N}}. \end{equation} As the connectivity between two points always depends on the vector connecting them, it is convenient to introduce the vectors ${\bf x},{\bf x}^{\perp} \in \mathbb{T}_{q}$ with polar coordinates $|{\bf x}|$ and $\theta$: \begin{equation} \label{def:polar} {\bf x}\in \mathbb{T}_{q}, \quad {\bf x}= |{\bf x}|(\cos(\theta),\sin(\theta)), \quad {\bf x}^{\perp}= |{\bf x}|(-\sin(\theta),\cos(\theta)).
\end{equation} \noindent We propose the following form for the scaling limit of $p_{12}$ on a torus: \begin{equation}\label{predperco} p_{12}({\bf x}) = \frac{d_0}{|{\bf x}|^{2(2-D_f)}}\left(1+c_{\nu}\left(q\right)\left(\frac{|{\bf x}|}{N}\right)^{2-\frac{1}{\nu}}+2c_{T}\left( q \right)\cos(2\theta)\left(\frac{|{\bf x}|}{N}\right)^{2}+o\left( \left(\frac{|{\bf x}|}{N}\right)^2\right)\right), \end{equation} which has been established in \cite{jps19two} for pure percolation and for the more general random cluster $Q$-Potts model. The coefficients $c_{\nu}\left(q\right)$ and $c_{T}(q)$, given in (\ref{ccoeff}), are universal coefficients which depend only on the geometry of the torus. To explain the origin of (\ref{predperco}) and the information we can extract from this formula, we need to recall some basic definitions and notions of CFT. \subsection{Basic notions of CFT} \noindent A CFT is a massless quantum field theory in which each (quantum) field $V_{\Delta,\bar{\Delta}}({\bf x})$ is characterised by a pair of numbers $(\Delta, \bar{\Delta})$, called the left and right conformal dimensions, which give the scaling dimension ($\Delta^{\text{phys}}=\Delta+\bar{\Delta}$) and the spin ($s=\Delta-\bar{\Delta}$) of the field. The set of fields entering a CFT is called the spectrum $\mathcal{S}$ of the theory, $\mathcal{S}=\oplus_{(\Delta,\bar{\Delta})} V_{(\Delta,\bar{\Delta})}$. The most important hallmark of conformal invariance is the existence of two fields, commonly denoted $T$ and $\bar{T}$, with left-right dimensions $(\Delta,\bar{\Delta})=(2,0)$ and $(\Delta,\bar{\Delta})=(0,2)$. These fields are the conserved (chiral) Noether currents associated with the conformal symmetry, and they correspond to the components of the traceless stress-energy tensor. \noindent In the CFT approach to statistical models, there is a correspondence between lattice operators and fields $V_{\Delta,\bar{\Delta}}({\bf x})$.
In particular, the long-distance behaviour of lattice observables is described by the correlation functions of the fields $V_{\Delta,\bar{\Delta}}({\bf x})$. Scale invariance fixes the infinite plane limit of the two-point functions. For a spinless field $V_{\Delta,\Delta}$ we have: \begin{equation} \label{def:2pointplane} \left<V_{\Delta,\Delta}({\bf x})V_{\Delta,\Delta}(0) \right>_q=|{\bf x}|^{-4\Delta}\quad \left(\frac{|{\bf x}|}{N}\to 0\right), \end{equation} where $\left< \cdots \right>_q$ denotes the CFT correlation function on the torus $\mathbb{T}_{q}$. A quantum field theory is completely solved if we can compute all its correlation functions. For a CFT, one needs two basic inputs: the spectrum $\mathcal{S}$ and the structure constants $C_{V_1,V_2}^{V_3}$. The latter are real constants associated with the amplitude with which two fields $V_1$ and $V_2$ fuse into a third one $V_3$. In other words, the constants $C_{V_1,V_2}^{V_3}$ determine the short-distance behaviour of the CFT correlation functions, which is encoded, in the CFT jargon, in the Operator Product Expansion (OPE). \noindent Among all the fields in a CFT, a major role is played by the energy density field $\varepsilon=V_{\Delta_{\varepsilon},\Delta_{\varepsilon}}$ and the magnetic (order parameter) field $\sigma=V_{\Delta_{\sigma},\Delta_{\sigma}}$, which are the (spinless) fields with the lowest scaling dimension in the thermal and magnetic sectors. Their names come from the fact that, in a ferromagnetic/paramagnetic transition, these are the fields which couple respectively to the temperature and to the magnetic field. Their dimensions $\Delta_{\varepsilon}$ and $\Delta_{\sigma}$ give the exponents $\nu$ and $\beta$ of a critical point \cite[Chapter 3]{cardy_1996}. In terms of $\nu$ and $D_f = (4-\eta)/2= 2-\beta/\nu$ \cite[Section 3.3]{SA92} we have: \begin{equation} \label{Deltas} \Delta_{\varepsilon}= 1-\frac{1}{2\nu},\quad \Delta_{\sigma}=1-\frac{D_f}{2}.
\end{equation} \subsection{Three main assumptions} \label{sec: 3assumptions} \noindent Our prediction (\ref{predperco}) is based on three assumptions which have been verified for pure percolation \cite{jps19two,javerzat2019fourpoint}. The first two assumptions are more general and concern the fact that the connectivity, which is non-local in nature, can be studied through correlations of local fields in a CFT. \begin{itemize} \item {\bf 1:} The system is conformally invariant in the scaling limit. \item {\bf 2:} The scaling limit of the connectivity (\ref{def:2conn}) is described by the torus correlator of two spin fields: \begin{equation} \label{asspss} p_{12}({\bf x})=d_0\;\left< \sigma({\bf x}) \sigma(0)\right>_q. \end{equation} \end{itemize} The two-point function $ \left< \sigma \sigma\right>_q$ can be expressed as an ($s$-channel) expansion: \begin{align} \label{eq:genss} p_{12}({\bf x})&=d_0\left< \sigma\left({\bf x}\right) \sigma(0)\right>_q \nonumber \\ &=\frac{d_0}{|{\bf x}|^{4\Delta_{\sigma}}} \; \sum_{\substack{V_{\Delta,\bar{\Delta}}\in \mathcal{S}\\ \Delta\geq \bar{\Delta}}}\; (2-\delta_{\Delta,\bar{\Delta}})\; C_{\sigma,\sigma}^{V_{\Delta,\bar{\Delta}}}\left<V_{\Delta,\bar{\Delta}}\right>_q\; \cos\left((\Delta-\bar{\Delta})\;\theta\right) \left(\frac{|{\bf x}|}{N}\right)^{\Delta+\bar{\Delta}}, \end{align} with ${\bf x}=|{\bf x}|(\cos(\theta), \sin(\theta))$, see (\ref{def:polar}). In general, $p_{12}$ does not get contributions from all the fields in the spectrum $\mathcal{S}$, since structure constants $C_{\sigma,\sigma}^{V_{\Delta,\bar{\Delta}}}$ and/or one-point functions $\left<V_{\Delta,\bar{\Delta}}\right>_q$ may vanish. We refer the reader to \cite{jps19two} for a detailed derivation of the above formula, which is a direct consequence of the existence of an operator algebra and of the symmetry between the holomorphic and anti-holomorphic sectors.
This latter symmetry is very natural for CFTs without boundaries and implies that if a field with spin $s>0$ enters the spectrum, then its anti-holomorphic partner does too, with the same physical dimension and with opposite spin $-s$. The expansion (\ref{eq:genss}) is then valid for almost all CFTs. The information which characterises a specific CFT is encoded in the spectrum $\mathcal{S}$ and in the structure constants $C_{\sigma,\sigma}^{V_{\Delta,\bar{\Delta}}}$. In the case of pure percolation, for instance, the spectrum is known but not the structure constants, even if very recent progress has paved the way to their determination \cite{saleur2020}. The plane limit $M,N=\infty$ is recovered by noting that all the one-point functions $\left<V_{\Delta,\bar{\Delta}}\right>_q$ vanish except the identity one, $\left<\text{Id}\right>_q=\left<V_{0,0}\right>_q=1$. One obtains $p_{12}({\bf x})= d_0 |{\bf x}|^{-4\Delta_{\sigma}}\;(M,N=\infty)$. Note that, in the infinite plane limit, one can prove for pure percolation (or more generally for the $O(n)$ models in their dense or critical phases) that $p_{12}$ is given by the correlator of two spin fields $\sigma$, see for instance \cite{Saleur_89_interloop, Jacobsen_book2012}. The exponent $\eta$ is therefore $\eta=4 \Delta_{\sigma}$ which, by (\ref{Deltas}), gives equation (\ref{2connplane}). \noindent It has been shown in \cite{jps19two} that the first dominant terms in the above series can be computed for pure percolation.
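For orientation, it is useful to keep in mind the standard pure-percolation values, a well-known worked instance of (\ref{Deltas}) which we quote for the reader's convenience: \begin{equation} \nu=\frac{4}{3},\qquad D_f=\frac{91}{48}\qquad\Longrightarrow\qquad \Delta_{\varepsilon}=1-\frac{1}{2\nu}=\frac{5}{8},\qquad \Delta_{\sigma}=1-\frac{D_f}{2}=\frac{5}{96},\qquad \eta=4\Delta_{\sigma}=\frac{5}{24}. \end{equation}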
Our third assumption is motivated by a generalisation of these results to the case of long-range percolation: \begin{itemize} \item {\bf 3:} The identity field ($\Delta=\bar{\Delta}=0$), the energy density field $\varepsilon$ and the stress-energy tensor fields $T$ ($\Delta=2,\bar{\Delta}=0$), $\bar{T}$ ($\Delta=0,\bar{\Delta}=2$) are the fields with the lowest conformal dimension that appear in the fusion of two fields $\sigma$ and whose torus one-point function does not vanish. \end{itemize} \noindent Using the above assumption in the expansion (\ref{eq:genss}), one obtains expression (\ref{predperco}) with the coefficients $c_{\nu}\left(q\right)$ and $c_{T}\left(q\right)$ given by: \begin{align} \label{ccoeff} c_{\nu}\left(q\right)&= C_{\sigma,\sigma}^{\varepsilon}\left< \varepsilon\right>_q, \quad c_{T}\left(q\right)=C_{\sigma,\sigma}^T \left< T\right>_q= \frac{2\Delta_{\sigma}}{c} \left< T\right>_q, \end{align} where $c$ is the CFT central charge (which provides for instance the universal Casimir amplitude \cite{blote86}). We refer the reader to \cite{jps19two,javerzat2019fourpoint} for a detailed explanation of the CFT techniques used to study the topological effects. \noindent Let us detail further the information one can extract from $c_{\nu}(q)$ and $c_{T}(q)$. The spectrum $\mathcal{S}$ and some structure constants $C_{V_1,V_2}^{V_3}$ enter in the determination of these coefficients. For a general CFT, the spectrum defines the torus partition function \cite{fms97}: \begin{equation}\label{pfunction} Z(q) = q^{-\frac{c}{12}}\sum_{V_{\Delta,\bar{\Delta}}\in \mathcal{S}}n_{V_{\Delta,\bar{\Delta}}}\;q^{\Delta+\bar{\Delta}}, \end{equation} where $ n_{V_{\Delta,\bar{\Delta}}}$ is the multiplicity of the field $V_{\Delta,\bar{\Delta}}$. For small values of $q$, the leading contributions to the partition function are given by the representations with the smallest physical dimensions.
The identity field $V_{0,0}$ has the lowest physical dimension $0$, with $n_{\text{Id}}=1$. We will assume that the sub-leading contribution to the partition function is given by a spinless field $V_{\Delta,\Delta}$ with multiplicity $n_{V_{\Delta,\Delta}}$. For non-unitary CFTs, the number $n_{V_{\Delta,\bar{\Delta}}}$ can take general real values. This is the case for the $Q$-state Potts model \cite{fsz87}, in which the sub-dominant contribution is given by the spin field $\sigma$ with multiplicity $n_{\sigma}=Q-1$. \noindent In a general CFT, one-point torus functions can be expressed in the variable $q$, in a way similar to the partition function (\ref{pfunction}). As detailed in \cite{jps19two}, the three assumptions of Section \ref{sec: 3assumptions} lead to the following form for the energy density one-point torus function: \begin{equation}\label{expcnu} \left<\varepsilon\right>_q = \frac{(2\pi)^{2\Delta_{\varepsilon}}}{Z(q)} C_{\sigma,\sigma}^\varepsilon\; n_\sigma q^{2\Delta_\sigma-\frac{c}{12}}\left(1+O(q)\right). \end{equation} The coefficient $c_{\nu}(q)$, given by (\ref{ccoeff}), can therefore be expanded in $q$ as: \begin{equation}\label{cnuq} c_\nu(q) = (2\pi)^{2\Delta_{\varepsilon}} \left[C_{\sigma,\sigma}^\varepsilon\right]^2 n_\sigma q^{2\Delta_\sigma}+o(q^{2\Delta_\sigma}). \end{equation} In a similar way, using the formula \cite{fms97}: \begin{equation} \left< T\right>_q=- (2\pi)^2 q\;\partial_q \ln Z(q), \end{equation} and expression (\ref{pfunction}) of the partition function, the coefficient $c_T(q)$ (given by (\ref{ccoeff})) admits the following small $q$ expansion: \begin{equation}\label{expc2} c_T(q) = \frac{(2-D_f)\pi^2}{6}\left(1-24\Delta\frac{n_{V_{\Delta,\Delta}}}{c} q ^{2\Delta}+\cdots\right). \end{equation} The above three assumptions do not put any constraint on the dimension $\Delta$ and multiplicity $n_{V_{\Delta,\Delta}}$ of the field giving the leading contribution to (\ref{expc2}).
For pure percolation, for which the partition function (\ref{pfunction}) is known exactly, this leading contribution is given by the spin field $\sigma$: \begin{equation}\label{cTpperco} c_T(q) = \frac{(2-D_f)\pi^2}{6}\left(1-12(2-D_f)\frac{n_\sigma}{c} q ^{2-D_f}+\cdots\right). \end{equation} In that case the ratio $n_{\sigma}/c$ can be obtained as the limit $Q\to 1$ of the analogous expression for the $Q$-state Potts model. Using the fact that in this limit the central charge behaves as $c_{Q}\sim Q-1 \;(|Q-1|\ll 1)$, the ratio $n_{\sigma}/c$ has a finite non-zero limit as $c\to 0$, $n_{\sigma}/c= 4\pi/(5\sqrt{3})$. \subsection{Numerical protocols for testing CFT predictions}\label{sec:th} \noindent We have seen that, by using a CFT approach, the topological effects on $p_{12}$ encode in principle highly non-trivial information about the critical point. We discuss now how to efficiently extract this information from a numerical study of $p_{12}$ and how to interpret these results. \noindent The torus shape can be exploited to disentangle the contributions of sub-leading and sub-sub-leading terms in (\ref{predperco}). This can be done by comparing the connectivities $p_{12}({\bf x_2}-{\bf x_1})$ and $p_{12}({\bf x_3}-{\bf x_1})$ between the pairs of points $({\bf x_1},{\bf x_2})$ and $({\bf x_1},{\bf x_3})$, aligned along orthogonal axes, as illustrated in Figure \ref{fig:schema}. Note that similar ideas were used in \cite{jps19two}. \noindent Let us consider first the square torus, $M=N$ or $q=e^{-2\pi}$, and the case where ${\bf x_2}-{\bf x_1}={\bf x}^h$ and ${\bf x_3}-{\bf x_1}={\bf x}^v$ with $ {\bf x}^h=|{\bf x}|(1,0)$ and ${\bf x}^v =|{\bf x}|(0,1)$. As the two cycles are equivalent, one has $p_{12}({\bf x}^h)=p_{12}({\bf x}^v)$.
From (\ref{predperco}) and (\ref{eq:genss}), $p_{12}({\bf x}^h)-p_{12}({\bf x}^v)\sim 4\sum_{\Delta-\bar{\Delta}\neq0} C_{\sigma,\sigma}^{V_{\Delta,\bar{\Delta}}}\left< V_{\Delta,\bar{\Delta}}\right>_{q=e^{-2\pi}}N^{-\Delta-\bar{\Delta}}$, which implies $\left< V_{\Delta,\bar{\Delta}}\right>_{q=e^{-2\pi}}=0$ if $\Delta-\bar{\Delta}\neq0$. In particular $\left< T\right>_{q=e^{-2\pi}}=0$ and therefore: \begin{align} c_{T} (e^{-2\pi})=0. \end{align} The connectivity (\ref{predperco}) therefore reduces to: \begin{equation}\label{predpercosquare} p_{12}({\bf x}) = \frac{d_0}{|{\bf x}|^{2(2-D_f)}}\left(1+c_{\nu}\left(q\right)\left(\frac{|{\bf x}|}{N}\right)^{2-\frac{1}{\nu}}+o\left(\left(\frac{|{\bf x}|}{N}\right)^{2}\right)\right), \quad \text{for}\; M=N. \end{equation} \noindent Let us consider now the rectangular torus $M>N$, again with ${\bf x_2}-{\bf x_1}={\bf x}^h$ and ${\bf x_3}-{\bf x_1}={\bf x}^v$. In Figure \ref{fig:prec} we show the corresponding measurements of $p_{12}({\bf x}^h)$ and $p_{12}({\bf x}^{v})$ when $M=2 N$. The two connectivities are now different, which is explained by the simple fact that the paths wrapping around the short cycle ($N$) start to contribute at smaller distances than those wrapping around the long one ($M$). From (\ref{predperco}) and for general ${\bf x}$ we have: \begin{equation}\label{phpv1} p_{12}({\bf x})-p_{12}({\bf x}^{\perp}) = \frac{d_0}{|{\bf x}|^{2(2-D_f)}}\left(4 \cos(2\theta)\frac{2\Delta_{\sigma}}{c} \left< T\right>_q\left(\frac{|{\bf x}|}{N}\right)^{2}+o\left( \left(\frac{|{\bf x}|}{N}\right)^2\right)\right), \end{equation} where ${\bf x}$ and ${\bf x}^{\perp}$ are parametrised as in (\ref{def:polar}), and $c_{T}(q)$ has been replaced by its expression (\ref{ccoeff}).
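The factor $4\cos(2\theta)$ follows directly from the expansion (\ref{eq:genss}): since ${\bf x}^{\perp}$ makes the angle $\theta+\pi/2$ with the horizontal axis, the spin-two contributions subtract as \begin{equation} \cos(2\theta)-\cos(2\theta+\pi)=2\cos(2\theta), \end{equation} which, combined with the multiplicity factor $(2-\delta_{\Delta,\bar{\Delta}})=2$ of the pair $(T,\bar{T})$ in (\ref{eq:genss}), gives the overall factor $4$. In particular, the difference (\ref{phpv1}) vanishes at $\theta=\pi/4$, consistently with the reflection symmetry illustrated in Figure \ref{fig:schema}.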
\begin{figure}[H] \centering \begin{tikzpicture}[scale=1.5] \begin{scope}[rotate = 90] \draw (-1,0) -- (1,0) -- (1,4) -- (-1,4) -- cycle; \draw[step=.1cm,lightgray,very thin] (-1,0) grid (1,4); \draw [line width = .4mm] (0,1)--(.4,1.8) node [left] {${\bf x_2}$} node[midway,below]{${\bf x}$}; \filldraw (.4,1.8) circle (2pt); \draw [line width = .4mm] (0,1)--(-.8,1.4) node [left] {${\bf x_3}$} node [midway,right]{${\bf x}^{\perp}$}; \filldraw (-.8,1.4) circle (2pt); \draw [line width=0.1mm] (.5,1) arc (0:57:.6) node [above, pos=0.5] {$\theta$}; \node at (0,0)[right]{$N$}; \node at (-1,2)[below]{$M$}; \filldraw (0,1) circle (2pt) node [right] {$ {\bf x_1}$}; \end{scope} \begin{scope}[rotate = 90,yshift=-5cm] \draw (-1,0) -- (1,0) -- (1,4) -- (-1,4) -- cycle; \draw[line width=0.1mm, gray,dotted] (-1,1)--(1,1); \draw[line width=0.1mm, gray,dotted] (0,0)--(0,4); \draw[step=.1cm,lightgray,very thin] (-1,0) grid (1,4); \draw [line width = .4mm] (0,1)--(.6,1.6) node [left] {${\bf x_2}$}; \filldraw (.6,1.6) circle (2pt); \draw (0.26,1.35)node[left]{${\bf x}$}; \draw (-0.26,1.4)node[left]{$ {\bf x}^{\perp}$}; \draw [line width = .4mm] (0,1)--(-.6,1.6) node [left] {${\bf x_3}$}; \filldraw (-.6,1.6) circle (2pt); \draw [line width=0.1mm] (.5,1) arc (0:39:.6) node [above, pos=0.5] {$\theta=\frac{\pi}{4}$}; \node at (0,0)[right]{$N$}; \node at (-1,2)[below]{$M$}; \filldraw (0,1) circle (2pt) node [right] {$ {\bf x_1}$}; \draw[dashed, thick] (0,0)--(0,4); \end{scope} \end{tikzpicture}\caption{Left: We take three points ${\bf x_1},{\bf x_2},{\bf x_3}$ on the torus lattice $\mathbb{Z}^2/(N \mathbb{Z}\times M\mathbb{Z})$ such that ${\bf x_2}-{\bf x_1} = {\bf x}$ and ${\bf x_3}-{\bf x_1}={\bf x}^{\perp}$, see (\ref{def:polar}). We measure $p_{12}({\bf x})$ and $p_{12}({\bf x}^{\perp})$, defined in (\ref{def:2conn}). 
Right: When $\theta=\pi/4$, ${\bf x}$ and ${\bf x}^{\perp}$ are symmetric by reflection with respect to the axis parallel to the $M$ axis and passing through ${\bf x_1}$ (dashed line). This implies $p_{12}({\bf x})=p_{12}({\bf x}^{\perp})$ for $\theta=\pi/4$.}\label{fig:schema} \end{figure} \begin{figure}[H] \centering \begin{tikzpicture}[scale=1.2] \begin{semilogxaxis}[ legend cell align=center,legend pos=north west, samples=400, xlabel={$|{\bf x}|/N$}] ] \addplot+[orange,mark=o,only marks,mark size=1.5pt] table[x=r/N,y=rPv] {./plots/2-10_0.6-v.dat}; \addplot+[orange,mark=x,only marks,mark size=1.5pt] table[x=r/N,y=rPh] {./plots/2-10_0.6-h.dat}; \legend{$|{\bf x}|^{2(2-D_f)} p_{12}({\bf x}^{h})$,$|{\bf x}|^{2(2-D_f)} p_{12}({\bf x}^{v})$}; \end{semilogxaxis} \end{tikzpicture}\caption{The connectivity measured for $H=-2/3$, along the small cycle (circles) and the long cycle (crosses) of a torus with $M/N=2$, $N = 2^{10}$. The data points were obtained by averaging over $10^5$ instances of the surface and over the $N\times M$ locations of ${\bf x_1}$ (cf.\ Section \ref{sec:numericalresults}). The connectivity measured along the long cycle of the torus is always smaller than the connectivity measured along the small cycle.}\label{fig:prec} \end{figure} \noindent Equation (\ref{phpv1}) is a clear consequence of the fact that, whenever an anisotropy is introduced, the response of the system is bound to be determined by the stress-energy tensor components $T$ and $\bar{T}$ (see for instance Section 11.3 in \cite{cardy_1996}). It is interesting to note that Monte Carlo algorithms, based on the properties of rectangular tori \cite{Mon_85,Landau_96}, have been proposed to measure the central charge and the leading fields in the partition function \cite{Bastiaansen_98}. However, these methods can only be applied to statistical models for which a direct lattice representation of the stress-energy tensor is available, such as the Ising model or the RSOS models \cite{Koo_1994}.
In our case we do not know the stress-energy lattice representation. Actually, away from the pure percolation point $H=-1$, we do not even know the energy density lattice representation. This is also the reason why the connectivity functions are the most natural observables to study universal critical amplitudes of non-local models. Note that other non-scalar observables have been defined and discussed in \cite{Couvreur_2017,Tan_2019}, where the angular dependence of their two-point function has been measured by Monte-Carlo simulations. \noindent From the expansion (\ref{eq:genss}) of the connectivity, the difference (\ref{phpv1}) in general gets contributions only from fields with a non-zero spin. By lattice symmetry arguments, this difference vanishes for $\theta=\pi/4$, as shown in Figure \ref{fig:schema}. One can directly see from (\ref{eq:genss}) that the only fields which may contribute to (\ref{phpv1}) are fields with spin $\Delta-\bar{\Delta}\equiv 2\;(\mathrm{mod}\;4)$. For instance, one expects in (\ref{phpv1}) a contribution from fields with $(\Delta,\bar{\Delta})=(6,0)$ and $(\Delta,\bar{\Delta})=(4,2)$. These fields exist in any CFT: in CFT jargon, they correspond to the higher-level descendants of the identity, $L_{-6}V_{0,0}$, $L_{-4}L_{-2} V_{0,0}$ and $L_{-4}\bar{L}_{-2}V_{0,0}$, $L_{-2}^2\bar{L}_{-2}V_{0,0}$. In pure percolation there are no fields in the spectrum with spin greater than 2 and physical dimension $\Delta+\bar{\Delta}<6$. If we assume this is true also for correlated percolation $H>-1$, then we have: \begin{equation}\label{phpvh2} \begin{aligned} p_{12}({\bf x})-p_{12}({\bf x}^{\perp}) &= \frac{d_0}{|{\bf x}|^{2(2-D_f)}}\Bigg(4 \cos(2\theta)c_{T}(q)\left(\frac{|{\bf x}|}{N}\right)^{2}\\&+4 \left[\cos(2\theta) c_{6,2}(q)+\cos(6\theta)c_{6,6}(q)\right]\left(\frac{|{\bf x}|}{N}\right)^{6}+o\left( \left(\frac{|{\bf x}|}{N}\right)^6\right)\Bigg).
\end{aligned} \end{equation} Assuming that the identity descendants are the only fields contributing to $c_{6,2}$ and $c_{6,6}$, these coefficients can be fixed by computing the inner products and the matrix elements between the 11 identity descendants existing at level 6. We refer the reader to \cite{jps19two,javerzat2019fourpoint} and references therein for the details of the general procedure. However, the numerical determination of these coefficients is not accurate enough for this cumbersome computation to be worth it. In practice, we use this order-$6$ term as a fitting parameter to obtain better estimates of the order-$2$ coefficient. \subsection{Numerical evidence} \label{sec:resuresu} We summarise here the main numerical results for $p_{12}$ and the conclusions we can draw by comparing these results with the CFT predictions. \subsubsection{Conformal invariance} \noindent The prediction (\ref{predperco}) is, first of all, a powerful test of conformal invariance. Via the numerical simulation of the connectivity we test two predictions: \begin{itemize} \item The dominant topological correction shows a precise interplay between the exponents $\nu$ and $D_f$. In particular the leading correction behaves as $|{\bf x}|^{2(D_f-2)}(|{\bf x}|/N)^{2-1/\nu}$. This effect is more clearly seen on the square torus, see (\ref{predpercosquare}). Figure \ref{fig:beta} shows that the numerical results for the values $H<-1/2$ agree with this prediction. \item The sub-leading term is $\propto |{\bf x}|^{2(D_f-2)} \cos(2\,\theta)(|{\bf x}|/N)^2$. As explained above, the presence of such a term implies the existence of a pair of fields with scaling dimension $\Delta+\bar{\Delta}=2$, which corresponds to the power $2$ in the $(|{\bf x}|/N)^2$ decay, and with spin $\Delta-\bar{\Delta}=\pm 2$, which fixes the $\theta$ dependence. If such fields exist, they correspond by definition to the stress-energy tensor components $T$ and $\bar{T}$.
The presence of $T$ and $\bar{T}$ is the most basic and direct consequence of conformal invariance. In numerical simulations, the sub-leading term is seen by considering a rectangular torus. Figures \ref{fig:logdp1} and \ref{fig:cos} show clearly the $(|{\bf x}|/N)^2$ decay and the $\cos(2\theta)$ dependence. Figure \ref{fig:cos6} shows further that the data is well described by the form (\ref{phpvh2}). \end{itemize} \subsubsection{Spectrum and structure constants}\label{sec:Ssc} \begin{itemize} \item The values of $c_{\nu}(q)$ for different values of $q$ have been measured for $-1<H<-1/2$ and reported in Table \ref{tab:cnu2}. The results support the fact that for $H\leq -3/4$ the universality class is the one of pure percolation. Note that this is a highly non-trivial verification, as it is based not only on the values of critical exponents, but also on the values of constants which depend on the spectrum and fusion coefficients of the theory. For $H>-3/4$, the data are quite consistent with the CFT prediction (\ref{expcnu}), as shown in Figure \ref{fig:cnuq}. This is also consistent with the fact that the fusion of two spin fields produces an energy field. \item We could measure with good precision the dependence of the coefficient $c_T(q)$ on $q$. Figure \ref{fig:c2q} shows that (\ref{cTpperco}) is satisfied, and that the dimension of the dominant field coincides with the dimension of the spin field. \end{itemize} \section{Numerical results on two point connectivity} \label{sec:numericalresults} We generate the random surfaces (\ref{def:genu}, \ref{def:genu_2}) and we measure the connectivity (\ref{def:2conn}) of their level clusters, for the following set of values of $H$: \begin{equation} \label{alvalues} H=-\frac78, \; -\frac{2}{3},\; -\frac58,\; -\frac{21}{40},\; -\frac{19}{40},\; -\frac38,\; -\frac{1}{4} \end{equation} which are representative of the line $-1\leq H<0$.
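Schematically, a single measurement consists in generating one surface, thresholding it, and estimating $p_{12}$ from the resulting level clusters. The following minimal sketch illustrates this estimation step; it is not our production code, and the generic power-law spectrum, the zero threshold level and all function names below are illustrative stand-ins for the kernels and levels entering (\ref{def:genu}) and (\ref{def:genu_2}).

```python
import numpy as np

def fractional_surface(N, M, H, rng):
    """Gaussian surface with an illustrative power-law spectrum
    S(k) ~ |k|^(-2-2H); the kernels S_1, S_2 of the text differ in detail."""
    ky = np.fft.fftfreq(M)[:, None]
    kx = np.fft.fftfreq(N)[None, :]
    k2 = kx**2 + ky**2
    k2[0, 0] = np.inf                       # drop the zero mode
    amp = k2 ** (-(2 + 2 * H) / 4)          # sqrt of the spectral density
    noise = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
    u = np.fft.ifft2(amp * noise).real
    return u - u.mean()

def cluster_labels(occ):
    """Union-find labelling of occupied sites on the torus Z^2/(N Z x M Z)."""
    M, N = occ.shape
    parent = np.arange(M * N)
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    idx = np.arange(M * N).reshape(M, N)
    for axis in (0, 1):                     # bonds along both torus cycles
        nb = np.roll(idx, -1, axis=axis)
        both = occ & np.roll(occ, -1, axis=axis)
        for i, j in zip(idx[both], nb[both]):
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
    return np.array([find(i) for i in range(M * N)]).reshape(M, N)

def p12(occ, dx, dy):
    """P[x and x+(dx,dy) lie in the same level cluster], averaged over
    the M*N positions of x (translation invariance)."""
    lab = cluster_labels(occ)
    lab_sh = np.roll(np.roll(lab, -dy, axis=0), -dx, axis=1)
    occ_sh = np.roll(np.roll(occ, -dy, axis=0), -dx, axis=1)
    return (occ & occ_sh & (lab == lab_sh)).mean()

# One sample: threshold the surface at the (illustrative) level h = 0.
rng = np.random.default_rng(0)
occ = fractional_surface(64, 64, H=-2/3, rng=rng) > 0.0
print(p12(occ, 4, 0))   # one-sample estimate of the connectivity
```

Averaging the last estimate over many surface instances, as described below, gives the Monte Carlo estimator of $p_{12}({\bf x})$; error bars then follow from the sample variance.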
Due to the periodicity properties (\ref{eq:torus}), we have a site percolation model on a doubly-periodic lattice of size $M\times N$, i.e.\ the toric lattice $\mathbb{Z}^2/(N \mathbb{Z}\times M\mathbb{Z})$. In the square torus case ($M= N$), $p_{12}({\bf x})=p_{12}(|{\bf x}|)$. Without loss of generality we measure $p_{12}$ between pairs of points ${\bf x_1}$ and ${\bf x_2}$, aligned on the vertical or horizontal axes. For each $H$ in (\ref{alvalues}), the data are taken for distances $|{\bf x_1}-{\bf x_2}| = |{\bf x}| = 1,2,4,\cdots,N/2$ and $|{\bf x}| = 3,6,12,\cdots, 3N/8$. For the rectangular torus, $M\neq N$, we measure the connectivity between the points ${\bf x_1}$ and ${\bf x_2}$, and between ${\bf x_1}$ and ${\bf x_3}$, ${\bf x_3}-{\bf x_1}=({\bf x_2}-{\bf x_1})^{\perp}={\bf x}^{\perp}$, see Figure \ref{fig:schema}. When $\bf x$ and $\bf x^\perp$ are aligned with the cycles of the torus ($\theta=0$), measurements are taken for aspect ratios $M/N = 1,2,\cdots, 5$, and for distances $|{\bf x}| = 1,2,4,\cdots,N/2$, and $|{\bf x}|= 3,6,12,\cdots, 3N/8$. Fixing the aspect ratio, we measured $p_{12}({\bf x})$ for non-zero angles $\theta$. On the lattice, angles are of the form $\theta = \arctan\left(\frac{a_2}{a_1}\right)$, with $a_2$ (resp.\ $a_1$) a given number of lattice sites in the $M$ (resp.\ $N$) direction. Distances are then taken to be $|{\bf x}| = \sqrt{a_1^2+a_2^2}\left(1,2,4,\cdots\right)$, $|{\bf x}| = \sqrt{a_1^2+a_2^2}\left(3,6,12,\cdots\right)$, such that $|{\bf x}| \leq N/2$. We chose angles $\theta=0, \arctan(1/4),\arctan(1/3),\arctan(1/2),\arctan(2/3)$, for fixed aspect ratio $M/N=3$. \noindent Exploiting the translational invariance of the surface distribution, we average over the position ${\bf x_1}$ for each instance of $u({\bf x})$, and then over $10^5$ instances. In the scaling limit, the dependence of $p_{12}({\bf x})$ on the lattice size $N$ is expected to enter only through the ratio $|{\bf x}|/N$.
Plotting the connectivity as a function of $|{\bf x}|/N$, we observe that corrections to scaling are still visible, as the data points for different sizes do not collapse at large distances. In Figure \ref{fig:Hl} we show the data for $H=-5/8$ and for lattice sizes $M=N = 2^9-2^{12}$. One can see that the scaling limit is still not attained. These non-universal effects become even more important for larger $H$. As shown in Figure \ref{fig:Hb} for $H = -3/8$, even the infinite plane scaling limit is not clearly attained at the sizes of our simulations. Of course these non-universal effects make the analysis of the universal topological effects less precise, in particular for studying the contributions of the spinless fields. On the other hand, we observed that the non-universal effects are less important for the surface (\ref{def:genu_2}) generated by the kernel $\hat{S}_{2}({\bf k})$, at least for values of $H<-1/2$. This is shown in Figure \ref{fig:Hb2}. For values of $H<-1/2$ and for the two surfaces (\ref{def:genu}) and (\ref{def:genu_2}) we could determine the non-universal constant $d_0$, as well as the dimension of the leading spinless contribution. For the latter, the consistency of the results obtained from the two surfaces makes the verification of the CFT predictions more solid. The coefficient $c_\nu$ and its dependence on the aspect ratio, on the other hand, could only be determined with sufficient precision for the surface (\ref{def:genu_2}).
\begin{figure}[H] \begin{subfigure}{.4\textwidth} \begin{tikzpicture} \begin{semilogxaxis}[ legend cell align=center,legend pos=north west, xlabel={$|{\bf x}|/N$},ylabel={$|{\bf x}|^{2(2-D_f)}p_{12}(|{\bf x}|)$}] ] \addplot+[blue,mark=o,only marks,mark size=1pt] table[x=r/N,y=rP] {./plots/9_0.75.dat}; \addplot+[orange,mark=o,only marks,mark size=1pt] table[x=r/N,y=rP] {./plots/10_0.75.dat}; \addplot+[green,mark=o,only marks,mark size=1pt] table[x=r/N,y=rP] {./plots/11_0.75.dat}; \addplot+[red,mark=o,only marks,mark size=1pt] table[x=r/N,y=rP] {./plots/12_0.75.dat}; \legend{$N = 2^9$,$N = 2^{10}$,$N=2^{11}$,$N=2^{12}$} \end{semilogxaxis} \end{tikzpicture}\caption{$H = -5/8$}\label{fig:Hl} \end{subfigure}\hfill \begin{subfigure}{.4\textwidth} \begin{tikzpicture} \begin{semilogxaxis}[ legend cell align=center,legend pos=north west, xlabel={$\frac{|{\bf x}|}{N}$},ymin = 0.36,ymax = 0.41] ] \addplot+[blue,mark=o,only marks,mark size=1pt] table[x=r/N,y=rP] {./plots/9_1.25.dat}; \addplot+[orange,mark=o,only marks,mark size=1pt] table[x=r/N,y=rP] {./plots/10_1.25.dat}; \addplot+[green,mark=o,only marks,mark size=1pt] table[x=r/N,y=rP] {./plots/11_1.25.dat}; \addplot+[red,mark=o,only marks,mark size=1pt] table[x=r/N,y=rP] {./plots/12_1.25.dat}; \end{semilogxaxis} \end{tikzpicture}\caption{$H = -3/8$}\label{fig:Hb} \end{subfigure} \caption{ Convergence of the data points generated with surface (\ref{def:genu}), on the square torus of different sizes, for $H=-5/8$ (a) and $H=-3/8$ (b). 
Error bars are smaller than the marker size and we do not display them.}\label{fig:nonuniv} \end{figure} \begin{figure}[H] \begin{subfigure}{.4\textwidth} \begin{tikzpicture} \begin{semilogxaxis}[ legend cell align=center,legend pos=north west, xlabel={$|{\bf x}|/N$},ylabel={$|{\bf x}|^{2(2-D_f)}p_{12}(|{\bf x}|)$}] ] \addplot+[blue,mark=square,only marks,mark size=1pt] table[x=r/N,y=rP] {./plots/9_0.75-2.dat}; \addplot+[orange,mark=square,only marks,mark size=1pt] table[x=r/N,y=rP] {./plots/10_0.75-2.dat}; \addplot+[green,mark=square,only marks,mark size=1pt] table[x=r/N,y=rP] {./plots/11_0.75-2.dat}; \addplot+[red,mark=square,only marks,mark size=1pt] table[x=r/N,y=rP] {./plots/12_0.75-2.dat}; \legend{$N = 2^9$,$N = 2^{10}$,$N=2^{11}$,$N=2^{12}$} \end{semilogxaxis} \end{tikzpicture}\caption{$H = -5/8$}\label{fig:Hl2} \end{subfigure}\hfill \begin{subfigure}{.4\textwidth} \begin{tikzpicture} \begin{semilogxaxis}[ legend cell align=center,legend pos=north west, xlabel={$\frac{|{\bf x}|}{N}$},ymin = 0.36,ymax = 0.41] ] \addplot+[blue,mark=square,only marks,mark size=1pt] table[x=r/N,y=rP] {./plots/9_1.25-2.dat}; \addplot+[orange,mark=square,only marks,mark size=1pt] table[x=r/N,y=rP] {./plots/10_1.25-2.dat}; \addplot+[green,mark=square,only marks,mark size=1pt] table[x=r/N,y=rP] {./plots/11_1.25-2.dat}; \end{semilogxaxis} \end{tikzpicture}\caption{$H = -3/8$}\label{fig:Hb2} \end{subfigure} \caption{ Convergence of the data points generated using the surface (\ref{def:genu_2}), on the square torus of different sizes, for $H=-5/8$ (a) and $H=-3/8$ (b).}\label{fig:nonuniv2} \end{figure} \noindent A very remarkable fact is that, for both surfaces, these correction-to-scaling terms cancel when one takes differences of connectivities. This is shown in Figure \ref{fig:nonunivdiff} for the same values of $H$. The corrections may originate, for instance, from the fact that we are not sufficiently close to the critical point.
More generally, any perturbation that drives the system out of the critical point and that does not break rotational invariance is related to a spinless field, whose contributions to the connectivity are isotropic. This explains why they disappear by taking the difference $p_{12}({\bf x})-p_{12}({\bf x}^{\perp})$. This mechanism allows one to test the contribution of the fields with spin, and therefore of the stress-energy tensor, with a very good precision. For $H<-1/2$, our determination of the constants $d_0$ moreover allowed us to access the value of the universal coefficient $c_T(q)$. For $H>-1/2$, we could only determine the behaviour of $d_0\, c_T(q)$ with $q$. \begin{figure}[H] \begin{subfigure}{.4\textwidth} \begin{tikzpicture} \begin{semilogxaxis}[ legend cell align=center,legend pos=north west, xlabel={$\frac{|{\bf x}|}{N}$},ylabel={$|{\bf x}|^{2(2-D_f)}\left(p_{12}({\bf x})-p_{12}({\bf x}^{\perp})\right)$}] ] \addplot+[blue,mark=o,only marks,mark size=1.25pt] table[x=r/N,y=rPv-h] {./plots/2-9_0.75-v-h.dat}; \addplot+[orange,mark=o,only marks,mark size=1.25pt] table[x=r/N,y=rPv-h] {./plots/2-10_0.75-v-h.dat}; \addplot+[green,mark=o,only marks,mark size=1.25pt] table[x=r/N,y=rPv-h] {./plots/2-11_0.75-v-h.dat}; \legend{$N = 2^9$,$N=2^{10}$,$N = 2^{11}$} \end{semilogxaxis} \end{tikzpicture}\caption{$H = -5/8$} \end{subfigure}\hfill \begin{subfigure}{.4\textwidth} \begin{tikzpicture} \begin{semilogxaxis}[ legend cell align=center,legend pos=north west, xlabel={$\frac{|{\bf x}|}{N}$}] ] \addplot+[blue,mark=o,only marks,mark size=1.25pt] table[x=r/N,y=rPv-h] {./plots/2-9_1.25-v-h.dat}; \addplot+[orange,mark=o,only marks,mark size=1.25pt] table[x=r/N,y=rPv-h] {./plots/2-10_1.25-v-h.dat}; \addplot+[green,mark=o,only marks,mark size=1.25pt] table[x=r/N,y=rPv-h] {./plots/2-11_1.25-v-h.dat}; \end{semilogxaxis} \end{tikzpicture}\caption{$H = -3/8$} \end{subfigure} \caption{Convergence of the data points for the difference of connectivities (\ref{phpv1}) on the rectangular torus
$M=2N$, for $H=-5/8$ (a) and $H=-3/8$ (b).}\label{fig:nonunivdiff} \end{figure} \subsection{Plane limit}\label{sec:numsquare} For $N=M=2^{12}$, we fit the data points for $|{\bf x}|\in[4,\,128]$, expected to be well described by the infinite plane limit (\ref{2connplane}) (see Figure \ref{fig:p1}), to the form \begin{equation} p_{12}({\bf x})\sim|{\bf x}|^{-2(2-D_f^{(2)})}. \end{equation} The values $D_f^{(2)}$ of the fractal dimension are given in Table \ref{tab:df}. To extract the topological corrections (\ref{predpercosquare}), we fit our numerical data to the form: \begin{equation}\label{fitp} |{\bf x}|^{2(2-D_f^{(2)})}p_{12}({\bf x}) = d_0\left(1+\frac{d_1}{|{\bf x}|^{b_1}}\right)\left(1+c_\nu\left(\frac{|{\bf x}|}{N}\right)^{2-1/\nu}\right). \end{equation} The first factor takes into account the non-universal, small distance effects due to the lattice. We refer the reader to \cite{prs16,prs19} for a more detailed discussion of these ultraviolet corrections. The values of $d_0$ are reported in Table \ref{tab:d0}. The numerical values for the universal coefficient $c_\nu$ are given in Table \ref{tab:cnu2}. They were obtained from the data generated using kernel (\ref{def:kernel2}), which converge faster to the scaling limit, and for which the agreement with (\ref{fitp}) is excellent. This is shown in Figure \ref{fig:beta}. \begin{table}[ht!]\centering \begin{tabular}{ |c|c| c|} \hline $H$ & $d_0^{(1)}$ & $d_0^{(2)}$ \\ \hline -7/8 &0.3438(1) & 0.3433(2) \\ -2/3 &0.3490(1) & 0.3482(1)\\ -5/8 &0.3521(5) & 0.3495(1)\\ -21/40 &0.357(1) & 0.355(9)\\ \hline \end{tabular} \caption{Non-universal constant $d_0$ determined from the fit (\ref{fitp}), for surfaces generated (1) with kernel (\ref{def:kernel}) and (2) with kernel (\ref{def:kernel2}).
}\label{tab:d0} \end{table} \begin{figure}[H] \centering \begin{tikzpicture}[scale=1.1] \begin{loglogaxis}[ legend cell align=center, xlabel={$|{\bf x}|/N$}, ylabel={$|{\bf x}|^{2(2-D_f)}p_{12}({\bf x})-d_0$}, legend pos=south east] ] \addplot+[blue,mark=o,only marks,mark size=1.2pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y] { x y 0.0117188 0.0108069 0.015625 0.0108857 0.0234375 0.0114257 0.03125 0.0122599 0.046875 0.0144307 0.0625 0.01706 0.09375 0.023273 0.125 0.0304313 0.1875 0.0466871 0.25 0.0651391 0.375 0.108318 0.5 0.162252 }; \addplot+[orange,mark=o,only marks,mark size=1pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y] { x y 0.0117188 0.00276658 0.015625 0.00294666 0.0234375 0.00364526 0.03125 0.00458705 0.046875 0.00683913 0.0625 0.00945586 0.09375 0.0156 0.125 0.0226274 0.1875 0.0388708 0.25 0.0575341 0.375 0.10152 0.5 0.156477 }; \addplot+[green,mark=o,only marks,mark size=1.2pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y] { x y 0.0117188 0.000355638 0.015625 0.00062095 0.0234375 0.0013073 0.03125 0.00208451 0.046875 0.00390437 0.0625 0.00605193 0.09375 0.011167 0.125 0.0172845 0.1875 0.0319178 0.25 0.0492552 0.375 0.0916261 0.5 0.146207 }; \addplot+[blue,mark=square,only marks,mark size=1.2pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y] { x y 0.0117188 0.00560505 0.015625 0.00539867 0.0234375 0.00574151 0.03125 0.00660821 0.046875 0.0091419 0.0625 0.0122712 0.09375 0.0195005 0.125 0.027599 0.1875 0.0454961 0.25 0.0651318 0.375 0.109511 0.5 0.162959 }; \addplot+[orange,mark=square,only marks,mark size=1pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y] { x y 0.0117188 0.00119341 0.015625 0.0015651 0.0234375 0.0024756 0.03125 0.00354599 0.046875 0.00600007 0.0625 0.00874494 0.09375 0.0149663 0.125 0.0219447 0.1875 0.0378658 0.25 0.0560502 0.375 0.0989814 0.5 0.153092 }; \addplot+[green,mark=square,only marks,mark size=1.2pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y] { x y 0.0117188 0.00202245 0.015625 
0.00219796 0.0234375 0.00275686 0.03125 0.00348562 0.046875 0.00533878 0.0625 0.00754762 0.09375 0.0127913 0.125 0.0189074 0.1875 0.0331826 0.25 0.0498815 0.375 0.0906985 0.5 0.14404 }; \legend{$H=-7/8$,$H=-2/3$,$H=-5/8$}; \addplot+[blue,forget plot,mark=none,domain=0.01:0.5] {0.35*x^1.25}; \addplot+[orange,forget plot,mark=none,domain=0.01:0.5] {0.352*x^1.333}; \addplot+[green,forget plot,mark=none,domain=0.01:0.5] {0.33*x^1.375}; \end{loglogaxis} \end{tikzpicture}\caption{ Numerical data for $|{\bf x}|^{2(2-D_f)}p_{12}({\bf x})-d_0$ for $H = -7/8,-2/3,-5/8$, from surfaces (\ref{def:genu}) (circles) and (\ref{def:genu_2}) (squares). The lines show the prediction (\ref{predpercosquare}) with the exponent $2-1/\nu(H)$ given by (\ref{nulong}). } \label{fig:beta} \end{figure} \subsection{Evidence of conformal invariance} With $M\neq N$, and following prediction (\ref{phpv1}), the quantity $\log\left[|{\bf x}|^{2(2-D_f^{(2)})}\left(p_{12}({\bf x})-p_{12}({\bf x}^{\perp})\right)\right]$ should follow a line of slope 2. This is very clear for $H<-1/2$, as shown in Figure \ref{fig:logdp1}. \begin{figure}[!ht] \centering \begin{tikzpicture} \begin{loglogaxis}[ legend cell align=center,legend pos=north west, samples=400, xlabel={$|{\bf x}|/N$}, ylabel = {$|{\bf x}|^{2(2-D_f)}\left(p_{12}({\bf x})-p_{12}({\bf x}^{\perp})\right)$}] \addplot+[green,mark=o,only marks,mark size=1.5pt] table[x=r/N,y=rdP] {./plots/2-11_0.6.dat}; \addplot+[gray,mark=none,forget plot,domain=0.01:.5] {0.14658*x^2.06585}; \end{loglogaxis} \end{tikzpicture}\caption{Difference of connectivities (\ref{phpv1}) for $H=-2/3$, measured for $M/N=2,\,N=2^{11}$ and $\theta=0$.
The best fit line has slope $\sim2.07$, indicating the presence of the stress-energy tensor.}\label{fig:logdp1} \end{figure} \noindent When $H>-1/2$, the slope increases significantly: either there is no order 2 term (conformal invariance is broken), or this term is still present, with higher-order corrections making the effective slope significantly greater than 2. Assuming the latter, and that the difference of connectivities is described by (\ref{phpvh2}) on the whole line $H<0$, we fit our data for different angles $\theta$ to the form: \begin{equation}\label{fitvh1} |{\bf x}|^{2(2-D_f^{(2)})}\left(p_{12}({\bf x})-p_{12}({\bf x}^{\perp})\right) = c_2(\theta) \left(\frac{|{\bf x}|}{N}\right)^2+ c_6(\theta) \left(\frac{|{\bf x}|}{N}\right)^6. \end{equation} This fit shows good consistency with the data for all values of $H$, and allows us to determine $c_2(\theta)$ with good precision. In Figure \ref{fig:cos} we show that $c_2(\theta)$ has the expected behaviour (\ref{eq:genss}): $c_2(\theta)\propto\cos(2\theta)$. This makes manifest the presence of a field with conformal dimension 2 and spin 2, and therefore of conformal invariance, for all $H<0$. \begin{figure}[H] \centering \begin{subfigure}{0.4\textwidth} \begin{tikzpicture} \node at (5,5) {$M/N = 3$}; \node at (5,4.5) {$N = 2^{10}$}; \begin{axis}[ legend cell align=center,legend pos=south west, samples=100, xlabel={$\theta$},ylabel={$c_2(\theta)$}] \addplot+[orange,mark=o,only marks,mark size=1.5pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y,y error = error] { x y error 0. 0.191 0.0008 0.244979 0.1664 0.0007 0.321751 0.1516 0.0006 0.463648 0.1141 0.0004 0.588003 0.0722 0.0003 0.785398 0. 0.
}; \addplot+[gray,mark=none,domain=0:.79] {0.191*cos(deg(2*x))}; \end{axis} \end{tikzpicture}\caption{$H=-2/3$} \end{subfigure}\hfill \begin{subfigure}{0.4\textwidth} \begin{tikzpicture} \begin{axis}[ legend cell align=center,legend pos=south west, samples=100, xlabel={$\theta$}] \addplot+[orange,mark=square,only marks,mark size=1.5pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y,y error = error] { x y error 0. 0.204 0.004 0.244979 0.181 0.004 0.321751 0.166 0.003 0.463648 0.124 0.002 0.588003 0.078 0.002 0.785398 0. 0. }; \addplot+[gray,mark=none,domain=0:.79] {0.204*cos(deg(2*x))}; \end{axis} \end{tikzpicture}\caption{$H=-3/10$} \end{subfigure} \caption{Values of $c_2(\theta)$ from fit (\ref{fitvh1}), for different angles $\theta$, for $H<-1/2$ (a) and $H>-1/2$ (b). The curves show the prediction $c_2(\theta)=c_2(0)\cos(2\theta)$.}\label{fig:cos} \end{figure} \noindent The behaviour of the order 6 coefficient is also in fair agreement with prediction (\ref{phpvh2}), as shown in Figure \ref{fig:cos6}. \begin{figure}[H] \centering \begin{subfigure}{0.4\textwidth} \begin{tikzpicture} \node at (5,5) {$M/N = 3$}; \node at (5,4.5) {$N = 2^{10}$}; \begin{axis}[ legend cell align=center,legend pos=south west, samples=100, xlabel={$\theta$},ylabel={$c_6(\theta)$}] \addplot+[orange,mark=o,only marks,mark size=1.5pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y,y error = error] { x y error 0. 0.16 0.01 0.244979 0.08 0.01 0.321751 0.019 0.005 0.463648 -0.019 0.005 0.588003 -0.009 0.007 0.785398 0. 0.00001 }; \addplot+[gray,mark=none,domain=0:.79] {0.0814977*cos(deg(2*x))+0.0685128*cos(deg(6*x))}; \end{axis} \end{tikzpicture}\caption{$H=-2/3$} \end{subfigure}\hfill \begin{subfigure}{0.4\textwidth} \begin{tikzpicture} \begin{axis}[ legend cell align=center,legend pos=south west, samples=100, xlabel={$\theta$}] \addplot+[orange,mark=square,only marks,mark size=1.5pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y,y error = error] { x y error 0.
0.45 0.06 0.244979 0.32 0.07 0.321751 0.14 0.03 0.463648 0.08 0.03 0.588003 0.15 0.05 0.785398 0. 0.00001 }; \addplot+[gray,mark=none,domain=0:.79] {0.328002*cos(deg(2*x))+0.0890494*cos(deg(6*x))}; \end{axis} \end{tikzpicture}\caption{$H=-3/10$} \end{subfigure} \caption{Values of $c_6(\theta)$ from fit (\ref{fitvh1}), for different angles $\theta$, for $H<-1/2$ (a) and $H>-1/2$ (b). The curves are fits to the form (\ref{phpvh2}): $c_6(\theta)=c_{6,2}\cos(2\theta)+c_{6,6}\cos(6\theta)$.}\label{fig:cos6} \end{figure} \subsection{Spectrum and structure constants}\label{sec:numSsc} Setting $\theta$ to zero, we varied the aspect ratio and obtained $c_\nu$ and $c_T$ as functions of $M/N$, given in Tables \ref{tab:cnu2} and \ref{tab:c2MN}. \noindent The coefficient $c_\nu$ is obtained by fitting the sum of connectivities $\frac{1}{2}\left|{\bf x}\right|^{2(2-D_f^{(2)})}\left(p_{12}({\bf x})+p_{12}({\bf x}^{\perp})\right)$ to the form (\ref{fitp}). Taking the sum allows us to remove the order 2 contributions of the stress-tensor fields. \begin{table}[ht!] \centering \begin{tabular}{|l|c|c|c|c|}\hline \diagbox[width=5em]{$H$}{$M/N$}& 1 & 2 & 3 & 4 \\ \hline percolation & 0.355402 &0.185569 &0.0964413 &0.0501208\\ \hline -7/8 & 0.371(5) & 0.170(5) & 0.13(1) &0.040(5)\\ \hline -2/3 & 0.352(4) & 0.22(2) & 0.135(5) &0.090(5)\\ \hline -5/8 & 0.327(3) & 0.15(1) & 0.130(5) &0.075(5) \\ \hline \end{tabular}\caption{Best fit parameter $c_\nu(M/N)$, for different aspect ratios $M/N$. These values were obtained with the surface (\ref{def:genu_2}), which showed better convergence.
When $H>-1/2$, the non-universal effects are too strong and are not described by the fit (\ref{fitp}).}\label{tab:cnu2} \end{table} \noindent Figure \ref{fig:cnuq} shows that the behaviour of $c_\nu(q)$ is in fair agreement with prediction (\ref{expcnu}): \begin{equation}\label{fitcnuq} c_\nu(q) \sim q^x, \end{equation} with the slope $x$ given by the dimension of the spin field, $x=2\Delta_\sigma = 2-D_f$, see Table \ref{tab:dnu}. We point out that this behaviour is incompatible with the energy field being degenerate at level 2. Indeed, if it were degenerate, the slope $x = 2\Delta_\sigma$ would be a continuously varying function of the central charge \cite{jps19two} and would be expected to show significant variation with $H$. In general, the presence of degenerate fields is a crucial feature of a CFT \cite{rib14}, and in some cases it allows one to solve the theory \cite{dofa_npb85,dofa_pl85,zz95,mr17}. For pure percolation, the energy field is degenerate, which leads to relations between the different structure constants of the theory \cite{ei15,mr17,saleur2020}. \begin{figure}[H] \centering \begin{tikzpicture} \begin{axis}[ legend cell align=center,legend pos=south west, samples=100, ylabel={$\log(c_\nu(q))$},xlabel={$\log q$}] \addplot+[blue,mark=square,only marks,mark size=1pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y,y error=error] { x y error -6.28319 -3.49463 0.00284091 -12.5664 -3.96463 0.00909091 -18.8496 -4.45298 0.037037 -25.1327 -4.85845 0.0555556 }; \addplot+[gray,mark=none,domain=-26:-6] {-3.02582 + 0.0746289*x}; \end{axis} \end{tikzpicture} \caption{$c_\nu$ as a function of $q$ and the best fit line, for $H=-2/3$.}\label{fig:cnuq} \end{figure} \begin{table}[H] \centering \begin{tabular}{|c|c|}\hline $H$ & $x$ \\ \hline -7/8 & 0.10(1)\\ \hline -2/3 & 0.08(2) \\ \hline -5/8 & 0.08(1) \\ \hline \end{tabular} \caption{Exponent $x$ determining the behaviour of $c_\nu(q)$ with $q$ (\ref{fitcnuq}), obtained from fitting $\log c_\nu(q)$.
These values are to be compared to the value of the spin dimension, which remains equal to the pure percolation value $2-D_f^{\mathrm{pure}}\sim 0.104$ when $H<-1/2$.}\label{tab:dnu} \end{table} \noindent Setting $x$ to $2-D_f$, a fit of $c_\nu(q)$ as a function of $q^{2-D_f}$ gives an estimate of the quantity $\left[C_{\sigma,\sigma}^{\varepsilon}\right]^2 n_\sigma$ (see (\ref{cnuq})), given in Table \ref{tab:Cn}. \begin{table}[ht!] \centering \begin{tabular}{|c|c|}\hline $H$ & $\left[C_{\sigma,\sigma}^{\varepsilon}\right]^2 n_\sigma$\\ \hline pure percolation & $\pi\sqrt{3}\left(\frac49 \frac{\Gamma(7/4)}{\Gamma(1/4)}\right)^2\sim 0.069$ \\ \hline -7/8 & 0.07(1)\\ \hline -2/3 & 0.05(1)\\ \hline -5/8 & 0.04(1)\\ \hline \end{tabular}\caption{Estimation of the coefficient $\left[C_{\sigma,\sigma}^{\varepsilon}\right]^2 n_\sigma$. The percolation prediction was computed in \cite{jps19two}.}\label{tab:Cn} \end{table} Conversely, to obtain $c_T(q)$ we fit the difference $|{\bf x}|^{2(2-D_f^{(2)})}\left(p_{12}({\bf x})-p_{12}({\bf x}^{\perp})\right)$ to the form: \begin{equation} |{\bf x}|^{2(2-D_f^{(2)})}\left(p_{12}({\bf x})-p_{12}({\bf x}^{\perp})\right) = c_2(q) \left(\frac{|{\bf x}|}{N}\right)^2+ c_6(q) \left(\frac{|{\bf x}|}{N}\right)^6, \end{equation} where \begin{equation} c_2(q) = 4d_0\,c_T(q). \end{equation} The values we obtained for $c_T(q)$, for both types of surfaces, are given in Tables \ref{tab:c2MN}, \ref{tab:c2MN2}. Figure \ref{fig:c2S} shows the consistency between the two sets of values, as expected from universality.
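Since $D_f^{(2)}$ is fixed beforehand, this two-parameter fit is linear in $c_2(q)$ and $c_6(q)$ and reduces to ordinary least squares on the basis $\{(|{\bf x}|/N)^2,(|{\bf x}|/N)^6\}$. A minimal Python sketch of this step (with synthetic data standing in for our measurements; the input values $0.38$ and $0.09$ are illustrative only):

```python
import numpy as np

# Synthetic stand-in for |x|^{2(2-Df)} (p12(x) - p12(x_perp)) at r = |x|/N.
r = np.linspace(0.05, 0.5, 12)
y = 0.38 * r**2 + 0.09 * r**6  # illustrative c_2(q), c_6(q) values

# Design matrix with columns r^2 and r^6; solve min ||A c - y||^2.
A = np.column_stack([r**2, r**6])
(c2, c6), *_ = np.linalg.lstsq(A, y, rcond=None)
```

In the actual analysis the input is the measured difference of connectivities, and the error bars on $c_2(q)$ and $c_6(q)$ follow from the covariance of the least-squares solution.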
\begin{table}[H] \centering \begin{tabular}{|l|c|c|c|c|c|}\hline \diagbox[width=5em]{$H$}{$M/N$}& 1 & 2 & 3 & 4 & 5 \\ \hline pure percolation&0 & 0.3496 & 0.5109 & 0.5947 &0.6383\\ \hline -7/8& 0 &0.376(5) &0.531(5) &0.610(5) &0.645(5) \\ \hline -2/3& 0 &0.383(5) &0.547(5) &0.607(5) &0.640(5) \\ \hline -5/8& 0 & 0.395(5) &0.555(5) &0.619(5) &0.641(5) \\ \hline \end{tabular}\caption{Best fit parameter $c_2(M/N)/d_0$ for different aspect ratios $M/N$, for surfaces (\ref{def:genu}). The first line gives the numerical value of prediction (\ref{cTpperco}) for pure percolation.}\label{tab:c2MN} \end{table} \begin{table}[H] \centering \begin{tabular}{|l|c|c|c|c|c|}\hline \diagbox[width=5em]{$H$}{$M/N$}& 1 & 2 & 3 & 4 & 5 \\ \hline -7/8& 0 &0.355(5) &0.493(5) &0.596(5) &0.602(5) \\ \hline -2/3& 0 &0.340(5) &0.494(5) &0.574(5) &0.600(5) \\ \hline -5/8& 0 & 0.363(5) &0.494(5) &0.581(5) &0.613(5) \\ \hline \end{tabular}\caption{Best fit parameter $c_2(M/N)/d_0$ for different aspect ratios $M/N$, for surfaces (\ref{def:genu_2}).}\label{tab:c2MN2} \end{table} \begin{figure}[H] \centering \begin{tikzpicture} \begin{axis}[ legend cell align=center,legend pos=south east, samples=100, ylabel={$c_2(M/N)/d_0$},xlabel={$M/N$}] \addplot+[blue,mark=square,only marks,mark size=1pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y,y error=error] { x y error 1 0 0 2 0.355 0.005 3 0.4893 0.005 4 0.596 0.005 5 0.602 0.005 }; \addplot+[orange,mark=square,only marks,mark size=1pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y, y error = error] { x y error 1 0 0 2 0.340 0.005 3 0.494 0.005 4 0.574 0.005 5 0.600 0.005 }; \addplot+[green,mark=square,only marks,mark size=1pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y, y error = error] { x y error 1 0 0 2 0.363 0.005 3 0.494 0.005 4 0.581 0.005 5 0.613 0.005 }; \addplot+[blue,mark=o,only marks,mark size=1pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y,y error=error] { x y error 1 0 0 2 0.376 0.005 3 0.531 0.005 4
0.610 0.005 5 0.645 0.005 }; \addplot+[orange,mark=o,only marks,mark size=1pt,,error bars/.cd,y dir=both,y explicit] table[x=x,y=y,y error=error] { x y error 1 0 0 2 0.383 0.005 3 0.55 0.01 4 0.607 0.005 5 0.640 0.005 }; \addplot+[green,mark=o,only marks,mark size=1pt,,error bars/.cd,y dir=both,y explicit] table[x=x,y=y,y error=error] { x y error 1 0 0 2 0.395 0.005 3 0.555 0.005 4 0.619 0.005 5 0.641 0.005 }; \legend{$H=-7/8$,$H=-2/3$,$H=-5/8$}; \end{axis} \end{tikzpicture}\caption{ Comparison of the numerical values obtained for the universal quantity $c_2(M/N)/d_0$, for different Hurst exponents, for surfaces (\ref{def:genu}) (circles) and (\ref{def:genu_2}) (squares).}\label{fig:c2S} \end{figure} \noindent Following prediction (\ref{expc2}), we fit the quantity $\log\left(2\frac{2-D_f}{3}\pi^2 - \frac{c_2(q)}{d_0}\right)$ as a function of $\log q$ to a line. This is shown in Figure \ref{fig:c2q}, and we obtain values for the dominant dimension $\Delta$ close to the dimension of the spin field, see Table \ref{tab:2d}. \begin{figure}[H] \centering \begin{subfigure}{.4\textwidth} \begin{tikzpicture} \begin{axis}[ legend cell align=center, xlabel={$\log q$}, ylabel={$\log\left(2\frac{2-D_f}{3}\pi^2 - c_2(q)/d_0\right)$}, legend pos=south east] ] \addplot+[blue,mark=o,only marks,mark size=1.25pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y,y error = error] { x y error -6.28319 -0.377768 0 -12.5664 -1.19604 0.016535 -18.8496 -1.97769 0.03613 -25.1327 -2.54607 0.0637843 -31.4159 -3.09248 0.110158 }; \addplot+[gray,forget plot,mark=none,domain=-32:-6] {0.33496 + 0.113434*x}; \end{axis} \end{tikzpicture} \end{subfigure}\hfill \begin{subfigure}{.4\textwidth} \begin{tikzpicture} \begin{axis}[ legend cell align=center, xlabel={$q^{2-D_f}$}, ylabel={$c_2(q)/d_0$}, legend pos=south east] ] \addplot+[blue,mark=o,only marks,mark size=1.25pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y,y error = error] { x y error 0.519703 0. 
0 0.270091 0.383 0.005 0.140367 0.547 0.005 0.0729491 0.607 0.005 0.0379118 0.64 0.005 }; \addplot+[gray,forget plot,mark=none,domain=0:.55] {0.714118 - 1.33868*x}; \end{axis} \end{tikzpicture} \end{subfigure}\caption{Numerical values at $H=-2/3$, for the quantities $\log\left(2\frac{2-D_f}{3}\pi^2 - c_2(q)/d_0\right)$ (left) and $c_2(q)/d_0$ (right), together with the corresponding best fit lines.}\label{fig:c2q} \end{figure} \begin{table}[H] \centering \begin{tabular}{|c|c|c|}\hline $H$ & $2\Delta^{(1)}$ & $2\Delta^{(2)}$ \\\hline -7/8 & 0.12(1) & 0.10(1) \\ \hline -2/3 & 0.11(1) & 0.09(1)\\ \hline -5/8 & 0.12(1) & 0.10(1)\\ \hline \end{tabular}\caption{Values of the dimension $2\Delta$ of the dominant field obtained from fitting $\log\left(2\frac{2-D_f}{3}\pi^2 - \frac{c_2(q)}{d_0}\right)$, (1) for surfaces (\ref{def:genu}) and (2) for surfaces (\ref{def:genu_2}). }\label{tab:2d} \end{table} \noindent Assuming that this dimension is indeed that of the spin field, $2\Delta=2\Delta_\sigma = 2-D_f$, we fit $c_2(q)/d_0$ as a function of $q^{2-D_f}$: \begin{equation}\label{fitnc} c_2(q)/d_0 = c_2(0)/d_0 + a\,y ,\quad y = q^{2-D_f}, \end{equation} see Figure \ref{fig:c2q}. In particular, from (\ref{expc2}): \begin{equation} \frac{1}{12(2-D_f)}\frac{a}{c_2(0)/d_0} = \frac{n_\sigma}{c}. \end{equation} The resulting values of the cylinder ($q\to0$) limit and of the ratio $n_\sigma/c$ are given in Tables \ref{tab:a,b} and \ref{tab:a,b2}.
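As an illustration of this last step, the fit (\ref{fitnc}) and the ratio $n_\sigma/c$ can be reproduced from the data points of Figure \ref{fig:c2q} (right panel, $H=-2/3$) with a few lines of Python; here we assume the pure-percolation value $D_f=91/48$ for the fractal dimension, consistent with the measured spin dimension for $H<-1/2$:

```python
import numpy as np

Df = 91 / 48  # fractal dimension: pure-percolation value (assumed for H = -2/3)
q_pow = np.array([0.519703, 0.270091, 0.140367, 0.0729491, 0.0379118])  # q^(2-Df)
c2_d0 = np.array([0.0, 0.383, 0.547, 0.607, 0.640])  # measured c_2(q)/d_0

# Linear fit c_2(q)/d_0 = c_2(0)/d_0 + a * q^(2-Df); polyfit returns [a, intercept].
a, intercept = np.polyfit(q_pow, c2_d0, 1)

# Ratio n_sigma/c; |a| is used since c_2(q)/d_0 decreases with q^(2-Df).
ratio = abs(a) / intercept / (12 * (2 - Df))
```

The intercept reproduces the best fit line of Figure \ref{fig:c2q} ($\approx0.714$), and the resulting ratio is $\approx1.50$, in agreement with Table \ref{tab:a,b}.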
\begin{table}[H] \centering \begin{tabular}{|c|c|c|}\hline $H$ & $c_2(0)/d_0$ & $n_\sigma/c$ \\\hline pure percolation & $\frac{2(2-D_f)\pi^2}{3}\sim0.6854$ & $\frac{4\pi}{5\sqrt{3}}\sim1.4510$ \\\hline -7/8 & 0.71(2) & 1.51(7) \\ \hline -2/3 & 0.71(2) & 1.50(9) \\ \hline -5/8 & 0.72(2) &1.5(1) \\ \hline \end{tabular}\caption{Cylinder limit $c_2(0)/d_0$ and ratio of the spin field multiplicity $n_\sigma$ to the central charge $c$, obtained from fit (\ref{fitnc}), for surfaces (\ref{def:genu}).}\label{tab:a,b} \end{table} \begin{table}[H] \centering \begin{tabular}{|c|c|c|}\hline $H$ & $c_2(0)/d_0$ & $n_\sigma/c$ \\\hline -7/8 & 0.67(2) & 1.51(8) \\ \hline -2/3 & 0.66(2) & 1.52(5) \\ \hline -5/8 & 0.68(2) & 1.51(7) \\ \hline \end{tabular}\caption{Cylinder limit $c_2(0)/d_0$ and ratio of the spin field multiplicity $n_\sigma$ to the central charge $c$, obtained from fit (\ref{fitnc}), for surfaces (\ref{def:genu_2}).}\label{tab:a,b2} \end{table} When $H>-1/2$, we could not determine the value of the plateau $d_0$, so we cannot extract the leading dimension in the expansion (\ref{expc2}) as above. In Figure \ref{fig:c2Hl} we show the behaviour of $c_2(q)$ with $q^{2-D_f(H)}$, with $D_f(H)$ from Table \ref{tab:df}. The points corresponding to large $M/N$ deviate significantly from a line. This could be explained by the fact that, when $H\to 0$, the fractal dimension $D_f\to 2$, so that the coefficient of the $q^{2-D_f}$ term in (\ref{expc2}) becomes small and subleading terms in this expansion become non-negligible. \begin{figure}[H] \centering \begin{tikzpicture} \begin{axis}[ legend cell align=center,legend pos=south west, samples=100, ylabel={$c_2(q)$},xlabel={$q^{2-D_f(H)}$}] \addplot+[red,mark=o,only marks,mark size=1pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y, y error=error] { x y error 0.519703 0.
0 0.270091 0.171 0.005 0.140367 0.216 0.005 0.0729491 0.225 0.005 0.0379118 0.223 0.005 }; \addplot+[purple,mark=o,only marks,mark size=1pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y, y error = error] { x y error 0.520247 0. 0 0.270657 0.15 0.005 0.140809 0.2 0.01 0.0732553 0.207 0.005 0.0381108 0.203 0.005 }; \addplot+[brown,mark=o,only marks,mark size=1pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y, y error = error] { x y error 0.568084 0. 0 0.322719 0.166 0.005 0.183331 0.21 0.01 0.104148 0.21 0.01 0.0591645 0.21 0.01 }; \legend{$H=-19/40$,$H=-3/8$,$H=-1/4$}; \end{axis} \end{tikzpicture}\caption{Behaviour of the coefficient $c_2(q)$ in the range $H>-1/2$.}\label{fig:c2Hl} \end{figure} \section{Conclusion} In this paper we have studied the percolative properties of fractional random surfaces with negative Hurst exponent $H$. Via the connected components of their excursion sets, the level clusters, this problem is reformulated in terms of a long-range correlated two-dimensional site percolation model. The main motivation here was to better understand the universality of their percolation critical points, in particular in the region $-3/4<H<0$, where the correlation effects drive the system into universality classes different from that of pure percolation. When the problem is defined on a rectangular domain of size $M\times N$ with toric boundary conditions, we argued that the two-point connectivity (\ref{def:2conn}) represents an excellent observable to test conformal invariance. On the basis of three main assumptions, explained in Section \ref{sec: 3assumptions}, we predicted the leading contributions to the toric corrections, see (\ref{predperco}) and (\ref{phpvh2}). We tested these predictions by generating two types of fractional random surfaces, (\ref{def:genu}) and (\ref{def:genu_2}), expected to have the same long-distance behaviour.
The comparison between the theory and the numerical simulations is summarised in Section \ref{sec:resuresu}. The main result, shown in Figures \ref{fig:logdp1} and \ref{fig:cos}, demonstrates, for the first time, the existence of the two components of a traceless stress-energy tensor for all $H<0$. Furthermore, the two-point connectivity on rectangular torus lattices gives access to the spectrum and to some fundamental structure constants of the underlying CFT, still unknown for any $H<0$. Importantly, we find that the energy field in this CFT cannot be degenerate, whereas it is degenerate for pure percolation. We show that the leading contribution to the conformal partition function is that of the magnetic field $\sigma$ with scaling dimension $2-D_f$, as shown in Figure \ref{fig:c2q} and in Table \ref{tab:2d}. The ratio $n_{\sigma}/c$ of the multiplicity of the magnetic field to the central charge has also been determined numerically with quite good precision, and it is reported in Table \ref{tab:a,b}. Finally, we succeeded in evaluating the product $ \left[C_{\sigma,\sigma}^{\varepsilon}\right]^2 n_\sigma$, directly proportional to the fusion between the thermal and magnetic fields. The results are given in Table \ref{tab:Cn}. We conclude by noting that the irrelevance of the long-range correlations for $H<-3/4$ is very well established. Nevertheless, the results in Table \ref{tab:Cn} verify this at the level of the structure constants of the theory, which encode much more information than the critical exponents. To the best of our knowledge, this is the first time such a verification has been carried out. A last noteworthy observation concerns the corrections to the scaling of the critical level, when using the Binder method to locate the critical point (see Appendix \ref{sec:critlevel}).
From the values of the corresponding exponent $\omega$ given in Table \ref{tab:hbind}, we argue that the long-range correlations break the integrability of the model. \section*{Acknowledgements} We thank Marco Picco for explaining to us many crucial aspects of the numerical analysis of critical percolation points, and Hugo Vanneuville for sharing his insights and guiding us through the mathematical literature. We also thank Sylvain Ribault and Hans Herrmann for useful discussions. SG acknowledges support from a SENESCYT fellowship from the Government of Ecuador, as well as from CNRS in the last part of the project. \begin{appendix} \section{Fractional Gaussian surfaces} \label{sec:generatingu} To generate a random function $u({\bf x})$ satisfying the properties (\ref{cov}), we use the Fourier Filtering Method \cite{fractalbook}. The principle is to create correlated random variables by linearly combining uncorrelated ones. Let us first briefly sketch the method. Given a set of uncorrelated random variables $w({\bf x})$, $\mathbb{E}\left[w({\bf x})w({\bf y})\right]=\delta_{{\bf x},{\bf y}}$, one can define, via a convolution, a new set of random variables $u({\bf x})$: \begin{equation} u({\bf x}) = \sum_{{\bf y}} S({\bf x}-{\bf y})w({\bf y}). \end{equation} The convolution kernel $S({\bf x})$ is a non-random function which determines the covariance function of $u({\bf x})$: \begin{equation} \mathbb{E}\left[u({\bf x})u({\bf y})\right]=\sum_{{\bf z}} S({\bf x}-{\bf z}) S({\bf y}-{\bf z}). \end{equation} By Fourier transforming both sides of the above equation, one sees that the large-distance asymptotics (\ref{cov}) is determined by the small-${\bf k}$ asymptotics of $\hat{S}({\bf k})^2$, where $\hat{S}({\bf k})$ is the Fourier transform of $S({\bf x})$. In particular, $\hat{S}({\bf k})\sim |{\bf k}|^{-H-1}$ for $|{\bf k}|\ll1$. We apply this procedure to generate random long-range correlated surfaces.
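In practice the convolution is carried out in Fourier space: one multiplies the Fourier transform of the noise by $\hat{S}({\bf k})$ and transforms back. The following Python sketch illustrates this filtering step for the lattice kernel (\ref{def:kernel}) defined below; the use of $|\lambda_{\bf k}|$ and the a posteriori normalisation to $\mathbb{E}[u^2]=1$ are choices of this sketch, not of our production code:

```python
import numpy as np

def fractional_surface(N, M, H, rng):
    """One instance of a long-range correlated surface via Fourier filtering."""
    # Uncorrelated Gaussian noise w(x) and its discrete Fourier transform.
    w_hat = np.fft.fft2(rng.standard_normal((N, M)))
    k1 = 2 * np.pi * np.fft.fftfreq(N)[:, None]
    k2 = 2 * np.pi * np.fft.fftfreq(M)[None, :]
    # Lattice kernel: |lambda_k| ~ |k|^2 for small k, so S_hat ~ |k|^(-H-1).
    lam = 2 * np.cos(k1) + 2 * np.cos(k2) - 4.0
    S_hat = np.ones((N, M))
    nz = lam != 0.0
    S_hat[nz] = np.abs(lam[nz]) ** (-(H + 1) / 2)
    # Filtering: multiply in Fourier space, transform back, keep the real part.
    u = np.fft.ifft2(S_hat * w_hat).real
    return u / np.sqrt(np.mean(u**2))  # normalise to E[u^2] = 1

rng = np.random.default_rng(0)
u = fractional_surface(64, 64, -2 / 3, rng)  # periodic by construction
```

The surface is periodic in both directions by construction of the discrete Fourier transform, and for $H<0$ its fluctuations remain bounded as the lattice grows.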
We consider a domain $\left[0,\cdots,N-1\right]\times \left[0,\cdots,M-1\right] \subset \mathbb{Z}^2$, where ${\bf x}=(x_1,x_2)$ denotes a lattice site: \begin{equation} {\bf x} = (x_1,x_2), \quad x_1=0,\cdots,N-1, \quad x_2=0,\cdots,M-1. \end{equation} A random function $w({\bf x})$ is generated by drawing its values independently at each point from the Gaussian distribution $\mathcal{N}(0,1)$. The probability distribution function $P\left[w({\bf x})\right]$ is therefore: \begin{equation} \label{intgaussian} P\left[w({\bf x})\right]=\prod_{{\bf x}}\;\frac{e^{- \frac{w({\bf x})^2}{2}}}{\sqrt{2\pi}}. \end{equation} The discrete Fourier transform of $w({\bf x})$ is defined as: \begin{equation} \hat{w}({\bf k}) = \frac{1}{N\;M}\sum_{{\bf x}}\;w(\mathbf{x}) e^{-i\; {\bf k}\;{\bf x}}=\frac{1}{N \;M}\sum_{x_1=0}^{N-1}\sum_{x_2=0}^{M-1}\;w(x_1,x_2) e^{-2\pi i\;\left(x_1 \frac{k_1}{N}+ x_2 \frac{k_2}{M}\right)}, \end{equation} where \begin{equation} {\bf k}=2\pi\left (\frac{k_1}{N},\frac{k_2}{M}\right),\quad k_1=0,\cdots,N-1,\;k_2=0,\cdots,M-1. \end{equation} From (\ref{intgaussian}) one has: \begin{equation} \label{wqmoments} \mathbb{E}\left[\hat{w}({\bf k})\right]=0,\quad \mathbb{E}\left[\hat{w}({\bf k})\hat{w}({\bf p})\right]=\delta_{k_1, N-p_1}\delta_{k_2, M-p_2}. \end{equation} We use the convolution kernel: \begin{align} \label{def:kernel} \hat S({\bf k})=\begin{cases} \lambda_{{\bf k}}^{-\frac{H+1}{2}}, & \text{for}\; k_1,k_2 \neq 0, \\ 1, & \text{for} \;k_1=k_2=0, \end{cases} \end{align} where: \begin{equation} \label{def:lambda} \lambda_{{\bf k}}=\left(2\cos\left(\frac{2\pi}{N}\;k_1\right)+2\cos\left(\frac{2\pi}{M} \;k_2\right)-4\right).
\end{equation} We generate the random surface $u({\bf x})$ via the inverse Fourier transform: \begin{align} \label{def:genu} u(\mathbf{x}) &= \frac{1}{\text{norm}}\sum_{\mathbf{k}}\hat S({\bf k})\;\hat{w}({\bf k})\; e^{ i\;\mathbf{k}\;\mathbf{x}},\quad \text{norm} = \sum_{{\bf k}}\; \hat{S}^2({\bf k}). \end{align} \begin{figure}[H] \centering \begin{tikzpicture} \begin{loglogaxis}[ legend cell align=center, xlabel={$\left|{\bf x}-{\bf y}\right|$}, ylabel={$\mathbb{E}\left[u({\bf x})u({\bf y})\right]$}, legend pos=south west] \addplot+[green,mark=o,only marks,error bars/.cd,y dir=both,y explicit] table [y=mean, x=r, y error = dev]{./plots/1.250000-8.dat}; \addplot+[orange,mark=o,only marks,error bars/.cd,y dir=both,y explicit] table [y=mean, x=r, y error = dev]{./plots/0.950000-8.dat}; \addplot+[blue,mark=o,only marks,error bars/.cd,y dir=both,y explicit] table [y=mean, x=r, y error = dev]{./plots/0.666667-8.dat}; \addplot+[green,forget plot,mark=none,domain=1:20] {0.3569/x^0.75}; \addplot+[orange,forget plot,mark=none,domain=1:20] {0.2328/x^1.05}; \addplot+[blue,forget plot,mark=none,domain=1:20] {0.1449/x^1.33}; \legend{$H = -3/8$,$H = -21/40$,$H=-2/3$}; \end{loglogaxis} \end{tikzpicture}\caption{Numerical measurement of $\mathbb{E}\left[u({\bf x})u({\bf y})\right]$ for different values of the Hurst exponent, on square lattices of size $M = N = 2^8$. The lines show the power-law decay $|{\bf x}-{\bf y}|^{2H}$.} \label{fig:asympcov} \end{figure} The universal properties depend neither on the initial distribution $P\left[w({\bf x})\right]$ nor on the precise form of the kernel, as long as $\hat{S}({\bf k})$ has the same small-${\bf k}$ asymptotic behaviour \cite{de_Castro_2017}. As we explain in Section \ref{sec:numericalresults}, we find it useful to generate long-range correlated random surfaces by using another distribution $P_2\left[w({\bf x})\right]$ for $w({\bf x})$ and a different kernel.
In particular, $P_2\left[w({\bf x})\right]$ is built from the uniform distribution: \begin{equation} \label{unidist} P_2\left[w({\bf x})\right]=\prod_{{\bf x}} P(w({\bf x})), \quad P(w({\bf x}))= \begin{cases} 1, & |w({\bf x})|< \frac{\sqrt{3}}{N}, \\ 0, & |w({\bf x})|>\frac{\sqrt{3}}{N}, \end{cases} \end{equation} and the kernel: \begin{equation} \label{def:kernel2} \hat{S}_2({\bf k})=\begin{cases} \left|\mathbf{k}\right|^{-H-1}, & \text{for} \;{\bf k}\neq (0,0),\\ 1, & \text{for} \;{\bf k}= (0,0), \end{cases} \end{equation} where: \begin{align} &\left|\mathbf{k}\right| = \frac{2\pi}{N}\sqrt{ k_1^2+k_2^2},\quad k_1,k_2 = -N/2,\cdots,N/2-1. \end{align} The second kind of surface we generate is \begin{align} \label{def:genu_2} u(\mathbf{x}) = \frac{1}{\text{norm}}\sum_{\mathbf{k}}\hat S_2({\bf k})\;\hat{w}_2({\bf k})\; e^{ i\;\mathbf{k}\;\mathbf{x}},\quad \text{norm} = \sum_{{\bf k}}\; \hat{S}_2^2({\bf k}), \end{align} where $\hat{w}_2({\bf k})$ denotes the Fourier transform of the random function $w({\bf x})$ with law (\ref{unidist}). In the above equations we assumed $M=N$, but the generalisation to $M\neq N$ is straightforward. Note that, due to the (Lyapunov) central limit theorem, $\hat{w}_2({\bf k})$ is described in the large-$N$ limit by a Gaussian distribution, and the function $u(\mathbf{x})$ can be considered an instance of a fractional Gaussian surface. For $H<0$, the surface $u({\bf x})$, generated by (\ref{def:genu}) or by (\ref{def:genu_2}): \begin{itemize} \item is real, $u({\bf x})\in \mathbb{R}$, from the property (\ref{wqmoments}) and the symmetry of the kernel (\ref{def:kernel}); \item satisfies (\ref{cov}). In Figure \ref{fig:asympcov} we show the numerical measurements of $ \mathbb{E}\left[ u({\bf x})u({\bf y})\right]$ for the surface (\ref{def:genu}) and for different values of the roughness exponent. The data points are compared to the power-law decay $|{\bf x}-{\bf y}|^{2 H}$.
\item has a zero mode which vanishes on average: \begin{equation} \mathbb{E}\left[\hat{u}({\bf 0})\right]=0. \end{equation} \item is normalised such that: \begin{equation} \mathbb{E}\left[u({\bf x})^2\right]=1. \end{equation} \noindent Note that, in the thermodynamic limit, the normalisation constant in (\ref{def:genu}) is finite for negative $H$, as $\text{norm}\sim N^{2\;H}+O(1)$ for $N\gg1$, $M/N=O(1)$. The surface fluctuations are thus bounded. \item satisfies periodic boundary conditions in both directions: \begin{equation} \label{eq:torus} u({\bf x}+ {\bf t})=u({\bf x}), \quad \text{for}\; {\bf t}=(n\; N, m\; M),\; n,m \in \mathbb{N}. \end{equation} \end{itemize} \section{Percolation phase transition: critical level $h_c$ and the critical exponents $\nu$ and $D_f$}\label{app:hDf} \label{sec:critlevel} We study here the critical percolative properties of the level clusters of the surfaces (\ref{def:genu}) and (\ref{def:genu_2}). In particular, we determine numerically the critical level $h_c$ and the exponents $\nu$ and $D_f$. \subsection{Critical level and correlation length exponent $\nu$} For a sign-symmetric random function $u({\bf x})$ on the Euclidean plane, ${\bf x} \in \mathbb{R}^2$, the critical level is $h_c=0$ by a symmetry argument \cite{Molchanov1983_II}. Our function $u({\bf x})$ is defined on a lattice, and $h_c$ is expected to be negative. We determine the critical level $h_c$ by the standard procedure of percolation theory \cite{SA92}. We consider square domains of different sizes $N\times N$ and determine the average $\mathbb{E}\left[ h_c(N)\right]$ of the level $h_c(N)$ at which a level cluster connecting the top and the bottom of the lattice appears. This quantity scales with the size of the lattice as: \begin{equation}\label{hscaling} \mathbb{E}\left[ h_c(N)\right] - h_c \sim N^{-\frac{1}{\nu}}.
\end{equation} The data points for $\mathbb{E}\left[ h_c(N)\right]$, shown in Figure \ref{fig:hc} as a function of $N^{H}$ for different values of $H$, are very well described by a linear interpolation, thus confirming the prediction (\ref{nulong}). Fitting the data to the form (\ref{hscaling}) with $\nu=\nu^{\text{long}}$, we obtain the values of $h_c$ reported in Table \ref{tab:hc1}. \begin{figure}[H] \begin{tikzpicture}[baseline=(current bounding box.center), very thick, scale = 1.1] \begin{axis}[ legend cell align=center, xlabel={$N^{H}$}, ylabel={$\mathbb{E}\left[ h_c(N)\right]$}, legend pos=south east] \addplot+[blue,mark=o,only marks,mark size=1pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y,y error=error] { x y error 0.0883883 -0.218361 0.000264888 0.0481941 -0.221042 0.000232505 0.026278 -0.222795 0.000184391 0.0143282 -0.223555 0.000126151 }; \addplot+[orange,mark=o,only marks,mark size=1pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y,y error=error] { x y error 0.233258 -0.180637 0.000273571 0.162105 -0.183086 0.00025384 0.112656 -0.184797 0.000226179 0.0782915 -0.185725 0.000191476 }; \addplot+[green,mark=o,only marks,mark size=1pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y,y error=error] { x y error 0.353553 -0.161775 0.000277157 0.272627 -0.163177 0.000262604 0.210224 -0.164122 0.000244871 0.162105 -0.16513 0.000222954 }; \addplot+[blue,forget plot,mark=none,domain=0:0.35] {-0.224581 + 0.070905*x}; \addplot+[orange,forget plot,mark=none,domain=0:0.35] {-0.188383 + 0.0329251*x}; \addplot+[green,forget plot,mark=none,domain=0:0.35] {-0.167872 + 0.0173051*x}; \legend{$H=-7/8$,$H=-21/40$,$H=-3/8$}; \end{axis} \end{tikzpicture} \caption{$\mathbb{E}\left[ h_c(N)\right]$ for $N = 2^4,\cdots,2^7$ as a function of $N^{H}$. The lines are the best fits to the form (\ref{hscaling}) with $\nu=-1/H$, for different values of $H$.
The intercepts with the vertical $N^H=0$ axis ($N\to \infty$ limit) give the estimate for $h_c$.} \label{fig:hc} \end{figure} \begin{table}[H]\centering \begin{tabular}{ |c|c| } \hline $H$ & $h_c$ \\ \hline -7/8 &-0.2238(1) \\ -2/3 &-0.2034(1) \\ -5/8 & -0.1985(1) \\ -21/40 & -0.1860(2) \\ -19/40&-0.1775(3) \\ -3/8 &-0.1670(5) \\ -3/10 &-0.1570(5) \\ \hline \end{tabular} \caption{Critical level obtained from the scaling (\ref{hscaling}), for the surfaces (\ref{def:genu}).}\label{tab:hc1} \end{table} Another way to determine the critical point is based on the Binder method. We apply this method to study the surface (\ref{def:genu_2}). Defining the moments $M_{m}$ as: \begin{equation} M_m = \sum_{i=0}^{\infty}i^m n_i, \end{equation} with $n_i$ the number of level clusters composed of $i$ sites, one computes the ratio $r^{\text{Bin}}_N(h)$ \begin{equation}\label{binder} r^{\text{Bin}}_N(h)=\frac{\mathbb{E}\left[ M_4\right]}{\mathbb{E}\left[M_2\right]^2}, \end{equation} where the average $\mathbb{E}[\cdots]$ is weighted by the distribution (\ref{unidist}). The ratio $r^{\text{Bin}}_N(h)$ depends on the level $h$ and on the system size $N$ through a scaling relation of the type: \begin{equation} \label{bindscal} r^{\text{Bin}}_N(h)=f\left((h-h_c) N^{\frac{1}{\nu}}\right)+ a\;N^{-\omega}, \end{equation} where $f$ is a scaling function and the term $a\; N^{-\omega}$, with $a$ a non-universal prefactor, is a correction-to-scaling term. The interpretation of $\omega$ is discussed below. From (\ref{bindscal}), one can find the point $h_c(N)$ where the curves $r^{\text{Bin}}_N(h)$ and $r^{\text{Bin}}_{2N}(h)$ intersect \cite{salderr85} and use the fitting form: \begin{equation}\label{scalingh} h_c(N) = h_c + \frac{a}{N^{x}}, \end{equation} to determine $h_c$, with $x$ a free parameter. For each value in (\ref{alvalues}), we compute (\ref{binder}) for sizes $N = 2^s,\; s = 4,\cdots,9$ and $N = 3\times2^s,\; s = 3,\cdots,7$, averaged over $10^5$ instances.
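The moment computation entering (\ref{binder}) is straightforward to implement. The following Python sketch is a minimal illustration with our own function names, not the code used for the paper's measurements; it uses free rather than periodic boundaries and takes the level clusters to be the connected components of the excursion set $\{u\geq h\}$. It extracts the cluster sizes by a flood fill and forms the ratio $\mathbb{E}[M_4]/\mathbb{E}[M_2]^2$:

```python
from collections import deque

def level_cluster_sizes(field, h):
    """Sizes of the connected clusters of {u >= h} on a square grid.

    field: list of rows of floats; 4-neighbour connectivity, open boundaries
    (the periodic boundary conditions of the surface are ignored here).
    """
    n, m = len(field), len(field[0])
    seen = [[False] * m for _ in range(n)]
    sizes = []
    for i in range(n):
        for j in range(m):
            if seen[i][j] or field[i][j] < h:
                continue
            # flood-fill one cluster starting from (i, j)
            queue, size = deque([(i, j)]), 0
            seen[i][j] = True
            while queue:
                a, b = queue.popleft()
                size += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    x, y = a + da, b + db
                    if 0 <= x < n and 0 <= y < m and not seen[x][y] and field[x][y] >= h:
                        seen[x][y] = True
                        queue.append((x, y))
            sizes.append(size)
    return sizes

def binder_ratio(samples, h):
    """E[M4]/E[M2]^2 with M_m = sum_i i^m n_i, averaged over field samples."""
    m4 = sum(sum(s ** 4 for s in level_cluster_sizes(f, h)) for f in samples)
    m2 = sum(sum(s ** 2 for s in level_cluster_sizes(f, h)) for f in samples)
    m4 /= len(samples)
    m2 /= len(samples)
    return m4 / m2 ** 2
```

In practice one would evaluate `binder_ratio` on many sampled surfaces for a grid of levels $h$ and two sizes $N$, $2N$, and locate the crossing of the two curves.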
We interpolate the curves and find their intersections. The Binder method becomes less precise as $H$ approaches $0$: the correlation length exponent $\nu=-1/H$ grows quickly, making the finite-size effects much weaker. The curves $r^{\text{Bin}}_N(h)$ and $r^{\text{Bin}}_{2N}(h)$ tend to be parallel, and localising their crossing point becomes difficult. In Figure \ref{fig:scalingh} we show the scaling of the crossing points $h_c(N)$ for some values of $H$. Once the critical point is located, the thermal exponent $\nu$ can be estimated by using the relation: \begin{equation}\label{fitnu} \frac{d}{dh}r^{\text{Bin}}_N(h)\vert_{h=h_c} \sim N^{1/\nu}. \end{equation} In Table \ref{tab:hbind} we give the values of $h_c$ obtained from (\ref{scalingh}), and the values of $\nu$ obtained from (\ref{fitnu}). The latter are in fair agreement with the predictions (\ref{nupure}), (\ref{nulong}). Setting $\nu$ to (\ref{nulong}), we estimate the values of $\omega$ as $\omega=x-1/\nu$. \begin{table}[H] \centering \begin{tabular}{|c | c | c | c |} \hline $H$ & $h_c$ & $\nu$ & $\omega$\\ \hline -1 & -0.3210(9) & 1.33(2) & 2.00(5)\\ \hline -7/8 & -0.3075(5) & 1.46(8) & 1.00(5) \\ \hline -2/3 & -0.2793(5) & 1.67(5) & 0.8(1) \\ \hline -5/8 & -0.2722(5) & 1.9(1) & 1.0(1) \\ \hline \end{tabular}\caption{Values of the critical level $h_c$ obtained with the Binder method. The $\nu$ exponent is obtained from equation (\ref{fitnu}), and the value of the exponent $\omega$ is obtained from the scaling (\ref{scalingh}), with $\nu$ set to (\ref{nulong}).
The measurements have been taken for the surface (\ref{def:genu_2}).}\label{tab:hbind} \end{table} \begin{figure} \centering \begin{tikzpicture} \begin{axis}[ legend cell align=center, yticklabel style={/pgf/number format/.cd,fixed zerofill,precision=2}, xlabel={$N^{-x}$}, ylabel={$h_c(N)$}, legend pos=north west] \addplot+[blue,mark=square,only marks,mark size=1pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y,y error = error] { x y error 0.0078125 -0.263164 0.0005 0.00232267 -0.296329 0.0005 0.000690534 -0.304816 0.0005 0.000205297 -0.306087 0.0005 0.00384265 -0.286029 0.0005 0.00114243 -0.301867 0.0005 0.000339645 -0.306096 0.0005 }; \addplot+[gray,forget plot,mark=none,domain=0:0.02] {-0.308333+5.74372*x}; \addplot+[orange,mark=square,only marks,mark size=1pt,error bars/.cd,y dir=both,y explicit] table[x=x,y=y,y error = error] { x y error 0.015625 -0.228296 0.0005 0.00552427 -0.261076 0.0005 0.00195313 -0.273664 0.0005 0.000690534 -0.27734 0.0005 0.00850517 -0.250421 0.0005 0.00300703 -0.269347 0.0005 0.00106315 -0.275798 0.0005 0.000375879 -0.277382 0.0005 }; \addplot+[gray,forget plot,mark=none,domain=0:0.02] {-0.279263+3.2871*x}; \legend{$H=-7/8$,$H=-2/3$}; \end{axis} \end{tikzpicture}\caption{Values of $h_c(N)$ obtained from the crossing of the curves $r^{\text{Bin}}_N(h)$ and $r^{\text{Bin}}_{2N}(h)$, defined in (\ref{binder}). Measurements have been taken for the surface (\ref{def:genu_2}).}\label{fig:scalingh} \end{figure} \noindent It is quite interesting to comment on the exponent $\omega$, which determines the corrections to scaling. The exponent $\omega$ is expected to be the conformal dimension of the first irrelevant thermal field. In \cite{Blote88} it was observed that, when the model is integrable, the corrections to scaling are always associated with irrelevant fields that appear in the fusion between relevant ones.
To be more specific, the authors of \cite{Blote88} considered those statistical models that are described by rational CFTs. The spectrum of these CFTs contains a finite set of primary fields, which close under the operator product algebra and which are listed in the so-called Kac table. When these models are integrable, the corrections to scaling are therefore determined by fields inside the Kac table. In the pure percolation CFT, the (relevant) energy density field $\varepsilon=V_{1,2}$ generates by fusion with itself an infinite series of irrelevant fields with dimension $\Delta_{1,n}$, $n=3,4,...$ (note that we have used the standard minimal model notation $V_{r,s}$ and $\Delta_{r,s}$ for the field and conformal dimension). In the case of pure percolation, which is an integrable model, the value of $\omega$ is therefore expected to be given by the lowest irrelevant thermal field dimension, $\omega=2\Delta_{1,3}=2$. A discussion of this exponent can be found for instance in Appendix D of \cite{Fytas_2019}. In the case of pure percolation, we indeed find $\omega=2$. We observe in Table \ref{tab:hbind} that, when $H\neq -1$, a non-universal correction to scaling with $\omega\sim 1$ dominates. \subsection{Fractal dimension $D_f$} \label{sec:df1} At the critical point $h=h_c$, the level clusters have fractal dimension $D_f$. This dimension determines the scaling of the average mass (i.e. number of points) $\mathcal{A}_l$ of a level cluster with respect to its length $l$, $\mathcal{A}_l\sim l^{D_f}$. The length of a level cluster can be defined as its radius of gyration. One effective way to measure $D_f$ is to consider the percolating level cluster, whose size is of the same order as the system size, $l\sim N$. To determine $D_f$, we then use the following relation: \begin{equation} \label{df1} \mathbb{E}\left[\text{\# sites of the p.l.c.}\right]\sim N^{D_f}, \quad \text{p.l.c.=percolating level cluster}.
\end{equation} A representative example of a numerical measurement of the above average is shown in Figure \ref{fig:Df1}, for $H=-2/3$. To remove small-size effects, we perform fits with the successive lowest sizes removed, and expect the best-fit parameter to converge to the fractal dimension, as in Figure \ref{fig:Df2}. The values $D_f^{(1)}$ obtained are given in Table \ref{tab:df}. \begin{figure}[H] \begin{subfigure}{0.45\textwidth} \begin{tikzpicture} \begin{loglogaxis}[ legend cell align=center, samples=100, xlabel={$N$}, ylabel={$\mathbb{E}\left[\text{\# sites of the p.l.c.}\right]$}, legend pos=north west] \addplot+[blue,mark=o,only marks,mark size=.6pt,error bars/.cd,y dir=both,y explicit] table [y=mean, x=N,y error = dev]{./plots/Df.dat}; \addplot+[blue,mark=none,domain=10:96] {0.657592*x^1.89946}; \addplot+[gray,mark=none,domain=10:96] {0.6*x^1.89583}; \end{loglogaxis} \end{tikzpicture}\caption{Average mass of the percolating level cluster for $H=-2/3$, and the best-fit line, which has slope $\sim 1.90$. The gray line corresponds to the percolation value $D_f = 91/48\sim 1.8958$.}\label{fig:Df1} \end{subfigure}\hfill \begin{subfigure}{0.45\textwidth} \begin{tikzpicture} \begin{axis}[ legend cell align=center, yticklabel style={/pgf/number format/.cd,fixed zerofill,precision=4}, xlabel={$N$}, ylabel={$D_f$}, legend pos=north west] \addplot+[gray, forget plot,mark=none,domain=10:72] {1.89583}; \addplot+[blue,mark=o,only marks,mark size=1pt,error bars/.cd,y dir=both,y explicit] table [y=mean, x=N,y error = dev]{./plots/Df2.dat}; \end{axis} \end{tikzpicture}\caption{Convergence of the best-fit parameter for the fractal dimension when the lowest-size points are removed.
Here it converges to the percolation value shown as a gray line.}\label{fig:Df2} \end{subfigure}\caption{ } \end{figure} \begin{table}[ht!]\centering \begin{tabular}{ |c|c|c|c| } \hline $H$ & $D_f^{(1)}$ & $D_f^{(2)}$&$D_f$ \cite{Janke_2017} \\ \hline -7/8 & 1.8955(5) &1.8945(2) &1.8964(2)\\ -2/3 &1.8960(10) &1.893(1) &\\ -5/8 & 1.8955(6) &1.892(1) &1.8950(3)\\ -21/40 & 1.8965(10) &1.8910(5) & \\ -19/40 &1.8955(8) &1.8897(5) &\\ -3/8 &1.904(1) &1.8970(5) &1.9006(4)\\ -1/4 &1.917(1) &1.906(1) &1.9128(5)\\ \hline \end{tabular} \caption{Fractal dimensions obtained (1) from the scaling of the largest cluster (\ref{df1}) and (2) from the power-law decay of the two-point connectivity (\ref{2connplane}), and comparison with previous numerical work \cite{Janke_2017}.}\label{tab:df} \end{table} \end{appendix}
\section{Introduction} Let $n, e\in\mathbb{N}$. Let $k$ be a field and $0\neq q\in k$. Suppose that either $e>1$ and $q$ is a primitive $e$th root of unity; or $q=1$ and $\ch k=e$.\footnote{In the latter case, $e$ is necessarily a prime number.} Let $\HH_k(\mathfrak S_n)$ be the Iwahori-Hecke algebra associated to the symmetric group $\mathfrak S_n$ with parameter $q$ and defined over $k$. The Mullineux involution $\MM$ is a bijection defined on the set of all $e$-regular partitions of $n$, which arises naturally when one twists irreducible modules (labelled by $e$-regular partitions) over $\HH_k(\mathfrak S_n)$ by a $k$-algebra automorphism $\#$ (see Section 2 for the definition of $\#$). If $q=1$ and $e$ is an odd prime number, the involution $\MM$ determines which simple modules split and which remain simple upon restriction to the alternating subgroup $A_n$. In that case, the set of partitions which are fixed by the involution $\MM$ parameterizes the irreducible modules of $k\mathfrak S_n$ which split on restriction to $A_n$. In \cite{Kl}, Kleshchev gave a remarkable algorithm for computing the involution $\MM$. A crystal-basis approach to Kleshchev's algorithm for the involution $\MM$ was given in \cite[(7.1)]{LLT}.\smallskip The purpose of this paper is to study the partitions fixed by the Mullineux involution for arbitrary $e$. We find that the set of Mullineux-fixed partitions is related to the twisted affine Lie algebras of type $A_{2\ell}^{(2)}$ and of type $D_{\ell+1}^{(2)}$, which reveals a new connection between the theory of affine Lie algebras and the theory of modular representations. Our main tool is Naito-Sagaki's work (\cite{NS2}, \cite{NS1}) on Lakshmibai-Seshadri paths fixed by diagram automorphisms, which was also used in \cite{Hu4} and \cite{Hu5} to derive explicit formulas for the number of modular irreducible representations of the cyclotomic Hecke algebras of type $G(r,p,n)$; see \cite{Hu1}, \cite{Hu2} and \cite{Hu3} for related work.
We characterize the set of Mullineux-fixed partitions in terms of the crystal graphs of the basic representations of the twisted affine Lie algebras of type $A_{2\ell}^{(2)}$ and of type $D_{\ell+1}^{(2)}$ (Theorem \ref{thm37}). We set up bijections (Theorem \ref{thm315} and Theorem \ref{thm317}) between the set of Mullineux-fixed partitions in the odd case (resp., the set of symmetric partitions) and the set of restricted strict partitions (resp., the set of partitions into distinct parts). As an application, we obtain new identities on the cardinality of the set of Mullineux-fixed partitions in terms of the principally specialized characters of the basic representations of these twisted affine Lie algebras (Theorem \ref{thm313} and Theorem \ref{thm320}). Furthermore, we propose a notion of double restricted strict partitions (Definition \ref{dfn321}), which is a direct explicit characterization of Kang's reduced proper Young walls of type $D_{\ell+1}^{(2)}$ (\cite{K}). We obtain a bijection (Theorem \ref{thm324}) between the set of Mullineux-fixed partitions in the even case and the set of double restricted strict partitions. Our main results shed new light on the modular representations of the alternating group and of the Hecke-Clifford superalgebras as well as of the spin symmetric group (see Remark 3.25 and Remark 3.18), which clearly deserves further study. \bigskip\bigskip\bigskip \section{Preliminaries} In this section, we shall first review some basic facts about the representations of the Iwahori-Hecke algebras associated to symmetric groups. Then we shall introduce the notion of the Mullineux involution, Kleshchev's $e$-good lattice, as well as Kleshchev's algorithm for the Mullineux involution. \smallskip Let $\mathfrak S_n$ be the symmetric group on $\{1,2,\cdots,n\}$, acting from the right. Let ${\mathcal A}=\mathbb{Z}[v,v^{-1}]$, where $v$ is an indeterminate.
The Iwahori-Hecke algebra $\HH_{\mathcal A}(\mathfrak S_n)$ associated to $\mathfrak S_n$ is the associative unital ${\mathcal A}$-algebra with generators $T_1,\cdots,$ $T_{n-1}$ subject to the following relations $$\begin{aligned} &(T_i-v)(T_i+1)=0,\quad\text{for $1\leq i\leq n-1$,}\\ &T_iT_{i+1}T_i=T_{i+1}T_{i}T_{i+1},\quad\text{for $1\leq i\leq n-2$,}\\ &T_iT_j=T_jT_i,\quad\text{for $1\leq i<j-1\leq n-2$.}\end{aligned} $$ For each integer $i$ with $1\leq i\leq n-1$, we define $s_i=(i,i+1)$. Then $S:=\{s_1,s_2,\cdots,s_{n-1}\}$ is the set of all the simple reflections in $\mathfrak S_n$. A word $w=s_{i_1}\cdots s_{i_k}$ for $w\in\mathfrak S_{n}$ is a reduced expression if $k$ is minimal; in this case we say that $w$ has length $k$ and we write $\ell(w)=k$. Given a reduced expression $s_{i_1}\cdots s_{i_k}$ for $w\in\mathfrak S_n$, we write $T_w=T_{i_1}\cdots T_{i_k}$. The braid relations for generators $T_1,\cdots,T_{n-1}$ ensure that $T_w$ is independent of the choice of reduced expression. It is well-known that $\HH_{{\mathcal A}}(\mathfrak S_n)$ is a free ${\mathcal A}$-module with basis $\{T_w|w\in\mathfrak S_n\}$. For any field $k$ which is an ${\mathcal A}$-algebra, we define $\HH_k(\mathfrak S_n):=\HH_{{\mathcal A}}(\mathfrak S_n)\otimes_{{\mathcal A}}k$. Then $\HH_k(\mathfrak S_n)$ can be naturally identified with the $k$-algebra defined by the same generators and relations as $\HH_{\mathcal A}(\mathfrak S_n)$ above. Specializing $v$ to $1\in k$, one recovers the group algebra $k\mathfrak S_n$ of $\mathfrak S_n$ over $k$.\smallskip We recall some combinatorics. A partition of $n$ is a non-increasing sequence of positive integers $\lambda=(\lambda_1,\cdots,\lambda_r)$ such that $\sum_{i=1}^{r}\lambda_i=n$. For any partition $\lambda=(\lambda_1,\lambda_2,\cdots)$, the conjugate of $\lambda$ is defined to be a partition $\lambda^t=(\lambda_1^t,\lambda_2^t,\cdots)$, where $\lambda_j^t:=\#\{i|\lambda_i\geq j\}$ for $j=1,2,\cdots$. 
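The conjugation of partitions defined above is easy to carry out mechanically. The following Python sketch (our own illustrative helper, not part of the cited references) implements the formula $\lambda_j^t=\#\{i\mid\lambda_i\geq j\}$:

```python
def conjugate(partition):
    """Conjugate of a partition given as a non-increasing list of parts.

    The j-th part of the conjugate counts the rows i with lambda_i >= j,
    i.e. the length of the j-th column of the Young diagram.
    """
    if not partition:
        return []
    return [sum(1 for part in partition if part >= j)
            for j in range(1, partition[0] + 1)]
```

For instance, `conjugate([3, 2])` returns `[2, 2, 1]`, and conjugating twice recovers the original partition.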
We define $\ell(\lambda):=\max\{i|\lambda_i\neq 0\}$. For any partition $\lambda$ of $n$, we denote by $\mathfrak t^{\lambda}$ (resp., $\mathfrak t_{\lambda}$) the standard $\lambda$-tableau in which the numbers $1,2,\cdots,n$ appear in order along successive rows (resp., columns). The row stabilizer of $\mathfrak t^{\lambda}$, denoted by $\mathfrak S_{\lambda}$, is the standard Young subgroup of $\mathfrak S_n$ corresponding to $\lambda$. Let $$ x_{\lambda}=\sum_{w\in\mathfrak S_{\lambda}}T_w,\quad y_{\lambda}=\sum_{w\in\mathfrak S_{\lambda}}(-v)^{-\ell(w)}T_w. $$ Let $w_{\lambda}\in\mathfrak S_n$ be such that $\mathfrak t^{\lambda}w_{\lambda}=\mathfrak t_{\lambda}$. Following \cite[Section 4]{DJ1}, we define $z_{\lambda}=x_{\lambda}T_{w_{\lambda}}y_{\lambda^t}$. \begin{dfn}\label{df21} The right ideal $z_{\lambda}\HH$ is called the right Specht module of $\HH=\HH_{{\mathcal A}}(\mathfrak S_n)$ corresponding to $\lambda$. We denote it by $S^{\lambda}$. \end{dfn} For any field $k$ which is an ${\mathcal A}$-algebra, let $S_k^{\lambda}:=S^{\lambda}\otimes_{\mathcal A} k$. There is a natural bilinear form ${\langle,\rangle}$ on each $S^{\lambda}$ (and hence on each $S_k^{\lambda}$). Let $D_k^{\lambda}:=S_k^{\lambda}/\rad\langle,\rangle$. Let ``$\trianglelefteq$'' be the dominance order on the set of all partitions as defined in \cite[(3.1)]{Mu}. \addtocounter{lem}{1} \begin{lem} {\rm (\cite{DJ1})}\label{lm1} With the above notations, we have \smallskip 1) the set of all the nonzero $D_k^{\lambda}$ (where $\lambda$ runs over partitions of $n$) forms a complete set of pairwise non-isomorphic simple $\HH_{k}(\mathfrak S_{n})$-modules. 
Moreover, if $\HH_{k}(\mathfrak S_{n})$ is semisimple, then $D_k^{\lambda}=S_k^{\lambda}\neq 0$ for every partition $\lambda$ of $n$; \smallskip 2) if $D_k^{\mu}\neq 0$ is a composition factor of $S_k^{\lambda}$ then $\lambda\trianglelefteq\mu$, and every composition factor of $S_k^{\lambda}$ is isomorphic to some $D_k^{\mu}$ with $\lambda\trianglelefteq\mu$. If $D_k^{\lambda}\neq 0$ then the composition multiplicity of $D_k^{\lambda}$ in $S_k^{\lambda}$ is $1$. \end{lem} Henceforth, let $k$ be a fixed field which is an ${\mathcal A}$-algebra. We assume that $v$ is specialized to $q\in k$ such that $1+q+q^2+\cdots+q^{a-1}=0$ for some positive integer $a$. We define $$ e=\min\bigl\{1<a<\infty\bigm|\text{$1+q+q^2+\cdots+q^{a-1}=0$ in $k$}\bigr\}. $$ Clearly, $e=\ch k$ if $q=1$; otherwise $e$ is the multiplicative order of $q$. For simplicity, we shall write $\HH_k$ instead of $\HH_k(\mathfrak S_n)$. A partition $\lambda$ is called $e$-regular if each of its parts is repeated at most $e-1$ times, i.e., $\lambda=(1^{m_1}2^{m_2}\cdots j^{m_j}\cdots)$ with $0\leq m_i<e$ for every $i$. By \cite{DJ1}, for any partition $\lambda$ of $n$, $D_k^{\lambda}\neq 0$ if and only if $\lambda$ is $e$-regular. Let $\mathcal{K}_n$ be the set of all the $e$-regular partitions of $n$. Let $\#$ (see \cite{DJ1}, \cite[(2.3)]{Mu}) be the $k$-algebra automorphism of $\HH_k$ which is defined on generators by $T_{i}^{\#}=-vT_{i}^{-1}$ for each $1\leq i<n$. For each $\HH_k(\mathfrak S_n)$-module $V$, we denote by $V^{\#}$ the $\HH_k(\mathfrak S_n)$-module obtained by twisting $V$ by $\#$. That is, $V^{\#}=V$ as a $k$-linear space, and $v\cdot h:=vh^{\#}$ for any $v\in V$ and $h\in \HH_k(\mathfrak S_n)$. Let $\ast$ be the algebra anti-automorphism on $\HH_k$ which is defined on generators by $T_i^{\ast}=T_{i}$ for any $1\leq i<n$.
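The $e$-regularity condition just introduced is immediate to test on the multiplicity form $\lambda=(1^{m_1}2^{m_2}\cdots)$. A minimal sketch (our own helper, for illustration only):

```python
from collections import Counter

def is_e_regular(partition, e):
    """True if no part of the partition is repeated e or more times.

    partition: non-increasing list of positive parts; e: the integer e >= 2
    defined in the text.
    """
    return all(mult < e for mult in Counter(partition).values())
```

For example, $(2,2,1)$ is $3$-regular, while $(1,1,1)$ is not.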
\addtocounter{dfn}{1} \begin{dfn} {\rm (\cite{Mul}, \cite{Br})} Let $\MM$ be the unique involution defined on the set $\mathcal{K}_n$ such that $\bigl(D_k^{\lambda}\bigr)^{\#}\cong D_k^{\MM(\lambda)}$ for any $\lambda\in\mathcal{K}_n$. We call the map $\MM$ the Mullineux involution, and $\lambda$ a Mullineux-fixed partition if $\MM(\lambda)=\lambda$. \end{dfn} An algorithm which computes the involution $\MM$ was first proposed by Mullineux in 1979, when he constructed an involution on the set of $e$-regular partitions and conjectured its coincidence with the above $\MM$. Mullineux worked in the setting where $q=1$ and $e$ is a prime number, though his combinatorial algorithm does not really depend on $e$ being prime. In \cite{Kl}, Kleshchev gave a quite different, remarkable algorithm for the involution $\MM$ based on his work on branching rules for the modular representations of symmetric groups. In \cite{FK}, Ford and Kleshchev proved that Kleshchev's algorithm is equivalent to Mullineux's original algorithm and thus proved Mullineux's conjecture. The validity of Kleshchev's algorithm for $\MM$ for arbitrary $e$ is proved in \cite{Br}.\smallskip Note that the Mullineux involution $\MM$ depends only on $e$. {\it Henceforth, we refer to the case when $e$ is odd as the odd case; and to the case when $e$ is even as the even case.} By \cite[(3.5)]{DJ2} and \cite[(5.2),(5.3)]{Mu}, $\bigl(S^{\lambda}\bigr)^{\#}\cong\bigl(S^{\lambda^t}\bigr)^{\ast}$. If $\HH_{k}(\mathfrak S_{n})$ is semisimple, then $\bigl(S_k^{\lambda^t}\bigr)^{\ast}\cong S_k^{\lambda^t}$, hence in that case the involution $\MM$ degenerates to the map $\lambda\mapsto\lambda^t$. In this paper, we do not need Mullineux's original combinatorial algorithm (\cite{Mul}) for defining $\MM$, but we do need Kleshchev's algorithm (\cite{Kl}) for the involution $\MM$. To this end, we have to recall the notion of Kleshchev's $e$-good lattice.\smallskip Let $\lambda$ be a partition of $n$.
The Young diagram of $\lambda$ is the set $$ [\lambda]=\bigl\{(a,b)\bigm|\text{$1\leq b\leq\lambda_{a}$}\bigr\}. $$ The elements of $[\lambda]$ are nodes of $\lambda$. Given any two nodes $\gamma=(a,b), \gamma'=(a',b')$ of $\lambda$, we say that $\gamma$ is {\it below} $\gamma'$, or $\gamma'$ is {\it above} $\gamma$, if $a>a'$. The {\it residue} of $\gamma=(a,b)$ is defined to be $\rres(\gamma):=b-a+e\mathbb{Z}\in\mathbb{Z}/e\mathbb{Z}$, and we say that $\gamma$ is a $\rres(\gamma)$-node. Note that we can identify the set $\{0,1,2,\cdots,e-1\}$ with $\mathbb{Z}/e\mathbb{Z}$ via $i\mapsto \overline{i}$ for each $0\leq i\leq e-1$. Therefore, we can also regard the residue function $\res(\cdot)$ as taking values in $\{0,1,2,\cdots,e-1\}$. A removable node is a node of the boundary of the Young diagram $[\lambda]$ which can be removed, while an addable node is a concave corner on the rim of $[\lambda]$ where a node can be added. If $\mu$ is a partition of $n+1$ with $[\mu]=[\lambda]\cup \bigl\{\gamma\bigr\}$ for some removable node $\gamma$ of $\mu$, we write $\lambda\rightarrow\mu$. If in addition $\res(\gamma)=x$, we also write $\lambda\overset{x}{\rightarrow}\mu$. For example, suppose $n=42$ and $e=3$.
The nodes of $\lambda=(9^2,8,7,5,3,1)$ have the following residues $$ \lambda=\left(\begin{matrix} \overline{0} & \overline{1}& \overline{2}& \overline{0}& \overline{1} & \overline{2} & \overline{0} & \overline{1} & \overline{2} \\ \overline{2} & \overline{0} & \overline{1} & \overline{2} & \overline{0} & \overline{1} & \overline{2} &\overline{0} & \overline{1} \\ \overline{1}& \overline{2}& \overline{0}& \overline{1}& \overline{2}& \overline{0} & \overline{1}& \overline{2}& \\ \overline{0}& \overline{1}& \overline{2}& \overline{0}& \overline{1}& \overline{2} & \overline{0}& & \\ \overline{2}& \overline{0} & \overline{1}& \overline{2} &\overline{0} &&&& \\ \overline{1}& \overline{2} & \overline{0} & & & & & & \\ \overline{0}& & & & & & & & \end{matrix} \right) . $$ It has six removable nodes. Fix a residue $x$ and consider the sequence of removable and addable $x$-nodes obtained by reading the boundary of $\lambda$ from the top down. In the above example, if we consider residue $x=\overline{0}$, then we get a sequence AARRRR, where each ``A'' corresponds to an addable $\overline{0}$-node and each ``R'' corresponds to a removable $\overline{0}$-node. Given such a sequence of letters A,R, we remove all occurrences of the string ``AR'' and keep on doing this until no such string ``AR'' is left. The ``R''s that still remain are the {\it normal} $\overline{0}$-nodes of $\lambda$ and the rightmost of these is the {\it good} $\overline{0}$-node. In the above example, the two removable $\overline{0}$-nodes in the last two rows survive after we delete all the string ``AR''. Therefore, the removable $\overline{0}$-node in the last row is the good $\overline{0}$-node. If $\gamma$ is a good $x$-node of $\mu$ and $\lambda$ is the partition such that $[\mu]=[\lambda]\cup\gamma$, we write $\lambda\overset{x}{\twoheadrightarrow}\mu$. 
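The cancellation procedure just described can be turned into a short program. The sketch below is our own illustration (the function names are ours); it follows the conventions of the text, namely the word of addable/removable $x$-nodes is read from the top down and the good node is the rightmost surviving ``R''. From the good nodes one also obtains the Mullineux map, using the residue-complementation rule of Kleshchev's algorithm recalled in Lemma \ref{lm24} below:

```python
def ar_word(shape, x, e):
    """Addable/removable x-nodes of a partition, read from the top down.

    shape: non-increasing list of positive parts (0-indexed rows); the node
    in row i, column j (both 0-based) has residue (j - i) mod e.
    Returns a list of ('A', row) / ('R', row) pairs.
    """
    word, rows = [], len(shape)
    for i in range(rows + 1):
        if i == rows or i == 0 or shape[i - 1] > shape[i]:    # addable corner
            col = shape[i] if i < rows else 0
            if (col - i) % e == x:
                word.append(('A', i))
        if i < rows and (i == rows - 1 or shape[i] > shape[i + 1]):  # removable
            if (shape[i] - 1 - i) % e == x:
                word.append(('R', i))
    return word

def surviving(word):
    """Iteratively delete 'AR' substrings; the survivors form R...RA...A."""
    out = []
    for item in word:
        if item[0] == 'R' and out and out[-1][0] == 'A':
            out.pop()                      # this 'R' cancels the nearest 'A'
        else:
            out.append(item)
    return out

def good_row(shape, x, e):
    """Row of the good (rightmost surviving) removable x-node, or None."""
    rows = [r for letter, r in surviving(ar_word(shape, x, e)) if letter == 'R']
    return rows[-1] if rows else None

def cogood_row(shape, x, e):
    """Row of the addable x-node (first surviving 'A') whose addition is
    undone by removing the good x-node, or None."""
    rows = [r for letter, r in surviving(ar_word(shape, x, e)) if letter == 'A']
    return rows[0] if rows else None

def mullineux(shape, e):
    """Mullineux image of an e-regular partition: peel the partition along
    good nodes, then rebuild the recorded path with every residue r
    replaced by (e - r) mod e (Kleshchev's algorithm)."""
    lam, residues = list(shape), []
    while lam:
        for x in range(e):
            row = good_row(lam, x, e)
            if row is not None:
                residues.append(x)
                lam[row] -= 1
                if lam[row] == 0:
                    lam.pop()
                break
    mu = []
    for x in reversed(residues):           # path order, complemented residues
        row = cogood_row(mu, (e - x) % e, e)
        if row == len(mu):
            mu.append(1)
        else:
            mu[row] += 1
    return mu
```

On the running example, `ar_word([9, 9, 8, 7, 5, 3, 1], 0, 3)` reproduces the sequence AARRRR and `good_row` returns the last row, in agreement with the text; for $e=3$ one finds, for instance, that the Mullineux image of $(3)$ is $(2,1)$.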
{\it Kleshchev's $e$-good lattice} is, by definition, the infinite graph whose vertices are the $e$-regular partitions and whose arrows are given by $ \text{$\lambda\overset{x}{\twoheadrightarrow}\mu$\,\,\,$\Longleftrightarrow$\, $\lambda$ is obtained from $\mu$ by removing a good $x$-node}$. It is well-known that, for each $e$-regular partition $\lambda$, there is a path (not necessarily unique) from the empty partition $\emptyset$ to $\lambda$ in Kleshchev's $e$-good lattice.\medskip Kleshchev's $e$-good lattice in fact provides a combinatorial realization of the crystal graph of the basic representation of the affine Lie algebra of type $A_{e-1}^{(1)}$ (which we denote by $\widehat{\mathfrak{sl}}_{e}$). To be more precise, let $\{\alpha_0,\alpha_1,\cdots,\alpha_{e-1}\}$ be the set of simple roots of $\widehat{\mathfrak{sl}}_{e}$, let $\bigl\{\alpha_0^{\vee},\alpha_1^{\vee},\cdots,\alpha_{e-1}^{\vee}\bigr\}$ be the set of simple coroots, and let $$ \begin{pmatrix} 2& -1& 0& \cdots & 0& -1\\ -1& 2& -1& \cdots & 0& 0\\ 0& -1& 2& \cdots & 0& 0\\ \vdots & \vdots & \vdots & &\vdots &\vdots \\ 0& 0& 0& \cdots & 2& -1\\ -1& 0& 0& \cdots & -1& 2 \end{pmatrix}_{e\times e}\quad\text{if $e\geq 3$,} $$ or $$\begin{pmatrix} 2& -2\\ -2& 2 \end{pmatrix}_{2\times 2}\quad\text{if $e=2$,} $$ be the corresponding affine Cartan matrix. Let $d$ be the scaling element. Then the set $\bigl\{\alpha_0^{\vee},\alpha_1^{\vee},\cdots,\alpha_{e-1}^{\vee},d\bigr\}$ forms a basis of the Cartan subalgebra of $\widehat{\mathfrak{sl}}_{e}$; let $\bigl\{ \Lambda_0,\Lambda_1,\cdots,\Lambda_{e-1},\delta\bigr\}$ be the corresponding dual basis, where $\delta$ denotes the null root. The integrable highest weight module of highest weight $\Lambda_0$, denoted by $L(\Lambda_0)$, is called the basic representation of $\widehat{\mathfrak{sl}}_{e}$.
It is a remarkable fact (\cite{MM}, \cite[(2.11)]{AM}) that the crystal graph of $L(\Lambda_0)$ is exactly the same as Kleshchev's $e$-good lattice if one uses the embedding $L(\Lambda_0)\subset\mathcal{F}(\Lambda_0)$, where $\mathcal{F}(\Lambda_0)$ is the Fock space as defined in \cite[\S4.2]{LLT}. In particular, an explicit formula for the number of irreducible $\HH_k(\mathfrak S_n)$-modules, i.e., $\#\mathcal{K}_n$, is known (see \cite{AM}), expressed in terms of the principally specialized character of the basic representation $L(\Lambda_0)$.\smallskip Now we can state Kleshchev's algorithm for the Mullineux involution $\MM$. Here we follow Lascoux-Leclerc-Thibon's reformulation in \cite[(7.1)]{LLT}. \addtocounter{lem}{1} \begin{lem} \label{lm24} {\rm (\cite{Kl})} Let $\lambda\in\mathcal{K}_n$ be an $e$-regular partition of $n$, and let $$ \emptyset\overset{r_1}{\twoheadrightarrow}\cdot \overset{r_2}{\twoheadrightarrow}\cdot \cdots\cdots \overset{r_n}{\twoheadrightarrow}\lambda $$ be a path from $\emptyset$ to $\lambda$ in Kleshchev's $e$-good lattice. Then, the sequence $$ \emptyset\overset{e-r_1}{\twoheadrightarrow}\cdot \overset{e-r_2}{\twoheadrightarrow}\cdot \cdots\cdots \overset{e-r_n}{\twoheadrightarrow}\cdot $$ also defines a path in Kleshchev's $e$-good lattice, and it connects $\emptyset$ to $\MM(\lambda)$. \end{lem} Note that the Mullineux involution $\MM$ gives rise to an equivalence relation on ${\mathcal K}_n$: $\lambda\sim\mu$ if and only if $\lambda=\MM(\mu)$ or $\lambda=\mu$, for $\lambda,\mu\in\mathcal{K}_n$. Let $A_n$ be the alternating group, which is a normal subgroup of $\mathfrak S_n$ of index $2$. In the special case where $q=1$ and $e$ is an odd prime number, the involution $\MM$ is closely related to the modular representations of the alternating group $A_n$, as can be seen from the following lemma. \begin{lem} {\rm (\cite[(2.1)]{F})} Suppose that $q=1$ and $e$ is an odd prime number. In particular, $\ch k=e$.
Assume that $A_n$ is split over $k$. Then (1) for any $\lambda\in\mathcal{K}_n$ with $\MM(\lambda)\neq\lambda$, $D^{\lambda}\downarrow_{A_n}$ remains irreducible; (2) for any $\lambda\in\mathcal{K}_n$ with $\MM(\lambda)=\lambda$, $D^{\lambda}\downarrow_{A_n}$ is a direct sum of two non-equivalent irreducible representations of $kA_n$, say $D_+^{\lambda}$ and $D_{-}^{\lambda}$; (3) the set $$ \Bigl\{D_{+}^{\lambda}, D_{-}^{\lambda}\Bigm|\lambda\in\mathcal{K}_n/{\sim}, \MM(\lambda)=\lambda\Bigr\} \bigsqcup \Bigl\{D^{\lambda}\downarrow_{A_n}\Bigm|\lambda\in\mathcal{K}_n/{\sim}, \MM(\lambda)\neq\lambda\Bigr\} $$ forms a complete set of pairwise non-isomorphic irreducible $kA_n$-modules. \end{lem} As a consequence, we get that $$\begin{aligned} &\quad\,\#\Irr\bigl(kA_n\bigr)\\ &=\frac{1}{2}\Bigl(\#\mathcal{K}_n-\#\bigl\{\lambda\in\mathcal{K}_n\bigm|\MM(\lambda)=\lambda\bigr\} \Bigr)+2\#\bigl\{\lambda\in\mathcal{K}_n\bigm|\MM(\lambda)=\lambda\bigr\}\\ &=\frac{1}{2}\Bigl(\#\mathcal{K}_n+3\#\bigl\{\lambda\in\mathcal{K}_n\bigm|\MM(\lambda)=\lambda\bigr\} \Bigr).\end{aligned} $$ \bigskip\bigskip\bigskip \section{The orbit Lie algebras} In this section, we shall first determine the orbit Lie algebras corresponding to the Dynkin diagram automorphisms arising from the Mullineux involution. Then we shall use Naito-Sagaki's work (\cite{NS2}, \cite{NS1}) to study the set of Mullineux-fixed partitions in terms of the crystal graphs of the basic representations of the orbit Lie algebras, which are twisted affine Lie algebras of type $A_{2\ell}^{(2)}$ or of type $D_{\ell+1}^{(2)}$. The main results are given in Theorem \ref{thm37}, Theorem \ref{thm313}, Theorem \ref{thm315}, Theorem \ref{thm317}, Theorem \ref{thm320} and Theorem \ref{thm324}. \medskip Let $\mathfrak{g}$ be the Kac-Moody algebra over $\mathbb{C}$ associated to a symmetrizable generalized Cartan matrix $(a_{i,j})_{i,j\in I}$ of finite size, where $I=\{0,1,\cdots,e-1\}$.
Let $\mathfrak{h}$ be its Cartan subalgebra, and $W$ be its Weyl group. Let $\{\alpha_i^{\vee}\}_{0\leq i\leq e-1}$ be the set of simple coroots in $\mathfrak{h}$. Let $\mathcal{X}:=\bigl\{\Lambda\in\mathfrak{h}^{\ast}\bigm| \Lambda(\alpha_i^{\vee})\in\mathbb{Z},\,\forall\,0\leq i<e\bigr\}$ be the weight lattice. Let $\mathcal{X}^{+}:=\bigl\{\Lambda\in\mathcal{X}\bigm|\Lambda(\alpha_i^{\vee})\geq 0,\,\forall\,0\leq i<e\bigr\}$ be the lattice of integral dominant weights. Let $\mathcal{X}_{\mathbb{R}}:=\mathcal{X}\otimes_{\mathbb{Z}}\mathbb{R}$, where $\mathbb{R}$ is the field of real numbers. Assume that $\Lambda\in\mathcal{X}^{+}$. P. Littelmann introduced (\cite{Li1}, \cite{Li2}) the notion of Lakshmibai-Seshadri paths (L-S paths for short) of class $\Lambda$, which are piecewise linear, continuous maps $\pi:[0,1]\rightarrow\mathcal{X}_{\mathbb{R}}$ parameterized by pairs $(\underline{\nu},\underline{a})$ of a sequence $\underline{\nu}: \nu_1>\nu_2>\cdots>\nu_s$ of elements of $W\Lambda$, where $>$ is the ``relative Bruhat order" on $W\Lambda$, and a sequence $\underline{a}: 0=a_0<a_1<\cdots<a_s=1$ of rational numbers with a certain condition, called the chain condition. The set $\mathbb{B}(\Lambda)$ of all L-S paths of class $\Lambda$ is called the path model for the integrable highest weight module $L(\Lambda)$ of highest weight $\Lambda$ over $\mathfrak{g}$. It is a remarkable fact that $\mathbb{B}(\Lambda)$ has a canonical crystal structure isomorphic to the crystal (in the sense of \cite{Kas}) associated to the integrable highest weight module of highest weight $\Lambda$ over the quantum algebra $U'_v(\mathfrak{g})$.\smallskip Now let $\mathfrak{g}$ be the affine Kac-Moody algebra of type $A_{e-1}^{(1)}$. Let $\omega:\,I\rightarrow I$ be an involution defined by $\omega(0)=0$ and $\omega(i)=e-i$ for any $0\neq {i}\in I$.
\begin{lem} $\omega$ is a Dynkin diagram automorphism in the sense of \cite[\S1.2]{NS1}. That is, $a_{\omega(i),\omega(j)}=a_{i,j}$, $\forall\,i,j\in I$. \end{lem} \noindent {Proof:} \,This follows from direct verification. \medskip By \cite{FSS}, $\omega$ induces a Lie algebra automorphism (called a diagram outer automorphism) $\omega\in\Aut(\mathfrak{g})$ of order $2$ and a linear automorphism $\omega^{\ast}\in\GL(\mathfrak{h}^{\ast})$ of order $2$. Following \cite{FRS} and \cite[\S1.3]{NS1} (where they work with an arbitrary Kac-Moody algebra $\mathfrak{g}$ and a Dynkin diagram automorphism $\omega$), we set $c_{i,j}:=\sum\limits_{t=0}^{N_j-1}a_{i,\omega^t(j)}$, where $N_j:=\#\bigl\{\omega^t(j)\bigm|t\geq 0\bigr\}$, $i,j\in I$. We choose a complete set $\widehat{I}$ of representatives of the $\omega$-orbits in $I$, and set $\check{I}:=\bigl\{i\in\widehat{I}\bigm|c_{i,i}>0\bigr\}$. We put $\hat{a}_{i,j}:=2c_{i,j}/c_{j}$ for $i,j\in\widehat{I}$, where $c_i:=c_{i,i}$ if $i\in\check{I}$, and $c_i:=2$ otherwise. Then $(\hat{a}_{i,j})_{i,j\in\widehat{I}}$ is a symmetrizable Borcherds-Cartan matrix (\cite{Bo}), and (if $\check{I}\neq\emptyset$) its submatrix $(\hat{a}_{i,j})_{i,j\in\check{I}}$ is a generalized Cartan matrix. Let $\widehat{\mathfrak{g}}$ be the generalized Kac-Moody algebra over $\mathbb C$ associated to $(\hat{a}_{i,j})_{i,j\in\widehat{I}}$, with Cartan subalgebra $\widehat{\mathfrak{h}}$ and Chevalley generators $\{\hat{x}_i,\hat{y}_i\}_{i\in\widehat{I}}$. The orbit Lie algebra $\check{\mathfrak{g}}$ is defined to be the subalgebra of $\widehat{\mathfrak{g}}$ generated by $\widehat{\mathfrak{h}}$ and $\hat{x}_i,\hat{y}_i$ for $i\in\check{I}$, which is a usual Kac-Moody algebra.
\begin{lem} With the above assumptions and notations, we have that in our special case, $\check{\mathfrak{g}}$ is isomorphic to the twisted affine Lie algebra of type $A_{2\ell}^{(2)}$ if $e=2\ell+1$; and $\check{\mathfrak{g}}$ is isomorphic to the twisted affine Lie algebra of type $D_{\ell+1}^{(2)}$ if $e=2\ell$. \end{lem} \noindent {Proof:} \, We divide the proof into two cases: \smallskip \noindent {\it Case 1.}\,\,$e=2\ell+1$. The involution $\omega$ is given by $$ \omega:\left\{\begin{aligned}{0}&\mapsto {0}\\ {1}&\mapsto {2\ell}\\ & \vdots\\ {\ell-1}&\mapsto {\ell+2}\\ {\ell}&\mapsto {\ell+1} \end{aligned}\right. ,\quad\,\, \left\{\begin{aligned} {\ell+1}&\mapsto {\ell}\\ & \vdots\\ {2\ell-1}&\mapsto {2}\\ {2\ell}&\mapsto {1} \end{aligned}\right. . $$ It is easy to check that $c_{i,i}=2$ for any $0\leq i<\ell$ and $c_{\ell,\ell}=1$. We shall take $\widehat{I}=\{{0},{1},\cdots,{\ell}\}$. By direct verification, we get that $\check{I}=\widehat{I}$ and $$ (\hat{a}_{i,j})_{i,j\in\widehat{I}}=\begin{pmatrix} 2& -2& 0& \cdots & 0& 0& 0\\ -1& 2& -1& \cdots & 0& 0& 0\\ 0& -1& 2& \cdots & 0& 0& 0\\ \vdots & \vdots & \vdots & & \vdots &\vdots &\vdots \\ 0& 0& 0& \cdots & 2& -1& 0\\ 0& 0& 0& \cdots & -1& 2& -2\\ 0& 0& 0& \cdots & 0& -1& 2 \end{pmatrix}_{(\ell+1)\times (\ell+1)}\quad\text{if $\ell\geq 2$;} $$ or $$\begin{pmatrix} 2& -4\\ -1& 2 \end{pmatrix}_{2\times 2}\quad\text{if $\ell=1.$}$$ Clearly this is an affine Cartan matrix of type $A_{2\ell}^{(2)}$, hence in this case $\check{\mathfrak{g}}$ is isomorphic to the twisted affine Lie algebra of type $A_{2\ell}^{(2)}$.\medskip \noindent {\it Case 2.}\,\,$e=2\ell$. The involution $\omega$ is given by $$ \omega:\left\{\begin{aligned}{0}&\mapsto {0}\\ {1}&\mapsto {2\ell-1}\\ & \vdots\\ {\ell-1}&\mapsto {\ell+1}\\ {\ell}&\mapsto{\ell} \end{aligned}\right.,\quad\,\,\left\{\begin{aligned}{\ell+1}&\mapsto {\ell-1}\\ & \vdots\\ {2\ell-2}&\mapsto {2}\\ {2\ell-1}&\mapsto {1} \end{aligned}\right. .
$$ It is easy to check that $c_{i,i}=2$ for any $0\leq i\leq\ell$. We shall take $\widehat{I}=\{{0},{1},\cdots,{\ell}\}$. By direct verification, we get that $\check{I}=\widehat{I}$ and $$ (\hat{a}_{i,j})_{i,j\in\widehat{I}}=\begin{pmatrix} 2& -2& 0& \cdots & 0& 0& 0\\ -1& 2& -1& \cdots & 0& 0& 0\\ 0& -1& 2& \cdots & 0& 0& 0\\ \vdots & \vdots & \vdots & & \vdots &\vdots &\vdots \\ 0& 0& 0& \cdots & 2& -1& 0\\ 0& 0& 0& \cdots & -1& 2& -1\\ 0& 0& 0& \cdots & 0& -2& 2 \end{pmatrix}_{(\ell+1)\times (\ell+1)}\quad\text{if $\ell\geq 2$;} $$ or $$\begin{pmatrix} 2& -2\\ -2& 2 \end{pmatrix}_{2\times 2}\quad\text{if $\ell=1.$}$$ Clearly this is an affine Cartan matrix of type $D_{\ell+1}^{(2)}$, hence in this case $\check{\mathfrak{g}}$ is isomorphic to the twisted affine Lie algebra of type $D_{\ell+1}^{(2)}$.\medskip We define $\bigl(\mathfrak{h}^{\ast}\bigr)^{\circ}:=\bigl\{\Lambda\in\mathfrak{h}^{\ast} \bigm|\omega^{\ast}(\Lambda)=\Lambda\bigr\}$ and $\widetilde{W}:=\bigl\{w\in W\bigm|\omega^{\ast}w=w\omega^{\ast}\bigr\}$. We indicate by $\check{}$ the objects for the orbit Lie algebra $\check{\mathfrak{g}}$. For example, $\check{\mathfrak{h}}$ denotes the Cartan subalgebra of $\check{\mathfrak{g}}$, $\check{W}$ the Weyl group of $\check{\mathfrak{g}}$, and $\{\check{\Lambda}_i\}_{0\leq i\leq \ell}$ the set of fundamental dominant weights in $\check{\mathfrak{h}}^{\ast}$. There exists a linear isomorphism $P_{\omega}^{\ast}:\,\check{\mathfrak{h}}^{\ast}\rightarrow \bigl(\mathfrak{h}^{\ast}\bigr)^{\circ}$ and a group isomorphism $\Theta:\,\check{W}\rightarrow\widetilde{W}$ such that $\Theta(\check{w})=P_{\omega}^{\ast}\check{w}\bigl(P_{\omega}^{\ast}\bigr)^{-1}$ for each $\check{w}\in\check{W}$.
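The folding computation in the proof is entirely mechanical, and can be checked by a short script. The sketch below (an illustration with our own helper names, not tied to any computer-algebra package) builds the Cartan matrix of $A_{e-1}^{(1)}$, folds it through $\omega$ via $c_{i,j}=\sum_{t}a_{i,\omega^t(j)}$ and $\hat{a}_{i,j}=2c_{i,j}/c_{j,j}$, and reproduces the matrices displayed above for small $e$:

```python
# Fold the Cartan matrix of affine type A_{e-1}^{(1)} through the
# involution w(0) = 0, w(i) = e - i, following the recipe
# c_{ij} = sum_{t < N_j} a_{i, w^t(j)},  a^hat_{ij} = 2 c_{ij} / c_{jj}.
# In the cases at hand every c_{jj} is positive, so the fallback
# "c_i := 2 otherwise" is never needed.  Helper names are ours.

def cartan_matrix(e):
    """Cartan matrix (a_{ij})_{0 <= i,j < e} of type A_{e-1}^{(1)}."""
    a = [[2 if i == j else 0 for j in range(e)] for i in range(e)]
    for i in range(e):
        for d in (1, e - 1):          # the two neighbours on the e-cycle
            a[i][(i + d) % e] += -1   # e = 2 yields the -2 off-diagonal
    return a

def folded_matrix(e):
    a = cartan_matrix(e)
    w = lambda i: (e - i) % e         # the diagram involution
    reps = range(e // 2 + 1)          # orbit representatives 0, 1, ..., ell
    orbit2 = lambda j: w(j) != j      # whether the orbit of j has size 2
    c = [[a[i][j] + (a[i][w(j)] if orbit2(j) else 0) for j in reps]
         for i in reps]
    return [[2 * c[i][j] // c[j][j] for j in range(len(c))]
            for i in range(len(c))]

# e = 5 gives type A_4^{(2)}, e = 4 gives type D_3^{(2)}, as in the proof:
assert folded_matrix(5) == [[2, -2, 0], [-1, 2, -2], [0, -1, 2]]
assert folded_matrix(4) == [[2, -2, 0], [-1, 2, -1], [0, -2, 2]]
assert folded_matrix(3) == [[2, -4], [-1, 2]]   # the ell = 1 odd case
```

The boundary entries $-2$ in the output are exactly the ones distinguishing the twisted types from $A_\ell^{(1)}$.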
By \cite[\S6.5]{FSS}, for each $0\leq i\leq \ell$, $$ P_{\omega}^{\ast}(\check{\Lambda}_i)=\sum_{t=0}^{N_i-1}\Lambda_{\omega^t(i)}+C\delta, $$ where $N_i$ denotes the number of elements in the $\omega$-orbit of $i$, $C\in\mathbb{Q}$ is some constant depending on $\omega$, $\delta$ denotes the null root of $\mathfrak{g}$. It follows that $P_{\omega}^{\ast}(\check{\Lambda}_0)=\Lambda_0+C'\delta$, for some $C'\in\mathbb{Q}$. \smallskip Let $\mathbb{B}(\Lambda_{0})$ (resp., $\mathbb{B}\bigl(P_{\omega}^{\ast}(\check{\Lambda}_{0})\bigr)$) be the set of all L-S paths of class $\Lambda_{0}$ (resp., of class $P_{\omega}^{\ast}(\check{\Lambda}_{0})$). Let $\pi_{\Lambda_{0}}$ (resp., $\pi_{P_{\omega}^{\ast}(\check{\Lambda}_{0})}$) be the straight path joining $0$ and $\Lambda_{0}$ (resp., $0$ and $P_{\omega}^{\ast}(\check{\Lambda}_{0})$). For each integer $0\leq i\leq e-1$, let $\widetilde{E}_i, \widetilde{F}_i$ denote the raising root operator and the lowering root operator with respect to the simple root $\alpha_i$. \begin{lem} The map which sends $\pi_{P_{\omega}^{\ast}(\check{\Lambda}_0)}$ to $\pi_{\Lambda_{0}}$ extends to a bijection $\beta$ from $\mathbb{B}\bigl(P_{\omega}^{\ast}(\check{\Lambda}_0)\bigr)$ onto $\mathbb{B}(\Lambda_{0})$ such that $$ \beta\bigl(\widetilde{F}_{i_1}\cdots \widetilde{F}_{i_s}\pi_{P_{\omega}^{\ast}(\check{\Lambda}_0)}\bigr)=\widetilde{F}_{i_1}\cdots \widetilde{F}_{i_s}\pi_{\Lambda_{0}}, $$ for any $i_1,\cdots,i_s\in\{0,1,\cdots,e-1\}$. \end{lem} \noindent {Proof:} \,This follows from the fact that $P_{\omega}^{\ast}(\check{\Lambda}_0)-\Lambda_{0}\in\mathbb{Q}\delta$ and the definitions of $\mathbb{B}\bigl(P_{\omega}^{\ast}(\check{\Lambda}_0)\bigr)$ and $\mathbb{B}(\Lambda_{0})$ (see \cite{Li1}).\medskip Henceforth we shall identify $\mathbb{B}\bigl(P_{\omega}^{\ast}(\check{\Lambda}_{0})\bigr)$ with $\mathbb{B}(\Lambda_{0})$. 
The action of $\omega^{\ast}$ on $\mathfrak{h}^{\ast}$ naturally extends to the set $\mathbb{B}\bigl(P_{\omega}^{\ast}(\check{\Lambda}_{0})\bigr)$ (and hence to the set $\mathbb{B}\bigl(\Lambda_{0}\bigr)$). By \cite[(3.1.1)]{NS2}, if $\widetilde{F}_{i_1}\widetilde{F}_{i_2}\cdots \widetilde{F}_{i_s} \pi_{{\Lambda_{0}}}\in {\mathbb{B}}({\Lambda_{0}})$, then \addtocounter{equation}{3} \begin{equation}\label{equa66} \omega^{\ast}\bigl(\widetilde{F}_{i_1}\widetilde{F}_{i_2}\cdots \widetilde{F}_{i_s} \pi_{{\Lambda_{0}}}\bigr)=\widetilde{F}_{\omega(i_{1})}\widetilde{F}_{\omega(i_{2})}\cdots \widetilde{F}_{\omega(i_{s})}\pi_{{\Lambda_{0}}}. \end{equation} We denote by $\mathbb{B}^{\circ}\bigl(\Lambda_{0}\bigr)$ the set of all L-S paths of class $\Lambda_{0}$ that are fixed by $\omega^{\ast}$. For $\check{\mathfrak{g}}$ and each integer $0\leq i\leq \ell$, we denote by $\widetilde{e}_i, \widetilde{f}_i$ the raising root operator and the lowering root operator with respect to the simple root $\check{\alpha}_i$. Let $\pi_{\check{\Lambda}_{0}}$ be the straight path joining $0$ and $\check{\Lambda}_{0}$.
By \cite[(4.2)]{NS1}, the linear map $P_{\omega}^{\ast}$ naturally extends to a map from $\check{\mathbb{B}}(\check{\Lambda}_{0})$ to $\mathbb{B}^{\circ}\bigl(\Lambda_{0}\bigr)$ such that if $\widetilde{f}_{i_1}\widetilde{f}_{i_2}\cdots \widetilde{f}_{i_s} \pi_{\check{\Lambda}_{0}}\in \check{\mathbb{B}}(\check{\Lambda}_{0})$, then (in the above two cases) $$\begin{aligned} &P_{\omega}^{\ast}\bigl(\widetilde{f}_{i_1}\widetilde{f}_{i_2}\cdots \widetilde{f}_{i_s} \pi_{\check{\Lambda}_{0}}\bigr)=\omega\bigl(\widetilde{F}_{i_1}\bigr) \omega\bigl(\widetilde{F}_{i_2}\bigr)\cdots \omega\bigl(\widetilde{F}_{i_s}\bigr)\pi_{\Lambda_{0}},\end{aligned} $$ where $$ \omega\bigl(\widetilde{F}_{i_t}\bigr):=\begin{cases} \widetilde{F}_{i_t}\widetilde{F}_{\omega(i_{t})}, &\text{if $c_{i_t,i_t}=2$ and $N_{i_t}=2$,}\\ \widetilde{F}_{i_{t}}, &\text{if $c_{i_t,i_t}=2$ and $N_{i_t}=1$,}\\ \widetilde{F}_{\omega(i_t)}\widetilde{F}_{i_t}^2\widetilde{F}_{\omega(i_t)}, &\text{if $c_{i_t,i_t}=1$.} \end{cases} $$ Note that the case $c_{i_t,i_t}=1$ only happens when $e=2\ell+1$ and $i_t=\ell$. \addtocounter{lem}{1} \begin{lem} \label{lm35} {\rm(\cite[(4.2),(4.3)]{NS1})} $\mathbb{B}^{\circ}\bigl(\Lambda_{0}\bigr)=P_{\omega}^{\ast}\bigl( \check{\mathbb{B}}(\check{\Lambda}_{0})\bigr)$. \end{lem} Note that both $\check{\mathbb{B}}(\check{\Lambda}_{0})$ and $\mathbb{B}\bigl(\Lambda_{0}\bigr)$ have a canonical crystal structure with the raising and lowering root operators playing the role of Kashiwara operators. They are isomorphic to the crystals associated to the integrable highest weight module $\check{L}(\check\Lambda_{0})$ of highest weight $\check\Lambda_{0}$ over $U'_v(\check{\mathfrak{g}})$ and the integrable highest weight module $L\bigl(\Lambda_{0}\bigr)$ of highest weight $\Lambda_{0}$ over $U'_v(\mathfrak{g})$, respectively. Henceforth, we identify them without further comments.
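On the level of indices, the substitution $i_t\mapsto\omega\bigl(\widetilde{F}_{i_t}\bigr)$ is a purely combinatorial rule. A minimal sketch (function name and list encoding are ours; the lists record the indices read as a sequence of arrows in the crystal graph):

```python
# Expand one lowering-operator index for the orbit algebra into the
# corresponding word of indices for g, following the three cases of
# omega(F~_{i_t}) above.  Function name and encoding are ours.

def expand_index(r, e):
    """r in {0, ..., ell}; returns the list of indices in {0, ..., e-1}."""
    ell = (e - 1) // 2 if e % 2 else e // 2
    w = (e - r) % e                   # omega(r)
    if e % 2 == 1 and r == ell:       # the case c_{r,r} = 1
        return [w, r, r, w]           # F_{omega(r)} F_r^2 F_{omega(r)}
    if w == r:                        # orbit of length 1
        return [r]
    return [r, w]                     # orbit of length 2

assert expand_index(0, 5) == [0]
assert expand_index(1, 5) == [1, 4]
assert expand_index(2, 5) == [3, 2, 2, 3]   # e = 2*ell + 1, r = ell
assert expand_index(2, 4) == [2]            # e = 2*ell: r = ell is omega-fixed
```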
Let $v_{\check{\Lambda}_{0}}$ (resp., $v_{\Lambda_{0}}$) denote the unique highest weight vector of highest weight $\check{\Lambda}_{0}$ (resp., of highest weight $\Lambda_{0}$) in $\check{\mathbb{B}}(\check{\Lambda}_{0})$ (resp., in $\mathbb{B}(\Lambda_{0})$). Therefore, by (\ref{equa66}) and Lemma \ref{lm35}, we get that \addtocounter{cor}{5} \begin{cor} \label{cor36}With the above assumptions and notations, there is an injection $\eta$ from the set $\check{\mathbb{B}}(\check{\Lambda}_{0})$ of crystal bases to the set $\mathbb{B}(\Lambda_{0})$ of crystal bases such that $$\begin{aligned} &\eta\bigl(\widetilde{f}_{i_1}\widetilde{f}_{i_2}\cdots \widetilde{f}_{i_s} v_{\check{\Lambda}_{0}}\bigr)\equiv \omega\bigl(\widetilde{F}_{i_1}\bigr)\omega\bigl(\widetilde{F}_{i_2}\bigr)\cdots \omega\bigl(\widetilde{F}_{i_s}\bigr)v_{\Lambda_{0}}\pmod{{v L(\Lambda_{0})_{A}}},\end{aligned} $$ where $i_1,\cdots,i_s$ are integers in $\{0,1,2,\cdots,\ell\}$, and $A$ denotes the ring of rational functions in $\mathbb{Q}(v)$ which do not have a pole at $v=0$. Moreover, the image of $\eta$ consists of all crystal basis elements $\widetilde{F}_{i_1}\cdots \widetilde{F}_{i_s}v_{\Lambda_{0}}+v L(\Lambda_{0})_{A}$ satisfying $\widetilde{F}_{i_1}\cdots \widetilde{F}_{i_s}v_{\Lambda_{0}}\equiv \widetilde{F}_{\omega(i_{1})}\cdots \widetilde{F}_{\omega(i_{s})}v_{\Lambda_{0}} \pmod{{v L(\Lambda_{0})_{A}}}. $ \end{cor} Let $\mathcal{K}:=\sqcup_{n\geq 0}\mathcal{K}_n$. Translating the language of crystal bases into the language of partitions, we get the following combinatorial result.
\addtocounter{thm}{6} \begin{thm} \label{thm37} With the above notations, there is a bijection $\eta$ from the set $\check{\mathbb{B}}(\check{\Lambda}_{0})$ of crystal bases onto the set $\bigl\{\lambda\in\mathcal{K}\bigm|\MM(\lambda)=\lambda\bigr\}$, such that if $$ v_{\check{\Lambda}_{0}}\overset{r_1}{\twoheadrightarrow}\cdot \overset{r_2}{\twoheadrightarrow}\cdot \cdots\cdots \overset{r_s}{\twoheadrightarrow} \widetilde{f}_{r_s}\cdots \widetilde{f}_{r_1}v_{\check{\Lambda}_{0}} $$ is a path from $v_{\check{\Lambda}_{0}}$ to $\widetilde{f}_{r_s}\cdots \widetilde{f}_{r_1}v_{\check{\Lambda}_{0}}$ in the crystal graph of $\check{L}(\check{\Lambda}_0)$, then the sequence $$ \emptyset\underbrace{\overset{r_1}{\twoheadrightarrow}\cdot}_{\text{$\omega$ acts}}\,\,\underbrace{\overset{r_2}{\twoheadrightarrow}\cdot}_{\text{$\omega$ acts}}\cdot \cdots\cdots\underbrace{\overset{r_s}{\twoheadrightarrow}\lambda}_{\text{$\omega$ acts}}:=\eta\Bigl(\widetilde{f}_{r_s}\cdots \widetilde{f}_{r_1}v_{\check{\Lambda}_{0}}\Bigr), $$ where $$ \underbrace{\overset{r_t}{\twoheadrightarrow}\cdot}_{\text{$\omega$ acts}}:=\begin{cases} \overset{r_t}{\twoheadrightarrow}\cdot \overset{e-r_t}{\twoheadrightarrow}, &\text{if $c_{r_t,r_t}=2$ and $N_{r_t}=2$,}\\ \overset{r_t}{\twoheadrightarrow}, &\text{if $c_{r_t,r_t}=2$ and $N_{r_t}=1$,}\\ \overset{\ell+1}{\twoheadrightarrow}\cdot\overset{\ell}{\twoheadrightarrow}\cdot \overset{\ell}{\twoheadrightarrow}\cdot\overset{\ell+1}{\twoheadrightarrow}\cdot, &\text{if $e=2\ell+1$ and $r_t=\ell$,} \end{cases} $$ defines a path in Kleshchev's $e$-good lattice which connects $\emptyset$ and the $e$-regular partition $\lambda$ satisfying $\MM(\lambda)=\lambda$.
\end{thm} \noindent {Proof:} \,This follows from Lemma \ref{lm24}, Lemma \ref{lm35} and Corollary \ref{cor36}.\medskip For each partition $\lambda$ of $n$, and each integer $0\leq i\leq e-1$, we define $$\begin{aligned} \Sigma_i(\lambda):&=\bigl\{\gamma\in[\lambda]\bigm|\res(\gamma)=\overline{i}\bigr\},\\ N_i(\lambda):&=\#\Sigma_i(\lambda).\end{aligned} $$ Theorem \ref{thm37} also implies that if $\widetilde{f}_{r_1}\cdots \widetilde{f}_{r_s}v_{\check{\Lambda}_{0}}\in\check{\mathbb{B}}(\check{\Lambda}_{0})$, $\lambda:=\eta\Bigl(\widetilde{f}_{r_1}\cdots \widetilde{f}_{r_s}v_{\check{\Lambda}_{0}}\Bigr)$, then \addtocounter{equation}{3}\begin{equation}\label{equa38} N_i(\lambda)=\begin{cases}\#\bigl\{1\leq t\leq s\bigm|r_t={i}\bigr\}, &\text{if $i\in\{0,1,2,\cdots,\ell-1\}$,}\\ \#\bigl\{1\leq t\leq s\bigm|r_t={e-i}\bigr\}, &\text{if $i\in\{\ell+2,\ell+3,\cdots,e-1\}$,}\\ \#\bigl\{1\leq t\leq s\bigm|r_t={\ell-1}\bigr\}, &\text{if $e=2\ell$ and $i=\ell+1$,}\\ \#\bigl\{1\leq t\leq s\bigm|r_t={\ell}\bigr\}, &\text{if $e=2\ell$ and $i=\ell$,}\\ 2\#\bigl\{1\leq t\leq s\bigm|r_t={\ell}\bigr\}, &\text{if $e=2\ell+1$ and $i\in\{\ell,\ell+1\}$.} \end{cases} \end{equation} \addtocounter{cor}{2} \begin{cor} \label{cor39} Let $\lambda\in\mathcal{K}_n$. Suppose that $\MM(\lambda)=\lambda$. 1) If $e=2\ell+1$, then $N_{\ell}(\lambda)=N_{\ell+1}(\lambda)$. Furthermore, $N_{\ell}(\lambda)$ and $n-N_0(\lambda)$ are both even integers. 2) If $e=2\ell$, then $n-N_0(\lambda)-N_{\ell}(\lambda)$ is an even integer. \end{cor} For each pair of integers $m, m'$ with $0\leq m+m'\leq n$, we define $$\begin{aligned} \Sigma(n,m,m'):&=\bigl\{\lambda\in\mathcal{K}_n\bigm|\MM(\lambda)=\lambda, N_0(\lambda)=m, N_{\ell}(\lambda)=m'\bigr\},\\ N(n,m,m'):&=\#\Sigma(n,m,m'). \end{aligned}$$ Note that when $e=2\ell+1$, by Corollary \ref{cor39}, $N(n,m,m')=0$ unless $m+2m'\leq n$. Recall the principal gradation introduced in \cite[\S1.5, \S10.10]{Kac}.
That is, the weight $\Lambda_0-\sum_{i=0}^{e-1}k_i\alpha_i$ (where $k_i\in\mathbb{Z}$ for each $i$) is assigned to degree $\sum_{i=0}^{e-1}k_i$. Let $\cch_t L(\Lambda_0):=\sum_{n\geq 0}\dim L(\Lambda_0)_n t^n$ be the principal specialized character\footnote{This is called the $q$-dimension in the book of Kac, see \cite[\S10.10]{Kac}.} of $L(\Lambda_0)$, where $L(\Lambda_0)_n=\oplus_{\deg\mu=n}L(\Lambda_0)_{\mu}$. Similarly, let $L(\check{\Lambda}_0)$ denote the integrable highest weight module of highest weight $\check{\Lambda}_0$ over $\check{\mathfrak{g}}$. We use $\cch_t L(\check{\Lambda}_0):=\sum_{n\geq 0}\dim L(\check{\Lambda}_0)_n t^n$ to denote the principal specialized character of $L(\check{\Lambda}_0)$. Now applying Lemma \ref{lm24}, Lemma \ref{lm35} and Theorem \ref{thm37}, we get that \addtocounter{equation}{1}\begin{equation}\label{equa310}\dim L(\check{\Lambda}_0)_{n}=\sum_{0\leq m+m'\leq n}N(2n-m+2m',m,2m') \end{equation} if $e=2\ell+1$; while \begin{equation}\label{equa311}\dim L(\check{\Lambda}_0)_{n}=\sum_{0\leq m+m'\leq n}N(2n-m-m',m,m') \end{equation} if $e=2\ell$.\medskip Suppose that $e=2\ell+1$. That is, we are in the odd case. In this case, $\check{\mathfrak{g}}$ is the twisted affine Lie algebra of type $A_{2\ell}^{(2)}$. By \cite[(14.5.4)]{Kac}, the principal specialized character of $L(\check{\Lambda}_0)$ is given by \begin{equation}\label{equa312} \cch_t L(\check{\Lambda}_0)=\prod_{\substack{\text{$i\geq 1$, $i$ odd}\\ \text{$i\not\equiv 0\!\!\!\pmod{e}$ }}}\frac{1}{1-t^i}.
\end{equation} Hence by (\ref{equa310}) and (\ref{equa312}), we get that \addtocounter{thm}{5} \begin{thm} \label{thm313} With the above notations, we have that $$ \prod_{\substack{\text{$i\geq 1$, $i$ odd}\\ \text{$i\not\equiv 0\!\!\!\pmod{e}$ }}}\frac{1}{1-t^i}= \sum_{n\geq 0}\biggl(\sum_{0\leq m+m'\leq n}N(2n-m+2m',m,2m')\biggr)t^n.$$ \end{thm} In \cite{K}, Kang has given a combinatorial realization of $\check{\mathbb{B}}(\check{\Lambda}_{0})$ in terms of reduced proper Young walls, which are inductively defined. In our $A_{2\ell}^{(2)}$ case, a direct explicit characterization can be given in terms of restricted $e$-strict partitions as follows (see \cite{LT2}, \cite{BK1}). Recall that (\cite{BK1},\cite{BK2}) a partition $\lambda$ is called $e$-strict if $\lambda_i=\lambda_{i+1}\,\Rightarrow\,e|\lambda_i$ for each $i=1,2,\cdots$. An $e$-strict partition $\lambda$ is called restricted if in addition $$ \begin{cases}\lambda_i-\lambda_{i+1}\leq e, &\text{if $e\nmid\lambda_i$,}\\ \lambda_i-\lambda_{i+1}<e, &\text{if $e|\lambda_i$,} \end{cases} \quad\text{for each $i=1,2,\cdots.$} $$ Let $DPR_e(n)$ denote the set of all restricted $e$-strict partitions of $n$. Let $DPR_e:=\sqcup_{n\geq 0}DPR_e(n)$. It turns out that there is a natural $1$-$1$ correspondence between $\check{\mathbb{B}}(\check{\Lambda}_{0})$ and $DPR_e$. Furthermore, the crystal structure of $\check{\mathbb{B}}(\check{\Lambda}_{0})$ can be concretely realized via some combinatorics of $DPR_e$, which we now describe. We recall some notions. Elements $(r,s)$ of $\mathbb{Z}_{>0}\times \mathbb{Z}_{>0}$ are called nodes. Let $\lambda$ be an $e$-strict partition. We label the nodes of $\lambda$ with {\it residues}, which are the elements of $\mathbb{Z}/(\ell+1)\mathbb{Z}$. The residue of the node $A$ is denoted $\res(A)$.
The labelling depends only on the column, following the repeating pattern $$ \overline{0},\overline{1},\cdots, \overline{\ell-1},\overline{\ell},\overline{\ell-1},\cdots,\overline{1},\overline{0}, $$ starting from the first column and going to the right. For example, let $e=5$, $\ell=2$, and let $\lambda=(10,10,6,1)$ be a restricted $5$-strict partition of $27$. Its residues are as follows: $$ \begin{matrix} \overline{0} & \overline{1} &\overline{2} &\overline{1} &\overline{0} &\overline{0} &\overline{1} &\overline{2} & \overline{1} & \overline{0}\\ \overline{0} & \overline{1} &\overline{2} &\overline{1} &\overline{0} &\overline{0} &\overline{1} &\overline{2} & \overline{1} & \overline{0}\\ \overline{0} & \overline{1} &\overline{2} &\overline{1} &\overline{0} &\overline{0} & & & & \\ \overline{0} & & & & & & & &\end{matrix} $$ A node $A=(r,s)\in[\lambda]$ is called removable (for $\lambda$) if either R1) $\lambda_{A}:=\lambda-\{A\}$ is again an $e$-strict partition; or R2) the node $B=(r,s+1)$ immediately to the right of $A$ belongs to $\lambda$, $\res(A)=\res(B)$, and both $\lambda_B$ and $\lambda_{AB}:=\lambda-\{A,B\}$ are $e$-strict partitions.\smallskip Similarly, a node $B=(r,s)\notin[\lambda]$ is called addable (for $\lambda$) if either A1) $\lambda^{B}:=\lambda\cup\{B\}$ is again an $e$-strict partition; or A2) the node $A=(r,s-1)$ immediately to the left of $B$ does not belong to $\lambda$, $\res(A)=\res(B)$, and both $\lambda^A:=\lambda\cup\{A\}$ and $\lambda^{AB}:=\lambda\cup\{A,B\}$ are $e$-strict partitions. Note that R2) and A2) above are only possible for nodes with residue $\overline{0}$. Now fix a residue $x$ and consider the sequence of removable and addable $x$-nodes obtained by reading the boundary of $\lambda$ from the bottom left to the top right. We use ``A'' to denote an addable $x$-node and ``R'' to denote a removable $x$-node; in this way we get a sequence of letters A,R.
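Before reducing such A/R sequences, note that the residue labelling itself is easy to compute mechanically, since it depends only on the column. A minimal sketch reproducing the worked example above (helper names are ours):

```python
# Column residues for an e-strict partition, e = 2*ell + 1: column j
# (1-based) carries the residue given by the repeating pattern
# 0, 1, ..., ell-1, ell, ell-1, ..., 1, 0 of period e.  Names are ours.

def column_residue(j, ell):
    """Residue (as an integer in 0..ell) of any node in column j."""
    p = (j - 1) % (2 * ell + 1)
    return p if p <= ell else 2 * ell - p

def residues(partition, ell):
    """Residue labelling of the Young diagram, row by row."""
    return [[column_residue(j, ell) for j in range(1, row + 1)]
            for row in partition]

# The worked example: e = 5, ell = 2, lambda = (10, 10, 6, 1).
rows = residues((10, 10, 6, 1), 2)
assert rows[0] == [0, 1, 2, 1, 0, 0, 1, 2, 1, 0]
assert rows[2] == [0, 1, 2, 1, 0, 0]
assert rows[3] == [0]
```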
Given such a sequence, we remove all occurrences of the string ``AR'' and keep on doing this until no such string ``AR'' is left. The ``R''s that still remain are the {\it normal} $x$-nodes of $\lambda$, and the rightmost of these is the {\it good} $x$-node; the ``A''s that still remain are the {\it conormal} $x$-nodes of $\lambda$, and the leftmost of these is the {\it cogood} $x$-node. Note that\footnote{This is because any removable node $\gamma$ of type R2) has an adjacent neighbour $\gamma'$ to its right, which is another removable node with the same residue. If $\gamma$ survived the deletion of all the strings ``AR'', then $\gamma'$ would also survive. In that case, $\gamma'$ would be a normal node higher than $\gamma$, so $\gamma$ cannot be a good node. For a cogood node of type A2), the reason is similar.} a good $x$-node is necessarily of type R1), and a cogood $x$-node is necessarily of type A1). We define $$\begin{aligned} \varepsilon_i(\lambda)&=\#\bigl\{\text{$i$-normal nodes in $\lambda$}\bigr\},\\ \varphi_i(\lambda)&=\#\bigl\{\text{$i$-conormal nodes in $\lambda$}\bigr\},\end{aligned} $$ and we set $$\begin{aligned} \widetilde{e}_{i}(\lambda)&=\begin{cases} \lambda_{A}, &\text{if $\varepsilon_i(\lambda)>0$ and $A$ is the (unique) good $i$-node,}\\ 0, &\text{if $\varepsilon_i(\lambda)=0$,} \end{cases}\\ \widetilde{f}_{i}(\lambda)&=\begin{cases} \lambda^{B}, &\text{if $\varphi_i(\lambda)>0$ and $B$ is the (unique) cogood $i$-node,}\\ 0, &\text{if $\varphi_i(\lambda)=0$.} \end{cases} \end{aligned}$$ Then, we get an infinite colored oriented graph, whose vertices are $e$-strict partitions and whose arrows are given by $$ \text{$\lambda\overset{i}{\twoheadrightarrow}\mu$\,\,\,$\Longleftrightarrow$\, $\mu=\widetilde{f}_{i}(\lambda)$\,\,$\Longleftrightarrow$\,\,$\lambda=\widetilde{e}_{i}(\mu)$}.
$$ The sublattice spanned by all restricted $e$-strict partitions, equipped with the functions $\varepsilon_i, \varphi_i$ and the operators $\widetilde{e}_i, \widetilde{f}_i$, can be turned into a colored oriented graph which we denote by ${\mathfrak{RP}}_{e}$. \addtocounter{lem}{8} \begin{lem} \label{lem314}{\rm (\cite{K})} With the above notations, the graph $\mathfrak{RP}_e$ can be identified with the crystal graph $\check{\mathbb{B}}(\check{\Lambda}_{0})$ associated to the integrable highest weight $\check{\mathfrak{g}}$-module of highest weight $\check{\Lambda}_0$. \end{lem} Applying Theorem \ref{thm37} and Lemma \ref{lem314}, we get that \addtocounter{thm}{1} \begin{thm} \label{thm315} With the above notations, there is a bijection $\eta$ from the set $DPR_e$ of restricted $e$-strict partitions onto the set $\bigl\{\lambda\in\mathcal{K}\bigm|\MM(\lambda)=\lambda\bigr\}$, such that if $$ \emptyset\overset{r_1}{\twoheadrightarrow}\cdot \overset{r_2}{\twoheadrightarrow}\cdot \cdots\cdots \overset{r_s}{\twoheadrightarrow} \check{\lambda} $$ is a path from $\emptyset$ to $\check{\lambda}$ in the graph $\mathfrak{RP}_e$, then the sequence $$ \emptyset\underbrace{\overset{r_1}{\twoheadrightarrow}\cdot \overset{2\ell+1-r_1}{\twoheadrightarrow}\cdot }_{\text{$\widetilde{N_{r_1}}$ terms}}\underbrace{\overset{r_2}{\twoheadrightarrow}\cdot \overset{2\ell+1-r_2}{\twoheadrightarrow}\cdot}_{\text{$\widetilde{N_{r_2}}$ terms}} \cdots\cdots\underbrace{\overset{r_s}{\twoheadrightarrow}\cdot \overset{2\ell+1-r_s}{\twoheadrightarrow}\lambda}_{\text{$\widetilde{N_{r_s}}$ terms}}:=\eta\bigl(\check{\lambda}\bigr), $$ where $$ \underbrace{\overset{r_t}{\twoheadrightarrow}\cdot \overset{2\ell+1-r_t}{\twoheadrightarrow}\cdot}_{\text{$\widetilde{N_{r_t}}$ terms}}:=\begin{cases} \overset{r_t}{\twoheadrightarrow}\cdot\overset{2\ell+1-r_t}{\twoheadrightarrow}\cdot, &\text{if $r_t\in\{1,2,\cdots,\ell-1\}$,}\\ \overset{0}{\twoheadrightarrow}\cdot, &\text{if $r_t=0$,}\\
\overset{\ell+1}{\twoheadrightarrow}\cdot \overset{\ell}{\twoheadrightarrow}\cdot \overset{\ell}{\twoheadrightarrow}\cdot \overset{\ell+1}{\twoheadrightarrow}\cdot, &\text{if $r_t=\ell$,} \end{cases} $$ defines a path in Kleshchev's $(2\ell+1)$-good lattice which connects $\emptyset$ and the $(2\ell+1)$-regular partition $\lambda$ satisfying $\MM(\lambda)=\lambda$. \end{thm} \noindent {\bf Remark 3.16}\,\,\,In \cite{BK1}, \cite{BK2}, Brundan and Kleshchev investigated the modular representations of Hecke-Clifford superalgebras with defining parameter a primitive $(2\ell+1)$-th root of unity, as well as of affine Sergeev superalgebras over a field of characteristic $2\ell+1$. Their main result states that the modular socle branching rules of these superalgebras provide a realization of the crystal of the twisted affine Lie algebra of type $A_{2\ell}^{(2)}$. This applies, in particular, to the modular socle branching rules of the spin symmetric group $\widehat{\mathfrak S}_n$, which is the double cover of the symmetric group $\mathfrak S_n$. It would be interesting to know if there is any connection between their results and ours, at least in the special case where $q=1$ and $2\ell+1$ is a prime number. \medskip Let $P_n$ be the set of all partitions of $n$. Let $P:=\sqcup_{n\geq 0}P_n$. Recall that when $\HH_k(\mathfrak S_n)$ is semisimple, $\mathcal{K}_n=P_n$ and $\MM$ degenerates to the map $\lambda\mapsto\lambda^t$ for any $\lambda\in P_n$. Let $DP_n$ be the set of all partitions of $n$ into distinct parts (i.e., the set of all $0$-strict partitions). Let $DP:=\sqcup_{n\geq 0}DP_n$. Let $SP$ be the set of all symmetric partitions, i.e., $SP:=\bigl\{\lambda\in P \bigm|\lambda=\lambda^t\bigr\}$. We shall now establish a bijection between the set $DP$ and the set $SP$.
Note that in the special case where $q=1$ and $2\ell+1$ is a prime number, the set $DP_n$ parameterizes the ordinary irreducible supermodules of the spin symmetric group $\widehat{\mathfrak S}_n$, while the set $SP_n:=\bigl\{\lambda\in P_n \bigm|\lambda=\lambda^t\bigr\}$ parameterizes those ordinary irreducible modules of the symmetric group which split on restriction to the alternating group $A_n$. For each partition $\lambda=(\lambda_1,\lambda_2,\cdots,\lambda_s)\in DP$ with $\ell(\lambda)=s$, we set $\nu:=(\lambda_1,\lambda_2+1,\lambda_3+2,\cdots,\lambda_s+s-1)$ and let $\nu^t=(\nu^t_1,\nu^t_2,\cdots,\nu^t_{\lambda_1})$ be the conjugate of $\nu$. We define $$ \widetilde{\eta}\bigl(\lambda\bigr)=(\lambda_1,\lambda_2+1,\lambda_3+2,\cdots,\lambda_s+s-1,\nu^{t}_{s+1}, \nu^{t}_{s+2},\cdots,\nu^t_{\lambda_1}). $$ \addtocounter{thm}{1} \begin{thm} \label{thm317} With the above notations, the map $\widetilde{\eta}$ defines a bijection from the set $DP$ onto the set $SP$. \end{thm} \noindent {Proof:} \,Let $\lambda\in DP$. By definition, $\lambda_1>\lambda_2>\cdots>\lambda_s$, and $\nu$ has exactly $s$ parts, so that $$ \lambda_1\geq\lambda_2+1\geq\lambda_3+2\geq\cdots\geq\lambda_s+s-1\geq s\geq\nu^{t}_{s+1}\geq \nu^{t}_{s+2}\geq\cdots\geq\nu^t_{\lambda_1}. $$ That is, $\widetilde{\eta}\bigl(\lambda\bigr)\in P$. We claim that $\Bigl(\widetilde{\eta}\bigl(\lambda\bigr)\Bigr)^t=\widetilde{\eta}\bigl(\lambda\bigr)$. We use induction on $\ell(\lambda)$. Suppose that $\Bigl(\widetilde{\eta}\bigl(\rho\bigr)\Bigr)^t=\widetilde{\eta}\bigl(\rho\bigr)$ for any strict partition $\rho$ satisfying $\ell(\rho)<\ell(\lambda)$. We write $\mu=(\mu_1,\cdots,\mu_{\lambda_1})=\widetilde{\eta}\bigl(\lambda\bigr)$. Then $$ \mu_i=\begin{cases} \lambda_i+i-1, &\text{for $1\leq i\leq s$,}\\ \nu^t_i, &\text{for $s+1\leq i\leq\lambda_1$.} \end{cases} $$ By definition, $\mu_i^t=\#\{1\leq j\leq\lambda_1|\mu_j\geq i\}$. It is clear that $\mu^t_1=\lambda_1=\mu_1$.
We remove the first row as well as the first column of $\mu$, and get a partition $\widehat{\mu}$. It is easy to see that $$ \widehat{\mu}=(\lambda_2,\lambda_3+1,\cdots,\lambda_s+s-2,\nu^{t}_{s+1}-1, \nu^{t}_{s+2}-1,\cdots,\nu^t_{\lambda_2+1}-1)=\widetilde{\eta}\bigl(\widehat{\lambda}\bigr), $$ where $\widehat{\lambda}:=(\lambda_2,\lambda_3,\cdots,\lambda_s)$. Note that $\ell(\widehat{\lambda})<\ell(\lambda)$. By the induction hypothesis, we know that $\bigl(\widehat{\mu}\bigr)^t=\widehat{\mu}$. It follows that $\mu^t=\mu$ as well. This proves our claim.\smallskip Second, we claim that the map $\widetilde{\eta}$ is injective. In fact, suppose that $$\begin{aligned} \widetilde{\eta}\bigl(\lambda\bigr)&=(\lambda_1,\lambda_2+1,\lambda_3+2,\cdots,\lambda_s+s-1,\nu^{t}_{s+1}, \nu^{t}_{s+2},\cdots,\nu^t_{\lambda_1})\\ &=(\mu_1,\mu_2+1,\mu_3+2,\cdots,\mu_{s'}+s'-1,\xi^{t}_{s'+1}, \xi^{t}_{s'+2},\cdots,\xi^t_{\mu_1})= \widetilde{\eta}\bigl(\mu\bigr),\end{aligned}$$ where $\lambda,\mu\in DP$, $\ell(\lambda)=s, \ell(\mu)=s', s\leq s'$, and $\xi:=(\mu_1,\mu_2+1,\cdots,\mu_{s'}+s'-1)$. Then $$ \lambda_1 =\ell\Bigl(\widetilde{\eta}\bigl(\lambda\bigr)\Bigr)=\ell\Bigl(\widetilde{\eta}\bigl(\mu\bigr)\Bigr) =\mu_1. $$ It follows that $\lambda_i=\mu_i$ for $i=1,2,\cdots,s$. If $s<s'$, then $\nu_{s+1}^t=\mu_{s+1}+s\geq s+1$, which is impossible since $\nu$ has only $s$ parts. Therefore $s=s'$, and hence $\lambda=\mu$. This proves the injectivity of $\widetilde{\eta}$. It remains to show that $\widetilde{\eta}$ is surjective. Let $\mu\in P$ such that $\mu^t=\mu$. Let $A=(r,r)$ be the unique node on the boundary of $[\mu]$ which sits on the main diagonal of $[\mu]$. We define $$ \lambda:=(\mu_1,\mu_2-1,\mu_3-2,\cdots,\mu_{r}-r+1). $$ Then one sees easily that $\lambda\in DP$ and $\widetilde{\eta}(\lambda)=\mu$. This proves that $\widetilde{\eta}$ is surjective, and hence completes the proof of the whole theorem.
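The bijection can be animated on small examples. The sketch below builds $\widetilde{\eta}(\lambda)$ by shifting row $i$ to $\lambda_i+i-1$ and appending the tail obtained by conjugating the shifted rows, and inverts it by cutting along the main diagonal (function names are ours):

```python
# Sketch of eta~ : DP -> SP.  Shift row i of the strict partition to
# lambda_i + i - 1, then append the tail obtained by conjugating the
# shifted rows; the inverse cuts along the main diagonal.  Names are ours.

def conjugate(mu):
    """Conjugate (transposed) partition of a nonempty partition."""
    return [sum(1 for p in mu if p >= i) for i in range(1, mu[0] + 1)]

def eta_tilde(lam):
    """lam: strictly decreasing positive parts (an element of DP)."""
    s = len(lam)
    nu = [lam[i] + i for i in range(s)]   # the shifted rows
    return nu + conjugate(nu)[s:]         # tail: nu^t_{s+1}, ..., nu^t_{lam_1}

def diagonal_cut(mu):
    """Inverse map SP -> DP: lam_i = mu_i - i + 1 down the diagonal."""
    r = sum(1 for i, p in enumerate(mu) if p >= i + 1)  # diagonal length
    return [mu[i] - i for i in range(r)]

mu = eta_tilde([5, 3, 2])
assert mu == [5, 4, 4, 3, 1]
assert conjugate(mu) == mu                # the image is symmetric
assert diagonal_cut(mu) == [5, 3, 2]      # the diagonal cut inverts eta~
```

For instance, $\lambda=(5,3,2)$ is sent to the symmetric partition $(5,4,4,3,1)$, whose main diagonal has length $3$.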
\medskip \noindent {\bf Remark 3.18}\,\,\,We remark that if one considers the special case where $q=1$ and $2\ell+1$ is a prime number, it would be interesting to know if the reduced decomposition matrices (in the sense of \cite[(6.2)]{LT2}) of the spin symmetric groups are embedded as submatrices into the decomposition matrices of the alternating groups in odd characteristic $e$ via our bijections $\eta$ and $\widetilde{\eta}$. \bigskip Now we suppose that $e=2\ell$. That is, we are in the even case. In this case, $\check{\mathfrak{g}}$ is the twisted affine Lie algebra of type $D_{\ell+1}^{(2)}$. By \cite[(14.5.4)]{Kac}, the principal specialized character of $L(\check{\Lambda}_0)$ is given by \addtocounter{equation}{6} \begin{equation}\label{equa319} \cch_t L(\check{\Lambda}_0)=\prod_{\text{$i\geq 1$, $i$ odd}}\frac{1}{1-t^i}. \end{equation} Hence by (\ref{equa311}) and (\ref{equa319}), we get that \addtocounter{thm}{2} \begin{thm} \label{thm320} With the above notations, we have that $$ \prod_{\text{$i\geq 1$, $i$ odd}}\frac{1}{1-t^i}= \sum_{n\geq 0}\biggl(\sum_{0\leq m+m'\leq n}N(2n-m-m',m,m')\biggr)t^n.$$ \end{thm} We propose the following definition. \addtocounter{dfn}{20} \begin{dfn} \label{dfn321} Let $f\in\mathbb{N}$ with $f>1$. An $f$-strict partition $\lambda$ is called double restricted if $$\begin{cases}\lambda_i-\lambda_{i+1}\leq 2f, &\text{if $f\nmid\lambda_i$,}\\ \lambda_i-\lambda_{i+1}<2f, &\text{if $f|\lambda_i$,} \end{cases} \quad\text{for each $i=1,2,\cdots.$} $$ Here we make the convention that $\lambda_i=0$ for any $i>\ell(\lambda)$. \end{dfn} Let $DDPR_f(n)$ denote the set of all double restricted $f$-strict partitions of $n$. Let $DDPR_f:=\sqcup_{n\geq 0}DDPR_f(n)$.\smallskip In \cite{K}, Kang has given a combinatorial realization of $\check{\mathbb{B}}(\check{\Lambda}_{0})$ in terms of reduced proper Young walls, which are inductively defined.
In our $D_{\ell+1}^{(2)}$ case, we shall give a direct explicit characterization in terms of double restricted $(\ell+1)$-strict partitions as follows. As before, elements $(r,s)\in\mathbb{Z}_{>0}\times\mathbb{Z}_{>0}$ are called nodes. Let $\lambda$ be an $(\ell+1)$-strict partition. We label the nodes of $\lambda$ with {\it residues}, which are the elements of $\mathbb{Z}/(\ell+1)\mathbb{Z}$. The residue of the node $A$ is denoted $\res(A)$. The labelling depends only on the column, following the repeating pattern $$ \overline{0},\overline{1},\cdots, \overline{\ell-1},\overline{\ell},\overline{\ell},\overline{\ell-1},\cdots,\overline{1},\overline{0}, $$ starting from the first column and going to the right. For example, let $e=4$, $\ell=2$, and let $\lambda=(9,9,7,1)$ be a double restricted $3$-strict partition of $26$. Its residues are as follows: $$ \begin{matrix} \overline{0} & \overline{1} & \overline{2} & \overline{2} & \overline{1} & \overline{0} & \overline{0} & \overline{1} & \overline{2} \\ \overline{0} & \overline{1} & \overline{2} & \overline{2} & \overline{1} & \overline{0} & \overline{0} & \overline{1} & \overline{2} \\ \overline{0} & \overline{1} & \overline{2} & \overline{2} & \overline{1} & \overline{0} & \overline{0} & & \\ \overline{0} & & & & & & & & \end{matrix} $$ Let $\lambda$ be an $(\ell+1)$-strict partition.
A node $A=(r,s)\in[\lambda]$ is called removable (for $\lambda$) if either R1) $\lambda_{A}:=\lambda-\{A\}$ is again an $(\ell+1)$-strict partition; or R2) the node $B=(r,s+1)$ immediately to the right of $A$ belongs to $\lambda$, $\res(A)=\res(B)$, and both $\lambda_B$ and $\lambda_{AB}:=\lambda-\{A,B\}$ are $(\ell+1)$-strict partitions.\smallskip Similarly, a node $B=(r,s)\notin[\lambda]$ is called addable (for $\lambda$) if either A1) $\lambda^{B}:=\lambda\cup\{B\}$ is again an $(\ell+1)$-strict partition; or A2) the node $A=(r,s-1)$ immediately to the left of $B$ does not belong to $\lambda$, $\res(A)=\res(B)$, and both $\lambda^A:=\lambda\cup\{A\}$ and $\lambda^{AB}:=\lambda\cup\{A,B\}$ are $(\ell+1)$-strict partitions. Now we can define the notions of normal (resp., conormal) nodes, good (resp., cogood) nodes, the functions $\varepsilon_i, \varphi_i$ and the operators $\widetilde{e}_i, \widetilde{f}_i$ in the same way as in the case where $e=2\ell+1$. Note that the definition of residue in the even case is different from that in the odd case, and in the even case we deal with $(\ell+1)$-strict partitions instead of $e$-strict partitions. \addtocounter{lem}{7} \begin{lem} \label{lem322} Let $\lambda$ be any given double restricted $(\ell+1)$-strict partition. Then 1) there exists a good (removable) node as well as a cogood (addable) node for $\lambda$; 2) for any good (removable) node $A$ for $\lambda$, $\lambda-\{A\}$ is again a double restricted $(\ell+1)$-strict partition. In particular, there is a path (not necessarily unique) from the empty partition $\emptyset$ to $\lambda$ in the lattice spanned by double restricted $(\ell+1)$-strict partitions; 3) for any cogood (addable) node $A$ for $\lambda$, $\lambda\cup\{A\}$ is again a double restricted $(\ell+1)$-strict partition. \end{lem} \noindent {Proof:} \,We write $\lambda=(\lambda_1,\cdots,\lambda_s)$, where $\ell(\lambda)=s$. Let $B=(s,\lambda_s)$.
Then, as $\lambda$ is double restricted, either $\lambda_s=1$, or $\lambda_s>1$ and $\res(B)\neq\overline{0}$. In both cases, one sees easily that $B$ must be a normal $\res(B)$-node (as there are no addable $\res(B)$-nodes below $B$). It follows that there must exist a good (removable) $\res(B)$-node for $\lambda$. In a similar way, one can show that $B'=(1,\lambda_1+1)$ is a conormal $\res(B')$-node, which implies that there must exist a cogood (addable) $\res(B')$-node for $\lambda$. This proves 1). Now let $A=(a,\lambda_a)$ be a good (removable) node for $\lambda$. Then $A$ is necessarily of type R1). If $a=1$, then it is easy to check that $\lambda-\{A\}$ is again double restricted $(\ell+1)$-strict. Suppose that $a>1$. We write $\res(A)=i$. We claim that $\lambda_{a-1}-\lambda_a<2(\ell+1)$. In fact, if $\lambda_{a-1}-\lambda_a=2(\ell+1)$, then either $\lambda_a\not\equiv 0\!\pmod{\ell+1}$, or $\lambda_a\equiv 0\!\pmod{\ell+1}$. In the former case, one sees easily that $(a-1,\lambda_{a-1})$ is a removable $i$-node of type R1) to the right of $A$, and there is no addable $i$-node sitting between them. Now as $A$ survives after deleting all the strings ``AR'', the node $(a-1,\lambda_{a-1})$ must also survive after deleting all the strings ``AR''. In other words, it is in fact a normal $i$-node of $\lambda$ higher than $A$, which is impossible (since $A$ is the unique good $i$-node of $\lambda$). In the latter case, it would follow that $\lambda_{a-1}\equiv 0\!\pmod{\ell+1}$, and hence $\lambda_{a-1}-\lambda_a<2(\ell+1)$ because $\lambda$ is double restricted $(\ell+1)$-strict, which is again a contradiction. This proves our claim. Now there are only five possibilities: \smallskip \noindent {\it Case 1.}\,\,$i\notin\{\overline{0},\overline{\ell}\}$.\smallskip Then either $\lambda_{a-1}\not\equiv 0\!\pmod{\ell+1}$, or $\lambda_{a-1}\equiv 0\!\pmod{\ell+1}$ and $\lambda_{a-1}-\lambda_a<2\ell+1$.
In both cases, one checks easily that $\lambda-\{A\}$ is again a double restricted $(\ell+1)$-strict partition. \smallskip \noindent {\it Case 2.}\,\,$i=\overline{\ell}$ and $\lambda_a\equiv 0\!\pmod{\ell+1}$.\smallskip Since $\lambda_{a-1}-\lambda_a<2(\ell+1)$, it follows that $\lambda_{a-1}-(\lambda_a-1)\leq 2(\ell+1)$. Now $\lambda_a\equiv 0\!\pmod{\ell+1}$ implies that either $\lambda_{a-1}\not\equiv 0\!\pmod{\ell+1}$ or $\lambda_{a-1}=\lambda_a+\ell+1$. In both cases one sees easily that, since $\lambda$ is double restricted $(\ell+1)$-strict, $\lambda-\{A\}$ must be double restricted $(\ell+1)$-strict too. \smallskip \noindent {\it Case 3.}\,\,$i=\overline{\ell}$ and $\lambda_a\equiv 1\!\pmod{\ell+1}$.\smallskip We know that $\lambda_{a-1}-\lambda_a<2(\ell+1)$. We claim that $\lambda_{a-1}-\lambda_a<2\ell+1$. In fact, if $\lambda_{a-1}-\lambda_a= 2\ell+1$, then $(a-1,\lambda_{a-1})$ must be another normal $\overline{\ell}$-node higher than $A$, which is impossible. This proves our claim. Therefore, $\lambda_{a-1}-(\lambda_a-1)\leq 2\ell+1$, which implies that $\lambda-\{A\}$ is still double restricted $(\ell+1)$-strict.\smallskip \noindent {\it Case 4.}\,\,$i=\overline{0}$ and $\lambda_a\equiv 0\!\pmod{2(\ell+1)}$.\smallskip In this case one proves that $\lambda-\{A\}$ is double restricted $(\ell+1)$-strict by using the same argument as in the proof of Case 2. \smallskip \noindent {\it Case 5.}\,\,$i=\overline{0}$ and $\lambda_a\equiv 1\!\pmod{2(\ell+1)}$.\smallskip In this case one proves that $\lambda-\{A\}$ is double restricted $(\ell+1)$-strict by using the same argument as in the proof of Case 3. This completes the proof of 2). The proof of 3) is similar and is left to the reader.
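The residue pattern used repeatedly in the proof above depends only on the column and can be computed mechanically; the following sketch (Python; our own illustration) reproduces the residues displayed earlier for $\lambda=(9,9,7,1)$ with $\ell=2$:

```python
def residue(col, ell):
    # the residue depends only on the column, repeating with period 2(ell+1):
    # 0, 1, ..., ell-1, ell, ell, ell-1, ..., 1, 0
    pattern = list(range(ell + 1)) + list(range(ell, -1, -1))
    return pattern[(col - 1) % (2 * (ell + 1))]

# reproduce the residue table displayed above for lambda = (9, 9, 7, 1), ell = 2
lam, ell = (9, 9, 7, 1), 2
rows = [[residue(c, ell) for c in range(1, part + 1)] for part in lam]
assert rows[0] == [0, 1, 2, 2, 1, 0, 0, 1, 2]
assert rows[3] == [0]
```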
\medskip Therefore, the lattice spanned by all double restricted $(\ell+1)$-strict partitions, equipped with the functions $\varepsilon_i, \varphi_i$ and the operators $\widetilde{e}_i, \widetilde{f}_i$, can be turned into a colored oriented graph which we denote by $\widetilde{\mathfrak{RP}}_{\ell+1}$. \begin{lem} The graph $\widetilde{\mathfrak{RP}}_{\ell+1}$ can be identified with the crystal graph $\check{\mathbb{B}}(\check{\Lambda}_{0})$ associated to the integrable highest weight $\check{\mathfrak{g}}$-module of highest weight $\check{\Lambda}_0$.\end{lem} \noindent {Proof:} \,This follows from Lemma \ref{lem322} and Kang's combinatorial construction of the proper Young wall (see \cite{K} and \cite{HK}). Note that our definitions of removable and addable nodes are in accordance with those given in \cite[pages 275, 278]{K}. To translate the language of proper Young walls into the language of double restricted strict partitions, one has to think of the columns of the Young walls in \cite{K} as the rows of our double restricted strict partitions.
\smallskip Applying Theorem \ref{thm37}, we get that \addtocounter{thm}{3} \begin{thm} \label{thm324} With the above notations, there is a bijection $\eta$ from the set $DDPR_{\ell+1}$ of double restricted $(\ell+1)$-strict partitions onto the set $\bigl\{\lambda\in\mathcal{K}\bigm|\MM(\lambda)=\lambda\bigr\}$, such that if $$ \emptyset\overset{r_1}{\twoheadrightarrow}\cdot \overset{r_2}{\twoheadrightarrow}\cdot \cdots\cdots \overset{r_s}{\twoheadrightarrow} \check{\lambda} $$ is a path from $\emptyset$ to $\check{\lambda}$ in the graph $\widetilde{\mathfrak{RP}}_{\ell+1}$, then the sequence $$ \emptyset\underbrace{\overset{r_1}{\twoheadrightarrow}\cdot \overset{2\ell-r_1}{\twoheadrightarrow}\cdot}_{\text{$N_{r_1}$ terms}}\underbrace{\overset{r_2}{\twoheadrightarrow}\cdot \overset{2\ell-r_2}{\twoheadrightarrow}\cdot}_{\text{$N_{r_2}$ terms}} \cdots\cdots\underbrace{\cdot \overset{r_s}{\twoheadrightarrow}\cdot \overset{2\ell-r_s}{\twoheadrightarrow}\lambda}_{\text{$N_{r_s}$ terms}}:=\eta\bigl(\check{\lambda}\bigr), $$ where $$ \underbrace{\overset{r_t}{\twoheadrightarrow}\cdot \overset{2\ell-r_t}{\twoheadrightarrow}\cdot}_{\text{$N_{r_t}$ terms}}:=\begin{cases} \overset{r_t}{\twoheadrightarrow}\cdot \overset{2\ell-r_t}{\twoheadrightarrow}\cdot, &\text{if $r_t\in\{1,2,\cdots,\ell-1\}$,}\\ \overset{r_t}{\twoheadrightarrow}\cdot, &\text{if $r_t\in\{{0},{\ell}\}$,}\\ \end{cases} $$ defines a path in Kleshchev's $(2\ell)$-good lattice which connects $\emptyset$ and the $(2\ell)$-regular partition $\lambda$ satisfying $\MM(\lambda)=\lambda$. \end{thm} \noindent {\bf Remark 3.25}\,\,\,In \cite{LT2}, Leclerc and Thibon conjectured that the decomposition matrices of Hecke-Clifford superalgebras with parameter $q$ should be related to the Fock space representation of the twisted affine Lie algebra of type $A_{2\ell}^{(2)}$ if $q$ is a primitive $(2\ell+1)$-th root of unity; or of type $D_{\ell+1}^{(2)}$ if $q$ is a primitive $(2\ell)$-th root of unity.
In \cite{BK1}, \cite{BK2}, Brundan and Kleshchev showed that the modular irreducible super-representations of Hecke-Clifford superalgebras with parameter $q$ a primitive $(2\ell+1)$-th root of unity, as well as those of affine Sergeev superalgebras over a field of characteristic $2\ell+1$, are parameterized by the set of restricted $(2\ell+1)$-strict partitions, which partly verified the idea of \cite{LT2}. It would be interesting to know if our notion of double restricted $(\ell+1)$-strict partitions gives a natural parameterization of the modular irreducible super-representations of Hecke-Clifford superalgebras when $q$ is a primitive $(2\ell)$-th root of unity. \bigskip\bigskip \centerline{ACKNOWLEDGEMENT} \bigskip \thanks{Research supported by the URF of Victoria University of Wellington, the National Natural Science Foundation of China (Project 10401005) and the Program NCET. The author wishes to thank the School of Mathematics, Statistics and Computer Science at Victoria University of Wellington for its hospitality during his visit in 2005. He also thanks the referee for several helpful comments and for pointing out an error in the first version of this paper.}
\section{Introduction} \label{sec:intro} It is widely believed that complex systems of interest in the sciences and engineering are both modular and hierarchical. Network theory uses the tools and visual language of graph theory to model such systems, and has proven to be both effective and flexible in describing their modular character. However, the field has put less of an emphasis on finding powerful and versatile language for describing the hierarchical aspects of complex systems. There is growing confidence that category theory can provide the necessary conceptual setting for this project. This is seen, for example, in Mikhail Gromov's well-known claim, ``the mathematical language developed by the end of the 20th century by far exceeds in its expressive power anything, even imaginable, say, before 1960. Any meaningful idea coming from science can be fully developed in this language.'' \cite{Gromov} Joyal and Street's work on string diagrams \cite{JSTensor} for monoidal categories and (with Verity) on traced monoidal categories \cite{JSTraced} has been used for decades to visualize compositions and feedback in networked systems, for example in the theory of flow charts \cite{Arthan}. Precursors, such as Penrose diagrams and flow diagrams, have been used in physics and the theory of computation, respectively, since the 1970's \cite{Scott,Baez1}. Over the past several years, the second author and collaborators have been developing a novel approach to modular hierarchical systems based on the language of operads and symmetric monoidal categories \cite{Spivak2,RupelSpivak}. The main contribution to the theory of string diagrams of the present research program is the inclusion of an outer box, which allows for holarchic \cite{Koestler} combinations of these diagrams. That is, the parts can be assembled into a whole, which can itself be a part. The composition of such assemblies can now be viewed as morphism composition in an operad. 
In fact, there is a strong connection between traced monoidal categories and algebras on these operads, such as our operad $\Opd{\mathbf{W}}$ of wiring diagrams, though it will not be explained here (see \cite{SpivakSchultzRupel} for details). More broadly, category theory can organize graphical languages found in a variety of applied contexts. For example, it is demonstrated in \cite{Baez1} and \cite{Coecke} that the theory of monoidal categories unifies the diagrams coming from diverse fields such as physics, topology, logic, computation, and linguistics. More recently, as in \cite{Baez2}, there has been growing interest in viewing more traditionally applied fields, such as ecology, biology, chemistry, electrical engineering, and control theory, through such a lens. Specifically, category theory has been used to draw connections among visual languages such as planar knot diagrams, Feynman diagrams, circuit diagrams, signal flow graphs, Petri nets, entity relationship diagrams, social networks, and flow charts. This research is building toward what John Baez has called ``a foundation of applied mathematics'' \cite{BaezTalk}. The goal of the present paper is to show that open continuous time dynamical systems form an algebra over a certain (colored) operad, which we call the operad of \emph{wiring diagrams}. It is a variant of the operad that appeared in \cite{RupelSpivak}. That is, wiring diagrams provide a straightforward, diagrammatic language to understand how dynamical systems that describe processes can be built up from the systems that describe their sub-processes. More precisely, we will define a symmetric monoidal category $\mathbf{W}$ of black boxes and wiring diagrams. Its underlying operad $\Opd{\mathbf{W}}$ is a graphical language for building larger black boxes out of an interconnected set of smaller ones.
We then define two $\mathbf{W}$-algebras, $\mathcal{G}$ and $\mathcal{L}$, which encode \emph{open dynamical systems}, i.e., differential equations of the form \begin{align}\label{dia:basic form} \begin{cases} \dot{Q}=\inp{f}(Q,input)\\ output=\outp{f}(Q) \end{cases} \end{align} where $Q$ represents an internal state vector, $\dot{Q}=\frac{dQ}{dt}$ represents its time derivative, and $input$ and $output$ represent inputs to and outputs from the system. In $\mathcal{G}$, the functions $\inp{f}$ and $\outp{f}$ are smooth, whereas in the subalgebra $\mathcal{L}\subseteq\mathcal{G}$, they are moreover linear. The fact that $\mathcal{G}$ and $\mathcal{L}$ are $\mathbf{W}$-algebras captures the fact that these systems are closed under wiring diagram interconnection. Our notion of interconnection is a generalization of that in Deville and Lerman \cite{DL1}, \cite{DL2}, \cite{DL3}. Their version of interconnection produces a closed system from open ones, and can be understood in the present context as a morphism whose codomain is the closed box (see Definition~\ref{def:mon}). Graph fibrations between wiring diagrams form an important part of their formalism, though we do not discuss that aspect here. This paper is the third in a series, following \cite{RupelSpivak} and \cite{Spivak2}, on using wiring diagrams to model interactions. The algebra we present here, that of open systems, is distinct from the algebras of relations and of propagators studied in earlier works. Beyond the dichotomy of discrete vs. continuous, these algebras are markedly different in structure. For one thing, the internal wires in \cite{RupelSpivak} themselves carry state, whereas here, a wire should be thought of as instantaneously transmitting its contents from an output site to an input site. 
Another difference between our algebra and those of previous works is that the algebras here involve \emph{open systems} in which, as in (\ref{dia:basic form}), the instantaneous change of state is a function of the current state and the input, whereas the output depends only on the current state (see Definition~\ref{def:general algebra}). The differences between these algebras are also reflected in a mild difference between the operad we use here and the one used in previous work. \subsection{Motivating example} The motivating example for the algebras in this paper comes from classical differential equations pedagogy; namely, systems of tanks containing salt water concentrations, with pipes carrying fluid among them. The systems of ODEs produced by such applications constitute a subset of those our language can address; they are linear systems with a certain form (see Example~\ref{ex:as promised}). To ground the discussion, we consider a specific example. \begin{ex} \label{ex:main} Figure~\ref{fig:pipebrine} below reimagines a problem from Boyce and DiPrima's canonical text \mbox{\cite[Figure 7.1.6]{BD}} as a dynamical system over a \emph{wiring diagram}.
\begin{figure}[ht] \activetikz{ \path(0,0); \blackbox{(10,5)}{2}{1}{$Y$}{.7} \node at (.4,3.6) {\small $\inp{Y}_{a}$}; \node at (.4,1.9) {\small $\inp{Y}_{b}$}; \node at (9.6,2.75) {\small $\outp{Y}_a$}; \path(2,1.5); \blackbox{(2,2)}{2}{1}{$X_1$}{.5} \node at (3,2.6) {\tiny $Q_1(t)$ oz salt}; \node at (3,2.3) {\tiny 30 gal water}; \node at (1.7,3.06) {\small $\inp{X}_{1a}$}; \node at (1.7,2.4) {\small $\inp{X}_{1b}$}; \node at (4.38,2.7) {\small $\outp{X}_{1a}$}; \path(6,1.5); \blackbox{(2,2)}{2}{2}{$X_2$}{.5} \node at (7,2.6) {\tiny $Q_2(t)$ oz salt}; \node at (7,2.3) {\tiny 20 gal water}; \node at (5.7,3.07) {\small $\inp{X}_{2a}$}; \node at (5.7,2.4) {\small $\inp{X}_{2b}$}; \node at (8.37,3.07) {\small $\outp{X}_{2a}$}; \node at (8.37,2.37) {\small $\outp{X}_{2b}$}; \directarc{(4.25,2.5)}{(5.75,2.16667)} \node at (5,2) {\tiny 3 gal/min}; \directarc{(0.35,1.6667)}{(1.75,2.83333)} \node at (.7,1.4) {\tiny 1.5 gal/min}; \node at (.7,1.2) {\tiny 1 oz/gal}; \fancyarc{(0.35,3.3333)}{(5.75,2.83333)}{-40}{25} \node at (.6,3.1) {\tiny 1 gal/min}; \node at (.6,2.9) {\tiny 3 oz/gal}; \directarc{(8.25,2.8333)}{(9.65,2.5)} \node at (9.5,2.3) {\tiny 2.5}; \node at (9.5,2.1) {\tiny gal/min}; \fancyarc{(1.75,2.16667)}{(8.25,2.16667)}{20}{-45} \node at (5,.3) {\tiny 1.5 gal/min}; } \caption{A dynamical system from Boyce and DiPrima interpreted over a wiring diagram $\Phi\colon X_1,X_2\to Y$ in $\Opd{\mathbf{W}}$.} \label{fig:pipebrine} \end{figure} In this diagram, $X_1$ and $X_2$ are boxes that represent tanks consisting of salt water solution. The functions $Q_1(t)$ and $Q_2(t)$ represent the amount of salt (in ounces) found in 30 and 20 gallons of water, respectively. These tanks are interconnected with each other by pipes embedded within a total system $Y$. The prescription for how wires are attached among the boxes is formally encoded in the wiring diagram $\Phi:X_1,X_2\to Y$, as we will discuss in Definition \ref{def:W}. 
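Reading the flow rates off the labels of Figure~\ref{fig:pipebrine}, and assuming instantaneous perfect mixing in both tanks, the interconnected system is governed by a pair of linear ODEs. The following numeric sketch (Python; the Euler integration and tolerances are our own illustration, not part of the formalism) checks that the salt contents approach the equilibrium $(Q_1,Q_2)=(42,36)$ ounces:

```python
def f_in(Q1, Q2):
    # oz/min of salt: inflows from outside, flows between tanks, outflow;
    # concentrations are Q1/30 and Q2/20 oz/gal (perfect-mixing assumption)
    dQ1 = 1.5 * 1.0 + 1.5 * (Q2 / 20) - 3.0 * (Q1 / 30)
    dQ2 = 1.0 * 3.0 + 3.0 * (Q1 / 30) - (1.5 + 2.5) * (Q2 / 20)
    return dQ1, dQ2

def f_out(Q1, Q2):
    # readout on the single outer output: oz/min of salt leaving X2
    return 2.5 * (Q2 / 20)

Q1, Q2, dt = 0.0, 0.0, 0.01
for _ in range(20000):  # integrate to t = 200 min by explicit Euler
    d1, d2 = f_in(Q1, Q2)
    Q1, Q2 = Q1 + dt * d1, Q2 + dt * d2
assert abs(Q1 - 42.0) < 0.1 and abs(Q2 - 36.0) < 0.1
```

At equilibrium the readout is $2.5\cdot 36/20=4.5$ oz/min, matching the total salt inflow $1.5+3$ oz/min, as conservation demands.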
Both tanks are being fed salt water concentrations at constant rates from the outside world. Specifically, $X_1$ is fed a 1 ounce salt per gallon water solution at 1.5 gallons per minute and $X_2$ is fed a 3 ounce salt per gallon water solution at 1 gallon per minute. The tanks also both feed each other their solutions, with $X_1$ feeding $X_2$ at 3 gallons per minute and $X_2$ feeding $X_1$ at 1.5 gallons per minute. Finally, $X_2$ feeds the outside world its solution at 2.5 gallons per minute. The dynamics of the salt water concentrations both within and leaving each tank $X_i$ is encoded in a linear open system $f_i$, consisting of a differential equation for $Q_i$ and a readout map for each $X_i$ output (see Definition \ref{def:opensystem}). Our algebra $\mathcal{L}$ allows one to assign a linear open system $f_i$ to each tank $X_i$, and by functoriality the morphism $\Phi\colon X_1,X_2\to Y$ produces a linear open system for the larger box $Y$. We will explore this construction in detail, in particular providing explicit formulas for it in the linear case, as well as for more general systems of ODEs. \end{ex} \section{Preliminary Notions} \label{sec:pre} Throughout this paper we use the language of monoidal categories and functors. Depending on the audience, appropriate background on basic category theory can be found in MacLane \cite{MacLane}, Awodey \cite{Awodey}, or Spivak \cite{Spivak}. Leinster~\cite{Leinster} is a good source for more specific information on monoidal categories and operads. We refer the reader to \cite{KFA} for an introduction to dynamical systems. \begin{notation*} We denote the category of sets and functions by $\mathbf{Set}$ and the full subcategory spanned by finite sets as $\mathbf{FinSet}$. We generally do not concern ourselves with cardinality issues. 
We follow Leinster \cite{Leinster} and use $\times$ for binary product and $\Pi$ for arbitrary product, and dually $+$ for binary coproduct and $\amalg$ for arbitrary coproduct in any category. By {\em operad} we always mean symmetric colored operad or, equivalently, symmetric multicategory. \end{notation*} \subsection{Monoidal categories and operads} In Section~\ref{sec:W}, we will construct the symmetric monoidal category $(\mathbf{W},\oplus,0)$ of boxes and wiring diagrams, which we often simply denote as $\mathbf{W}$. We will sometimes consider the underlying operad $\Opd{\mathbf{W}}$, obtained by applying the fully faithful functor \[\mathcal{O}\colon \mathbf{SMC}\to\mathbf{Opd}\] to $\mathbf{W}$. A brief description of this functor $\Opd{}$ is given below in Definition~\ref{def:SMC to Opd}. \begin{defn}\label{def:SMC to Opd} Let $\mathbf{SMC}$ denote the category of symmetric monoidal categories and lax monoidal functors, and let $\mathbf{Opd}$ denote the category of operads and operad functors. Given a symmetric monoidal category $(\mathcal{C},\otimes,I_\mathcal{C})\in\Ob\mathbf{SMC}$, we define the operad $\Opd{\mathcal{C}}$ as follows: \[\Ob \Opd{\mathcal{C}}:=\Ob\mathcal{C}, \hspace{10 mm} \Hom_{\Opd{\mathcal{C}}} (X_1,\ldots,X_n;Y):=\Hom_{\mathcal{C}}(X_1\otimes\cdots\otimes X_n,Y)\] for any $n\in\mathbb N$ and objects $X_1,\ldots,X_n,Y\in\Ob\mathcal{C}$. Now suppose $F\colon (\mathcal{C},\otimes,I_{\mathcal{C}})\to(\mathcal{D},\odot,I_{\mathcal{D}})$ is a lax monoidal functor in $\mathbf{SMC}$. By definition such a functor is equipped with a morphism \[\mu\colon FX_1\odot\cdots\odot FX_n\to F(X_1\otimes\cdots\otimes X_n),\] natural in the $X_i$, called the {\em coherence map}.
With this map in hand, we define the operad functor $\Opd{F}\colon \Opd{\mathcal{C}}\to \Opd{\mathcal{D}}$ by stating how it acts on objects $X$ and morphisms $\Phi\colon X_1,\ldots,X_n\to Y$ in $\Opd{\mathcal{C}}$: \[\Opd{F}(X):=F(X),\hspace{1.8 mm} \Opd{F}(\Phi:X_1,\ldots,X_n\to Y):=F(\Phi)\circ\mu:FX_1\odot\cdots\odot FX_n\to FY.\] \end{defn} \begin{ex} \label{ex:sets} Consider the symmetric monoidal category $(\mathbf{Set} ,\times,\star)$, where $\times$ is the cartesian product of sets and $\star$ a one element set. Define \mbox{$\mathbf{Sets}:=\Opd{\mathbf{Set} }$} as in Definition \ref{def:SMC to Opd}. Explicitly, $\mathbf{Sets}$ is the operad in which an object is a set and a morphism $f\colon X_1,\ldots,X_n\to Y$ is a function $f\colon X_1\times\cdots\times X_n\to Y$. \end{ex} \begin{defn} \label{def:algebra} Let $\mathcal{C}$ be a symmetric monoidal category and let $\mathbf{Set}=(\mathbf{Set},\times,\star)$ be as in Example \ref{ex:sets}. A \emph{$\mathcal{C}$-algebra} is a lax monoidal functor $\mathcal{C}\to\mathbf{Set}$. Similarly, if $\mathcal{D}$ is an operad, a \emph{$\mathcal{D}$-algebra} is defined as an operad functor $\mathcal{D}\to\mathbf{Sets}$. \end{defn} To avoid subscripts, we will generally use the formalism of SMCs in this paper. Definitions~\ref{def:SMC to Opd} and \ref{def:algebra} can be applied throughout to recast everything we do in terms of operads. The primary reason operads may be preferable in applications is that they suggest more compelling pictures. Hence throughout this paper, depictions of wiring diagrams will often be operadic, i.e., have many input boxes wired together into one output box. \subsection{Typed sets} Each box in a wiring diagram will consist of finite sets of ports, each labelled by a type. To capture this idea precisely, we define the notion of typed finite sets. By a \emph{finite product} category, we mean a category that is closed under taking finite products. 
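Composition in the operad $\mathbf{Sets}$ of Example~\ref{ex:sets} is simply substitution of a function into one argument slot of another. A toy sketch (Python; the helper \texttt{compose} and its slot convention are our own illustration, not notation used elsewhere in this paper):

```python
def compose(g, i, f):
    # substitute the m-ary morphism f into the i-th slot (0-indexed) of g,
    # yielding a morphism of arity arity(g) - 1 + m, as in operadic o_i
    m = f.__code__.co_argcount
    def h(*args):
        inner = f(*args[i:i + m])
        return g(*args[:i], inner, *args[i + m:])
    return h

add = lambda x, y: x + y   # a morphism (X, X; X) in Sets
mul = lambda x, y: x * y
h = compose(add, 1, mul)   # h(x, y, z) = x + y * z
assert h(2, 3, 4) == 14
```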
\begin{defn}\label{def:typed finite sets} Let $\mathcal{C}$ be a small finite product category. The category of \emph{$\mathcal{C}$-typed finite sets}, denoted $\TFS{\mathcal{C}}$, is defined as follows. An object in $\TFS{\mathcal{C}}$ is a map from a finite set to the objects of $\mathcal{C}$: \[\Ob\TFS{\mathcal{C}}:=\{(A,\tau)\; |\; A\in\Ob\mathbf{FinSet}, \tau\colon A\to\Ob\mathcal{C}\}.\] Intuitively, one can think of a typed finite set as a finite unordered list of $\mathcal{C}$-objects. For any element $a\in A$, we call the object $\tau(a)$ its {\em type}. If the typing function $\tau$ is clear from context, we may denote $(A,\tau)$ simply by $A$. A morphism $q\colon(A,\tau)\to (A',\tau')$ in $\TFS{\mathcal{C}}$ consists of a function $q\colon A\to A'$ that makes the following diagram of finite sets commute: \[\xymatrix{ A \ar[rr]^q \ar[rd]_\tau & {} & A' \ar[ld]^{\tau'}\\ &\Ob\mathcal{C} } \] Note that $\TFS{\mathcal{C}}$ is a cocartesian monoidal category. We refer to the morphisms of $\TFS{\mathcal{C}}$ as {\em $\mathcal{C}$-typed functions}. If a $\mathcal{C}$-typed function $q$ is bijective, we call it a \emph{$\mathcal{C}$-typed bijection}. \end{defn} In other words, $\TFS{\mathcal{C}}$ is the comma category for the diagram $$\mathbf{FinSet}\To{i}\mathbf{Set}\From{\Ob\mathcal{C}}\{*\}$$ where $i$ is the inclusion. \begin{defn} \label{def:depprod} Let $\mathcal{C}$ be a finite product category, and let $(A,\tau)\in\Ob\TFS{\mathcal{C}}$ be a $\mathcal{C}$-typed finite set. Its \emph{dependent product} $\ol{(A,\tau)}\in\Ob\mathcal{C}$ is defined as \[\overline{(A,\tau)}:=\prod_{a\in A}\tau(a).\] Coordinate projections and diagonals are generalized as follows.
Given a typed function $q\colon (A,\tau)\to (A',\tau')$ in $\TFS{\mathcal{C}}$ we define \[\ol{q}\colon \ol{(A',\tau')}\to\ol{(A,\tau)}\] to be the unique morphism for which the following diagram commutes for all \mbox{$a\in A$}: \[\xymatrix{ \prod_{a'\in A'}\tau'(a') \ar[r]^{\overline{q}} \ar[d]_{\pi_{q(a)}} & \prod_{a\in A}\tau(a) \ar[d]^{\pi_a}\\ \tau'(q(a))\ar@{=}[r]&\tau(a) } \] By the universal property for products, this defines a functor, \[\ol{\;\cdot\;}\colon\TFS{\mathcal{C}}^{\text{op}}\to\mathcal{C}.\] \end{defn} \begin{lem} The dependent product functor $\TFS{\mathcal{C}}^{\text{op}}\to\mathcal{C}$ is strong monoidal. In particular, for any finite set $I$ whose elements index typed finite sets $(A_i,\tau_i)$, there is a canonical isomorphism in $\mathcal{C}$, $$\ol{\coprod_{i\in I}(A_i,\tau_i)}\cong\prod_{i\in I}\ol{(A_i,\tau_i)}.$$ \end{lem} \begin{rem} \label{rem:default} The category of second-countable smooth manifolds and smooth maps is essentially small (by the embedding theorem) so we choose a small representative and denote it $\mathbf{Man}$. Note that $\mathbf{Man}$ is a finite product category. Manifolds will be our default typing, in the sense that we generally take $\mathcal{C}:=\mathbf{Man}$ in Definition \ref{def:typed finite sets} and denote \begin{align}\label{dia:TFS} \TFS{}:=\TFS{\mathbf{Man}}. \end{align} We thus refer to the objects, morphisms, and isomorphisms in $\TFS{}$ simply as \emph{typed finite sets}, \emph{typed functions}, and \emph{typed bijections}, respectively. \end{rem} \begin{rem} \label{rem:TFSL} The ports of each box in a wiring diagram will be labeled by manifolds because they are the natural setting for geometrically interpreting differential equations (see \cite{SpiM-CalcMan}). 
For simplicity, one may wish to restrict attention to the full subcategory $\mathbf{Euc}$ of Euclidean spaces $\mathbb R^n$ for $n\in\mathbb N$, because they are the usual domains for ODEs found in the literature; or to the (non-full) subcategory $\mathbf{Lin}$ of Euclidean spaces and linear maps between them, because they characterize linear systems of ODEs. We will return to $\TFS{\mathbf{Lin}}$ in Section \ref{sec:l}. \end{rem} \subsection{Open systems} As a final preliminary, we define our notion of open dynamical system. Recall that every manifold $M$ has a {\em tangent bundle} manifold, denoted $TM$, and a smooth projection map $p\colon TM\to M$. For any point $m\in M$, the preimage $T_mM:=p^{-1}(m)$ has the structure of a vector space, called the {\em tangent space of $M$ at $m$}. If $M\cong\mathbb R^n$ is a Euclidean space then also $T_mM\cong\mathbb R^n$ for every point $m\in M$. A {\em vector field on $M$} is a smooth map $g\colon M\to TM$ such that $p\circ g=\id_M$. See \cite{SpiM-CalcMan} or \cite{Warner} for more background. For the purposes of this paper we make the following definition of open systems; this may not be completely standard. \begin{defn}\label{def:opensystem} Let $M,\inp{U},\outp{U}\in\Ob\mathbf{Man}$ be smooth manifolds and $TM$ be the tangent bundle of $M$. Let $f=(\inp{f},\outp{f})$ denote a pair of smooth maps \begin{align*} \begin{cases} \inp{f}\colon M\times\inp{U}\to TM\\ \outp{f}\colon M\to\outp{U} \end{cases} \end{align*} where, for all $(m,u)\in M\times\inp{U}$ we have $\inp{f}(m,u)\in T_mM$; that is, the following diagram commutes: \[ \xymatrix{ M\times \inp{U} \ar[rr]^{\inp{f}} \ar[rd]_{\pi_M} & {} & TM \ar[ld]^{p}\\ & M } \] We sometimes use $f$ to denote the whole tuple, $$f=(M,\inp{U},\outp{U},f),$$ which we refer to as an \emph{open dynamical system} (or \emph{open system} for short). 
We call $M$ the {\em state space}, $\inp{U}$ the \textit{input space}, $\outp{U}$ the \textit{output space}, $\inp{f}$ the \emph{differential equation}, and $\outp{f}$ the \emph{readout map} of the open system. Note that the pair $f=(\inp{f},\outp{f})$ is determined by a single smooth map $$f\colon M\times\inp{U}\to TM\times\outp{U},$$ which, by a minor abuse of notation, we also denote by $f$. In the special case that $M,U^\text{in},U^\text{out}\in\Ob\mathbf{Lin}$ are Euclidean spaces and $f$ is a linear map (or equivalently $\inp{f}$ and $\outp{f}$ are linear), we call $f$ a \emph{linear open system}. \begin{rem}\label{ex:dynamical system} Let $M$ be a smooth manifold, and let \mbox{$\inp{U}=\outp{U}=\mathbb R^0$} be trivial. Then an open system in the sense of Definition~\ref{def:opensystem} is a smooth map \mbox{$f\colon M\to TM$} over $M$, in other words, a vector field on $M$. From the geometric point of view, vector fields are autonomous (i.e., closed!) dynamical systems; see~\cite{Teschl}. \end{rem} \begin{rem} For an arbitrary manifold $\inp{U}$, a map \mbox{$M\times\inp{U}\to TM$} can be considered as a function $\inp{U}\to\mathbf{VF}(M)$, where $\mathbf{VF}(M)$ is the set of vector fields on $M$. Hence, $\inp{U}$ {\em controls} the behavior of the system in the usual sense. \end{rem} \end{defn} \begin{rem} \label{rem:thepoint} Given an open system $f$ we can form a new open system by feeding the readout of $f$ into the inputs of $f$. For example suppose the open system is of the form \[ \begin{cases} M\times A\times B \xrightarrow{F} TM \\ g= (g_A, g_B)\colon M \to C\times B, \end{cases} \] where $A$, $B$, $C$ and $M$ are manifolds. Define $F'\colon M\times A\to TM$ by \[ F'(m,a) := F(m, a, g_B(m))\qquad \textrm{ for all }\quad (m,a)\in M\times A. \] Then \[ \begin{cases} M\times A \xrightarrow{F'} TM \\ g_A\colon M \to C \end{cases} \] is a new open system obtained by plugging a readout of $f$ into the space of inputs $B$. 
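A concrete numeric instance of this feedback construction (Python; the particular $F$, $g_B$, and spaces $M=A=B=\mathbb{R}$ are invented for illustration):

```python
# an open system on M = R with inputs A x B, whose readout g_B is fed back
# into the B input:  F(m, a, b) = -m + a + b,   g_B(m) = 0.5 * m
F  = lambda m, a, b: -m + a + b
gB = lambda m: 0.5 * m

# plugging the readout into the B input gives F'(m, a) = F(m, a, g_B(m))
Fprime = lambda m, a: F(m, a, gB(m))   # = -0.5 * m + a

m, dt = 0.0, 0.01
for _ in range(4000):                  # Euler integration with a held at 1
    m += dt * Fprime(m, 1.0)
assert abs(m - 2.0) < 0.01             # closed loop settles where -0.5*m + 1 = 0
```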
Compare with Figure~\ref{fig:wiringdiagram}. This looks a little boring. It becomes more interesting when we start with several open systems, take their product and then plug (some of the) outputs into inputs. For example, suppose we start with two open systems \[ \begin{cases} M_1\times A\times B \xrightarrow{F_1} TM_1 \\ g_1\colon M_1 \to C \end{cases} \] and \[ \begin{cases} M_2\times C \xrightarrow{F_2} TM_2 \\ g_2 = (g_B,g_D)\colon M_2 \to B\times D \end{cases}. \] Here, again, all capital letters denote manifolds. Take their product; we get \[ \begin{cases} M_1\times A\times B \times M_2\times C \xrightarrow{(F_1,F_2)} TM_1\times TM_2 \\ (g_1,g_2)\colon M_1\times M_2 \to C\times B\times D \end{cases} \] Now plug the functions $g_B$ and $g_1$ into the corresponding inputs. We get a new system \[ \begin{cases} M_1\times M_2\times A \xrightarrow{F'} TM_1\times TM_2 \\ g'\colon M_1\times M_2 \to D \end{cases} \] where \[ F'(m_1, m_2, a):= \big(F_1(m_1, a, g_B(m_2)),\, F_2(m_2, g_1(m_1))\big) \quad\text{and}\quad g'(m_1,m_2):= g_D(m_2). \] Compare with Figure~\ref{fig:WD}. Making these kinds of operations on open systems precise for an arbitrary number of interacting systems is the point of our paper. \end{rem} By defining the appropriate morphisms, we can consider open dynamical systems as being objects in a category. We are not aware of this notion being defined previously in the literature, but it is convenient for our purposes. \begin{defn} \label{def:odscat} Suppose that $M_i,\inp{U}_i,\outp{U}_i\in\Ob\mathbf{Man}$ and $(M_i,\inp{U}_i,\outp{U}_i,f_i)$ is an open system for \mbox{$i\in\{1,2\}$}.
A \emph{morphism of open systems} \[\zeta\colon(M_1,\inp{U}_1,\outp{U}_1,f_1)\to (M_2,\inp{U}_2,\outp{U}_2,f_2)\] is a triple $(\zeta_M,\zeta_{\inp{U}},\zeta_{\outp{U}})$ of smooth maps $\zeta_M\colon M_1\to M_2$, $\zeta_{\inp{U}}\colon \inp{U}_1\to \inp{U}_2$, and $\zeta_{\outp{U}}\colon \outp{U}_1\to \outp{U}_2$, such that the following diagram commutes: \[\xymatrix{ M_1\times\inp{U}_1 \ar[r]^{f_1} \ar[d]_{\zeta_M\times\zeta_{\inp{U}}} &TM_1\times\outp{U}_1 \ar[d]^{T\zeta_M\times\zeta_{\outp{U}}}\\ M_2\times\inp{U}_2 \ar[r]_{f_2} & TM_2\times\outp{U}_2 } \] This defines the category $\mathbf{ODS}$ of open dynamical systems. We define the subcategory $\mathbf{ODS}_\mathbf{Lin}\subseteq\mathbf{ODS}$ by restricting our objects to linear open systems, as in Definition~\ref{def:opensystem}, and imposing that the three maps in $\zeta$ are linear. \end{defn} As in Remark~\ref{rem:thepoint}, we will often want to combine two or more interconnected open systems into one larger one. As we shall see in Section~\ref{sec:g}, this will involve taking a product of the smaller open systems. Before we define this formally, we first remind the reader that the tangent bundle functor $T$ is strong monoidal, i.e., it canonically preserves finite products: $$T(M_1\times M_2)\cong TM_1\times TM_2.$$ \begin{lem} \label{def:osprod} The category $\mathbf{ODS}$ of open systems has all finite products. That is, if $I$ is a finite set and $f_i=(M_i,\inp{U}_i,\outp{U}_i,f_i)\in\Ob\mathbf{ODS}$ is an open system for each $i\in I$, then their product is $$\prod_{i\in I}f_i=\left(\prod_{i\in I}M_i,\prod_{i\in I}\inp{U}_i,\prod_{i\in I}\outp{U}_i,\prod_{i\in I}f_i\right)$$ with the obvious projection maps. \end{lem} \section{The Operad of Wiring Diagrams} \label{sec:W} In this section, we define the symmetric monoidal category $(\mathbf{W},\oplus,0)$ of wiring diagrams.
We then use Definition~\ref{def:SMC to Opd} to define the wiring diagram operad $\Opd{\mathbf{W}}$, which formalizes our pictorial setting. We begin by formally defining the underlying category $\mathbf{W}$ and continue with some concrete examples to explicate this definition. \begin{defn}\label{def:W} The category $\mathbf{W}$ has objects \emph{boxes} and morphisms \emph{wiring diagrams}. A box $X$ is an ordered pair of $\mathbf{Man}$-typed finite sets (Definition~\ref{def:typed finite sets}), \[X=(\inp{X},\outp{X})\in\Ob\TFS{}\times\Ob\TFS{}.\] Let $\inp{X}=(A,\tau)$ and $\outp{X}=(A',\tau')$. Then we refer to elements $a\in A$ and $a'\in A'$ as \emph{input ports} and \textit{output ports}, respectively. We call $\tau(a)\in\Ob\mathbf{Man}$ the \emph{type} of port $a$, and similarly for $\tau'(a')$. A wiring diagram $\Phi\colon X\to Y$ in $\mathbf{W}$ is a triple $(X,Y,\varphi)$, where $\varphi$ is a typed bijection (see Definition~\ref{def:typed finite sets}) \begin{align}\label{dia:wd function} \varphi\colon\inp{X}+\outp{Y}\xrightarrow{\cong} \outp{X}+\inp{Y}, \end{align} satisfying the following condition: \begin{description} \item[no passing wires] $\varphi(\outp{Y})\cap\inp{Y}=\varnothing$, or equivalently $\varphi(\outp{Y})\subseteq\outp{X}$. \end{description} This condition allows us to decompose $\varphi$ into a pair $\varphi=(\inp{\varphi},\outp{\varphi})$: \begin{align}\label{dia:components of wd} \left\{ \begin{array}{l} \inp{\varphi}\colon \inp{X} \to \outp{X}+\inp{Y} \\ \outp{\varphi}\colon \outp{Y} \to \outp{X} \end{array} \right. \end{align} We often identify the wiring diagram $\Phi=(X,Y,\varphi)$ with the typed bijection $\varphi$, or equivalently its corresponding pair $(\inp{\varphi},\outp{\varphi})$. By a \emph{wire} in $\Phi$, we mean a pair $(a,b)$, where $a\in\inp{X}+\outp{Y}$, $b\in\outp{X}+\inp{Y}$, and $\varphi(a)=b$. In other words, a wire in $\Phi$ is a pair of ports connected by $\varphi$.
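Since boxes and wiring diagrams are finite data, the conditions above can be checked mechanically. The following Python sketch represents a typed finite set as a dictionary from port names to types; the port names are those of Example~\ref{ex:wiringdiagram} below, and assigning every port one common type is an illustrative assumption.

```python
# A box is a pair of Man-typed finite sets: dicts mapping port names to types.
# Port names follow Example ex:wiringdiagram; the single common type "R" is
# an illustrative assumption.
X_in, X_out = {"a": "R", "b": "R"}, {"c": "R", "d": "R"}
Y_in, Y_out = {"m": "R"}, {"n": "R"}

# A wiring diagram X -> Y is a typed bijection
#   phi : X_in + Y_out -> X_out + Y_in,
# modeled as a dict; merging dicts stands in for the disjoint union, which
# is safe here because the four port-name sets are pairwise disjoint.
phi = {"a": "m", "b": "d", "n": "c"}

def is_wiring_diagram(phi, X_in, X_out, Y_in, Y_out):
    dom, cod = {**X_in, **Y_out}, {**X_out, **Y_in}
    bijective = (sorted(phi) == sorted(dom) and
                 sorted(phi.values()) == sorted(cod))
    typed = bijective and all(dom[p] == cod[phi[p]] for p in phi)
    # "no passing wires": phi must send Y_out into X_out, never into Y_in.
    no_passing = all(phi[p] in X_out for p in Y_out)
    return bijective and typed and no_passing
```

A map sending an output port of $Y$ to an input port of $Y$ (a passing wire) is rejected by the final condition, even when it is a typed bijection.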
The \emph{identity} wiring diagram $\iota:X\to X$ is given by the identity morphism $\inp{X}+\outp{X}\to\inp{X}+\outp{X}$ in $\TFS{}$. Now suppose $\Phi=(X,Y,\varphi)$ and $\Psi=(Y,Z,\psi)$ are wiring diagrams. We define their \emph{composition} as $\Psi\circ\Phi=(X,Z,\omega)$, where $\omega=(\inp{\omega},\outp{\omega})$ is given by the pair of dashed arrows making the following diagrams commute. \begin{equation}\label{dia:composition diagrams} \xymatrixcolsep{3.5pc} \xymatrix{ \inp{X} \ar[dd]_{\inp{\varphi}} \ar@{-->}[r]^{\inp{\omega}} & \outp{X}+\inp{Z} \\ {} & \outp{X}+\outp{X}+\inp{Z} \ar[u]_{\nabla+\mathds{1} _{\inp{Z}}} \\ \outp{X}+\inp{Y} \ar[r]_-{\mathds{1} _{\outp{X}}+\inp{\psi}} & \outp{X}+\outp{Y}+\inp{Z} \ar[u]_{\mathds{1} _{\outp{X}}+\outp{\varphi}+\mathds{1} _{\inp{Z}}} } \mskip5mu \xymatrixcolsep{1.5pc} \xymatrix{ \outp{Z} \ar[rd]_{\outp{\psi}} \ar@{-->}[rr]^{\outp{\omega}} & &\outp{X}\\ &\outp{Y} \ar[ru]_{\outp{\varphi}}} \end{equation} Here $\nabla\colon \outp{X}+\outp{X}\to \outp{X}$ is the codiagonal map in $\TFS{}$. \end{defn} \begin{rem}\label{rem:different Cs} For any finite product category $\mathcal{C}$, we may define the category $\mathbf{W}_{\mathcal{C}}$ by replacing $\mathbf{Man}$ with $\mathcal{C}$, and $\TFS{}$ with $\TFS{\mathcal{C}}$, in Definition~\ref{def:W}. In particular, as in Remark~\ref{rem:TFSL}, we have the symmetric monoidal category $\mathbf{W}_{\mathbf{Lin}}$ of linearly typed wiring diagrams. \end{rem} What we are calling a box is nothing more than an interface; at this stage it has no semantics, e.g., in terms of differential equations. Each box can be given a pictorial representation, as in Example~\ref{ex:pictorial box} below. \begin{ex} \label{ex:pictorial box} As a convention, we depict a box $X=(\{a,b\},\{c\})$ with input ports connecting on the left and output ports connecting on the right, as in Figure~\ref{fig:box} below. 
When types are displayed, we label ports on the exterior of their box and their types adjacently on the interior of the box with a `:' symbol in between to designate typing. Reading types off of this figure, we see that the type of input port $a$ is the manifold $\mathbb R$, that of input port $b$ is the circle $S^1$, and that of output port $c$ is the torus $T^2$. \begin{figure}[!ht] \activetikz{ \path(0,0);\blackbox{(3.5,2.5)}{2}{1}{$X$}{1.2} \node at (-.25,1.8666) {\small $a:$}; \node at (.25,1.8666) {\small $\mathbb R$}; \node at (-.25,1.0333) {\small $b:$}; \node at (.25,1.0333) {\small $S^1$}; \node at (3.75,1.4333) {\small $:c$}; \node at (3.25,1.4666) {\small $T^2$}; } \caption{A box with two input ports, of types $\mathbb R$ and $S^1$, and one output port with type $T^2$.} \label{fig:box} \end{figure} \end{ex} A morphism in $\mathbf{W}$ is a wiring diagram $\Phi=(X,Y,\varphi)$, the idea being that a smaller box $X$ (the domain) is nested inside of a larger box $Y$ (the codomain). The ports of $X$ and $Y$ are then interconnected by wires, as specified by the typed bijection $\varphi$. We will now see an example of a wiring diagram, accompanied by a picture. \begin{ex}\label{ex:wiringdiagram} Reading off the wiring diagram $\Phi=(X,Y,\varphi)$ drawn below in Figure~\ref{fig:wiringdiagram}, we have the following data for boxes: \[\begin{matrix} \inp{X}=\{a,b\} & \outp{X}=\{c,d\} \\ \inp{Y}=\{m\} & \outp{Y}=\{n\}\end{matrix}\] Table~\ref{tab:wiringdiagram} makes $\varphi$ explicit via a list of its wires, i.e., pairs $(\gamma,\varphi(\gamma))$. 
\noindent\begin{minipage}{\linewidth} \[ \begin{array}{c||c|c|c} \rule[-4pt]{0pt}{16pt} \gamma\in\inp{X}+\outp{Y}& a & b & n \\\hline \rule[-4pt]{0pt}{16pt} \varphi(\gamma)\in\outp{X}+\inp{Y} & m & d & c \end{array} \] \smallskip \captionof{table}{} \label{tab:wiringdiagram} \end{minipage} \vspace{-5 mm} \begin{figure}[ht] \activetikz{ \path(0,0);\blackbox{(5,4)}{1}{1}{$Y$}{.5} \node at (0.2,2.2) {\small $m$}; \node at (4.85,2.2) {\small $n$}; \path (1,1);\blackbox{(3,2)}{2}{2}{\small$X$}{.25} \node at (0.875,2.533) {\small $a$}; \node at (0.875,1.866) {\small $b$}; \node at (4.2,2.533) {\small $c$}; \node at (4.2,1.866) {\small $d$}; \directarc{(.25,2)}{(.875,2.3333)} \directarc{(4.125,2.3333)}{(4.75,2)} \fancyarc{(.875,1.6666)}{(4.125,1.6666)}{20}{-40} } \caption{A wiring diagram $\Phi=(X,Y,\varphi)$.} \label{fig:wiringdiagram} \end{figure} \end{ex} \begin{rem} \label{rem:samestate} The condition that $\varphi$ be typed, as in Definition~\ref{def:typed finite sets}, ensures that if two ports are connected by a wire then the associated types are the same. In particular, in Example~\ref{ex:wiringdiagram} above, $(a,b,n)$ must be the same type tuple as $(m,d,c)$. \end{rem} Now that we have made wiring diagrams concrete and visual, we can do the same for their composition. \begin{ex}\label{ex:composition} In Figure~\ref{fig:compose}, we visualize the composition of two wiring diagrams $\Phi=(X,Y,\varphi)$ and $\Psi=(Y,Z,\psi)$ to form $\Psi\circ\Phi=(X,Z,\omega)$. Composition is depicted by drawing the wiring diagram for $\Psi$ and then, inside the $Y$ box, drawing in the wiring diagram for $\Phi$. Finally, to depict the composition $\Psi\circ\Phi$ as one single wiring diagram, one simply ``erases'' the $Y$ box, leaving the $X$ and $Z$ boxes interconnected among themselves. Figure~\ref{fig:compose} represents such a procedure by depicting the $Y$ box with a dashed border. It's important to note that the wires also connect, e.g.
if a wire in $\Psi$ connects a $Z$ port to some $Y$ port, and that $Y$ port attaches via a $\Phi$ wire to some $X$ port, then these wires ``link together'' to a total wire in $\Psi\circ\Phi$, connecting a $Z$ port with an $X$ port. Table~\ref{tab:compose} below traces the wires of $\Psi\circ\Phi$ through the $\inp{\omega}$ and $\outp{\omega}$ composition diagrams in (\ref{dia:composition diagrams}) on its left and right side, respectively. The left portion of the table starts with $\gamma\in\inp{X}$ and ends at $\inp{\omega}(\gamma)\in\outp{X}+\inp{Z}$, with intermediary steps of the composition denoted with superscripts $\gamma^n$. The right portion of the table starts with $\gamma\in\outp{Z}$, then goes through the intermediary $\gamma'\in\outp{Y}$, and finally reaches $\outp{\omega}(\gamma)\in\outp{X}$. We skip lines on the right portion to match the spacing on the left. \noindent\begin{minipage}{\linewidth} \[ \begin{array}{c||c|c|c||c||c} \rule[-4pt]{0pt}{16pt} \gamma\in\inp{X}& a & b & c & v & \gamma\in\outp{Z} \\\hline \rule[-4pt]{0pt}{16pt} \gamma^1\in\outp{X}+\inp{Y} & d & k & l & {} & {} \\\hline \rule[-4pt]{0pt}{16pt} \gamma^2\in\outp{X}+\outp{Y}+\inp{Z} & d & u & n & m & \gamma'\in\outp{Y} \\\hline \rule[-4pt]{0pt}{16pt} \gamma^3\in\outp{X}+\outp{X}+\inp{Z} & d & u & f & {} & {} \\\hline \rule[-4pt]{0pt}{16pt} \inp{\omega}(\gamma)\in\outp{X}+\inp{Z} & d & u & f & e & \outp{\omega}(\gamma)\in\outp{X} \end{array} \] \smallskip \captionof{table}{} \label{tab:compose} \end{minipage} \begin{figure}[ht] \activetikz{ \path (-1,-1); \blackbox{(7,6)}{1}{1}{$Z$}{.8} \node at (-0.8,2.2) {\small $u$}; \node at (5.8,2.2) {\small $v$}; \path(0,0);\dashbox{(5,4)}{2}{2}{$Y$}{.5} \node at (0.2,2.88) {\small $k$}; \node at (4.8,2.88) {\small $m$}; \node at (0.2,1.53) {\small $l$}; \node at (4.8,1.53) {\small $n$}; \path (1,1);\blackbox{(3,2)}{3}{3}{\small$X$}{.25} \node at (1.3,1.5) {\small $c$}; \node at (1.3,2) {\small $b$}; \node at (1.3,2.5) {\small $a$}; \node at
(3.7,1.5) {\small $f$}; \node at (3.7,2) {\small $e$}; \node at (3.7,2.5) {\small $d$}; \directarc{(-.6,2)}{(-.25,2.666)} \directarc{(5.25,2.666)}{(5.6,2)} \directarc{(0.25,1.333)}{(0.875,1.5)} \directarc{(4.125,1.5)}{(4.75,1.333)} \directarc{(0.25,2.666)}{(0.875,2)} \directarc{(4.125,2)}{(4.75,2.666)} \fancyarc{(-0.25,1.333)}{(5.25,1.333)}{25}{-60} \fancyarc{(0.875,2.5)}{(4.125,2.5)}{15}{28} } \caption{A wiring diagram composition $\Psi\circ\Phi=(X,Z,\omega)$ of $\Phi=(X,Y,\varphi)$ and $\Psi=(Y,Z,\psi)$, with dashed medium box $Y$.} \label{fig:compose} \end{figure} \end{ex} \begin{rem} \label{rem:pathology} The condition that $\varphi$ be both injective and surjective prohibits {\em exposed} ports and {\em split} ports, respectively, as depicted in Figure~\ref{fig:unsafe}{\bf a}. The {\em no passing wires} condition on $\varphi(\outp{Y})$ prohibits wires that go straight across the $Y$ box, as seen in the intermediate box of Figure~\ref{fig:unsafe}{\bf b}. \end{rem} \begin{figure}[!ht] \activetikz{ \node at (1.5,-.2){\bf a}; \path(0,0);\blackbox{(3,2.5)}{2}{1}{\small$Y$}{.5} \path(1,.3);\blackbox{(1,1)}{2}{2}{\small$X$}{.25} \fancyarc{(.875,.9667)}{(2.125,.9667)}{10}{18} \directarc{(2.125,.9667)}{(2.75,1.25)} \directarc{(.25,.8333)}{(.875,.6333)} \node at (5.5,-.2){\bf b}; \path(4,0);\blackbox{(3,2.5)}{0}{0}{\small$Z$}{.5} \path(4.75,.2);\dashbox{(1.5,1.5)}{1}{1}{\small$Y$}{.25} \path(5,.3); \blackbox{(1,.5)}{0}{0}{\small$X$}{.1} \directarc{(4.85,.95)}{(6.1,.95)} \fancyarc{(4.65,.95)}{(6.3,.95)}{15}{30} } \caption{{\bf (a)} A faux-wiring diagram violating the bijectivity condition in Definition~\ref{def:W}. \\ {\bf (b)} A composition of diagrams in which a loop emerges because the inner diagram has a (prohibited) passing wire.} \label{fig:unsafe} \end{figure} Now that we have formally defined and concretely explicated the category $\mathbf{W}$, we will make it into a monoidal category by defining its tensor product.
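The composition rule of Definition~\ref{def:W} is easy to compute with directly: each wire is followed through the middle box $Y$ until it lands in $\outp{X}+\inp{Z}$ (respectively $\outp{X}$). The following Python sketch does this for the port names of Example~\ref{ex:composition}, with all typing data suppressed.

```python
# Wiring diagrams as dicts, with the port names of Example ex:composition.
# Each dict combines the pair (phi_in, phi_out), resp. (psi_in, psi_out):
# keys in X_in (resp. Y_in) are inputs of the small box, keys in Y_out
# (resp. Z_out) are outputs of the large box.
Y_in, Y_out = {"k", "l"}, {"m", "n"}
phi = {"a": "d", "b": "k", "c": "l", "m": "e", "n": "f"}
psi = {"k": "u", "l": "n", "v": "m"}

def compose(phi, psi, Y_in, Y_out):
    """Compute omega for Psi o Phi by tracing wires through the middle box Y."""
    omega = {}
    for p in phi:
        if p in Y_out:                  # phi_out entries are consumed below
            continue
        q = phi[p]                      # p is an input port of X
        if q in Y_in:
            q = psi[q]                  # follow psi_in ...
            if q in Y_out:
                q = phi[q]              # ... and phi_out, if the wire returns
        omega[p] = q
    for z in psi:                       # remaining keys: output ports of Z
        if z not in Y_in:
            omega[z] = phi[psi[z]]      # omega_out = phi_out o psi_out
    return omega
```

Running this on the data above reproduces the wires of Table~\ref{tab:compose}: $a\mapsto d$, $b\mapsto u$, $c\mapsto f$, and $v\mapsto e$.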
\begin{defn}\label{def:mon} Let $X_1,X_2,Y_1,Y_2\in\Ob\mathbf{W}$ be boxes and $\Phi_1\colon X_1\to Y_1$ and $\Phi_2\colon X_2\to Y_2$ be wiring diagrams. The \emph{monoidal product} $\oplus$ is given by \[X_1\oplus X_2:=\left(\inp{X}_1+\inp{X}_2\;,\;\outp{X}_1+\outp{X}_2\;\right), \hspace{10 mm} \Phi_1\oplus\Phi_2:=\Phi_1+\Phi_2.\] The \emph{closed box} $0=(\varnothing,\varnothing)$ is the monoidal unit. \end{defn} \begin{rem} Once we add semantics in Section~\ref{sec:g}, closed boxes will correspond to \emph{autonomous systems}, which do not interact with any outside environment (see Remark~\ref{ex:dynamical system}). \end{rem} We now make this monoidal product explicit with an example. \begin{ex} Consider boxes $X=(\{x_1,x_2\},\{x_3,x_4\})$ and $Y=(\{y_1\},\{y_2,y_3\})$ depicted below. \activetikz{ \path(-1.5,0);\blackbox{(1.5,1)}{2}{2}{\small$X$}{.25} \node at (-1.2,.666) {\tiny$x_1$}; \node at (-1.2,.333) {\tiny$x_2$}; \node at (-.3,.666) {\tiny$x_3$}; \node at (-.3,.333) {\tiny$x_4$}; \path(1.5,0);\blackbox{(1.5,1)}{1}{2}{\small$Y$}{.25} \node at (1.8,.5){\tiny$y_1$}; \node at (2.7,.666){\tiny$y_2$}; \node at (2.7,.333){\tiny$y_3$}; } We depict their tensor $X\oplus Y=(\{x_1,x_2,y_1\},\{x_3,x_4,y_2,y_3\})$ by stacking boxes. \activetikz{ \path(0,0);\blackbox{(2,2)}{3}{4}{\small$X\oplus Y$}{.25} \node at (.3,1.5) {\tiny$x_1$}; \node at (.3,1) {\tiny$x_2$}; \node at (.3,.5) {\tiny$y_1$}; \node at (1.7,1.6) {\tiny$x_3$}; \node at (1.7,1.2){\tiny$x_4$}; \node at (1.7,.8){\tiny$y_2$}; \node at (1.7,.4){\tiny$y_3$}; } Similarly, consider the following wiring diagrams (with ports left unlabelled).
\activetikz{ \node at (1.25,2.2){\small $\Phi_1 \colon X_1\to Y_1$}; \path(0,0);\blackbox{(2.5,2)}{1}{1}{\small$Y_1$}{.5} \path(.5,.2);\blackbox{(1.5,1)}{2}{2}{\small $X_1$}{.25} \directarc{(.25,1)}{(.375,0.8666)} \directarc{(2.125,0.8666)}{(2.25,1)} \fancyarc{(.375,0.5333)}{(2.125,0.5333)}{14}{28.5} \node at (6.25,2.2){\small $\Phi_2 \colon X_2\to Y_2$}; \path(5,0);\blackbox{(2.5,2)}{1}{1}{\small $Y_2$}{.5} \path(5.5,.5);\blackbox{(1.5,1)}{1}{1}{\small $X_2$}{.25} \directarc{(5.25,1)}{(5.375,1)} \directarc{(7.125,1)}{(7.25,1)} } We can depict their monoidal product via stacking. \activetikz{ \node at (1.5,3.25){\small $\Phi_1\oplus\Phi_2 \colon X_1\oplus X_2\to Y_1\oplus Y_2$}; \path(0,0);\blackbox{(3,3)}{2}{2}{\small$Y_1\oplus Y_2$}{.5} \path(.8,.5);\blackbox{(1.4,1.6)}{3}{3}{\scriptsize$X_1\oplus X_2$}{.25} \directarc{(.25,2)}{(.675,1.7)} \directarc{(2.325,1.7)}{(2.75,2)} \fancyarc{(.675,1.3)}{(2.325,1.3)}{15}{30} \directarc{(.25,1)}{(.675,.9)} \directarc{(2.325,.9)}{(2.75,1)} } \end{ex} We now prove that the above data characterizing $(\mathbf{W},\oplus,0)$ indeed constitutes a symmetric monoidal category, at which point we can, as advertised, invoke Definition~\ref{def:SMC to Opd} to define the operad $\Opd{\mathbf{W}}$. \begin{prop} \label{prop:W is SMC} The category $\mathbf{W}$ in Definition~\ref{def:W} and the monoidal product $\oplus$ with unit $0$ in Definition~\ref{def:mon} form a symmetric monoidal category $(\mathbf{W},\oplus,0)$. \end{prop} \begin{proof} We begin by establishing that $\mathbf{W}$ is indeed a category. We first show that our class of wiring diagrams is closed under composition. Let $\Phi=(X,Y,\varphi)$, $\Psi=(Y,Z,\psi)$, and $\Psi\circ\Phi=(X,Z,\omega)$. To show that $\omega$ is a typed bijection, we replace the pair of maps $(\inp{\varphi},\outp{\varphi})$ with a pair of bijections $(\widetilde{\inp{\varphi}},\widetilde{\outp{\varphi}})$ as follows.
Let $\expt{X}{\varphi}\subseteq\outp{X}$ (for \emph{exports}) denote the image of $\outp{\varphi}$, and $\loc{X}{\varphi}$ (for \emph{local ports}) be its complement. Then we can identify $\varphi$ with the following pair of typed bijections \begin{displaymath} \left\{ \begin{array}{lr} \widetilde{\inp{\varphi}}\colon \inp{X} \xrightarrow{\cong} \loc{X}{\varphi}+\inp{Y} \\ \widetilde{\outp{\varphi}}\colon \outp{Y} \xrightarrow{\cong} \expt{X}{\varphi} \end{array} \right. \end{displaymath} Similarly, identify $\psi$ with $(\widetilde{\inp{\psi}},\widetilde{\outp{\psi}})$. We can then rewrite the diagram defining $\omega$ in (\ref{dia:composition diagrams}) as one single commutative diagram of typed finite sets. \[ \xymatrixcolsep{4pc} \xymatrix{ \inp{X}+\outp{Z} \ar[d]_{\widetilde{\inp{\varphi}}+\widetilde{\outp{\psi}}} \ar@{-->}[r]^{\omega} &\outp{X}+\inp{Z} \\ \loc{X}{\varphi}+\inp{Y}+\expt{Y}{\psi} \ar[d]_{\mathds{1} _{\loc{X}{\varphi}}+\widetilde{\inp{\psi}}+\mathds{1} _{\expt{Y}{\psi}}} &\loc{X}{\varphi}+\expt{X}{\varphi}+\inp{Z} \ar[u]_{\cong} \\ \loc{X}{\varphi}+\loc{Y}{\psi}+\inp{Z}+\expt{Y}{\psi} \ar[r]_-{\cong} & \loc{X}{\varphi}+\outp{Y}+\inp{Z} \ar[u]_{\mathds{1} _{\loc{X}{\varphi}}+\widetilde{\outp{\varphi}}+\mathds{1} _{\inp{Z}}} }\] As a composition of typed bijections, $\omega$ is also a typed bijection. The following computation proves that $\omega$ has no passing wires: \[\omega(\outp{Z})=\varphi\big(\psi(\outp{Z})\big)\subseteq\varphi(\outp{Y})\subseteq \outp{X}.\] Therefore $\mathbf{W}$ is closed under wiring diagram composition. To show that $\mathbf{W}$ is a category, it remains to prove that composition of wiring diagrams satisfies the unit and associativity axioms. The former is straightforward and will be omitted. We now establish the latter. Consider the wiring diagrams $\Theta=(V,X,\theta),\Phi=(X,Y,\varphi),\Psi=(Y,Z,\psi)$; and let $(\Psi\circ\Phi)\circ\Theta=(V,Z,\kappa)$ and $\Psi\circ(\Phi\circ\Theta)= (V,Z,\lambda)$. 
We readily see that $\outp{\kappa}=\outp{\lambda}$ by the associativity of composition in $\TFS{}$. Proving that $\inp{\kappa}=\inp{\lambda}$ is equivalent to establishing the commutativity of the following diagram: \begin{equation} \label{eqn:associativity} \xymatrix@C=31pt@R=1.8pc{ {} &\outp{V}+\inp{Z} &{} \\ {} &\outp{V}+\outp{V}+\inp{Z} \ar[u]^{\nabla+\mathds{1} } &{} \\ \outp{V}+\outp{Y}+\inp{Z} \ar[r]^-{\mathds{1} +\outp{\varphi}+\mathds{1} } &\outp{V}+\outp{X}+\inp{Z} \ar[u]^{\mathds{1} + \outp{\theta}+\mathds{1} } &\outp{V}+\outp{X}+\outp{X}+\inp{Z} \ar[l]_-{\mathds{1} +\nabla+\mathds{1} } \\ \outp{V}+\inp{Y} \ar[u]^{\mathds{1} +\inp{\psi}} &{} &{} \\ \outp{V}+\outp{V}+\inp{Y} \ar[u]^{\nabla+\mathds{1} } &\outp{V}+\outp{X}+\inp{Y} \ar[l]^-{\mathds{1} + \outp{\theta}+\mathds{1} } \ar[r]_-{\mathds{1} +\mathds{1} +\inp{\psi}} &\outp{V}+\outp{X}+\outp{Y}+\inp{Z} \ar[uu]_{\mathds{1} +\mathds{1} +\outp{\varphi}+\mathds{1} } \\ {} &\outp{V}+\inp{X} \ar[u]^{\mathds{1} + \inp{\varphi}} &{} \\ {} &\inp{V} \ar[u]^{\inp{\theta}}&{} } \end{equation} This diagram commutes in any category with coproducts, as follows from the associativity and naturality of the codiagonal map. We present a formal argument of this fact below in the language of string diagrams (See \cite{JSTensor}). As in \cite{Selinger}, we let squares with blackened corners denote generic morphisms. We let triangles denote codiagonal maps. See Figure~\ref{string} below. 
\begin{figure}[ht]\label{fig:string} \centering \begin{tikzpicture} [scale=.8] \node[mysquare] at (2,0) (Theta11) {$\theta^\text{out}$}; \node[fold] at (4.5,.35) (V1) {}; \node[mysquare] at (3.5,-1.15) (Psi) {$\psi^\text{in}$}; \node[mysquare] at (6,-.7) (Phi) {$\varphi^\text{out}$}; \node[mysquare] at (8.5,-.7) (Theta12) {$\theta^\text{out}$}; \node[fold] at (10.7,-.13) (V2) {}; \draw[onearrow={0.4}{$X^\text{out}$}] (.13,0) -- (Theta11.west); \draw[onearrow={0.5}{$V^\text{out}$}] (Theta11.east) -- ([yshift=-10pt]V1.west); \draw[onearrow={0.15}{$V^\text{out}$}] (.13,.75) -- ([yshift=11pt]V1.west); \draw[onearrow={0.5}{$Y^\text{out}$}] ([yshift=12pt]Psi.east) -- ([yshift=-.5pt]Phi.west); \draw[onearrow={0.96}{$Z^\text{in}$}] ([yshift=-11pt]Psi.east) -- (13,-1.5); \draw[onearrow={0.5}{$X^\text{out}$}] (Phi.east) -- (Theta12.west); \draw[onearrow={0.2}{$Y^\text{in}$}] (.13,-1.15) -- (Psi.west); \draw[onearrow={0.5}{$V^\text{out}$}] (V1.east) -- ([yshift=12pt]V2.west); \draw[onearrow={0.5}{$V^\text{out}$}] ([yshift=6pt]Theta12.east) -- ([yshift=-10pt]V2.west); \draw[onearrow={0.7}{$V^\text{out}$}] (V2.east) -- (13,-.13); \end{tikzpicture} \begin{tikzpicture} [scale=.8] \node[mysquare] at (6.5,0.5) (Theta11) {$\theta^\text{out}$}; \node[mysquare] at (2,-1.15) (Psi) {$\psi^\text{in}$}; \node[mysquare] at (4.2,-.7) (Phi) {$\varphi^\text{out}$}; \node[mysquare] at (6.5,-.7) (Theta12) {$\theta^\text{out}$}; \node[fold] at (8.7,.8) (V1) {}; \node[fold] at (11.1,.15) (V2) {}; \draw[onearrow={0.5}{$V^\text{out}$}] ([yshift=-1.7pt]Theta11.east) -- ([yshift=-10pt]V1.west); \draw[onearrow={0.5}{$Y^\text{out}$}] ([yshift=12pt]Psi.east) -- ([yshift=-.5pt]Phi.west); \draw[onearrow={0.96}{$Z^\text{in}$}] ([yshift=-11pt]Psi.east) -- (13,-1.5); \draw[onearrow={0.5}{$X^\text{out}$}] (Phi.east) -- (Theta12.west); \draw[onearrow={.35}{$Y^\text{in}$}] (0,-1.15) -- (Psi.west); \draw[onearrow={0.088}{$X^\text{out}$}] (0,.5) -- (Theta11.west); \draw[onearrow={0.5}{$V^\text{out}$}] 
([yshift=12pt]Theta12.east) -- ([yshift=-12pt]V2.west); \draw[onearrow={0.5}{$V^\text{out}$},rounded corners=20pt] (V1.east) -- ([yshift=12pt]V2.west); \draw[onearrow={0.065}{$V^\text{out}$}] (0,1.2) -- ([yshift=12pt]V1.west); \draw[onearrow={0.5}{$V^\text{out}$}] (V2.east) -- (13,.15); \end{tikzpicture} \begin{tikzpicture} [scale=.8] \node[mysquare] at (6.5,0.5) (Theta11) {$\theta^\text{out}$}; \node[mysquare] at (2,-1.15) (Psi) {$\psi^\text{in}$}; \node[mysquare] at (4.2,-.7) (Phi) {$\varphi^\text{out}$}; \node[mysquare] at (6.5,-.7) (Theta12) {$\theta^\text{out}$}; \node[fold] at (8.7,-.13) (V1) {}; \node[fold] at (11.1,.3) (V2) {}; \draw[onearrow={0.5}{$V^\text{out}$}] ([yshift=-6.5pt]Theta11.east) -- ([yshift=11pt]V1.west); \draw[onearrow={0.5}{$Y^\text{out}$}] ([yshift=12pt]Psi.east) -- ([yshift=-.5pt]Phi.west); \draw[onearrow={0.96}{$Z^\text{in}$}] ([yshift=-11pt]Psi.east) -- (13,-1.5); \draw[onearrow={0.5}{$X^\text{out}$}] (Phi.east) -- (Theta12.west); \draw[onearrow={.35}{$Y^\text{in}$}] (0,-1.15) -- (Psi.west); \draw[onearrow={0.088}{$X^\text{out}$}] (0,.5) -- (Theta11.west); \draw[onearrow={0.5}{$V^\text{out}$}] ([yshift=6pt]Theta12.east) -- ([yshift=-10pt]V1.west); \draw[onearrow={0.5}{$V^\text{out}$}] (V1.east) -- ([yshift=-12pt]V2.west); \draw[onearrow={0.05}{$V^\text{out}$},rounded corners=20pt] (0,1.2) -- (8,1.2) -- ([yshift=12pt]V2.west); \draw[onearrow={0.5}{$V^\text{out}$}] (V2.east) -- (13,.3); \end{tikzpicture} \begin{tikzpicture} [scale=.8] \node[mysquare] at (9.2,-0.3) (Theta) {$\theta^\text{out}$}; \node[mysquare] at (2,-1.15) (Psi) {$\psi^\text{in}$}; \node[mysquare] at (4.2,-.7) (Phi) {$\varphi^\text{out}$}; \node[fold] at (6.5,-.3) (V1) {}; \node[fold] at (11.1,.3) (V2) {}; \draw[onearrow={0.5}{$Y^\text{out}$}] ([yshift=12pt]Psi.east) -- ([yshift=-.6pt]Phi.west); \draw[onearrow={0.96}{$Z^\text{in}$}] ([yshift=-11pt]Psi.east) -- (13,-1.5); \draw[onearrow={0.55}{$X^\text{out}$}] (Phi.east) -- ([yshift=-11.pt]V1.west); 
\draw[onearrow={.35}{$Y^\text{in}$}] (0,-1.15) -- (Psi.west); \draw[onearrow={0.088}{$X^\text{out}$}] (0,.1) -- ([yshift=12pt]V1.west); \draw[onearrow={0.5}{$X^\text{out}$}] (V1.east) -- (Theta.west); \draw[onearrow={0.5}{$V^\text{out}$}] ([yshift=6pt]Theta.east) -- ([yshift=-10pt]V2.west); \draw[onearrow={0.05}{$V^\text{out}$}] (0,.75) -- ([yshift=12pt]V2.west); \draw[onearrow={0.5}{$V^\text{out}$}] (V2.east) -- (13,.3); \end{tikzpicture} \caption{String diagram proof of commutativity of \eqref{eqn:associativity}} \label{string} \end{figure} The first step of the proof follows from the topological nature of string diagrams, which mirror the axioms of monoidal categories. The second step invokes the associativity of codiagonal maps. The third and final step follows from the naturality of codiagonal maps, i.e., the commutativity of the following diagram. \[\xymatrix{\outp{V}+\outp{V} \ar[r]^-{\nabla} \ar[d]_{\outp{\theta}+\outp{\theta}} & \outp{V} \ar[d]^{\outp{\theta}} \\ \outp{X}+\outp{X} \ar[r]^-{\nabla} & \outp{X} }\] Now that we have shown that $\mathbf{W}$ is a category, we show that $(\oplus,0)$ is a monoidal structure on $\mathbf{W}$. Let $X,X',X''\in\Ob\mathbf{W}$ be boxes. We readily observe the following canonical isomorphisms. \begin{align*} &X\oplus 0\cong X\cong 0\oplus X &\emph{(unity)}\\ &(X\oplus X')\oplus X''\cong X\oplus (X'\oplus X'')&\emph{(associativity)}\\ &X\oplus X'\cong X'\oplus X &\emph{(commutativity)} \end{align*} Hence the monoidal product $\oplus$ is well behaved on objects. It is similarly easy to show that $\oplus$ is functorial, so we omit the details. This completes the proof that $(\mathbf{W},\oplus,0)$ is a symmetric monoidal category. \end{proof} Having established that $(\mathbf{W},\oplus,0)$ is an SMC, we can now speak about the operad $\Opd{\mathbf{W}}$ of wiring diagrams. In particular, we can draw operadic pictures, such as the one in our motivating example in Figure~\ref{fig:pipebrine}, to which we now return.
\begin{ex}\label{ex:wiring explained} Figure~\ref{fig:WD} depicts an $\Opd{\mathbf{W}}$ wiring diagram $\Phi\colon X_1,X_2\to Y$, which we may formally denote by the tuple $\Phi=(X_1,X_2;Y;\varphi)$. Reading directly from Figure~\ref{fig:WD}, we have the boxes: \begin{align*} X_1&=\big(\{\inp{X}_{1a},\inp{X}_{1b}\},\{\outp{X}_{1a}\}\big) \\ X_2&=\big(\{\inp{X}_{2a},\inp{X}_{2b}\} ,\{\outp{X}_{2a},\outp{X}_{2b}\}\big) \\ Y&=\big(\{\inp{Y}_a,\inp{Y}_b\},\{\outp{Y}_a\}\big) \end{align*} The wiring diagram $\Phi$ is visualized by nesting the domain boxes $X_1,X_2$ within the codomain box $Y$, and drawing the wires prescribed by $\varphi$, as recorded below in Table~\ref{tab:explicit}. \noindent\begin{minipage}{\linewidth} \[ \begin{array}{c||c|c|c|c|c} \rule[-4pt]{0pt}{16pt} w\in\inp{X}+\outp{Y}&\inp{X}_{1a}&\inp{X}_{1b}&\inp{X}_{2a}&\inp{X}_{2b}&\outp{Y}_{a} \\\hline \rule[-4pt]{0pt}{16pt} \varphi(w)\in\outp{X}+\inp{Y}&\inp{Y}_{b}&\outp{X}_{2b}&\inp{Y}_{a}&\outp{X}_{1a}&\outp{X}_{2a} \end{array} \] \smallskip \captionof{table}{} \label{tab:explicit} \end{minipage} \vspace{-3 mm} \begin{figure}[ht] \activetikz{ \path(0,0); \blackbox{(10,5)}{2}{1}{$Y$}{.7} \node at (.4,3.6) {\small $\inp{Y}_{a}$}; \node at (.4,1.9) {\small $\inp{Y}_{b}$}; \node at (9.6,2.75) {\small $\outp{Y}_a$}; \path(2,1.5); \blackbox{(2,2)}{2}{1}{$X_1$}{.5} \node at (1.7,3.06) {\small $\inp{X}_{1a}$}; \node at (1.7,2.4) {\small $\inp{X}_{1b}$}; \node at (4.38,2.7) {\small $\outp{X}_{1a}$}; \path(6,1.5); \blackbox{(2,2)}{2}{2}{$X_2$}{.5}; \node at (5.7,3.07) {\small $\inp{X}_{2a}$}; \node at (5.7,2.4) {\small $\inp{X}_{2b}$}; \node at (8.37,3.07) {\small $\outp{X}_{2a}$}; \node at (8.37,2.37) {\small $\outp{X}_{2b}$}; \directarc{(4.25,2.5)}{(5.75,2.16667)} \node at (5,2) {}; \directarc{(0.35,1.6667)}{(1.75,2.83333)} \node at (.7,1.4) {}; \node at (.7,1.2) {}; \fancyarc{(0.35,3.3333)}{(5.75,2.83333)}{-40}{25} \node at (.6,3.1) {}; \node at (.6,2.9) {}; \directarc{(8.25,2.8333)}{(9.65,2.5)} \node at 
(9.5,2.3) {}; \node at (9.5,2.1) {}; \fancyarc{(1.75,2.16667)}{(8.25,2.16667)}{20}{-45} } \caption{A wiring diagram $\Phi\colon X_1,X_2\to Y$ in $\Opd{\mathbf{W}}$.} \label{fig:WD} \end{figure} To reconceptualize $\Phi\colon X_1,X_2\to Y$ as a wiring diagram in $\mathbf{W}$, we simply consider the tensor $\Phi\colon X_1\oplus X_2\to Y$, as given in Figure~\ref{fig:reconcept} below. This demonstrates the fact that operadic pictures are easier to read and hence are more illuminating. \begin{figure}[ht] \activetikz{ \path(0,0); \blackbox{(7,8)}{2}{1}{$Y$}{.7} \node at (0.35,5.666) {\small $\inp{Y_a}$}; \node at (0.35,3) {\small$\inp{Y_b}$}; \node at (6.6,4.233) {\small$\outp{Y_a}$}; \path(2,1.5); \blackbox{(3,5)}{4}{3}{\small$X_1\oplus X_2$}{.5} \node at (1.7,5.8) {\small$\inp{X_{1a}}$}; \node at (1.7,4.8) {\small$\inp{X_{1b}}$}; \node at (1.7,3.8) {\small$\inp{X_{2a}}$}; \node at (1.7,2.8) {\small$\inp{X_{2b}}$}; \node at (5.4,5.55) {\small$\outp{X_{1a}}$}; \node at (5.4,4.3) {\small$\outp{X_{2a}}$}; \node at (5.4,3.05) {\small$\outp{X_{2b}}$}; \directarc{(0.35,5.333)}{(1.75,3.5)} \directarc{(0.35,2.666)}{(1.75,5.5)} \directarc{(5.25,4)}{(6.65,4)} \fancyarc{(1.75,4.5)}{(5.25,2.75)}{40}{-80} \fancyarc{(1.75,2.5)}{(5.25,5.25)}{35}{95} } \caption{A wiring diagram $\Phi\colon X_1\oplus X_2\to Y$ in $\mathbf{W}$ corresponding to the $\Opd{\mathbf{W}}$ wiring diagram $\Phi:X_1,X_2\to Y$ of Figure~\ref{fig:WD}.} \label{fig:reconcept} \end{figure} \end{ex} The following remark explains that our pictures of wiring diagrams are not completely ad hoc---they are depictions of 1-dimensional oriented manifolds with boundary. The boxes in our diagrams simply tie together the positively and negatively oriented components of an individual oriented 0-manifold. \begin{rem}\label{rem:cobordism} For any set $S$, let $\operatorname{1--\bf Cob}/S$ denote the symmetric monoidal category of oriented 0-manifolds over $S$ and the 1-dimensional cobordisms between them. 
We call its objects \emph{oriented $S$-typed 0-manifolds}. Recall that $\mathbf{W}=\mathbf{W}_{\mathbf{Man}}$ is our category of $\mathbf{Man}$-typed wiring diagrams; let ${\mathbf M}:=\Ob\mathbf{Man}$ denote the set of manifolds (see Remark~\ref{rem:default}). There is a faithful, essentially surjective, strong monoidal functor \[\mathbf{W}\to \operatorname{1--\bf Cob}/{\mathbf M},\] sending a box $(\inp{X},\outp{X})$ to the oriented ${\mathbf M}$-typed 0-manifold $\inp{X}+\outp{X}$ where $\inp{X}$ is oriented positively and $\outp{X}$ negatively. Under this functor, a wiring diagram $\Phi=(X,Y,\varphi)$ is sent to a 1-dimensional cobordism that has no closed loops. A connected component of such a cobordism can be identified with either its left or right endpoint, which correspond to the domain or codomain of the bijection \mbox{$\varphi\colon\inp{X}+\outp{Y}\To{\cong}\outp{X}+\inp{Y}$}. See \cite{SpivakSchultzRupel}. In fact, with the {\em no passing wires} condition on morphisms (cobordisms) $X\to Y$ (see Definition \ref{def:W}), the subcategory $\mathbf{W}\subseteq\operatorname{1--\bf Cob}/{\mathbf M}$ is the left class of an orthogonal factorization system. See \cite{Abadi}. \end{rem} Let $\Phi=(X,Y,\varphi)$ be a wiring diagram. Applying the dependent product functor (see Definition~\ref{def:depprod}) to $\varphi$, we obtain a diffeomorphism of manifolds \begin{equation}\label{eqn:prodwd}\overline{\varphi}\colon \overline{\outp{X}}\times\overline{\inp{Y}}\to \overline{\inp{X}}\times\overline{\outp{Y}}.\end{equation} Equivalently, if $\varphi$ is represented by the pair $(\inp{\varphi},\outp{\varphi})$, as in Definition~\ref{def:W}, we can express $\ol{\varphi}$ in terms of its pair of component maps: \begin{displaymath} \left\{ \begin{array}{lr} \overline{\inp{\varphi}}\colon \overline{\outp{X}}\times\overline{\inp{Y}}\to\overline{\inp{X}} \\ \overline{\outp{\varphi}}\colon \overline{\outp{X}}\to\overline{\outp{Y}} \end{array} \right. 
\end{displaymath} It will also be useful to apply the dependent product functor to the commutative diagrams in (\ref{dia:composition diagrams}), which define wiring diagram composition. Note that, by the contravariance of the dependent product, the codiagonal $\nabla\colon \outp{X}+\outp{X}\to \outp{X}$ gets sent to the diagonal map $\Delta\colon \overline{\outp{X}}\to\overline{\outp{X}}\times\overline{\outp{X}}$. Thus we have the following commutative diagrams: \begin{align}\label{dia:dep prod of wd} \xymatrixcolsep{3.5pc} \xymatrix{ \overline{\outp{X}}\times\overline{\inp{Z}} \ar[r]^{\overline{\inp{\omega}}} \ar[d]_{\Delta\times\mathds{1} } &\overline{\inp{X}} \\ \overline{\outp{X}}\times\overline{\outp{X}}\times\overline{\inp{Z}} \ar[d]_{\mathds{1} \times\overline{\outp{\varphi}}\times\mathds{1} } &{} \\ \overline{\outp{X}}\times\overline{\outp{Y}}\times\overline{\inp{Z}} \ar[r]_-{\mathds{1} \times\overline{\inp{\psi}}} &\overline{\outp{X}}\times\overline{\inp{Y}} \ar[uu]_{\overline{\inp{\varphi}}} } \mskip15mu \xymatrixcolsep{1.5pc} \xymatrix{ \overline{\outp{X}} \ar[rd]_{\overline{\outp{\varphi}}} \ar[rr]^{\overline{\outp{\omega}}} & &\overline{\outp{Z}}\\ &\overline{\outp{Y}} \ar[ru]_{\overline{\outp{\psi}}}} \end{align} \section{The Algebra of Open Systems} \label{sec:g} In this section we define an algebra $\mathcal{G}\colon(\mathbf{W},\oplus,0)\to(\mathbf{Set},\times,\star)$ (see Definition~\ref{def:algebra}) of general open dynamical systems. A $\mathbf{W}$-algebra can be thought of as a choice of semantics for the syntax of $\mathbf{W}$, i.e., a set of possible meanings for boxes and wiring diagrams. As in Definition~\ref{def:SMC to Opd}, we may use this to construct the corresponding operad algebra $\Opd{\mathcal{G} }:\Opd{\mathbf{W}}\to\mathbf{Sets}$. Before we define $\mathcal{G}$, we revisit Example~\ref{ex:main} for inspiration. 
\begin{ex} \label{ex:promise} As the textbook exercise \cite[Problem 7.21]{BD} prompts, let's begin by writing down the system of equations that governs the amount of salt $Q_i$ within the tanks $X_i$. This can be done by using dimensional analysis for each port of $X_i$ to find the rate of salt being carried in ounces per minute, and then equating the rate $\dot{Q}_i$ to the sum of these rates over the $\inp{X}_i$ ports minus the sum over the $\outp{X}_i$ ports. \begin{align*} \dot{Q}_1\frac{\text{oz}}{\text{min}}&= -\left(\frac{Q_1 \text{oz}}{30 \text{gal}}\cdot\frac{3 \text{gal}}{\text{min}}\right) +\left(\frac{Q_2\text{oz}}{20\text{gal}}\cdot\frac{1.5\text{gal}}{\text{min}}\right) +\left(\frac{1\text{oz}}{\text{gal}}\cdot\frac{1.5\text{gal}}{\text{min}}\right) \\ \dot{Q}_2\frac{\text{oz}}{\text{min}}&= -\left(\frac{Q_2 \text{oz}}{20 \text{gal}}\cdot\frac{(1.5+2.5) \text{gal}}{\text{min}}\right) +\left(\frac{Q_1\text{oz}}{30\text{gal}}\cdot\frac{3\text{gal}}{\text{min}}\right) +\left(\frac{3\text{oz}}{\text{gal}}\cdot\frac{1\text{gal}}{\text{min}}\right) \end{align*} Dropping the physical units, we are left with the following system of ODEs: \begin{equation}\label{eqn:naive} \left\{ \begin{array}{lr} \dot{Q}_1=-.1Q_1+.075Q_2+1.5 \\ \dot{Q}_2=.1Q_1-.2Q_2+3 \end{array} \right. \end{equation} \end{ex} The derivations of the equations in (\ref{eqn:naive}) involved a hidden step in which the connection pattern in Figure~\ref{fig:pipebrine}, or equivalently Figure~\ref{fig:WD}, was used. Our wiring diagram approach explains this step and makes it explicit. Each box in a wiring diagram should only ``know'' about its own inputs and outputs, and not how they are connected to others. That is, we can only define a system on $X_i$ by expressing $\dot{Q}_i$ just in terms of $Q_i$ and $\inp{X}_i$---this is precisely the data of an open system (see Definition~\ref{def:opensystem}). We now define our algebra $\mathcal{G}$, which assigns a set of open systems to a box.
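As a brief numerical aside, the system (\ref{eqn:naive}) can be integrated directly. The sketch below (in Python, with an assumed fresh-water initial condition $Q_1=Q_2=0$; it is an illustration, not part of the formalism) uses a fixed-step classical Runge--Kutta scheme and confirms that the salt contents relax to the equilibrium $(Q_1,Q_2)=(42,36)$ obtained by setting $\dot{Q}_1=\dot{Q}_2=0$.

```python
# Integrate the brine-tank system (eqn:naive) with a fixed-step
# classical RK4 scheme.  Units (oz, gal, min) are dropped, as in the text.

def rhs(q):
    q1, q2 = q
    return (-0.1 * q1 + 0.075 * q2 + 1.5,
            0.1 * q1 - 0.2 * q2 + 3.0)

def rk4_step(q, h):
    k1 = rhs(q)
    k2 = rhs([q[i] + 0.5 * h * k1[i] for i in range(2)])
    k3 = rhs([q[i] + 0.5 * h * k2[i] for i in range(2)])
    k4 = rhs([q[i] + h * k3[i] for i in range(2)])
    return [q[i] + h / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i])
            for i in range(2)]

q = [0.0, 0.0]            # fresh water in both tanks (an assumed initial state)
for _ in range(20000):    # integrate to t = 200 min with step h = 0.01
    q = rk4_step(q, 0.01)

print(q)                  # approaches the equilibrium (42, 36)
```

By hand, adding the two equations of (\ref{eqn:naive}) gives $-.125\,Q_2+4.5=0$, so $Q_2=36$, and then $.1\,Q_1=.2\cdot 36-3$ gives $Q_1=42$, in agreement with the integration.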
Given a wiring diagram and an open system on its domain box, $\mathcal{G}$ also gives a functorial procedure for assigning an open system to the codomain box. We will then use this new machinery to further revisit Example~\ref{ex:promise} in Example~\ref{ex:as promised}. \begin{defn}\label{def:general algebra} We define $\mathcal{G}:(\mathbf{W},\oplus,0)\to(\mathbf{Set},\times,\star)$ as follows. Let $X\in\Ob\mathbf{W}$. The \emph{set of open systems on $X$}, denoted $\mathcal{G}(X)$, is defined as \[\mathcal{G} (X)=\{(S,f)\; |\;S\in\Ob\TFS{},(\overline{S},\overline{\inp{X}},\overline{\outp{X}},f)\in\Ob\mathbf{ODS} \}.\] We call $S$ the set of \emph{state variables} and its dependent product $\overline{S}$ the \emph{state space}. Let $\Phi=(X,Y,\varphi)$ be a wiring diagram. Then $\mathcal{G} (\Phi)\colon \mathcal{G} (X)\to\mathcal{G} (Y)$ is given by $(S,f)\mapsto (\mathcal{G} (\Phi)S,\mathcal{G} (\Phi)f)$, where $\mathcal{G} (\Phi)S=S$ and $g=\mathcal{G} (\Phi)f\colon \overline{S}\times\overline{\inp{Y}}\to T\overline{S}\times\overline{\outp{Y}}$ is defined by the dashed arrows $(\inp{g},\outp{g})$ (see Definition~\ref{def:opensystem}) that make the diagrams below commute: \begin{equation} \xymatrixcolsep{3.5pc} \xymatrix{ \overline{S}\times\overline{\inp{Y}} \ar[d]_{\Delta\times\mathds{1} _{\overline{\inp{Y}}}} \ar@{-->}[r]^-{\inp{g}} &T\overline{S} \\ \overline{S}\times\overline{S}\times\overline{\inp{Y}} \ar[d]_{\mathds{1} _{\overline{S}}\times \outp{f}\times\mathds{1} _{\overline{\inp{Y}}}} & {} \\ \overline{S}\times\overline{\outp{X}}\times\overline{\inp{Y}} \ar[r]_-{\mathds{1} _{\overline{S}}\times\overline{\inp{\varphi}}} &\overline{S}\times\overline{\inp{X}} \ar[uu]_{\inp{f}} } \hspace{7 mm} \xymatrixcolsep{2.5pc} \xymatrix{ \overline{S} \ar[rd]_{\outp{f}} \ar@{-->}[rr]^{\outp{g}} & &\overline{\outp{Y}}\\ &\overline{\outp{X}} \ar[ru]_{\overline{\outp{\varphi}}}} \label{eqn:G of wd} \end{equation} One may note a strong resemblance between the diagrams in (\ref{eqn:G of
wd}) and those in (\ref{dia:composition diagrams}). We give $\mathcal{G} $ a lax monoidal structure: for any pair $X,X'\in\mathbf{W}$ we have a coherence map $\mu_{X,X'}:\mathcal{G} (X)\times\mathcal{G} (X')\to\mathcal{G} (X\oplus X')$ given by \[\big((S,f),(S',f')\big)\mapsto (S+S',f\times f'),\] where $f\times f'$ is as in Lemma~\ref{def:osprod}. \label{def:mu} \end{defn} \begin{rem} Recall from Remark \ref{rem:default} that $\mathbf{Man}$ is small, so the collection $\mathcal{G}(X)$ of open systems on $X$ is indeed a set. \end{rem} \begin{rem} One may also encode an initial condition in $\mathcal{G} $ by using $\mathbf{Man}_*$ instead of $\mathbf{Man}$ in Remark~\ref{rem:default} as the default choice of finite product category, where $\mathbf{Man}_*$ is the category of pointed smooth manifolds and base point preserving smooth maps. The base point represents the initialization of the state variables. \end{rem} We now establish that $\mathcal{G}$ is indeed an algebra. \begin{prop}\label{prop:G is W-alg} The pair $(\mathcal{G},\mu)$ of Definition~\ref{def:general algebra} is a lax monoidal functor, i.e., $\mathcal{G}$ is a $\mathbf{W}$-algebra. \end{prop} \begin{proof} Let $\Phi=(X,Y,\varphi)$ and $\Psi=(Y,Z,\psi)$ be wiring diagrams in $\mathbf{W}$. To show that $\mathcal{G}$ is a functor, we must show that $\mathcal{G} (\Psi\circ\Phi)=\mathcal{G} (\Psi)\circ\mathcal{G} (\Phi)$. Immediately we have $\mathcal{G} (\Psi\circ\Phi)S=S=\mathcal{G} (\Psi)(\mathcal{G} (\Phi)S)$. Now let \mbox{$h:=\mathcal{G} (\Psi\circ\Phi)f$} and $k:=\mathcal{G} (\Psi)(\mathcal{G} (\Phi)f)$. It suffices to show $h=k$, or equivalently $(\inp{h},\outp{h})=(\inp{k},\outp{k})$. One readily sees that $\outp{h}=\outp{k}$. We use (\ref{dia:dep prod of wd}) and (\ref{eqn:G of wd}) to produce the following diagram; showing it commutes is equivalent to proving that $\inp{h}=\inp{k}$.
\begin{equation}\label{eqn:algebracomp} \xymatrixcolsep{3pc} \xymatrix{ {} &\overline{S}\times\overline{\inp{Z}}\ar[d]^{\Delta\times\mathds{1} } &{} \\ {} &\overline{S}\times\overline{S}\times\overline{\inp{Z}} \ar[d]^{\mathds{1} \times \outp{f}\times\mathds{1} } &{} \\ \overline{S}\times\overline{\outp{Y}}\times\overline{\inp{Z}} \ar[d]_{\mathds{1} \times\overline{\inp{\psi}}} &\overline{S}\times\overline{\outp{X}}\times\overline{\inp{Z}} \ar[l]_-{\mathds{1} \times\overline{\outp{\varphi}}\times\mathds{1} } \ar[r]^-{\mathds{1} \times\Delta\times\mathds{1} } &\overline{S}\times\overline{\outp{X}}\times\overline{\outp{X}}\times\overline{\inp{Z}} \ar[dd]^{\mathds{1} \times\mathds{1} \times\overline{\outp{\varphi}}\times\mathds{1} } \\ \overline{S}\times\overline{\inp{Y}} \ar[d]_{\Delta\times\mathds{1} } &{} &{} \\ \overline{S}\times\overline{S}\times\overline{\inp{Y}} \ar[r]_-{\mathds{1} \times \outp{f}\times\mathds{1} } &\overline{S}\times\overline{\outp{X}}\times\overline{\inp{Y}} \ar[d]^{\mathds{1} \times\overline{\inp{\varphi}}} &\overline{S}\times\overline{\outp{X}}\times\overline{\outp{Y}}\times\overline{\inp{Z}} \ar[l]^-{\mathds{1} \times\mathds{1} \times\overline{\inp{\psi}}}\\ {} &\overline{S}\times\overline{\inp{X}}\ar[d]^{\inp{f}} &{} \\ {} &T\overline{S} &{} } \end{equation} The commutativity of this diagram, which is dual to the one for associativity in (\ref{eqn:associativity}), holds in an arbitrary category with products. Although the middle square fails to commute by itself, the composite of the first two maps equalizes it; that is, the two composite morphisms \mbox{$\overline{S}\times\overline{\inp{Z}}\to\overline{S}\times \overline{\outp{X}}\times \overline{\inp{Y}}$} agree. Since we proved the analogous result via string diagrams in the proof of Proposition \ref{prop:W is SMC}, we show it concretely using elements this time. Let $(s,z)\in\ol{S}\times\ol{\inp{Z}}$ be an arbitrary element. 
Composing six morphisms $\ol{S}\times\ol{\inp{Z}}\longrightarrow\ol{S}\times\ol{\outp{X}}\times\ol{\inp{Y}}$ through the left of the diagram gives the same answer as composing through the right; namely, $$\Big(s,\outp{f}(s),\inp{\psi}\big(\outp{\varphi}\circ\outp{f}(s),z\big)\Big)\in\ol{S}\times\ol{\outp{X}}\times\ol{\inp{Y}}.$$ Since the diagram commutes, we have shown that $\mathcal{G}$ is a functor. To prove that the pair $(\mathcal{G},\mu)$ constitutes a lax monoidal functor $\mathbf{W}\to\mathbf{Set}$, i.e., a $\mathbf{W}$-algebra, we must establish coherence. Since $\mu$ simply consists of a coproduct and a product, this is straightforward and will be omitted. \end{proof} As established in Definition~\ref{def:SMC to Opd}, the coherence map $\mu$ allows us to define the operad algebra $\Opd{\mathcal{G}}$ from $\mathcal{G}$. This finally provides the formal setting to consider open dynamical systems over operadic wiring diagrams, such as our motivating one in Figure~\ref{fig:pipebrine}. We note that, in contrast to the trivial equality $\mathcal{G}(\Phi)S=S$ found in Definition~\ref{def:general algebra}, in the operadic setting we have \[\Opd{\mathcal{G}}(\Phi)(S_1,\ldots,S_n)=\amalg_{i=1}^n S_i.\] This simply means that the set of state variables of the larger box $Y$ is the disjoint union of the state variables of its constituent boxes $X_i$. Now that we have the tools to revisit Example~\ref{ex:promise}, we do so in the following section, but first we will define the subalgebra $\mathcal{L}$ to which it belongs---that of linear open systems. \section{The Subalgebra of Linear Open Systems} \label{sec:l} In this section, we define the algebra $\mathcal{L}\colon\mathbf{W}_{\mathbf{Lin}}\to\mathbf{Set}$, which encodes linear open systems. Here $\mathbf{W}_{\mathbf{Lin}}$ is the category of $\mathbf{Lin}$-typed wiring diagrams, as in Remark \ref{rem:different Cs}. 
Of course, one can use Definition~\ref{def:SMC to Opd} to construct an operad algebra $\Opd{\mathcal{L} }:\Opd{\mathbf{W}_{\mathbf{Lin}}}\to\mathbf{Sets}$. Before we give a formal definition for $\mathcal{L}$, we first provide an alternative description for linear open systems and wiring diagrams in $\mathbf{W}_{\mathbf{Lin}}$. The category $\mathbf{Lin}$ enjoys special properties---in particular it is an additive category, as seen by the fact that there is an equivalence of categories $\mathbf{Lin}\cong \mathbf{Vect}_\mathbb R$. Specifically, finite products and finite coproducts are isomorphic. Hence a morphism \mbox{$f:A_1\times A_2\to B_1\times B_2$} in $\mathbf{Lin}$ canonically decomposes into a matrix equation \[\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} \mapsto \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} f^{1,1} & f^{1,2} \\ f^{2,1} & f^{2,2} \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \end{bmatrix}\] This matrix is naturally equivalent to the whole map $f$ by universal properties. We use these to rewrite our relevant $\mathbf{Lin}$ maps in Definitions~\ref{def:rewrite} and \ref{def:rewrite2} below. \begin{defn} \label{def:rewrite} Suppose that $(M,\inp{U},\outp{U},f)$ is a linear open system and hence $f:M\times\inp{U}\to TM\times \outp{U}$. Then $f$ decomposes into the four linear maps: \begin{align*} f^{M,M}&\colon M\to TM & f^{M,U}&\colon \inp{U}\to TM \\ f^{U,M}&\colon M\to\outp{U} & f^{U,U}&\colon\inp{U}\to\outp{U} \end{align*} By Definition~\ref{def:opensystem}, we know $f^{U,U}=0$. 
If we let $(m,\inp{u},\outp{u})\in M\times\inp{U}\times\outp{U}$, these equations can be organized into a single matrix equation \begin{equation}\label{eqn:matrixform} \begin{bmatrix}\dot{m} \\ \outp{u} \end{bmatrix}=\begin{bmatrix} f^{M,M} & f^{M,U} \\ f^{U,M} & 0 \end{bmatrix}\begin{bmatrix} m \\ \inp{u} \end{bmatrix} \end{equation} \end{defn} We will exploit this form in Definition~\ref{def:linear algebra} to define how $\mathcal{L}$ acts on wiring diagrams in terms of a single matrix equation, in place of the seemingly complicated commutative diagrams in (\ref{eqn:G of wd}). To do so, we also recast wiring diagrams in matrix format in Definition~\ref{def:rewrite2} below. \begin{defn} \label{def:rewrite2} Suppose $\Phi=(X,Y,\varphi)$ is a wiring diagram in $\mathbf{W}_\mathbf{Lin}$. Recalling (\ref{eqn:prodwd}), we apply the dependent product functor to $\varphi$: \[\ol{\varphi}\colon\ol{\outp{X}}\times\ol{\inp{Y}}\to\ol{\inp{X}}\times\ol{\outp{Y}}\] Since this is a morphism in $\mathbf{Lin}$, it can be decomposed into four linear maps, with the first superscript indexing the target block and the second the source block, as in Definition~\ref{def:rewrite}: \begin{align*} \overline{\varphi}^{X,X}&\colon \overline{\outp{X}}\to\overline{\inp{X}}&\overline{\varphi}^{X,Y}&\colon \overline{\inp{Y}}\to\overline{\inp{X}}\\ \overline{\varphi}^{Y,X}&\colon \overline{\outp{X}}\to\overline{\outp{Y}}&\overline{\varphi}^{Y,Y}&\colon \overline{\inp{Y}}\to\overline{\outp{Y}} \end{align*} By virtue of the no passing wires condition in Definition~\ref{def:W}, we must have \mbox{$\overline{\varphi}^{Y,Y}=0$}. We can then, as in (\ref{eqn:matrixform}), organize this information in a single matrix: \[ \overline{\varphi}= \begin{bmatrix} \;\overline{\varphi}^{X,X} &\overline{\varphi}^{X,Y}\; \\ \overline{\varphi}^{Y,X} & 0 \end{bmatrix} \] \end{defn} \begin{rem} The bijectivity condition in Definition~\ref{def:W} implies that $\overline{\varphi}$ is a permutation matrix. \end{rem} We now employ these matrix characterizations to define the algebra $\mathcal{L}$ of linear open systems.
\begin{defn} \label{def:linear algebra} We define the algebra $\mathcal{L}\colon(\mathbf{W}_\mathbf{Lin},\oplus,0)\to (\mathbf{Set},\times,\star)$ as follows. Let $X\in\Ob\mathbf{W}_{\mathbf{Lin}}$. Then the \emph{set of linear open systems $\mathcal{L}(X)$ on $X$} is defined as \[\mathcal{L}(X):=\big\{(S,f)\;|\;S\in\Ob\TFS{\mathbf{Lin}}, (\overline{S},\overline{\inp{X}},\overline{\outp{X}},f)\in\Ob\mathbf{ODS} _\mathbf{Lin}\big\}.\] Let $\Phi=(X,Y,\varphi)$ be a wiring diagram. Then, as in Definition~\ref{def:general algebra}, we define $\mathcal{L} (\Phi)(S,f):=(S,g)$. We use the format of Definitions~\ref{def:rewrite} and \ref{def:rewrite2} to define $g$: \begin{equation} \label{eqn:glin} \begin{split} g= \begin{bmatrix} g^{S,S} & g^{S,Y} \\ g^{Y,S} & g^{Y,Y} \end{bmatrix} & =\begin{bmatrix} f^{S,X} & 0 \\ 0 & I \end{bmatrix} \overline{\varphi} \begin{bmatrix} f^{X,S} & 0 \\ 0 & I \end{bmatrix}+\begin{bmatrix} f^{S,S} & 0 \\ 0 & 0 \end{bmatrix} \\ & =\begin{bmatrix} f^{S,X} & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix}\ol{\varphi}^{X,X}&\ol{\varphi}^{X,Y}\\\ol{\varphi}^{Y,X}&\ol{\varphi}^{Y,Y}\end{bmatrix} \begin{bmatrix} f^{X,S} & 0 \\ 0 & I \end{bmatrix}+\begin{bmatrix} f^{S,S} & 0 \\ 0 & 0 \end{bmatrix} \\ & =\begin{bmatrix} f^{S,X}\overline{\varphi}^{X,X}f^{X,S}+f^{S,S} & f^{S,X}\overline{\varphi}^{X,Y} \\ \overline{\varphi}^{Y,X}f^{X,S} & 0 \end{bmatrix} \end{split} \end{equation} This is really just a linear version of the commutative diagrams in (\ref{eqn:G of wd}). For example, the equation $g^{S,S}=f^{S,X}\overline{\varphi}^{X,X}f^{X,S}+f^{S,S}$ can be read off the diagram for $\inp{g}$ in (\ref{eqn:G of wd}), using the additivity of $\mathbf{Lin}$. Finally, the coherence map $(\mu_{\mathbf{Lin}})_{X,X'}:\mathcal{L} (X)\times\mathcal{L} (X')\to\mathcal{L} (X\oplus X')$ is given, as in Definition~\ref{def:mu}, by $\big((S,f),(S',f')\big)\mapsto (S+S',f\times f')$. \end{defn} We now establish that this constitutes an algebra.
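Before doing so, we note that the block formula (\ref{eqn:glin}) lends itself to a direct machine check. The pure Python sketch below (the sample system is hypothetical: two state variables and one input and one output port on $X$, all typed in $\mathbb{R}$) verifies the formula on the identity wiring diagram, whose matrix is $\left[\begin{smallmatrix}0&I\\I&0\end{smallmatrix}\right]$; functoriality demands $g=f$ in this case.

```python
# A machine check of the block formula (eqn:glin), with matrices as
# lists of rows.  For the identity wiring diagram, whose matrix is
# [[0, I], [I, 0]], the formula must return the original system: g = f.
# The sample system below is hypothetical, not taken from the text.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def matadd(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

# f = [[fSS, fSX], [fXS, 0]], assembled as one matrix: rows (S', out-port),
# columns (S, in-port), with two state variables.
f = [[-0.1, 0.075, 1.0],
     [0.1, -0.2, 1.0],
     [0.1, 0.0, 0.0]]

phi   = [[0.0, 1.0], [1.0, 0.0]]              # identity wiring diagram
left  = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]  # [fSX 0; 0 I]
right = [[0.1, 0.0, 0.0], [0.0, 0.0, 1.0]]    # [fXS 0; 0 I]
pad   = [[-0.1, 0.075, 0.0],
         [0.1, -0.2, 0.0],
         [0.0, 0.0, 0.0]]                     # [fSS 0; 0 0]

g = matadd(matmul(matmul(left, phi), right), pad)
print(g == f)   # True: the identity wiring diagram acts trivially
```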
\begin{prop} The pair $(\mathcal{L},\mu_\mathbf{Lin})$ of Definition~\ref{def:linear algebra} is a lax monoidal functor, i.e., a $\mathbf{W}_\mathbf{Lin}$-algebra. \end{prop} \begin{proof} Since coherence is identical to that in Proposition~\ref{prop:G is W-alg}, it will suffice to show functoriality. Let $\Phi=(X,Y,\varphi)$ and $\Psi=(Y,Z,\psi)$ be wiring diagrams with composition $\Psi\circ\Phi=(X,Z,\omega)$. We now rewrite $\overline{\omega}$ using a matrix equation in terms of $\overline{\varphi}$ and $\overline{\psi}$ by recasting (\ref{dia:composition diagrams}) in matrix form below. \begin{equation} \label{eqn:omegamatrix} \begin{split} \overline{\omega}=\begin{bmatrix} \overline{\omega}^{X,X} & \overline{\omega}^{X,Z} \\ \overline{\omega}^{Z,X} & \overline{\omega}^{Z,Z} \end{bmatrix} & = \begin{bmatrix} \overline{\varphi}^{X,Y} & 0 \\ 0 & I \end{bmatrix}\overline{\psi}\begin{bmatrix} \overline{\varphi}^{Y,X} & 0 \\ 0 & I \end{bmatrix}+\begin{bmatrix} \overline{\varphi}^{X,X} & 0 \\ 0 & 0 \end{bmatrix} \\ & =\begin{bmatrix} \overline{\varphi}^{X,Y}\overline{\psi}^{Y,Y}\overline{\varphi}^{Y,X}+\overline{\varphi}^{X,X} & \overline{\varphi}^{X,Y}\overline{\psi}^{Y,Z} \\ \overline{\psi}^{Z,Y}\overline{\varphi}^{Y,X} & 0 \end{bmatrix} \end{split} \end{equation} We now prove that $\mathcal{L} (\Psi\circ\Phi)=\mathcal{L} (\Psi)\circ\mathcal{L} (\Phi)$. We immediately have $\mathcal{L} (\Psi\circ\Phi)S=S=\mathcal{L} (\Psi)(\mathcal{L} (\Phi)S)$. Let $h:=\mathcal{L} (\Psi\circ\Phi)f$ and \mbox{$k:=\mathcal{L} (\Psi)(\mathcal{L} (\Phi)f)$}. We must show $h=k$. Let $g:=\mathcal{L} (\Phi)f$.
It is then straightforward matrix arithmetic to see that \begin{equation} \label{eqn:matrix} \begin{split} k=\mathcal{L} (\Psi)g &=\begin{bmatrix} g^{S,Y} & 0 \\ 0 & I \end{bmatrix}\overline{\psi}\begin{bmatrix} g^{Y,S} & 0 \\ 0 & I \end{bmatrix} + \begin{bmatrix} g^{S,S} & 0 \\ 0 & 0 \end{bmatrix} \\ & =\begin{bmatrix}f^{S,X}(\overline{\varphi}^{X,Y}\overline{\psi}^{Y,Y}\overline{\varphi}^{Y,X}+\overline{\varphi}^{X,X})f^{X,S}+f^{S,S} & f^{S,X}\overline{\varphi}^{X,Y}\overline{\psi}^{Y,Z} \\ \overline{\psi}^{Z,Y}\overline{\varphi}^{Y,X}f^{X,S} & 0 \end{bmatrix} \\ &=\begin{bmatrix} f^{S,X} & 0 \\ 0 & I \end{bmatrix}\overline{\omega}\begin{bmatrix} f^{X,S} & 0 \\ 0 & I \end{bmatrix} + \begin{bmatrix} f^{S,S} & 0 \\ 0 & 0 \end{bmatrix} =\mathcal{L} (\Psi\circ\Phi)f=h \end{split} \end{equation} Therefore, the pair $(\mathcal{L},\mu_\mathbf{Lin})$ constitutes a lax monoidal functor $\mathbf{W}_{\mathbf{Lin}}\to\mathbf{Set}$, i.e., a $\mathbf{W}_{\mathbf{Lin}}$-algebra. \end{proof} \begin{rem} Although we've been referring to $\mathcal{L}$ as a subalgebra of $\mathcal{G}$, this is technically not the case since they have different source categories. The following diagram illustrates precisely the relationship between the $\mathbf{W}_{\mathbf{Lin}}$-algebra $\mathcal{L}$, defined above, and the $\mathbf{W}$-algebra $\mathcal{G}$, defined in Section \ref{sec:g}. \begin{equation} \label{eqn:final} \xymatrix@C=16pt@R=30pt{ \mathbf{W}_{\mathbf{Lin}}\ar@{^{(}->}[rr]^{\mathbf{W}_i} \ar[dr]_{\mathcal{L} }&\ar@{}[d]|(.4){\overset{\textstyle\epsilon}{\Longrightarrow}}&\mathbf{W}\ar[dl]^{\mathcal{G} }\\ &\mathbf{Set} } \end{equation} Here, the natural inclusion $\mathbf{W}_i\colon\mathbf{W}_{\mathbf{Lin}}\inj\mathbf{W}$ corresponds to $i\colon\mathbf{Lin}\hookrightarrow\mathbf{Man}$, and we have a natural transformation $\epsilon:\mathcal{L} \to\mathcal{G} \circ i$. 
Hence for each \mbox{$X\in\Ob\mathbf{W}_{\mathbf{Lin}}$}, we have a function $\epsilon_X:\mathcal{L} (X)\to \mathcal{G} (i(X))=\mathcal{G} (X)$ that sends the linear open system $(S,f)\in\mathcal{L} (X)$ to the open system \mbox{$(\TFS{i}(S),i(f))=(S,f)\in\mathcal{G} (X)$}. \end{rem} As promised, we now reformulate Example~\ref{ex:main} in terms of our language. \begin{ex} \label{ex:as promised} For the reader's convenience, we reproduce Figure~\ref{fig:pipebrine} and Table~\ref{tab:explicit}. \begin{figure}[ht] \activetikz{ \path(0,0); \blackbox{(10,5)}{2}{1}{$Y$}{.7} \node at (.4,3.6) {\small $\inp{Y}_{a}$}; \node at (.4,1.9) {\small $\inp{Y}_{b}$}; \node at (9.6,2.75) {\small $\outp{Y}_a$}; \path(2,1.5); \blackbox{(2,2)}{2}{1}{$X_1$}{.5} \node at (3,2.6) {\tiny $Q_1(t)$ oz salt}; \node at (3,2.3) {\tiny 30 gal water}; \node at (1.7,3.06) {\small $\inp{X}_{1a}$}; \node at (1.7,2.4) {\small $\inp{X}_{1b}$}; \node at (4.38,2.7) {\small $\outp{X}_{1a}$}; \path(6,1.5); \blackbox{(2,2)}{2}{2}{$X_2$}{.5} \node at (7,2.6) {\tiny $Q_2(t)$ oz salt}; \node at (7,2.3) {\tiny 20 gal water}; \node at (5.7,3.07) {\small $\inp{X}_{2a}$}; \node at (5.7,2.4) {\small $\inp{X}_{2b}$}; \node at (8.37,3.07) {\small $\outp{X}_{2a}$}; \node at (8.37,2.37) {\small $\outp{X}_{2b}$}; \directarc{(4.25,2.5)}{(5.75,2.16667)} \node at (5,2) {\tiny 3 gal/min}; \directarc{(0.35,1.6667)}{(1.75,2.83333)} \node at (.7,1.4) {\tiny 1.5 gal/min}; \node at (.7,1.2) {\tiny 1 oz/gal}; \fancyarc{(0.35,3.3333)}{(5.75,2.83333)}{-40}{25} \node at (.6,3.1) {\tiny 1 gal/min}; \node at (.6,2.9) {\tiny 3 oz/gal}; \directarc{(8.25,2.8333)}{(9.65,2.5)} \node at (9.5,2.3) {\tiny 2.5}; \node at (9.5,2.1) {\tiny gal/min}; \fancyarc{(1.75,2.16667)}{(8.25,2.16667)}{20}{-45} \node at (5,.3) {\tiny 1.5 gal/min}; } \caption{A dynamical system from Boyce and DiPrima interpreted over a wiring diagram $\Phi=(X_1,X_2;Y;\varphi)$ in $\Opd{\mathbf{W}}$.} \end{figure} \noindent\begin{minipage}{\linewidth} \[ 
\begin{array}{c||c|c|c|c|c} \rule[-4pt]{0pt}{16pt} w\in\inp{X}+\outp{Y}&\inp{X}_{1a}&\inp{X}_{1b}&\inp{X}_{2a}&\inp{X}_{2b}&\outp{Y}_{a} \\\hline \rule[-4pt]{0pt}{16pt} \varphi(w)\in\outp{X}+\inp{Y}&\inp{Y}_{b}&\outp{X}_{2b}&\inp{Y}_{a}&\outp{X}_{1a}&\outp{X}_{2a} \end{array} \] \smallskip \captionof{table}{} \end{minipage} We can invoke the yoga of Definition~\ref{def:rewrite2} to write $\overline{\varphi}$ in matrix form below: \begin{equation} \label{eqn:phimatrix} \begin{bmatrix} \;\overline{\inp{X_{1a}}}\; \\ \overline{\inp{X_{1b}}} \\ \overline{\inp{X_{2a}}} \\ \overline{\inp{X_{2b}}} \\ \overline{\outp{Y_a}} \end{bmatrix} = \begin{bmatrix} 0 &0 &0 &0 &I \\ 0 &0 &I &0 &0 \\ 0 &0 &0 &I &0 \\ I &0 &0 &0 &0 \\ 0 &I &0 &0 &0 \end{bmatrix} \begin{bmatrix} \;\overline{\outp{X_{1a}}}\; \\ \overline{\outp{X_{2a}}} \\ \overline{\outp{X_{2b}}} \\ \overline{\inp{Y_a}} \\ \overline{\inp{Y_b}} \end{bmatrix} \end{equation} Each row of this equation records one column of the table above. One can think of $\overline{\varphi}$ as a block permutation matrix consisting of identity and zero matrix blocks. An identity matrix in block entry $(i,j)$ represents the fact that the port whose type corresponds to row $i$ and the one whose type corresponds to column $j$ get linked by $\Phi$. In general, the dimension of each $I$ is equal to the dimension of the corresponding vector space, and hence the formula in~(\ref{eqn:phimatrix}) holds independent of the typing. In the specific example of this system, however, all of these ports are typed in $\mathbb R$, and so we have $I=1$ in~(\ref{eqn:phimatrix}). As promised in Example~\ref{ex:promise}, we now write the open systems for the $X_i$ in Figure~\ref{fig:pipebrine} as elements of $\mathcal{L} (X_i)$. The linear open systems below in (\ref{eqn:tanks}) represent $f_1$ and $f_2$, respectively.
\begin{equation} \label{eqn:tanks} \left[ \begin{array}{c} \dot{Q}_1 \\ \outp{X_{1a}} \end{array} \right] = \begin{bmatrix} -.1 & 1 & 1 \\ .1 & 0 & 0 \end{bmatrix} \left[ \begin{array}{c} Q_1 \\ \inp{X_{1a}} \\ \inp{X_{1b}} \end{array} \right], \left[ \begin{array}{c} \dot{Q}_2 \\ \outp{X_{2a}} \\ \outp{X_{2b}} \end{array} \right] = \begin{bmatrix} -.2 & 1 & 1 \\ .125 & 0 & 0 \\ .075 & 0 & 0\end{bmatrix} \left[ \begin{array}{c} Q_2 \\ \inp{X_{2a}} \\ \inp{X_{2b}} \end{array} \right] \end{equation} Note the proportion of zeros and ones in the $f$-matrices of (\ref{eqn:tanks})---this is perhaps why making these details explicit was an afterthought in the derivation of (\ref{eqn:naive}). Because we may have arbitrary nonconstant coefficients, our formalism can capture more intricate systems. We then use (\ref{eqn:phimatrix}) to establish that $\inp{X}_{1b}=\outp{X}_{2b}$ and $\inp{X}_{2b}=\outp{X}_{1a}$, together with the external input values $\inp{X}_{1a}=\inp{Y}_{b}=1.5$ and $\inp{X}_{2a}=\inp{Y}_{a}=3$. This allows us to recover the equations in (\ref{eqn:naive}): \begin{displaymath} \left\{ \begin{array}{lr} \dot{Q}_1=-.1Q_1+\inp{X_{1a}}+\inp{X_{1b}}=-.1Q_1+1.5+\outp{X_{2b}}=-.1Q_1+.075Q_2+1.5 \\ \dot{Q}_2=-.2Q_2+\inp{X_{2a}}+\inp{X_{2b}}=-.2Q_2+3+\outp{X_{1a}}=-.2Q_2+.1Q_1+3 \end{array} \right.
\end{displaymath} The coherence map in Definition~\ref{def:linear algebra} gives us the combined tank system: \[(Q,f):=\mu_\mathbf{Lin}((\{Q_1\},f_1),(\{Q_2\},f_2))=(\{Q_1,Q_2\},f_1\times f_2)\in\mathcal{L}(X).\] This system can then be written out as the matrix equation below: \begin{equation}\label{eqn:combinedsystem}\begin{bmatrix}\dot{Q}_1 \\ \dot{Q}_2 \\ \outp{X_{1a}} \\ \outp{X_{2a}} \\ \outp{X_{2b}}\end{bmatrix}=\begin{bmatrix} -.1 & 0 & 1 & 1 & 0 & 0 \\ 0 & -.2 & 0 & 0 & 1 & 1 \\ .1 & 0 & 0 & 0 & 0 & 0\\ 0 & .125 & 0 & 0 & 0 & 0 \\ 0 & .075 & 0 & 0 & 0 & 0 \end{bmatrix}\begin{bmatrix}Q_1 \\ Q_2 \\ \inp{X_{1a}} \\ \inp{X_{1b}} \\ \inp{X_{2a}} \\ \inp{X_{2b}}\end{bmatrix}\end{equation} Finally, we can apply formula (\ref{eqn:glin}) to (\ref{eqn:combinedsystem}) above to express the open system $(Q,g)=\mathcal{L}(\Phi)f\in\mathcal{L}(Y)$ for the outer box $Y$ as a matrix: \[ \left[ \begin{array}{c} \dot{Q}_1 \\ \dot{Q}_2 \\ \outp{Y} \end{array} \right] = \begin{bmatrix} -.1 & .075 & 0 & 1 \\ .1 & -.2 & 1 & 0 \\ 0 & .125 & 0 & 0 \end{bmatrix} \left[ \begin{array}{c} Q_1 \\ Q_2 \\ \inp{Y_a} \\ \inp{Y_b} \end{array} \right] \] \end{ex} \medskip \bibliographystyle{annotate}
\section{Introduction} Inspiralling comparable mass compact binaries are the most plausible sources of gravitational radiation for the operational, planned and proposed laser interferometric GW detectors. GW data analysts, analyzing noisy data from the interferometers, require accurate and efficiently computable temporally evolving GW polarizations, $h_{+} (t)$ and $h_{\times}(t)$, the so-called GW search templates. It is expected that weak GW signals, buried in the noisy interferometric data, should be extracted by employing the technique of `matched filtering'. This technique is optimal if and only if one can construct search templates that accurately model the expected GW signals from astrophysical sources, especially in their phase evolution. Until the late stages of binary inspiral, GW signals may be accurately modeled using the post-Newtonian (PN) approximation to general relativity. The PN approximation to the dynamics of inspiralling compact binaries, usually modeled to consist of point masses, provides, for example, the equations of motion as corrections to the Newtonian ones in terms of $({v}/{c})^2 \sim {G m}/{c^2\,r}$, where $v$, $m$, and $r$ are the characteristic orbital velocity, the total mass, and the typical orbital separation, respectively. In PN computations, it is customary to treat a non-spinning inspiralling compact binary as consisting of two point masses moving in quasi-circular orbits. These PN computations have, to date, provided {\em four} quantities that are required to do astrophysics with GW interferometers.
For inspiralling compact binaries, the relevant four quantities are the 3PN accurate dynamical (orbital) energy ${\cal E}(x)$, expressed as a PN series in terms of $x = \left ( G\, m\, \omega_{\rm 3PN} /c^3 \right )^{2/3}$, $ \omega_{\rm 3PN}(t) $ being the 3PN accurate orbital angular frequency, the 3.5PN accurate expression for the GW energy luminosity $ {\cal L }(x) $, and the 2.5PN amplitude corrected expressions for $h_{+} (t)$ and $h_{\times}(t)$, written in terms of the orbital phase $\phi$ and $x$ \cite{BDI}. GW data analysts employ these inputs to construct various types of search templates; let us take a closer look at the so-called TaylorT1 and TaylorT2 waveforms implemented in the LSC Algorithms Library (LAL) \cite{LAL}. These two template families employ the following expression for the so-called restricted PN waveform \begin{equation} h(t) \propto \left ( \frac{G\,m \, \omega (t) }{c^3} \right )^{2/3} \, \cos 2\, \phi(t)\,, \label{Eq.I1} \end{equation} where the proportionality constant may be set to unity for non-spinning compact binaries. At a given PN order, the two families mentioned above provide slightly different ways to compute $\omega (t)$ and $\phi(t)$. The TaylorT1 family numerically solves the following two differential equations: \begin{equation} \frac{d \phi (t)}{dt} = \omega (t)\,; \,\,\, \frac{d\,\omega (t)}{dt} = -{\cal L}( \omega) \bigg / \frac{ d {\cal E}}{d \omega}\,, \label{EqI2} \end{equation} where, for example, ${\cal L}( \omega)$ and ${\cal E}$ are respectively the 3.5PN accurate GW energy luminosity and the 3PN accurate orbital energy for TaylorT1 3.5PN waveforms. In other words, for a given PN member of the TaylorT1 family, $ \omega(t) $ and $\phi(t)$ are computed by numerically solving the relevant approximants in Eq.~(\ref{EqI2}). To construct a member of the TaylorT2 family, say TaylorT2 3.5PN, we require a 3.5PN accurate (Taylor expanded) version of $d\, \omega (t)/dt $, appearing in Eq.~(\ref{EqI2}).
The differential equations that define $\omega(t) $ and $\phi(t)$ for TaylorT2 3.5PN waveforms can be symbolically displayed as \begin{align} \label{EqI3} \frac{d \phi (t)}{dt} &= \omega (t)\,; \frac{d\,\omega (t)}{dt} = \frac{96}{5} \left ( \frac{ G\, {\cal M}\,\omega}{c^3} \right )^{5/3} \omega^2 \, \biggl \{ 1 \nonumber \\ & \quad + {\cal O}(\nu) + {\cal O}(\nu^{3/2}) + {\cal O}(\nu^{2}) + {\cal O}(\nu^{5/2}) \nonumber \\ & \quad + {\cal O}(\nu^{3}) + {\cal O}(\nu^{7/2}) \biggr \}\,, \end{align} where $ \nu = 1/c^2$ is a PN ordering parameter and the explicit expressions for these PN contributions may be extracted from Refs.~\cite{BDI}. In the above equation, ${\cal M} \equiv m\,\eta^{3/5}$ is the chirp mass, where $\eta$ is the usual symmetric mass ratio and $m$ is the total mass of the binary. In this paper, we provide prescriptions to compute {\em three} new types of time-domain Taylor approximants that should be, in our opinion, interesting to various GW data analysis communities. Let us first list the salient features of these new templates, which also employ an expression similar to Eq.~(\ref{Eq.I1}) to generate waveforms. The {\em three} important features of our Taylor approximants are the following. The first point is that, in comparison with TaylorT1 and TaylorT2 waveforms, for a given GW frequency window and at a given PN order, our prescriptions will provide more accumulated GW cycles. Further, our approaches to compute $h(t)$ are numerically as cheap (expensive) as TaylorT1 and TaylorT2 waveforms. Let us consider the second point. It is desirable to construct GW templates using the mathematical formulation employed to construct the (heavily employed) PN accurate relativistic \emph{Damour-Deruelle timing formula} for binary pulsars \cite{DD86}. This is because, formally, GW phasing for inspiralling compact binaries and the timing of relativistic binary pulsars are quite similar.
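For orientation, the sketch below integrates only the leading (Newtonian) term of the frequency evolution in Eq.~(\ref{EqI3}) and compares the result with the standard closed-form Newtonian chirp, $\omega(t)=\big[\tfrac{256}{5}\,(G{\cal M}/c^3)^{5/3}(t_c-t)\big]^{-3/8}$. The chosen binary (a $1.4\,M_{\odot}$--$1.4\,M_{\odot}$ system started at an orbital frequency of $10$ Hz) and the step sizes are illustrative assumptions, not values from the text.

```python
import math

# Integrate only the leading (Newtonian) term of the frequency evolution,
# d(omega)/dt = (96/5) (G Mc omega / c^3)^{5/3} omega^2, and compare with
# the closed-form Newtonian chirp.  The binary below (1.4 + 1.4 solar
# masses) is an illustrative assumption.

GMSUN = 4.925491e-6                  # G*Msun/c^3 in seconds
m, eta = 2.8, 0.25
Mc = m * eta ** 0.6 * GMSUN          # chirp mass, in seconds

def dwdt(w):
    return 96.0 / 5.0 * (Mc * w) ** (5.0 / 3.0) * w * w

w0 = 2.0 * math.pi * 10.0            # initial orbital angular frequency
w, h, t = w0, 1.0e-3, 0.0
for _ in range(100000):              # 100 s of evolution with RK4 steps
    k1 = dwdt(w)
    k2 = dwdt(w + 0.5 * h * k1)
    k3 = dwdt(w + 0.5 * h * k2)
    k4 = dwdt(w + h * k3)
    w += h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    t += h

# Closed form: omega(t) = [(256/5) Mc^{5/3} (t_c - t)]^{-3/8}, where the
# coalescence time t_c is fixed by the initial frequency w0.
t_c = 5.0 / 256.0 * Mc ** (-5.0 / 3.0) * w0 ** (-8.0 / 3.0)
w_exact = (256.0 / 5.0 * Mc ** (5.0 / 3.0) * (t_c - t)) ** (-3.0 / 8.0)
print(w, w_exact)                    # the two agree to high accuracy
```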
Our construction of these new Taylor approximants is indeed influenced by the GW phasing formalism, available in Ref.~\cite{DGI}, that provided a method to construct GW templates for compact binaries of arbitrary mass ratio moving in inspiralling eccentric orbits. We recall that the techniques adopted in Ref.~\cite{DGI} were influenced by the mathematical formulation, developed in Ref.~\cite{TD82}, to compute the \emph{Damour-Deruelle timing formula}. Finally, a recent preliminary investigation indicates that our new Taylor approximants, at the dominant radiation reaction order, should be very efficient in capturing GWs from compact binaries inspiralling along PN accurate and mildly eccentric orbits \cite{TG07}. This is, in our opinion, a very attractive feature for GW data analysts, as GWs from inspiralling (astrophysical) compact binaries should have some tiny eccentricities when they enter the bandwidth of laser interferometers. Let us describe how we construct these new types of PN accurate time-domain Taylor approximant GW search templates. GWs from inspiralling (astrophysical) compact binaries will have some tiny eccentricities around orbital frequencies of $20$ Hz. For example, using Ref.~\cite{DGI}, it is not that difficult to show that the orbital eccentricity of the Hulse-Taylor binary pulsar when its orbital frequency reaches around $20$ Hz will be $\sim 10^{-6}$. Therefore, let us take a closer look at how one can describe, in a PN accurate manner, eccentric orbits, motivated by the fact that GW phasing requires an accurate orbital description. A couple of decades ago, it was demonstrated that, associated with a PN accurate non-circular orbit, there exist {\em two gauge invariant} quantities, when expressed in terms of the conserved orbital energy and angular momentum of the binary \cite{DS88}.
These are the PN accurate mean motion $n$ and the periastron advance parameter $k$, which measures the advance of the periastron in the time interval $T$, $T$ being the radial orbital period such that $n =2\,\pi/ T $. It is quite convenient to define these quantities using the PN accurate Keplerian type parametric solution to the conservative PN accurate compact binary dynamics, available in Refs.~\cite{DD}. When the eccentricity parameter, say the time eccentricity $e_t$, associated with the PN accurate Keplerian type parametrization approaches zero, one can define the PN accurate orbital angular frequency $ \omega \equiv d \phi/dt = n \left( 1 + k \right )$. This implies that for PN accurate circular orbits, the angular part of the orbital motion is simply given by $ \phi - \phi_{0} = n \times (1 + k) \times ( t - t_0) $. To 3PN order, using Ref.~\cite{KG06}, in the limit $ e_t \rightarrow 0$, we have \begin{align} \omega_{\rm 3PN} = & n \biggl \{ 1 +3\,{\xi}^{2/3} + \left( { \frac {39}{2}}-7\,\eta \right) {\xi}^{4/3} + \biggl [ {\frac {315}{2}} \nonumber \\ & \quad +7\,{\eta}^{2} + \biggl ( -{\frac {817}{4}} +{ \frac {123}{32}}\,{\pi }^{2} \biggr ) \eta \biggr ] {\xi}^{2} \biggr \}\,, \label{Eq2.1} \end{align} where $ \xi = G\,m\, n/c^3$.
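For readers who want to reproduce the numbers quoted later, Eq.~(\ref{Eq2.1}) is straightforward to code. The following Python sketch (function names ours; units $G=c=m=1$ so that $\xi=n$) evaluates $\omega_{\rm 3PN}(n)$ and inverts it by bisection, which is legitimate in the small-$\xi$ inspiral regime where the relation is monotonically increasing:

```python
import math

# Sketch of Eq. (2.1): the 3PN circular-orbit relation omega = omega(n),
# in units G = c = m = 1 so that xi = n.  The coefficients are those of
# the text; eta is the symmetric mass ratio.
def omega_3pn(n, eta):
    xi = n
    return n * (1.0
                + 3.0 * xi ** (2.0 / 3.0)
                + (39.0 / 2.0 - 7.0 * eta) * xi ** (4.0 / 3.0)
                + (315.0 / 2.0 + 7.0 * eta ** 2
                   + (-817.0 / 4.0 + 123.0 / 32.0 * math.pi ** 2) * eta)
                  * xi ** 2)

def n_from_omega(omega, eta, lo=1e-12, hi=1.0):
    """Invert omega_3pn by bisection; valid where omega_3pn is increasing."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if omega_3pn(mid, eta) < omega:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A round trip $n \to \omega_{\rm 3PN}(n) \to n$ recovers the input to machine precision, which is how we obtain the $n_i$ and $n_f$ values quoted below from $\omega_i$ and $\omega_f$.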
Let us now compute, employing the PN accurate expressions for ${\cal E}(x)$ and $ {\cal L}(x)$ available in Refs.~\cite{BDI}, the following 3PN accurate expression for the orbital energy ${\cal E}$ and the 3.5PN accurate GW energy luminosity ${\cal L}$, in terms of $ \xi$: \begin{subequations} \label{Eq2.2} \begin{align} {\cal \tilde E}(\xi) &= \xi^{2/3} \biggl \{ 1 + \left[ \frac{5}{4} - \frac{ \eta}{12} \right] {\xi}^{2/3} + \biggl [{\frac {45}{8}}-{ \frac {21}{8}}\,\eta \nonumber \\ & \quad -\frac{1}{24}\,{\eta}^{2} \biggr ] {\xi}^{4/3} + \biggl [ {\frac {7975}{192}}-{\frac {35}{5184}}\,{\eta}^{3}+{\frac { 1031}{288}}\,{\eta}^{2} \nonumber \\ & \quad + \left( -{\frac {30403}{576}}+{\frac {41}{96}} \,{\pi }^{2} \right) \eta \biggr ] {\xi}^{2} \biggr \}\,, \\ {\cal L}(\xi) &= \frac{32}{5} \, \eta^2 \xi^{10/3} \, \biggl \{ 1+ \left( -{\frac {35}{12}}\, \eta+{\frac {2113}{336}} \right) {\xi}^{2/3} \nonumber \\ & \quad +4\,\pi\,\xi + \left( {\frac {458461}{9072}}-{\frac {20129}{504}}\,\eta+{\frac {65}{18}}\,{ \eta}^{2} \right) {\xi}^{4/3} \nonumber \\ & \quad + \left( -{\frac {583}{24}}\,\eta+{\frac {26753}{672}} \right) \pi\,{\xi}^{5/3} + \biggl [ \biggl ( \frac{16}{3} \nonumber \\ & \quad +{\frac {41}{3}}\,\eta \biggr ) {\pi}^{2} +{\frac {13106635373}{23284800}} -{\frac {6881951}{7776}}\, \eta \nonumber \\ & \quad +{ \frac {375997}{3024}}\,{\eta}^{2} -{\frac {775}{324}}\,{\eta}^{3} -{\frac {1712}{105}}\,\biggl ( \gamma + \log(4\, \xi^{1/3}) \biggr ) \biggr ] {\xi}^{2} \nonumber \\ & \quad + \biggl ( {\frac {771833}{2016}}-{\frac {624559}{1728}}\,\eta +{\frac { 193385}{3024}}\,{\eta}^{2} \biggr ) \pi\,{\xi}^{7/3} \biggr \}\,, \end{align} \end{subequations} where ${\cal \tilde E}=-2\,E$, $E$ being the dimensionless non-relativistic energy per unit reduced mass \cite{DGI}, and $\gamma$ being Euler's constant. We are now in a position to construct, in our terminology, TaylorK1 3.5PN and TaylorK2 3.5PN restricted PN waveforms.
In our approach, the form of the restricted PN waveform, Eq.~(\ref{Eq.I1}), becomes $ h(t) \propto \left ( \frac{G\,m \, n(t) }{c^3} \right )^{2/3} \, \cos 2\, \phi(t)$. This is allowed because at the Newtonian order $\omega = n$ and the amplitude is indeed Newtonian accurate in Eq.~(\ref{Eq.I1}). For the TaylorK1 3.5PN accurate waveform, $n(t)$ and $\phi(t)$ are numerically obtained using the following two differential equations: \begin{subequations} \label{Eq2.4} \begin{align} \frac{d \phi}{dt} &= \omega_{\rm 3PN}\,, \label{Eq2.4a} \\ \frac{d n}{dt} &= - {\cal L}(\xi) \bigg / \frac{d { E}}{d n}\,. \label{Eq2.4b} \end{align} \end{subequations} To construct our TaylorK2 3.5PN waveforms, as expected, we Taylor expand, in terms of $\xi$, the RHS of Eq.~(\ref{Eq2.4b}), which leads to \begin{subequations} \label{Eq2.6} \begin{align} \frac{d \phi}{dt} &= \omega_{\rm 3PN}\,, \label{Eq2.6a} \\ \frac{d n}{dt} &= {\frac {96}{5}}\,\eta\,{n}^{2}{\xi}^{5/3} \biggl \{ 1 + \left( {\frac {1273}{336 }}-\frac{11}{4}\,\eta \right) {\xi}^{2/3} +4\,\pi\,\xi \nonumber \\ & \quad + \left( {\frac {438887}{18144}}+{\frac {59}{18}}\,{\eta}^{2} -{\frac {49507}{ 2016}}\,\eta \right) {\xi}^{4/3} + \biggl ( {\frac {20033}{672}} \nonumber \\ & \quad -{\frac {189}{8}}\,\eta \biggr ) \pi\,{\xi}^{5/3} + \biggl [ \left( \frac{16}{3} +{\frac {287}{24}}\,\eta \right) {\pi}^{2} \nonumber \\ & \quad -{\frac {5605}{2592}}\,{\eta}^{3} +{\frac {617285}{8064}} \,{\eta}^{2} -{\frac {16554367}{31104} }\,\eta \nonumber \\ & \quad +{\frac {38047038863}{139708800}} -{\frac {1712}{105}}\,\left ( \gamma + \log 4\,\xi^{1/3} \right ) \biggr ] \xi^2 \nonumber \\ & \quad + \biggl [ {\frac {91495}{1512}}\,{\eta}^{2} -{\frac {1608185}{6048}}\,\eta +{\frac {971011}{4032}} \biggr ] \pi\,{\xi}^{7/3} \biggr \}\,. \label{Eq2.6b} \end{align} \end{subequations} Let us now specify, for example, the limits of integration for $n$ required to construct TaylorK1 3.5PN and TaylorK2 3.5PN waveforms.
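A minimal numerical illustration of the TaylorK construction (ours, not from LAL): keep only the leading radiation-reaction term of Eq.~(\ref{Eq2.6b}) but the full 3PN relation $\omega_{\rm 3PN}(n)$ of Eq.~(\ref{Eq2.1}) in the phase equation, in units $G=c=m=1$ so that $\xi = n$. Because $\omega_{\rm 3PN}(n) > n$, the accumulated phase exceeds the naive $d\phi/dt = n$ evolution, which is the mechanism behind the larger cycle counts reported below. Function names and the simple Euler stepping are our choices for a sketch:

```python
import math

# Toy TaylorK1/K2-style integration, leading radiation-reaction term only:
#   dphi/dt = omega_3PN(n),   dn/dt = (96/5) eta n^2 xi^(5/3)  with xi = n.
def omega_3pn(n, eta):
    xi = n
    return n * (1.0 + 3.0 * xi ** (2.0 / 3.0)
                + (39.0 / 2.0 - 7.0 * eta) * xi ** (4.0 / 3.0)
                + (315.0 / 2.0 + 7.0 * eta ** 2
                   + (-817.0 / 4.0 + 123.0 * math.pi ** 2 / 32.0) * eta) * xi ** 2)

def taylor_k_phase(n0, eta, t_end, dt=1e-2, use_3pn=True):
    """Euler integration; use_3pn=False sets dphi/dt = n for comparison."""
    phi, n = 0.0, n0
    for _ in range(int(round(t_end / dt))):
        phi += dt * (omega_3pn(n, eta) if use_3pn else n)
        n += dt * 96.0 / 5.0 * eta * n ** (11.0 / 3.0)
    return phi, n
```

Comparing the two settings of `use_3pn` over the same time span shows directly the extra phase accumulated through the periastron-advance factor $(1+k)$.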
For initial LIGO, it is customary to take $\omega_i$ and $\omega_f$, the initial and final values of $\omega$, to be $40\, \pi$ Hz and $(6^{3/2} \, m )^{-1}$ Hz, where $\omega_f$ is the conventional orbital angular frequency of the innermost stable circular orbit for a test particle around a Schwarzschild black hole. With these inputs, the initial and final values of $n$, denoted by $n_i$ and $n_f$, are numerically computed using Eq.~(\ref{Eq2.1}). This is justified because of the observation in Ref.~\cite{TG06} that the quadrupolar GW frequency from a compact binary, having PN accurate orbital motion, appears at $( 1 + k) \, n/ \pi$. At 3PN order, for a compact binary having $m= 11.4 M_{\odot} $ and $\eta \sim 0.108$, we have $n_i \sim 111.32$ Hz and $n_f \sim 679.3$ Hz. In our approaches to construct, for example, TaylorK1 2PN and TaylorK2 2PN waveforms, we use only the 2PN accurate relation connecting $\omega$ and $n$. Let us now compute, in the time domain, the accumulated number of GW cycles, $\mathcal N_{GW}$, in a given GW frequency window, by numerically integrating Eqs.~(\ref{Eq2.4}) and (\ref{Eq2.6}), representing the temporal evolutions for TaylorK1 and TaylorK2 waveforms, at {\em four} different PN orders, namely 2PN, 2.5PN, 3PN and 3.5PN orders, for three canonical compact binaries usually considered in the GW literature [we restrict these orbital evolutions such that the emitted GWs are in the GW frequency window defined by $40$ Hz and $ ( 6^{3/2} \, \pi\, m )^{-1}$ Hz]. Let us also compare these $\mathcal N_{GW}$ with what is expected from TaylorT1 and TaylorT2 waveforms at these four different PN orders.
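A quick plausibility check of such cycle counts can be made at the leading quadrupole order, where $\mathcal N_{GW} = (32\,\pi^{8/3})^{-1}\,{\cal M}^{-5/3}\,\bigl(f_i^{-5/3} - f_f^{-5/3}\bigr)$, with the chirp mass ${\cal M}$ expressed in seconds. The sketch below is ours (the solar-mass constant is the usual $G M_\odot/c^3$); it gives roughly $1.6\times 10^{3}$ cycles for a $1.4\,M_\odot$--$1.4\,M_\odot$ binary and close to $57$ for a $10\,M_\odot$--$10\,M_\odot$ binary, in the right ballpark of the PN accurate values:

```python
import math

# Newtonian (leading quadrupole) estimate of the accumulated GW cycles
# between the GW frequencies f_i and the Schwarzschild ISCO frequency,
#   N = Mc^(-5/3) (f_i^(-5/3) - f_f^(-5/3)) / (32 pi^(8/3)),
# with the chirp mass Mc in seconds (G = c = 1 units).
MSUN_S = 4.925491e-6  # G * Msun / c^3 in seconds

def gw_cycles_newtonian(m1, m2, f_i):
    m = (m1 + m2) * MSUN_S
    eta = m1 * m2 / (m1 + m2) ** 2
    mc = m * eta ** 0.6
    f_f = 1.0 / (6.0 ** 1.5 * math.pi * m)  # GW frequency at the ISCO
    return (mc ** (-5.0 / 3.0)
            * (f_i ** (-5.0 / 3.0) - f_f ** (-5.0 / 3.0))
            / (32.0 * math.pi ** (8.0 / 3.0)))
```

This Newtonian estimate obviously cannot distinguish the template families; it only confirms the overall scale of the entries discussed next.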
The numbers, relevant for initial LIGO, are listed in Table~\ref{tab1}, where we compare $\mathcal N_{GW}$ resulting from TaylorK2 and TaylorT2 prescriptions [results are similar while comparing TaylorK1 and TaylorT1]. \begin{table}[!ht] \caption{ \label{tab1} Accumulated number of GW cycles, relevant for initial LIGO, for three types of canonical binaries at four different PN orders using TaylorK2 and TaylorT2 waveforms. The values of $\mathcal N_{GW}$ arising from TaylorT2 waveforms are given in parentheses. We note that TaylorK2 waveforms provide larger $\mathcal N_{GW}$ than TaylorT2 waveforms. For high mass binaries, the convergence of $\mathcal N_{GW}$ is not that pronounced for TaylorK2 waveforms compared to TaylorT2 waveforms. } \begin{tabular}{||l|r|r|r|} \hline $m_1/ M_{\odot} : m_2/ M_{\odot} $ & $1.4 : 1.4$ & $1.4 : 10$ & $10 : 10$ \\ \hline $ {\rm 2PN} $ \hfill & 1616.4 (1613.5) & 345.6 (333.8) & 57 (52.6) \\ ${\rm 2.5PN } $ \hfill & 1613.5 (1605.8) & 333.8 (333.1) & 53.8 ( 52.6) \\ ${\rm 3PN}$ \hfill & 1623.4 (1616) & 347.3 ( 330.9) & 57.6 (52.9) \\ ${\rm 3.5PN} $ \hfill & 1620.6 (1615.4) & 342.4 (330.5) & 56.2 (52.5) \\ \hline \end{tabular} \end{table} We are aware that LAL also provides routines to create TaylorT3 waveforms. In this prescription, both $\phi(t)$ and $\omega(t)$, appearing in Eq.~(\ref{Eq.I1}), are given as explicit PN accurate functions of time. These explicit time dependencies are usually expressed in terms of the so-called `adimensional' time variable $ \theta = \frac{c^3\, \eta}{5\, G\, m} \left( t_c - t \right)$, where $t_c$ is the PN accurate coalescence time. It is indeed possible for us to compute $n(t)$, using Eqs.~(\ref{Eq2.6}), as a PN series in terms of $\theta$. However, we are reluctant to repeat what is done in TaylorT3 waveforms to get $\phi(t)$ with the help of Eqs.~(\ref{Eq2.6}).
Observe that radiation reaction, and hence the temporal evolution of $n$, first appears at 2.5PN order; therefore, in our opinion, it is better to keep $d \phi/dt$ to at least 2PN order in Eqs.~(\ref{Eq2.6}) to be consistent in a PN way [see Refs.~\cite{DGI,TG07} where similar approaches are employed]. It is important to note that, while constructing these time-domain Taylor waveforms, we employed the following two arguments. The first one is the standard argument that equates the rate of change of the conserved orbital energy of a compact binary to the negative of the GW luminosity. However, for constructing TaylorT1, TaylorT2, TaylorK1 and TaylorK2 waveforms, one requires additional PN accurate relations connecting $\omega$ (or $n$, as the case may be) to the conserved orbital energy. Further, we speculate that the two different ways of computing $d \omega/dt$, enforced in TaylorT1 and TaylorT2 waveforms, may be based on the fact that observationally $d \omega/dt$ (or the above mentioned standard argument) is only tested to the Newtonian radiation reaction order by the accurate timing of binary pulsars. Therefore, it is natural to ask if we can construct $h(t)$ employing only the energy balance argument.
This is indeed possible, as demonstrated below: \begin{subequations} \label{Eq2.7} \begin{align} h(\hat t) & \propto {\cal \tilde E}(\hat t) \, \cos 2\,\phi (\hat t) \,, \label{Eq2.7a} \\ \frac{d \phi}{d \hat t} &= \zeta^{3/2} \biggl \{ 1+ \frac{1}{8} \left[ {9}+\eta \right] \zeta + \biggl [ {\frac {891}{128}} -{\frac {201 }{64}}\,\eta +{\frac {11}{128}}\,{\eta}^{2} \biggr ] {\zeta}^{2} \nonumber \\ & \quad + \biggl [ {\frac {41445}{1024}} + \left( -{\frac {309715}{3072}} +{\frac {205} {64}}\,{\pi}^{2} \right) \eta +{\frac {1215}{1024}}\,{\eta}^{2} \nonumber \\ & \quad +{\frac {45}{1024}}\,{\eta}^{3} \biggr ] {\zeta}^{3} \biggr \}\,, \label{Eq2.7b} \\ \frac{d \zeta}{d \hat t} &= {\frac {64}{5}}\,\eta\,{\zeta}^{5} \biggl \{ 1+ \left[ {\frac {13}{336}}-\frac{5}{2}\,\eta \right] \zeta +4\,\pi\,{ \zeta}^{3/2} \nonumber \\ & \quad + \left[ { \frac {117857}{18144}} -{\frac {12017}{2016}}\,\eta +\frac{5}{2}\,{\eta}^{2} \right] {\zeta}^{2} + \biggl [ {\frac {4913}{672 }} \nonumber \\ & \quad -{\frac {177}{8}}\,\eta \biggr ] \pi\,{\zeta}^{5/2} + \biggl [ \left( {\frac {369}{32}}\,\eta+ \frac{16}{3}\right) {\pi}^{2} \nonumber \\ & \quad +{\frac {37999588601}{279417600}} -{\frac {24861497}{72576}}\,\eta +{\frac {488849}{16128}}\,{\eta}^{2} \nonumber \\ & \quad -{\frac {85}{64} }\,{\eta}^{3} -{\frac {1712}{105}}\, \biggl ( \ln \left( 4\,\sqrt {\zeta} \right) + \gamma \biggr ) \biggr ] {\zeta}^{3} \nonumber \\ & \quad + \biggl [ {\frac {613373}{12096}}\,{\eta}^{2}+{\frac {129817}{2304}}-{ \frac {3207739}{48384}}\,\eta \biggr ] \pi\,{\zeta}^{7/2} \biggr \}\,, \label{Eq2.7c} \end{align} \end{subequations} where $ \hat t = t\, c^3/G\,m $ and $\zeta = {\cal \tilde E}$. We call the resulting $h(\hat t)$ TaylorEt waveforms. The values of $\zeta $ corresponding to $\omega_i$ and $\omega_f$ can be numerically evaluated using the RHS of Eq.~(\ref{Eq2.7b}) with $d \phi/ d \hat t = \hat \omega$.
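At the leading order, the TaylorEt flow of Eqs.~(\ref{Eq2.7}) reduces to $d\phi/d\hat t = \zeta^{3/2}$ and $d\zeta/d\hat t = (64/5)\,\eta\,\zeta^5$, which integrates in closed form: $\zeta^{-4}(\hat t) = \zeta_0^{-4} - (256/5)\,\eta\,\hat t$. The following sketch (ours; the midpoint stepper and function name are our choices) integrates the truncated system and can be checked against that solution:

```python
# Leading-order sketch of the TaylorEt system (Eqs. 2.7), keeping only the
# first terms: dphi/dt_hat = zeta^(3/2), dzeta/dt_hat = (64/5) eta zeta^5.
def taylor_et_leading(zeta0, eta, t_end, dt=1e-2):
    """Midpoint (RK2) integration of the leading-order TaylorEt flow."""
    rate = 64.0 / 5.0 * eta
    phi, z = 0.0, zeta0
    for _ in range(int(round(t_end / dt))):
        zm = z + 0.5 * dt * rate * z ** 5   # half step in zeta
        phi += dt * zm ** 1.5               # dphi/dt_hat = zeta^(3/2)
        z += dt * rate * zm ** 5            # dzeta/dt_hat = (64/5) eta zeta^5
    return phi, z
```

At this order $\zeta$ grows monotonically, so the binding-energy variable itself can serve as the evolution parameter, which is the essential point of the TaylorEt construction.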
We evaluated ${\cal N}_{GW}$ associated with TaylorEt 3.5PN waveforms for the three canonical compact binaries and the numbers are the following. For neutron star binaries, with $m = 2.8 M_{\odot}$ and $ \eta = 0.25$, $ {\cal N}_{GW} =1617.4$, and for the usual black hole-neutron star binaries, with $ m = 11.4 M_{\odot}$ and $\eta =0.108$, we have $ {\cal N}_{GW} =335.4$. For typical stellar mass black hole binaries, with $ m = 20 M_{\odot}$ and $\eta =0.25$, one gets $ {\cal N}_{GW} =54.0$. It is interesting to note that we get larger ${\cal N}_{GW}$ compared to TaylorT 3.5PN waveforms and lower values compared to TaylorK 3.5PN waveforms. This should be related to the fact that it takes more time for the TaylorEt prescription to reach $\omega_f$ from $\omega_i$ compared to the TaylorT1 (or TaylorT2) approach, while the opposite is true for TaylorK1 (or TaylorK2). The observation that TaylorEt waveforms also provide a larger number of GW cycles in a given GW frequency window, in our opinion, makes this our third prescription to compute $h(t)$. \paragraph{Conclusions.---} We provided new ways of constructing restricted time-domain PN accurate waveforms for non-spinning compact binaries inspiralling along PN accurate quasi-circular orbits. Our prescriptions employed PN accurate expressions for the conserved orbital energy and GW luminosity, available in Refs.~\cite{BDI}, in a democratic manner and depended heavily on certain PN accurate gauge invariant quantities, first introduced in Ref.~\cite{DS88}. These template waveforms provide a larger number of accumulated GW cycles in a given GW frequency window and may be useful in detecting GWs from inspiralling compact binaries that should have `teeny-weeny' orbital eccentricities. Further, our approaches are influenced by the way the PN accurate \emph{Damour-Deruelle timing formula} was constructed. Therefore, we feel that our TaylorK1, TaylorK2 and TaylorEt waveforms should be of certain interest to the practitioners of LAL.
Further, we feel that our restricted PN waveforms should be useful for the recently initiated {\it mock LISA data challenge} task force. The data analysis implications of these templates, relevant for both ground and space based GW detectors, are under active investigation in collaboration with Stas Babak, Sukanta Bose, Christian R\"over and Manuel Tessmer. The GW phase evolution under our prescription is also being compared with its counterpart in numerical relativity based binary black hole inspirals. \acknowledgments I am indebted to Gerhard Sch\"afer for illuminating discussions and persistent encouragement. Lively discussions with Manuel Tessmer are warmly acknowledged. This work is supported in part by the DFG (Deutsche Forschungsgemeinschaft) through SFB/TR7 ``Gravitationswellenastronomie'' and the DLR (Deutsches Zentrum f\"ur Luft- und Raumfahrt) through ``LISA Germany''.
\section{Introduction} There are two main motivations behind the definitions and results presented here. See the next section for a precise definition of Fuchsian convex bodies, the main object of this paper, and Fuchsian convex surfaces (boundaries of Fuchsian convex bodies). The first motivation is to show that the geometry of Fuchsian convex surfaces in the Minkowski space is the right analogue of the classical geometry of convex compact hypersurfaces in the Euclidean space. In the present paper, we show the analogue of the basic results of what is called the Brunn--Minkowski theory. Roughly speaking, the matter is to study the relations between the sum and the volume of the bodies under consideration. Actually here we associate to each convex set the volume of another region of the space, determined by the convex set, so we will call it the \emph{covolume} of the convex set. This generalization is as natural as, for example, going from the round sphere to compact hyperbolic surfaces. To strengthen this idea, existing results can be put into perspective. Indeed, Fuchsian convex surfaces are not new objects. As far as I know, smooth Fuchsian hypersurfaces appeared in \cite{OS83}, see Subsection~\ref{sub:mink reg}. The simplest examples of convex Fuchsian surfaces are convex hulls of the orbit of one point for the action of the Fuchsian group. They were considered in \cite{NP91}, in relation to the seminal papers \cite{pen87,EP88}. See also \cite{CDM97}. The idea is to study hyperbolic problems via the extrinsic structure given by the Minkowski space. For a recent illustration see \cite{EGM11}. The first study of Fuchsian surfaces in their own right is probably \cite{LS00}. The authors proved that for any Riemannian metric on a compact surface of genus $\geq 2$ with negative curvature, there exists an isometric convex Fuchsian surface in the $2+1$-Minkowski space, up to a quotient. In the Euclidean case, the analogous problem is known as the Weyl problem.
A uniqueness result is also given. This kind of result about the realization of abstract metrics by (hyper)surfaces invariant under a group action seems to go back to earlier papers of F.~Labourie and to \cite{Gro86}. The polyhedral analog of \cite{LS00} is considered in \cite{Fil11}. An important intermediate result, about polyhedral infinitesimal rigidity in $d=2$, was proved in \cite{Sch07} (Fuchsian analogue of the Dehn theorem). More recently, a Fuchsian analogue of the ``Alexandrov prescribed curvature problem'' was proved in \cite{Ber10}. The proof uses optimal mass transport. A refinement of this result in the polyhedral $d=2$ case was obtained in \cite{isk00}. A solution of the Christoffel problem (prescribed sum of the radii of curvature in the regular case) for Fuchsian convex bodies will be given in \cite{fv}, as well as for more general convex sets in the Minkowski space (with or without group action), similarly to \cite{LLS06}. The second motivation is that, up to a quotient, the results presented here are about the covolume defined by convex Cauchy surfaces in the simplest case of flat Lorentzian manifolds, namely the quotient of the interior of the future cone by a Fuchsian group. It is relevant to consider them in a larger class of flat Lorentzian manifolds, known as maximal globally hyperbolic Cauchy-compact flat spacetimes. They were considered in the seminal paper \cite{Mes07}, see \cite{Mes07+} and \cite{Bar05,Bon05}. Roughly speaking, one could consider hypersurfaces in the Minkowski space invariant under a group of isometries whose set of linear isometries forms a Fuchsian group (translations are added). In $d=2$, for such smooth strictly convex surfaces, a Minkowski theorem (generalizing Theorem~\ref{thm:alg lin det} in this dimension) was proved recently in \cite{BBZ10}. Maybe some of the basic objects introduced in the present paper could be extended to these manifolds. The paper is organized as follows.
Section~\ref{sec:def} introduces, among the main definitions, the main tool to study (Fuchsian) convex bodies: the support functions. The case of the $C^2_+$ Fuchsian convex bodies (roughly speaking, the ones with a sufficiently regular boundary) is treated in Section~\ref{sec:reg} and the one of polyhedral Fuchsian convex bodies in Section~\ref{sec:pol}. These two sections are independent. In Section~\ref{sec:gen} the general results are obtained by polyhedral approximation. It appears that the proofs of the main results, even though very analogous to the classical ones, are simpler than in the Euclidean case. \subsection*{Acknowledgment} The author would like to thank Stephanie Alexander, Thierry Barbot, Francesco Bonsante, Bruno Colbois, Ivan Izmestiev, Yves Martinez-Maure, Joan Porti, Jean-Marc Schlenker, Graham Smith, Rolf Schneider and Abdelghani Zeghib for attractive discussions about the content of this paper. The author thanks the anonymous referee for his/her comments and suggestions. Work supported by the ANR GR Analysis-Geometry. \section{Definitions}\label{sec:def} \subsection{Fuchsian convex bodies} The Minkowski space-time of dimension $(d+1)$, $d\geq 1$, is $\mathbb{R}^{d+1}$ endowed with the symmetric bilinear form $$\langle x,y\rangle_-=x_1y_1+\cdots+x_dy_d-x_{d+1}y_{d+1}.$$ We will denote by $\mathcal{F}$ the interior of the future cone of the origin. It is the set of future time-like vectors: the set of $x$ such that $\langle x,x\rangle_-<0$ (time-like) and the last coordinate of $x$ for the standard basis is positive (future). The pseudo-sphere contained in $\mathcal{F}$ at distance $t$ from the origin of ${\mathbb R}^{d+1}$ is $${\mathbb H}_t^d=\{x\in {\mathbb R}^{d+1}\vert \langle x,x\rangle_-=-t^2, x_{d+1}>0\}. $$ All along the paper we identify ${\mathbb H}_1^d$ with the hyperbolic space $\mathbb{H}^d$.
In particular the isometries of ${\mathbb H}^d$ are identified with the linear isometries of the Minkowski space keeping ${\mathbb H}_1^d$ invariant \cite[A.2.4]{BP92}. Note that for any point $x\in \mathcal{F}$, there exists $t$ such that $x\in {\mathbb H}_t^d$. \begin{definition}\label{def: fuchsian body} A \emph{Fuchsian group} is a subgroup of the linear isometries group of $\mathbb{R}^{d+1}$, fixing setwise $\mathcal{F}$ and acting freely cocompactly on $\mathbb{H}^d$ (i.e.~${\mathbb H}^d/\Gamma$ is a compact manifold). A \emph{Fuchsian convex body} is the data of a convex closed proper subset $K$ of $\mathcal{F}$, together with a Fuchsian group $\Gamma$, such that $\Gamma K=K$. A \emph{$\Gamma$-convex body} is a Fuchsian convex body with Fuchsian group $\Gamma$. A \emph{Fuchsian convex surface} is the boundary of a Fuchsian convex body. \end{definition} A Fuchsian convex body has to be thought of as the analogue of a convex body (compact convex set), with the compactness condition replaced by a ``cocompactness'' condition (we will see that a Fuchsian convex body is never bounded). Joan Porti pointed out to the author that what is done in this paper is probably true without the requirement that the group has no torsion. We will adapt the classical theory to the Fuchsian case. For that we mainly follow \cite{Sch93}. \paragraph{Examples} The simplest examples of Fuchsian convex surfaces are the ${\mathbb H}_t^d$ (note that all Fuchsian groups act freely and cocompactly on ${\mathbb H}_t^d$). Their convex sides are Fuchsian convex bodies, denoted by $B^d_t$, and $B^d_1$ is sometimes denoted by $B^d$ or $B$. This example shows that a given convex set can be a Fuchsian convex body for many Fuchsian groups. Given a Fuchsian group $\Gamma$, we will see in the remainder of the paper two ways of constructing convex Fuchsian bodies.
First, given a finite number of points in ${\mathcal F}$, the convex hull of their orbits for $\Gamma$ is a Fuchsian convex body, see Subsection~\ref{sub:pol}, where a dual construction is introduced. Second, we will see in Subsection~\ref{sub: reg sup} that any function on the compact hyperbolic manifold ${\mathbb H}^d/\Gamma$ satisfying a differential relation corresponds to a Fuchsian convex body. Hence the question of examples reduces to the question of finding the group $\Gamma$, which amounts to finding compact hyperbolic manifolds. Standard concrete examples of compact hyperbolic manifolds can easily be found in the literature. For a general construction in any dimension see \cite{GPS88}. Nevertheless, it is not obvious to get explicit generators. Of course the case $d=1$ is totally trivial as a Fuchsian group is generated by a boost $\left( \begin{array}{cc} \cosh t & \sinh t \\ \sinh t & \cosh t \end{array} \right),$ for a non-zero real $t$. For $d=2$, explicit generators can be constructed following \cite{mas01}. For Figure~\ref{fig:polyhedron} and a computation at the end of the paper (the figure comes from a part of a Fuchsian convex body that can be manipulated on the author's webpage), the group is the simplest one acting on ${\mathbb H}^2$, namely the one having a regular octagon as fundamental domain in a disc model. Generators are given in \cite{kat92}. \paragraph{Remark on the signature of the bilinear form} The classical theory of convex bodies uses the usual scalar product on ${\mathbb R}^{d+1}$. Here we used the usual bilinear form of signature $(d,1)$. A natural question is to ask what happens if we consider a bilinear form of signature $(d+1-k,k)$. (Obviously, the vector structure, the volume, the Levi-Civita connection (and hence the geodesics), the topology and the notion of convexity don't depend on the signature. Moreover, any linear map preserving the bilinear form is of determinant one, hence preserves the volume.)
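The $d=1$ example above is easy to verify by direct computation: the boost preserves the bilinear form $\langle x,y\rangle_- = x_1y_1 - x_2y_2$ and the future condition, hence maps each branch ${\mathbb H}^1_t$ to itself. A small sketch (ours, for illustration only):

```python
import math

# The d = 1 boost with rapidity t: it preserves <x,y>_- = x1*y1 - x2*y2
# and the sign of the last coordinate, so each H^1_s is mapped to itself.
def boost(t):
    return ((math.cosh(t), math.sinh(t)),
            (math.sinh(t), math.cosh(t)))

def apply(mat, x):
    return (mat[0][0] * x[0] + mat[0][1] * x[1],
            mat[1][0] * x[0] + mat[1][1] * x[1])

def mink(x, y):
    return x[0] * y[0] - x[1] * y[1]
```

The invariance follows algebraically from $\cosh^2 t - \sinh^2 t = 1$; the orbit of a point of ${\mathbb H}^1_s$ under the group generated by one boost is the simplest example of a $\Gamma$-orbit.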
Let us consider first the case of the usual bilinear form with signature $(d-1,2)$ ($d\geq 3$). The set of vectors of pseudo-norm $-1$ is a model of the Anti-de Sitter space, which is the Lorentzian analogue of the Hyperbolic space. First of all, we need groups of linear isometries acting cocompactly on the Anti-de Sitter space. They exist only in odd dimensions \cite{BZ04}. Moreover, Anti-de Sitter space does not bound a convex set. Finally, another interest of the present construction is that, as noted in the introduction, some objects introduced here could serve to study some kind of flat Lorentzian manifolds (with compact Cauchy surface), which can themselves be related to some problems coming from General Relativity. It is not clear if as much attention is given to pseudo-Riemannian manifolds with different signatures. \subsection{Support planes} For a subset $A$ of $\mathbb{R}^{d+1}$, a \emph{support plane} of $A$ at $x$ is a hyperplane $\mathcal{H}$ with $x\in A\cap \mathcal{H}$ and $A$ entirely contained in one side of $\mathcal{H}$. \begin{lemma}\label{lem: future convex} Let $K$ be a $\Gamma$-convex body. Then \begin{enumerate}[nolistsep,label={\bf(\roman{*})}, ref={\bf(\roman{*})}] \item $K$ is not contained in a codimension $>0$ plane. \label{nonvide} \item $K$ is \emph{future convex}:\label{futureconvex} \begin{enumerate}[nolistsep,label=(\alph{*}), ref=\ref{futureconvex}\textit{(\alph{*})}] \item through each boundary point there is a support plane; \label{support} \item all support planes are space-like;\label{suppspace} \item $K$ is contained in the future side of its support planes.\label{future} \end{enumerate} \end{enumerate} \end{lemma} \begin{proof} By definition $K$ is not empty. Let $x\in K$. As $K\subset \mathcal{F}$, there exists a $t$ such that $x\in {\mathbb H}_t^d$, and by definition, all the elements of the orbit $\Gamma x$ of $x$ belong to $K\cap {\mathbb H}_t^d$.
Suppose that $K$ is contained in a codimension $>0$ hyperplane $\mathcal{H}$. Then there would exist a codimension $1$ hyperplane $\mathcal{H}'$ with $\mathcal{H}\subset \mathcal{H}'$, and $\Gamma x\in \mathcal{H}' \cap {\mathbb H}_t^d $. This means that on ${\mathbb H}_t^d $ (which is homothetic to the hyperbolic space for the induced metric), $\Gamma x$ is contained in a totally geodesic hyperplane, a hypersphere or a horosphere (depending on $\mathcal{H}'$ being time-like, space-like or light-like), which is clearly impossible. \ref{nonvide} is proved. \ref{support} is a general property of closed convex subsets of $\mathbb{R}^{d+1}$ \cite[1.3.2]{Sch93}. Let $x\in K$ and let $\mathcal{H}$ be the support plane of $K$ at $x$. There exists $t$ such that $\Gamma x \subset {\mathbb H}_t^d$, and all elements of $\Gamma x$ must be on one side of $\mathcal{H}\cap {\mathbb H}_t^d$ on ${\mathbb H}_t^d$. Clearly $\mathcal{H}\cap {\mathbb H}_t^d$ can't be a totally geodesic hyperplane (of ${\mathbb H}_t^d$), and it can't be a horosphere either, by Sublemma~\ref{sublem: horo}. Hence $\mathcal{H}$ must be space-like, which gives \ref{suppspace}. The fact that all elements of $\Gamma x$ belong to ${\mathbb H}_t^d$ implies that $K$ is in the future side of its support planes, hence \ref{future}. \end{proof} \begin{sublemma}\label{sublem: horo} Let $\Gamma$ be a group of isometries acting cocompactly on the hyperbolic space ${\mathbb H}^d$. For any $x\in{\mathbb H}^d$, the orbit $\Gamma x$ meets the interior of any horoball. \end{sublemma} \begin{proof} As the action of $\Gamma$ on $\mathbb{H}^d$ is cocompact, it is well known that the orbit $\Gamma x$ is discrete and that the Dirichlet regions for $\Gamma x$ \begin{equation}\label{eq:dirichelt} D_a(\Gamma)=\{p\in\mathbb{H}^d\vert d(a,p)\leq d(\gamma a,p), \forall \gamma\in\Gamma\setminus\{Id\} \}, a\in\Gamma x\end{equation} where $d$ is the hyperbolic distance, are bounded \cite{Rat06}.
The sublemma is a characteristic property of discrete sets with bounded Dirichlet regions \cite[Lemma~3]{CDM97}. \end{proof} \begin{lemma}\label{lem:conitude} Let $K$ be a $\Gamma$-convex body and $x\in K$. For any $\lambda \geq 1$, $\lambda x\in K$. \end{lemma} \begin{proof} From the definition of $K$, it is not hard to see that it has nonempty interior. And as $K$ is closed, if the lemma were false, there would exist a point on the boundary of $K$ and a support plane at this point with $x$ in its past, which is impossible because of Lemma~\ref{lem: future convex}. \end{proof} Let us recall the following elementary results, see e.g.~\cite[3.1.1,3.1.2]{Rat06}. \begin{sublemma}\label{lem:elementary} \begin{enumerate}[nolistsep,label={\bf(\roman{*})}, ref={\bf(\roman{*})}] \item If $x$ and $y$ are nonzero non space-like vectors in $\mathbb{R}^{d+1}$, both past or future, then $\langle x,y\rangle_-\leq 0$ with equality if and only if $x$ and $y$ are linearly dependent light-like vectors. \label{elem1} \item If $x$ and $y$ are nonzero non space-like vectors in $\mathbb{R}^{d+1}$, both past (resp. future), then the vector $x + y$ is past (resp. future) non space-like. Moreover $x+y$ is light-like if and only if $x$ and $y$ are linearly dependent light-like vectors. \label{elem2} \end{enumerate} \end{sublemma} A future time-like vector $\eta$ orthogonal to a support plane at $x$ of a future convex set $A$ is called an \emph{inward normal} of $A$ at $x$. This means that $\forall y\in A$, $y-x$ and $\eta$ are two future time-like vectors at the point $x$; hence, by Sublemma~\ref{lem:elementary}, $$ \forall y \in A, \langle \eta,y-x\rangle_- \leq 0, \mbox{i.e.~}\langle \eta,y\rangle_- \leq \langle \eta,x \rangle_- $$ or equivalently the sup over all $y\in A$ of $\langle \eta,y \rangle_-$ is attained at $x$. Notice that the set $$\{y\in\mathbb{R}^{d+1}\vert \langle y,\eta\rangle_-=\langle x,\eta\rangle_-\} $$ is the support hyperplane of $A$ at $x$ with inward normal $\eta$.
\begin{lemma}\label{lem: tout vect normal} Let $K$ be a $\Gamma$-convex body. For any future time-like vector $\eta$, $\mbox{sup}\{\langle x,\eta\rangle_- \vert x\in K\}$ exists, is attained at a point of $K$ and is negative. In particular any future time-like vector $\eta$ is an inward normal of $K$. A future time-like vector $\eta$ is the inward normal of a single support hyperplane of $K$. \end{lemma} \begin{proof} From \ref{elem1} of Sublemma~\ref{lem:elementary}, $\{\langle x,\eta\rangle_- \vert x\in K\}$ is bounded from above by zero, hence the sup exists. The sup is a negative number, as a sufficiently small translation of the vector hyperplane $\mathcal{H}$ orthogonal to $\eta$ in the direction of ${\mathcal F}$ does not meet $K$. This follows from the separation theorem \cite[1.3.4]{Sch93}, because the origin is the only common point between $\mathcal{H}$ and the boundary of ${\mathcal F}$. As $K$ is closed, the sup is attained when the parallel displacement of $\mathcal{H}$ first meets $K$. Suppose that two different support hyperplanes of $K$ have the same inward normal. Then one would be contained in the past of the other, which is impossible. \end{proof} \subsection{Support functions} Let $K$ be a $\Gamma$-convex body. The \emph{extended support function} $H$ of $K$ is \begin{equation}\label{def:sup func} \forall \eta\in{\mathcal F}, H(\eta)=\mbox{sup}\{\langle x,\eta\rangle_- \vert x\in K\}. \end{equation} We know from Lemma~\ref{lem: tout vect normal} that it is a negative function on ${\mathcal F}$. As an example, the extended support function of $B_t^d$ is equal to $-t\sqrt{-\langle \eta,\eta\rangle_-}$.
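The formula for $B_t^d$ can be sanity-checked numerically for $d=1$ (our addition, for illustration; a brute-force grid over the boundary hyperbola $t(\sinh s,\cosh s)$ stands in for the exact supremum, which is attained on the boundary since deeper points of the body give smaller values of $\langle x,\eta\rangle_-$):

```python
import math

def minko(x, y):
    # <x,y>_- in dimension d+1 = 2
    return x[0] * y[0] - x[1] * y[1]

def support_Bt(eta, t, n=200000, smax=20.0):
    # brute-force sup of <x, eta>_- over x = t*(sinh s, cosh s), s in [-smax, smax]
    best = -float("inf")
    for k in range(n + 1):
        s = -smax + 2 * smax * k / n
        x = (t * math.sinh(s), t * math.cosh(s))
        best = max(best, minko(x, eta))
    return best

t, eta = 1.5, (0.3, 1.0)                       # eta future time-like
closed_form = -t * math.sqrt(-minko(eta, eta)) # -t * sqrt(-<eta,eta>_-)
assert abs(support_Bt(eta, t) - closed_form) < 1e-6
```

The supremum is attained at the point of the hyperbola where $\tanh s$ equals the slope of $\eta$, in agreement with Lemma~\ref{lem: tout vect normal}.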
\begin{definition} A function $f:A\rightarrow \mathbb{R}$ on a convex subset $A$ of ${\mathbb R}^{d+1}$ is \emph{sublinear} (on $A$) if it is \emph{positively homogeneous of degree one}: \begin{equation}\label{def:pos hom} \forall \eta\in A, f(\lambda \eta)=\lambda f(\eta)\, \forall \lambda > 0, \end{equation} and \emph{subadditive}: \begin{equation} \forall \eta,\mu\in A, f(\eta+\mu)\leq f(\eta)+f(\mu). \end{equation} \end{definition} A sublinear function is convex; in particular it is continuous (by assumption it takes only finite values on $A$). (It is useful to note that, for a positively homogeneous of degree one function, convexity and subadditivity are equivalent.) It is straightforward from the definition that an extended support function is sublinear and $\Gamma$-invariant. It is useful to extend the definition of the extended support function to the whole space. The \emph{total support function} of a $\Gamma$-convex body $K$ is \begin{equation}\label{eq:ext supp fct} \forall \eta\in{\mathbb R}^{d+1}, \tilde{H}(\eta)=\mbox{sup}\{\langle x,\eta\rangle_- \vert x\in K\}. \end{equation} We will consider the total support function for any convex subset of ${\mathbb R}^{d+1}$. The infinite value is allowed. We have the following important property, see \cite[Theorem 2.2.8]{Hor07}. \begin{proposition}\label{prop:hormander} Let $f$ be a lower semi-continuous, convex and positively homogeneous of degree one function on ${\mathbb R}^{d+1}$ (the infinite value is allowed). The set $$ F=\{x\in{\mathbb R}^{d+1}\vert \langle x,\eta\rangle_- \leq f(\eta) \,\forall \eta\in{\mathbb R}^{d+1} \} $$ is a closed convex set with total support function $f$. \end{proposition} From the definition we get: \begin{lemma}\label{lem:point} A convex subset of ${\mathbb R}^{d+1}$ is a point if and only if its total support function is a linear form. (If the point is $p$, the linear form is $ \langle \cdot, p\rangle_-$.)
In particular, the total support function of a Fuchsian convex body is never a linear form. \end{lemma} The relation between the extended support function and the total support function is as follows. \begin{lemma} The total support function $\tilde{H}$ of a $\Gamma$-convex body with extended support function $H$ is equal to: \begin{itemize}[nolistsep] \item $H$ on ${\mathcal F}$, \item $0$ on the future light-like vectors and at $0$, \item $+\infty$ elsewhere. \end{itemize} Moreover $\tilde{H}$ is a $\Gamma$-invariant sublinear function. \end{lemma} \begin{proof} We have the following cases. \begin{itemize}[nolistsep] \item If $\eta$ is future time-like then $\tilde{H}(\eta)=H(\eta)$. \item If $\eta$ is past time-like or past light-like, then by \ref{elem1} of Sublemma~\ref{lem:elementary}, for $x\in K$, $\langle x,\eta\rangle_->0$, and by Lemma~\ref{lem:conitude}, $\tilde{H}(\eta)=+\infty$. \item If $\eta$ is space-like, as seen in the proof of \ref{suppspace} of Lemma~\ref{lem: future convex}, there exist points of $K$ on both sides of the orthogonal (for $\langle \cdot,\cdot\rangle_-$) of $\eta$. Hence there exists $x\in K$ with $\langle x,\eta\rangle_->0$, and by the preceding argument, $\tilde{H}(\eta)=+\infty$. \item If $\eta$ is future light-like, then $\tilde{H}(\eta)=0$: as $\tilde{H}$ is lower semi-continuous (being a supremum of a family of continuous functions) and as $\tilde{H}=+\infty$ outside of the future cone, this follows from Sublemma~\ref{sub: cl}. \item By definition, $\tilde{H}(0)=0$. \end{itemize} That $\tilde{H}$ is a $\Gamma$-invariant sublinear function follows easily. \end{proof} \begin{sublemma}\label{sub: cl} Let $H$ be a sublinear function on ${\mathcal F}$ with finite values. Let us extend it as a convex function on ${\mathbb R}^{d+1}$ by giving it the value $+\infty$ outside ${\mathcal F}$. Let $\tilde{H}$ be the lower semi-continuous hull of $H$: $\tilde{H}(x)=\underset{y\rightarrow x}{\mathrm{lim\,inf}}\,H(y)$.
If $H$ is invariant under the action of $\Gamma$, then $H$ is negative or $H\equiv 0$ on ${\mathcal F}$, and $\tilde{H}=0$ on $\partial {\mathcal F}$. \end{sublemma} Note that $H\equiv 0$ is the support function of (the closure of) ${\mathcal F}$. \begin{proof} Let $\ell$ be a future light-like vector. As $\Gamma$ acts cocompactly on ${\mathbb H}^d$, there exists a sequence of $\gamma_k\in\Gamma$ such that for any future time-like ray $r$, the sequence $\gamma_k r$ converges to the ray containing $\ell$ \cite[Example 2, 12.2]{Rat06}. Fixing a future time-like vector $\eta$, we consider the sequence $\gamma_k \eta$. We have $\tilde{H}(\gamma_k \eta)=\tilde{H}(\eta)$. From this sequence we take a sequence of vectors $\eta'_k$ which all have the same $(d+1)$th coordinate $(\eta'_k)_{d+1}$ as $\ell$, say $\ell_{d+1}$ (hence $\eta'_k\rightarrow \ell$). We have $\eta'_k=\ell_{d+1}/(\gamma_k \eta)_{d+1} \gamma_k \eta$, and by homogeneity $\tilde{H}(\eta'_k)=\tilde{H}(\ell_{d+1}\eta)/(\gamma_k \eta)_{d+1} $, which goes to $0$ as $k$ goes to infinity ($(\gamma_k \eta)_{d+1}$ goes to infinity). This proves that $\tilde{H}=0$ on $\partial {\mathcal F}$, as for any $\ell\in\partial{\mathcal F}$ and any $\eta\in{\mathcal F}$, $\tilde{H}(\ell)=\underset{t\downarrow 0}{\mathrm{lim}}\, \tilde{H}(\ell+t(\eta-\ell)) $ (see for example Theorem~7.5 in \cite{Roc97}). In the same way we get that $\tilde{H}(0)=0$. As $\tilde{H}$ is convex and equal to $0$ on $\partial {\mathcal F}$, it is non-positive on ${\mathcal F}$. Suppose that there exists $x\in{\mathcal F}$ with $\tilde{H}(x)=0$, and let $y\in{\mathcal F}\setminus\{x\}$. By homogeneity, $\tilde{H}( \mu x)=0$ for all $\mu>0$. Choosing an appropriate $\mu$ and replacing $x$ by $\mu x$, we can suppose that the line joining $x$ and $y$ meets $\partial {\mathcal F}$ in two points. Let $\ell$ be the one such that there exists $\lambda\in]0,1[$ with $x=\lambda \ell+(1-\lambda) y$.
By convexity and because $\tilde{H}(x)=\tilde{H}(\ell)=0$, we get $0\leq \tilde{H}(y)$, hence $\tilde{H}(y)=0$. \end{proof} \begin{lemma}\label{lem: determination supp fct} Let $H$ be a negative sublinear $\Gamma$-invariant function on ${\mathcal F}$. The set $$ K=\{x\in\mathcal{F}\vert \langle x,\eta\rangle_- \leq H(\eta) \,\forall \eta\in\mathcal{F} \} $$ is a $\Gamma$-convex body with extended support function $H$. \end{lemma} \begin{proof} Let $\tilde{H}$ be as in Sublemma~\ref{sub: cl}. From Proposition~\ref{prop:hormander}, the set $$ \tilde{K}=\{x\in{\mathbb R}^{d+1}\vert \langle x,\eta\rangle_- \leq \tilde{H}(\eta) \,\forall \eta\in{\mathbb R}^{d+1} \} $$ is a closed convex set, with total support function $\tilde{H}$. Let us see that $\tilde{K}=K$. As $\tilde{H}(\eta)=+\infty$ outside the closure $\overline{{\mathcal F}}$ of the future cone, we have $$\tilde{K}=\{x\in{\mathbb R}^{d+1}\vert \langle x,\eta\rangle_- \leq \tilde{H}(\eta) \,\forall \eta\in\overline{{\mathcal F}} \}. $$ For $\eta\in{\mathcal F}$ we have $\tilde{H}(\eta)\leq 0$; it follows that $\tilde{K}$ is contained in $\overline{{\mathcal F}}$: $$\tilde{K}=\{x\in\overline{{\mathcal F}}\vert \langle x,\eta\rangle_- \leq \tilde{H}(\eta) \,\forall \eta\in\overline{{\mathcal F}} \}. $$ As $H$ is $\Gamma$-invariant, $\tilde{H}$ and $\tilde{K}$ are $\Gamma$-invariant too. For $x\in \tilde{K}\cap \partial {\mathcal F}$, the origin is an accumulation point of $\Gamma x$ by Sublemma~\ref{sub:horo}. So for any $\eta\in{\mathcal F}$, $\tilde{H}(\eta)$, which is the sup of $\langle x,\eta\rangle_-$ for $x\in \tilde{K}$, would be zero, which is false. Hence $$\tilde{K}=\{x\in {\mathcal F}\vert \langle x,\eta\rangle_- \leq \tilde{H}(\eta) \,\forall \eta\in\overline{{\mathcal F}} \}$$ and as $\tilde{H}(\eta)=0$ on $\partial {\mathcal F}$ we get $$\tilde{K}=\{x\in {\mathcal F}\vert \langle x,\eta\rangle_- \leq \tilde{H}(\eta) \,\forall \eta\in {\mathcal F} \}=K.$$ The remainder is easy.
\end{proof} \begin{sublemma}\label{sub:horo} Let $\Gamma$ be a Fuchsian group and let $x$ be a future light-like vector. Then the origin is an accumulation point of $\Gamma x$. \end{sublemma} \begin{proof} Suppose it is false. As $\Gamma$ acts cocompactly on ${\mathbb H}^d$, there exists a horizontal space-like hyperplane $S$ such that a fundamental domain on ${\mathbb H}^{d}$ for the action of $\Gamma$ lies below $S$. If the origin is not an accumulation point, then there exists $\lambda >0$ such that the horoball $$\{y\in{\mathbb H}^d | -1\leq \langle \lambda x, y \rangle_- <0 \} $$ and its images under the action of $\Gamma$ remain above $S$. This contradicts the definition of a fundamental domain. \end{proof} The \emph{polar dual} $K^*$ of a $\Gamma$-convex body $K$ with extended support function $H$ is $$K^*=\{x\in {\mathcal F} | H(x) \leq -1\}. $$ For example, $(B_t^d)^*=B^d_{1/t}$. It is not hard to see that $K^*$ is a $\Gamma$-convex body, and that $K^{**}=K$ (see the convex bodies case \cite[1.6.1]{Sch93}). Moreover the points of the boundary of $K^*$ are the $\frac{-1}{H(\eta)}\eta$ for $\eta\in{\mathbb H}^d$. The inverse of this map is the projection $ f(x)=\frac{x}{\sqrt{-\langle x,x\rangle_-}}$. Hence, exchanging the roles of $K$ and $K^*$, we get that the projection of a Fuchsian convex body along rays from the origin is a homeomorphism between $\partial K$ and ${\mathbb H}^d$. \subsection{Minkowski sum and covolume} The \emph{(Minkowski) addition} of two sets $A,B\subset {\mathbb R}^{d+1}$ is defined as $$A+B:=\{a+b | a\in A, b\in B\}. $$ It is well-known that the addition of two convex sets is a convex set. Moreover the sum of two future time-like vectors is a future time-like vector; in particular it is never zero. So the sum of two $\Gamma$-convex bodies is contained in ${\mathcal F}$ and closed \cite[3.12]{RW98}.
As a Fuchsian group $\Gamma$ acts by linear isometries, the sum of two $\Gamma$-convex bodies is a $\Gamma$-convex body, and the space $\mathcal{K}(\Gamma)$ of $\Gamma$-convex bodies is invariant under addition. Note also that $\mathcal{K}(\Gamma)$ is invariant under multiplication by positive scalars. It is straightforward to check that extended support functions behave well under these operations: $$H_{K+L}=H_K+H_L, \, K,L\in \mathcal{K}(\Gamma), $$ $$H_{\lambda K}=\lambda H_K, \,\lambda> 0, K\in\mathcal{K}(\Gamma). $$ Note also that from the definition of the extended support function, $$K\subset L \Leftrightarrow H_K \leq H_L.$$ Identifying $\Gamma$-convex bodies with their support functions, $\mathcal{K}(\Gamma)$ is a cone in the vector space of homogeneous of degree $1$, continuous, real, $\Gamma$-invariant functions on $\mathcal{F}$. By homogeneity this corresponds to a cone in the vector space of continuous real $\Gamma$-invariant functions on $\mathbb{H}^d\subset \mathcal{F}$, and to a cone in the vector space of continuous real functions on the compact hyperbolic manifold $\mathbb{H}^d/\Gamma$. A function in one of these last two cones is called a \emph{support function}. Let $K\in\mathcal{K}(\Gamma)$. Its \emph{covolume} $\mathrm{covol}(K)$ is the volume of $(\mathcal{F}\setminus K)/\Gamma$ (for the Lebesgue measure of ${\mathbb R}^{d+1}$). It is a finite positive number and \begin{equation*}\mathrm{covol}(\lambda K)=\lambda^{d+1}\mathrm{covol}(K).\end{equation*} Note that $$K\subset L \Rightarrow \mathrm{covol}(K)\geq \mathrm{covol}(L). $$ As defined above, the covolume of a $\Gamma$-convex body $K$ is the volume of a compact set of ${\mathbb R}^{d+1}$, namely the volume of the intersection of ${\mathcal F}\setminus K$ with a fundamental cone for the action of $\Gamma$. For such compact (non-convex) sets there is a Brunn--Minkowski theory, see for example \cite{Gar02}. See also \cite{BE99}. But this does not give results about the covolume of $\Gamma$-convex bodies.
The reason is that, for two $\Gamma$-convex bodies $K_1$ and $K_2$, ${\mathcal F}\setminus (K_1+K_2)$ (from which we define the covolume of $K_1+K_2$) is not equal to $({\mathcal F}\setminus K_1)+({\mathcal F} \setminus K_2)$. For example in $d=1$, $\left(-5/8 \atop 9/8 \right) +\left(5/8\atop 9/8 \right)=\left(0 \atop 9/4 \right) \in ({\mathcal F}\setminus B) + ({\mathcal F}\setminus B)$ but does not belong to ${\mathcal F}\setminus(B+B)$, as $B+B=B^1_2$ (extended support functions are additive). \section{$C^2_+$ case}\label{sec:reg} The first subsection is an adaptation of the classical case \cite{Sch93}. The remainder is the analog of \cite{Ale38} (in \cite{Ale96}). See also \cite{BF87}, \cite{Lei93}, \cite{Hor07}, \cite{Bus08}, and \cite{GMT10} for a kind of extension. The objects and results in this section which can be defined intrinsically on a hyperbolic manifold are already known in more generality, see \cite{OS83} and the references therein. See also Subsection~\ref{sub:mink reg}. \subsection{Regularity of the support function}\label{sub: reg sup} \paragraph{Differentiability} Let $K$ be a $\Gamma$-convex body with extended support function $H$, and let $\eta\in {\mathcal F}$. From Lemma~\ref{lem: tout vect normal} there exists a unique support hyperplane $\mathcal{H}$ of $K$ with inward normal $\eta$. \begin{lemma} The intersection $F$ of $\mathcal{H}$ and $K$ is reduced to a single point $p$ if and only if $H$ is differentiable at $\eta\in{\mathcal F}$. In this case $p=\nabla_{\eta}H$ (the gradient for $\langle \cdot,\cdot\rangle_-$ of $H$ at $\eta$). \end{lemma} \begin{proof} As $H$ is convex, all one-sided directional derivatives exist \cite[p.~25]{Sch93}. Let us denote such a derivative in the direction of $u\in{\mathbb R}^{d+1}$ at the point $\eta$ by $d_{\eta}H(u)$.
The proof of the lemma is based on the following fact: \emph{The function $\mathbb{R}^{d+1} \ni u\mapsto d_{\eta}H(u)$ is the total support function of $F$.} Indeed, if $H$ is differentiable at $\eta$, the fact says that the total support function of $F$ is a linear form, and from Lemma~\ref{lem:point}, $F$ is a point. Conversely, if $F$ is a point $p$, from Lemma~\ref{lem:point} its total support function is a linear form, hence partial derivatives of $H$ exist and as $H$ is convex, this implies differentiability \cite[1.5.6]{Sch93}. Moreover for all $u\in {\mathbb R}^{d+1}$, $\langle p,u \rangle_-=d_{\eta} H(u)$. Now we prove the fact. The function $d_{\eta}H$ is sublinear on ${\mathbb R}^{d+1}$ \cite[1.5.4]{Sch93}, Proposition~\ref{prop:hormander} applies and $d_{\eta}H$ is the total support function of $$F'=\{x\in{\mathbb R}^{d+1}\vert \langle x,u\rangle_- \leq d_{\eta}H(u) \,\forall u\in{\mathbb R}^{d+1} \}.$$ We have to prove that $F'=F$. Let $\tilde{H}$ be the extension of $H$ to ${\mathbb R}^{d+1}$. By definition of directional derivative, the sublinearity of $\tilde{H}$ gives $d_{\eta}H\leq \tilde{H}$. From the proof of Lemma~\ref{lem: determination supp fct}, this implies that $F'\subset K$. In particular, for $y\in F'$, $\langle y,\eta \rangle_-\leq H(\eta)$. On the other hand $y\in F'$ implies $\langle y,-\eta \rangle_-\leq d_{\eta}H(-\eta)=-H(\eta)$ (the last equality follows from the definition of directional derivative, using the homogeneity of $H$). Then $\langle y,\eta \rangle_-=H(\eta)$ so $y\in\mathcal{H}$, hence $F'\subset F=\mathcal{H}\cap K$. Let $y \in F$. By definition $\langle y, \eta\rangle_- = H(\eta)$ and for any $w\in {\mathcal F}$, $\langle y,w \rangle_- \leq H(w)$. 
For sufficiently small positive $\lambda$ and any $u\in{\mathbb R}^{d+1}$, $w=\eta+\lambda u$ is future time-like and $$\langle y,u\rangle_- \leq \frac{H(\eta+\lambda u)-H(\eta)}{\lambda}$$ so when $\lambda\rightarrow 0$ we have $\langle y,u\rangle_-\leq d_{\eta}H(u)$, hence $F\subset F'$. The fact is proved. \end{proof} If the extended support function $H$ of a $\Gamma$-convex body $K$ is differentiable, the above lemma allows us to define the map $$\tilde{G}(\eta)=\nabla_{\eta}H$$ from ${\mathcal F}$ to $\partial K\subset\mathbb{R}^{d+1}$. This can be expressed in terms of $h$, the restriction of $H$ to $\mathbb{H}^d$. We use ``hyperbolic coordinates'' on ${\mathcal F}$: an orthonormal frame on ${\mathbb H}^d$ is extended to an orthonormal frame of ${\mathcal F}$ using the decomposition $r^2g_{{\mathbb H}^d}-\mathrm{d} r^2$ of the metric on ${\mathcal F}$. $\nabla_{\eta}H$ has $d+1$ entries, and, at $\eta\in{\mathbb H}^d$, the first $d$ of them are the coordinates of $\nabla_{\eta} h$ (here $\nabla$ is the gradient on ${\mathbb H}^d$). We identify $\nabla_{\eta} h\in T_{\eta}{\mathbb H}^d\subset \mathbb{R}^{d+1}$ with a vector of $\mathbb{R}^{d+1}$. The last component of $\nabla_{\eta}H$ is $-\partial H /\partial r(\eta)$, and, using the homogeneity of $H$, it is equal to $-h(\eta)$ when $\eta\in \mathbb{H}^d$. Note that at such a point, $T_{\eta}{\mathcal F}$ is the direct sum of $T_{\eta}\mathbb{H}^d$ and ${\mathbb R}\eta$. It follows that, for $\eta\in\mathbb{H}^d$, \begin{equation}\label{eq:nablanabla}\nabla_{\eta}H=\nabla_{\eta}h-h(\eta)\eta. \end{equation} This has a clear geometric interpretation, see Figure~\ref{fig:nabla}. \begin{figure} \centering \input nabla.pdf_t \caption{Recovering the convex body from its support function in Minkowski space.
\label{fig:nabla}} \end{figure} \paragraph{$C^2$ support function} If the extended support function $H$ is $C^2$, then $\tilde{G}$ is $C^1$, and its differential $\tilde{W}$ satisfies $$\langle \tilde{W}_{\eta}(X),Y\rangle_-=D^2_{\eta} H(X,Y).$$ We denote by $G$ the restriction of $\tilde{G}$ to ${\mathbb H}^d$ and by $W$ its differential (the \emph{reversed shape operator}). If $T_{\nu}$ is the hyperplane of ${\mathbb R}^{d+1}$ orthogonal to $\nu\in{\mathbb H}^d$ for $\langle \cdot,\cdot\rangle_-$, $W$ is considered as a map from $T_{\nu}$ to $T_{\nu}$. We get from \eqref{eq:nablanabla}, or from the equation above, the Gauss formula and the $1$-homogeneity of $H$, using again hyperbolic coordinates on ${\mathcal F}$: \begin{equation}\label{eq: hess h} W_{ij} = (\nabla^2 h)_{ij}- h \delta_{ij},\end{equation} with $\nabla^2$ the second covariant derivative (the Hessian) on ${\mathbb H}^d$, $\delta_{ij}$ the Kronecker symbol and $h$ the restriction of $H$ to ${\mathbb H}^d$. In particular $W$ is symmetric, and its real eigenvalues $r_1,\ldots,r_d$ are the \emph{radii of curvature} of $K$. Taking the trace of both sides of the equation above leads to \begin{equation}\label{eq:laplacian} r_1+\cdots+r_d=\Delta_{\mathbb{H}^d}h-dh \end{equation} where $\Delta_{\mathbb{H}^d}$ is the Laplacian on the hyperbolic space. It is easy to check that, for $\gamma\in\Gamma$, $\nabla_{\gamma \eta}H=\gamma \nabla_{\eta}H$ and $D^2_{\gamma\eta} H=D^2_{\eta} H$. In particular the objects introduced above can be defined on ${\mathbb H}^d/\Gamma$. \paragraph{$C^2_+$ body} Let $K$ be a $\Gamma$-convex body. The \emph{Gauss map} $N$ is a multivalued map which associates to each $x$ in the boundary of $K$ the set of unit inward normals of $K$ at $x$, considered as elements of $\mathbb{H}^d$. If the boundary of $K$ is a $C^2$ hypersurface and if the Gauss map is a $C^1$-homeomorphism from the boundary of $K$ to $\mathbb{H}^d$, then $K$ is said to be \emph{$C^2_+$}.
In this case we can define the \emph{shape operator} $B=\nabla N$, which is a self-adjoint operator. Its eigenvalues are the \emph{principal curvatures} $\kappa_i$ of $K$, and they are never zero as $B$ has maximal rank by assumption. As $K$ is convex, it is well-known that its principal curvatures are non-negative, hence they are positive. (This implies that $K$ is actually strictly convex.) \begin{lemma}\label{lem: supp of regular} Under the identification of a $\Gamma$-convex body with its support function, the set of $C^2_+$ $\Gamma$-convex bodies is $C^2_+(\Gamma)$, the set of negative $C^2$ functions $h$ on $M=\mathbb{H}^d/\Gamma$ such that \begin{equation}\label{eq:hess supp} ((\nabla^2 h)_{ij}- h \delta_{ij} )>0 \end{equation} (positive definite) for any orthonormal frame on $M$. \end{lemma} It follows that in the $C^2_+$ case $G=N^{-1}, W=B^{-1}, \mbox{ and } \displaystyle r_i=\frac{1}{\kappa_i\circ N^{-1}}.$ \begin{proof} Let $K$ be a $C^2_+$ $\Gamma$-convex body, $h$ its support function and $H$ its extended support function ($h$ is the restriction of $H$ to ${\mathbb H}^d$). For any $\eta\in\mathbb{H}^d$ we have \begin{equation}\label{eq: supp et GM} h(\eta)=\langle N^{-1}(\eta),\eta\rangle_-, \end{equation} and for $\eta\in{\mathcal F}$, introducing the $0$-homogeneous extension $\tilde{N}^{-1}$ of $N^{-1}$ we obtain $$D_{\eta}H(X)=\langle \tilde{N}^{-1}(\eta),X\rangle_-+\langle D_{\eta}\tilde{N}^{-1}(X),\eta\rangle_-, $$ but $D_{\eta}\tilde{N}^{-1}(X)$ is parallel to the support hyperplane of $K$ with inward normal $\eta$, so $D_{\eta}H(X)=\langle \tilde{N}^{-1}(\eta),X\rangle_-. $ Hence $ D^2_{\eta}H(X,Y)=\langle B^{-1}(X),Y\rangle_-, $ in particular $H$ is $C^2$, so $h$ is $C^2$ and \eqref{eq:hess supp} holds. As $h$ is $\Gamma$-invariant, we get a function in $C^2_+(\Gamma)$. Now let $h\in C^2_+(\Gamma)$.
We also denote by $h$ the $\Gamma$-invariant map on ${\mathbb H}^d$ which projects onto $h$, and by $H$ the $1$-homogeneous extension of $h$ to ${\mathcal F}$. The $1$-homogeneity and \eqref{eq:hess supp} imply that $H$ is convex (in the hyperbolic coordinates, the row and the column of the Hessian of $H$ corresponding to the radial direction $r$ are zero), hence $H$ is negative, sublinear and $\Gamma$-invariant, so it is the support function of a $\Gamma$-convex body $K$ by Lemma~\ref{lem: determination supp fct}. As $h$ is $C^2$, we get a map $G$ from ${\mathbb H}^d$ to $\partial K\subset {\mathbb R}^{d+1}$ which is $C^1$, and regular by \eqref{eq: hess h} and \eqref{eq:hess supp}. Moreover $G$ is surjective by Lemma~\ref{lem: tout vect normal}. It follows that $\partial K$ is $C^1$. This implies that each point of $\partial K$ has a unique support plane \cite[p.~104]{Sch93}, i.e.~that the map $G$ is injective. Finally it is a $C^1$ homeomorphism. Let $K^*$ be the polar dual of $K$. We know that the boundary of $K^*$ is a graph above ${\mathbb H}^d$, as its points have the form $\eta/(-h(\eta))$ for $\eta\in{\mathbb H}^d$. Hence $\partial K^*$ is $C^2$ as $h$ is. Moreover the Gauss map image of the point $\eta/(-h(\eta))$ of $\partial K^*$ is $G(\eta)/\sqrt{-\langle G(\eta),G(\eta)\rangle_-}$: the Gauss map of $K^*$ is a $C^1$ homeomorphism. It follows that $ K^*$ is $C^2_+$. In particular its support function is $C^2$. Repeating the argument, it follows that the boundary of $K^{**}=K$ is $C^2$. \end{proof} To simplify matters in what follows, we will restrict ourselves to smooth ($C^{\infty}$) support functions, although this restriction will be relevant only in Subsection~\ref{sub:mixed reg}. We denote by $C^{\infty}_+(\Gamma)$ the subset of smooth elements of $C^2_+(\Gamma)$. It corresponds to $C^{\infty}_+$ $\Gamma$-convex bodies, i.e.~$\Gamma$-convex bodies with smooth boundary whose Gauss map is a $C^1$ diffeomorphism (hence smooth).
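For $d=1$ the content of Lemma~\ref{lem: supp of regular} and of formulas \eqref{eq:nablanabla} and \eqref{eq: hess h} can be checked numerically. With $\eta(s)=(\sinh s,\cosh s)$, a support function $h(s)$ gives the boundary point $G(s)=h'(s)\eta'(s)-h(s)\eta(s)$, and differentiating (using $\eta''=\eta$) yields $G'(s)=(h''(s)-h(s))\eta'(s)$, so the radius of curvature is $r=h''-h$. The following Python sketch (our addition; the sample function $h$ is a hypothetical choice, $2\pi$-periodic, i.e.~invariant under a translation of length $2\pi$) verifies this by finite differences:

```python
import math

def eta(s):  return (math.sinh(s), math.cosh(s))     # point of H^1
def etap(s): return (math.cosh(s), math.sinh(s))     # unit space-like tangent

# sample support function h on H^1 (negative, with h'' - h > 0)
def h(s):   return -2.0 + 0.3 * math.cos(s)
def hp(s):  return -0.3 * math.sin(s)
def hpp(s): return -0.3 * math.cos(s)

def G(s):
    # boundary point with inward normal eta(s):  G = h'(s) eta'(s) - h(s) eta(s)
    return tuple(hp(s) * tp - h(s) * e for tp, e in zip(etap(s), eta(s)))

# check that dG/ds = (h'' - h) eta'(s): the radius of curvature is r = h'' - h
for s in [0.0, 0.4, 1.1]:
    d = 1e-6
    dG = tuple((a - b) / (2 * d) for a, b in zip(G(s + d), G(s - d)))
    r = hpp(s) - h(s)
    assert all(abs(c - r * tp) < 1e-5 for c, tp in zip(dG, etap(s)))
    assert r > 0   # condition (eq:hess supp) in the case d = 1
```

Here $r = 2 - 0.6\cos s$ stays positive, so this sample $h$ indeed lies in $C^{\infty}_+(\Gamma)$ for $\Gamma$ generated by a translation of length $2\pi$.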
\begin{lemma}\label{lem: supp reg cone} $C^{\infty}_+(\Gamma)$ is a convex cone and $$ C^{\infty}_+(\Gamma) - C^{\infty}_+(\Gamma)=C^{\infty}(\Gamma)$$ (any smooth function on ${\mathbb H}^d/\Gamma$ is the difference of two functions of $C^{\infty}_+(\Gamma)$). \end{lemma} \begin{proof} It is clear that $ C^{\infty}_+(\Gamma)$ is a convex cone. Let $h_1\in C^{\infty}_+(\Gamma)$ and $Z\in C^{\infty}(\Gamma)$. As ${\mathbb H}^d/\Gamma$ is compact, for $t$ sufficiently large, $Z+th_1$ satisfies \eqref{eq:hess supp} and is a negative function, hence there exists $h_2\in C^{\infty}_+(\Gamma)$ such that $Z+th_1=h_2$, that is, $Z=h_2-th_1$ with $h_2,\, th_1\in C^{\infty}_+(\Gamma)$. \end{proof} \subsection{Covolume and Gaussian curvature operator} Let $K$ be a $C^2_+$ $\Gamma$-convex body and let $P(K)$ be ${\mathcal F}$ minus the interior of $K$. As $P(K)/\Gamma$ is compact, the divergence theorem gives $$ \int_{P(K)/\Gamma} \mbox{div}X \mathrm{d} P(K)= -\int_{\partial K/\Gamma} \langle X,\eta\rangle_- \mathrm{d} \partial K,$$ where $\eta$ is the unit outward normal of $\partial K/\Gamma$ in $P(K)/\Gamma$ (hence it corresponds in the universal cover to the unit inward normal of $K$). If $X$ is the position vector in ${\mathcal F}$ we get $$ (d+1)\mathrm{covol}(K)=-\int_{\partial K/\Gamma} h\circ N \mathrm{d} \partial K $$ with $h$ the support function of $K$ and $N$ the Gauss map. The \emph{Gaussian curvature} (or Gauss--Kronecker curvature) $\kappa$ of $K$ is the product of the principal curvatures. We will consider the map $\kappa^{-1}$ which associates to each $h\in C^{\infty}_+(\Gamma)$ the inverse of the Gaussian curvature of the convex body supported by $h$: \begin{equation}\label{eq: def kappa} \kappa^{-1}(h)=\prod_{i=1}^dr_i(h)\stackrel{\eqref{eq: hess h}}{=}\det \left((\nabla^2 h)_{ij}- h \delta_{ij}\right). \end{equation} As the curvature is the Jacobian of the Gauss map, we get $$(d+1)\mathrm{covol}(K)=-\int_{M} h \kappa^{-1}(h) \mathrm{d} M$$ where $\mathrm{d} M$ is the volume form on $M={\mathbb H}^d/\Gamma$.
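As a sanity check of the last formula (our addition, for the case $d=1$): for $K=B^1_t$ one has $h\equiv -t$ and $\kappa^{-1}(h)=t$, so the formula gives $\mathrm{covol}(B^1_t)=\ell t^2/2$, where $\ell$ denotes the length of ${\mathbb H}^1/\Gamma$. The same value is obtained directly, since in the coordinates $x=r\sinh s$, $y=r\cosh s$ the Lebesgue area element of the Minkowski plane is $r\,\mathrm{d} r\,\mathrm{d} s$. A Python sketch (the value of $\ell$ is an arbitrary choice):

```python
ell, t = 2.0, 1.5    # length of H^1/Gamma (sample value) and the "radius" of B_t

# covolume via the support-function formula:
# 2 covol = -∫_M h κ^{-1}(h) dM, with h ≡ -t and κ^{-1}(h) = h'' - h = t
covol_formula = -0.5 * ell * (-t) * t

# covolume directly: Lebesgue area of (F \ B_t)/Γ in the coordinates (r, s),
# where the area element is r dr ds, integrated by a midpoint Riemann sum
n = 100000
covol_direct = sum((t * (k + 0.5) / n) * (t / n) * ell for k in range(n))

assert abs(covol_formula - ell * t * t / 2) < 1e-12
assert abs(covol_direct - covol_formula) < 1e-6
```

Both computations give $\ell t^2/2$, in accordance with the homogeneity relation $\mathrm{covol}(\lambda K)=\lambda^{d+1}\mathrm{covol}(K)$ for $d=1$.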
Finally let us consider the covolume as a functional on $C^{\infty}_+(\Gamma)$, whose extension to the whole of $C^{\infty}(\Gamma)$ is immediate: \begin{equation}\label{eq: vol reg}\mathrm{covol}(X)=-\frac{1}{d+1}\lgroup X,\kappa^{-1}(X) \rgroup, X\in C^{\infty}(\Gamma)\end{equation} with $\lgroup\cdot,\cdot \rgroup$ the scalar product on $L^2(M)$. We will consider $C^{\infty}(\Gamma)$ as a Fr\'echet space with the usual seminorms $$\|f\|_n=\sum_{i=1}^n \sup_{x\in M} |\nabla^i f(x) |, $$ with $\nabla^i$ the $i$-th covariant derivative and $|\cdot|$ the norm, both given by the Riemannian metric of $M$. All derivatives will be directional (or G\^ateaux) derivatives in Fr\'echet spaces as in \cite{Ham82}: \begin{equation}\label{eq: der dir} D_{Y}\mathrm{covol}(X)=\lim_{t\rightarrow 0}\frac{\mathrm{covol}(Y+tX)-\mathrm{covol}(Y)}{t}, X,Y\in C^{\infty}(\Gamma). \end{equation} \begin{lemma} The function $\mathrm{covol}$ is $C^{\infty}$ on $C^{\infty}(\Gamma)$, and for $h\in C^{\infty}_+(\Gamma), X,Y\in C^{\infty}(\Gamma)$, we have: \begin{eqnarray} \ D_{h}\mathrm{covol}(X)=-\lgroup X, \kappa^{-1}(h) \rgroup \label{eq: der vol reg}, \\ \ D_{h}^2 \mathrm{covol} (X,Y)=-\lgroup X,D_{h}\kappa^{-1}(Y)\rgroup. \label{eq: der sec vol reg} \end{eqnarray} Moreover \eqref{eq: der vol reg} is equivalent to \begin{equation}\label{eq: k self adj} \lgroup X, D_h\kappa^{-1} (Y)\rgroup= \lgroup Y, D_h\kappa^{-1} (X)\rgroup. \end{equation} \end{lemma} \begin{proof} The second order differential operator $\kappa^{-1}$ is smooth as the determinant is smooth \cite[3.6.6]{Ham82}. Differentiating \eqref{eq: vol reg} we get \begin{equation}\label{eq: der vol reg rough} D_{h}\mathrm{covol}(X)=-\frac{1}{d+1}\left(\lgroup X,\kappa^{-1}(h) \rgroup+\lgroup h,D_h\kappa^{-1}(X) \rgroup\right), \end{equation} but the bilinear form $\lgroup \cdot,\cdot\rgroup$ is continuous for the seminorms $\|\cdot \|_n$ (recall that it suffices to check continuity in each variable \cite[2.17]{Rud91}).
It follows that $\mathrm{covol}$ is $C^1$, and by iteration that it is $C^{\infty}$. If \eqref{eq: der vol reg} is true we get \eqref{eq: der sec vol reg}, and this expression is symmetric as $\mathrm{covol}$ is $C^2$, so \eqref{eq: k self adj} holds. Let us suppose that \eqref{eq: k self adj} is true. From \eqref{eq: def kappa}, $\kappa^{-1}$ is homogeneous of degree $d$, which gives $D_h\kappa^{-1} (h)=d\kappa^{-1}(h)$. Using this in \eqref{eq: k self adj} with $Y=h$ gives $$d\lgroup X, \kappa^{-1} (h)\rgroup= \lgroup h, D_h\kappa^{-1} (X)\rgroup.$$ Inserting this equation in \eqref{eq: der vol reg rough} leads to \eqref{eq: der vol reg}. A proof of \eqref{eq: k self adj} is given in \cite{CY76} (for the case of $C^2$ functions on the sphere). See also \cite{OS83} and the references therein for more generality. We will prove \eqref{eq: der vol reg} following \cite{Hor07}. From the definition of $\kappa^{-1}$, the map $D_h\kappa^{-1}(\cdot)$ is linear, hence from \eqref{eq: der vol reg rough} $D_h\mathrm{covol}(\cdot)$ is also linear, so by Lemma~\ref{lem: supp reg cone} it suffices to prove \eqref{eq: der vol reg} for $X=h'\in C^{\infty}_+(\Gamma)$. We denote by $K$ (resp. $K'$) the $\Gamma$-convex body supported by $h$ (resp. $h'$) and by $N$ (resp. $N'$) its Gauss map. We have, for $\eta\in {\mathcal F}, \varepsilon>0,$ $$ h(\eta)+\varepsilon h'(\eta)=\langle \eta, N^{-1}(\eta)+\varepsilon (N')^{-1}(\eta)\rangle_-$$ i.e.~$h+\varepsilon h'$ supports the hypersurface with position vector $N^{-1}(\eta)+\varepsilon (N')^{-1}(\eta)$. For a compact $U\subset {\mathbb R}^d$, if $f : U\rightarrow{\mathbb R}^{d+1} $ is a local parametrization of $\partial K$, let us introduce $$F : U\times [0,\varepsilon]\rightarrow {\mathbb R}^{d+1}, (y,t)\mapsto f(y)+t (N')^{-1}(N(f(y))).$$ It is a local parametrization of the set between the boundary of $K$ and the boundary of $K+ \varepsilon K'$.
Locally, its covolume (which corresponds to $\mathrm{covol}(h+\varepsilon h')-\mathrm{covol}(h)$) is computed as \begin{equation} \label{eq:loc vol}\int_{F(U\times [0,\varepsilon])} \mathrm{d} \operatorname{vol}=\int_0^{\varepsilon }\int_U | \mbox{Jac} F |\mathrm{d} y \mathrm{d} t. \end{equation} The Jacobian matrix of $F$ is equal to $\left((N')^{-1}(N(f(y))), \frac{\partial f}{\partial y_1},\ldots, \frac{\partial f}{\partial y_d}\right) + t R$ where $R$ is a remaining term, and its determinant is equal to the determinant of $\left((N')^{-1}(N(f(y))), \frac{\partial f}{\partial y_1},\ldots, \frac{\partial f}{\partial y_d}\right)$ plus $t$ times remaining terms. As $(\frac{\partial f}{\partial y_1},\ldots, \frac{\partial f}{\partial y_d})$ form a basis of the tangent hyperplane of $\partial K$, and as $N$ is normal to this hyperplane, the determinant is equal to $\langle (N')^{-1}(N(f(y))), N(f(y))\rangle_-=h'(N(f(y)))$ times $|\mbox{Jac} f|$, plus $t$ times remaining terms. The limit of \eqref{eq:loc vol} divided by $\varepsilon$ when $\varepsilon \rightarrow 0$ gives $$\int_U h'(N(f(y))) |\mbox{Jac} f | \mathrm{d} y=\int_{f(U)} h'(N) \mathrm{d}\partial K.$$ The result follows by decomposing the boundary of $K$ with suitable coordinate patches. \end{proof} The main result of this section is the following. \begin{theorem}\label{thm: vol reg conv} The second derivative of $\mathrm{covol}: C^{\infty}(\Gamma) \rightarrow {\mathbb R}$ is positive definite. In particular the covolume of $C^{\infty}_+$ $\Gamma$-convex bodies is strictly convex. \end{theorem} Let us have a look at the case $d=1$. In this case $\kappa^{-1}=r$, the unique radius of curvature. We parametrize the branch of the hyperbola by $(\sinh t,\cosh t)$, and $h$ becomes a function from ${\mathbb R}$ to ${\mathbb R}_-$.
Then \eqref{eq:laplacian} reads $$\kappa^{-1}(h)(t)=-h(t)+h''(t),$$ and, as $h$ is $\Gamma$-invariant, we can consider $\kappa^{-1}$ as a linear operator on the set of $C^{\infty}$ functions on $[0,\ell]$, if $\ell$ is the length of the circle ${\mathbb H}^1/\Gamma$. Using integration by parts and the fact that $h$ is $\ell$-periodic, we get $$D_h^2\mathrm{covol}(h,h)=-\lgroup h,\kappa^{-1}(h)\rgroup=-\int_0^{\ell} h \kappa^{-1}(h)= \int_0^{\ell}(h^2+h'^2).$$ We will prove a more general version of Theorem~\ref{thm: vol reg conv} in the next section, using the theory of mixed-covolumes. The proof is based on the following particular case. \begin{lemma}\label{lem: vol def pos sphere} Let $h_0$ be the support function of $B^d$ (i.e.~$h_0(\eta)=-1$). Then $D_{h_0}^2 \mathrm{covol}$ is positive definite. \end{lemma} \begin{proof} Let $X\in C^{\infty}(\Gamma)$. From the definition \eqref{eq: def kappa} of $\kappa^{-1}$, $$D_h \kappa^{-1} (X)=\kappa^{-1}(h) \sum_{i=1}^d r_i^{-1}(h)D_hr_i(X)$$ and as $r_i(h_0)=1$, $$D_{h_0} \kappa^{-1} (X)=\sum_{i=1}^d D_{h_0}r_i(X).$$ Differentiating \eqref{eq:laplacian} on both sides at $h_0$ and passing to the quotient, the equation above gives $$D_{h_0} \kappa^{-1} (X)=-dX+\Delta_{M}X,$$ where $\Delta_{M}$ is the Laplacian on $M={\mathbb H}^d/\Gamma$. From \eqref{eq: der sec vol reg}, $$D^2_{h_0}\mathrm{covol}(X,X)=d\lgroup X,X \rgroup-\lgroup\Delta_{M}X,X\rgroup, $$ which is positive by the properties of the Laplacian, as $M={\mathbb H}^d/\Gamma$ is compact. \end{proof} \subsection{Smooth Minkowski Theorem}\label{sub:mink reg} One can ask whether a given positive function $f$ on a compact hyperbolic manifold $M={\mathbb H}^d/\Gamma$ is the Gauss curvature of a $C^{2}_+$ Fuchsian convex body, and whether the latter is unique.
By Lemma~\ref{lem: supp of regular} and the definition of the Gauss curvature, the question reduces to knowing whether there exists a (unique) function $h$ on $M$ such that, in an orthonormal frame on $M$, $$f=\det((\nabla^2h)_{ij}-h\delta_{ij}) $$ and $$((\nabla^2h)_{ij}-h\delta_{ij})>0.$$ This PDE problem is solved in \cite{OS83} in the smooth case. Their main result (Theorem~3.4) can be written as follows. \begin{theorem}\label{thm: reg mink thm} Let $\Gamma$ be a Fuchsian group, $f:{\mathbb H}^d\rightarrow {\mathbb R}_+$ be a positive $C^{\infty}$ $\Gamma$-invariant function. There exists a unique $C^{\infty}_+$ $\Gamma$-convex body with Gauss curvature $f$. \end{theorem} \subsection{Mixed curvature and mixed-covolume}\label{sub:mixed reg} The determinant is a homogeneous polynomial of degree $d$, and we denote by $\det( \cdot,\ldots,\cdot)$ its polar form, that is, the unique symmetric $d$-linear form such that $$\det( A,\ldots,A)=\det(A) $$ for any $d\times d$ symmetric matrix $A$ (see for example Appendix~A in \cite{Hor07}). We will need the following key result. \begin{theorem}[{\cite[p.~125]{Ale96}}]\label{thm:alg lin det} Let $A,A_3,\ldots,A_d$ be positive definite $d\times d$ matrices and $Z$ be a symmetric matrix. Then $$\det (Z,A,A_3,\ldots,A_d)=0\Rightarrow \det(Z,Z,A_3,\ldots,A_d)\leq 0,$$ and equality holds if and only if $Z$ is identically zero. \end{theorem} For any orthonormal frame on $M=\mathbb{H}^d/\Gamma$ and for $X_k\in C^{\infty}(\Gamma)$, let us denote $$X_k'':= (\nabla^2 X_k)_{ij}- X_k \delta_{ij}$$ and let us introduce the \emph{mixed curvature} $$\kappa^{-1}(X_1,\ldots,X_d):=\det(X_1'',\ldots,X_d''). $$ As $\mathrm{covol}(X)=-\frac{1}{d+1}\lgroup X, \kappa^{-1}(X)\rgroup$, $\mathrm{covol}$ is a homogeneous polynomial of degree $d+1$. Its polar form $\mathrm{covol}(\cdot,\ldots,\cdot)$ ($(d+1)$ entries) is the \emph{mixed-covolume}. \begin{lemma}\label{eq: reg mix vol gen} We have the following equalities, for $X_i\in C^{\infty}(\Gamma)$.
\begin{enumerate}[nolistsep,label={\bf(\roman{*})}, ref={\bf(\roman{*})}] \item $D^{d-1}_{X_2}\kappa^{-1}(X_3,\ldots,X_{d+1})=d! \kappa^{-1}(X_2,\ldots,X_{d+1})$,\label{der kappa} \item $D_{X_1}\mathrm{covol} (X_2)=(d+1)\mathrm{covol}(X_2,X_1,\ldots,X_1)$,\label{der mixed reg1} \item $D^2_{X_1}\mathrm{covol} (X_2,X_3)=(d+1)d\mathrm{covol}(X_2,X_3,X_1,\ldots,X_1)$, \label{der mixed reg2} \item $D^{d}_{X_1} \mathrm{covol} (X_2,\ldots,X_{d+1})=(d+1)!\mathrm{covol}(X_1,\ldots,X_{d+1})$,\label{der mixed reg} \item $\mathrm{covol}(X_1,\ldots,X_{d+1})=-\frac{1}{d+1}\lgroup X_1, \kappa^{-1}(X_2,\ldots,X_{d+1})\rgroup$.\label{mv reg} \end{enumerate} \end{lemma} \begin{proof} \ref{der kappa} and \ref{der mixed reg} are proved by induction on the order of the derivative, using the definition of the directional derivative and the expansion of the multilinear forms. \ref{der mixed reg1} and \ref{der mixed reg2} are obtained along the way. \ref{mv reg} follows from \eqref{eq: der vol reg}, \ref{der kappa} and \ref{der mixed reg}. \end{proof} \begin{corollary}\label{cor: reg mix vol pos} For $h_i\in C^{\infty}_+(\Gamma)$, $\mathrm{covol}(h_1,\ldots,h_{d+1})$ is positive. \end{corollary} \begin{proof} As $h_i\in C^{\infty}_+(\Gamma)$, $h_i''$ is positive definite, hence $\kappa^{-1}(h_2,\ldots,h_{d+1})>0$ \cite[(5) p.~122]{Ale96}. The result follows from \ref{mv reg} because $h_1<0$. \end{proof} Due to \ref{der mixed reg2} of the preceding lemma, the following result implies Theorem~\ref{thm: vol reg conv}. \begin{theorem}\label{thm:hess vol def pos reg} For any $h_1,\ldots,h_{d-1}$ in $C^{\infty}_+(\Gamma)$, the symmetric bilinear form on $(C^{\infty}(\Gamma))^2$ $$\mathrm{covol}(\cdot,\cdot,h_1,\ldots,h_{d-1}) $$ is positive definite. \end{theorem} \begin{proof} We use a continuity method.
We consider the paths $h_i(t)=th_i+(1-t)h_0$, $i=1,\ldots,d-1$, $t\in[0,1]$, where $h_0$ is the (quotient of the) support function of $B^d$ and we denote $$\mathrm{covol}_t(\cdot,\cdot):=\mathrm{covol}(\cdot,\cdot,h_1(t),\ldots,h_{d-1}(t)).$$ The result follows from the facts: \begin{enumerate}[nolistsep,label={\bf(\roman{*})}, ref={\bf(\roman{*})}] \item $\mathrm{covol}_0$ is positive definite,\label{preuve1} \item for each $t_0\in[0,1]$, if $\mathrm{covol}_{t_0}$ is positive definite, then $\mathrm{covol}_t$ is positive definite for $t$ near $t_0$,\label{preuve2} \item if $t_n \in [0,1]$ with $t_n\rightarrow t_0$ and $\mathrm{covol}_{t_n}$ is positive definite, then $\mathrm{covol}_{t_0}$ is positive definite. \label{preuve3} \end{enumerate} \ref{preuve1} is Lemma~\ref{lem: vol def pos sphere}. Let $t_0$ be as in \ref{preuve2}. By Lemma~\ref{lem:elliptic}, each $\kappa^{-1}(\cdot,h_1(t),\ldots,h_{d-1}(t))$ inherits standard properties of elliptic self-adjoint operators on compact manifolds (see for example \cite{Nic07}), and we can apply \cite[Theorem 3.9 p.~392]{Kat95}: as the deformation of the operators is polynomial in $t$, the eigenvalues change analytically with $t$, for $t$ near $t_0$. In particular if $t$ is sufficiently close to $t_0$, the eigenvalues remain positive and \ref{preuve2} holds. Let $t_n$ be as in \ref{preuve3}. For any non-zero $X\in C^{\infty}(\Gamma)$ we have $\mathrm{covol}_{t_n}(X,X)>0$ with $$\mathrm{covol}_{t_n}(X,X)=-\frac{1}{d+1}\int_M X \kappa^{-1}(X, (1-t_n)h_0+t_nh_1,\ldots, (1-t_n)h_0+t_nh_{d-1}) \mathrm{d} M.$$ As $\kappa^{-1}$ is multilinear and as $t_n<1$, it is easy to see that the function in the integrand above is bounded by a function (of the kind $\vert X\vert \sum \vert \kappa^{-1}(X,*,\ldots,*)\vert$ where each $*$ is $h_0$ or an $h_i$) which does not depend on $n$ and is continuous on the compact manifold $M$.
By Lebesgue's dominated convergence theorem, $\mathrm{covol}_{t_0}(X,X)\geq 0$, and by Lemma~\ref{lem: noyau trivial reg} $\mathrm{covol}_{t_0}(X,X)>0$, and \ref{preuve3} is proved. \end{proof} \begin{lemma}\label{lem:elliptic} For any $h_1,\ldots,h_{d-1}$ in $C^{\infty}_+(\Gamma)$, the operator $\kappa^{-1}(\cdot,h_1,\ldots,h_{d-1})$ is a formally self-adjoint, linear, second-order elliptic operator. \end{lemma} \begin{proof} It is formally self-adjoint because of the symmetry of the mixed-covolume. It is clearly linear of second order. Let $Z\in C^{\infty}(\Gamma)$. From properties of the mixed determinant \cite[p.~121]{Ale96}, $\kappa^{-1}(Z,h_1,\ldots,h_{d-1})$ can be written, in an orthonormal frame on $M$, as $$\sum_{i,j=1}^d \det(h_1'',\ldots,h_{d-1}'')_{ij}\left( (\nabla^2 Z)_{ij}-Z\delta_{ij}\right) $$ where $\det(h_1'',\ldots,h_{d-1}'')_{ij}$ is, up to a constant factor, the mixed determinant of the matrices obtained from the $h_k''$ by deleting the $i$th row and the $j$th column. Let us consider local coordinates on $M$ around a point $p$ such that at $p$, $\kappa^{-1}(Z,h_1,\ldots,h_{d-1})$ has the expression above. By definition of $C^{\infty}_+(\Gamma)$, $h_k''$ are positive definite at $p$, hence, at $p$, $$\sum_{i,j=1}^d \det(h_1'',\ldots,h_{d-1}'')_{ij} x_ix_j $$ is positive definite \cite[Lemma~II p.~124]{Ale96}. \end{proof} \begin{lemma}\label{lem: noyau trivial reg} For any $h_1,\ldots,h_{d-1}$ in $C^{\infty}_+(\Gamma)$, the symmetric bilinear form $$\mathrm{covol}(\cdot,\cdot,h_1,\ldots,h_{d-1}) $$ has trivial kernel. \end{lemma} \begin{proof} Suppose that $Z$ belongs to the kernel of $\mathrm{covol}(\cdot,\cdot,h_1,\ldots,h_{d-1}) $. As $\lgroup\cdot,\cdot\rgroup$ is an inner product, $Z$ belongs to the kernel of $\kappa^{-1}(\cdot,h_1,\ldots,h_{d-1})$: $$\det(Z'',h_1'',\ldots,h_{d-1}'')=0.
$$ As $h_i''$ are positive definite matrices, by definition of $C^{\infty}_+(\Gamma)$, Theorem~\ref{thm:alg lin det} implies that $$\det(Z'',Z'',h_2'',\ldots,h_{d-1}'')\leq 0 $$ so $$0=\mathrm{covol}(Z,Z,h_1,\ldots,h_{d-1})=-\frac{1}{d+1}\int_M h_1 \kappa^{-1}(Z,Z,h_2,\ldots,h_{d-1})\leq 0 $$ but $h_1 <0$ hence $$\det(Z'',Z'',h_2'',\ldots,h_{d-1}'')= 0, $$ and Theorem~\ref{thm:alg lin det} says that $Z''=0$. Consider the $1$-homogeneous extension $\tilde{Z}$ of the $\Gamma$-invariant map on ${\mathbb H}^d$ defined by $Z$. From Subsection~\ref{sub: reg sup} it follows that the Hessian of $\tilde{Z}$ in ${\mathcal F}$ is zero, hence that $\tilde{Z}$ is affine. By invariance $\tilde{Z}$ must be constant, and by homogeneity $\tilde{Z}=0$ hence $Z=0$. \end{proof} \paragraph{Remark on Fuchsian Hedgehogs} If we apply the Cauchy--Schwarz inequality to the inner product of Theorem~\ref{thm:hess vol def pos reg}, we get a ``reversed Alexandrov--Fenchel inequality'' (see Theorem~\ref{thm:general}) for $C^{\infty}_+$ convex bodies, but also for any smooth function $h$ on the hyperbolic manifold ${\mathbb H}^d/\Gamma$. From Lemma~\ref{lem: supp reg cone} there exist two elements $h_1,h_2$ of $C^{\infty}_+(\Gamma)$ with $h=h_1-h_2$. Hence $h$ can be seen as the ``support function'' of the (maybe non-convex) hypersurface made of the points $\nabla_{\eta}(H_1-H_2), \eta\in{\mathcal F}$. For example if $h_1$ and $h_2$ are the support functions of respectively $B_{t_1}$ and $B_{t_2}$, then $h$ is the support function of a pseudo-sphere in ${\mathcal F}$ if $t_1-t_2> 0$, of a point (the origin) if $t_1-t_2=0$ and of a pseudo-sphere in the past cone if $t_1-t_2<0$. More generally, we could introduce ``Fuchsian hedgehogs'', whose ``support functions'' are differences of support functions of two $\Gamma$-convex bodies. They form the vector space in which the support functions of $\Gamma$-convex bodies naturally live. In the Euclidean space, they were introduced in \cite{LLR88}.
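To illustrate the Euclidean notion just mentioned: in the plane, a hedgehog with $2\pi$-periodic support function $h$ is classically parametrized by $x(\theta)=h(\theta)u(\theta)+h'(\theta)u^{\perp}(\theta)$, where $u(\theta)=(\cos\theta,\sin\theta)$. A minimal Python sketch of ours (the function name and sample data are illustrative, not from the text); taking $h$ a difference of support functions of two discs reproduces the circle/point dichotomy analogous to the pseudo-sphere example above:

```python
import numpy as np

def hedgehog(h, theta):
    """Points of the planar hedgehog with support function h (standard
    envelope parametrisation x = h u + h' u_perp; h' computed numerically)."""
    dtheta = theta[1] - theta[0]
    hv = h(theta)
    hp = np.gradient(hv, dtheta)                       # numerical h'
    u = np.stack([np.cos(theta), np.sin(theta)])
    uperp = np.stack([-np.sin(theta), np.cos(theta)])
    return hv * u + hp * uperp

theta = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
# h = h1 - h2 with h1, h2 support functions of discs of radii 3 and 1:
pts = hedgehog(lambda th: 3.0 - 1.0 + 0 * th, theta)
radii = np.linalg.norm(pts, axis=0)                    # expect a circle of radius 2
```

With equal radii the difference is $h\equiv 0$ and the hedgehog degenerates to the origin, mirroring the "point" case above.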
A Euclidean analog of the reversed Alexandrov--Fenchel inequality for smooth Fuchsian hedgehogs described above is established in \cite{MM99}, among other results. It would be interesting to know if other results about hedgehogs have a Fuchsian analog. \section{Polyhedral case}\label{sec:pol} The classical analog of this section comes from \cite{Ale37} (see \cite{Ale96}). See also \cite{Sch93} and \cite{Ale05}. The toy example $d=1$ is considered in the note \cite{polymink}. \subsection{Support vectors}\label{sub:pol} \paragraph{Definition of Fuchsian convex polyhedron} The notation $a^{\bot}$ will represent the affine hyperplane directed by the vector hyperplane orthogonal to the vector $a$ and passing through the point $a$: \begin{equation}\label{eq def affine} a^{\bot}=\{x\in{\mathbb R}^{d+1}| \langle x,a\rangle_-=\langle a,a\rangle_- \}. \end{equation} \begin{definition} Let $R=(\eta_1,\ldots,\eta_n)$, $n\geq 1$, with $\eta_i$ (pairwise non-collinear) vectors in the future cone $\mathcal{F}$, and let $\Gamma$ be a Fuchsian group. A \emph{$\Gamma$-convex polyhedron} is the boundary of the intersection of the half-spaces bounded by the hyperplanes $$(\gamma \eta_i)^{\bot}, \forall \gamma\in\Gamma, \forall i=1,\ldots,n,$$ such that the vectors $\eta_i$ are inward pointing. \end{definition} See Figure~\ref{fig:polyhedron} for a simple example. \begin{figure} \centering \includegraphics[scale=0.4]{fuchs-faceNB.jpeg} \caption{A piece of a $\Gamma$-convex polyhedron in $d=2$ seen from the bottom. It is made with the orbit of $(0,0,1)$ for the Fuchsian group having a regular octagon as fundamental domain in ${\mathbb H}^2$. \label{fig:polyhedron}} \end{figure} \begin{lemma} A $\Gamma$-convex polyhedron $P$ \begin{enumerate}[nolistsep,label={\bf(\roman{*})}, ref={\bf(\roman{*})}] \item is a $\Gamma$-convex body, \label{basic pol 1} \item has a countable number of facets, \item is locally finite, \item each face is a convex Euclidean polytope.
\end{enumerate} \end{lemma} Here convex polytope means convex compact polyhedron. \begin{proof} We denote by $P_i$ the $\Gamma$-convex polyhedron made from the vector $\eta_i$ and the group $\Gamma$. We will prove the lemma for $P_i$. The general case follows because $P$ is the intersection of a finite number of $P_i$. All the elements of $\Gamma \eta_i$ belong to ${\mathbb H}_{t_i}^d$, on which $\Gamma$ acts cocompactly. Up to a homothety, it is more convenient to assume that ${\mathbb H}_{t_i}^d$ is ${\mathbb H}_1^d=\mathbb{H}^d$. Let $a\in \Gamma \eta_i$ and $D_a(\Gamma)$ be the Dirichlet region (see \eqref{eq:dirichelt}). Recall that the regions $D_a(\Gamma)$ are compact convex polyhedra in ${\mathbb H}^d$, and that the set of the Dirichlet regions $D_a$, for all $a\in\Gamma \eta_i$, is a locally finite tessellation of $\mathbb{H}^d$. Using \eqref{eq:hyp dist}, the Dirichlet region can be written $$D_a(\Gamma)=\{p\in\mathbb{H}^d\vert \langle a,p\rangle_-\geq\langle \gamma a,p\rangle_-, \forall \gamma\in\Gamma\setminus\{Id\} \}. $$ Let $a_1,a_2\in \Gamma \eta_i$ be such that $D_{a_1}(\Gamma)$ and $D_{a_2}(\Gamma)$ have a common facet. This facet is contained in the intersection of ${\mathbb H}^d$ with the hyperplane $$\{p\in{\mathbb R}^{d+1}|\langle a_1,p\rangle_-=\langle a_2,p\rangle_-\},$$ and this hyperplane also contains $a_1^{\bot}\cap a_2^{\bot}$ by \eqref{eq def affine}. It follows that vertices of $P_i$ (codimension $(d+1)$ faces) project along rays from the origin onto the vertices of the Dirichlet tessellation. In particular the vertices are in ${\mathcal F}$, so $P_i\subset {\mathcal F}$, because it is the convex hull of its vertices \cite[1.4.3]{Sch93} and ${\mathcal F}$ is convex. In particular $P_i$ is a $\Gamma$-convex body due to Definition~\ref{def: fuchsian body}. Moreover, codimension $k$ faces of $P_i$ project onto codimension $k$ faces of the Dirichlet tessellation, so $P_i$ is locally finite with a countable number of facets.
Facets of $P_i$ are closed, as they project onto compact sets. In particular they are bounded, as contained in $\mathcal{F}$, hence compact. They are convex polytopes by construction, and Euclidean as they are contained in space-like hyperplanes. Higher codimension faces are convex Euclidean polytopes as intersections of convex Euclidean polytopes. \end{proof} \paragraph{Support numbers} The extended support function of a $\Gamma$-convex polyhedron $P$ is piecewise linear (it is linear on each solid angle determined by the normals of the support planes at a vertex); this is why the data of the extended support function at the inward unit normals of the facets suffices to determine it. If $\eta_i$ is such a vector and $h$ is the support function of $P$, we call the positive number $$h(i):=-h(\eta_i)$$ the \emph{$i$th support number} of $P$. The facet with normal $\eta_i$ is denoted by $F_i$. Two adjacent facets $F_i$ and $F_j$ meet at a codimension $2$ face $F_{ij}$. If three facets $F_i,F_j,F_k$ meet at a codimension $3$ face, then this face is denoted by $F_{ijk}$. We denote by $\varphi_{ij}$ the hyperbolic distance between $\eta_i$ and $\eta_j$, given by (see for example \cite[(3.2.2)]{Rat06}) \begin{equation}\label{eq:hyp dist} -\cosh \varphi_{ij}=\langle \eta_i,\eta_j\rangle_-. \end{equation} Let $p_i$ be the foot of the perpendicular from the origin to the hyperplane $\mathcal{H}_i$ containing the facet $F_i$. In $\mathcal{H}_i$, let $p_{ij}$ be the foot of the perpendicular from $p_i$ to $F_{ij}$. We denote by $h_{ij}$ the signed distance from $p_i$ to $p_{ij}$: it is non-negative if $p_i$ is on the same side of $F_{j}$ as $P$. See Figure~\ref{fig:suppnumb}. \begin{figure}[ht] \begin{center} \input suppnum.pdf_t \end{center} \caption{Support numbers of a $\Gamma$-convex polyhedron.}\label{fig:suppnumb} \end{figure} For each $i$, the $h_{ij}$ are the support numbers of the convex Euclidean polytope $F_i$.
($\mathcal{H}_i$ is identified with the Euclidean space ${\mathbb R}^d$, with $p_i$ as the origin.) If we denote by $\omega_{ijk}$ the angle between $p_ip_{ij}$ and $p_ip_{ik}$, it is well-known that \cite[(5.1.3)]{Sch93} \begin{equation}\label{eq:supp nb eucl} h_{ikj}=\frac{h_{ij}-h_{ik}\cos\omega_{ijk}}{\sin \omega_{ijk}}. \end{equation} We have a similar formula in Minkowski space \cite[Lemma~2.2]{polymink}: \begin{equation}\label{eq: supp num mink} h_{ij}=-\frac{h(j)-h(i)\cosh \varphi_{ij}}{\sinh \varphi_{ij}}. \end{equation} In particular, \begin{eqnarray} \ \label{eq:der lor1} &&\frac{\partial h_{ij}}{\partial h(j)}=-\frac{1}{\sinh \varphi_{ij}}, \\ \ &&\frac{\partial h_{ij}}{\partial h(i)}=\frac{\cosh \varphi_{ij}}{\sinh \varphi_{ij}}.\label{eq:der lor2} \end{eqnarray} If $h(i)=h(j)$ and the quadrilateral is deformed keeping this condition, then \begin{equation} \frac{\partial h_{ij}}{\partial h(i)}=\frac{\cosh \varphi_{ij}-1}{\sinh \varphi_{ij}}.\label{eq:der lor3} \end{equation} \paragraph{Space of polyhedra with parallel facets} Let $P$ be a $\Gamma$-convex polyhedron. We label the facets of $P$ in a fundamental domain for the action of $\Gamma$. This set of labels is denoted by ${\mathcal I}$, and $\Gamma {\mathcal I}$ labels all the facets of $P$. Let $R=(\eta_1,\ldots,\eta_n)$ be the inward unit normals of the facets of $P$ labeled by ${\mathcal I}$. We denote by $\mathcal{P}(\Gamma,R)$ the set of $\Gamma$-convex polyhedra with inward unit normals belonging to the set $R$. By identifying a $\Gamma$-convex polyhedron with its support numbers labeled by ${\mathcal I}$, $\mathcal{P}(\Gamma,R)$ is a subset of $\mathbb{R}^n$. (The corresponding vector of ${\mathbb R}^n$ is the \emph{support vector} of the polyhedron.) Note that this identification does not carry the Minkowski sum to the vector sum. Because the sum of two piecewise linear functions is a piecewise linear function, the Minkowski sum of two $\Gamma$-convex polyhedra is a $\Gamma$-convex polyhedron.
(More precisely, the linear functions under consideration are of the form $\langle \cdot, v\rangle_-$, with $v$ a vertex of a polyhedron, hence a future time-like vector, and the sum of two future time-like vectors is a future time-like vector.) But even if the two polyhedra have parallel facets, new facets can appear in the sum. Later we will introduce a class of polyhedra such that the support vector of the Minkowski sum is the sum of the support vectors. \begin{lemma}\label{lem:ens pol} The set $\mathcal{P}(\Gamma,R)$ is a non-empty open convex cone of $\mathbb{R}^n$. \end{lemma} \begin{proof} The condition that the hyperplane with normal $\eta_j$ contains a facet of the polyhedron with support vector $h$ can be written as $$\exists x\in\mathbb{R}^{d+1}, \forall i\in\Gamma\mathcal{I}, i\not=j, \langle \eta_i,x\rangle_- <-h(i)\mbox{ and } \langle \eta_j,x\rangle_-= -h(j).$$ By \eqref{eq:hyp dist} $\mathcal{P}(\Gamma,R)$ always contains the vector $(1,\ldots,1)$. The set is clearly open as a facet cannot disappear under a sufficiently small deformation. It is also clearly invariant under homotheties of positive scale factor. So to prove that $\mathcal{P}(\Gamma,R)$ is a convex cone it suffices to check that if $h$ and $h'$ belong to $\mathcal{P}(\Gamma,R)$ then $h+h'$ belongs to $\mathcal{P}(\Gamma,R)$. This is immediate from the above characterization. \end{proof} \subsection{Covolume of convex Fuchsian polyhedra} Let $F$ be a facet of a $\Gamma$-convex polyhedron $P$, contained in a space-like hyperplane $\mathcal{H}$, with support number $h$. For the induced metric, $\mathcal{H}$ is isometric to the Euclidean space ${\mathbb R}^d$, in which $F$ is a convex polytope, with volume $A(F)$. We call $A(F)$ the \emph{area} of the facet. Let $C$ be the cone in ${\mathbb R}^{d+1}$ over $P$ with apex the origin.
Its volume $V(C)$ is invariant under the action of an orientation and time-orientation preserving linear isometry (they have determinant $1$), hence to compute $V(C)$ we can suppose that $\mathcal{H}$ is a horizontal hyperplane (constant last coordinate). For horizontal hyperplanes, the induced metric is the same if the ambient space is endowed with the standard Lorentzian metric or with the standard Euclidean metric. So the well-known formula applies: $$ V(C)=\frac{1}{d+1}h A(F), $$ and then $$ \mathrm{covol}(P)=\frac{1}{d+1}\sum_{i\in\mathcal{I}} h(i) A(F_i). $$ Identifying $P$ with its support vector $h$, if $\big\langle \cdot,\cdot \big\rangle$ is the usual inner product of ${\mathbb R}^n$, we have \begin{equation}\label{eq:def vol pol} \mathrm{covol}(h)=\frac{1}{d+1}\big\langle h, A(h)\big\rangle \end{equation} where $A(h)$ is the vector formed by the area of the facets $A(F_i)$. \begin{lemma}\label{lem: der vol pom} The function $\mathrm{covol}$ is $C^2$ on ${\mathbb R}^n$, and for $h\in \mathcal{P}(\Gamma,R), X,Y\in {\mathbb R}^n$, we have: \begin{eqnarray} \ D_{h}\mathrm{covol}(X)=\big\langle X,A(h) \big\rangle, \label{eq: der vol pol}\\ \ D_{h}^2 \mathrm{covol} (X,Y)=\big\langle X, D_{h}A(Y)\big\rangle. \label{eq: der sec vol pol} \end{eqnarray} Moreover \eqref{eq: der vol pol} is equivalent to \begin{equation}\label{eq: A self adj} \big\langle X, D_hA (Y)\big\rangle= \big\langle Y, D_hA (X)\big\rangle. \end{equation} \end{lemma} \begin{proof} Let $P$ be the polyhedron with support function $h\in \mathcal{P}(\Gamma,R)$. Let $F_i$ be a facet of $P$, with support numbers $h_{i1},\ldots,h_{im}$. If $V_E$ is the Euclidean $d$-volume, it is well-known that \cite[8.2.3]{Ale05} \begin{equation}\label{eq:der area} \frac{\partial V_E(F_i)}{\partial h_{ik}}=L_{ik} \end{equation} where $L_{ik}$ is the area of the facet of $F_i$ with support number $h_{ik}$ (for $d=1$, one has $1$ instead of $L_{ik}$).
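The derivative formula \eqref{eq:der area} can be checked numerically in a simple case. The following sketch of ours (Python, for $d=2$: a hexagon with generic support numbers; the helper names and data are illustrative, not from the text) compares a finite difference of the area with the corresponding edge length:

```python
import numpy as np

def polygon_from_support(h, thetas):
    """Vertices of the convex polygon {x : <x, u_k> <= h_k}, u_k the outward
    unit normals, assuming every line <x, u_k> = h_k supports an edge
    (true for the data below)."""
    n = len(h)
    verts = []
    for k in range(n):
        k2 = (k + 1) % n
        A = np.array([[np.cos(thetas[k]), np.sin(thetas[k])],
                      [np.cos(thetas[k2]), np.sin(thetas[k2])]])
        verts.append(np.linalg.solve(A, [h[k], h[k2]]))  # line k meets line k+1
    return np.array(verts)

def area(verts):
    # shoelace formula
    x, y = verts[:, 0], verts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

n = 6
thetas = 2 * np.pi * np.arange(n) / n        # normals of a hexagonal fan
h = 1.0 + 0.05 * np.sin(np.arange(n))        # generic support numbers

verts = polygon_from_support(h, thetas)
L0 = np.linalg.norm(verts[0] - verts[n - 1])  # length of the edge with normal u_0

eps = 1e-6
hp = h.copy(); hp[0] += eps
fd = (area(polygon_from_support(hp, thetas)) - area(verts)) / eps  # ~ dV_E/dh_0
```

The finite difference `fd` matches the edge length `L0` up to discretization error, as \eqref{eq:der area} predicts.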
$A(F_i)$ does not behave exactly as $V_E(F_i)$, because it is a function of $h$, and, when varying an $h(j)$, a new facet of $F_i$ can appear, as well as a new support number $h_{ij}$ of $F_i$. Actually many new facets can appear, as many as hyperplanes with normals $\Gamma \eta_j$ meeting $F_i$. One has to consider $F_i$ as also supported by $h_{ij}$ (and possibly some orbits). In this case, $L_{ij}=0$, and the variation of the volume is still given by formula \eqref{eq:der area}. So even if the combinatorics of $P$ changes under a small change of a support number, there is no contribution to the change of the volume of the facets. So \eqref{eq:der area} gives \begin{equation}\label{eq:der areab} \frac{\partial A(F_i)}{\partial h_{ik}}=L_{ik}. \end{equation} We denote by $E_i^j\subset\Gamma\mathcal{I}$ the set of indices $k\in\Gamma j$ such that $F_k$ is adjacent to $F_i$ along a codimension $2$ face. It can be empty. But for example if $\mathcal{I}$ is reduced to a single element $i$, $E_i^i$ is the set of facets adjacent to $F_i$ along a codimension $2$ face. If $j\in\mathcal{I}\setminus\{i\}$ we get $$\frac{\partial A(F_i)}{\partial h(j)}=\sum_{k\in E_i^j}\frac{\partial A(F_i)}{\partial h_{ik}}\frac{\partial h_{ik}}{\partial h(j)}.$$ From \eqref{eq:der lor1} and \eqref{eq:der areab} it follows that \begin{equation}\label{eq: der par A} \frac{\partial A(F_i)}{\partial h(j)}=-\sum_{k\in E_i^j}\frac{L_{ik}}{\sinh \varphi_{ik}}.
\end{equation} For the diagonal terms: \begin{eqnarray}\label{eq: coefdiag} \ \frac{\partial A(F_i)}{\partial h(i)}&=&\sum_{j\in\mathcal{I}\setminus\{i\}}\sum_{k\in E_i^j} \frac{\partial A(F_i)}{\partial h_{ik}}\frac{\partial h_{ik}}{\partial h(i)} +\sum_{k\in E_i^i} \frac{\partial A(F_i)}{\partial h_{ik}}\frac{\partial h_{ik}}{\partial h(i)}\nonumber \\ \ &\stackrel{(\ref{eq:der areab},\ref{eq:der lor2},\ref{eq:der lor3})}{=}&\sum_{j\in\mathcal{I}\setminus\{i\}} \sum_{k\in E_i^j}\cosh \varphi_{ik}\frac{ L_{ik}}{\sinh \varphi_{ik}} + \sum_{k\in E_i^i} L_{ik}\frac{\cosh \varphi_{ik}-1}{\sinh \varphi_{ik}}. \end{eqnarray} These expressions are continuous with respect to $h$, even if the combinatorics changes. So $A$ is $C^1$ and from \eqref{eq:def vol pol} $\mathrm{covol}$ is $C^2$. If \eqref{eq: der vol pol} is true, we get \eqref{eq: der sec vol pol}, and this expression is symmetric as $\mathrm{covol}$ is $C^2$, so \eqref{eq: A self adj} holds. Let us suppose that \eqref{eq: A self adj} is true. As it is made of volumes of convex polytopes of ${\mathbb R}^d$, $A$ is homogeneous of degree $d$, so by Euler's homogeneity theorem $D_hA (h)=dA(h)$. Using this in \eqref{eq: A self adj} with $Y=h$ gives $d\big\langle X, A (h)\big\rangle= \big\langle h, D_hA(X)\big\rangle.$ Now differentiating \eqref{eq:def vol pol} gives $D_{h}\mathrm{covol}(X)=\frac{1}{d+1}\big\langle X,A(h) \big\rangle+\frac{1}{d+1}\big\langle h,D_hA(X) \big\rangle$. Inserting the preceding equation leads to $\eqref{eq: der vol pol}$. Let us prove \eqref{eq: A self adj}. If $e_1,\ldots,e_n$ is the standard basis of ${\mathbb R}^n$, it suffices to prove \eqref{eq: A self adj} for $X=e_i$ and $Y=e_j$, $i\not= j$, i.e.~that the Jacobian of $A$ is symmetric. The sum in \eqref{eq: der par A} means that, in $\partial P/\Gamma$, each time the $i$th polytope meets the $j$th polytope along a codimension $2$ face, we add the quantity $\frac{L_{ik}}{\sinh \varphi_{ik}}$, which is symmetric in its arguments.
Hence the Jacobian of $A$ is symmetric. \end{proof} Let us consider the simplest case of $\Gamma$-convex polyhedra in the Minkowski plane, with only one support number $h\in{\mathbb R}$. Then by \eqref{eq: supp num mink} $\mathrm{covol}(h)$ is equal to $h^2$ times a positive number, in particular it is a strictly convex function. This holds in any dimension. \begin{theorem}\label{thm: hess pos} The Hessian of $\mathrm{covol}: {\mathbb R}^n \rightarrow {\mathbb R}$ is positive definite. \end{theorem} Recall that we are looking at the covolume on a space of support vectors, and not on a space of polyhedra (the sum is not the same). \begin{proof} Due to \eqref{eq: der sec vol pol} it suffices to study the Jacobian of $A$. The off-diagonal entries are non-positive due to \eqref{eq: der par A}. Note that the formula is also correct if $E_i^j$ is empty. The diagonal terms \eqref{eq: coefdiag} are positive, as any facet $F_i$ has an adjacent facet. As $\cosh x>1$ for $x\not= 0$, \eqref{eq: coefdiag} and \eqref{eq: der par A} lead to $$\frac{\partial A(F_i)}{\partial h(i)} > \sum_{j\in\mathcal{I}\setminus\{i\}} \left|\frac{\partial A(F_i)}{\partial h(j)} \right|\geq 0,$$ which means that the Jacobian, which is symmetric, is strictly diagonally dominant with positive diagonal entries, hence positive definite, see for example \cite[1.22]{Var00}. \end{proof} \subsection{Polyhedral Minkowski Theorem}\label{sec: mink} We use a classical continuity method, although its Euclidean analog is more often proved using a variational method. \begin{theorem}[Minkowski Theorem]\label{thm:minkex} Let $\Gamma$ be a Fuchsian group, $R=(\eta_1,\ldots,\eta_n)$ be a set of pairwise non collinear unit future time-like vectors of the Minkowski space contained in a fundamental domain of $\Gamma$, and let $(f_1,\ldots,f_n)$ be positive real numbers. There exists a unique $\Gamma$-convex polyhedron with inward unit normals $\eta_i$ such that the facet orthogonal to $\eta_i$ has area $f_i$.
\end{theorem} Theorem~\ref{thm:minkex} is equivalent to saying that the map $\Phi$ from $\mathcal{P}(\Gamma,R)$ to $(\mathbb{R}_+)^n$ which associates to each $(h_1,\ldots,h_n)\in \mathcal{P}(\Gamma,R)$ the facet areas $(A(F_1),\ldots,A(F_n))$ is a bijection. By Lemma~\ref{lem: der vol pom}, Theorem~\ref{thm: hess pos} and the inverse function theorem, $\Phi$ is locally invertible. So $\Phi$ is a local homeomorphism by the invariance of domain theorem. Lemma~\ref{prop:proprete} below says that $\Phi$ is proper. As $(\mathbb{R}_+)^n$ is connected, it follows that $\Phi$ is surjective, hence a covering map. But the target space $(\mathbb{R}_+)^n$ is simply connected and $\mathcal{P}(\Gamma,R)$ is connected (Lemma~\ref{lem:ens pol}), so $\Phi$ is a homeomorphism, in particular bijective, and Theorem~\ref{thm:minkex} is proved. \begin{lemma}\label{prop:proprete} The map $\Phi$ is proper: let $(a_\alpha)_{\alpha\in\mathbb{N}}$ be a converging sequence of $(\mathbb{R}_+)^n$ such that for all $\alpha$, there exists $h_\alpha=(h_{\alpha}(1),\ldots,h_{\alpha}(n))\in \mathcal{P}(\Gamma,R)$ with $\Phi(h_\alpha)=a_\alpha$. Then a subsequence of $(h_\alpha)_\alpha$ converges in $\mathcal{P}(\Gamma,R)$. \end{lemma} \begin{proof} Let $\alpha\in{\mathbb N}$ and suppose that $h_{\alpha}(i)$ is the largest component of $h_{\alpha}$. For any support number $h_{\alpha}(j)$, $j\in\Gamma\mathcal{I}$, of a facet adjacent to the one supported by $h_{\alpha}(i)$, as $h_{\alpha}(i)\geq h_{\alpha}(j)$, \eqref{eq: supp num mink} gives: $$h_{ij}^{\alpha}=\frac{h_{\alpha}(i)\cosh \varphi_{ij}-h_{\alpha}(j)}{\sinh \varphi_{ij}}\geq h_{\alpha}(i) \frac{\cosh\varphi_{ij}-1}{\sinh \varphi_{ij}}.$$ As $\Gamma$ acts cocompactly on ${\mathbb H}^d$, for any $j\in\Gamma\mathcal{I}$, $\varphi_{ij}$ is bounded from below by a positive constant.
Moreover the function $x\mapsto \frac{\cosh x-1}{\sinh x} $ is increasing, hence there exists a positive number $\lambda_i$, depending only on $i$, such that $$h_{ij}^{\alpha}\geq h_{\alpha}(i) \lambda_i.$$ As the sequence of areas of the facets is supposed to converge, there exist positive numbers $A^+_i$ and $A^-_i$ such that $A^+_i\geq A(F_i^{\alpha}) \geq A^-_i$, where $A(F_i^{\alpha})$ is the area of the facet $F_i^{\alpha}$ supported by $h_{\alpha}(i)$. If $\mbox{Per}_i^{\alpha}$ (resp. $\mbox{Per}_i$) is the Euclidean $(d-1)$-volume of the hypersphere bounding the ball with Euclidean $d$-volume $A(F_i^{\alpha})$ (resp. $A_i^-$), the isoperimetric inequality gives \cite[10.1]{BZ88} $$\sum_j L_{ij}^{\alpha} \geq \mbox{Per}_i^{\alpha} \geq \mbox{Per}_i,$$ where the sum is over the facets adjacent to $F_i^{\alpha}$ and $L_{ij}^{\alpha}$ is the $(d-1)$-volume of the codimension $2$ face between $F_i^{\alpha}$ and $F_j^{\alpha}$. We get $$A^+_i \geq A(F_i^{\alpha})=\frac{1}{d}\sum_j h_{ij}^{\alpha}L_{ij}^{\alpha} \geq h_{\alpha}(i) \lambda_i \frac{1}{d}\sum_j L_{ij}^{\alpha} \geq h_{\alpha}(i) \frac{\lambda_i \mbox{Per}_i}{d}.$$ As $h_{\alpha}(i)$ is the largest component of $h_{\alpha}$, all the support numbers are bounded from above by a constant which does not depend on $\alpha$. Moreover each component of $h_{\alpha}$ is positive, hence all the components of the elements of the sequence $(h_{\alpha})_{\alpha}$ are bounded from above and below, so there exists a subsequence $(h_{\varphi(\alpha)})_{\alpha}$ converging to $(h(1),\ldots,h(n))$, where $h(i)$ is a non-negative number. Suppose that the limit of $(h_{\varphi(\alpha)}(i))_{\alpha}$ is zero. Let $h_{\varphi(\alpha)}(j)$ be the support number of a facet adjacent to $F_i^{\varphi(\alpha)}$. If $\varphi(\alpha)$ is sufficiently large, $h_{\varphi(\alpha)}(j)$ is arbitrarily close to $h(j)$, which is a non-negative number, and $h_{\varphi(\alpha)}(i)$ is arbitrarily close to $0$.
By \eqref{eq: supp num mink}, at the limit $h_{ij}^{\varphi(\alpha)}$ is a non-positive number. So at the limit all the support numbers of $F_i^{\varphi(\alpha)}$ are non-positive, hence the $d$-volume of $F_i^{\varphi(\alpha)}$ is non-positive, which is impossible. It follows easily that $(h_{\varphi(\alpha)})_{\alpha}$ converges in $\mathcal{P}(\Gamma,R)$. \end{proof} \subsection{Mixed face area and mixed-covolume} Let us recall some basic facts about convex polytopes in Euclidean space (with non-empty interior). A convex polytope of $\mathbb{R}^d$ is \emph{simple} if each vertex is contained in exactly $d$ facets. Each face of a simple convex polytope is a simple convex polytope. The \emph{normal fan} of a convex polytope is the decomposition of $\mathbb{R}^d$ by convex cones defined by the outward unit normals to the facets of the polytope (each cone corresponds to one vertex). Two convex polytopes are \emph{strongly isomorphic} if they have the same normal fan. The Minkowski sum of two strongly isomorphic simple polytopes is a simple polytope strongly isomorphic to them. Moreover the support vector of the Minkowski sum is the sum of the support vectors. Let $Q$ be a simple convex polytope in ${\mathbb R}^d$ with $n$ facets. The set of convex polytopes of ${\mathbb R}^d$ strongly isomorphic to $Q$ is a convex open cone in ${\mathbb R}^n$. The Euclidean volume $V_E$ is a polynomial of degree $d$ on this set, and its polarization $V_E(\cdot,\ldots,\cdot)$ is the \emph{mixed-volume}. The coefficients of the volume depend on the combinatorics; this is why we have to restrict ourselves to simple strongly isomorphic polytopes. The following result is an equivalent formulation of the Alexandrov--Fenchel inequality. \begin{theorem}[{\cite{Ale96,Sch93}}]\label{thm: AF eucl} Let $Q,Q_3,\ldots,Q_d$ be strongly isomorphic simple convex polytopes of ${\mathbb R}^d$ with $n$ facets and $Z\in{\mathbb R}^n$.
Then $$V_E(Z,Q,Q_3,\ldots,Q_d)=0 \Rightarrow V_E(Z,Z,Q_3,\ldots,Q_d)\leq 0$$ and equality holds if and only if $Z$ is the support vector of a point. \end{theorem} We identify a support hyperplane of an element of $\mathcal{P}(\Gamma,R)$ with the Euclidean space ${\mathbb R}^d$ by performing a translation along the ray from the origin orthogonal to the hyperplane. In this way we consider all facets of elements of $\mathcal{P}(\Gamma,R)$ lying in parallel hyperplanes as convex polytopes in the same Euclidean space ${\mathbb R}^d$. The definitions of strong isomorphy and simplicity extend to $\Gamma$-convex polyhedra, considering them as polyhedral hypersurfaces in the ambient vector space. Note that the simplest examples of Euclidean convex polytopes, the simplices, are simple, but the simplest examples of $\Gamma$-convex polyhedra, those defined by only one orbit, are not simple (if $d>1$). Let us formalize the definition of strong isomorphy. The \emph{normal fan} $N(P)$ of a convex $\Gamma$-polyhedron $P$ is the decomposition of ${\mathcal F}$ into convex cones defined by the inward normals to the facets of $P$. It is the minimal decomposition of ${\mathcal F}$ such that the extended support function of $P$ is the restriction of a linear form on each part. If the normal fan $N(Q)$ subdivides $N(P)$, then we write $N(Q) > N(P)$. Note that $$N(P+Q)>N(P). $$ Two convex $\Gamma$-polyhedra $P$ and $Q$ are \emph{strongly isomorphic} if $N(P)=N(Q)$. If $P$ is simple, we denote by $[P]$ the subset of $\mathcal{P}(\Gamma,R)$ made of polyhedra strongly isomorphic to $P$. \begin{lemma}\label{lem:[P]} All elements of $[P]$ are simple and $[P]$ is an open convex cone of ${\mathbb R}^n$. \end{lemma} \begin{proof} The fact that all elements of $[P]$ are simple and that $[P]$ is open are classical, see for example \cite{Ale37}.
The only difference from the case of Euclidean convex polytopes is that, around a vertex, two facets can belong to the same orbit for the action of $\Gamma$, hence when one wants to slightly move a facet adjacent to a vertex, one actually moves two (or more) facets. But this breaks neither the simplicity nor the strong isomorphy class. Moreover, $[P]$ is a convex cone, as the sum of two functions piecewise linear on the same decomposition of ${\mathcal F}$ gives a piecewise linear function on the same decomposition. \end{proof} Suppose that $P$ is simple and has $n$ facets (in a fundamental domain), and let $h_1,\ldots,h_{d+1}\in[P]$ (support vectors of polyhedra strongly isomorphic to $P$). Let us denote by $F_k(i)$ the $i$th facet of the polyhedron with support vector $h_k$, and let $h(F_k(i))$ be its support vector ($F_k(i)$ is seen as a convex polytope in ${\mathbb R}^d$). The entries of $h(F_k(i))$ have the form \eqref{eq: supp num mink}, so the map $h_k\mapsto h(F_k(i))$ is linear. This map can be defined formally for all $Z\in{\mathbb R}^n$ using \eqref{eq: supp num mink}. The \emph{mixed face area} $A(h_2,\ldots,h_{d+1})$ is the vector formed by the entries $V_E(h(F_2(i)),\ldots,h(F_{d+1}(i)))$, $i=1,\ldots,n$. Together with \eqref{eq:def vol pol}, this implies that $\mathrm{covol}$ is a $(d+1)$-homogeneous polynomial, and we call its polarization $\mathrm{covol}(\cdot,\ldots,\cdot)$ the \emph{mixed-covolume}. Note that $\mathrm{covol}$ is $C^{\infty}$ on $[P]$. \begin{lemma}\label{lem: gen mixed pol} We have the following equalities, for $X_i\in{\mathbb R}^n$. \begin{enumerate}[nolistsep,label={\bf(\roman{*})}, ref={\bf(\roman{*})}] \item $D^{d-1}_{X_2}A(X_3,\ldots,X_{d+1})=d!
A(X_2,\ldots,X_{d+1})$, \item $D_{X_1}\mathrm{covol} (X_2)=(d+1)\mathrm{covol}(X_2,X_1,\ldots,X_1)$, \item $D^2_{X_1}\mathrm{covol} (X_2,X_3)=(d+1)d\mathrm{covol}(X_2,X_3,X_1,\ldots,X_1)$, \label{hess mixed pol} \item $D^{d}_{X_1} \mathrm{covol} (X_2,\ldots,X_{d+1})=(d+1)!\mathrm{covol}(X_1,\ldots,X_{d+1})$, \item $\mathrm{covol}(X_1,\ldots,X_{d+1})=\frac{1}{d+1}\big\langle X_1, A(X_2,\ldots,X_{d+1})\big\rangle$. \label{mv pol} \end{enumerate} \end{lemma} \begin{proof} The proof is analogous to that of Lemma~\ref{eq: reg mix vol gen}. \end{proof} \begin{corollary}\label{cor: pol mix vol pos} For $h_i\in [P]$, $\mathrm{covol}(h_1,\ldots,h_{d+1})$ is non-negative. \end{corollary} \begin{proof} As the $h_i$ are support vectors of strongly isomorphic simple polyhedra, the entries of $A(h_2,\ldots,h_{d+1})$ are mixed-volumes of simple strongly isomorphic Euclidean convex polytopes, hence are non-negative (see Theorem~5.1.6 in \cite{Sch93}). The result follows from \ref{mv pol} because the entries of $h_1$ are positive. \end{proof} \begin{lemma}\label{lem: noyau trivial pol} For any $h_1,\ldots,h_{d-1}\in[P]$, the symmetric bilinear form $$\mathrm{covol}(\cdot,\cdot,h_1,\ldots,h_{d-1}) $$ has trivial kernel. \end{lemma} \begin{proof} The analog of the proof of Lemma~\ref{lem: noyau trivial reg}, using Theorem~\ref{thm: AF eucl} instead of Theorem~\ref{thm:alg lin det}, gives that in each support hyperplane, the ``support vectors'' of $Z$ (formally given by \eqref{eq: supp num mink}) are those of a point of ${\mathbb R}^d$. Let us denote by $Z_i$ the support vector of $Z$ in the hyperplane with normal $\eta_i$. If $\varepsilon$ is sufficiently small, then $h_1+\varepsilon Z$ is the support vector of a $\Gamma$-convex polyhedron $P_1^{\varepsilon}$ strongly isomorphic to $P_1$, the one with support vector $h_1$.
Moreover, the support numbers of the $i$th facet $F_i$ of $P_1^{\varepsilon}$ are the sums of the support numbers of the facet $F_i^1$ of $P_1$ and the entries of $\varepsilon Z_i$. As $Z_i$ is the support vector of a point in $\mathbb{R}^d$, $F_i$ is obtained from $F_i^1$ by a translation. It follows that each facet of $P_1^{\varepsilon}$ is obtained by a translation of the corresponding facet of $P_1$, hence $P_1^{\varepsilon}$ is a translate of $P_1$ (the translations of each facet have to coincide on each codimension $2$ face). As $h_1+\varepsilon Z$ is supposed to be a $\Gamma$-convex polyhedron for $\varepsilon$ sufficiently small, and as a non-trivial translate of a $\Gamma$-convex polyhedron is not a $\Gamma$-convex polyhedron, it follows that $Z=0$. \end{proof} \begin{theorem}\label{thm:hess vol def pos pol} For any $h_1,\ldots,h_{d-1}\in[P]$, the symmetric bilinear form $$\mathrm{covol}(\cdot,\cdot,h_1,\ldots,h_{d-1}) $$ is positive definite. \end{theorem} \begin{proof} The proof is analogous to the one of Theorem~\ref{thm:hess vol def pos reg}. \end{proof} \paragraph{Remark on spherical polyhedra} The sets of strongly isomorphic simple $\Gamma$-convex polyhedra form convex cones in vector spaces (Lemma~\ref{lem:[P]}). The mixed-covolume allows us to endow these vector spaces with an inner product. Hence, if we restrict to polyhedra of covolume $1$, those sets are isometric to convex spherical polyhedra. For $d=1$ we get simplices named orthoschemes \cite{polymink}. For $d=2$, if we look at the metric induced on the boundary of the Fuchsian polyhedra, we get spherical metrics on subsets of the spaces of flat metrics with cone-singularities of negative curvature on the compact surfaces of genus $>1$. It could be interesting to investigate the shape of these subsets. \section{General case}\label{sec:gen} \subsection{Convexity of the covolume} \paragraph{Hausdorff metric} Recall that ${\mathcal K}(\Gamma)$ is the set of $\Gamma$-convex bodies for a given $\Gamma$.
For $K,K'\in{\mathcal K}(\Gamma)$ we define the \emph{Hausdorff metric} by $$ d(K,K')=\min\{\lambda \geq 0 | K'+\lambda B \subset K, K+\lambda B\subset K' \}. $$ It is not hard to check that this is a distance and that Minkowski addition and multiplication by a positive scalar are continuous for this distance. If we identify $\Gamma$-convex bodies with their support functions, then ${\mathcal K}(\Gamma)$ is isometric to a convex cone in $C^0({\mathbb H}^d/\Gamma)$ endowed with the maximum norm, i.e.: $$d(K,K')=\sup_{\eta\in{\mathbb H}^d/\Gamma} |h(\eta)-h'(\eta)|.$$ The proofs are easy and formally the same as in the Euclidean case \cite[1.8.11]{Sch93}. \begin{lemma} The covolume is a continuous function. \end{lemma} \begin{proof} Let $K$ be in ${\mathcal K}(\Gamma)$ with support function $h$. For a given $\varepsilon>0$, choose $\lambda>1$ such that $(\lambda^{d+1}-1)\lambda^{d+1} \mathrm{covol}(K)<\varepsilon$. Let $\rho <0$ be such that $h>\rho$, and let $\overline{\alpha}>0$ be the minimum of $h-\rho$. Let $\alpha=\mbox{min}(\overline{\alpha},(1-\lambda)\rho)>0$. In particular, \begin{equation}\label{eq alpha} \rho\leq h-\alpha. \end{equation} Finally, let $\overline{K}$ with support function $\overline{h}$ be such that $d(K,\overline{K})<\alpha$. In particular, $ h-\alpha < \overline{h}$, which, inserted in \eqref{eq alpha}, gives $\rho < \overline{h}$. This and the definition of $\alpha$ give $$\overline{h} \leq h+\alpha \leq h+(1-\lambda)\rho\leq h+(1-\lambda)\overline{h},$$ i.e.~$\lambda \overline{h} \leq h$, i.e.~$\lambda \overline{K}\subset K$, in particular $\mathrm{covol}(K)\leq \lambda^{d+1}\mathrm{covol}(\overline{K})$. In a similar way we get $\mathrm{covol}(\overline{K})\leq \lambda^{d+1}\mathrm{covol}(K)$.
This allows us to write \begin{eqnarray*} \ && \mathrm{covol}(K)-\mathrm{covol}(\overline{K}) \leq (\lambda^{d+1}-1)\mathrm{covol}(\overline{K})\leq (\lambda^{d+1}-1)\lambda^{d+1} \mathrm{covol}(K)<\varepsilon \\ \ && \mathrm{covol}(\overline{K})-\mathrm{covol}(K) \leq (\lambda^{d+1}-1)\mathrm{covol}(K)\leq (\lambda^{d+1}-1)\lambda^{d+1} \mathrm{covol}(K)<\varepsilon \end{eqnarray*} i.e.~$|\mathrm{covol}(K)-\mathrm{covol}(\overline{K})|< \varepsilon$. \end{proof} The general results are based on polyhedral approximation. \begin{lemma}\label{lem: approximation} Let $K_1,\ldots,K_p\in {\mathcal K}(\Gamma)$. There exists a sequence $(P^1_k,\ldots,P^p_k)_k$ of strongly isomorphic simple $\Gamma$-convex polyhedra converging to $(K_1,\ldots,K_p)$. \end{lemma} \begin{proof} First, any $\Gamma$-convex body $K$ is arbitrarily close to a $\Gamma$-convex polyhedron $Q$. Consider a finite number of points on $K$ and let $Q$ be the polyhedron made by the hyperplanes orthogonal to the orbits of these points, and passing through these points. We get $K\subset Q$. For any $\varepsilon >0$, if $Q+\varepsilon B$ is not included in $K$ then add facets to $Q$. The process ends by cocompactness. Let $Q^i$ be a $\Gamma$-convex polyhedron arbitrarily close to $K_i$, and let $P$ be the $\Gamma$-convex polyhedron $Q^1+\cdots+Q^p$. Let us suppose that around a vertex $x$ of $P$, two facets belong to the same orbit for the action of $\Gamma$. We perform a small translation towards $P$ of a support hyperplane at $x$ which is not a support hyperplane of a face containing $x$. A new facet appears, the vertex $x$ disappears, and the two facets in the same orbit share one fewer vertex. Repeating this operation a finite number of times, we get a polyhedron $P'$ with $N(P')>N(P)$ and such that around each vertex, no two facets belong to the same orbit. If $P'$ is not simple, there exists a vertex $x$ of $P'$ at which more than $d+1$ facets meet.
We perform a small parallel move of one of these facets. In this case the number of facets meeting at the vertex $x'$ corresponding to $x$ decreases, and new vertices can appear, but the number of facets meeting at each of those vertices is strictly less than the number of facets meeting at $x$. If the move is sufficiently small, the number of facets meeting at the other vertices is not greater than it was on $P'$. Repeating this operation a finite number of times leads to a simple polyhedron $P''$ with $N(P'')>N(P')$. Now we define $P^i=Q^i+\alpha P''$, with $\alpha>0$ sufficiently small so that $P^i$ remains close to $Q^i$ and hence close to $K_i$. By definition of $P$, $N(P)>N(Q^i)$ and finally $N(P'')>N(Q^i)$, hence $N(P^i)=N(P'')$: all the $P^i$ are strongly isomorphic to $P''$, which is simple. \end{proof} \begin{theorem}\label{them: vol conv} The covolume is a convex function on the space of $\Gamma$-convex bodies: for any $K_1, K_2\in {\mathcal K}(\Gamma)$, $\forall t\in[0,1]$, $$\mathrm{covol}((1-t)K_1+tK_2)\leq (1-t)\mathrm{covol}(K_1)+t\mathrm{covol}(K_2).$$ \end{theorem} \begin{proof} By Lemma~\ref{lem: approximation}, there exist strongly isomorphic simple $\Gamma$-convex polyhedra $P_1$ and $P_2$ arbitrarily close to $K_1$ and $K_2$ respectively. As for simple strongly isomorphic $\Gamma$-convex polyhedra the addition of support vectors is the same as Minkowski addition, Theorem~\ref{thm: hess pos} gives that $$\mathrm{covol}((1-t)P_1+tP_2)\leq (1-t)\mathrm{covol}(P_1)+t\mathrm{covol}(P_2)$$ and the theorem follows by continuity of the covolume. \end{proof} \subsection{Mixed covolume and standard inequalities} \begin{lemma} The covolume on ${\mathcal K}(\Gamma)$ is a homogeneous polynomial of degree $(d+1)$.
Its polar form is the \emph{mixed-covolume} $\mathrm{covol}(\cdot,\ldots,\cdot)$, a continuous non-negative symmetric map on $({\mathcal K}(\Gamma))^{d+1}$ such that $$\mathrm{covol}(K,\ldots,K)=\mathrm{covol}(K).$$ Moreover if we restrict to a space of strongly isomorphic simple $\Gamma$-convex polyhedra, or to the space of $C^{\infty}_+$ $\Gamma$-convex bodies, then $\mathrm{covol}(\cdot,\ldots,\cdot)$ is the same map as the one previously considered. \end{lemma} \begin{proof} Let us define \begin{equation}\label{eq:polar} \mathrm{covol}(K_1,\ldots,K_{d+1})=\frac{1}{(d+1)!}\sum_{k=1}^{d+1} (-1)^{d+1+k}\sum_{i_1<\cdots<i_{k}} \mathrm{covol}(K_{i_1}+\cdots +K_{i_{k}}) \end{equation} which is a symmetric map. From the continuity of the covolume and of the Minkowski addition, it is a continuous map. In the case when the $K_i$ are strongly isomorphic simple polyhedra, the right-hand side of \eqref{eq:polar} reduces to the mixed-covolume previously introduced \cite[5.1.3]{Sch93} (we could also have used another polarization formula \cite[(A.5)]{Hor07}). Let us consider a sequence of strongly isomorphic simple $\Gamma$-convex polyhedra $P_1(k),\ldots,P_{d+1}(k)$ converging to $K_1,\ldots,K_{d+1}$ (Lemma~\ref{lem: approximation}). From the definition of the mixed-covolume we have $$\mathrm{covol}(\lambda_1 P_1(k)+\cdots+\lambda_{d+1} P_{d+1}(k))=\sum_{i_1,\ldots,i_{d+1}=1}^{d+1} \lambda_{i_1}\cdots\lambda_{i_{d+1}} \mathrm{covol}(P_{i_1}(k),\ldots,P_{i_{d+1}}(k))$$ and by continuity, passing to the limit, $$\mathrm{covol}(\lambda_1 K_1+\cdots+\lambda_{d+1} K_{d+1})=\sum_{i_1,\ldots,i_{d+1}=1}^{d+1} \lambda_{i_1}\cdots\lambda_{i_{d+1}} \mathrm{covol}(K_{i_1},\ldots,K_{i_{d+1}})$$ so the covolume is a polynomial, and $\mathrm{covol}(\cdot,\ldots,\cdot)$ introduced at the beginning of the proof is its polarization. It is non-negative due to Corollary~\ref{cor: pol mix vol pos}.
In the case of $C^{2}_+$ $\Gamma$-convex bodies, both notions of mixed-covolume satisfy \eqref{eq:polar}. \end{proof} \begin{theorem}\label{thm:general} Let $K_i\in {\mathcal K}(\Gamma)$ and $0<t<1$. We have the following inequalities. \begin{eqnarray*} \ &&\mbox{\emph{Reversed Alexandrov--Fenchel inequality:}} \\ \ &&\mathrm{covol}(K_1,K_2,K_3,\ldots,K_{d+1})^2\leq \mathrm{covol}(K_1,K_1,K_3,\ldots,K_{d+1})\mathrm{covol}(K_2,K_2,K_3,\ldots,K_{d+1}) \\ \ &&\mbox{\emph{First reversed Minkowski inequality:}}\\ \ &&\mathrm{covol}(K_1,K_2,\ldots,K_2)^{d+1}\leq \mathrm{covol}(K_2)^{d}\mathrm{covol}(K_1)\\ \ &&\mbox{\emph{Second or quadratic reversed Minkowski inequality:}}\\ \ &&\mathrm{covol}(K_1,K_2,\ldots,K_2)^2\leq \mathrm{covol}(K_2)\mathrm{covol}(K_1,K_1,K_2,\ldots,K_2)\\ \ &&\mbox{\emph{Reversed Brunn--Minkowski inequality:}} \\ \ &&\mathrm{covol}((1-t)K_1+tK_2)^{\frac{1}{d+1}} \leq (1-t)\mathrm{covol}(K_1)^{\frac{1}{d+1}}+t\mathrm{covol}(K_2)^{\frac{1}{d+1}} \\ \ &&\mbox{\emph{Reversed linearized first Minkowski inequality:}}\\ \ && (d+1) \mathrm{covol}(K_1,K_2,\ldots,K_2)\leq d\mathrm{covol}(K_2)+\mathrm{covol}(K_1) \end{eqnarray*} If all the $K_i$ are $C^{\infty}_+$ or strongly isomorphic simple polyhedra, then equality holds in reversed Alexandrov--Fenchel and second reversed Minkowski inequalities if and only if $K_1$ and $K_2$ are homothetic. \end{theorem} In the classical case of Euclidean convex bodies, the linearized first Minkowski inequality is valid only on particular subsets of the space of convex bodies, see \cite[(6.7.11)]{Sch93}. \begin{proof} Let $P_1(k),\ldots,P_{d+1}(k)$ be a sequence of simple strongly isomorphic $\Gamma$-convex polyhedra converging to $K_1,\ldots,K_{d+1}$ (Lemma~\ref{lem: approximation}). 
Applying the Cauchy--Schwarz inequality to the inner product $\mathrm{covol}(\cdot,\cdot,P_3(k),\ldots,P_{d+1}(k))$ (Theorem~\ref{thm:hess vol def pos pol}) at $(P_1(k),P_2(k))$ and passing to the limit gives the reversed Alexandrov--Fenchel inequality. The equality cases follow from Theorems~\ref{thm:hess vol def pos reg} and \ref{thm:hess vol def pos pol}. The second reversed Minkowski inequality and its equality case follow from the reversed Alexandrov--Fenchel inequality. As the covolume is convex (Theorem~\ref{them: vol conv}), for $\overline{K}_1$ and $\overline{K}_2$ of unit covolume and $\overline{t}\in[0,1]$ we get $$\mathrm{covol}((1-\overline{t})\overline{K}_1+\overline{t}\overline{K}_2)\leq 1.$$ Taking $\overline{K}_i=K_i/\mathrm{covol}(K_i)^{\frac{1}{d+1}}$ and $$\overline{t}=\frac{t\mathrm{covol}(K_2)^{\frac{1}{d+1}}}{(1-t)\mathrm{covol}(K_1)^{\frac{1}{d+1}}+t\mathrm{covol}(K_2)^{\frac{1}{d+1}}} $$ leads to the reversed Brunn--Minkowski inequality. As $\mathrm{covol}(\cdot)$ is convex, the map $$f(\lambda)=\mathrm{covol}((1-\lambda)K_1+\lambda K_2)-(1-\lambda)\mathrm{covol}(K_1)-\lambda \mathrm{covol}(K_2), \quad 0\leq \lambda \leq 1, $$ is convex. As $f(0)=f(1)=0$, we have $f'(0)\leq 0$, which is the reversed linearized first Minkowski inequality. (Remember that $$\mathrm{covol}((1-\lambda)K_1+\lambda K_2)=(1-\lambda)^{d+1}\mathrm{covol}(K_1)+(d+1)(1-\lambda)^d\lambda \mathrm{covol}(K_1,\ldots,K_1,K_2)+\lambda^2[\ldots].$$) Reversed Brunn--Minkowski says that the map $\mathrm{covol}(\cdot)^{\frac{1}{d+1}}$ is convex. Doing the same as above with the convex map $$g(\lambda)=\mathrm{covol}((1-\lambda)K_1+\lambda K_2)^{\frac{1}{d+1}}-(1-\lambda)\mathrm{covol}(K_1)^{\frac{1}{d+1}}-\lambda \mathrm{covol}(K_2)^{\frac{1}{d+1}}, \quad 0\leq \lambda \leq 1, $$ leads to the first reversed Minkowski inequality. \end{proof} The \emph{(Minkowski) area} $S(K)$ of a $\Gamma$-convex body $K$ is $(d+1)\mathrm{covol}(B,K,\ldots,K)$.
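Since $\mathrm{covol}$ is a polynomial of degree $d+1$, the multilinearity and symmetry of the mixed-covolume give the expansion (a short verification, spelling out why the area can be recovered as a derivative of the covolume): $$\mathrm{covol}(K+\varepsilon B)=\sum_{k=0}^{d+1}\binom{d+1}{k}\varepsilon^{k}\,\mathrm{covol}(\underbrace{B,\ldots,B}_{k},\underbrace{K,\ldots,K}_{d+1-k})=\mathrm{covol}(K)+\varepsilon\,(d+1)\,\mathrm{covol}(B,K,\ldots,K)+O(\varepsilon^{2}),$$ so the coefficient of $\varepsilon$ is exactly $S(K)$.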
Note that it can be defined from the covolume: $$S(K)=\lim_{\varepsilon\rightarrow 0^+} \frac{\mathrm{covol}(K+\varepsilon B)-\mathrm{covol}(K)}{\varepsilon}. $$ The following inequality says that, among $\Gamma$-convex bodies of area $1$, $B$ has the smallest covolume, or equivalently that among $\Gamma$-convex bodies of covolume $1$, $B$ has the largest area. \begin{corollary}[Isoperimetric inequality] Let $K$ be a $\Gamma$-convex body. Then $$\left(\frac{S(K)}{S(B)}\right)^{d+1}\leq \left(\frac{\mathrm{covol}(K)}{\mathrm{covol}(B)}\right)^d. $$ \end{corollary} \begin{proof} It follows from the first reversed Minkowski inequality with $K_1=B$, $K_2=K$, divided by $S(B)^{d+1}$, using $(d+1)\mathrm{covol}(B)=S(B)$. \end{proof} \begin{lemma} If $K$ is a $C^{\infty}_+$ $\Gamma$-convex body, then $S(K)$ is the volume of the Riemannian manifold $\partial K/\Gamma$. If $K$ is a $\Gamma$-convex polyhedron, then $S(K)$ is the total face area of $K$ (the sum of the areas of the facets of $K$ in a fundamental domain). \end{lemma} In particular $S(B)$ is the volume of the compact hyperbolic manifold $ {\mathbb H}^d/\Gamma$. \begin{proof} The $C^2_+$ case follows from the formulas in Section~\ref{sec:reg}, because $B$ is a $C^2_+$ convex body. Let $K$ be polyhedral. Let $(P_k)_k$ be a sequence of polyhedra converging to $B$ and such that all the support numbers of $P_k$ are equal to $1$ (i.e.~all facets are tangent to ${\mathbb H}^d$). Adding facets if necessary, we can construct $P_k$ such that $N(P_k)>N(K)$ and $P_k$ is simple. Let $\alpha$ be a small positive number. The polyhedron $K+ \alpha P_k$ is strongly isomorphic to $P_k$. It follows from the formulas of Section~\ref{sec:pol} that $(d+1)\mathrm{covol}(P_k,K+ \alpha P_k,\ldots,K+ \alpha P_k)$ is equal to the total face area of $K+\alpha P_k$. By continuity of the mixed-covolume, $(d+1)\mathrm{covol}(P_k,K+ \alpha P_k,\ldots,K+ \alpha P_k)$ converges to $(d+1)\mathrm{covol}(P_k,K,\ldots,K)$ when $\alpha$ goes to $0$.
We associate to $K$ a support vector $h(K)$ whose entries are the support numbers of the facets of $K$, but also of support hyperplanes of $K$ parallel to the facets of $P_k$. We also consider ``false faces'' of larger codimension, such that the resulting normal fan is the same as the one of $P_k$. This is possible as the normal fan of $P_k$ is finer than the one of $K$. The support numbers of the false faces can be computed using \eqref{eq:supp nb eucl} and \eqref{eq: supp num mink} (i.e.~$K$ is seen as an element of the closure of $[P_k]$). In particular $h(K+\alpha P_k)=h(K)+\alpha h(P_k)$, and as the map $s$ giving the support numbers of a facet in terms of the support numbers of the polyhedron is linear, the area of this facet is $V_E(s(h(K))+\alpha s(h(P_k)))$. By continuity of the Euclidean volume, when $\alpha$ goes to $0$ this area goes to the area of the facet of $K$ (it is $0$ if the facet was a ``false facet'' of $K$). Hence $(d+1)\mathrm{covol}(P_k,K,\ldots,K)$ is equal to the total face area of $K$, and on the other hand it goes to $S(K)$ when $k$ goes to infinity. \end{proof} Let us end with an example. Let $K$ be a polyhedral $\Gamma$-convex body with support numbers equal to $1$. In this case $S(K)=(d+1)\mathrm{covol}(K)$, and as $S(B)=(d+1)\mathrm{covol}(B)$, the isoperimetric inequality becomes $$\frac{S(K)}{S(B)}\leq 1.$$ Let $d=2$ and $\Gamma$ be the Fuchsian group which has a regular octagon as a fundamental domain in the Klein model of ${\mathbb H}^2$. Then by the Gauss--Bonnet theorem $S(B)=4\pi$. The total face area of $K$ is the area of only one facet, which is eight times the area of a Euclidean triangle of height $h'=\frac{\cosh \varphi-1}{\sinh \varphi}$ and base length two times $h'\frac{1-\cos \pi/4}{\sin \pi/4}$ (see \eqref{eq:supp nb eucl} and \eqref{eq: supp num mink}).
Here $\varphi$ is the distance between a point of ${\mathbb H}^2$ and its image by a generator of $\Gamma$, and $\cosh \varphi = 2+2\sqrt{2}$ (compare Example~C p.~95 in \cite{kat92} with Lemma~12.1.2 in \cite{MR03}). By a direct computation the isoperimetric inequality becomes $$0.27\approx 13-9\sqrt{2} \leq \frac{\pi}{2}\approx 1.57.$$ \paragraph{Remarks on equality cases and general Minkowski theorem} The Brunn--Minkowski inequality for non-degenerate convex bodies in Euclidean space comes with a description of the equality case. Namely, equality occurs for a $t$ if and only if the bodies are homothetic (the ``if'' part is trivial). At first sight it is not possible to adapt the standard proof of the equality case to the Fuchsian case, as it relies heavily on translations \cite{BF87,Sch93,Ale05}. If such a result were known, it would imply, in a way formally equivalent to the classical one, the characterization of the equality case in the reversed first Minkowski inequality, as well as the uniqueness part in the Minkowski theorem and the equality case in the isoperimetric inequality (see below). The Minkowski problem in the classical case is to find a convex body having a prescribed measure as ``area measure'' (see the notes of Section~5.1 in \cite{Sch93}). It can be solved by approximation (by $C^2_+$ or polyhedral convex bodies), see \cite{Sch93}, or by a variational argument using the volume, see \cite{Ale96}. Both methods require a compactness result, which is known as the Blaschke selection theorem. Another classical question about the Minkowski problem in the $C^2_+$ case is the regularity of the hypersurface with respect to the regularity of the curvature function; see the survey \cite{TW08}. All those questions can be transposed to the setting of Fuchsian convex bodies. \begin{spacing}{0.9} \begin{footnotesize} \bibliographystyle{apalike}
\section{Introduction.} In recent years we have witnessed a resurrection of interest in light-front Hamiltonian physics in two areas. The first one, the new nonperturbative approach to QCD \cite{thelongpaper}, is related to the original application of light-front coordinates \cite{ancient}, i.e. hadron spectroscopy, and the other comes from string theory \cite{thorn}. The popular M-theory \cite{Mtheory} is formulated in light-front coordinates \cite{motl}. With the rise of the second application, some rather academic questions became of interest. For example, dualities in string theories are one of the most powerful tools, yet not much is known about how they work on the light front. In this paper, we attempt to study one of the simplest cases of known dualities: the electromagnetic duality in Abelian gauge field theory in 3+1 dimensions. Susskind has conjectured \cite{iowa} that since light-front coordinates are non-local in the longitudinal direction, it might be possible to formulate a light-front theory with both electric and magnetic sources without having to introduce any additional non-localities corresponding to Dirac strings \cite{goddardolive}. He observed that the role of electric and magnetic fields reverses in the light-front Hamiltonian (which contains only the physical, transverse fields) when the original fields are replaced by (transverse) fields perpendicular to them. He concluded that the above described transformation of fields is the electromagnetic duality on the light front. Then he suggested that magnetic sources be added to the Hamiltonian by symmetry. In this paper we investigate this idea.
The paper is organized as follows: Since there are some misconceptions in the literature, and since each beginning researcher in this field has to set up his/her own notes on the light-front conventions, we summarize in Section 2 the formalism of light-front coordinates and free Abelian fields, and we establish the connection between the components of the $F^{\mu \nu} $ tensor and electric and magnetic fields. We show how classical electric sources can be added to the theory. For completeness, we list the surface terms even though they do not enter the calculation presented here, and we list some manipulations with the $(\partial^+)^{-1}$ operator. Further, we wish to mention that there are, in general, problems regarding components of light-front currents other than the $+$ component, even though this does not affect our calculation since we restrict ourselves to external classical currents. Section 2 is rather formal; a reader familiar with the light front may want to skip most of it. Section 3 is devoted to Susskind's idea. The last section contains our conclusions. \section{Light-front field theory: Formalities} In a light-front quantum field theory, fields are quantized at an equal light-front time \cite{dirac}. Advantages of a light-front formulation are: the light front has the largest kinematic subgroup of Lorentz generators, boosts are kinematic \cite{coester}, and the light-front vacuum can be decoupled from the physical states by imposing a longitudinal momentum cutoff \cite{thelongpaper}. The price to pay is a more complicated renormalization; rotations involving the z-axis are dynamical, and as a consequence the physical picture is less intuitive for nonrelativistic systems at rest, which are naturally described in equal-time coordinates. On the other hand, the light front is a natural framework for highly relativistic systems (e.g. the description of deep inelastic scattering).
\subsection{ Light-front coordinates} Dirac showed that it is possible to formulate relativistic dynamics in forms other than the usual equal-time form, in which everything is expressed in terms of dynamical variables at one instant of time (hence {\it instant form}). The other forms he found are the {\it point form} and the {\it front form} \cite{dirac}. In the front form (or light front) $x^+ = t + z$ plays the role of time. The remaining coordinates, $x^- = t-z$ and $x^{\perp} \equiv (x^1, x^2)$, are spatial. Given a four-vector $a$, its components in light-front coordinates are: \footnote{There are two slightly different conventions regarding the $+,-$ components. The other one differs from the one used here by a factor of $\sqrt{2}$: $a^{\pm} = (a^0 \pm a^3)/\sqrt{2}$, so that the metric tensor $g^{+-}=1$. It is, therefore, a good idea to check the definitions of coordinates before comparing any results. } \begin{eqnarray} a^- & = & a^0 - a^3 , \nonumber\\ a^+ & = & a^0 + a^3 , \nonumber\\ a^{\perp} & = &(a^1, a^2) . \end{eqnarray} The scalar product of two four-vectors $a,b$ is: \begin{eqnarray} a_{\mu} b^{\mu} = {1\over{2}} a^+ b^- + {1\over{2}} a^- b^+ - a^{\perp}\cdot b^{\perp}.
\end{eqnarray} The metric tensor in light-front coordinates is: \begin{eqnarray} g_{\mu \nu} = \left( \begin{array}{cccc} g_{++} & g_{+-} & g_{+1} & g_{+2} \\ g_{-+} & g_{--} & g_{-1} & g_{-2} \\ g_{1+} & g_{1-} & g_{11} & g_{12} \\ g_{2+} & g_{2-} & g_{21} & g_{22} \end{array} \right) =\left( \begin{array}{cccc}0 & 1\over{2} & 0 & 0\\ 1\over{2} & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 &-1 \end{array} \right) \end{eqnarray} and \begin{eqnarray} g^{\mu \nu} = \left( \begin{array}{cccc} g^{++} & g^{+-} & g^{+1} & g^{+2} \\ g^{-+} & g^{--} & g^{-1} & g^{-2} \\ g^{1+} & g^{1-} & g^{11} & g^{12} \\ g^{2+} & g^{2-} & g^{21} & g^{22} \end{array} \right) =\left( \begin{array}{cccc} 0 & 2 & 0 & 0\\ 2 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 &-1 \end{array}\right) \end{eqnarray} We can now write down the derivatives with respect to the coordinates: \begin{eqnarray} \partial ^- & = & {\partial \over{\partial x_-}} = 2 {\partial \over{\partial x^+}} = \partial ^0 - \partial ^3 , \nonumber\\ \partial ^+ & = &{\partial \over{\partial x_+}} = 2 {\partial \over{\partial x^-}} = \partial ^0 + \partial ^3 , \nonumber\\ \partial ^{\perp} & = & (\partial ^1, \partial ^2) . \end{eqnarray} $\partial ^-$ is the time derivative; the remaining derivatives are spatial. The four-dimensional volume element is: \begin{eqnarray} \big[ d^4x \big] = {1\over{2}} dx^+dx^-d^2x^{\perp} . \end{eqnarray} Let $p= (p^-, p^+, p^1, p^2)$ be the four-momentum of a free particle with mass $m$ in light-front coordinates. Then \begin{eqnarray} p_{\mu} x^{\mu} = {1\over{2}} p^+ x^- + {1\over{2}} p^- x^+ - p^{\perp}\cdot x^{\perp}. \end{eqnarray} $p^+$ is the {\it longitudinal} momentum, $p^1$ and $p^2$ are the {\it transverse} momenta, and $p^-$ is {\it the light-front energy}: \begin{eqnarray} p^- ={{p^{\perp}}^2 + m^2 \over{p^+}} .
\end{eqnarray} The Lorentz-invariant momentum integration element is obtained as follows: \begin{eqnarray} \big[ d^4q \big] \, 2 \pi \, \delta (m^2 -q^2) = {1\over{2}} {dq^- dq^+ d^2q^{\perp}\over{ (2\pi)^3}} \delta(m^2 -q^+ q^- +q^{\perp \ 2}) ={dq^+ d^2q^{\perp} \over{2 (2 \pi)^3 q^+}} . \end{eqnarray} The light-front energy is well defined apart from peculiar modes which have zero longitudinal momentum (so-called {\it zero modes}). $p^+$, which is equal to $p^0 +p^3$, satisfies $p^+ \geq 0$. This means that in the vacuum all particles must have precisely zero longitudinal momentum. From the expression for the light-front energy we can see that the energy diverges as $p^+ \rightarrow 0$ for massive particles. For massless particles, the light-front energy can be finite even at $p^+ =0$, but the vacuum can be made trivial by imposing a small longitudinal momentum cutoff, e.g. requiring that all longitudinal momenta satisfy $p_i^+ >\epsilon$. Another frequently used method of regularization is discretized light-cone quantization (DLCQ) \cite{DLCQ}, which removes both ultraviolet and infrared divergences. The physics of $p^+=0$ cannot be recovered by renormalization with respect to high energy states, and has to be added by hand using counterterms. The so-called ``constraint zero mode'' \cite{zero} is a specific counterterm arising from DLCQ, and it does not require a nontrivial vacuum structure. \subsection{Free Abelian gauge fields} Let us start with pure electromagnetism. The Lagrangian density is: \begin{eqnarray} {\cal L} = -{1\over{4}} F_{\mu \nu} F^{\mu \nu} , \end{eqnarray} where \begin{eqnarray} F^{\mu \nu} = \partial ^{\mu} A^{\nu} - \partial ^{\nu} A^{\mu}. \end{eqnarray} In a light-front formulation, indices $\mu$, $\nu$ run through $+, -$, and $ \perp =(1,2)$.
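Before fixing the gauge, it is a useful check (a routine index computation with the light-front metric given above) to write the Maxwell invariant in light-front components: $$-\frac{1}{4}F_{\mu \nu} F^{\mu \nu} = \frac{1}{8}\left(F^{+-}\right)^2 + \frac{1}{2}\, F^{+i} F^{-i} - \frac{1}{4}\, F^{ij}F^{ij} ,$$ which reduces term by term to the gauge-fixed Lagrangian density below once $A^+=0$ is imposed.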
In {\it light-front gauge}, $A^+ =0$, and the Lagrangian density reduces to: \begin{eqnarray} {\cal L} = {1\over{8}} (\partial ^+ A^-)^2 + {1\over{2}}\partial ^+ A^i \partial ^-A^i - {1\over{2}} \partial ^+ A^i \partial ^i A^- - {1\over{4} } \left(\partial ^i A^j -\partial ^j A^i\right)^2 . \end{eqnarray} The Lagrangian density does not contain a time derivative of $A^-$, so it is immediately obvious that this component of $A^{\mu}$ is not dynamical. Indeed, the momenta $\Pi$ conjugate to the fields are: \begin{eqnarray} \Pi _{A^i} = {\partial {\cal L} \over{\partial \ \partial ^- A^i}} & = & {1\over{2}} \partial ^+ A^i ,\nonumber\\ \Pi _{A^-} = {\partial {\cal L} \over{\partial \ \partial ^- A^-}} & = & 0 . \end{eqnarray} $A^-$ can be eliminated using the equations of motion: \begin{eqnarray} {\partial {\cal L} \over{ \partial A^-}} = \partial ^{\mu} {\partial {\cal L} \over{ \partial \ \partial ^{\mu} A^-} }, \end{eqnarray} leading to \begin{eqnarray} \left( \partial ^+ \right)^2 A^- = 2 \partial ^+ \partial ^i A^i . \end{eqnarray} Apart from zero modes (i.e. the $p^+=0$ states introduced above), $\partial ^+$ can be inverted, and $A^-$ is then given by: \begin{eqnarray} A^- = {2\over{\partial ^+}} \partial ^i A^i . \end{eqnarray} To proceed further, some manipulations with $(\partial^+)^{-1}$ are needed. Up to a constant in $x^-$ which can depend on the remaining coordinates, the operator $(\partial^+)^{-1}$ is defined as follows: \begin{eqnarray*} {1\over{\partial^+}}f(x^-) \equiv \int {dy^-\over{4}} \epsilon (x^- -y^-) f(y^-) , \end{eqnarray*} where $\epsilon(x) = 1$ for $x>0$, $\epsilon(-x) = - \epsilon(x)$, and $f(x^-)$ is an arbitrary function.
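The integral definition of $(\partial^+)^{-1}$ can be checked on a grid: applying $\partial^+ = 2\,\partial/\partial x^-$ to the kernel integral returns the original function, and the antisymmetry of $\epsilon(x^- - y^-)$ gives the integration-by-parts property used below. A minimal discretized sketch (a Gaussian test function vanishing at the boundaries is assumed):

```python
import numpy as np

# discretized x^- axis and test function vanishing at the boundaries
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
f = np.exp(-x**2)

# (1/d^+) f (x^-) = int dy^-/4 * eps(x^- - y^-) f(y^-), with eps the sign function
eps = np.sign(x[:, None] - x[None, :])

def inv_dplus(h):
    return (eps * h[None, :]).sum(axis=1) * dx / 4.0

# d^+ = 2 d/dx^- applied to (1/d^+) f returns f away from the boundaries
F = inv_dplus(f)
f_back = 2.0 * np.gradient(F, dx)
assert np.max(np.abs(f_back[100:-100] - f[100:-100])) < 1e-3

# antisymmetry of the kernel: int f (1/d^+) g = - int ((1/d^+) f) g
g = x * np.exp(-x**2)
lhs = np.sum(f * inv_dplus(g)) * dx
rhs = -np.sum(inv_dplus(f) * g) * dx
assert abs(lhs - rhs) < 1e-8
```

The second assertion holds to machine precision because the discrete kernel matrix is exactly antisymmetric, mirroring the continuum argument.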
Using the properties of $\epsilon (x)$ it is straightforward to find: \begin{eqnarray*} \int d^3x \left({1\over{\partial^+}}f(x)\right)^2 = - \int d^3x f(x) \left({1\over{\partial^+}}\right)^2f(x), \end{eqnarray*} \begin{eqnarray*} \int d^3x f(x) \left({1\over{\partial^+}}g(x)\right) = -\int d^3x \left({1\over{\partial^+}}f(x)\right)g(x). \end{eqnarray*} Substituting for $A^-$, and using the properties of the operator $(\partial^+)^{-1}$ shown above, gives the Lagrangian in terms of physical degrees of freedom: \begin{eqnarray} {\cal L} = {1\over{2}} \partial ^+ A^i \partial ^- A^i - {1\over{2}} \left( \partial ^i A^i\right) ^2 -{1\over{4}} \left( \partial ^i A^j -\partial ^j A^i \right)^2 + {\rm surface \ \ terms} , \end{eqnarray} where the surface terms, \begin{eqnarray} -\left\{ \partial ^+ \left( A^j \partial ^j {1\over{\partial ^+}} \partial ^i A^i \right) - \partial ^j \left( A^j \partial ^i A^i \right) \right\} , \end{eqnarray} are traditionally dropped. $F^{\mu \nu} $ in terms of $A^i$ is: \begin{eqnarray} F^{+-} = 2 \partial ^i A^i , & \ & F^{+i} = \partial ^+ A^i , \nonumber\\ F^{-i} = \partial ^- A^i - \partial^i {2\over{\partial ^+}}\partial ^j A^j ,& \ & F^{ij}= \partial ^i A^j - \partial ^j A^i . \end{eqnarray} Let us also introduce the dual tensor $\tilde{F}^{\mu \nu} = 1/2 \epsilon ^{\mu \nu \lambda \rho} F_{\lambda \rho}$, where $\epsilon ^{\mu \nu \lambda \rho}$ is totally antisymmetric, \begin{eqnarray} \epsilon^{+-12} \equiv 2 , \end{eqnarray} so that it satisfies \begin{eqnarray} \epsilon ^{\mu \nu \alpha \beta} \epsilon _{\mu} ^{\ \nu ' \alpha ' \beta '} & = & g^{\nu \alpha '} g^{\alpha \nu '} g^{\beta \beta '} + g^{\nu \nu '} g^{\alpha \beta '} g^{\beta \alpha '} + g^{\nu \beta ' } g^{\alpha \alpha ' } g^{\beta \nu '} \nonumber\\ & & - g^{\nu \nu '} g^{\alpha \alpha '} g^{\beta \beta '} - g^{\nu \beta '} g^{\alpha \nu ' } g^{\beta \alpha '} - g^{\nu \alpha '} g^{\alpha \beta '} g^{\beta \nu '} . 
\end{eqnarray} Then, \begin{eqnarray} \tilde{F}^{+-} = \epsilon_{ij} F^{ij}, & \ & \tilde{F}^{+i} = \epsilon_{ij} F^{+j}, \nonumber\\ \tilde{F}^{ij} = -{1\over{2}}\epsilon_{ij}F^{+-}, & \ & \tilde{F}^{-i} = - \epsilon_{ij} F^{-j} , \end{eqnarray} where $\epsilon_{ij}$ is the two-dimensional antisymmetric symbol with $\epsilon_{12}=1$. Let us note that $F$ and $\tilde{F}$ are related by the electromagnetic duality $\vec{B} \rightarrow \vec{E}$, $\vec{E} \rightarrow -\vec{B}$. The connection between the electric and magnetic fields $\vec{E}$ and $\vec{B}$ and the tensor $F^{\mu \nu}$ given in light-front coordinates is shown in the next section. \subsection{ Connection between electric and magnetic fields $\vec{E}$ and $\vec{B}$ and the tensor $F^{\mu \nu}$ in light-front coordinates.} The connection between $\vec{B}$, $\vec{E}$ and $F^{\mu \nu}$ can be established using the definition of the potential: \begin{eqnarray} \vec{E} & = &-\vec{\nabla} A^0 - {\partial \over{\partial t}}\vec{A} , \nonumber\\ \vec{B} & = & \vec{\nabla} \times \vec{A} .
\end{eqnarray} Substituting: \begin{eqnarray} A^0 = {1\over{2}} \left( A^+ +A^-\right), & \ & A^3 = {1\over{2}} \left( A^+ - A^- \right), \nonumber\\ {\partial \over{\partial t}} = \partial ^0 = {1\over{2}} \left( \partial ^+ + \partial ^-\right), & \ & {\partial \over{\partial z}} = - \partial ^3 = - {1\over{2}} \left( \partial ^+ - \partial ^- \right), \end{eqnarray} we obtain: \begin{eqnarray} E^i = -{1\over{2}}\left( F^{+i} +F^{-i} \right) , & \ & E^z = {1\over{2}} F^{+-}, \nonumber\\ B^i = \epsilon_{ij} {1\over{2}} \left( F^{+j} - F^{-j} \right), & \ & B^z = -{1\over{2}} \epsilon _{ij} F^{ij}, \end{eqnarray} or, \begin{eqnarray} F^{+-} = 2E^z , & \ & \tilde{F}^{+-} = -2B^z \nonumber\\ F^{+i} = -\left( E^i + \epsilon _{ij} B^j \right), & \ & \tilde{F}^{+i} = \left( B^i - \epsilon _{ij} E^j \right) ,\nonumber\\ F^{ij} = -\epsilon _{ij} B^z, & \ & \tilde{F}^{ij} = -\epsilon _{ij} E^z, \nonumber\\ F^{-i} = -\left( E^i - \epsilon _{ij} B^j \right) , & \ & \tilde{F}^{-i} = \left( B^i + \epsilon _{ij} E^j \right) , \end{eqnarray} where $i,j$ are transverse indices ($i,j =(1,2)$). These definitions ensure that the Lagrangian equations of motion give the correct set of Maxwell's equations. In ref. \cite{xy} the magnetic and electric fields are defined differently, in particular, in analogy with equal time, $E^{\mu} = 1/2 F^{+\mu}$ and $B^- =F^{12}$, but this is misleading, because these definitions do not lead to Maxwell's equations. Moreover, $\vec{E}$ and $\vec{B}$ are not four-vectors, $E^0$ and $B^0$ are not defined, so there is no natural way to form the minus and plus components. \subsection{ Adding classical electric sources} In this section we add classical electric sources. 
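Before doing so, the dictionary just established can be verified numerically: under $\vec{B} \rightarrow \vec{E}$, $\vec{E} \rightarrow -\vec{B}$ the $F$ components listed above turn exactly into the $\tilde{F}$ components. A small sketch with arbitrary field values:

```python
import numpy as np

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])   # eps_{ij} with eps_{12} = 1

def F_comp(E, B):
    """Light-front components (F^{+-}, F^{+i}, F^{12}, F^{-i}) built from
    E = (E^1, E^2, E^z) and B = (B^1, B^2, B^z) following the dictionary above."""
    return (2.0 * E[2],
            -(E[:2] + eps @ B[:2]),
            -B[2],                          # F^{12} = -eps_{12} B^z
            -(E[:2] - eps @ B[:2]))

def Ftilde_comp(E, B):
    return (-2.0 * B[2],
            B[:2] - eps @ E[:2],
            -E[2],                          # F~^{12} = -eps_{12} E^z
            B[:2] + eps @ E[:2])

rng = np.random.default_rng(0)
E, B = rng.standard_normal(3), rng.standard_normal(3)

# duality B -> E, E -> -B maps the F components onto the F~ components
for a, b in zip(F_comp(-B, E), Ftilde_comp(E, B)):
    assert np.allclose(a, b)
```

The random field values are illustrative; the identification holds componentwise for any $\vec{E}$, $\vec{B}$.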
The Lagrangian density in the presence of classical sources $j_{\mu}$ is: \begin{eqnarray} {\cal L} = -{1\over{4}} F^{\mu \nu} F_{\mu \nu} - j_{\mu} A^{\mu} . \end{eqnarray} As before, $A^- $ is not dynamical and it can be eliminated using the equations of motion, leading to: \begin{eqnarray} A^- & = & {2\over{\partial ^+}} \left( \partial ^i A^i - {1\over{\partial ^+}} j^+ \right) \nonumber\\ & = & A^-_{\rm free} - {2\over{(\partial ^+)^2}}j^+ . \end{eqnarray} Replacing $A^-$ modifies $F^{\mu \nu}$, in particular, the $+$-component of the current is absorbed into $F^{+-}$ and $F^{-i} $: \begin{eqnarray} F^{+-} & = & F_{\rm free}^{+-} -{2\over{\partial ^+}} j^+ , \nonumber\\ F^{-i} & = & F_{\rm free}^{-i} -\partial ^i {2\over{(\partial ^+)^2}}j^+ , \end{eqnarray} where $F_{\rm free}^{\mu \nu}$ is given in the previous sections. The remaining two components of $F^{\mu \nu}$ are unchanged. The Lagrangian density then reads: \begin{eqnarray} {\cal L} = {1\over{2}} \partial ^+ A^i \partial ^- A^i - {1\over{2}} \left( \partial ^i A^i -{1\over{\partial ^+}} j^+ \right) ^2 -{1\over{4}} \left( \partial ^i A^j -\partial ^j A^i \right)^2 +j^{\perp} A^{\perp} \nonumber\\ +{\rm surface \ \ terms} , \end{eqnarray} and the surface terms are also modified: \begin{eqnarray} \lefteqn{{\rm surface \ terms} =}\nonumber\\ & - & \partial ^+ \left[ A^j \partial ^j {1\over{\partial ^+}} \left( \partial ^i A^i - {1\over{\partial^+}}j^+\right) \right] - \partial ^j \left[ A^j \left( \partial ^i A^i -{1\over{\partial^+}} j^+ \right)\right] \nonumber\\ & + & {1\over{\partial^+}}\left[ j^+\left( \partial^i A^i -{1\over{\partial^+}} j^+ \right)\right] .
\end{eqnarray} The Lagrangian equations of motion are: \begin{eqnarray} \partial _{\mu} F^{\mu i} = j^i , \end{eqnarray} $\partial _{\mu}F^{\mu +} =j^+$ is satisfied identically, and using the equations of motion it can be shown that \begin{eqnarray*} \partial _{\mu}F^{\mu -} = -{2\over{\partial ^+}}\left[ {1\over{2}} \partial ^- j^+ - \partial ^i j^i \right] , \end{eqnarray*} which implies a continuity equation for $j^\mu$. The Hamiltonian density in the presence of classical sources is: \begin{eqnarray} {\cal H} ={1\over{2} } (\partial ^i A^i-{1\over{\partial ^+}} j^+ )^2 + {1\over{4}}(\partial^i A^j -\partial^j A^i)^2 -j^{\perp} A^{\perp} , \end{eqnarray} and the fields $A^i$ can be quantized as if they were free. \section{Electromagnetic duality} In this section we investigate the question of whether it is possible to formulate electromagnetic duality as a transformation of the potential $A^{\perp}$ itself rather than the field strength tensor and its dual. Given the Hamiltonian in the transverse degrees of freedom, a natural starting point is the transformation \begin{eqnarray} A^i \ \ \rightarrow \ \ \tilde{A}^i \equiv -\epsilon _{ij} A^j. \end{eqnarray} Indeed, under this transformation the first and second terms in the free Hamiltonian ``interchange'': \begin{eqnarray} {\cal H} = {1\over{4}}(\partial^i \tilde{A}^j -\partial^j \tilde{A}^i)^2 + {1\over{2} } (\partial ^i \tilde{A}^i )^2 . \end{eqnarray} By comparison with the Hamiltonian including electric sources (see eqn. $(33)$), it appears that one can, by symmetry, add magnetic sources as well as electric sources. In complete analogy one could then expect that the $+$-component of the magnetic current $\tilde{j}^{\mu}$ would be absorbed into the definition of the field strength tensor and/or the dual tensor \cite{iowa}.
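The interchange of the two quadratic terms under $\tilde{A}^i = -\epsilon_{ij}A^j$ is easy to verify on a grid. The sketch below evaluates both terms of the free Hamiltonian, integrated over a periodic transverse plane, for an arbitrary smooth $A^\perp$ and for its dual (the specific field profiles are illustrative):

```python
import numpy as np

# transverse fields A^i(x^1, x^2) on a periodic grid
n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X1, X2 = np.meshgrid(x, x, indexing="ij")
A1 = np.sin(X1) * np.cos(2.0 * X2)
A2 = np.cos(3.0 * X1) * np.sin(X2)

def h_terms(A1, A2):
    """Integrated 'electric' term (div A)^2 / 2 and 'magnetic' term F12^2 / 2."""
    d1A1 = np.gradient(A1, x, axis=0)
    d2A2 = np.gradient(A2, x, axis=1)
    d1A2 = np.gradient(A2, x, axis=0)
    d2A1 = np.gradient(A1, x, axis=1)
    return 0.5 * np.sum((d1A1 + d2A2)**2), 0.5 * np.sum((d1A2 - d2A1)**2)

# dual fields A~^i = -eps_{ij} A^j, i.e. A~^1 = -A^2 and A~^2 = A^1
el, mag = h_terms(A1, A2)
el_t, mag_t = h_terms(-A2, A1)
assert np.isclose(el, mag_t) and np.isclose(mag, el_t)
```

The swap is exact even at the discrete level, since $\partial^i \tilde{A}^i = -F^{12}(A)$ and $F^{12}(\tilde{A}) = \partial^i A^i$ hold pointwise for any choice of discrete derivative.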
Taking advantage of the kinematical boost invariance (for a review see \cite{coester}), it would be sufficient to consider the simple case of a magnetic current with only the $+$-component (the so-called {\it good component}) being non-zero, viz. \begin{eqnarray*} {\cal H} = {1\over{2} } (\partial ^i A^i-{1\over{\partial ^+}} j^+ )^2 +{1\over{2} } (\partial ^i \tilde{A}^i-{1\over{\partial ^+}} \tilde{j}^+ )^2 -j^{\perp} A^{\perp}. \end{eqnarray*} \noindent It is straightforward to show that if one proceeds as described, the Hamiltonian leads to the desired equations of motion, including the continuity equation for the $-$-component of $\tilde{j}$. \footnote{It is not really a mystery: due to the definition of the dual tensor as $\tilde{F}^{\mu \nu} \equiv 1/2 \epsilon ^{\mu \nu \lambda \rho} F_{\lambda \rho}$, it follows that $\partial _{\mu} \tilde{F}^{\mu \nu} \equiv 0$. However, absorbing the $\tilde{j}^+$ appropriately into the definition of $\tilde{F}$ produces a non-zero right-hand side. It is somewhat reminiscent of introducing a Dirac string. Note that in our case this trick does not work for $\tilde{j}^i \neq 0$.} The catch is that the Hamiltonian itself is {\it not} equivalent to the complete set of Maxwell equations. It is, rather, the Hamiltonian {\it and} the gauge conditions \cite{jacksonBaby}. In particular, {\it only} with the gauge conditions $A^+=0$ and $A^-= 2 (\partial^+)^{-1} \partial ^i A^i$ are all components of the field strength tensor defined unambiguously. Let us look at what happens to the field strength tensor and its dual under the transformation $(34)$. In order for the transformation $(34)$ to be the operation of electromagnetic duality, it has to lead to \begin{eqnarray} -\tilde{F}^{\mu \nu}(A^{\perp}) = F^{\mu \nu} (\tilde{A}^{\perp}), \nonumber\\ {F}^{\mu \nu}(A^{\perp}) =\tilde{F}^{\mu \nu} (\tilde{A}^{\perp}).
\end{eqnarray} However, \begin{eqnarray} -\tilde{F}^{+-} & = & 2 \partial^i \tilde{A}^i , \nonumber \\ -\tilde{F}^{+i} & = & \partial ^+ \tilde{A}^i , \nonumber\\ -\tilde{F}^{-i} & = & \partial ^- \tilde{A}^i -\partial^i \left( {2\over{\partial ^+}}\partial ^k \tilde{A}^k \right) - {2\over{\partial ^+}}\Box \tilde{A}^i , \nonumber\\ -\tilde{F}^{ij} & = & \partial^i \tilde{A}^j - \partial^j \tilde{A}^i \end{eqnarray} shows that the transformation $(34)$ is not quite electromagnetic duality: it works for all components except $F^{-i}$, since $\tilde{F}^{-i}$ contains the additional term $- 2({\partial ^+})^{-1}\Box \tilde{A}^i $. For free fields the additional term vanishes, and $(34)$ is therefore electromagnetic duality. Is it possible to remove the additional term in general, realizing electromagnetic duality as a generalization of Susskind's original suggestion, i.e. $(34)$ plus a gauge transformation? After fixing the gauge, there is still a residual gauge freedom. In order not to disturb the gauge conditions used to derive the Hamiltonian, the residual gauge function $\Lambda$ has to satisfy \begin{eqnarray} \partial ^+ \Lambda = 0, & \ & \partial^- \Lambda = {2\over{\partial^+}}(\partial^i)^2 \Lambda . \end{eqnarray} Ignoring for a moment the question of zero modes, this implies that \begin{eqnarray*} {2\over{\partial ^+}}\Box \Lambda =0 , \end{eqnarray*} so the residual gauge transformation cannot cancel the unwanted term $- 2({\partial ^+})^{-1}\Box \tilde{A}^i $. We now return to the question of zero modes. Since they correspond to a constant in $x^-$, they cannot cancel the $- 2({\partial ^+})^{-1}\Box \tilde{A}^i $ term which, in general, does depend on $x^-$. \section{Conclusion and summary} We reviewed the formalism of Abelian gauge theory in light-front coordinates.
We argued that while the potential $A^{\mu}$ can be described in light-front coordinates, there is no light-front analogue of the electric and magnetic fields $\vec{E}$, $\vec{B}$, in the sense that if one defines electric and magnetic fields as components of the light-front field strength tensor, the definitions do not lead to Maxwell's equations, and electromagnetic duality is not realized as $\vec{B} \rightarrow \vec{E}$, $\vec{E} \rightarrow -\vec{B}$. We then studied electromagnetic duality at the level of the fields $A^{\mu}$ (in light-front gauge $A^+ =0$). Our study was motivated by the fact that the light-front Hamiltonian is in this case expressed in terms of transverse fields only, and that under a specific transformation of the transverse fields the electric and magnetic terms in the Hamiltonian interchange. However, electromagnetic duality in light-front coordinates cannot be realized by a transformation of transverse fields only. Neither can it be written as a transformation of transverse fields plus a gauge transformation, not even when the gauge transformation has a zero mode. Altering $A^-$ in addition to $A^{\perp}$ is not likely to fix the problem either, because it is only one of the two components of the dual tensor involving $A^-$ (i.e. $\tilde{F}^{-i}$) that does not transform as desired; fixing $\tilde{F}^{-i}$ would spoil the transformation of $\tilde{F}^{+-}$. To include magnetic monopoles one would have to allow for additional non-localities (in the gauge function), most likely equivalent to Dirac strings in an equal-time theory \cite{goddardolive}. \section{ Acknowledgments } My work has been supported by the United States Department of Energy. I would like to thank L. Susskind for bringing this problem to my attention. I am grateful to G.
't Hooft for useful discussions during the {\it NATO ASI Workshop on Confinement, Duality, and Non-Perturbative Aspects of QCD} held June 23 - July 4, 1997, and to the organizers of the workshop, particularly P. van Baal, for providing such a stimulating research environment. I am also grateful to R. Furnstahl, T. Goldman and R. Perry for reading the manuscript. Last but not least, I want to thank M\'{a}ria Barn\'{a}\v{s}ov\'{a} for making this work possible.
\section{Introduction} \label{sec:introduction} Circumstellar disks and planetary companions around nearby stars are routinely observed from the ground by several facilities with exoplanet direct-imaging capabilities \cite{Beuzit2008,Macintosh2008,Guyon2010c,Hinkley2011,Skemer2012,Close2014}. Of these facilities, the instruments VLT/SPHERE and Gemini Planet Imager (GPI) saw first light in 2013-2014, providing unprecedented sensitivity and inner working angle for exoplanet observations \cite{Macintosh2014,vigan2015}. Since their commissioning, they have shed light on known or newly detected planetary companions, providing insights into their physical characteristics (orbit and mass) and atmospheric chemical features through spectral characterization and photometric and astrometric information \cite{galicher2014b,chilcote2015,vigan2016,zurlo2016}. In parallel, they are used to perform large surveys (GPIES, SHINE) that will target several hundreds of young, nearby stars, with the goal of probing the demography of the giant exoplanet population at large orbital separation. Up to now, these surveys have uncovered a few giant companions\cite{Macintosh2015,Konopacky2016,Chauvin2017,Ginski2018}, but many dozens of candidates remain to be confirmed and may lead to additional discoveries. These ground-based instruments rely on a combination of an extreme adaptive optics (ExAO) system, coronagraphy, dedicated observational strategies and post-processing methods. Differential aberrations between the ExAO sensing path and the science path, so-called non-common path aberrations (NCPA), have been identified as setting high-contrast performance limits for adaptive optics instruments. Their importance was well known \cite{Fusco2006} at the start of the development of the recently commissioned planet imagers, GPI and SPHERE, and various strategies were implemented to minimize them.
For SPHERE, the very low order NCPA correction (tip, tilt and defocus) is optimised directly at the level of the coronagraph during the target acquisition, but the calibration strategy for higher orders, originally based on phase diversity techniques \cite{sauvage2007}, was not found to improve the final image quality and was finally discarded because the wavefront error budget was already within the specifications necessary to achieve the contrast objectives of the instrument. Still, the remaining NCPA are on the order of a few tens of nanometers, preventing coronagraphs from achieving their ultimate performance. These wavefront errors can be split into two contributions: the long-timescale aberrations that are due to optical surface errors or misalignments in the instrument optical train, and the slowly varying instrumental aberrations that are caused by thermal or opto-mechanical deformations as well as moving optics such as atmospheric dispersion correctors\cite{macintosh2005,martinez2012,Martinez2013}. They lead to static and quasi-static speckles in the coronagraphic images, which represent critical limitations for the detection and observation of older or lighter gaseous planets at smaller separations. More accurate measurement strategies are required to measure and correct for these small errors and achieve deeper contrast (down to $10^{-7}$, representing the ultimate contrast limit of these instruments) for the observation of the faintest companions. For this purpose, we proposed the use of a Zernike phase mask sensor to calibrate the NCPA that are seen by the coronagraph in exoplanet direct imagers \cite{N'Diaye2013a}. A prototype of such a sensor, called ZELDA (Zernike sensor for Extremely Low-level Differential Aberrations), was eventually implemented in SPHERE during the commissioning phase. The first validation of ZELDA was presented in N'Diaye et al.
(2016) and we demonstrated a clear potential to increase the raw contrast by a factor of up to 10 at very small angular separation (0.1\ensuremath{^{\prime\prime}}\xspace--0.4\ensuremath{^{\prime\prime}}\xspace)\cite{N'Diaye2016}. However, these tests were done on the internal point source of the instrument during daytime, which provides an extremely stable environment for testing ultimate performance but is not necessarily representative of real observations. Indeed, since the instrument is installed on a Nasmyth platform of VLT-UT3, observations require the use of a derotator to stabilise either the field or the pupil. The fact that usual SPHERE/IRDIFS observations cover a broad range of wavelengths (from $Y$- to $K$-band) also requires the use of atmospheric dispersion correctors, which can potentially introduce a small amount of NCPA. The next logical step was therefore to perform on-sky tests with the goal of measuring the gain in contrast provided by a proper compensation of the NCPA. In this work, we present preliminary results of a series of on-sky tests performed with ZELDA on VLT/SPHERE. The final results will be provided in a forthcoming, more complete publication (Vigan et al. in prep.). In Sect.~\ref{sec:presentation_tests} we present a brief description of the strategy and technical aspects of the tests, then in Sect.~\ref{sec:loop_convergence} we compare the convergence of the NCPA compensation loop on the internal source and on sky, and in Sect.~\ref{sec:coronagraphic_perf} we present the coronagraphic performance with and without NCPA compensation on sky. Finally in Sect.~\ref{sec:prospects} we present the conclusions and prospects of this work. \section{Short presentation of the tests} \label{sec:presentation_tests} To perform on-sky tests with ZELDA, we benefited from 1.5 nights of technical time awarded by ESO and the Paranal observatory.
The tests were shared between daytime activities during 6 days and 3 half-nights on VLT-UT3 entirely dedicated to ZELDA validation. The goals for this test period were multiple: \begin{enumerate} \item check previous results and performance that were obtained in 2015, \label{goal1} \item demonstrate that the ZELDA measurements can be performed on sky and used to compensate for NCPA, \label{goal2} \item measure the contrast gain in coronagraphic images in the presence of on-sky NCPA calibration, \label{goal3} \item compare internal and on-sky measurements, \label{goal4} \item define an operational strategy for a future implementation of NCPA calibration with ZELDA for all observations. \label{goal5} \end{enumerate} \noindent In the present work we focus on items \ref{goal2}, \ref{goal3} and \ref{goal4}, which are (to the best of our knowledge) completely new and have important implications for future instrumentation such as the high-contrast arm of ELT/HARMONI\cite{Thatte2016} or WFIRST/CGI\cite{Bailey2018}. The ZELDA data analysis is entirely based on the public \texttt{pyZELDA} code\cite{Vigan2018a}. The approach for the tests follows a strategy very close to what was previously described in N'Diaye et al. (2016)\cite{N'Diaye2016} for the implementation of the ZELDA closed-loop operation, with the exception of the projection of the ZELDA optical path difference (OPD) map on the SPHERE deformable mirror (DM). As described in N'Diaye et al. (2016), the OPD map computed from ZELDA data cannot be directly projected on the DM because ZELDA measures spatial frequencies much higher than what the DM can actually correct for: up to 192 cycles/pupil (c/p) vs. up to 40 c/p, respectively. A direct projection of the OPD map on the DM would necessarily produce unpredictable results due to aliasing effects.
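The idea of filtering the OPD map through a truncated modal basis before applying it can be sketched in a few lines. In the sketch below, low-order 2-D Fourier modes stand in for the SAXO Karhunen-Lo\`eve basis (a simplifying assumption; the real basis, pupil mask and mode count differ), and a least-squares projection removes the frequencies the DM cannot reproduce:

```python
import numpy as np

# toy OPD map: a 3 c/p component the DM can correct plus a 30 c/p component
# that would alias if projected directly
n = 64
yy, xx = np.mgrid[:n, :n] / n
opd = np.sin(2*np.pi*3*xx) + 0.2*np.sin(2*np.pi*30*yy)

# truncated modal basis (cutoff at 5 c/p per axis), standing in for KL modes
modes = []
for kx in range(6):
    for ky in range(6):
        modes.append(np.cos(2*np.pi*(kx*xx + ky*yy)).ravel())
        modes.append(np.sin(2*np.pi*(kx*xx + ky*yy)).ravel())
M = np.stack(modes, axis=1)

# least-squares fit of the map on the basis; M @ coeffs is the filtered map
coeffs, *_ = np.linalg.lstsq(M, opd.ravel(), rcond=None)
opd_dm = (M @ coeffs).reshape(n, n)

# the 3 c/p term is reproduced, the 30 c/p term (rms 0.2/sqrt(2)) is rejected
assert abs(np.std(opd - opd_dm) - 0.2/np.sqrt(2)) < 0.01
```

On SPHERE the coefficients would then be converted to WFS reference slopes rather than applied directly, but the filtering step is the same in spirit.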
In our previous work, we circumvented this issue by applying a low-pass spatial filter to the OPD maps, with a cutoff frequency at 25 c/p, which was the value providing the best result at the time. Although effective, this approach was not the most elegant because it completely bypasses the Karhunen-Lo\`eve modes of the SPHERE AO system, SAXO\cite{Petit2014,Sauvage2016}. In the current approach, the OPD maps are first projected on the first 700 modes (out of 950+) that can be seen and controlled by SAXO, before being projected on the reference slopes of the WFS. This approach is hopefully more stable and less subject to noise than the previous one. As presented in N'Diaye et al. (2016), another important part of the NCPA compensation with ZELDA is to make sure that the right amount of aberration is measured and then applied on the deformable mirror. This is what we called the \emph{sensitivity factor} of the wavefront sensor (WFS), which was previously calibrated using the introduction of a ramp of focus at calibrated amplitudes\cite{N'Diaye2016}. This calibration remains one of the basic pre-requirements of the NCPA calibration in SPHERE, but there are still several important questions that have not been addressed yet. In particular, the sensitivity factor displays a variability of up to 25\% depending on the time of day at which the calibration is performed. This variability is not yet fully understood, so for the time being the calibration is repeated before every important test with ZELDA. Such an inaccuracy could be extremely problematic in open loop, because the applied correction would not correspond to the actual value. However, most of our tests are performed in closed loop, where this inaccuracy is acceptable, because the correction will always be performed in the proper direction as long as the amount of NCPA is small enough (see e.g. Vigan et al. 2011\cite{Vigan2011} for a similar scenario in the case of cophasing).
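The sensitivity-factor calibration described above amounts to fitting the ZELDA response against a known aberration ramp. A hypothetical sketch (the 0.8 response, the 1~nm noise and the ramp amplitudes are illustrative numbers, not SPHERE data):

```python
import numpy as np

# known defocus ramp applied to the DM, in nm rms
applied = np.array([-40.0, -20.0, 0.0, 20.0, 40.0])

# simulated ZELDA readout: the sensor under-reads by 20% plus measurement noise
rng = np.random.default_rng(1)
measured = 0.8 * applied + rng.normal(0.0, 1.0, applied.size)

# linear fit: the slope is the sensor response, its inverse is the scaling
# to apply to ZELDA OPD maps before projecting them on the DM
slope = np.polyfit(applied, measured, 1)[0]
sensitivity_factor = 1.0 / slope
assert abs(sensitivity_factor - 1.25) < 0.15
```

In closed loop an error on this factor only changes the loop gain, which is why the observed 25\% variability is tolerable there but would not be in open loop.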
For the moment we set aside the origin of the variation of the sensitivity factor, keeping in mind that it will need to be understood for any open-loop use of the ZELDA sensor. \section{Loop convergence} \label{sec:loop_convergence} Ideally, the NCPA are sufficiently small that the ZELDA measurements are all within the quasi-linear range of the sensor, which would allow them to be compensated in a single iteration. However, in practice the calibration issues described in Sect.~\ref{sec:presentation_tests} prevent the NCPA from being compensated in a single iteration, so the compensation must be implemented in a closed-loop fashion that follows 5 distinct steps: \begin{enumerate} \item obtain a ZELDA measurement, \item compute the NCPA OPD map, \item filter the OPD map on the first 700 SAXO KL modes, \item project the filtered map on the WFS reference slopes, \item apply the new reference slopes on the WFS. \end{enumerate} Figure~\ref{fig:int_convergence} shows the convergence of the ZELDA loop on the internal source of VLT/SPHERE. The top figure displays the OPD maps computed at the 5 iterations of the loop, while the bottom plot displays the integrated power spectral density (PSD) computed on each of the OPD maps. The plot is not directly the raw PSD, but the PSD integrated within bounds of width 1 c/p: the aberration value provided at spatial frequency $s$ is equal to the value of the PSD integrated between $s$ and $s+1$ cycles/pupil. This directly provides an estimate of the aberrations, in nm rms, in a given spatial frequency range. The total amount of NCPA can be obtained by computing the quadratic sum of all values. In this analysis, the central part of the DM, where the actuators are not controlled, and the dead actuators of the mirror have been masked.
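The integrated-PSD metric can be sketched as follows; the square pupil, grid size and absence of a mask are simplifying assumptions, but the normalisation is chosen so that the value in each bin is directly the rms aberration (in nm) in that 1~c/p frequency range:

```python
import numpy as np

def integrated_psd(opd, n_bins=20):
    """2-D PSD of an OPD map (nm), integrated in annular bins of width 1 c/p:
    entry s-1 is the rms aberration between s and s+1 cycles/pupil."""
    n = opd.shape[0]
    ft = np.fft.fftshift(np.fft.fft2(opd)) / opd.size
    psd2d = np.abs(ft)**2                              # sums to mean(opd^2)
    f1d = np.fft.fftshift(np.fft.fftfreq(n, d=1.0/n))  # frequencies in c/p
    rr = np.hypot(*np.meshgrid(f1d, f1d, indexing="ij"))
    return np.array([np.sqrt(psd2d[(rr >= s) & (rr < s + 1)].sum())
                     for s in range(1, n_bins + 1)])

# a pure 5 c/p ripple of 20 nm rms lands entirely in the 5-6 c/p bin
n = 256
ripple = 20.0 * np.sqrt(2.0) * np.sin(2*np.pi*5*np.arange(n)/n)
opd = np.tile(ripple, (n, 1))
curve = integrated_psd(opd)
assert np.argmax(curve) == 4 and abs(curve[4] - 20.0) < 1e-6
```

The quadratic sum of all bins then recovers the total rms wavefront error, as stated above.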
The evolution of the OPD maps in Fig.~\ref{fig:int_convergence} clearly shows a visual improvement of the NCPA: the strong astigmatism that clearly dominates at the start quickly disappears, leaving a relatively flat wavefront dominated by the print-through of the actuators and some high spatial frequencies at the right edge of the DM, where several dead actuators prevent a completely clean correction. More quantitatively, the integrated PSD shows a decrease by a factor of more than 10 in the 1-2~c/p bin after 3 iterations, and a decrease by a factor of 2-3 in the 4-12~c/p bins. There is no visible effect for spatial frequencies above 15~c/p because of the filtering of the OPD maps on the first 700 SAXO KL modes. Between iteration 0 and iteration 3, the amount of aberrations goes from 53~nm rms down to 12~nm~rms in the 1-15~c/p range, and from 48~nm rms down to 6~nm rms in the 1-4~c/p range. Figure~\ref{fig:sky_convergence} shows the same results as Figure~\ref{fig:int_convergence}, but this time on-sky on the star $\alpha$ Crt, a very bright K0 star ($V=4.07$, $H=1.76$) which was observed at high elevation in relatively poor observing conditions on 2018-04-01 (1.0\ensuremath{^{\prime\prime}}\xspace-1.2\ensuremath{^{\prime\prime}}\xspace seeing, coherence time $<$3~ms). The OPD maps look relatively similar to the maps on the internal source, except for the presence of the spiders holding the secondary mirror. Some mid-spatial frequency circular phase structures from the primary mirror are also clearly visible. They correspond to polishing errors on the telescope primary mirror due to the original machining. The integrated OPD curve shows a very similar evolution as on the internal source, although the starting point in the 5-15~c/p range seems lower for these on-sky measurements. The reason for this different starting point is not completely understood yet.
In these measurements, between iteration 0 and iteration 3, the quantity of aberrations goes from 39~nm rms down to 14~nm~rms in the 1-15~c/p range, and from 35~nm rms down to 8~nm rms in the 1-4~c/p range. \begin{figure} \centering \includegraphics[width=1.0\textwidth]{Figures/f1_2018-04-01_ncpa_loop_700modes_2_ncpa_loop_opd.pdf} \includegraphics[width=0.5\textwidth]{Figures/f1_2018-04-01_ncpa_loop_700modes_ncpa_loop_psd.pdf} \caption{Convergence of the ZELDA loop on the internal source using 700 SAXO modes. \emph{Top}: OPD maps measured with ZELDA and calibrated in nanometers at all the iterations of the loop. The central part of the DM and the dead actuators have been masked in the OPD maps. \emph{Bottom}: Integrated power spectral density of the OPD maps as a function of spatial frequency for all the iterations. The aberration value provided at spatial frequency $s$ is equal to the value of the PSD integrated between $s$ and $s+1$ cycle/pupil. The central part of the DM and the dead actuators have also been masked in this analysis.} \label{fig:int_convergence} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{Figures/f3_2018-04-03_night_ncpa_loop_sky_2_ncpa_loop_opd.pdf} \includegraphics[width=0.5\textwidth]{Figures/f3_2018-04-03_night_ncpa_loop_sky_2_ncpa_loop_psd.pdf} \caption{Same as Fig.~\ref{fig:int_convergence} but the data is now acquired on-sky on a bright star ($\alpha$ Crt, K0, $V=4.07$, $H=1.76$) observed on 2018-04-03. In these on-sky data, the spiders have been masked in the analysis in addition to the central obscuration and the dead actuators of the DM.} \label{fig:sky_convergence} \end{figure} These initial on-sky results are extremely encouraging and demonstrate that NCPA measurement and compensation in the presence of ExAO-filtered atmospheric residuals are possible.
The differences with the internal calibration will be investigated in the coming months, for example to understand whether or not the difference in the starting point is real or the result of a measurement bias on sky. An interesting aspect of these tests is also that it does not seem necessary to average the atmospheric residuals for integration times longer than 15-20~sec. Indeed, we performed additional tests (not presented here) which did not show any visible difference in the PSD of the measured NCPA for integration times from 15~sec up to 120~sec. This means that for bright stars, for which we would expect the NCPA correction to bring a visible improvement on the quasi-static speckles, a short overhead of only $\sim$1~min would be necessary at the beginning of the observation sequence. \section{Coronagraphic performance} \label{sec:coronagraphic_perf} The validation of the NCPA correction was performed on 2018-04-01 by switching to coronagraphic imaging immediately after the NCPA calibration sequence. For the coronagraph, we used an APLC\cite{Soummer2005} with a focal plane mask of 185~mas in diameter. The images were acquired in the IRDIS/DBI mode with the $H2$ filter at $\lambda = 1.593$~\ensuremath{\mu\mathrm{m}}\xspace in relatively poor observing conditions with \texttt{NEXP}$\times$\texttt{DIT}$\times$\texttt{NDIT}=4$\times$7$\times$10~sec. Unfortunately the 10 images for each of the 4 exposures were averaged at the level of the detector controller, which means that we do not have access to the individual exposures. Since the observing conditions were far from optimal, it is possible that some images with poor AO correction were averaged in the sequence, making it difficult to estimate the absolute gain provided by the NCPA correction. The results are presented in Fig.~\ref{fig:sky_coro}.
The top row of the figure shows coronagraphic images without NCPA compensation (left, iteration 0 in Fig.~\ref{fig:sky_convergence}) and with NCPA compensation (right, iteration 3 in Fig.~\ref{fig:sky_convergence}). These images are normalized to the peak flux of an off-axis reference PSF of the star acquired at the beginning of the sequence. The bottom row of the figure shows the contrast curves for the images acquired at each loop iteration, again normalized to the intensity peak of the off-axis stellar PSF. The contrast curves are calculated as the azimuthal standard deviation in annuli of width \ensuremath{\lambda/D}\xspace.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{Figures/f4_2018-04-03_night_aplc_test2_coro_images.pdf}
\includegraphics[width=0.7\textwidth]{Figures/f4_2018-04-01_night_aplc_test_medium_coro_profiles.pdf}
\caption{Coronagraphic performance on sky based on a sequence acquired on 2018-04-01 on $\alpha$ Crt (K0, $V=4.07$, $H=1.76$). \emph{Top}: coronagraphic images, calibrated in contrast with respect to the peak of the unocculted PSF, with the default WFS reference slopes and with the reference slopes corrected to compensate the NCPA based on the ZELDA measurement after 3 iterations (see Fig.~\ref{fig:sky_convergence}). The coronagraph is an APLC with a focal plane mask of 185~mas in diameter. The images are acquired in the IRDIS/DBI mode with the $H2$ filter at $\lambda = 1.593$~\ensuremath{\mu\mathrm{m}}\xspace. The data were acquired in relatively poor observing conditions (1.0\ensuremath{^{\prime\prime}}\xspace-1.2\ensuremath{^{\prime\prime}}\xspace seeing) with \texttt{NEXP}$\times$\texttt{DIT}$\times$\texttt{NDIT}=4$\times$7$\times$10~sec. Unfortunately, the 10 images for each of the 4 exposures were averaged at the level of the detector controller, which means that we do not have access to the individual exposures and we cannot select the best frames for the contrast estimation in these poor observing conditions.
\emph{Bottom}: Contrast curves corresponding to the two images. The contrast is calculated as the azimuthal standard deviation in annuli of width \ensuremath{\lambda/D}\xspace. }
\label{fig:sky_coro}
\end{figure}
The gain in contrast is not obvious by just looking at the coronagraphic images. Speckles at the edge of the coronagraph are clearly modified by the compensation of the NCPA, but it is hard to tell whether their variance is decreased or not. The horizontal and vertical radial structures caused by the print-through of the actuators of the DM are clearly attenuated up to separations of $\sim$15\ensuremath{\lambda/D}\xspace. The contrast plot shows a small gain in contrast between 100 and 200 mas and between 400 and 700 mas, but this gain remains below a factor of 2, which is marginal.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{Figures/f5_2018-04-01_aplc_internal_source_700modes_2_2018-04-01_night_aplc_test_medium_coro_profiles.pdf}
\caption{Comparison of the contrast curves with and without compensation of the NCPA on the internal source and on sky based on the data acquired on 2018-04-01. The on-sky data are compared to internal data acquired during the afternoon preceding the observations. For the on-sky data, the 4 curves correspond to the contrast obtained on the 4 different exposures acquired on 2018-04-01. They are the same as the curves presented in Fig.~\ref{fig:sky_coro}.}
\label{fig:int_sky_coro}
\end{figure}
The comparison between the gain on the internal source and on sky is presented in Fig.~\ref{fig:int_sky_coro}. However, the on-sky data are most likely limited by the uncorrected atmospheric residuals. This is expected, especially in poor observing conditions. Temporal residuals close to the axis seem to be much higher on-sky, as well as the ``speckle floor'' between 200 and 600~mas.
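The contrast metric used in these curves, the azimuthal standard deviation in annuli of width \ensuremath{\lambda/D}\xspace normalized to the off-axis PSF peak, can be sketched as follows (an illustrative helper, not the actual reduction pipeline; the star is assumed to be centered in a square image):

```python
import numpy as np

def contrast_curve(img, psf_peak, pix_per_lod, n_annuli):
    """Contrast vs. separation: azimuthal standard deviation in annuli
    of width lambda/D, normalized to the off-axis PSF peak."""
    n = img.shape[0]
    y, x = np.indices(img.shape)
    r = np.hypot(x - n // 2, y - n // 2) / pix_per_lod  # radius in lambda/D
    seps, contrast = [], []
    for k in range(n_annuli):
        ring = (r >= k) & (r < k + 1)          # annulus of width 1 lambda/D
        seps.append(k + 0.5)                   # annulus center in lambda/D
        contrast.append(img[ring].std() / psf_peak)
    return np.array(seps), np.array(contrast)
```

For an image containing pure Gaussian noise of standard deviation $\sigma$ and a PSF peak $F$, the curve is flat at the level $\sigma/F$, as expected.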
Nonetheless some gain is visible at similar separations both on-sky and internally, which confirms that the NCPA compensation really has an effect on the quasi-static speckles, but because of the poor conditions it is impossible to exactly estimate the final contrast level set by these speckles. The data remain dominated by uncorrected AO-residuals.
\section{Conclusions \& prospects}
\label{sec:prospects}
High-contrast imaging of exoplanets from the ground requires combining extreme AO and coronagraphy, but for optimal coronagraphic attenuation one needs to properly measure and compensate for the NCPA between the wavefront sensing and the science paths in the instrument. The ZELDA wavefront sensor implemented in VLT/SPHERE enables the calibration of these aberrations with nanometric accuracy. The performance of this sensor for NCPA compensation has been previously demonstrated. In our new results, the quantitative estimation of the PSD of the OPD maps measured with ZELDA demonstrates that the low-spatial frequency aberrations (1-4 c/p) can be reduced by a factor of 8 on the internal source, which translates into a gain of more than a factor of 3 in contrast at 200 mas, and almost a factor of 10 at 600 mas. These results are slightly worse than the ones that were previously obtained in 2015\cite{N'Diaye2016}. This could potentially be explained by the fact that in 2015 the defocus at the level of the coronagraph mask may not have been compensated in the data without NCPA compensation, resulting in worse coronagraphic performance in our recent data. In the new data, particular care was taken to ensure an optimal performance in every configuration for a fair comparison. We also present the very first on-sky NCPA compensation using ZELDA. The analysis of the OPD maps and the corresponding PSD shows a clear gain, at an equivalent level to the one obtained on the internal source. The final aberration level seems to be almost identical between the internal source and on-sky.
On the coronagraphic images, the gain is less obvious. Some speckles are clearly affected by the NCPA compensation, but the gain in contrast remains marginal. However, a gain in contrast is observed at similar separations both internally and on-sky, which is a good indication that the NCPA are indeed properly compensated. The fact that the observing conditions were relatively poor when the data were acquired almost certainly explains the small gain in contrast: the data remain dominated by uncorrected AO-residuals. From this we conclude that under poor to fair seeing and coherence time conditions, the calibration of the NCPA brings only a marginal gain, as quasi-static speckles do not constitute the main limitation in the well-corrected region of the focal plane. Therefore, the use of night-time for the calibration of the NCPA only makes sense in moderate to good observing conditions. Such a scheme would fit well within the new set of atmospheric constraints, including coherence time, that will be implemented for service mode observing on VLT/SPHERE in 2019, which will allow astronomers to request excellent to good conditions based on coherence time. Additional ZELDA data and coronagraphic data have been acquired during our tests at the ESO/Paranal observatory, which will hopefully provide more insight into the current limitations of ZELDA on-sky. In particular, some data will provide information regarding the stability of the NCPA compensation in real observing conditions, where the telescope is moving and the derotator and ADCs are rotating. This temporal aspect is essential to define operational aspects such as the frequency at which the NCPA calibration must be executed to bring a real benefit to the observations. All these results will be presented in a forthcoming publication.
\acknowledgments This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 757561). AV and MN would like to thank ESO and the Paranal observatory for their strong support during their visitor run and their continued support for the implementation of ZELDA as the NCPA calibration strategy for SPHERE. \smallskip SPHERE is an instrument designed and built by a consortium consisting of IPAG (Grenoble, France), MPIA (Heidelberg, Germany), LAM (Marseille, France), LESIA (Paris, France), Laboratoire Lagrange (Nice, France), INAF - Osservatorio di Padova (Italy), Observatoire de Genève (Switzerland), ETH Zurich (Switzerland), NOVA (Netherlands), ONERA (France) and ASTRON (Netherlands) in collaboration with ESO. SPHERE was funded by ESO, with additional contributions from CNRS (France), MPIA (Germany), INAF (Italy), FINES (Switzerland) and NOVA (Netherlands). SPHERE also received funding from the European Commission Sixth and Seventh Framework Programmes as part of the Optical Infrared Coordination Network for Astronomy (OPTICON) under grant number RII3-Ct-2004-001566 for FP6 (2004-2008), grant number 226604 for FP7 (2009-2012) and grant number 312430 for FP7 (2013-2016).
\section{Introduction} \subsection{Overview} Propagation models on lattices or more general graphs describe the spreading of some discrete signal through a set of discrete entities. In the most general terms, the signal corresponds to some qualitative change that causes the entity to interact differently with its neighbors. Examples include the spreading of damage in power grids \cite{Sachtjen:00, Kinney:05}, the spreading of disease through a population \cite{Hethcote:00, Newman:02, Cohen:03}, the spreading of a computer virus on the Internet \cite{Pastor-Satorras:01, Lloyd:01}, or the alteration of gene expression patterns in a cell due to a mutation \cite{Hughes:00, Ramo:06}. In the general case, the individual entities are represented as nodes in a graph where the links indicate paths along which the signal can spread \cite{Watts:99,Newman:03, Callaway:00, Moreno:02, Watts:02, Motter:04, Crucitti:04}. Because the signal can be thought of as disrupting the static or dynamical state of the original system, we refer to its propagation as spreading {\it damage}, though in many cases the ``damage'' may enhance a desired property or simply represent some natural dynamical process. A single instance of a given spreading process initiated from a particular subset of nodes is often called an avalanche. In analyzing spreading processes, one is often interested in the transition between those that die out quickly and those that spread to a finite fraction of the system in the large-system limit, a transition that may occur as the probability of transmitting damage across links is varied. This percolation transition is relevant for systems in which the fraction of initially damaged nodes tends to zero in the limit of infinite system size. The order parameter for the transition is the average fraction of nodes damaged in a single avalanche, which remains zero for small transmission probabilities and continuously increases when the probability rises above a threshold value. 
We will refer to this as the {\it sparse percolation} (\SP) transition. The \SP\ transition occurs for spreading processes in which the probability that a node becomes damaged is zero unless at least one of its neighbors is damaged. (If this probability were nonzero, a nonzero fraction of the nodes would always get damaged.) For a certain class of propagation models, there is another transition of interest. When the fraction of initially affected nodes remains fixed as the system size is increased, the fraction of nodes that remain {\it undamaged} can undergo a transition from finite values to zero at transmission probabilities above some threshold. We refer to this as the {\it exhaustive percolation} (\EP) transition. The \EP\ transition occurs only for propagation models in which the probability of a node remaining undamaged is zero when all of its neighbors are damaged (all of its inputs in the case of a directed graph). We assume also that there is a nonzero probability for a node to remain undamaged if it has at least one undamaged input. There is then one more condition for the \EP\ transition: the density of directed loops of any specified size must vanish in the large system limit. For any loop there is a finite probability that no member of the loop will be damaged, since no member of the loop can have all of its inputs damaged until one of the members becomes damaged through a probabilistic event. Thus \EP\ is {\it not} observable on spatial lattices of the type generally encountered in statistical mechanics. \EP\ is observable, however, on directed lattices and on graphs in which the nodes serving as inputs to a given node are selected at random. In this paper we derive the probability distribution for the number of undamaged nodes at the \EP\ transition on random graphs for a general class of propagation models exhibiting what we call {\it unordered binary avalanches} (\UBA). 
This is analogous to finding the distribution of avalanche sizes at the usual percolation transition, but here we are asking for the distribution of the number of nodes {\it not} participating in the avalanche. As an application of our \EP\ results, we consider the problem of identifying unfrozen nodes in a random Boolean network (\RBN). In a \RBN, each node has a binary state that is updated according to a rule that takes the values of some other nodes as inputs. The dynamics of \RBN s has been investigated extensively; see, e.g., \cite{Kauffman:69, Derrida:86a, Bastolla:97, Socolar:03, Aldana:03a, Aldana:03b, Samuelsson:03, Kauffman:04, Drossel:05b}. A \RBN\ can have several dynamical attractors, but some nodes might have the same value at all times on all attractors. Such nodes are called {\it stable} and the set of stable nodes is important for the dynamics in \RBN s \cite{Flyvbjerg:88b, Bastolla:98a, Bilke:01}. Almost all stable nodes in a broad class of \RBN s can be identified through a dynamic process that was introduced by Flyvbjerg \cite{Flyvbjerg:88b} and formalized to facilitate numeric simulations by Bilke and Sjunnesson \cite{Bilke:01}. We call the stable nodes that can be identified by this dynamic process {\it frozen} (and nodes that are not frozen are called {\it unfrozen}). Provided that the Boolean rule distribution is symmetric with respect to inversion of any subset of inputs, the set of frozen nodes can be identified through an \UBA\ in which frozen inputs cause new nodes to become frozen (damaged). Most rule distributions that have been examined in the literature exhibit this symmetry. The requirement is satisfied, for example, for any model that assigns a given probability $p$ for obtaining a 1 in each entry of the truth table for each node. This paper is organized as follows. We first develop the notation and basic definitions required for discussing \UBA s in general.
In Section~\ref{sec: basic defs}, we give an introduction to the \UBA\ formalism from the perspective of percolation processes. A more formal description is given in Section~\ref{sec: introduction to EP}, followed by a numerical illustration of the basic concepts. In Section~\ref{sec: random networks}, we present analytic derivations for \UBA\ in random networks with emphasis on \EP\ and the \EP\ transition. We also present explicit results for the special case of Erd\H{o}s--R\'{e}nyi networks with a natural choice for the avalanche rules. In Section~\ref{sec: application} we show how to apply the \UBA\ formalism to obtain the statistics of frozen nodes in two-input \RBN s. In the present context, this serves as an illustration of the general theory, but this particular example was also the primary motivation for studying \EP. The results on \RBN s are consistent with those found by Kaufman, Mihaljev, and Drossel~\cite{Kaufman:05}. The main advantage of using the \EP\ formalism for this problem is that it makes clear how the calculation can be extended to networks with more than two inputs per node, including networks with an in-degree distribution that (with a low probability) allows arbitrarily large in-degrees.
\subsection{Basic definitions}
\label{sec: basic defs}
An unordered binary avalanche (\UBA ) is defined as a spreading process with the following properties:
\begin{description}
\item[Binary states:] the state of each node can be characterized as a binary variable $s$, with $s=0$ meaning {\it undamaged} and $s=1$ meaning {\it damaged};
\item[Boolean rules:] the state of each node is determined by a Boolean function of the states of its input nodes;
\item[Order independence:] the probability of having a given set of nodes damaged at the end of the process does not depend upon the order in which nodes are chosen for updating.
\end{description}
Order independence refers to the dynamics of the spreading process or a simulation of it.
In such a simulation, one typically chooses a site and updates it according to a rule depending on the states of sites that provide inputs to it, repeating the process until a test of every site yields no change in the state of the system. We are interested in cases where the order in which sites are chosen for possible updating has no bearing on the final state of the system. \UBA\ is a natural extension of site or bond percolation. To determine the avalanche size distribution in site percolation, for example, one identifies an initial subset of damaged sites and then tests neighbors of damaged sites to see whether the damage spreads to them. After a given site is tested for the first time, its value is permanently fixed. The process is iterated until no new damaged sites are generated. See, e.g., Ref.~\cite{Sahimi:03}. This method of investigating site percolation is equivalent to assigning all sites a value, then beginning with a damaged site and determining all of the damaged sites in a connected cluster. Site percolation where each site has the probability $p$ to be occupied can be recast as a \UBA\ system as follows. Let each site be associated with a rule that is an {\sc or}-rule of all of its neighbors with probability $p$ and is a constant 0 with probability $1-p$. Then the above described site percolation is achieved by first selecting the rules and clamping the value of a given site to 1, and then repeatedly updating the system according to the Boolean rules. In this situation, the 1s in the final state mark a site percolation cluster. A more practical way of simulating the same \UBA\ is to determine probabilistically the Boolean rule at each site only when that site is first encountered in the percolation process and to update only those nodes where the rules have been determined. 
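The recasting of site percolation as a \UBA\ described above can be simulated directly. The following sketch (an illustrative helper, not code from any published implementation) draws each site's rule lazily, the first time the site is tested: an {\sc or}-rule of its four neighbors with probability $p$ and the constant 0 otherwise, with one seed site clamped to 1. The final set of 1s marks the percolation cluster containing the seed.

```python
import random
from collections import deque

def site_percolation_uba(n, p, seed_site, rng):
    """Site percolation on an n x n grid recast as a UBA (a sketch).

    Each site's rule is an OR of its four neighbors with probability p
    and the constant 0 otherwise; rules are determined lazily, the
    first time a site is tested, which leaves the outcome unchanged
    by order independence.
    """
    rule_is_or = {}                    # lazily drawn rules
    damaged = {seed_site}              # the seed site is clamped to 1
    frontier = deque([seed_site])
    while frontier:
        i, j = frontier.popleft()
        for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if not (0 <= nb[0] < n and 0 <= nb[1] < n) or nb in damaged:
                continue
            if nb not in rule_is_or:
                rule_is_or[nb] = rng.random() < p
            if rule_is_or[nb]:         # OR-rule: one damaged input suffices
                damaged.add(nb)
                frontier.append(nb)
    return damaged
```

For $p=0$ the cluster is just the seed, and for $p=1$ it fills the whole grid, in line with the limits of ordinary site percolation.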
To ensure order independence in \UBA, it is sufficient to require that each Boolean function is non-decreasing, meaning that if one of the inputs to the rule changes from 0 to 1, the output is not allowed to change from 1 to 0. For non-decreasing Boolean functions, if a specific node is eventually going to be assigned the value 1 during an avalanche, updating other nodes to 1 first cannot change the outcome. We are particularly interested in \UBA s that are initiated by damage at a set of nodes comprising a nonzero fraction of the total number of nodes. Such a process would be relevant, for example, if the probability that any given node is damaged at the start is independent of the system size. To clarify both the distinction between \EP\ (exhaustive percolation) and \SP\ (sparse percolation) and the similarities between them, we describe a particular case of a propagation model that exhibits both transitions. Consider a graph with a total of $N$ nodes, some of which have three input links each while the others have no input links at all. The graph is random in that the node supplying the input value on any given link is selected at random, but stays fixed throughout the avalanche. Let $\nu_0$ be the fraction of nodes with no inputs. Define a spreading process as follows: The initial condition is that all nodes with no inputs are considered damaged. Each other node is now selected in turn to see whether the damage spreads to it. If a node has one damaged input, the probability that it will be damaged is $p_1$; if it has two damaged inputs, the probability of damage is $p_2$ (with $p_2 \ge p_1$); and nodes with three damaged inputs are guaranteed to become damaged ($p_3 = 1$). These probabilities are realized, for example, by the following Boolean rule distribution: a 3-input {\sc or}-rule with probability $p_1$; a 3-input majority rule with probability $p_2-p_1$; and a 3-input {\sc and}-rule with probability $1-p_2$. 
As $N$ goes to infinity, the number of initially damaged nodes can be a nonzero number that grows slower than $N$, meaning that $\nu_0$ goes to zero as $N$ goes to infinity. In this limit, the \SP\ transition occurs at $p_1 = 1/3$ and the spreading from each initially damaged node is described by a {\it Galton--Watson process}. In a Galton--Watson process, a tree is created by adding branches to existing nodes, with the number of branches emerging from each node drawn from a fixed probability distribution. Such branching processes have been investigated extensively. (See, e.g., Ref.~\cite{Harris:63}.) In particular, the correspondence to Galton--Watson processes means that for critical \SP, the probability of finding $n$ damaged nodes scales like $n^{-3/2}$ for $1\ll n\ll N$ \cite{Otter:49, Ramo:06}. For any nonzero value of $\nu_0$, the \EP\ transition occurs for $p_2$ satisfying $(1 - p_2)(1-\nu_0) = 1/3$ (assuming that this value of $p_2$ is greater than $p_1$). The analysis described in Section~\ref{sec: random networks} provides a method of calculating the probability $P(u)$ of having $u$ {\it undamaged} nodes in this case. The result in the large $N$ limit is $P(u) \sim P(0)u^{-1/2}$ for large $u$. A difference between \EP\ and \SP\ is that both $P(0)$ and the cutoff on the $u^{-1/2}$ distribution scale with $N$ for \EP, while for \SP\ only the cutoff scales with $N$.
\section{Introduction to exhaustive percolation}
\label{sec: introduction to EP}
\subsection{Formal description of UBA}
\label{sec: formal UBA}
We now describe a formalism and establish some notation that is suitable for a detailed treatment of \UBA. Let $N$ denote the number of nodes in a network with a specified set of links and let the nodes be indexed by $j=1,\ldots,N$. The network state is described by the vector $\s=\{s_1,\ldots,s_N\}$. Let $K_j$ denote the number of inputs to node $j$, and let $\kb_j$ denote the vector of $K_j$ inputs to node $j$.
Furthermore, let $R$ denote a Boolean function and let $\Pi_j(R)$ denote the initial probability that node $j$ has the rule $R$. [It is required that $R$ has precisely $K_j$ inputs for $\Pi_j(R)$ to be nonzero.] To efficiently simulate \UBA, we keep track of the information that is known about each node at each step in the process. In particular, it is important to keep track of whether or not the change from 0 to 1 of a given input has already been accounted for in determining the output. The simplest way to do this is to introduce an extra state \tsa\ that labels a site whose rule $R$ implies an output value of 1 but for which the update to 1 has not yet been implemented. When a node changes its state from 0 to \tsa, it is a silent change in the sense that the Boolean rules at the other nodes treat an input \tsa\ exactly the same as 0. To retrieve the final state of the network, all occurrences of \tsa\ must be updated to 1. When a single update to 1 is made, the information that the given node has value 1 is passed along to all nodes with inputs from it. The values of these nodes may then change from 0 to \tsa. The conditional probability that the value of node $i$ is updated from 0 to \tsa\ when $j$ changes value from \tsa\ to 1, is given by \begin{equation} U_i(\s,j) \equiv \frac{P_1(\kb'_i) - P_1(\kb_i)}{1 - P_1(\kb_i)}, \label{eq: Ui} \end{equation} where $\kb'_i$ is the value of $\kb_i$ after $s_j$ has been updated and $P_1(\kb_i)$ is the probability that $R_i(\kb_i) = 1$: \begin{equation} P_1(\kb_i) \equiv \sum_R R(\kb_i)\Pi_i(R). \label{eq: P1} \end{equation} The numerator in Eq.~(\ref{eq: Ui}) is the probability that $R_i$ produces a 1 after the update of node $j$ minus the probability that $R_i$ produced a 1 before the update. The denominator is the probability that node $i$ had the value 0 before the update. Let $\Pi_i(1)$ denote the probability that the rule at node $i$ has output 1 regardless of its input values. 
If some particular nodes are selected for initiation of the \UBA, $\Pi_i(1)$ is set to one for these nodes [which means $\Pi_i(R) = 0$ for all other rules]. We are now ready to present a formal algorithm for determining the final state of an instance of \UBA\ on a finite network. We carry out the following procedure (where $:=$ denotes the assignment operator):
\begin{enumerate}
\item $s_j:=0$ for all $j$;
\item $s_j:=\msa$ with probability $\Pi_j(1)$ for each $j$;
\item Some $j$ with $s_j=\msa$ is selected; \label{j select}
\item $s_i:=\msa$ with probability $U_i(\s,j)$ for each $i$ with $s_i=0$;
\item $s_j:=1$; \label{j := 1}
\item Steps \ref{j select}--\ref{j := 1} are iterated as long as there exists a node in state \tsa. \label{iterate}
\end{enumerate}
\UBA\ can also be considered on infinite networks, but that requires a more technical description of the process. First, the choices of $j$ in step 3 for both descriptions must be such that any given $j$ that satisfies the conditions in step~\ref{j select} will be selected in a finite number of iterations. Second, the ensemble of final states needs to be defined in terms of a suitable limit process because the stopping criterion in step~\ref{iterate} cannot be applied to an infinite system. Note that the dynamics is only dependent on the probability functions $\{P_1(\kb_i)\}$. That is, the precise rule distributions affect the avalanche results only through their contributions to $P_1$. Because the Boolean rules are non-decreasing functions, $P_1(\kb_i)$ is also a non-decreasing function. In fact, every non-decreasing function, $f(\kb_i)$, with values in the interval $[0,1]$ can be realized by $P_1(\kb_i)$ for a suitable Boolean rule distribution. One such rule distribution can be constructed as follows: for each $i$ and each $\kb_i$, select a random number $y$ from a uniform distribution on the unit interval and set $R_i(\kb_i) = 1$ if and only if $y < f(\kb_i)$.
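The six-step procedure above translates directly into code. The sketch below is an illustrative implementation for a finite network: the intermediate state \tsa\ is represented by the value 2, and the update probability is that of Eq.~(\ref{eq: Ui}).

```python
import random

def run_uba(inputs, p1, pi1, rng):
    """Steps 1-6 of the UBA algorithm (a sketch).

    inputs : inputs[i] lists the input nodes of node i
    p1     : p1(i, k) = probability that node i's rule outputs 1 given
             the tuple k of its input values [the P_1 of Eq. (2)]
    pi1    : pi1[i] = probability that node i's rule is the constant 1
    The intermediate state 1-tilde is represented by the value 2.
    """
    N = len(inputs)
    s = [0] * N                                   # step 1
    pending = []
    for j in range(N):                            # step 2
        if rng.random() < pi1[j]:
            s[j] = 2
            pending.append(j)
    while pending:                                # step 6
        j = pending.pop()                         # step 3
        s[j] = 1                                  # step 5 (order-independent)
        for i in range(N):                        # step 4: test nodes fed by j
            if s[i] != 0 or j not in inputs[i]:
                continue
            # Input vectors after/before the update of s_j; the state
            # 1-tilde counts as 0 for the purposes of the rules
            k_new = tuple(1 if s[m] == 1 else 0 for m in inputs[i])
            k_old = tuple(0 if m == j else v
                          for m, v in zip(inputs[i], k_new))
            a, b = p1(i, k_old), p1(i, k_new)
            if b > a and rng.random() < (b - a) / (1 - a):   # Eq. (1)
                s[i] = 2
                pending.append(i)
    return s
```

With {\sc or}-rules on a directed cycle, a single seeded node damages every node; with an {\sc and}-rule, a node becomes damaged only once all of its inputs are damaged.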
\subsection{An example of EP on a lattice} \label{sec: EP lattice} To illustrate the concepts of \UBA\ and \EP, consider a directed network on a two-dimensional square lattice with periodic boundary conditions. Each node in the lattice has integral coordinates $(i,j)$ where $i+j$ is odd and the node at $(i,j)$ receives inputs from the two nodes at $(i-1,j\pm1)$. The rule for propagation of damage to a node is either {\sc or} or {\sc and}, with probabilities $\Pi_{(i,j)}(\trm{\sc or}) = r$ and $\Pi_{(i,j)}(\trm{\sc and}) = 1 - r$, respectively. \begin{figure}[bt] \begin{center} \includegraphics{uba_2d.eps} \end{center} \caption{\label{fig: lattice UBA} An example of \UBA\ on a lattice, displaying undamaged nodes (dots), initially damaged nodes (filled circles), and nodes damaged during the avalanche (empty circles). Each node has either an {\sc or}-rule or an {\sc and}-rule with inputs from its neighbors in the row immediately above the node. The probability for a node to be initially damaged is $\rho = 1/8$ and the probability for obtaining an {\sc or}-rule is $r=0.3$. Periodic boundary conditions are used and the first row and column are repeated in gray after the last row and column to illustrate the periodic boundary conditions. } \end{figure} Figure~\ref{fig: lattice UBA} displays an avalanche that is initiated by letting each node be initially damaged with probability $\rho=1/8$. A node assigned {\sc or} becomes damaged if either of its neighbors one layer above is damaged; a node assigned {\sc and} becomes damaged if and only if both neighbors above it are damaged. This means that \begin{align} P_1(\kb_{(i,j)}) &= \left\{ \begin{array}{ll} 0 & \trm{if }\kb_{(i,j)} = (0,0)\\ r & \trm{if }\kb_{(i,j)} \in \{(0,1),(1,0)\}\\ 1 & \trm{if }\kb_{(i,j)} = (1,1)~. \end{array}\right. 
\end{align}
Note that clusters of damaged nodes formed in an avalanche initiated by a single damaged node cannot contain any holes, as the uppermost undamaged node in the hole would have to have two damaged inputs and hence would become damaged when updated. For localized initial damage, the \SP\ threshold is found at $r = 1/2$. Above this value of $r$, domains of damage tend to widen as the avalanche proceeds. Since the growing cluster has no holes, this is simultaneously an \EP\ transition. The \EP\ transition can be found for smaller values of $r$ in lattices where each node is initially damaged with a given nonzero probability $\rho$. [For every initially damaged node, $\Pi_{(i,j)}(1)$ is set to 1, meaning that $P_1(\kb_{(i,j)})=1$ for every value of $\kb_{(i,j)}$.]
\begin{figure}[bt]
\begin{center}
\includegraphics{EP_2d.eps}
\end{center}
\caption{\label{fig: lattice EP} The average fraction of undamaged nodes for \UBA\ on a lattice of the type shown in Fig.~\ref{fig: lattice UBA}, as a function of the selection probability $r$ for {\sc or}-rules and the probability $\rho=1/8$ for initial damage. The lattice has periodic boundary conditions and covers a square that has a side of $10$, $10^2$, $10^3$, and $10^4$ lattice points, respectively, with steeper curves for larger systems. The statistical uncertainty in the estimated mean is less than the line width. }
\end{figure}
Figure~\ref{fig: lattice EP} shows the average fraction of unaffected nodes as a function of $r$ for $\rho=1/8$ on lattices with periodic boundary conditions. The numerics displayed in Fig.~\ref{fig: lattice EP} clearly suggest that there is a second-order \EP\ phase transition. Furthermore, these numerical results suggest that the avalanche in Fig.~\ref{fig: lattice UBA} is within the parameter regime for \EP, and that the absence of \EP\ in that particular instance is due to finite-size effects.
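Curves like those of Fig.~\ref{fig: lattice EP} can be reproduced by iterating the lattice to a fixed point, which is legitimate because the rules are non-decreasing. The sketch below (illustrative, not the code used for the figure) runs the avalanche on a full $n\times n$ grid; since inputs from $(i-1,j\pm1)$ preserve the parity of $i+j$, this amounts to two independent copies of the checkerboard lattice described in the text, with identical statistics for the undamaged fraction.

```python
import random

def lattice_uba(n, r, rho, rng):
    """UBA on the directed lattice: node (i, j) has inputs from
    (i-1, j-1) and (i-1, j+1) with periodic boundaries; its rule is
    OR with probability r and AND otherwise; each node is initially
    damaged with probability rho.  Returns the undamaged fraction."""
    state = [[1 if rng.random() < rho else 0 for _ in range(n)]
             for _ in range(n)]
    is_or = [[rng.random() < r for _ in range(n)] for _ in range(n)]
    changed = True
    while changed:                  # sweep until a fixed point is reached
        changed = False
        for i in range(n):
            for j in range(n):
                if state[i][j]:
                    continue
                a = state[(i - 1) % n][(j - 1) % n]
                b = state[(i - 1) % n][(j + 1) % n]
                if (a or b) if is_or[i][j] else (a and b):
                    state[i][j] = 1
                    changed = True
    return sum(row.count(0) for row in state) / n ** 2
```

The limiting cases behave as expected: with $\rho=1$ everything is damaged from the start, and with $\rho=0$ nothing ever spreads, whatever the value of $r$.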
For the case $r = 0$, it is possible to map the \EP\ transition onto ordinary, directed, site percolation on the same lattice. When all nodes in the lattice have {\sc and}-rules, the following algorithm may be used to determine whether a given node will be damaged: select a node; put a mark on the selected node unless it is initially damaged; and recursively mark each initially undamaged node that has an output to a marked node. The selected node will get damaged if and only if this recursion ends in a finite number of steps. The algorithm describes ordinary directed site percolation where the initially undamaged (damaged) nodes are considered active (inactive) sites and the process propagates in the opposite direction relative to the \UBA. We therefore expect the \EP\ transition to occur for a value of $\rho$ equal to $1-p_{\trm c}$, where $p_{\trm c} = 0.70549$ is the threshold for directed site percolation \cite{Essam:88} and we have confirmed this with numerical tests. Further study of \EP\ on the lattice is beyond the scope of this paper. \subsection{Suppression of EP by resistant motifs} \label{sec: resistant motifs} In the lattice example above, the fact that the network had no feedback loops smaller than the lattice size was important. In general, \EP\ is suppressed by the presence of short feedback loops. As already noted, for \EP\ to occur, it is required that the output of each rule in the rule distribution is 1 if all of its inputs have the value 1. Otherwise, there would be a finite fraction of nodes that keep the value 0 regardless of the influence from the rest of the network. Generalization of this reasoning allows us to rule out \EP\ in other situations, indicating that \EP\ is most likely to occur in directed or highly disordered networks. To pursue this idea, we introduce the notion of {\it resistant motifs}. A {\it motif} is a small network with a particular arrangement of internal links. 
A given motif may occur many times in a network with different rules assigned to its nodes and with different configurations of external inputs. A motif is {\it resistant} with respect to a given ensemble of rule assignments if the probability of damage entering the motif when all external inputs are damaged is strictly less than unity. For the rule distributions that we consider for the \EP\ transition in random networks, each node has a nonzero probability of being assigned a rule that sets its output to 0 if at least one of its inputs is 0. Thus when all of the nodes in a feedback loop of any length have the value 0, there is a nonzero probability that they will all remain 0 even if all external inputs to the loop are set to 1. Every feedback loop of a given length is therefore a resistant motif. If the number of occurrences of a resistant motif grows linearly with the network size, there will in total be a finite fraction of nodes that remain unaffected with a finite probability. For such networks, \EP\ cannot occur in the limit of large systems. Examples include typically studied regular lattices and small world networks with link directions assigned so that short feedback loops are prevalent. The problem of resistant motifs can be avoided in random networks having a mean indegree $\langle K\rangle$ that is well-defined and independent of $N$, in which case the number of feedback loops of a given length approaches a constant. Though the total number of resistant motifs may grow with system size, the larger motifs have a low probability of avoiding damage. For large $N$, the out-degree distribution is a Poisson distribution with a mean value of $\langle K\rangle$. The outputs emerging from a given node form a tree with approximately $\langle K\rangle^m$ nodes at the $m$th level.
Thus, the probability for a given node to be part of a cycle of $m$ nodes is approximately $\langle K\rangle^m/N$, which means that the typical number of feedback loops of length $m$ is approximately $\langle K\rangle^m/m$. On the other hand, the loop may contain either initially damaged nodes or some rules that allow damage to enter from external inputs. The probability that this will {\em not} occur decays exponentially with $m$. If the decay is faster than $\langle K\rangle^{-m}$, the density of nodes in undamaged resistant motifs will approach zero. In summary, \EP\ (for the considered type of rule distributions) is excluded on lattices with a high density of feedback loops. For random networks, however, the fraction of nodes in undamaged resistant motifs can go to zero in the large $N$ limit. This property allows \EP\ to occur on random networks as demonstrated in the following section. \section{EP on random networks} \label{sec: random networks} \subsection{Criteria for EP} \label{sec: criteria for EP} Consider a network such that the inputs to each node are chosen randomly and uniformly from all nodes in the network and the probability functions $\{P_1(\kb_i)\}$ are determined from a given distribution of Boolean rules. For such networks, \UBA\ can be handled analytically. Define $g(x)$ as the probability for a rule in the random network to output $1$ if each input has the value $1$ with probability $x$. The function $g$ reflects the probability for propagation of damage to a single node, for the considered instance of \UBA. We refer to $g$ as the {\it damage propagation function}. In random networks, $P_1(\kb_i)$ is independent of $i$ and can be replaced by $P_1(\kb)$. Let $K$ denote the number of components of $\kb$, i.e., the number of inputs to the considered node. 
$g(x)$ can then be expressed as \begin{align} \label{eq: g definition} g(x) &= \sum_{K=0}^\infty P(K)\!\!\!\!\sum_{\kb\in\{0,1\}^{K}}x^{I}(1-x)^{K-I}P_1(\kb), \end{align} where $I$ is the number of 1s in $\kb$ and $P(K)$ is the probability to draw a rule with $K$ inputs. Let $N$ denote the total number of nodes, and let $n_0$, $n_\msa$, and $n_1$ denote the number of nodes with the values $0$, $\msa$, and $1$, respectively. With these definitions and the fact that $U_i(\s,j)$ is independent of $i$ for the random network, the role of $\{P_1(\kb_i)\}$ is taken over by $g(n_1/N)$ and Eq.~\eqref{eq: Ui} is replaced by \begin{align} U(\s,j) &= \frac{g(n'_1/N)-g(n_1/N)}{1-g(n_1/N)}, \label{eq: n-update first} \end{align} where $n'_1=n_1+1$. This means that the size of the network, the number of initially damaged nodes, and the damage propagation function $g$ taken together are sufficient to uniquely determine the stochastic spreading process. After one pass of the update steps \ref{j select}--\ref{j := 1} (from Section~\ref{sec: formal UBA}), the new values $n'_0$ and $n'_\msa$ of $n_0$ and $n_\msa$ are given by \begin{align} n'_0 &= n_0 - \delta \intertext{and} n'_\msa &= n_\msa + \delta - 1 \intertext{where} \delta &= B_{n_0}[U(\s,j)], \label{eq: n-update last} \end{align} with $B_n(a)$ being a stochastic function that returns the number of selected items among $n$ items if the selection probability for each of them is $a$. The avalanche ends when $n_\msa=0$. The number of damaged nodes, $n$, in a complete avalanche is the final value of $n_1$, whereas the number of undamaged nodes, $u$, is the final value of $n_0$. An order parameter for the system is $\phi = \lim_{N\rightarrow\infty}\langle n/N\rangle$, where the average is taken over the ensemble of networks. The \SP\ transition is found when $\phi$ changes from zero to a nonzero value, whereas the \EP\ transition is found when $\phi$ reaches 1. 
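As a concrete illustration of Eq.~\eqref{eq: g definition}, the double sum can be evaluated by direct enumeration of input vectors. The sketch below (a minimal Python example; the ensemble of two-input {\sc or}-rules is an assumed toy choice, not a distribution analyzed above) checks the enumeration against the closed form $g(x)=1-(1-x)^2$ that holds for that ensemble.

```python
from itertools import product

def g_enumerated(x, P_K, P1):
    """Damage propagation function from Eq. (g definition):
    g(x) = sum_K P(K) sum_{k in {0,1}^K} x^I (1-x)^(K-I) P1(k),
    where I is the number of 1s in the input vector k."""
    total = 0.0
    for K, pK in P_K.items():
        for k in product((0, 1), repeat=K):
            I = sum(k)
            total += pK * x**I * (1 - x)**(K - I) * P1(k)
    return total

# Toy ensemble: every rule is a two-input OR
# (output 1 unless both inputs are 0).
P_K = {2: 1.0}
P1_or = lambda k: 1.0 if any(k) else 0.0
```

For this ensemble, `g_enumerated(x, P_K, P1_or)` agrees with $1-(1-x)^2$ for every $x$, and the general boundary values $g(0)=P_1(\mathbf{0})$-weighted damage and $g(1)=1$ come out automatically.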
To understand the typical development of an avalanche, it is convenient to change from the variables $n_0$, $n_\msa$, and $n_1$, which are constrained to sum to $N$, to the variables $x_1 \equiv n_1/N$ and \begin{align} c &\equiv \frac{n_0}{1-g(x_1)}. \label{eq: c definition} \end{align} As long as $n_\msa>0$, the average value of $c$ after a single update is given by \begin{align} \langle c'\rangle &= \frac{\langle n'_0\rangle}{1-g(x'_1)}\\ &= \frac{n_0-c[g(x'_1)-g(x_1)]}{1-g(x'_1)}\\ &= c. \label{eq: c constant} \end{align} Hence, as long as $n_\msa>0$ for all members of an ensemble of avalanches, $\langle c\rangle$ (the average of $c$ over the ensemble) is conserved as the avalanche proceeds. From Eqs.~\eqref{eq: n-update first}--\eqref{eq: n-update last} and the definition of $c$, the variance in $c$ can be calculated. We begin by computing the increment of the variance due to one update step, $\sigma^2(c')$. To leading order as $N\rightarrow\infty$, we get \begin{align} \sigma^2(c') &= \frac{\sigma^2(\delta)}{[1-g(x'_1)]^2}\\ &= \frac{n_0 U(\s,j)[1-U(\s,j)]}{[1-g(x'_1)]^2}\\ &= \frac{c\,U(\s,j)}{1-g(x_1)}\\ &= \frac{c}{N[1-g(x_1)]^2}\frac{dg(x)}{dx}\bigg|_{x=x_1}. \label{eq: c-variance} \end{align} To get the total variance of $c$, we sum the increment in Eq.~\eqref{eq: c-variance} over all updates from $n_1=0$ to the desired value of $n_1$. Provided that there is an upper bound $\kappa$ such that $dg(x)/dx<\kappa$ for all $x$, the total variance of $c$ satisfies \begin{align} \sigma_{\trm{tot}}^2(c) &< n_1\frac{c\,\kappa}{N[1-g(x_1)]^2} <\frac{\kappa N}{1-g(x_1)} \label{eq: c tot var upper} \end{align} for $x_1 < 1$. (Note that $1/[1-g(x)]$ is a nondecreasing function because $g(x)$ is nondecreasing.) The avalanche is initiated with $n_\msa \equiv n_\msa^\ti$, $n_0=N-n_\msa^\ti$, and $n_1=0$. The process ends when $n_0+n_1=N$ and we seek the distribution of $n_0$ or $n_1$ when this happens.
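The conservation of $\langle c\rangle$ in Eq.~\eqref{eq: c constant} can be checked directly from the update rules. In the sketch below (Python; the linear propagation function $g(x)=0.2+0.8x$ is an assumed toy example, not an ensemble treated above), \texttt{expected\_c\_next} evaluates $\langle c'\rangle$ for a fixed state and recovers $c$ exactly, while \texttt{run\_avalanche} simulates Eqs.~\eqref{eq: n-update first}--\eqref{eq: n-update last} for a single realization.

```python
import random

def g(x):
    # assumed toy propagation function: base damage rate 0.2 plus a
    # linear dependence on the damaged fraction; satisfies g(1) = 1
    return 0.2 + 0.8 * x

def U(N, n1):
    """Damage probability for one fresh update, Eq. (n-update first)."""
    return (g((n1 + 1) / N) - g(n1 / N)) / (1 - g(n1 / N))

def c_of(N, n0, n1):
    """Conserved quantity c = n0 / (1 - g(x1)), Eq. (c definition)."""
    return n0 / (1 - g(n1 / N))

def expected_c_next(N, n0, n1):
    """<c'> for one update: E[delta] = n0 U, so
    E[n0'] = n0 (1 - U) = n0 [1 - g(x1')]/[1 - g(x1)], giving <c'> = c."""
    return n0 * (1 - U(N, n1)) / (1 - g((n1 + 1) / N))

def run_avalanche(N, n_a, seed=0):
    """One stochastic avalanche; returns final (undamaged, damaged) counts."""
    rng = random.Random(seed)
    n0, n1 = N - n_a, 0
    while n_a > 0:
        u = U(N, n1)
        delta = sum(rng.random() < u for _ in range(n0))  # B_{n0}[U]
        n0 -= delta
        n_a += delta - 1
        n1 += 1
    return n0, n1
```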
According to Eq.~\eqref{eq: c tot var upper}, the standard deviation of $c/N$ scales like $1/\sqrt{N}$, which implies that both $n_0/N$ and $n_\msa/N$ have standard deviations that scale like $1/\sqrt{N}$. ($x_1$ has zero standard deviation because $n_1$ is incremented by exactly unity on every update step.) Thus in the large system limit, the probability of any member of the ensemble of avalanches stopping is negligibly small as long as $n_\msa/N$ is finite, and we may treat $c$ as exactly conserved as long as this condition holds. Using the initial values $x_1 = 0$ and $n_0 = N - n_\msa^\ti$, which determine $c$, Eq.~\eqref{eq: c definition} can be rearranged to give \begin{align} n_0 &= [1-g(x_1)]\frac{N-n_\msa^\ti}{1-g(0)}. \end{align} Noting that $n_0/N = 1 - x_1 - n_\msa/N$, we see that in the large $N$ limit, the process continues as long as the strict inequality \begin{align} 1 - x_1 &> [1-g(x_1)] \frac{\displaystyle 1-\lim_{N\rightarrow\infty}n_\msa^\ti/N} {1-g(0)}~ \label{eq: x1 fp} \end{align} holds, since the inequality implies that $n_\msa/N$ remains finite. Moreover, in the large $N$ limit it is impossible to reach values of $x_1$ for which the inequality has the opposite sign, because the process stops when $n_\msa$ reaches zero. Note that because of the zero probability of a node remaining undamaged when all of its neighbors are damaged, we have $g(1)=1$, which in turn implies that Eq.~\eqref{eq: x1 fp} becomes an equality at $x_1 = 1$. If Eq.~\eqref{eq: x1 fp} is satisfied for all $x_1 < 1$, the process will be exhaustive in the sense that it will not end with a finite value of $n_0/N$. If, on the other hand, the inequality changes sign for $x_1$ above some threshold value, then the process will terminate when the threshold is reached. If the left hand side of Eq.~\eqref{eq: x1 fp} forms a tangent line to the right hand side of the expression at some value of $x_1$, the process will exhibit critical scaling laws.
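The threshold behavior described above can be located numerically: the avalanche halts, in the large-$N$ limit, at the smallest $x_1$ where the continuation condition Eq.~\eqref{eq: x1 fp} first fails. The sketch below (Python; the propagation function $g(x)=0.1+0.9x^2$ and initial damage fraction $\rho_0=0.1$ are assumed toy values) finds that point by bisection. For this particular $g$, the condition reduces analytically to $x_1<1/9$, which the bisection reproduces, while $\rho_0=0.5$ puts the same $g$ in the exhaustive regime.

```python
def g(x):
    # assumed toy propagation function with g(0) = 0.1 and g(1) = 1
    return 0.1 + 0.9 * x * x

def stopping_fraction(g, rho0, iters=200):
    """Smallest x1 at which the continuation condition, Eq. (x1 fp),
    1 - x1 > [1 - g(x1)] (1 - rho0) / (1 - g(0)),
    first fails; the avalanche halts there in the large-N limit."""
    def margin(x):
        return (1 - x) - (1 - g(x)) * (1 - rho0) / (1 - g(0))
    lo, hi = 0.0, 1.0
    # bisection for the first sign change of the margin on (0, 1]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if margin(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $\rho_0=0.1$ the condition $1-x_1>0.9(1-x_1^2)$ is equivalent to $x_1<1/9$, so the damaged fraction saturates at $x_1^*=1/9$; for $\rho_0=0.5$ the margin stays positive on all of $(0,1)$ and the returned value approaches 1 (exhaustive damage).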
The critical case for \EP\ occurs when the tangency occurs at $x_1 = 1$. Examples of these behaviors are presented below and in Section~\ref{sec: application}. As an aside, we note that the \SP\ transition is an instance of criticality at $x_1=0$. For the above mentioned criterion of criticality to hold at $x_1=0$, the right hand side of Eq.~\eqref{eq: x1 fp} must have the value $1$ and the slope $-1$ at $x_1=0$. Thus, the system is critical with respect to \SP\ if $\lim_{N\rightarrow\infty}n_\msa^\ti/N=0$ and \begin{align} \label{eq: critical g} \frac{dg(x)}{dx}\bigg|_{x=0} &= 1 - g(0). \end{align} Eqs.~\eqref{eq: g definition} and~\eqref{eq: critical g} immediately give a criterion for critical percolation on graphs in which every possible directed link (including self-inputs) exists with an independent, fixed probability, assuming the conventional choice in which damage spreads to a given node with probability $p$ from each of its damaged neighbors. In this case we have \begin{align} g(x) &= \sum_{K=0}^\infty P(K)\bigl[1 - (1-px)^K\bigr], \end{align} which yields \begin{align} \frac{dg(x)}{dx}\bigg|_{x=0} &= p \sum_{K=0}^\infty P(K) K \\ &= p \langle K \rangle. \label{eq: ER percolation} \end{align} This result is closely related to the well-known criterion for the presence of a percolating cluster in an Erd\H{o}s--R\'{e}nyi graph: percolation occurs when the probability $p_{\trm {\sc er}}$ for the presence of a link between two randomly selected nodes exceeds $1/N$, where $N$ is the number of nodes.~\cite{Bollabas:85} In the present context, $p_{\trm {\sc er}}$ is mapped to $p_{\trm{link}} p$, where $p_{\trm{link}}$ is the probability that a link exists connecting the two randomly selected nodes and $p$ is the probability that damage spreads across that link. At the same time, we have $\langle K\rangle = p_{\trm{link}} N$. (Recall that $K$ is only the indegree of a node, not the total number of links connected to it.)
Thus Eq.~\eqref{eq: ER percolation}, which implies that the critical value of $p$ is $1/\langle K \rangle$, is consistent with the well-known theory of Erd\H{o}s--R\'{e}nyi graphs.~\cite{Bollabas:85} Eq.~\eqref{eq: ER percolation} applies for any distribution of indegrees so long as $\langle K \rangle$ is well-defined and the source of each input is selected at random (so that the outdegrees are Poisson distributed). We note that the latter condition is {\em not} met by random regular graphs (graphs in which all nodes have the same outdegree) because the probabilities of two nodes getting an output from the same node are correlated. \SP\ can also be understood by the theory of Galton--Watson processes. If $\lim_{N\rightarrow\infty}n_\msa^\ti/N=0$, the update described by Eqs.~\eqref{eq: n-update first}--\eqref{eq: n-update last} is consistent with a Galton--Watson process that has a Poisson out-degree distribution with a mean value \begin{align} \lambda &= \frac1{1 - g(0)}\frac{dg(x)}{dx}\bigg|_{x=0}. \end{align} See References~\cite{Harris:63, Otter:49, Ramo:06}. See Appendix~\ref{app: SP} for more details on \SP\ in relation to known results. Cases of tangencies at intermediate values of $x_1$ are beyond the scope of the present work. Returning to the question of the \EP\ transition, it is convenient to change variables once again. We define $x_\msia \equiv 1-x_1$ and $q(x_\msia) \equiv 1-g(x_1)$. In words, $q(x)$ is the probability that a randomly selected node will output 0 given that each of its inputs has the value 0 with probability $x$. We refer to $q$ as the {\it damage control function} as it characterizes the probability that damage will be prevented from spreading to a single node. Equation~\eqref{eq: x1 fp} is then transformed to \begin{align} x_\msia &> q(x_\msia) \frac{\displaystyle 1-\lim_{N\rightarrow\infty}n_\msa^\ti/N} {q(1)}.
\label{eq: x0 gt} \end{align} Critical \EP\ is found when the left hand side of Eq.~\eqref{eq: x0 gt} forms a tangent line to the right hand side of the expression at $x_\msia=0$. At criticality, the right hand side of Eq.~\eqref{eq: x0 gt} should have the value 0 and the slope 1. Hence, the conditions $q(0)=0$ and \begin{align} \frac{dq(x)}{dx}\bigg|_{x=0} &= \frac{q(1)}{1-n_\msa^\ti/N} \label{eq: EP criticality} \end{align} are required for an \EP\ transition. \subsubsection*{Example: EP on random digraphs} We now consider the special case of graphs in which every possible directed link (including self-inputs) exists with an independent, fixed probability. (We have already discussed \SP\ on such graphs.) If damage spreads along each directed link with probability $p$, there is no \EP\ transition because there is a nonzero probability for a node to remain undamaged when all of its inputs are damaged. A minimal change that allows \EP\ on such graphs is to give a special treatment to nodes whose inputs are all damaged, in which case the considered node should always get damaged. For the same reason, all nodes with no inputs must be initially damaged. Other nodes might also be initially damaged, and we let this happen with a given probability $\rho$ for each node with at least one input. For such a network, we can calculate the damage propagation function according to \begin{align} g(x) &= \sum_{K=0}^\infty P(K) \bigl[1-(1-px)^K+(1-p)^Kx^K\bigr]\\ &= 1-e^{-\langle K\rangle px}\bigl(1-e^{-\langle K\rangle(1-x)}\bigr). \end{align} The corresponding damage control function becomes \begin{align} q(x) &= e^{-\langle K\rangle p(1-x)}\bigl(1-e^{-\langle K\rangle x}\bigr). \end{align} A necessary condition for the \EP\ transition is derived from Eq.~\eqref{eq: EP criticality}, yielding \begin{align} \langle K\rangle e^{-p\langle K\rangle} &= \frac1{1-\rho}~. 
\label{eq: E-R EP slope} \end{align} For the \EP\ transition to occur, it is also required that \begin{align} f(x) \equiv x - q(x)(1-\rho) &\ge 0 \label{eq: E-R EP ineq} \end{align} for all $x\in[0,1]$ according to Eq.~\eqref{eq: x0 gt}. If both Eqs.~\eqref{eq: E-R EP slope} and~\eqref{eq: E-R EP ineq} are satisfied, the \EP\ transition occurs at the value of $p$ given by Eq.~\eqref{eq: E-R EP slope}: \begin{align} p_{\trm c} &= \frac{\ln\langle K\rangle+\ln(1-\rho)}{\langle K\rangle}~. \end{align} Equation~\eqref{eq: E-R EP slope} turns out to be a necessary and sufficient condition for the \EP\ transition. Provided that Eq.~\eqref{eq: E-R EP slope} holds, the first derivative satisfies $f'(0)=0$. From the observation $f'''(x)<0$, it is then straightforward to show that $f(x)$ has no local minimum on the interval $(0,1)$. Since $f(0)=0$ and $f(1)>0$, Eq.~\eqref{eq: E-R EP ineq} holds for all $x\in[0,1]$. It is instructive to examine the phase diagram at fixed $\rho$. A negative value of $p_{\trm c}$ indicates that the system is always in the \EP\ regime, so for $\langle K\rangle < 1/(1-\rho)$ the system exhibits \EP\ and it is not possible to observe a transition. For $\langle K\rangle > 1/(1-\rho)$, an \EP\ transition can be observed at $p=p_{\trm c}$. A curious feature of this system is that $p_{\trm c}$ is not a monotonic function of $\langle K\rangle$, having a maximum value of $(1-\rho)/e$ at $\langle K\rangle = e/(1-\rho)$ and approaching zero as $\langle K\rangle$ approaches infinity. Thus if $p$ is held fixed at any value between zero and $(1-\rho)/e$, the system will undergo two transitions as $\langle K\rangle$ is increased from zero. The system will begin in the \EP\ regime (i.e. $p>p_{\trm c}$), undergo a transition to subcritical behavior at some $\langle K\rangle$, then reenter the \EP\ regime for a higher value of $\langle K\rangle$.
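The non-monotonicity of $p_{\trm c}$ and the resulting reentrance can be verified directly from the closed form $p_{\trm c} = [\ln\langle K\rangle+\ln(1-\rho)]/\langle K\rangle$. The sketch below (Python; the values $\rho=1/4$ and the sampling grid are our own illustrative choices) confirms the maximum $(1-\rho)/e$ at $\langle K\rangle = e/(1-\rho)$ and counts the two crossings of the threshold curve for a fixed $p$ below the maximum.

```python
import math

def p_c(K_mean, rho):
    """EP threshold on the random digraph ensemble,
    p_c = [ln<K> + ln(1 - rho)] / <K>, valid for <K>(1 - rho) > 1."""
    return (math.log(K_mean) + math.log(1 - rho)) / K_mean

rho = 0.25
K_star = math.e / (1 - rho)  # location of the maximum of p_c(<K>)
p_max = p_c(K_star, rho)     # should equal (1 - rho)/e exactly

# For a fixed p below the maximum, the curve p_c(<K>) is crossed twice
# as <K> grows, so the system leaves and then reenters the EP regime.
p_fixed = 0.5 * p_max
crossings = 0
prev = p_c(1.5, rho) > p_fixed
for i in range(1, 20000):
    K = 1.5 + 0.01 * i
    cur = p_c(K, rho) > p_fixed
    if cur != prev:
        crossings += 1
        prev = cur
```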
The calculated phase diagram is shown in Fig.~\ref{fig: E-R EP phase} and has been verified by direct numerical simulations of avalanches. Roughly speaking, at low $\langle K\rangle$ \EP\ occurs due to the high density of initially damaged nodes with no inputs. At high $\langle K\rangle$, on the other hand, \EP\ occurs due to the high probability of nodes being damaged because of their large number of inputs. \begin{figure}[bt] \begin{center} \includegraphics{E-R_EP_phase.eps} \end{center} \caption{\label{fig: E-R EP phase} Phase diagram for \EP\ on random digraphs, where damage spreads along each directed link with probability $p$ and a node is guaranteed to get damaged in the special case that all of its inputs are connected to damaged nodes. All nodes with zero inputs are initially damaged, and the other nodes are initially damaged with probability $\rho$. The gray area bounded by a solid line shows the region where \EP\ occurs for $\rho=0$ and the dashed lines show the \EP\ transition when $\rho$ has the values $1/4$, $1/2$, and $3/4$, respectively.} \end{figure} \subsection{The probability of complete coverage} \label{sec: cc} An important quantity associated with \EP\ is the probability of an avalanche yielding complete coverage of the system; i.e., the probability that all sites are damaged by the \UBA\ so that $u=0$. Let $\Pex(N,q; n_0,n_\msa)$ denote the probability that a \UBA\ on a random network will yield complete coverage for a system with a given network size $N$, a given damage control function $q$, and starting with particular values of $n_0$ and $n_\msa$. For future convenience we also define $\Pex(N,q)$ to be the probability for complete coverage assuming that each node is initially damaged with probability $1-q(1)$ and we average over the corresponding probability distribution for $n_\msa$. To calculate $\Pex(N,q; n_0,n_\msa)$, we note that \begin{align} \Pex(N,q;m,0)=0\quad {\rm if\ } m>0, \end{align} since the process stops when $n_\msa = 0$. 
We also have \begin{align} \Pex(N,q;0,m)=1\quad {\rm for\ any\ } m, \end{align} since updating can never create 0s. These values of $\Pex$ can be used for recursive calculation of $\Pex$. Let $n_\msia$ denote $n_0+n_\msa$, or $Nx_\msia$. Performing steps \ref{j select}--\ref{j := 1} (from Section~\ref{sec: formal UBA}) one time decreases $n_\msia$ by $1$ as described by Eqs.~\eqref{eq: n-update first}--\eqref{eq: n-update last}. This means that $\Pex(N,q; n_0,n_\msa)$ can be calculated for all $n_\msia=m$ if $\Pex(N,q; n_0,n_\msa)$ is known for all $n_\msia=m-1$. The recursion starts at $n_\msia=0$ with $\Pex(N,q;0,0)=1$ and uses the boundary conditions $\Pex(N,q; n_\msia,0)=0$ and $\Pex(N,q; 0,n_\msia)=1$ for $n_\msia>0$. For large $N$, $\Pex$ can be calculated in the framework of a continuous approximation. Let $p(n_\msia,c)$ denote a continuous version of $\Pex(N,q$; $n_0,n_\msa)$. Then, the boundary conditions $\Pex(N,q; n_\msia,0)=0$ and $\Pex(N,q$; $0,n_\msia)=1$ are expressed as \begin{align} p[n_\msia,c_{\trm{max}}(x_\msia)] &= 0, \label{eq: p-diff bound beg} \intertext{and} p(n_\msia,0) &= 1, \intertext{where} c_{\trm{max}}(x_\msia) &= \frac{n_\msia}{q(x_\msia)}. \label{eq: p-diff bound end} \end{align} In the continuous approximation, the recurrence relation derived from Eqs.~\eqref{eq: n-update first}--\eqref{eq: n-update last} is transformed into a partial differential equation. In a single update, $n_\msia$ decreases by unity and, for large $N$, the change in $c$ is much less than $c$ itself. This means that $p(n_\msia,c)$ satisfies a partial differential equation of the form \begin{align} \frac{\partial p}{\partial n_\msia} &= h_1(n_\msia,c)\frac{\partial p}{\partial c} + h_2(n_\msia,c)\frac{\partial^2p}{\partial c^2}, \label{eq: p-diff 0} \end{align} where $h_1(n_\msia,c)$ and $h_2(n_\msia,c)$ are functions to be determined.
This is recognizable as a 1D diffusion equation in which $n_\msia$ plays the role of time and $c$ the role of space. Note that later times in the diffusion equation correspond to earlier stages of the \UBA, since $n_\msia$ decreases as nodes are converted to 1s. The boundary conditions on the diffusion are given by Eqs.~\eqref{eq: p-diff bound beg} and~\eqref{eq: p-diff bound end}. We are interested in computing $p(n_{\msia},c)$ for values of $n_\msia$ and $c$ corresponding to $n_\msa = n_\msa^\ti$ and $n_1 = 0$. The fact that the average of $c$ is constant means that the coefficient of the drift term in the diffusion equation must vanish; i.e., $h_1(n_\msia,c)=0$. The diffusion coefficient, $h_2(n_\msia,c)$, is given by \begin{align} h_2 &= \tfrac12\sigma^2(c'), \label{eq: h2} \end{align} where $\sigma^2(c')$ is the variance of $c'$ when a fixed $c$ is updated. Using Eqs.\ \eqref{eq: c-variance} and \eqref{eq: h2} and converting $g$'s to $q$'s, we find \begin{align} \frac{\partial p}{\partial n_\msia} &= \frac{c}{2N[q(x_\msia)]^2} \frac{dq(x)}{dx}\bigg|_{x=x_\msia} \frac{\partial^2p}{\partial c^2}. \label{eq: p-diff 1} \end{align} The large $N$ behavior of Eq.~\eqref{eq: p-diff 1}, with the boundary conditions in Eqs.\ \eqref{eq: p-diff bound beg} and \eqref{eq: p-diff bound end}, can be found by expanding $q(x)$ around $x=0$. If $q(x)$ is well-behaved, such an expansion can be written as \begin{align} q(x) &= \alpha_1x - \alpha_2x^2 + \Oc(x^3). \label{eq: q-Taylor} \end{align} This expansion can always be performed if the probability $P(K)$ for a node to have $K$ inputs decays at least as fast as $K^{-4}$; if $P(K)$ decays more slowly than $K^{-4}$ but faster than $K^{-3}$, only the remainder term is affected. See Appendix \ref{app: q calc}. In particular, the expansion is always valid if $K$ has a maximal value. The most interesting case in terms of asymptotic behavior is when $\alpha_1$ is close to $1$ and $\alpha_2$ is positive.
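Before passing to the large-$N$ asymptotics, we note that for finite $N$ the recursion for $\Pex$ described at the beginning of this subsection can be evaluated exactly. The sketch below (Python; a memoized recursion over the states $(n_0,n_\msa)$, with the binomial update of Eqs.~\eqref{eq: n-update first}--\eqref{eq: n-update last}) reproduces, for the identity damage control function $q(x)=x$ [i.e., $g(x)=x$], the exact result $n_\msa/(n_0+n_\msa)$ of Eq.~\eqref{eq: q-ident main}.

```python
from functools import lru_cache
from math import comb

def P_ex(N, g, n0, na):
    """Exact probability of complete coverage by recursion: one update
    takes (n0, na) to (n0 - d, na + d - 1) with d ~ Binomial(n0, U),
    where U is given by Eq. (n-update first); the recursion terminates
    on the boundary states na = 0."""
    @lru_cache(maxsize=None)
    def rec(n0, na):
        if na == 0:
            return 1.0 if n0 == 0 else 0.0
        n1 = N - n0 - na
        U = (g((n1 + 1) / N) - g(n1 / N)) / (1 - g(n1 / N))
        return sum(
            comb(n0, d) * U**d * (1 - U)**(n0 - d) * rec(n0 - d, na + d - 1)
            for d in range(n0 + 1)
        )
    return rec(n0, na)
```

For example, `P_ex(12, lambda x: x, 8, 4)` evaluates to $4/12$, matching $n_\msa/(n_0+n_\msa)$; the recursion is exponential in bookkeeping only through the number of reachable states and is practical for small $N$.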
With suitable $N$-dependent transformations of $p$ and its arguments, described in Appendix \ref{app: PEP}, the large $N$ behavior of Eq.~\eqref{eq: p-diff 1} can be expressed in terms of a function $\pt(\tti,\yt)$ determined by the differential equation \begin{align} \frac{\partial\pt}{\partial\tti} &= \frac12 \frac{\partial^2\pt}{\partial\yt^2}, \label{eq: diffuse} \end{align} with the boundary conditions \begin{align} \pt(\tti,1/\tti) &= 0 \quad\trm{for }\tti<0 \label{eq: diffuse bound 0} \intertext{and} \lim_{\tti\rightarrow-\infty}\pt(\tti,\yt) &= \yt \quad \trm{for }\yt\ge0. \label{eq: diffuse bound 1} \end{align} The Crank--Nicolson method can be used to calculate $\pt(\tti,\yt)$ numerically in an efficient way. (See, e.g., \cite{numrecip}.) Appendix~\ref{app: PEP} shows that the probability for complete coverage is given by \begin{align} \Pex(N, q) &\approx \Nt^{-1/3}\pt[0,\Nt^{1/3}(1-\alpha_1)], \label{eq: PexNq} \end{align} where $\Nt=\alpha_1N/\alpha_2$. The calculation assumes that the avalanche is initiated on the nodes whose outputs are independent of their inputs, as accounted for in $q(1)$. To our knowledge, the critical point for \EP\ has not been investigated previously in its own right. Two special cases have been studied, however. First, results for numbers of frozen and unfrozen nodes in critical \RBN s can be mapped to an \EP\ process, as discussed in Section~\ref{sec: application}. In this context, frozen nodes in the network are considered to be the damaged nodes of the \UBA, and the scaling with $N$ of the number of unfrozen nodes at the phase transition has been investigated for a certain class of \RBN s \cite{Socolar:03,Kaufman:05}. Second, in the special case that $q(x)=x$, the exact result \begin{align} \Pex(N,x\mapsto x;n_0,n_\msa) &= \frac{n_\msa}{n_0+n_\msa} \label{eq: q-ident main} \end{align} is obtained. [See Eq.~\eqref{eq: q-ident} in Appendix \ref{app: PEP}.]
This means that the probability for complete coverage is exactly $n^\ti_\msa/N$. The simplest realization of $q(x)=x$ is provided by a network of one-input nodes with rules that copy the input state. Such networks have strong connections to random maps from a set of $N$ elements into itself. A map $T$ is derived from a network of one-input nodes by letting each node map to the node from which its input is taken. In this picture, the damage originating from one initially damaged node $i$ corresponds to the set of nodes $j$ such that $T^k(j)=i$ for some $k\ge0$ (where $T^k$ denotes the $k$th iterate of $T$). Such a $j$ is called a {\it predecessor} to $i$. See, e.g., Ref.~\cite{Bollabas:85} for an overview of the theory of random maps and see Refs.~\cite{Rubin:54, Harris:60} for results on predecessors in random maps. See Appendix~\ref{app: exact} for analytic results that relate \UBA\ to random maps. \subsection{On the number of damaged nodes in random networks} \label{sec: avalanche size} In Sections~\ref{sec: criteria for EP} and~\ref{sec: cc} we focused on determining the parameters that lead to \EP\ (a vanishing fraction of undamaged nodes in the large $N$ limit) and on the probability that the number of undamaged nodes will be exactly zero (complete coverage). We now consider the full probability distribution for the number of nodes damaged in an avalanche in a manner that provides a suitable base for understanding both \SP\ and \EP\ in random networks. The calculational strategy involves considering a given set of $n$ nodes to be the damaged set and computing both the probability that this set is consistent with all of the Boolean rules and the probability that the avalanche will actually cover the whole set. The probability of consistency is calculated via elementary combinatorics. The probability of reaching the whole set is precisely the probability of complete coverage for an avalanche on the sub-network of $n$ candidate nodes.
For this we can directly apply the results of the last section. For the purposes of explaining the calculation, we refer to the selected set of $n$ nodes as the {\it candidate set}. We let $\Pn(n)$ denote the probability that $n$ nodes will be damaged in an avalanche, averaged over the ensemble of $N$-node networks with a rule distribution characterized by a given damage propagation function $g$ or the corresponding damage control function $q$. We assume that the avalanche is initiated by randomly selecting $\ell$ nodes to set to $\msa$, regardless of their Boolean rules, then setting to $\msa$ all nodes with rules that always output 1 for any inputs. The set of $\ell$ initially damaged nodes must be a subset of the candidate set. The probability that the candidate set contains all of the nodes with ``always 1'' rules will be taken into account by the value of $g(0)$ in the expression below for the consistency probability. We use the notation $\binom m k$ for the usual binomial coefficient (the number of combinations of $k$ objects chosen from a set of $m$ objects). The probability $\Pn(n)$ can be expressed as \begin{align} \Pn(n) = \binom{N-\ell}{n-\ell} P_{\trm c}(n, \ell; N)\, \PNex(n,\ell;N), \label{eq: Pnn} \end{align} where $P_{\trm c}(n, \ell; N)$ and $\PNex(n,\ell;N)$ are defined below. The binomial factor counts the number of different sets of $n-\ell$ nodes that could be damaged in a process corresponding to a given set of $\ell$ nodes that are initially damaged without regard to their rules. $P_{\trm c}(n, \ell; N)$ is the probability that a given choice of $n-\ell$ nodes assumed to be damaged by the avalanche will constitute a final state that is consistent with the Boolean rules for each node, including the nodes that are initially damaged because their rules require it. $\PNex(n,\ell;N)$ is the probability that the avalanche will not die out before damaging all $n$ nodes. 
This factor is necessary to avoid counting final states that contain loops of damaged nodes consistent with the rules but unreachable because damage cannot spread to the loop from any nodes outside the loop. Consistency with the Boolean rules requires that the given set of $n-\ell$ nodes damaged in the avalanche have inputs that cause them to be damaged. In a random network, the probability that any single node will be damaged is $g(x_1)$, where $x_1$ is the fraction of damaged nodes. Similarly, the probability that any node will {\it not} be damaged is $1-g(x_1)$. We are considering candidate sets of damaged nodes with $x_1 = n/N$. Thus we have \begin{align} P_{\trm c}(n,\ell;N) = [g(n/N)]^{n-\ell}[1-g(n/N)]^{N-n}. \label{eq: Pconsistency} \end{align} The computation of $\PNex(n,\ell;N)$ involves the rule distribution on the restricted network formed by the candidate set with all inputs from the undamaged nodes removed. This distribution, $g^1(x)$, is different from $g(x)$ because $P_{\trm c}$ already accounts for rules that are not consistent with the pattern of damage. Thus the spreading of damage on the $n$-node network involves $g(nx/N)$, the probability that a rule outputs $1$ when a fraction $x$ of the $n$-node candidate set is damaged. The probability must be normalized such that it goes to unity when $x$ goes to 1. (We know that a node in the $n$-node set should get damaged if all of its inputs are damaged.) Thus we have \begin{align} g^1_{N,n}(x) = \frac{g(nx/N)}{g(n/N)} \label{eq: g1} \end{align} or, equivalently, \begin{align} q^1_{N,u}(x) &= \frac{q[u/N+(1-u/N)x]-q(u/N)}{1-q(u/N)}. \label{eq: q1} \end{align} (Recall that $u = N-n$ is the number of undamaged nodes after an avalanche.) There are two cases of interest for the probability of complete coverage of the candidate set. 
For \EP, $g(0)>0$ and the fixed number $\ell$ of nodes arbitrarily selected for damage is irrelevant compared to the finite fraction of nodes with rules that produce damage for any combination of inputs. In this case, we assume $\ell=0$, which allows reduction of $\Pex$ to its two-argument form defined at the beginning of Section~\ref{sec: cc}: \begin{align} \PNex(n,0;N) & = \Pex(n,q^1_{N,N-n}). \label{eq: cc on n EP} \end{align} For \SP, we have $g(0)=0$ so the avalanche must be initiated with a nonzero value of $\ell$. In this case we have \begin{align} \PNex(n,\ell;N) = \Pex(n,q^1_{N,N-n};n-\ell,\ell). \label{eq: cc on n SP} \end{align} Note that $\PNex(n,\ell;N)$ depends on $N$ only through $q^1$. For notational convenience, we now let $\PNex$ stand for whichever expression on the right-hand side of Eqs.~\eqref{eq: cc on n EP} or~\eqref{eq: cc on n SP} is relevant, and we use $u$ where $N-n$ would be the strictly proper form. By combining Eqs.~\eqref{eq: Pnn} and \eqref{eq: Pconsistency}, we get \begin{align} \Pn(n) =\,&\binom{N-\ell}{n-\ell}[g(n/N)]^{n-\ell}[1-g(n/N)]^{u} \PNex. \label{eq: Pn0} \end{align} To make some important features of Eq.~\eqref{eq: Pn0} apparent, we introduce the functions \begin{align} \rho(n) &= \frac{n^n}{e^nn!},\\ \tau(n, k) &= \frac{n!}{n^k(n-k)!}, \intertext{and} G(x) &= \biggl(\frac{g(x)}x\biggr)^{\!x} \biggl(\frac{1-g(x)}{1-x}\biggr)^{\!1-x}. \end{align} Then Eq.~\eqref{eq: Pn0} can be rewritten as \begin{align} \Pn(n) =\,&\frac{\rho(n)\rho(u)}{\rho(N)} \frac{\tau(n, \ell)}{\tau(N, \ell)} \biggl(\frac{n/N}{g(n/N)}\biggr)^{\!\ell} \nonumber\\ &\times [G(n/N)]^N \PNex. \label{eq: Pn1} \end{align} Stirling's formula, \begin{align} n! &\approx \sqrt{2\pi n}\,\frac{n^n}{e^n}, \label{eq: Stirling} \end{align} yields \begin{align} \rho(n)&\approx\frac1{\sqrt{2\pi n}} \intertext{and} \frac{\rho(n)\rho(u)}{\rho(N)}&\approx\frac1{\sqrt{2\pi nu/N}}. 
\end{align} The factor $\tau(n,\ell)/\tau(N,\ell)$ is approximately 1 for large $n$ and satisfies \begin{align} \frac{\tau(n, \ell)}{\tau(N, \ell)}&\le1 \end{align} for $n\leq N$, with equality if $n=N$ or $\ell=1$ or $\ell=0$. The only factors in Eq.~\eqref{eq: Pn1} that can show exponential dependence on $N$ are the $G$ and $\PNex$ factors. Because $\PNex$ is a probability (and therefore cannot exceed unity) and $G(x)\leq1$ with equality if and only if $g(x)=x$, $\Pn(n)$ vanishes exponentially as $N$ goes to infinity for any fixed $n/N$ such that $g(n/N)\ne n/N$. This is consistent with the above result that the probability of an avalanche stopping with $x_1 \neq g(x_1)$ is vanishingly small. [See Eqs.~\eqref{eq: x1 fp} and~\eqref{eq: x0 gt}.] For \EP, we are interested in the number of undamaged nodes, $u$. We let \begin{align} \Pu(u) &= \Pn(N-u) \intertext{and} Q(x) &= G(1-x) \nonumber \\ &= \biggl(\frac{1-q(x)}{1-x}\biggr)^{\!1-x} \biggl(\frac{q(x)}x\biggr)^{\!x}. \end{align} For \EP, $g(0)>0$ and a fixed $\ell$ is irrelevant when $N\rightarrow\infty$. Hence, we let $\ell=0$ and rewrite Eq.~\eqref{eq: Pn1} as \begin{align} \Pu(u) =\frac{\rho(n)\rho(u)}{\rho(N)} [Q(u/N)]^N \PNex. \label{eq: Pu0} \end{align} In some respects, $\Pu$ is similar to $\Pn$: the factor $[\rho(u)\rho(n)]/\rho(N)$ is fully symmetric with respect to interchange of $n$ and $u$; and the role of $G(n/N)$ in Eq.~\eqref{eq: Pn1} is identical to the role of $Q(u/N)$ in Eq.~\eqref{eq: Pu0}. However, the behavior of $\PNex$ for $n\ll N$ given by Eq.~\eqref{eq: cc on n SP} is significantly different from the behavior of $\PNex$ for $u\ll N$ given by Eq.~\eqref{eq: cc on n EP}. For \EP, we consider damage control functions $q(x)$ that can be expanded according to Eq.~\eqref{eq: q-Taylor}. For supercritical \EP, with $\alpha_1<1$, $\Pu(u)$ decays exponentially with $u$.
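Before turning to the asymptotics, the factorized form of Eq.~\eqref{eq: Pn1} can be checked against Eq.~\eqref{eq: Pn0} by direct evaluation. The sketch below drops the common factor $\PNex$ from both sides and uses an arbitrary illustrative $g(x)$ (our own choice, not from the text):

```python
import math

def g(x):
    # Arbitrary illustrative damage propagation function (not from the text).
    return 0.125 * (1 - x) ** 2 + x * (1 - x) + x ** 2

def rho(n):
    # rho(n) = n^n / (e^n n!)
    return n ** n / (math.e ** n * math.factorial(n))

def tau(n, k):
    # tau(n, k) = n! / (n^k (n-k)!)
    return math.factorial(n) / (n ** k * math.factorial(n - k))

def G(x):
    # G(x) = (g(x)/x)^x ((1-g(x))/(1-x))^(1-x)
    return (g(x) / x) ** x * ((1 - g(x)) / (1 - x)) ** (1 - x)

N, ell = 60, 2
for n in range(ell, N):          # n < N so that G(n/N) is well defined
    u = N - n
    pn0 = math.comb(N - ell, n - ell) * g(n / N) ** (n - ell) \
          * (1 - g(n / N)) ** u
    pn1 = (rho(n) * rho(u) / rho(N)) * (tau(n, ell) / tau(N, ell)) \
          * ((n / N) / g(n / N)) ** ell * G(n / N) ** N
    assert math.isclose(pn0, pn1, rel_tol=1e-9)
```

The agreement is exact up to rounding: the factorization only regroups the binomial coefficient and the powers of $g$ and $1-g$.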
In Appendix \ref{app: super EP}, we demonstrate that \begin{align} \lim_{N\rightarrow\infty}\Pu(u) &= (1-\alpha_1)\frac{(u\alpha_1)^{u}}{u!}e^{-u\alpha_1} \label{eq: pu(u) exact asympt} \\ &\approx\frac{1-\alpha_1}{\sqrt{2\pi}}e^{u(1-\alpha_1)} \alpha_1^uu^{-1/2}. \label{eq: pu(u) asympt} \end{align} For critical \EP, Eq.~\eqref{eq: PexNq} gives \begin{align} \PNex(n,0;N) & = \Pex(n,q^1_{N,N-n}) \nonumber \\ & \approx \tilde{n}^{-1/3}\pt[0,\tilde{n}^{1/3}(1-\alpha_1^1)], \label{eq: cc on n crit EP} \end{align} where $\tilde{n} \equiv \alpha_1^1 n/\alpha_2^1$ and $\alpha_1^1$ and $\alpha_2^1$ are the first two coefficients of the power series expansion of $q^1(x)$ about $x=0$. With $\alpha_1=1$ and $\alpha_2>0$, a Taylor expansion of $\log Q(x)$ about $x=0$ gives \begin{align} Q(x) &\approx \exp\biggl(-\frac{\alpha_2^2x^3}2\biggr) \end{align} for small $x$. It follows that the typical number of undamaged nodes, $u$, scales like $N^{2/3}$. In Appendix \ref{app: crit EP}, we derive the asymptotic distribution of $u$ for large $N$. With $\ut=\Nt^{-2/3}u = (\alpha_2/N)^{2/3}u$, we find that the large $N$ limit of the probability density for $\ut$ is \begin{align} P(\ut) &= \frac{\exp(-\frac12\ut^3)} {\sqrt{2\pi\ut}}\,\pt(0,2\ut). \label{eq: p(ut) asympt} \end{align} Eq.~\eqref{eq: Pn1} is suitable for understanding \SP\ as well as \EP. For \SP, $g(0)=0$ and $\ell>0$. In the large $N$ limit, \SP\ is a branching process with a Poisson distribution in the number of branches from each node. The average number of branches per node is given by the derivative of $g(x)$ at $x=0$, because $\lim_{x\rightarrow0} g(x)/x$ is the average number of nodes that will be damaged in one update as a direct consequence of damaging a single node in the large network limit. In Appendix \ref{app: SP}, we re-derive known results on \SP\ in the framework of our formalism.
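The limit distribution in Eq.~\eqref{eq: pu(u) exact asympt} is properly normalized, since $\sum_{u\ge0}(u\alpha_1)^u e^{-u\alpha_1}/u! = 1/(1-\alpha_1)$ for $0\le\alpha_1<1$ (a standard identity for the tree function). A numerical check with an illustrative value of $\alpha_1$, which also confirms the Stirling-formula step leading to Eq.~\eqref{eq: pu(u) asympt}:

```python
import math

alpha1 = 0.6          # illustrative supercritical value, alpha_1 < 1

def pu(u):
    # Eq. (pu(u) exact asympt), evaluated in log space to avoid overflow.
    if u == 0:
        return 1.0 - alpha1
    log_p = u * math.log(u * alpha1) - u * alpha1 - math.lgamma(u + 1)
    return (1.0 - alpha1) * math.exp(log_p)

# Normalization: sum_{u>=0} (u a)^u e^{-u a} / u! = 1/(1 - a) for 0 <= a < 1.
total = sum(pu(u) for u in range(400))
assert abs(total - 1.0) < 1e-10

# Stirling's formula reproduces the asymptotic form of Eq. (pu(u) asympt).
u = 120
stirling = (1.0 - alpha1) / math.sqrt(2 * math.pi) \
           * math.exp(u * (1.0 - alpha1)) * alpha1 ** u / math.sqrt(u)
assert math.isclose(pu(u), stirling, rel_tol=1e-2)
```

The terms decay like $(\alpha_1 e^{1-\alpha_1})^u/\sqrt{u}$, so truncating the sum at $u=400$ is more than sufficient for the stated tolerance.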
\section{An application: Frozen nodes in random Boolean networks} \label{sec: application} An important application of our results on \EP\ in random networks is the determination of the size distribution for the set of unfrozen nodes in 2-input random Boolean networks, a subject of interest since the introduction of the Kauffman model in 1969 \cite{Kauffman:69}. The Kauffman model was originally proposed as a vehicle for studying aspects of the complex dynamics of transcriptional networks within cells. In a Boolean network, there are usually some nodes that will reach a fixed final state after a transient time regardless of the initial state of the network. For most random Boolean networks, nearly all of these nodes can be found by a procedure introduced in Ref.~\cite{Flyvbjerg:88b} and applied numerically in Ref.~\cite{Bilke:01}. We refer to nodes identified by this procedure as {\it frozen}. The nodes that cannot be identified as frozen are labeled {\it unfrozen}. Their output may switch on and off for all time or simply have different values on different attractors of the network dynamics. A frozen node will always reach its fixed final state regardless of the initial state of the network. The converse is not true: an unfrozen node can have a fixed final state that is independent of the initial state due to correlations that are not accounted for in the identification procedure for frozen nodes. In a typical random Boolean network, the number of nodes that are mislabeled in this sense is negligible \cite{Bilke:01}. For the purposes of investigating dynamics of the network at long times, one is interested in the size of the unfrozen set. The procedure for identification of the frozen nodes starts by marking all nodes with a constant output function as frozen. There may then be nodes that, as a consequence of receiving one or many inputs from frozen nodes, will also produce a constant output. 
These nodes are also marked as frozen, and the process continues iteratively until there are no further nodes that can be identified as frozen. We note here that the process of finding frozen nodes in a \RBN\ can often be framed as a \UBA, where the property of being frozen corresponds to damage. That is, the process of identifying frozen nodes involves continually checking all nodes to see whether their inputs are frozen in such a way that they themselves become frozen, a process which satisfies the conditions for \UBA. The damage propagation and damage control functions for the \UBA\ are determined by the relative weights of different Boolean logic functions in the \RBN. By changing these weights, one can observe a transition in the dynamical behavior of \RBN s corresponding precisely to the \EP\ transition in the \UBA. We consider here \RBN s with exactly two inputs at each node, with some explicit choices for the weights of the Boolean logic functions that permit observation of both sides of the transition. The only restriction required for mapping the freezing of nodes in a \RBN\ to a \UBA\ system is that the logic functions in the \RBN\ be symmetric with respect to the probability of freezing being due to {\sc true} and {\sc false} inputs. That is, the probability that a node with a certain set of frozen inputs will itself be frozen should not depend on the values of the frozen inputs. This condition is satisfied for the most commonly investigated classes of rule distributions, where there is a given probability $p$ for obtaining a 1 at each entry in the truth table for each rule. If the above mentioned symmetry condition were violated, it would be necessary to distinguish nodes frozen {\sc true} from nodes frozen {\sc false}, which would mean that the state of a node could not be specified by a binary variable. For the rest of this section we consider only \RBN s that respect the symmetry condition. 
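The iterative identification procedure just described is straightforward to implement. The following sketch is our own illustrative implementation on a toy 2-input network; the node indices, truth-table representation, and all names are ours, not from the text:

```python
def find_frozen(rules, inputs):
    """Iteratively mark nodes whose output is forced to a constant.

    rules[i]  -- truth table of node i: dict mapping an input pair to 0/1
    inputs[i] -- pair of node indices feeding node i
    Returns the set of frozen node indices.
    """
    frozen = {}                      # node index -> forced output value
    changed = True
    while changed:                   # iterate to a fixed point
        changed = False
        for i, (a, b) in enumerate(inputs):
            if i in frozen:
                continue
            # If every completion of the frozen inputs gives one output,
            # node i produces a constant output and is marked frozen too.
            outs = {rules[i][(x, y)]
                    for x in ([frozen[a]] if a in frozen else [0, 1])
                    for y in ([frozen[b]] if b in frozen else [0, 1])}
            if len(outs) == 1:
                frozen[i] = outs.pop()
                changed = True
    return set(frozen)

table = lambda f: {(x, y): f(x, y) for x in (0, 1) for y in (0, 1)}
rules = [table(lambda x, y: 1),      # node 0: constant rule, frozen seed
         table(lambda x, y: x),      # node 1: canalizing, copies node 0
         table(lambda x, y: x ^ y),  # node 2: XOR of nodes 1 and 3
         table(lambda x, y: x ^ y)]  # node 3: XOR of nodes 2 and 1
inputs = [(1, 2), (0, 2), (1, 3), (2, 1)]
print(sorted(find_frozen(rules, inputs)))   # prints [0, 1]
```

Here the constant rule seeds the avalanche and freezes node 1 in turn, while the two XOR nodes, each retaining one unfrozen input, stay unfrozen, exactly as the \UBA\ rules prescribe.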
It is useful to distinguish different types of Boolean logic functions. A {\it canalizing} rule is one for which the output is independent of one of the inputs for at least one value of the other input. Among the 16 possible 2-input Boolean rules, 2 rules are constant (``always on'' or ``always off''), 12 rules are non-constant and canalizing, and 2 rules are non-canalizing ({\sc xor} and not-{\sc xor}). The original version of the Kauffman model assumes that all 2-input Boolean rules are equally likely, which turns out to give critical dynamics. Let $p_i$ denote the probability that a randomly selected node's output is frozen if exactly $i$ of its inputs are frozen. The damage propagation function $g(x)$ can be expressed directly in terms of $p_i$: \begin{align} g(x) = p_0(1-x)^2 + 2p_1x(1-x) + p_2x^2. \end{align} Nodes with constant rules are guaranteed to be frozen. (These nodes will initiate the \UBA.) Nodes with non-constant canalizing rules are unfrozen if both inputs are unfrozen, and they are frozen with probability $1/2$ if exactly one randomly selected input is frozen. Nodes with rules that are non-canalizing become frozen if and only if both of their inputs are frozen. Finally, if both inputs are frozen, the output of any 2-input rule is frozen. Thus for the 2-input Kauffman model, $p_0=1/8$, $p_1=1/2$, and $p_2=1$. \begin{figure}[bt] \begin{center} \includegraphics{gq.eps} \includegraphics{GQ.eps} \end{center} \caption{\label{fig: gq} The functions (a) $g(x)\equiv 1-q(1-x)$ and (b) $G(x)\equiv Q(1-x)$ for three 2-input rule distributions. All three distributions have $p_0=1/8$ and $p_2=1$, whereas $p_1$ takes the values $7/16$, $1/2$, and $9/16$. The case that has $p_1=1/2$ (marked with $=$) is critical with respect to \EP\ and corresponds to the propagation of frozen node values in the original Kauffman model. The other cases $p_1=7/16$ ($<$) and $p_1=9/16$ ($>$) are subcritical and supercritical, respectively.
The dashed line in (a) shows the identity function $x\mapsto x$.} \end{figure} If the two non-canalizing rules in the 2-input Kauffman model are replaced by canalizing rules, $p_1$ becomes $9/16$, whereas $p_0$ and $p_2$ are unchanged. Such networks exhibit supercritical \EP. To get a subcritical network, we replace two of the canalizing rules with non-canalizing rules and get $p_1=7/16$. (Note that some care must be taken to maintain the {\sc true}--{\sc false} symmetry mentioned above.) The functions $g(x)$ and $G(x)$ for critical, supercritical, and subcritical rule distributions are shown in Fig.~\ref{fig: gq}. \begin{figure}[bt] \begin{center} \includegraphics{K2_sub.eps} \\\vspace*{-42pt} \includegraphics{K2_cri.eps} \\\vspace*{-42pt} \includegraphics{K2_sup.eps} \end{center} \caption{\label{fig: K2} The probability density distribution $N\Pn(n)$ with respect to the fraction of nodes ($n/N$) involved in an avalanche. The rule distributions have the same $g(x)$ as displayed in Fig.~\ref{fig: gq}, showing rule distributions that are ($<$) subcritical, ($=$) critical, and ($>$) supercritical with respect to \EP. The displayed network sizes, $N$, are 10 (large dots), 100 (small dots), $10^3$ (bold line), $10^4$, $10^5$, and $10^6$ (gradually thinner lines).} \end{figure} \begin{figure}[bt] \begin{center} \includegraphics{K2_num_N1000.eps} \end{center} \caption{\label{fig: K2_num} A numerical comparison between analytic calculations (black lines) and explicit reductions of random Boolean networks (gray lines). For both cases, the probability density distribution $N\Pn(n)$ is displayed as a function of $n/N$. The rule distributions have the same $g(x)$ as displayed in Figs.\ \ref{fig: gq} and \ref{fig: K2}, showing rule distributions that are ($<$) subcritical, ($=$) critical, and ($>$) supercritical with respect to \EP.
The \UBA\ rule distributions are realized in random Boolean networks by rule distributions with the following respective selection probabilities: $1/8$, $1/4$, $5/8-p_{\trm r}$, and $p_{\trm r}$ for a constant rule, a rule that depends on exactly 1 input, a canalizing rule that depends on 2 inputs, and a 2-input reversible rule. The values of $p_{\trm r}$ are ($<$) 0, ($=$) 1/8, and ($>$) 1/4. For each rule distribution, $10^6$ networks were tested.} \end{figure} As can be seen from Fig.~\ref{fig: gq}, a small change in $g(x)$ may lead to a qualitative change in $G(x)$ for rule distributions close to criticality. Such changes have a strong impact on the avalanche size distribution for large $N$. Figure~\ref{fig: K2} shows the probability density distribution of the fraction, $n/N$, of nodes that are affected by avalanches in networks with the above-mentioned rule distributions. The probability distributions are obtained by recursive calculation of the distribution of $n_\msa$ as $n_1$ increases. The recurrence relations are obtained from Eqs.~\eqref{eq: n-update first}--\eqref{eq: n-update last} and the result is exact up to truncation errors. To verify these calculations, we generated $10^6$ random Boolean networks of size $N=10^3$ for each of the above-described rule distributions. The distributions in the numbers of frozen nodes in those networks are displayed in Fig.~\ref{fig: K2_num}. \begin{figure}[bt] \begin{center} \includegraphics{EP_K2_cri.eps} \includegraphics{EP_K2_sup.eps} \end{center} \caption{\label{fig: EP_K2} Rescaled versions of the probability distributions displayed in Fig.~\ref{fig: K2}: (a) the probability density for the critical case, with respect to the rescaled number of undamaged nodes, $\ut\equiv(\alpha_2/N)^{2/3}u=u/(4N^{2/3})$; (b) the probability distribution $\Pu(u)$ for the supercritical case without rescaling.
The displayed network sizes, $N$, are 10 (large dots), 100 (small dots), $10^3$ (bold line), $10^4$, $10^5$, and $10^6$ (gradually thinner lines). The analytically derived asymptotes are shown as dashed lines. In (b), the distributions for networks of sizes $10^4$, $10^5$, and $10^6$ are not plotted because they are indistinguishable from the asymptotic curve.} \end{figure} In Fig.~\ref{fig: EP_K2}, the probability distributions of the number of undamaged nodes, $u$, are shown in comparison to the asymptotic results in Eqs.~\eqref{eq: pu(u) exact asympt} and~\eqref{eq: p(ut) asympt}. Our analytic results are strengthened by the data in Fig.~\ref{fig: EP_K2}, as the distributions for finite networks approach the predicted asymptotes. Finite size effects are clearly visible in the critical case even for network sizes as large as $N=10^6$, whereas convergence in the supercritical case is achieved for $N \gtrsim 10^3$. Kaufman, Mihaljev, and Drossel\ studied distributions of unfrozen nodes in 2-input critical \RBN s using a method similar to ours in that differential equations for populations of different types of nodes are developed from a discrete process in which frozen nodes are identified by the propagation of information from their inputs~\cite{Kaufman:05}. Their result for the numbers of unfrozen nodes in 2-input critical \RBN s corresponds to a particular application of Eq.~\eqref{eq: p(ut) asympt}. In Ref.~\cite{Kaufman:05}, the function corresponding to $P(\ut)$ [which they call $G(y)$] is determined by running a stochastic process, and a numerically motivated approximation is proposed: \begin{align} P(\ut) &\approx 0.25\exp(-\tfrac12\ut^3) \frac{1-0.5\sqrt{\ut}+3\ut}{\sqrt{\ut}}. \label{eq: p(ut) Kaufman approx} \end{align} The scaling law $P(\ut)\propto\ut^{-1/2}$ for small $\ut$ is also derived analytically in Ref.~\cite{Kaufman:05}. Eqs.~\eqref{eq: diffuse}--\eqref{eq: diffuse bound 1} imply $\pt(0,x)\propto x$ for large positive $x$.
This means that \begin{align} P(\ut) \approx \sqrt{\frac{2\ut}{\pi}}\exp(-\tfrac12\ut^3) \end{align} for large $\ut$. Thus the large $\ut$ limit of Eq.~\eqref{eq: p(ut) Kaufman approx} differs from the exact result by a factor of $(3/4)\sqrt{\pi/2}$, an underestimate of about 6\%. We are able to improve further on Eq.~\eqref{eq: p(ut) Kaufman approx} by numerical investigations of $\pt(0,x)$ calculated by the Crank--Nicolson method (see, e.g., \cite{numrecip}) using Eqs.~\eqref{eq: diffuse}--\eqref{eq: diffuse bound 1}. We find that the high-precision numerical results are fit by the function \begin{align} P(\ut) \approx \sqrt{\frac{2\ut}{\pi}}\exp(-\tfrac12\ut^3) \biggl(\!1\!+\frac1{3.248\ut+4.27\ut^2+4.76\ut^3}\!\biggr) \end{align} with a relative error that is at most 0.25\% and vanishes for large $\ut$. By explicitly keeping track of the populations of nodes with each of the different types of Boolean logic functions as links from frozen nodes are deleted, Kaufman, Mihaljev, and Drossel~\cite{Kaufman:05} also derive results for other quantities, such as the number of links in the sub-network of unfrozen nodes. The \EP\ formalism described above can be applied once again to investigate these additional quantities in a broader class of networks. Detailed results for \RBN s with various degree distributions will be presented elsewhere. \section{Summary and discussion} Unordered binary avalanches can in some cases lead to damage on every node or almost every node of a network, a phenomenon we have dubbed {\it exhaustive percolation}. We have studied a broad class of random networks that can exhibit \EP. We have shown how to calculate the probability $\Pex(N)$ that complete coverage occurs (i.e., that all nodes are damaged) and also derived expressions for the probability distribution $P(u)$ of the number of undamaged nodes, $u$, in the large $N$ limit when \EP\ does occur.
A logical curiosity in our approach is the fact that the calculation of $P(u)$ involves application of the $\Pex$ result to subnetworks containing candidate sets of damaged nodes. Our primary results flow from the realization that all of the relevant information about a \UBA\ defined on a random network is contained in the damage propagation function $g(x)$ or, equivalently, the damage control function $q(x)$. We derive scaling law exponents and exact results for the distribution of $u$ that are valid for a broad class of random networks and Boolean rule distributions in the \EP\ regime and for networks at the \EP\ critical point. This class includes the \UBA s that determine the set of frozen nodes in \RBN s with more than two inputs per node; our results therefore constitute a generalization of the results on the set of unfrozen nodes in \RBN s presented in Ref.~\cite{Kaufman:05}. Interestingly, the asymptotic behavior found in Ref.~\cite{Kaufman:05} for the distribution of $u$ at the critical point is shown to be valid for a broad class of network problems. For networks outside the above-mentioned class but within the framework of \UBA, we find connections to previous work on Galton--Watson processes \cite{Otter:49} and random maps \cite{Harris:60}. The central result of our investigations is displayed in Eqs.~\eqref{eq: pu(u) asympt} and~\eqref{eq: p(ut) asympt}, which provide explicit formulas for the probability of finding $u$ undamaged nodes after an avalanche runs to completion. The out-degree distributions of the networks described by our formulas are all Poissonian, but the in-degree distributions may have different forms, including power laws, so long as the probability of having in-degree $K$ decays faster than $K^{-3}$. The exact nature of the \EP\ transition on networks with broader in-degree distributions is an interesting issue for future research.
Further work is also needed to handle correlations between input links to different nodes, a situation that arises, for example, in random regular graphs or networks with scale free out-degree distributions. Our original motivation for studying \EP\ arose from attempts to understand the dynamical behavior of \RBN s. We have described one nontrivial example of how the \EP\ formalism is relevant: the calculation of the probability distribution for the number of unfrozen nodes in any \RBN\ with a rule distribution that leads to a given damage control function $q$ for the associated \UBA. The problem of determining how many of the unfrozen nodes are actually relevant for determining the attractor structure of the \RBN\ can also be framed as an \EP\ problem, which will be addressed in a separate publication. \section*{Acknowledgment} This work was supported by the National Science Foundation through Grant No.~PHY-0417372.
\section{Introduction} \label{sec:1} \IEEEPARstart{V}{isual} tracking is one of the fundamental tasks in computer vision, with a plethora of applications such as video surveillance, robotics, and human-computer interaction. The goal of visual tracking is to estimate the state of the target (location and size) in subsequent video frames with only the initial target position given. Most existing trackers \cite{d42,d43,d44} tackle this issue by exploiting machine learning techniques to train a robust classifier or filter based upon features extracted from the target and its surrounding background. With the use of powerful classifiers, such tracking methods have achieved competitive results in both accuracy and robustness. However, considering the time-critical nature of tracking applications, the efficiency of the aforementioned trackers is limited by the number of training samples. \begin{figure}[!t] \vspace{0.07cm} \centering{ \includegraphics[width=9cm,height=7.cm]{fig1.eps}\hspace{-0.2cm} } \\ \vspace{-0.3cm} \includegraphics[width=5.5cm,height=0.7cm]{anno.eps}\\ \vspace{-0.4cm} \caption{ Comparisons of our SAT tracker with the state-of-the-art trackers CSR and SAMF-CA in challenging situations of background clutter, deformation, and occlusion on the sequences Human3, Football, and Motorrolling, respectively. Best viewed in color.} \label{fig:1} \end{figure} Recently, correlation filter (CF) based trackers \cite{d3} have sparked a lot of interest due to their high accuracy while running at high speed. Instead of randomly extracting positive and negative samples in a small search window, CF trackers approximate a dense sampling strategy by circularly shifting the training samples. Therefore, instead of solving a computationally costly matrix inversion, CF trackers can handle the problem with element-wise operations in the Fourier domain, taking advantage of the properties of circulant matrices.
Despite their simplicity and the success they have achieved in recent years, CF trackers still suffer from model drift caused by the challenging factors in tracking scenarios. Figure 1 illustrates the tracking results of two state-of-the-art CF trackers (CSR \cite{d16} and SAMF-CA \cite{d24}) on several challenging sequences in \cite{d2}. One can see that the CSR tracker fails to locate the target stably when occlusion occurs or a similar distractor appears. On the other hand, the SAMF-CA tracker shows inferior performance when the target undergoes rotation, as shown in the third row. In light of this phenomenon, we notice that although a variety of attributes can lead to model drift and tracking failure, as listed in OTB13 \cite{d1}, they can be categorized into external interference and internal target appearance variation. Under external interference, the target appearance remains steady, but the surrounding background contains distractors with color or texture similar to the target. In extreme cases (i.e., when occlusion or background clutter occurs), the target appearance may be contaminated by these distractors. On the other hand, in scenarios with challenging deformation, rotation, and scale variation, the background texture remains stable while the target appearance model itself changes rapidly. Thus, we argue that a robust tracker should be able to handle both the external interference and the internal issues simultaneously. However, most existing CF trackers fail to address both aspects. To tackle these limitations, we formulate a novel CF-based optimization problem and develop a state-aware anti-drift tracker (SAT) in this paper, which jointly models discriminative and reliable information in the filter learning stage. Specifically, surrounding contextual patches are incorporated into the tracking framework in order to equip the tracker with discriminative ability against external distractions.
Furthermore, a color-based reliable mask is learned for each frame to segment the foreground and to encourage the filter to focus on the reliable region when internal interference occurs. The optimal filter is obtained as the element-wise product of the discriminative filter and the reliable mask. We show that the proposed optimization problem can be solved element-wise in the Fourier domain using the Alternating Direction Method of Multipliers (ADMM), which is computationally efficient. Moreover, by introducing kurtosis as a tracking confidence indicator, a simple yet efficient template updating strategy is incorporated to avoid template contamination as well as to maintain the target appearance. We test the proposed tracker on the OTB-2015 benchmark \cite{d2}, which contains 100 sequences with both external and internal challenging factors, to validate our approach. Experimental results show that the proposed SAT tracker performs favorably against state-of-the-art methods. Our contributions can be summarized as follows: $\bigstar$ In this paper, we jointly model discrimination and reliability information within the CF tracking framework. The proposed optimization problem can be solved with the ADMM technique in the Fourier domain with limited computational burden. $\bigstar$ We explore the statistical properties of the response map in CF trackers, and a novel high-confidence updating scheme is advocated to avoid template corruption as well as to ensure robust tracking. $\bigstar$ Extensive experiments are carried out on tracking benchmarks and demonstrate that the proposed tracking algorithm performs favorably against existing state-of-the-art methods. The remainder of this paper is organized as follows. Section II presents a short description of the work related to ours. In Section III, the proposed approach is elaborated in detail.
Section IV describes experimental results and related analysis, and we draw our conclusions in Section V. \section{Related Works} Most existing trackers adopt either the generative \cite{d4,d5} or the discriminative \cite{d6,d7} approach. Generative trackers often design an elaborate appearance model to describe a set of target observations in order to search for the best-matching patches for the target. Discriminative trackers, in contrast, formulate visual tracking as a binary classification problem and search for the target location that is most distinctive from the background. However, the tracking efficiency is limited by the number of training samples. To address the above issues, significant attention has been paid to discriminative correlation filter (DCF) based trackers \cite{d3}, which instead minimize a least-squares loss over all the circular shifts of the positive samples. Although this is an approximation of the actual problem, it enables a dense sampling strategy and further transfers the computationally costly matrix inversion to element-wise operations in the Fourier domain. The use of correlation filters for tracking started with MOSSE \cite{d8}. Using single-channel grayscale features, MOSSE achieved state-of-the-art performance on tracking benchmarks while running at a high speed of more than 600 FPS. Henriques et al. incorporate both non-linear kernels and multi-dimensional features to replace the original grayscale template in \cite{d9}, achieving state-of-the-art performance in VOT 2014. Follow-up works on DCF have been proposed for either performance advancement or conceptual improvements in filter learning. DSST \cite{d10} and SAMF \cite{d11} add an extra scale filter to adapt to scale variations. MUSTer \cite{d12}, ROT \cite{d13} and LCT \cite{d14} carefully design re-detection schemes for long-term tracking.
Color information has been taken into account in CN \cite{d15}, Staple \cite{d16} and CSR \cite{d17} to achieve robustness to non-rigid deformation. Furthermore, inspired by the recent success of convolutional neural networks (CNNs) in object classification, researchers in the tracking community have devoted themselves to deep trackers, which can take advantage of the robust feature representation of CNNs. \cite{d18} and \cite{d19} employ pre-trained CNN features instead of hand-crafted features, and the final results are obtained by stacking hierarchical responses and hedging trackers, respectively. Danelljan et al. investigate the combination of feature maps with different spatial resolutions for continuous convolution filters in \cite{d20}. With fewer hyper-parameters to tune, the C-COT tracker is less susceptible to over-fitting. Apart from the aforementioned literature, which focuses on combining the DCF framework with feature representation or detection modules, another research direction is dedicated to tackling the inherent limitations of DCF tracking by modifying the conventional CF loss function for filter training. In SRDCF \cite{d21}, a spatial regularization term is added to the basic loss function to alleviate the boundary effect. The BACF tracker \cite{d22} trains the filter with a binary mask, which generates more realistic samples while maintaining computational efficiency. Similar to BACF, Bibi et al. modify the expected target response and significantly decrease the model drift problem in \cite{d23}. Recently, a context-aware tracker \cite{d24} has been proposed to exploit the global context within CF trackers. Compared to conventional CF trackers, the CA tracker is adept at suppressing potential distractions in background regions. However, targets undergo a variety of challenging attributes (both external and internal) during the tracking process.
The trackers thus learned tend to be distracted by salient parts of the feature map arising from the target's own appearance changes. Hence, different from existing methods, we aim to train the filter with discriminative context and reliable target information jointly, which allows adaptation to diverse tracking challenges. Furthermore, we explore a novel tracking confidence monitoring criterion to measure both the accuracy and robustness of the predicted results. With the proposed criterion, our algorithm is able to forecast potential distractions or temporary tracking failures in time, achieving state-aware performance. Accordingly, the proposed SAT tracker adapts its updating strategy, maintaining the stability and purity of the training samples. We test the proposed framework on a general benchmark to validate its effectiveness. \section{Proposed Tracking Algorithm} \label{sec:3} We base our tracking algorithm on three fundamental requirements for online tracking. Firstly, to reduce the risk of drifting under external challenging interference, the tracker should be aware of potential distractors and have the ability to identify them beforehand. Secondly, the object model should reliably represent the target as well as suppress the background sub-region of the bounding box when the target undergoes internal appearance variation. Last but not least, the tracker should have the ability to measure the tracking condition, which can further adjust the updating strategy in a high-confidence manner. Such a criterion can help the tracker recover from model drift while maintaining the purity of the learned filter. Therefore, we propose a CF-based optimization function that jointly addresses these requirements by training the filter with discriminative and reliable information.
Conveniently, the optimization problem can be solved using the ADMM technique and carried out fully in the Fourier domain for speed. Since the proposed SAT tracker uses the CF tracker as its baseline, we first revisit the details of the conventional CF tracker. Furthermore, we also implement the SAT tracker with deep features and a scale adaptation module to validate the strong compatibility of our algorithm. \subsection{Correlation Filtering} \label{ssec:3.1} The goal of discriminative tracking methods is to train a classifier or filter which can be applied to the region of interest in consecutive frames to distinguish the target from the background by learning features from positive and negative samples. The optimal classifier is learned as follows: \vspace{-0.25cm} \begin{equation} {\bf w}_{opt} = \arg\min\limits_{\bf w}\sum^N_{i=1}\Bigl\|\sum^D_{d=1}{\bf w}_{d}*{\bf x}_{i,d}-{\bf y}_i\Bigr\|^2_2+\lambda\|{\bf w}\|_{2}^{2} \end{equation} $N$ denotes the number of training patches, and $d$ stands for the index of the feature channel. ${\bf x}$ denotes the input feature and $\bf y$ is the corresponding regression label, ranging from one to zero. The aforementioned objective function has a global minimum due to its convexity. We can obtain the closed-form solution for the optimal classifier, ${\bf w}_{opt} = ({\bf X}^H{\bf X}+\lambda {\bf I})^{-1}{\bf X}^H{\bf y}$, when we gather the features of all the training samples into a data matrix ${\bf X}$. For simplicity, all derivations are presented for the single-channel feature case. Due to the computational burden of solving the matrix inversion, most previous works randomly picked a limited number of samples from the search region around the target. Such a stochastic sampling strategy brings uncertainty to the tracking performance. To tackle this issue, CF-based trackers allow a dense sampling scheme within the search area at low computational cost.
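For a data matrix ${\bf X}$ built from all cyclic shifts of a single base sample, the closed-form solution above reduces to element-wise operations in the Fourier domain. The following single-channel sketch (our own notation; rows of ${\bf X}$ are circular shifts of ${\bf x}$) verifies the equivalence numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 32, 0.1
x = rng.standard_normal(n)       # one base sample (single feature channel)
y = rng.standard_normal(n)       # regression targets for the n shifts

# Dense sampling: the data matrix stacks every circular shift of x.
X = np.stack([np.roll(x, i) for i in range(n)])

# Direct ridge regression: w = (X^H X + lam I)^{-1} X^H y.
w_direct = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

# Circulant structure: the same solution via element-wise division in
# the Fourier domain, with no matrix inversion.
xf, yf = np.fft.fft(x), np.fft.fft(y)
w_fft = np.real(np.fft.ifft(xf * yf / (xf * np.conj(xf) + lam)))

assert np.allclose(w_direct, w_fft)
```

With this shift convention the numerator carries $\hat{\bf x}$; conventions that build the circulant matrix from shifts in the opposite direction conjugate it instead, which only flips the sign of the detected displacement.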
The key innovation of this technique is to approximate the exhaustive spatial search by efficient element-wise products in the frequency domain, taking advantage of the following property \cite{d26}. Property: We denote the conjugate of the feature vector ${\bf x}$ by ${\bf x}^*$ and its Fourier transform by $\bf{\hat{x}}$. Hence, we can relate the input vector $\bf x$ to the circulant matrix as ${\bf X} = {\bf F}diag({\bf {\hat x}}){\bf F}^H$ and ${\bf X}^H = {\bf F}diag({\bf {\hat x}^{*}}){\bf F}^H$. Therefore, we can derive the optimal classifier efficiently in the Fourier domain: \begin{equation} \bf{\hat{w}} = \frac{\hat{x}^\ast\odot\hat{y}}{\hat{x}^\ast\odot\hat{x}+\lambda} \end{equation} After the filter is learned in the current frame, it is multiplied with the circulant matrix ${\bf Z}$ of the image patch ${\bf z}$ in the following frames. The detection formula is given below: \begin{equation} \vspace{-0.15cm} {\bf S}({\bf w},{\bf Z}) = {\bf Z}{\bf w} \Leftrightarrow {\bf \hat{S}} = {\bf \hat{z}} \odot {\bf \hat{w}} \end{equation} The response map shares the same size as the search region, and the location of the maximum score in ${\bf S}$ indicates the target position in that frame. Afterwards, the filter is updated frame by frame at the new object location with a learning rate $\eta$, so as to maintain a historical appearance representation of the target. \begin{equation} \vspace{-0.3cm} {\bf \hat{w}}^t= (1-\eta){\bf \hat{w}}^{t-1}+\eta {\bf \hat{w}} \end{equation} \subsection{Robust Filter Learning} \label{ssec:3.2} Despite their simplicity, CF trackers still suffer from several inherent drawbacks due to the challenging factors in tracking. On one hand, trackers have very limited information about the tracking condition and the surrounding context, which leads to model drift when external interference such as occlusion and background distraction occurs.
On the other hand, when the target undergoes internal appearance variations, such as shape deformation or in-plane rotation, the target model cannot be precisely approximated by an axis-aligned rectangular bounding box, especially for non-rigid objects. Consequently, the filter inevitably learns from the background, which further contaminates the training samples and leads to model drift. To this end, we argue that a robust tracking framework should consider both external and internal challenging factors simultaneously. Specifically, the tracker should be aware of surrounding potential distractions in advance as well as distinguish the reliable foreground from spurious background within the bounding box. Based on the above description, we advocate a novel CF-based optimization problem that learns from discriminative and reliable information, and develop a state-aware tracking method (SAT) upon it. The proposed optimization problem is composed of a discrimination term and a reliability term, which are elaborated subsequently. For simplicity, the following derivation is presented for a single feature channel, but it extends easily to multiple channels without loss of generality. \textbf{Discrimination Modeling.} Different from conventional CF trackers, which only train the filter and detect the target in a small local neighborhood, we incorporate the surrounding contextual information into the tracking framework during the filter training stage to equip the tracker with the discriminative ability to forecast potential distractions. Specifically, $k$ contextual patches surrounding the estimated target patch are extracted. Subsequently, the feature circulant matrices for the target and the context patches, denoted as ${\bf A}_0 \in \mathbb{R}^{N\times N}$ and ${\bf A}_i \in \mathbb{R}^{N\times N}$, are calculated, respectively.
In order to protect the filter from being contaminated by potential distractions in the ambient background, we manually assign a zero regression label to these contextual patches. The training objective is then reformulated as: \vspace{-0.5cm} \begin{equation} {\bf w} = \arg\min \|{\bf A}_0{\bf w}-{\bf y}\|_{2}^{2} +\lambda_1\|{\bf w}\|_{2}^{2} + \lambda_2\sum_{i=1}^k\|{\bf A}_i{\bf w}\|_{2}^{2} \end{equation} This equation can be further simplified by stacking the circulant feature matrices ${\bf A}_i$ into a new feature matrix ${\bf B}$ with the substitution ${\bf B} = \left [ {\bf A_0}, \sqrt{\lambda_2}{\bf A_1},...,\sqrt{\lambda_2}{\bf A_k }\right ]^T$. Meanwhile, ${\bf Y} = \left [{\bf y} , {\bf 0} ,..., {\bf 0} \right ]^T$ denotes the concatenated regression labels for the target and context patches. With these definitions, the objective function takes the form: \vspace{-0.2cm} \begin{equation} {\bf w} = \arg\min \|{\bf B}{\bf w}-{\bf Y}\|_{2}^{2} +\lambda_1\|{\bf w}\|_{2}^{2} \end{equation} \textbf{Reliability Modeling.} Based upon the above design, the filter is able to resist external interference by learning the surrounding context information ahead of time. However, similar to conventional CF trackers, the aforementioned discriminative tracker is still confined to learning a rigid template. Therefore, it unavoidably suffers from template contamination when internal appearance variations happen, i.e., non-rigid deformation or rotation. We argue that this drawback can be tackled by constructing a reliable feature representation which is insensitive to shape variation, and by further constraining the filter to exclude the background pixels inside the bounding box. To this end, we construct a binary reliability mask ${\bf r}$, with each element in $\{0,1\}$ indicating the category (foreground or background) of each pixel based upon color models.
Since we cannot observe the object pixels directly in the search region, we model the posterior probability of each pixel belonging to the target. Here, we denote by ${\bf x}\in\mathbb{R}^2$ a pixel location in the search region and by $o$ the object present in the scene. The posterior probability $p(o|{\bf x})$ is proportional to the product of the likelihood $p({\bf x}|o)$ and the prior $p(o)$. Since we assume for simplicity that the target presence probability $p(o)$ obeys a uniform distribution, the posterior probability depends mainly on the likelihood function $p({\bf x}|o)$. In addition, we model the posterior probability with a color histogram ${\bf H}$ due to its insensitivity to shape variation and rotation. With the above discussion, we can derive the confidence map for the posterior probability as follows: \vspace{-0.55cm} \begin{eqnarray} p(o|{\bf x}) &\propto& p({\bf x}|o) p(o)\nonumber\\ &=& \sum\nolimits_{z\in{(f,b)}} p({\bf x},{\bf H}^z\mid o) p(o) \nonumber \\ &=& \sum\nolimits_{z\in{(f,b)}} p({\bf x}\mid {\bf H}^z,o) p({\bf H}^z,o) \end{eqnarray} Here, ${\bf H}^f$ and ${\bf H}^b$ denote the color histograms of the foreground and background, which are calculated from the target patch and the $k$ surrounding contextual patches, respectively, using the target location $P_{t-1}$ estimated at the last frame. The posterior probability can then be obtained via the histogram back-projection technique proposed in \cite{d46}. Afterwards, we obtain a basic binary segmentation mask ${\bf r}$ by applying an adaptive threshold as in \cite{d46}; for more details, we refer to \cite{d46}. \begin{figure}[!t] \centering{ \includegraphics[width=9cm,height=7.4cm]{fig2.eps}\hspace{-0.2cm} } \\ \vspace{-0.3cm} \caption{Three examples of the color informativeness test.
The first column shows the current search region, while the back-projected posterior probability map and the binarized segmentation mask are shown in the middle and right columns, respectively. The segmentation result on the Bolt sequence (first row) passes the test, while the other two segmentation masks fail it since too many or too few pixels are assigned to the target.} \label{fig:2} \end{figure} Since the segmentation mask ${\bf r}$ is obtained from a standard color histogram, a potential concern is how to avoid poor classification when the object color is similar to the background or when illumination variation occurs. Therefore, we conduct a color informativeness test after obtaining ${\bf r}$. Specifically, we calculate the deviation between the number of pixels assigned as foreground and the target size at the last frame. If the deviation lies in an acceptable range, parameterized by the lower bound $\tau_l$ and upper bound $\tau_u$, we assume that the current color-based segmentation is valid. Accordingly, we update the color model with a learning rate $\eta_h$. Otherwise, too many or too few pixels are labeled as target, indicating a potential drastic segmentation failure. In this case, we discard the segmentation result for the current frame by setting the mask ${\bf r}$ to an all-one matrix and stop updating the foreground and background histograms. \textbf{Optimization.} Having constructed the reliability mask ${\bf r}$ to tackle the internal interference attributes, we can combine the reliable information with the discriminative information in joint filter learning. Specifically, we multiply the trained discriminative filter with the aforementioned reliability mask, ${\bf w}_r = {\bf w} \odot {\bf r}$, to encourage the filter to focus on the reliable region and ignore the mixed background area.
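The mask construction and informativeness test can be sketched as follows (our own illustration; the color-bin layout, the fixed 0.5 posterior threshold, and the function names are assumptions, while the bounds $\tau_l$ and $\tau_u$ follow the text above):

```python
import numpy as np

def reliability_mask(patch_bins, hist_fg, hist_bg, prior_fg=0.5):
    # Back-project the foreground/background histograms onto the patch:
    # each pixel carries a color-bin index, and its foreground posterior
    # is p_fg / (p_fg + p_bg) under an assumed fixed prior.
    p_fg = hist_fg[patch_bins] * prior_fg
    p_bg = hist_bg[patch_bins] * (1.0 - prior_fg)
    post = p_fg / np.maximum(p_fg + p_bg, 1e-12)
    return (post > 0.5).astype(np.uint8)

def informativeness_test(mask, prev_target_area, tau_l=0.3, tau_u=1.5):
    # Accept the segmentation only if the foreground pixel count stays
    # within [tau_l, tau_u] times the previous target area; otherwise the
    # caller falls back to an all-one mask and freezes the histograms.
    ratio = mask.sum() / float(prev_target_area)
    return bool(tau_l <= ratio <= tau_u)
```

A failed test leaves ${\bf w}_r = {\bf w}$ unchanged, so the tracker degrades gracefully to the purely discriminative filter.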
Based upon the above analysis, we construct a novel optimization function which jointly models the discriminative and reliable information of the target and surrounding context patches, resulting in the following Augmented Lagrangian objective function $L({\hat{\bf w}}_{c},{\bf w},{\hat{\bf I}},\rho)$: \begin{eqnarray} L({\hat{\bf w}}_{c},{\bf w},{\hat{\bf I}},\rho) = \|{\bf B{\hat {\bf w}_c}-\hat{{\bf Y}}}\|_{2}^{2} + \lambda_{1}\|\hat{\bf w}_{r}\|_2^2 \nonumber\\ + \hat{\bf I}^T(\overline{\hat{\bf w}_c}-\overline{\hat{\bf w}_r}) + \rho\|(\hat{\bf w}_c-\hat{\bf w}_r) \|_2^2 \end{eqnarray} Here, we introduce a dual variable ${\bf w}_c$ with the constraint ${\bf w}_c-{\bf w}_r \equiv 0$. ${\bf I}$ denotes the complex Lagrangian multiplier and $\rho$ is a positive penalty parameter. Fortunately, the above Augmented Lagrangian function can be solved with the ADMM algorithm \cite{d25} via a series of iterations: \vspace{-0.3cm} \begin{equation} \left\{ \begin{array}{lr} \hat{\bf w}_{c}^{i+1} = \arg\min\limits_{{\bf w}_c} L({\hat{\bf w}}_{c}^{i},{\bf w}^{i},\hat{\bf I}^{i},\rho^{i}) \nonumber \\ {\bf w}^{i+1} =\arg\min\limits_{{\bf w}} L({\hat{\bf w}}_{c}^{i+1},{\bf w}^{i},{\hat{\bf I}^{i}},\rho^{i}) \\ \hat{\bf I}^{i+1} = \hat{\bf I}^{i} + \rho^{i}(\hat{\bf w}_{c}^{i+1} - \hat{\bf w}_{r}^{i+1} ) \nonumber \\ \rho^{i+1} = \min(\rho_{max},\beta\rho^{i}) \nonumber \end{array} \right. \end{equation} It should be noted that convergence of the above Augmented Lagrangian scheme is guaranteed if the penalty parameter $\rho^i$ is non-decreasing and $\sum_{i=1}^{+\infty}\rho^i=+\infty$, following the theoretical derivation in \cite{d27}. The stopping criterion for the objective function depends on the residual of the filter over the previous iterations: once the residual term $\hat{\bf w}_{c}^{i+1}-\hat{\bf w}_{r}^{i+1}$ is small enough, the optimization terminates.
After analyzing the experimental results, we find that the residual error drops significantly within the first few iterations. Therefore, we set the maximum number of iterations to five for all video sequences. Based upon the discussion above, the closed-form solutions for the variables $\hat{\bf w}_{c} $ and $\hat{\bf w}$ are: \begin{eqnarray} \vspace{0.15cm} \hat{\bf w}_{c} &=& ({\bf B}^H{\bf B}+\rho)^{-1}(\rho\hat{\bf w}_r + {\bf B}^{H}\hat{{\bf Y}} -\hat{\bf I}^{T}) \nonumber \\ \hat{\bf w} &=& \frac{\sqrt{N}{\bf F}^{H}(\rho\hat{\bf w}_{c}+\hat{\bf I}^T)}{N(\lambda_1+\rho)} \end{eqnarray} Recall that ${\bf B}$ denotes the stacked circulant feature matrix of the target patch and context patches, ${\bf B} = \left [ {\bf A_0}, \sqrt{\lambda_2}{\bf A_1},...,\sqrt{\lambda_2}{\bf A_k }\right ]^T$. We can employ the property of circulant matrices and rewrite the dual variable $\hat{\bf w}_c $ and the jointly learned filter ${\bf w}_r$ element-wise in the Fourier domain as: \begin{eqnarray} \hat{\bf w}_c &=& \frac{\hat{a}^{*}_0\odot\hat{y}+\rho\hat{\bf w}_r-\hat{\bf I}^T}{\hat{a}^{*}_0\odot\hat{a}_0+\lambda_2\sum_{i=1}^{k} \hat{a}^{*}_i\odot\hat{a}_i +\rho} \nonumber \\ {\bf w}_r &=& {\bf r}\odot {\bf w} = {\bf r}\odot \frac{\mathcal{F}^{-1}(\rho\hat{\bf w}_{c}+\hat{\bf I}^T)}{\lambda_1+\rho} \end{eqnarray} Since all computations are carried out in the Fourier domain, the proposed SAT tracker runs at a low computational complexity of $\mathcal{O}(N\log N)$. For more details of the derivation, please refer to Algorithm 1 and Appendix A. In addition, to counteract scale variation, we adopt the same strategy as \cite{d11} and estimate the translation and scale jointly with the filter trained above. Specifically, we set seven search sizes ranging from 0.94 to 1.06, under the assumption that the target scale does not change significantly between consecutive frames.
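Returning to the optimization, the element-wise ADMM updates above can be sketched as follows (our own toy implementation; the zero initialization, the 2-D single-channel setting, and the variable names are assumptions, with the parameter defaults taken from the experimental setup):

```python
import numpy as np

def admm_filter(a0, contexts, y, r, lam1=0.01, lam2=25.0,
                rho=5.0, beta=3.0, rho_max=25.0, iters=5):
    """Alternate the dual-variable and masked-filter updates in the
    Fourier domain, then ascend on the multiplier and grow the penalty."""
    a0f, yf = np.fft.fft2(a0), np.fft.fft2(y)
    # Target spectrum plus the context-patch energy from the denominator.
    denom = np.conj(a0f) * a0f + lam2 * sum(
        np.abs(np.fft.fft2(c)) ** 2 for c in contexts)
    wrf = np.zeros_like(a0f)   # masked filter \hat{w}_r
    lagf = np.zeros_like(a0f)  # Lagrange multiplier \hat{I}
    for _ in range(iters):
        wcf = (np.conj(a0f) * yf + rho * wrf - lagf) / (denom + rho)
        # Spatial-domain step, gated by the reliability mask r.
        w = r * np.real(np.fft.ifft2(rho * wcf + lagf)) / (lam1 + rho)
        wrf = np.fft.fft2(w)
        lagf = lagf + rho * (wcf - wrf)
        rho = min(rho_max, beta * rho)
    return wrf
```

Every step is an element-wise operation plus FFTs, which is where the $\mathcal{O}(N\log N)$ cost comes from.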
For more details of this scale search, we recommend the readers refer to \cite{d11}. \begin{figure}[!t] \vspace{-0.5cm} \centering{ \includegraphics[width=8cm,height=2.1cm]{fig3.eps}\hspace{-0.2cm} } \vspace{-0.4cm} \caption{ Illustration of representative response maps during tracking. The response map follows a Gaussian distribution under ideal conditions, as shown in the left column. When the target undergoes appearance change or other external interference, the response map often shows multiple peaks or an abnormal shape, as shown in the middle and right columns.} \label{fig:3} \end{figure} \vspace{-0.3cm} \subsection{High-Confidence Filter Updating} \label{ssec:3.4} \vspace{-0.1cm} A recurring question in visual tracking is how to update the target's appearance model so that it maintains a good representation of the target. Wang et al. \cite{d45} point out that this is a stability-plasticity dilemma: the tracker must maintain a tradeoff between adapting to new but possibly noisy samples and preventing drift to the background. However, the implementation of the model updater is often treated as an engineering trick, even though its impact on performance is usually quite significant. In this section, we tackle these issues by advocating a novel monitoring criterion that reveals the tracking condition and guarantees accurate and stable filter updating. Physically, the horizontal and vertical axes of the response map index the candidate location, while the value ${\bf s}$ can be interpreted as the feature similarity between the target template and the candidate sample. Ideally, the response map is assumed to follow a Gaussian distribution, with a single sharp peak and slight tails over the whole search window, since the filter is trained with Gaussian-shaped regression labels.
Unfortunately, under the influence of challenging attributes and sample noise, contaminated candidate samples cannot match the template perfectly, resulting in multiple peaks and abnormal shapes as shown in Figure 3. Most existing CF trackers update their model at every frame without considering whether the detection is accurate. Hence, such trackers often fail to locate the target precisely once the response map is no longer ideal, and can hardly recover from drifting since the filter is contaminated by incorrect updating. To this end, we argue that a robust tracker demands not only accurate and stable filter learning but also timely abnormality detection and a high-confidence updating strategy. As mentioned above, the ideal response map should have only one sharp peak and be smooth elsewhere. Therefore, the proposed criterion should consider the maximal value of the response map and the distribution of the remaining response values simultaneously. The former is captured by the maximum score ${S}_{max}$ of the current response map ${\bf S}({\bf Z}, {\bf w})$. For the latter, we introduce kurtosis to measure the peakedness and tail weight of the response distribution. Given a random variable $x$, the kurtosis of $x$ is defined as the quotient of the fourth cumulant and the square of the second cumulant, which simplifies to the fourth standardized moment minus three. Here $\kappa_{k}$ stands for the $k$th cumulant and $\mu_{k}$ is the $k$th central moment. \begin{equation} BK(x) = \frac{\kappa_4(x)}{\kappa_2^2(x)} =\frac{\mu_4(x)}{\sigma^4(x)}-3 \end{equation} By this definition, data with high kurtosis tend to have a distinct, rapidly declining peak and heavy tails, while data with low kurtosis tend to have a flat top or multiple peaks rather than a single sharp peak, as illustrated in Fig. 3.
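A small numerical sketch (our own; the toy response maps and function names are assumptions) confirms that this excess-kurtosis statistic separates a single-peak response from a two-peak one, and shows the resulting gating rule against the historical averages of $S_{max}$ and $BK$:

```python
import numpy as np

def excess_kurtosis(response):
    # BK = mu_4 / sigma^4 - 3, computed over all scores in the response map.
    s = np.asarray(response, dtype=float).ravel()
    mu, sigma = s.mean(), s.std()
    return ((s - mu) ** 4).mean() / sigma ** 4 - 3.0

def confident(s_max, bk, s_hist, bk_hist, theta1=0.6, theta2=0.5):
    # Update only when BOTH the peak score and the kurtosis exceed a fixed
    # fraction of their historical averages (the S_tr and BK_tr thresholds).
    return bool(s_max > theta1 * np.mean(s_hist)
                and bk > theta2 * np.mean(bk_hist))
```

On a 100-sample map, a single unit peak yields a much larger $BK$ than two unit peaks, which is exactly the situation Fig. 3 depicts.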
Based on this mathematical property, we can employ kurtosis to forecast and supervise the tracking quality in advance. In addition, we record the kurtosis $BK$ and ${S}_{max}$ and compute their historical averages at each frame. We then multiply the average values by the ratios ${\theta_1}$ and ${\theta_2}$, respectively, to serve as the updating thresholds $S_{tr}$ and $BK_{tr}$: \vspace{-0.0cm} \begin{equation} \left\{ \begin{array}{lr} S_{tr} = \theta_1 \times \frac{\sum_{t=1}^{T}S_{max}(t)}{T} \nonumber \\ BK_{tr} = \theta_2 \times \frac{\sum_{t=1}^{T}BK(t)}{T} \end{array} \right. \end{equation} Model updating is performed only if both criteria exceed their corresponding thresholds, in which case the learned filter is updated with a learning rate $\eta_c$ as in equation 4. Figure 4 illustrates the necessity of the proposed monitoring indicator. The green box indicates the result of a tracker which only considers the maximal value and ignores the shape of the response map as the updating metric. The red one stands for the result of our tracker, which takes both the maximal response and the response shape into consideration during updating. One can see that, owing to the use of kurtosis, inaccurate updates are avoided when distractors appear. \vspace{-0.4cm} \begin{figure} \vspace{-0.2cm} \centering \includegraphics[width=9cm,height = 5.2cm]{update.eps} \hspace{-2cm} \vspace{-0.35cm} \caption{Validation results for the proposed high-confidence updating strategy. The first row shows the response maps for frames 50 and 71 of the Jogging1 sequence. One can see that when the person is occluded by the telegraph pole, the maximal response remains large while the kurtosis value decreases sharply. Hence, with the employment of kurtosis, the unwanted update is reasonably avoided. The last picture of the second row illustrates the tracking results when the occlusion ends.
The red box can no longer locate the target since the filter was contaminated at frame 71.} \label{fig:4} \end{figure} \begin{algorithm}[t] \caption{SAT Tracking Algorithm} \LinesNumbered \KwIn {Current Image ${\bf I}_t$, Previous Position ${P}_{t-1}$, Previous Target Size $s_{t-1}$, Learned Filter ${\bf w}_{t-1}$, Previous Color Histogram ${\bf H}_{t-1}$.} \KwOut{Estimated Target Position $P_t$ and Scale $s_t$, Updated Filter ${\bf w}_t$ and Histogram ${\bf H}_t$ } \textbf{Repeat:}\\ Extract the target patch feature ${\bf a}_0$ and the context features ${\bf a}_i$. \\ Extract the foreground and background histograms ${\bf H}^{f}$ and ${\bf H}^{b}$ based on the previous location $P_{t-1}$.\\ Construct the reliability mask ${\bf r}$ based on ${\bf H}^{f}$ and ${\bf H}^{b}$. \\ \If{ the color informativeness test passes,}{ Update the foreground and background histograms: ${\bf H}^{f}_{t} = (1-\eta_h) {\bf H}^{f}_{t-1}+ \eta_h {\bf H}^{f}$, ${\bf H}^{b}_{t} = (1-\eta_h) {\bf H}^{b}_{t-1}+ \eta_h {\bf H}^{b}$.\\ \textbf{endif} } Train the filter ${\bf w}$ based on the reliability mask ${\bf r}$ and the discriminative features ${\bf a}_0$ and ${\bf a}_i$. \\ Estimate the current target position $P_{t}$ and target size $L_s(t)$ by computing the response map. \\ Calculate the maximal response $S_{max}$ and the kurtosis $BK$ from the response map. \\ \If{the update condition is satisfied,}{ Update the optimal filter: ${\bf w}_t = (1-\eta_c){\bf w}_{t-1} + \eta_c {\bf w}$. \\ Update the current scale size: $s_t = s_{t-1} \times L_s(t)$ . \\ Update the updating thresholds $S_{tr}$ and $BK_{tr}$. \\ \textbf{end if} } \textbf{Until} the end of the video sequence. \end{algorithm} \subsection{DeepSAT Tracker} \label{ssec:3.5} Recently, with their great power of feature representation, convolutional neural networks have demonstrated state-of-the-art results on a wide range of computer vision tasks. Therefore, we introduce pre-trained CNN features into the proposed framework.
Inspired by HCF \cite{d18}, we utilize conv3, conv4 and conv5 of VGG-Net as the feature extractor. Features from earlier layers retain higher spatial resolution for precise localization, while features from later layers capture more semantic information and fewer fine-grained spatial details. In order to integrate the features of different layers effectively, each layer is convolved with its correlation filter to generate a response map. After resizing, a final response map is obtained by stacking all the response maps with different weights. It should be mentioned that, in contrast to DLT \cite{d30} and DeepTrack \cite{d31}, which update the appearance model by fine-tuning CNNs online, HCF and our DeepSAT tracker use classifier learning for model update, which is computationally efficient. In addition, we find that pre-trained CNN features are of limited effectiveness in estimating the target scale while incurring high computational cost. Hence, we incorporate the scale estimation method proposed in \cite{d10} to tackle the scale issue for the DeepSAT tracker. Instead of searching the translation and scale jointly, we introduce a one-dimensional correlation filter with HOG features after locating the target using deep CNN features. To be more specific, $W_t$ and $H_t$ denote the width and height of the current target, $L_s$ stands for the number of scale layers, and $\alpha_s$ is the scaling factor. During scale adaptation for DeepSAT, a set of patches centered at the current position is extracted with sizes $\alpha_s^n W_t\times \alpha_s^n H_t$, where $n \in \{[\frac{-(L_s-1)}{2}], ... , [\frac{(L_s-1)}{2}]\}$. Afterwards, the response map of each cropped patch is computed as in equation 3, and the index $n$ that gives the maximum response is chosen as the scale at the current frame. Please refer to \cite{d10} for more details. Similarly, the scale estimation is only performed when the updating condition is satisfied, to speed up tracking.
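The candidate sizes of this scale pyramid can be enumerated as follows (a minimal sketch of our own; only the size generation is shown, not the HOG scale filter itself):

```python
def scale_candidates(width, height, n_scales=33, alpha=1.02):
    # Sizes alpha^n * (W_t, H_t) for n in {-(L_s-1)/2, ..., (L_s-1)/2};
    # the middle entry (n = 0) is the current target size.
    half = (n_scales - 1) // 2
    return [(alpha ** n * width, alpha ** n * height)
            for n in range(-half, half + 1)]
```

With $L_s = 33$ and $\alpha_s = 1.02$, the pyramid spans roughly $\pm 37\%$ of the current size, consistent with the assumption of slow scale change.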
Experiments on \cite{d2} validate this simple strategy for scale estimation. \vspace{-0.03cm} \section{Experimental Results and Analysis} \label{sec:4} In this section, we evaluate our tracker on the challenging sequences of the Online Tracking Benchmark \cite{d2}, which involve 11 common challenging attributes. The proposed tracker is compared with 15 representative state-of-the-art tracking methods of recent years. These trackers can be broadly categorized into two classes: (i) conventional CF-based trackers and their variants, including KCF \cite{d9}, DSST \cite{d10}, SAMF \cite{d11}, MUSTer \cite{d12}, LCT \cite{d14}, ROT \cite{d13}, Staple \cite{d17}, SAMF-CA \cite{d24} and CSR \cite{d16}. For instance, DSST and SAMF address scale variation, while ROT, MUSTer and LCT aim at tackling occlusion issues. (ii) other representative trackers reported in the OTB benchmark or VOT challenges: Struck \cite{d6}, SCM \cite{d4}, TLD \cite{d7}, MEEM \cite{d28} and TGPR \cite{d29}. It should be noted that features play a crucial role in visual tracking; for a fair comparison, we equip the SAT tracker with hand-crafted features (HOG and CN) when comparing with the trackers above.
\begin{table*} \scriptsize \centering \caption{The Overlap Rate and Precision Scores (in percentage) over OTB100 for DeepSAT and the other 8 CNN-based trackers.} \label{Tab:1} \renewcommand\arraystretch{1.5} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{1}{c|}{\textbf{OUR}} & \multicolumn{1}{c|}{CNN-SVM \cite{d32}} & \multicolumn{1}{c|}{CNT \cite{d33}} & \multicolumn{1}{c|}{HCF \cite{d18}} & \multicolumn{1}{c|}{CFNet\cite{d34}} & \multicolumn{1}{c|}{Siamese \cite{d35}} & \multicolumn{1}{c|}{HDT \cite{d19}} & \multicolumn{1}{c|}{DeepSRDCF \cite{d36}} & \multicolumn{1}{c|}{SRDCFdecon \cite{d37}} \\ \cline{1-10} {Overlap} & \multicolumn{1}{c|}{\textbf{64.3}} & \multicolumn{1}{c|}{55.4} & \multicolumn{1}{c|}{54.5} & \multicolumn{1}{c|}{56.2} & \multicolumn{1}{c|}{56.8} & \multicolumn{1}{c|}{59.2} & \multicolumn{1}{c|}{65.4} & \multicolumn{1}{c|}{63.5} & \multicolumn{1}{c|}{62.7} \\ \hline {Precision} & \multicolumn{1}{c|}{\textbf{86.4}} & \multicolumn{1}{c|}{81.4} & \multicolumn{1}{c|}{72.3} & \multicolumn{1}{c|}{83.7} & \multicolumn{1}{c|}{74.8} & \multicolumn{1}{c|}{77.3} & \multicolumn{1}{c|}{84.8} & \multicolumn{1}{c|}{85.1} & \multicolumn{1}{c|}{82.5}\\ \hline \end{tabular} \end{table*} \begin{figure*}[!t] \centering \includegraphics[width=9cm,height=7.8cm]{Os.eps} \hspace{-0.4cm} \includegraphics[width=9cm,height=7.8cm]{Op.eps} \hspace{1cm} \vspace{-0.34cm} \caption{The success plot and precision plot of OPE. The proposed tracker is compared with 13 state-of-the-art trackers on 100 challenging sequences. The scores of the success and precision plots are the values shown in the legend. Best viewed in color.} \label{fig:8} \end{figure*} \subsection{Experimental Setup} \label{ssec:4.1} We implement all experiments in MATLAB 2015a on an Intel(R) Xeon(R) 2.67 GHz machine with 32 GB RAM. For all compared trackers, we use the original parameters and source code provided on OTB or the authors' websites.
HOG and ColorName features are selected as the target representation for the SAT tracker. For the DeepSAT tracker, we exploit an ensemble of deep features as in HCF \cite{d18} (conv5, conv4 and conv3 from VGG-Net). The weights for stacking the responses are set to 1, 0.5 and 0.02, respectively. HSV foreground and background color histograms with 16 bins per channel are used to establish the reliability mask for color source images. Four context patches around the target are extracted to boost the discriminative ability of the filter. For the informativeness test, the lower bound $\tau_l$ equals 0.3, while the upper bound $\tau_u$ equals 1.5. The padding size is set to 2.5 times the initial target size, while the Gaussian kernel width $\sigma$ is set to 1 or 0.25 depending on the aspect ratio of the target. The learning rate $\eta_{c}$ for the CF template is set to 0.015, while the one for histogram adaptation $\eta_{h}$ is set to 0.04. The regularization parameter $\lambda_{1}$ equals 0.01, while $\lambda_{2}$ is set to 25. When solving the Augmented Lagrangian function, $\rho$ and $\beta$ are fixed to 5 and 3, respectively. The maximum number of iterations is set to 5 and the upper bound of the penalty parameter $\rho_{max}$ is set to 25. For the high-confidence updating module, the threshold ratios $\theta_1$ and $\theta_2$ are set to 0.6 and 0.5, respectively. To address scale variation, we adopt two different strategies in the SAT and DeepSAT trackers, since pre-trained VGG features are of limited effectiveness in estimating the target scale. For the SAT tracker, we utilize the scale estimation scheme of SAMF with 7 search sizes, while for the DeepSAT tracker we empirically set the number of scale pyramid layers $L_s$ to 33, with the scale factor $\alpha_s$ equal to 1.02 as in \cite{d10}.
\subsection{Evaluation Methodology} \label{ssec:4.2} In this subsection, we employ One-Pass Evaluation (OPE), a common evaluation methodology in the Object Tracking Benchmark \cite{d1,d2}, to measure the tracking accuracy and robustness of the proposed method against the others. Two metrics (precision and overlap rate) are utilized to evaluate the performance of the candidate trackers. The precision plot illustrates the percentage of frames whose center location is within a given threshold distance of the ground-truth center; in the experiments, a threshold of 20 pixels is used to rank the trackers. The success plot is based on the overlap ratio, defined as $R = Area (B_T\bigcap B_G)/ Area(B_T\bigcup B_G)$, where $B_T$ stands for the tracking output box and $B_G$ stands for the ground-truth rectangle. The success plot shows the percentage of frames with $R>th$ over all thresholds $th \in [0,1]$. The area under the curve (AUC) of each success plot serves as the second measure to rank the tracking algorithms. Both the precision and success plots show the mean scores over all sequences. \begin{figure*}[!t] \centering \includegraphics[width=6cm,height=5.2cm]{Sbc.eps} \hspace{-0.5cm} \includegraphics[width=6cm,height=5.2cm]{Siv.eps} \hspace{-0.5cm} \includegraphics[width=6cm,height=5.2cm]{Socc.eps} \\ \vspace{0.25cm} \includegraphics[width=6cm,height=5.2cm]{Smb.eps} \hspace{-0.5cm} \includegraphics[width=6cm,height=5.2cm]{Sfm.eps} \hspace{-0.5cm} \includegraphics[width=6cm,height=5.2cm]{Sdef.eps} \\ \vspace{0.25cm} \includegraphics[width=6cm,height=5.2cm]{Sir.eps} \hspace{-0.5cm} \includegraphics[width=6cm,height=5.2cm]{Sor.eps} \hspace{-0.5cm} \includegraphics[width=6cm,height=5.2cm]{Ssv.eps} \\ \vspace{0.05cm} \hspace{-0.5cm} \hspace{2.5cm} \vspace{-0.3cm} \caption{The success plots of the videos with different attributes. The number in the title indicates the index of the corresponding sequences.
Best viewed in color.} \label{fig:10} \end{figure*} \begin{figure*}[!t] \vspace{0.5cm} \centering \includegraphics[width=6cm,height=5.2cm]{Pbc.eps} \hspace{-0.5cm} \includegraphics[width=6cm,height=5.2cm]{Piv.eps} \hspace{-0.5cm} \includegraphics[width=6cm,height=5.2cm]{Pocc.eps}\\ \vspace{0.25cm} \includegraphics[width=6cm,height=5.2cm]{Pmb.eps} \hspace{-0.5cm} \includegraphics[width=6cm,height=5.2cm]{Pfm.eps} \hspace{-0.5cm} \includegraphics[width=6cm,height=5.2cm]{Pdef.eps}\\ \vspace{0.25cm} \includegraphics[width=6cm,height=5.2cm]{Pir.eps} \hspace{-0.5cm} \includegraphics[width=6cm,height=5.2cm]{Por.eps} \hspace{-0.5cm} \includegraphics[width=6cm,height=5.2cm]{Psv.eps} \\ \vspace{0.25cm} \hspace{-0.5cm} \hspace{2.5cm} \vspace{-0.3cm} \caption{The precision plots of the videos with different attributes. The number in the title indicates the index of the corresponding sequences. Best viewed in color.} \label{fig:11} \end{figure*} \subsection{Quantitative Comparison} \label{ssec:4.3} Figure 5 illustrates the overall performance of the 13 conventional trackers on OTB-100 in terms of success and precision plots. Among all compared trackers, the proposed SAT tracker obtains the best performance, achieving a 0.607 AUC score and a 0.837 distance precision rate at the threshold of 20 pixels. SAMF-CA is the baseline of the proposed tracker, and SAT improves the tracking performance by 3.3 and 4.6 percentage points in the success and precision plots, respectively. In addition, the proposed tracker outperforms the other state-of-the-art and milestone trackers by a distinct margin. To further evaluate the proposed tracker, we implement the DeepSAT tracker with CNN features and conduct the same experiments with eight popular CNN-based trackers, including CNN-SVM \cite{d32}, CNT \cite{d33}, HCF \cite{d18}, CFNet \cite{d34}, Siamese \cite{d35}, HDT \cite{d19}, DeepSRDCF \cite{d36} and SRDCFdecon \cite{d37}. The one-pass evaluation of the two metrics is shown in Table I.
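For reference, the two OPE metrics defined in the previous subsection can be computed as follows (our own sketch; the (x, y, w, h) box convention and the 101-point threshold grid are assumptions):

```python
import numpy as np

def overlap(bt, bg):
    # R = Area(B_T intersect B_G) / Area(B_T union B_G), boxes as (x, y, w, h).
    x1, y1 = max(bt[0], bg[0]), max(bt[1], bg[1])
    x2 = min(bt[0] + bt[2], bg[0] + bg[2])
    y2 = min(bt[1] + bt[3], bg[1] + bg[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    return inter / (bt[2] * bt[3] + bg[2] * bg[3] - inter)

def success_auc(overlaps, thresholds=np.linspace(0, 1, 101)):
    # Mean fraction of frames with R > th, averaged over the threshold grid.
    overlaps = np.asarray(overlaps)
    return float(np.mean([(overlaps > th).mean() for th in thresholds]))
```

The AUC score reported in the success plots is `success_auc` evaluated over the per-frame overlaps of a sequence.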
It can be seen that the DeepSAT tracker performs favorably against the other CNN-based trackers, which further demonstrates that our tracker is effective and promising. \begin{figure*}[!t] \vspace{-0.5cm} \centering{ \subfloat[Human3 and Jogging1 with Occlusion and Out-of-View]{ \includegraphics[width=16cm,height=4.cm]{fig7a.eps} } \\ \vspace{-0.4cm} \subfloat[Clifbar, Bolt2 and Football with Background Clutters and Distractors]{ \includegraphics[width=16cm,height=5.6cm]{fig7b.eps} } \\ \vspace{-0.4cm} \subfloat[Couple and Shaking with Abrupt Motion and Illumination variation]{ \includegraphics[width=16cm,height=4.cm]{fig7c.eps} } \\ \vspace{-0.4cm} \subfloat[Diving, Skating and Motorrolling with Rotation and Deformation]{ \includegraphics[width=16cm,height=5.6cm]{fig7d.eps} } \\ \vspace{-0.4cm} \subfloat[Human9 and Dog1 with Scale Variation]{ \includegraphics[width=16cm,height=4.cm]{fig7e.eps} } } \\ \vspace{-0.05cm} \includegraphics[width=16cm,height=0.7cm]{figure.eps}\\ \vspace{-0.5cm} \caption{Representative tracking results on some challenging sequences. Best viewed in color.} \label{fig:12} \end{figure*} Since the target undergoes different challenging attributes during the tracking process, it is crucial to investigate the tracking performance with respect to these factors. Figure 6 and Figure 7 illustrate the success and precision plots on nine different factors which we have mainly discussed in this paper. We categorize these factors into two classes: external interference (BC, IV, OCC and MB) and internal target appearance change (FM, DEF, IPR, OPR and SV). In most cases, the SAT tracker ranks in the top two among the 15 trackers for both the external and internal attributes simultaneously. For external interference, our SAT tracker benefits from the learning of context patches, which makes it aware of potential distractions and holistic appearance changes in advance.
On the other hand, the performance gain for internal target appearance change can be largely attributed to the reliable information learning in the filter design, which trains the filter with more accurate features compared with conventional CF trackers. To sum up, by jointly considering the discrimination and reliability information in the filter training stage, our SAT tracker is more robust to the above challenging attributes, as shown in Figure 6 and Figure 7. \subsection{Qualitative Comparison} \label{ssec:4.5} To better visualize the tracking performance of the SAT tracker, we provide a qualitative comparison of our approach with nine state-of-the-art trackers in Figure 8. Several video sequences containing various challenging attributes are selected from OTB-100 to present the tracking performance of the different trackers. \subsubsection{Occlusion and Out of View} \label{subsec:4.5.1} During tracking, the target is often partially or fully occluded by other objects, which destroys the holistic appearance of the target. In other cases, some portion of the target may leave the view. Both of these attributes may lead to model drift if the tracker is not robust enough. We evaluate the tracking algorithms on the Human3 and Jogging1 sequences in Figure 8(a), where the person is occluded by a telegraph pole for some time during tracking. Unfortunately, most of the traditional CF trackers fail to track the target stably due to the training samples corrupted by the occluding objects. Our SAT tracker addresses these issues, since we incorporate the surrounding context patches into the training stage, which allows it to detect the potential distractors in advance. In addition, the high-confidence updating strategy further guarantees the purity of the tracking template.
\subsubsection{Background Clutter and Distractors} \label{subsec:4.5.2} The target in the ClifBar sequence has texture similar to its surrounding background, so it is hard to separate the foreground from the background by extracting the positive samples' features. Therefore, TGPR, KCF and ROT can hardly locate the target precisely when it undergoes background clutter, since they do not exploit the background information. Moreover, even though TLD is equipped with a re-detection module, it fails to recover from the previous drift due to its less discriminative features. Figure 8(b) also illustrates another two sequences, where similar objects appear in the screenshots. Most of the trackers fail to distinguish the target from the distractors and drift gradually (see the tracking result of MEEM at frame 148 in Bolt2 and the result of CSR at frame 345 in Football1). Only our tracker successfully locates the person throughout the entire video in both sequences. This can be attributed to the discrimination and reliability information adopted in the training stage, which guarantees that the filter focuses on the reliable features and suppresses the potential distractions and false positives. \subsubsection{Abrupt Motion and Illumination Variation} \label{subsec:4.5.3} Figure 8(c) shows the screenshots of the tracking results in two challenging sequences where the object undergoes abrupt motion and illumination variation. ROT, TGPR and Staple undergo severe drift when the camera shakes. SAMF-CA, CSR and KCF fail to detect the target when it moves abruptly at frame 93 in the Couple sequence. TLD, MEEM and our SAT tracker ultimately complete the challenging task. This is largely due to the fact that adding context patches allows for a larger search region and makes the tracker insensitive to abrupt motion. In the Shaking sequence, CSR and Staple lose the target when illumination variation occurs, since the color histograms adopted in their trackers are notoriously sensitive to light changes.
Nevertheless, our SAT tracker combines the color cues with the surrounding context cues in a complementary manner, which ensures that the filter concentrates on the reliable parts and is not disturbed by external factors. In addition, the high-confidence updating scheme monitors the tracking condition frame by frame, forecasts potential tracking failures and guarantees the purity of the learned filter. \subsubsection{Rotation and Deformation} \label{subsec:4.5.4} Figure 8(d) shows the screenshots of the tracking results in three challenging sequences where the object undergoes rotation and large shape deformation. In the Diving and Skating sequences, CSR, SAMF-CA and our SAT tracker ultimately complete the challenging task, while the other trackers either lose the target or drift to other distractors. In the MotorRolling sequence, all the other trackers fail to track the motorbike due to the rapid rotation and deformation. In contrast, the SAT tracker locates the target precisely throughout the entire sequence. We attribute the superior performance to the effectiveness of the color-based reliability learning, since the color statistics cope well with variations in shape and rotation. \subsubsection{Scale Variation} \label{subsec:4.5.5} During tracking, the scale of the target often varies in successive frames as the target moves. Hence, a tracker is required to estimate the scale as accurately as possible. Figure 8(e) shows the tracking results on the Human9 and Dog1 sequences with large scale changes. Some of the trackers cannot tackle this issue and gradually drift due to error accumulation, even though they are equipped with a scale adaptation scheme (see the tracking results of the CSR, Staple and Struck trackers on the Human9 sequence). In contrast, the proposed tracker deals with the above challenges and performs better than the other trackers, achieving long-term stable tracking with the employment of an effective search strategy and an anti-drift filter learning mechanism.
\section{Conclusion} \label{sec:5} In this paper, we propose a generic framework for correlation filter (CF) based trackers, which jointly considers the discrimination and reliability information in the filter learning stage. Context patches are employed in the filter training stage to better distinguish the target from the background. Furthermore, a color-based reliability mask is learned in each frame to encourage the filter to focus on the more reliable regions suitable for tracking. Compared to the existing CF-based trackers, the proposed tracker handles not only the tracking challenges caused by external attributes but also the issues caused by internal target appearance change. Numerous experiments demonstrate the effectiveness and robustness of the proposed trackers (SAT and DeepSAT) against other relevant state-of-the-art methods. \vspace{0.cm} \section*{Acknowledgment} The authors are grateful to the anonymous reviewers for their encouraging and insightful advice that led to this improved version and a clearer presentation of the technical content. This work is partially supported by the National Natural Science Foundation of China (NSFC) Grants 91438203 and 3172901. This work is also partially supported by the Changjiang Scholars Programme (No. T2012122). All of the authors are with the Beijing Key Laboratory of Embedded Real-Time Information Processing Technology, School of Information and Electronics, Beijing Institute of Technology, Beijing 10081, China. The corresponding author is Chenwei Deng, with the corresponding e-mail: [email protected]. \vspace{0.3cm} \section*{Appendix A} This section provides the complete derivation of the solutions for Equation 8 in the manuscript.
As discussed in Section III-B, the augmented Lagrangian objective function $L({\hat{\bf w}}_{c},{\bf w},{\hat{\bf I}},\rho)$ is defined as: \vspace{-0.3cm} \begin{eqnarray} L({\hat{\bf w}}_{c},{\bf w},{\hat{\bf I}},\rho) = \|{\bf B{\hat {\bf w}_c}-\hat{{\bf Y}}}\|_{2}^{2} + \lambda_{1}\|\hat{\bf w}_{r}\|_2^2 \nonumber\\ + \hat{\bf I}^T(\overline{\hat{\bf w}_c}-\overline{\hat{\bf w}_r}) + \rho\|(\hat{\bf w}_c-\hat{\bf w}_r) \|_2^2 \end{eqnarray} where ${\bf B}$ is the stacked feature matrix formed by the target patch and the $K$ context patches, and ${\bf Y}$ denotes the new regression label corresponding to the target patch and the context patches. Meanwhile, $\hat{\bf w}_{r}$ denotes the Fourier transform of ${\bf w}_{r}={\bf r}\odot{\bf w}$, the Hadamard product between the reliability mask and the base filter. According to the properties of the Fourier transform, $\hat{\bf w}_{r} = \sqrt{N}{\bf F}{\bf R}{\bf w}$. Here, ${\bf F}$ is the orthonormal matrix of Fourier coefficients of size $N\times N$ and ${\bf R} = diag({\bf r})$. We now solve the augmented Lagrangian objective function. We decompose the overall problem into four constituent parts for a clearer representation and simplification.
\vspace{-0.2cm} \begin{eqnarray} \vspace{-0.2cm} L_{1} &=& \|{\bf B{\hat {\bf w}_c}-\hat{\bf Y}}\|_{2}^{2} \nonumber \\ &=& \hat{\bf w}_{c}^{H}{\bf B}^{H}{\bf B}\hat{\bf w}_{c}- \hat{\bf w}_{c}^{H}{\bf B}^{H}\hat{\bf Y}-\hat{\bf Y}^{H}{\bf B}\hat{\bf w}_{c}+\hat{\bf Y}^H\hat{\bf Y} \nonumber \\ L_{2} &=& \lambda_{1}\|\hat{\bf w}_{r}\|_2^2 = \lambda_{1}N{\bf w}^{H}{\bf R}^{H}{\bf F}^{H}{\bf F}{\bf R}{\bf w} = \lambda_{1}N{\bf w}^{H}{\bf R}^{H}{\bf R}{\bf w} \nonumber \\ L_{3} &=& \hat{\bf I}^{T}\overline{\hat{\bf w}_{c}}-\hat{\bf I}^{T}\overline{\hat{\bf w}_{r}} \nonumber \\ L_{4} &=& \rho\|(\hat{\bf w}_c-\hat{\bf w}_r) \|_2^2 \nonumber \\ &=& \rho(\hat{\bf w}_{c}^{H}\hat{\bf w}_{c}-\sqrt{N}{\bf w}^{H}{\bf R}^{H}{\bf F}^{H}\hat{\bf w}_{c}-\sqrt{N}\hat{\bf w}_{c}^{H}{\bf F}{\bf R}{\bf w}+ N{\bf w}^{H}{\bf R}^{H}{\bf R}{\bf w}) \nonumber \end{eqnarray} According to the nature of the ADMM model, the objective function can be solved through a series of iterations: \vspace{-0.1cm} \begin{equation} \left\{ \begin{array}{lr} \hat{\bf w}_{c}^{i+1} = \arg\min\limits_{{\bf w}_c} L({\hat{\bf w}}_{c}^{i},{\bf w}^{i},\hat{\bf I}^{i},\rho^{i}) \nonumber \\ {\bf w}^{i+1} =\arg\min\limits_{{\bf w}} L({\hat{\bf w}}_{c}^{i+1},{\bf w}^{i},{\hat{\bf I}^{i}},\rho^{i}) \\ \hat{\bf I}^{i+1} = \hat{\bf I}^{i} + \rho^{i}(\hat{\bf w}_{c}^{i+1} - \hat{\bf w}_{r}^{i+1} ) \nonumber \\ \rho^{i+1} = \min(\rho_{max},\beta\rho^{i}) \nonumber \end{array} \right. \end{equation} It can be seen from the above equations that, in each iteration, we find the optimal value of one variable while fixing the others. Besides, the value of $\rho$ increases in each iteration to guarantee convergence, as in the standard ADMM technique. We follow this rule by setting the multiplier $\beta$ to 3. Traditionally, the ADMM stops when the residual term $\hat{\bf w}_{c}^{i+1} - \hat{\bf w}_{r}^{i+1}$ is small enough.
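For concreteness, the iteration scheme above can be sketched numerically with the element-wise closed forms derived in the remainder of this appendix. The following single-channel Python sketch is only an illustration: the parameter values, signal sizes and variable names are assumptions made for the example, not the settings used in our experiments.

```python
import numpy as np

def admm_filter(a0, contexts, y, r, lam1=1e-2, lam2=0.25,
                rho=1.0, beta=3.0, rho_max=1e3, iters=5):
    """Illustrative single-channel ADMM loop for the filter updates
    (variable names follow the appendix; all defaults are assumptions)."""
    a0_h = np.fft.fft(a0)                # \hat{a}_0: target patch features
    y_h = np.fft.fft(y)                  # \hat{y}: regression label
    # \hat{a}_0^* . \hat{a}_0 + lam2 * sum_i \hat{a}_i^* . \hat{a}_i
    denom = np.abs(a0_h) ** 2 + lam2 * sum(
        np.abs(np.fft.fft(a)) ** 2 for a in contexts)
    w = np.zeros(len(a0))                    # base filter w (spatial domain)
    I_h = np.zeros(len(a0), dtype=complex)   # Lagrange multiplier \hat{I}
    for _ in range(iters):
        wr_h = np.fft.fft(r * w)         # \hat{w}_r = FFT(r * w)
        # element-wise closed form for \hat{w}_c
        wc_h = (np.conj(a0_h) * y_h + rho * wr_h - I_h) / (denom + rho)
        # spatial update of the base filter w
        w = np.real(np.fft.ifft(rho * wc_h + I_h)) / (lam1 + rho)
        wr_h = np.fft.fft(r * w)
        I_h = I_h + rho * (wc_h - wr_h)  # multiplier update
        rho = min(rho_max, beta * rho)   # increasing rho schedule
    return r * w                         # reliable filter w_r = r * w
```

The loop mirrors the four updates above, solving for $\hat{\bf w}_c$, then ${\bf w}$ (and hence $\hat{\bf w}_r$), then updating $\hat{\bf I}$ and $\rho$, with $\rho$ multiplied by $\beta=3$ in each of the five iterations.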
We fix the number of iterations at five, since experimental validation shows that the residual error decreases substantially within the first five iterations. From the above analysis, we can obtain the optimum over $\hat{\bf w}_{c}$ by setting the complex gradient of the augmented Lagrangian function to zero. \begin{eqnarray} \triangledown_{\hat{\bf w}_{c}^{H}}L = 0 \end{eqnarray} Therefore, we can decompose the complex gradient into four parts as: $\triangledown_{\hat{\bf w}_{c}^{H}}L_{1}+\triangledown_{\hat{\bf w}_{c}^{H}}L_{2}+\triangledown_{\hat{\bf w}_{c}^{H}}L_{3}+\triangledown_{\hat{\bf w}_{c}^{H}}L_{4} = 0 $. With the definition of these components, we can solve the partial gradients successively: \begin{eqnarray} \frac{\partial L}{\partial{\hat{\bf w}_{c}^{H}}} = {\bf B}^H{\bf B}\hat{\bf w}_{c}-{\bf B}^{H}\hat{\bf Y}+\hat{\bf I}^{T}+ \rho\hat{\bf w}_c -\rho\sqrt{N}{\bf F}{\bf R}{\bf w} \end{eqnarray} Thus, setting the above gradient to zero, we obtain the optimal value of $\hat{\bf w}_c$ for the $i$-th iteration as follows: \begin{eqnarray} \vspace{-0.2cm} \hat{\bf w}_{c} = ({\bf B}^H{\bf B}+\rho)^{-1}(\rho\sqrt{N}{\bf F}{\bf R}{\bf w} + {\bf B}^{H}\hat{{\bf Y}} -\hat{\bf I}^{T}) \end{eqnarray} Here, ${\bf B} = \left [ {\bf A_0}, \sqrt{\lambda_2}{\bf A_1},...,\sqrt{\lambda_2}{\bf A_k }\right ]^T$ denotes the stacked matrix formed by the feature matrix of the target patch ${\bf A_{0}}$ and those of the surrounding patches ${\bf A_1}$ to ${\bf A_k}$. Meanwhile, ${\bf Y} = \left [{\bf y} , {\bf 0} ,..., {\bf 0} \right ]^T$ indicates their corresponding regression labels; we manually set the labels of the surrounding context patches to zero, treating them as negative samples. With this assumption, the SAT tracker is able to detect the surrounding distractors by learning their features in advance.
We have ${\bf B}^H{\bf Y} = {\bf A}_0^H\hat{\bf y}$ and ${\bf B}^H{\bf B} = {\bf A}_0^H{\bf A}_0+\lambda_2{\bf A}_1^H{\bf A}_1+...+\lambda_2{\bf A}_k^H{\bf A}_k = {\bf A}_0^H{\bf A}_0 + \lambda_2\sum_{i=1}^{k}{\bf A}_i^H{\bf A}_i$. Recall the property of circulant matrices that ${\bf X} = {\bf F}diag({\bf {\hat x}}){\bf F}^H$ and ${\bf X}^H = {\bf F}diag({\bf {\hat x}^{*}}){\bf F}^H$. The above variables can then be represented element-wise in the Fourier domain: \vspace{-0.2cm} \begin{eqnarray} \vspace{0.1cm} {\bf B}^H{\bf B} &=& {\bf F} diag(\hat{a}^{*}_0\odot\hat{a}_0) {\bf F}^{H}+ \lambda_2\sum_{i=1}^{k} {\bf F} diag(\hat{a}^{*}_i\odot\hat{a}_i) {\bf F}^{H} \nonumber \\ &=& {\bf F} diag(\hat{a}^{*}_0\odot\hat{a}_0+\lambda_2\sum_{i=1}^{k}\hat{a}^{*}_i\odot\hat{a}_i) {\bf F}^{H}\\ {\bf B}^H{\bf Y} &=& {\bf A}_0^H\hat{\bf y} = {\bf F} diag(\hat{a}^{*}_0\odot\hat{y}){\bf F}^{H} \end{eqnarray} $\hat{a}_0$ and $\hat{a}_i$ indicate the feature vectors for the target patch and the context patches, in conjunction with the circulant feature matrices ${\bf A}_0$ and ${\bf A}_i$, respectively. Therefore, by substituting these element-wise representations into the solution above, we obtain the element-wise closed-form solution for $\hat{\bf w}_c$: \begin{eqnarray} \hat{\bf w}_c = \frac{\hat{a}^{*}_0\odot\hat{y}+\rho\hat{\bf w}_r-\hat{\bf I}^T}{\hat{a}^{*}_0\odot\hat{a}_0+\lambda_2\sum_{i=1}^{k} \hat{a}^{*}_i\odot\hat{a}_i +\rho} \end{eqnarray} Similarly, we can acquire the optimal ${\bf w}$ by setting the complex gradient of the augmented Lagrangian function to zero, that is, $\triangledown_{{\bf w}^{H}}L = 0 $.
\begin{eqnarray} \triangledown_{{\bf w}^{H}}L_{1}+\triangledown_{{\bf w}^{H}}L_{2}+\triangledown_{{\bf w}^{H}}L_{3}+\triangledown_{{\bf w}^{H}}L_{4} = 0 \end{eqnarray} Expanding this formulation, we obtain the following gradient: \begin{eqnarray} \frac{\partial L}{\partial{{\bf w}^{H}}} = \lambda_1N{\bf R}{\bf w}+\rho N{\bf R}{\bf w}-\rho\sqrt{N}{\bf R}{\bf F}^{H}\hat{\bf w}_c-\sqrt{N}{\bf R}{\bf F}^{H}\hat{\bf I}^T \end{eqnarray} Setting the gradient to zero and solving, we obtain: \begin{eqnarray} {\bf w} = \frac{\sqrt{N}{\bf F}^{H}(\rho\hat{\bf w}_{c}+\hat{\bf I}^T)}{N(\lambda_1+\rho)} \end{eqnarray} Recalling the property of the Fourier transform that $\hat{x} = \sqrt{N}{\bf F}x$, we can derive the inverse Fourier transform formula as $\mathcal{F}^{-1}(\hat{x}) = \frac{1}{\sqrt{N}}{\bf F}^{H}\hat{x}$. In this way, we finally obtain the element-wise form of the filter in the spatial domain: \vspace{0.2cm} \begin{eqnarray} {\bf w}_r = {\bf r}\odot {\bf w} = {\bf r}\odot \frac{\mathcal{F}^{-1}(\rho\hat{\bf w}_{c}+\hat{\bf I}^T)}{\lambda_1+\rho} \end{eqnarray} \bibliographystyle{IEEEtran}
\section{Introduction} In his recent brilliant manuscript \cite{Read}, the late Charles J.~Read constructed an equivalent norm $|||\cdot|||$ on $c_0$ such that the space $\mathcal R = (c_0, |||\cdot|||)$ answers negatively the following open problem by Ivan Singer from 1974 \cite{Singer}: \begin{itemize} \item[(S)] Is it true that every Banach space contains a proximinal subspace of finite codimension greater than or equal to $2$? \end{itemize} Recall that a subset $Y$ of a Banach space $X$ is said to be \emph{proximinal} if for every $x \in X$ there is $y_0 \in Y$ such that $\|x - y_0\| = \inf\{\|x-y\|\colon y\in Y\}$. Many times, when an interesting and non-trivial space is constructed to produce a counterexample, it can also be used to solve other, different problems. In the case of Read's space, this did not take too long: in \cite[Theorem 4.2]{Rmoutil}, Martin Rmoutil demonstrates that the space $\mathcal R$ also gives a negative solution to the following problem by Gilles Godefroy \cite[Problem III]{Godefroy}: \begin{itemize} \item[(G)] Is it true that for every Banach space $X$ the set $\NA(X) \subset X^*$ of norm attaining functionals contains a two-dimensional linear subspace? \end{itemize} Recall that an element $f$ of the dual $X^*$ of a Banach space $X$ is said to be \emph{norm attaining} if there is $x\in X$ with $\|x\|=1$ such that $\|f\|=|f(x)|$. The utility of Read's space makes clear that it is interesting to increase our knowledge of its geometry.
The aim of this note is to show that Read's space fulfills the following properties: \begin{itemize} \item[(a)] the bidual $\mathcal{R}^{**}$ of Read's space is strictly convex; \item[(b)] therefore, $\mathcal{R}^*$ is smooth; and \item[(c)] $\mathcal{R}$ is also strictly convex; \item[(d)] moreover, $\mathcal{R}$ is weakly locally uniformly rotund (WLUR); \item[(e)] the norm of $\mathcal{R}^*$ is rough, so it is not Fr\'{e}chet differentiable at any point; \item[(f)] and the norm of $\mathcal{R}$ is not locally uniformly rotund (LUR); \item[(g)] moreover, there is $\rho>0$ such that every weakly open subset of the unit ball of $\mathcal{R}$ has diameter greater than or equal to $\rho$. \end{itemize} The main point in Rmoutil's proof was the demonstration that for several closed subspaces $Y$ of $\mathcal R$, the corresponding quotient spaces $\mathcal R/Y$ are strictly convex. Observe that it follows from assertion (b) above that ALL quotient spaces $\mathcal R/Y$ are strictly convex (because their duals $Y^\bot$ are smooth). This gives a substantial simplification of the proof of \cite[Theorem 4.2]{Rmoutil}. Even more, it follows from a 1987 paper by V.~Indumathi \cite[Proposition~1]{Indumathi-JAT}, that if $X$ is a Banach space with $X^*$ smooth at every point of $\NA(X)\cap S_{X^*}$, then a finite-codimensional subspace $Y$ of $X$ is proximinal if and only if $Y^\perp\subset \NA(X)$. This hypothesis on $X$ is satisfied if $X^*$ is smooth (clear) or if $X$ is WLUR (see \cite{Yorke}). Let us comment that the facts that $\mathcal{R}$ is WLUR and $\mathcal{R}^*$ is smooth were said to be unexpected in the cited paper \cite{Rmoutil} by Rmoutil. As a final result of the paper, we present a renorming of Read's space which is smooth, whose dual is smooth and which solves negatively both problems (S) and (G). Let us now present the notation we use throughout the paper. We deal only with real scalars and real Banach spaces.
By $\|\cdot\|_1$ and $\|\cdot\|_\infty$ we denote the standard norms on $\ell_1$ and $\ell_\infty$ respectively, and by $(e_n)_{n\in \N}$ we denote the canonical basis vectors, i.e.\ the $k$-th coordinate $e_{n,k}$ of $e_n$ equals 0 for $n \neq k$ and equals $1$ for $n = k$. In each case, it will be clear from the context in which sequence space these $e_n$'s are considered. For $x = (x_k)_{k \in \N} \in \ell_\infty$, $y = (y_k)_{k \in \N} \in \ell_1$, we use the standard notation $\langle x, y \rangle = \sum_{n \in \N} x_n y_n$. If $X$ is an arbitrary Banach space, $B_X$ denotes its closed unit ball, $S_X$ denotes its unit sphere, and $X^*$ is the dual of $X$. We refer the reader to the books \cite{D-G-Z} and \cite{FHHMPZ} for background on geometry of Banach spaces. Let us finally recall the definition of Read's space and its basic properties from \cite{Read}. Let $c_{00}(\Q)$ be the set of all terminating sequences with rational coefficients, and let $(u_n)_{n \in \N}$ be a sequence of elements of $c_{00}(\Q)$ which lists every element infinitely many times. Further, let $(a_n)_{n \in \N}$ be a strictly increasing sequence of positive integers satisfying that $$ a_n > \max \supp u_n \quad \text{and} \quad a_n> \|u_n\|_1 $$ for every $n \in \N$. The equivalent norm $|||\cdot|||$ on $c_0$ is defined in \cite{Read} as follows: \begin{equation} \label{eq-norm1} |||x||| := \|x\|_\infty + \sum_{n \in \N}2^{-a_n^2} \left|\langle x, u_n - e_{a_n} \rangle\right| \qquad \bigl(x\in c_0\bigr). \end{equation} Now, as we mentioned above, $\mathcal R := (c_0, |||\cdot|||)$. To simplify the notation, let us denote $$ v_n = \frac{u_n - e_{a_n}}{\|u_n - e_{a_n}\|_1} \in \ell_1,\qquad r_n = 2^{-a_n^2}\|u_n - e_{a_n}\|_1 $$ for every $n\in \N$. Then \eqref{eq-norm1} can be rewritten as \begin{equation} \label{eq-norm2} |||x||| = \|x\|_\infty + \sum_{n \in \N}r_n \left|\langle x, v_n \rangle\right| \end{equation} for every $x\in \mathcal{R}$.
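To make the definition concrete, formula \eqref{eq-norm1} can be evaluated on a finitely supported vector once finitely many pairs $(u_n,a_n)$ are fixed. The following Python sketch is only an illustration of the formula (the actual norm involves an infinite sum, and the data below are hypothetical choices satisfying the two constraints on $a_n$):

```python
def read_norm_truncated(x, us, a_s):
    """Evaluate the finitely many summands of Read's norm corresponding
    to the given pairs (u_n, a_n); x and the u_n are finitely supported
    sequences given as lists, and the integers a_n are 1-based."""
    sup = max(abs(t) for t in x)               # ||x||_inf (finite support)
    total = sup
    for u, a in zip(us, a_s):
        # <x, u_n - e_{a_n}> = sum_k x_k * u_{n,k}  -  x_{a_n}
        inner = sum(xk * uk for xk, uk in zip(x, u))
        if a <= len(x):
            inner -= x[a - 1]
        total += 2.0 ** (-a * a) * abs(inner)
    return total

# hypothetical data: a_n > max supp u_n and a_n > ||u_n||_1
x = [1.0, 0.5, -0.25]
us = [[1, 0, 0], [0, 1, 0]]
a_s = [4, 5]
```

Since the weights $2^{-a_n^2}$ decay very fast, even these two summands already leave the value within the elementary bounds $\|x\|_\infty \leq |||x||| \leq 3\|x\|_\infty$.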
We finally remark that $ \sum\limits_{n \in \N}r_n \leq 2$ \cite{Read} and that the sequence $(v_n)_{n\in \N}$ is dense in $S_{\ell_1}$. Observe that it follows that \begin{equation}\label{eq-equivalent} \|x\|_\infty \leq |||x||| \leq 3 \|x\|_\infty \end{equation} for every $x\in \mathcal{R}$ \cite[Eq.~4]{Read}. \section{The results} Our first aim is to show that $\mathcal{R}^{**}$ is strictly convex but, first, we need a description of $\mathcal{R}^{**}$ which can be of independent interest. \begin{prop} \label{prop-bidualnorm} The bidual space $\mathcal R^{**}$ of Read's space is naturally isometric to $\ell_\infty$ equipped with the norm given by the formula \eqref{eq-norm2}. \end{prop} \begin{proof} Since $\mathcal R$ is $c_0$ in the equivalent norm $|||\cdot|||$, $\mathcal R^{**}$ should be $\ell_\infty$ equipped with an equivalent norm $|||\cdot|||^{**}$. We want to demonstrate that $(\ell_\infty, |||\cdot|||^{**}) = (\ell_\infty, |||\cdot|||)$. By Goldstine's theorem, the unit ball of $\mathcal R^{**} = (\ell_\infty, |||\cdot|||^{**})$ is the weak$^*$-closure in $\ell_\infty$ of $B_{\mathcal R}$. So, what remains to demonstrate is that the set $U:=\{\bar x \in \ell_\infty\colon |||\bar x||| \leq 1\}$ is equal to the weak$^*$-closure in $\ell_\infty$ of $B_{\mathcal R}$. Recall that on bounded sets of $\ell_\infty$, the weak$^*$-topology is metrizable (so we can use the sequential language), and the weak$^*$-convergence is just the coordinate-wise one. Let us demonstrate first that $U$ is weak$^*$-closed, so $U$ will contain the weak$^*$-closure of $B_{\mathcal{R}}$, that is, $B_{\mathcal R^{**}}$. Indeed, consider a sequence $(\bar z_m)_{m\in \N}$ in $U$ with $w^*-\lim_m \bar z_m = \bar z \in \ell_\infty$. Since all the maps $\bar x \longmapsto \langle \bar x, v_n \rangle $ are weak$^*$-continuous on $\ell_\infty$, we have that $\lim_m \langle \bar z_m, v_n \rangle= \langle \bar z, v_n \rangle$ for all $n \in \N$.
Passing to a subsequence, we may (and do) assume that there exists $\lim_m \|\bar z_m\|_\infty$ which satisfies that $\lim_m \|\bar z_m\|_\infty \geq \|\bar z\|_\infty$. Now, \begin{align*} |||\bar z||| &= \|\bar z\|_\infty + \sum_{n \in \N}r_n \left|\langle \bar z, v_n \rangle \right| \leq \lim_m\|\bar z_m\|_\infty + \sum_{n \in \N}r_n \lim_m \left|\langle \bar z_m, v_n \rangle\right| \intertext{and using the version for series of Lebesgue's dominated convergence theorem} &= \lim_m\left(\|\bar z_m\|_\infty + \sum_{n \in \N}r_n \left|\langle \bar z_m, v_n \rangle\right|\right) \leq 1, \end{align*} which demonstrates the desired weak$^*$-closedness of $U$. Let us show that $B_{\mathcal R}$ is weak$^*$-dense in $U$. Indeed, for every $\bar x = (x_1, x_2, \ldots) \in U \subset \ell_\infty$ and every $m\in \N$, denote $S_m\bar x = \sum_{k=1}^m x_ke_k \in c_0$. Then, $w^*-\lim_m S_m \bar x = \bar x$ and, consequently, $\lim_m \langle S_m\bar x, v_n \rangle= \langle \bar x, v_n \rangle$ for all $n \in \N$. Since $\lim_m \|S_m \bar x\|_\infty = \|\bar x\|_\infty$, another application of Lebesgue's dominated convergence theorem gives us $$ |||\bar x||| = \lim_m |||S_m\bar x|||. $$ Now, consider $x_m = \frac{|||\bar x|||}{|||S_m\bar x|||}S_m\bar x$ for $m\in \N$. We have that $x_m \in c_0$, $ |||x_m||| = |||\bar x||| \leq 1$, so $x_m \in B_{\mathcal R}$ for every $m\in \N$. At the same time, $w^*-\lim_m x_m = w^*-\lim_m S_m \bar x = \bar x$. This completes the demonstration of the weak$^*$-density of $B_{\mathcal R}$ in $U$, and thus the proof of the proposition. \end{proof} From the proof above, we may extract the following property of Read's norm which we will use later and which is a consequence of Lebesgue's dominated convergence theorem for series. \begin{remark}\label{remark-Readnorm} Let $(\bar z_m)_{m\in \N}$ be a sequence in $\ell_\infty$ which weakly$^*$-converges to $\bar z\in \ell_\infty$.
Then, there exists $\lim_m |||\bar z_m|||$ if and only if there exists $\lim_m \|\bar z_m\|_\infty$. Besides, $\lim_m |||\bar z_m|||=|||\bar z|||$ if and only if $\lim_m \|\bar z_m\|_\infty=\|\bar z\|_\infty$. \end{remark} Let us note that Proposition \ref{prop-bidualnorm} is a particular case of the following general result which is a consequence of the ``principle of local reflexivity for operators'' \cite{Behrends} and which has been suggested to us by the referee. \begin{prop}\label{prop-bidual-abstract-version} Let $X$, $Y$ be Banach spaces and let $R:X\longrightarrow Y$ be a bounded linear operator. Define an (equivalent) norm on $X$ by $$ |||x|||=\|x\|_{X} + \|Rx\|_{Y} $$ for every $x\in X$. Then the norm of $(X,|||\cdot|||)^{**}$ is given by the formula $$ |||x^{**}|||=\|x^{**}\|_{X^{**}} + \|R^{**}x^{**}\|_{Y^{**}} $$ for every $x^{**}\in X^{**}$ ($R^{**}$ denotes the biconjugate operator of $R$). \end{prop} \begin{proof} Write \begin{align*} B & =\{x\in X\colon \|x\|_X + \|Rx\|_Y\leq 1\}, \\ A & =\{x^{**}\in X^{**}\colon \|x^{**}\|_{X^{**}} + \|R^{**}x^{**}\|_{Y^{**}}\leq 1\}, \\ A_0 & =\{x^{**}\in X^{**}\colon \|x^{**}\|_{X^{**}} + \|R^{**}x^{**}\|_{Y^{**}}< 1\}. \end{align*} We have to prove that $A$ coincides with the weak$^*$-closure of $B$. It is clear that $A$ contains the weak$^*$-closure of $B$ since it contains $B$ and it is weak$^*$-closed by the weak$^*$ lower semicontinuity of the norm. Besides, it is enough to prove that the weak$^*$-closure of $B$ contains $A_0$. Therefore, we fix $x_0^{**}\in A_0$, we write $\delta=1-(\|x_0^{**}\|_{X^{**}} + \|R^{**}x_0^{**}\|_{Y^{**}})>0$, and we fix a weak$^*$ neighborhood $U$ of $x_0^{**}$ which we may suppose is of the form $$ U=\{z^{**}\in X^{**}\colon |\langle f, z^{**}-x_0^{**}\rangle|<\gamma \ \forall f\in D\} $$ for a suitable finite set $D\subset S_{X^*}$ and $\gamma>0$.
We can now apply the principle of local reflexivity for operators \cite[Theorem 5.2]{Behrends} to get a point $x_0\in X$ satisfying that $$ \|x_0\|_{X}\leq \|x_0^{**}\|_{X^{**}}+\delta/2, \quad \|Rx_0\|_{Y}\leq \|R^{**}x_0^{**}\|_{Y^{**}}+\delta/2,\quad \text{and} \quad \langle x_0,f\rangle=\langle f,x_0^{**}\rangle\ \ \forall f\in D. $$ We clearly have that $x_0\in B\cap U$, finishing the proof. \end{proof} We are now ready to present the strict convexity of the bidual of $\mathcal{R}$. Recall that a Banach space $X$ is said to be \emph{strictly convex} if $S_X$ does not contain any non-trivial segment or, equivalently, if $\|x+y\|<2$ whenever $x,y\in B_X$, $x\neq y$. \begin{theorem} \label{theor-str-convR**} The bidual space $\mathcal R^{**}$ of Read's space is strictly convex. \end{theorem} \begin{proof} Let $\bar x, \bar y \in S_{\mathcal R^{**}}$, $\bar x \neq \bar y$. Our goal is to demonstrate that $|||\bar x+\bar y||| < 2$. Let us use the fact that the sequence $(v_n)_{n\in \N}$ from the formula \eqref{eq-norm2} is dense in $S_{\ell_1}$. This implies the existence of $k \in \N$ such that the values of $ \langle \bar x, v_k \rangle$ and $ \langle \bar y, v_k \rangle$ are non-zero and of opposite signs. Then, $$ \left|\langle \bar x + \bar y, v_k \rangle \right| < \left|\langle \bar x, v_k \rangle\right| + \left|\langle \bar y, v_k \rangle\right|. $$ On the other hand, the triangle inequality says that $$ \|\bar x+ \bar y\|_\infty \leq \|\bar x\|_\infty + \|\bar y\|_\infty \qquad \text{and} \qquad \left|\langle \bar x + \bar y, v_n \rangle \right| \leq \left|\langle \bar x, v_n \rangle\right| + \left|\langle \bar y, v_n \rangle\right| $$ for all $n \in \N\setminus \{k\}$.
Combining all these inequalities with the formula \eqref{eq-norm2}, we obtain the desired estimate: \begin{align*} |||\bar x + \bar y||| &= \|\bar x + \bar y\|_\infty + \sum_{n \in \N}r_n\left|\langle \bar x + \bar y, v_n \rangle\right| \\ & < \|\bar x\|_\infty + \sum_{n \in \N}r_n \left|\langle \bar x, v_n \rangle\right| + \|\bar y\|_\infty + \sum_{n \in \N}r_n \left|\langle \bar y, v_n \rangle\right| = 2.\qedhere \end{align*} \end{proof} As an immediate consequence, $\mathcal{R}^*$ is \emph{smooth}, i.e.\ its norm is G\^{a}teaux differentiable at every non-zero element (see \cite[Fact 8.12]{FHHMPZ}, for instance). \begin{corollary}\label{corollary-dual-smooth} The dual space $\mathcal{R}^*$ of Read's space is smooth. \end{corollary} However, the norm of $\mathcal{R}^*$ cannot be Fr\'{e}chet differentiable at any point, as we may show that it is rough. A norm of a Banach space is said to be $\eps$-\emph{rough} ($\eps>0$) if $$ \limsup_{\|h\|\to 0}\frac{\|x+h\|+\|x-h\|-2\|x\|}{\|h\|}\geq \eps $$ for every $x\in X$. The norm is said to be \emph{rough} if it is $\eps$-rough for some $\eps\in (0,2]$. We refer the reader to the classical book \cite{D-G-Z} on smoothness and renorming for more information and background. Clearly, a rough norm is not Fr\'{e}chet differentiable at any point. \begin{theorem}\label{thm-rough} The norm of the dual space $\mathcal{R}^*$ of Read's space is $2/3$-rough. \end{theorem} We need a well-known formula for the supremum norm of $c_0$. It follows from the fact that $c_0$ has a property related to $M$-ideals called $(m_\infty)$, which was deeply studied by N.~Kalton and D.~Werner in \cite{KaltonWerner}. We include a simple proof of a slightly more general result here for the sake of completeness. \begin{lemma}\label{conorm} Assume that $(u_m)_{m\in \N}$ is a weakly$^*$-null sequence in $\ell_\infty$ such that the limit $\lim_m\Vert u_m\Vert_{\infty}$ exists.
Then, for every $u\in c_0$ $$ \lim_m \Vert u+u_m\Vert_{\infty} =\max\bigl\{\Vert u\Vert_{\infty},\ \lim_{m}\Vert u_m\Vert_{\infty}\bigr\}. $$ \end{lemma} \begin{proof} Suppose first that $u$ has finite support. Let $n\in\N$ be an integer such that all the non-zero coordinates of $u$ have index at most $n$, and denote by $P$ the projection of $\ell_\infty$ onto the first $n$ coordinates. Then, $(Pu_m)_{m\in\N}$ converges coordinate-wise to zero, but there are only $n$ non-null coordinates, so actually $(Pu_m)_{m\in \N}$ converges to zero in norm. With that in mind, we get that \begin{align*} \lim_m\|u + u_m\|_\infty & = \lim_m\|u + (u_m - Pu_m)\|_\infty = \lim_m \max\bigl\{\|u\|_\infty,\, \|u_m - Pu_m\|_\infty\bigr\} \\ &= \max\bigl\{\|u\|_\infty,\, \lim_m\|u_m\|_\infty\bigr\}, \end{align*} where in the second equality we have used the disjointness of the supports of $u$ and $u_m - Pu_m$. Now, let us define $f_m:c_0\longrightarrow \R$ by $f_m(u)=\|u+u_m\|_\infty$ for every $u\in c_0$ and every $m\in \N$. Then, all the functions $f_m$ are $1$-Lipschitz and the sequence $(f_m)_{m\in \N}$ converges pointwise on the dense set $c_{00}$ to a function which is a fortiori $1$-Lipschitz. It is now routine, using again that the Lipschitz constants of the $f_m$'s are uniformly bounded, to prove that $(f_m)_{m\in \N}$ converges pointwise on the whole $c_0$ to the unique extension of the limit on $c_{00}$. \end{proof} We are now ready to prove that the norm of $\mathcal{R}^*$ is rough. \begin{proof}[Proof of Theorem~\ref{thm-rough}] Fix $x_0^*\in S_{\mathcal{R}^*}$ and $\lambda\in (0,1)$. Given $\delta\in (0,1/3)$, we take $x_0\in B_{\mathcal{R}}$ such that $$ x_0^*(x_0)>1-3\lambda\delta \qquad \text{and} \qquad |||x_0|||<1-\lambda\delta. $$ Write $\rho=1/3-\lambda\delta$.
We have that $\|x_0\|_\infty\geq \frac13 |||x_0|||\geq \rho$ (use \eqref{eq-equivalent}) and that the sequence $(\rho\, e_m)_{m\in \N}$ is weakly-null, so Lemma \ref{conorm} gives us that $$ \lim_m \|x_0 \pm \rho\, e_m\|_\infty = \max\bigl\{\|x_0\|_\infty,\ \lim_m \|\rho\, e_m\|_\infty\bigr\}=\|x_0\|_\infty. $$ It then follows from Remark~\ref{remark-Readnorm} that $$ \lim_m |||x_0 \pm \rho\, e_m||| = |||x_0||| < 1-\lambda\delta. $$ With all of these in mind, we may find $N\in \N$ such that \begin{equation}\label{eq:rough1} |||x_0 \pm \rho\, e_N|||\leq 1 \quad \text{and} \quad x_0^*(x_0\pm \rho \, e_N)>1-3\lambda\delta. \end{equation} Finally, take $y^*\in S_{\mathcal{R}^*}$ such that $$ y^*(e_N)=|||e_N|||\geq 1. $$ We have that \begin{align*} |||x_0^* + \lambda y^*||| + |||x_0^*-\lambda y^*||| &\geq \langle x_0 + \rho\, e_N, x_0^*+\lambda y^*\rangle + \langle x_0 - \rho\, e_N, x_0^*-\lambda y^*\rangle \\ & > 2 - 6\lambda\delta + 2\lambda\rho = 2 +\lambda\left(\frac{2}{3}-(6+2\lambda)\delta \right). \end{align*} Summarizing, we have proved that for every $x^*\in S_{\mathcal{R}^*}$ and every $\lambda\in (0,1)$, $$ \sup_{z^*\in \mathcal{R}^*,\,|||z^*|||=\lambda}\frac{|||x^*+z^*||| + |||x^*-z^*|||-2}{\lambda} \geq \frac{2}{3}. $$ This gives the $2/3$-roughness of $\mathcal{R}^*$, as desired. \end{proof} Observe that, following the above proof up to \eqref{eq:rough1}, we get that every slice of the unit ball of $\mathcal{R}$ has diameter greater than or equal to $2/3$. Actually, the $\eps$-roughness of the norm of the dual $X^*$ of a Banach space $X$ is equivalent to the fact that all slices of the unit ball of $X$ have diameter greater than or equal to $\eps$ \cite[Proposition~I.1.11]{D-G-Z}, and the second part of our proof is based on the proof of the result above. Let us also note that the first part of the proof of Theorem \ref{thm-rough} can be easily adapted to get that all weakly open subsets of $B_{\mathcal{R}}$ have diameter greater than or equal to $2/3$.
\begin{corollary}\label{Corollary-big-weak-open-subsets} Every weakly open subset of the unit ball of Read's space $\mathcal{R}$ has diameter greater than or equal to $2/3$. \end{corollary} We now study convexity properties of Read's space itself. It follows from Theorem \ref{theor-str-convR**} that $\mathcal{R}$ is strictly convex (as it is a subspace of $\mathcal{R}^{**}$), but we may actually prove that it is weakly locally uniformly rotund. Recall that a Banach space $X$ is \emph{weakly locally uniformly rotund} (\emph{WLUR} in short) if for every $x\in S_X$ and every sequence $(x_n)_{n\in \N}$ in $S_X$, if $\|x+x_n\|\longrightarrow 2$, then $x=w-\lim_n x_n$. If one actually gets that $x=\lim_n x_n$ in norm, we say that the space $X$ is \emph{locally uniformly rotund} (\emph{LUR} in short). It is clear that LUR implies WLUR and that WLUR implies strict convexity, while the converse implications are false. \begin{theorem} \label{Theo-Read-WLUR} Read's space $\mathcal R$ is weakly locally uniformly rotund. \end{theorem} \begin{proof} Let $x, y_m \in S_{\mathcal R}$ with $|||x + y_m||| \longrightarrow 2$. Observe that it is enough to show that there is a subsequence of $(y_m)_{m\in \N}$ which weakly converges to $x$. Passing to a subsequence, we may assume the existence of $w^*-\lim_m y_m = \bar y \in B_{\mathcal R^{**}}$. Then, using Remark~\ref{remark-Readnorm}, \begin{equation} \label{eq00000} \begin{split} 2 = \lim_m |||x+y_m||| &= \lim_m \|x + y_m\|_\infty + \sum_{n \in \N}r_n \left|\langle x + \bar y, v_n \rangle \right| \\ & \leq \|x \|_\infty + \lim_m \|y_m\|_\infty + \sum_{n \in \N}r_n \left|\langle x, v_n \rangle \right|+ \sum_{n \in \N}r_n \left|\langle \bar y, v_n \rangle \right| \\ &= |||x||| + \lim_m |||y_m||| = 2.
\end{split} \end{equation} This chain of inequalities implies that \begin{equation} \label{eq-theo2-2} \lim_m \|x + y_m\|_\infty = \|x \|_\infty + \lim_m \|y_m\|_\infty \end{equation} and \begin{equation} \label{eq-theo2-3} \left|\langle x + \bar y, v_n \rangle \right| = \left|\langle x, v_n \rangle \right|+ \left|\langle \bar y, v_n \rangle \right| \quad \textrm{ for all \,} n \in \N. \end{equation} As in the proof of Theorem \ref{theor-str-convR**}, if $\bar y$ is not of the form $ax$ with $a \geq 0$, there exists $k \in \N$ such that the values of $ \langle x, v_k \rangle$ and $ \langle \bar y, v_k \rangle$ are non-zero and of opposite signs. Then $ \left|\langle x + \bar y, v_k \rangle \right| < \left|\langle x, v_k \rangle\right| + \left|\langle \bar y, v_k \rangle\right|$, which contradicts \eqref{eq-theo2-3}. So, $\bar y = ax \in \mathcal R$ for some $a \in [0, +\infty)$. Since $1 = \lim_m |||y_m||| \geq |||ax||| = a$, we deduce that $a \in [0, 1]$. Passing again to a subsequence, we may assume that there exists $ \lim_m \|y_m - ax\|_\infty$. From Lemma \ref{conorm} we obtain that \begin{equation} \label{eq-theo2-3+} \lim_m \|y_m\|_\infty = \max\left\{a\|x\|_\infty,\ \lim_m \|y_m - ax\|_\infty\right\}, \end{equation} and $$ \lim_m \|x + y_m\|_\infty = \max\left\{(1+a)\|x\|_\infty,\ \lim_m \|y_m - ax\|_\infty\right\}. $$ Combining this with \eqref{eq-theo2-2}, we obtain that $$ \|x \|_\infty + \max\left\{a\|x\|_\infty,\ \lim_m \|y_m - ax\|_\infty\right\} = \max\left\{(1+a)\|x\|_\infty,\ \lim_m \|y_m - ax\|_\infty\right\}, $$ which implies that $$ \lim_m \|y_m - ax\|_\infty \leq a\|x\|_\infty, $$ so, by \eqref{eq-theo2-3+}, $$ \lim_m \|y_m\|_\infty =a\|x\|_\infty. 
$$ Taking into account that all the inequalities in \eqref{eq00000} are, in fact, equalities, and substituting $\bar y = ax$, we get $$ 2 = \|x \|_\infty + \lim_m \|y_m\|_\infty+ \sum_{n \in \N}r_n \left|\langle x, v_n \rangle \right|+ \sum_{n \in \N}r_n \left|\langle ax, v_n \rangle \right| = (1 + a)|||x||| = 1+a. $$ So $a = 1$ and $w-\lim_m(y_m - x) = ax - x = 0$. \end{proof} The above result cannot be improved to get that $\mathcal{R}$ is LUR. This is so because it follows easily from the definition that the unit ball of a LUR space contains slices of arbitrarily small diameter, and this contradicts Corollary \ref{Corollary-big-weak-open-subsets}; alternatively, because the norm of the dual space of a LUR space is Fr\'{e}chet differentiable at every norm-attaining functional by \v{S}mulyan's criterion (see \cite[Theorem I.1.4]{D-G-Z}, for instance). \begin{remark} Read's space $\mathcal R$ is not locally uniformly rotund. \end{remark} Observe that $\mathcal R$ is not smooth (this follows from the formula for the directional derivative of the norm $|||\cdot|||$ given in \cite[Lemma 2.5]{Read}). Nevertheless, one can modify $\mathcal R$ in such a way that the modified space $\widetilde{\mathcal R}$ is simultaneously strictly convex and smooth (actually, its dual is also smooth and strictly convex), but the set of norm-attaining functionals remains the same. This is a consequence of the following argument, which appears in the proof of \cite[Theorem 9.(4)]{DebsGodSR} by G.~Debs, G.~Godefroy, and J.\ Saint Raymond, and which we include here for the sake of completeness. \begin{lemma} \label{lemma-smooth-renorming} Let $X$ be a separable Banach space, let $\{x_n\colon n\in \N\}$ be a dense subset of $B_X$, and consider the bounded linear operator $T: \ell_2 \longrightarrow X$ defined by $T\bigl((a_n)_{n\in \N}\bigr)=\sum\limits_{n=1}^{+\infty} \frac{a_n}{2^n} x_n$ for every $(a_n)_{n\in \N}\in \ell_2$.
Consider the equivalent norm on $X$, denoted by $\|\cdot\|_{s}$, for which the set $V = B_X + T(B_{\ell_2})$ is its unit ball. Then, $(X,\|\cdot\|_{s})^*$ is strictly convex and so, $(X,\|\cdot\|_{s})$ is smooth, and $\NA(X) = \NA(X,\|\cdot\|_{s})$. Moreover, if $X$ is strictly convex, then $(X,\|\cdot\|_{s})$ is also strictly convex; if $X^*$ is smooth, then $(X,\|\cdot\|_{s})^*$ is also smooth. \end{lemma} \begin{proof} The set $V = B_X + T(B_{\ell_2})$ is bounded, balanced and solid. Its closedness follows from the compactness of $T(B_{\ell_2})$. This explains the definition of $\|\cdot\|_{s}$. A functional $f \in X^*$ attains its maximum on $V$ if and only if it attains its maximum both on $B_X$ and on $T(B_{\ell_2})$, but every functional attains its maximum on $T(B_{\ell_2})$, so $\NA(X) = \NA(X,\|\cdot\|_{s})$. It is easy to show that for every $f\in X^*$, $\|f\|_{s}=\|f\|+\|T^*(f)\|_2$. Since $T^*$ is one-to-one (as $T$ has dense range) and $\|\cdot\|_2$ is strictly convex, it follows that $(X,\|\cdot\|_{s})^*$ is strictly convex and so $(X,\|\cdot\|_{s})$ is smooth. Finally, if $X$ is strictly convex, so is $(X,\|\cdot\|_{s})$, as a functional $f \in X^*$ cannot attain its maximum at two different points of $V$; if $X^*$ is smooth, then so is $(X,\|\cdot\|_{s})^*$, as its norm is the sum of two smooth norms. \end{proof} We are now ready to present a smooth version of Read's space. \begin{example} {\slshape Consider $\widetilde{\mathcal{R}}$ to be the renorming of Read's space $\mathcal{R}$ given by the procedure of the above lemma. Then, $\widetilde{\mathcal{R}}^*$ is strictly convex and smooth, so $\widetilde{\mathcal{R}}$ is also strictly convex and smooth, and solves negatively both problems (S) and (G).
That is, $\widetilde{\mathcal{R}}$ does not contain proximinal subspaces of finite codimension greater than or equal to 2 and $\NA(\widetilde{\mathcal{R}})$ does not contain two-dimensional linear subspaces.} Indeed, $\widetilde{\mathcal{R}}$ and $\widetilde{\mathcal{R}}^*$ are smooth, and $\NA(\widetilde{\mathcal R})= \NA(\mathcal R)$, so $\NA(\widetilde{\mathcal R})$ does not contain linear subspaces of dimension greater than or equal to 2. But then it is easy to show that this implies that $\widetilde{\mathcal{R}}$ does not contain proximinal subspaces of finite codimension greater than or equal to 2 (see \cite[Proposition III.4]{Godefroy}, for instance). \end{example} \vspace*{1cm} \noindent \textbf{Acknowledgment:\ } The authors are grateful to the anonymous referee for helpful suggestions which improved the final version of the paper. In particular, Proposition \ref{prop-bidual-abstract-version} and the possibility that Theorem \ref{thm-rough} could be true were suggested by the referee. \newpage
\section{Introduction} \label{sec1} Since the pioneering discoveries of quasicrystals with icosahedral,\cite{SBGC} dodecagonal,\cite{INF} decagonal,\cite{Ben} and octagonal\cite{WCK} symmetry, electronic transport phenomena are arguably among the most celebrated and intriguing physical properties of these intermetallic alloys.\cite{Poo,Ber,CR} For instance, the electric conductivity of icosahedral quasicrystals decreases strongly with decreasing temperature and with improving structural quality of the sample, and anomalous transport behaviour is also observed in other quantities such as thermopower or magnetoconductance. Stimulated by the experimental results, much effort has been devoted to a better theoretical understanding of the transport phenomena in quasicrystalline materials.\cite{CR,SGD,1D,BYP,TM,CC,2D,RGT,JR,HJ,CG,SD,HHTM,BCV,IG,FP,RKST} This is also of interest from the theoretical or mathematical point of view, because quasicrystals as ordered aperiodic structures are intermediate between periodically ordered crystals and short-range ordered amorphous solids.
In particular, the anomalous diffusion of wave packets in quasiperiodic systems has attracted wide interest.\cite{RGT,JR,HJ,CG,SD,HHTM,BCV,IG,FP,RKST} Multifractal eigenstates --- neither extended over the system, nor exponentially localized --- exist at the metal-insulator transition of the Anderson model of localization.\cite{MHT,ARM} In tight-binding models of quasicrystals, eigenstates of this kind have also been revealed.\cite{1D,BYP,TM,CC,RGS} Generally, the energy spectra of one-dimensional (1D) quasicrystals are singular continuous.\cite{1D} However, in higher-dimensional cases, the energy spectra can be band-like with finite measure, fractal-like with zero band width, or a mixture of partly band-like and partly fractal-like character.\cite{CC,2D} The diffusion properties of quasicrystals are associated with the complex eigenstates and energy spectra stated above.\cite{IG,FP,RKST} To describe the diffusion of a wave packet initially localized at some site $n_{0}$, one usually discusses the temporal autocorrelation function\cite{RGT,JR,HJ,CG,SD,TT,BL} \begin{equation} C(t) = \frac{1}{t}\int\limits_{0}^{t}|\Psi_{n_{0}}(t^{\prime})|^{2}\, dt^{\prime} \end{equation} or the mean square displacement\cite{SD,HHTM,BCV,TT} \begin{equation} d(t) = \left(\sum\limits_{n}|{\bf r}_{n}- {\bf r}_{n_{0}}|^{2}\, |\Psi_{n}(t)|^{2}\right)^{1/2} \end{equation} where $\Psi_{n}(t)$ is the amplitude of the wavefunction at time $t$ at the $n$th site which is located at the position ${\bf r}_{n}$ in space. By definition, $C(t)$ is the time-averaged probability of a wave packet staying at the initial site up to time $t$, and $d(t)$ determines the spreading of the width of a wave packet. Generally, one finds $C(t)\sim t^{-\delta}$ and $d(t)\sim t^{\beta}$ with $0<\delta <1$ and $0<\beta<1$ for 1D quasiperiodic systems.\cite{RGT,JR,CG,SD,HHTM} For higher-dimensional cases, no general results are available.
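Both quantities are straightforward to evaluate numerically for a finite tight-binding system from the eigendecomposition of the Hamiltonian, since $\Psi(t)=\mathrm{e}^{-\mathrm{i}Ht}\Psi(0)$. The following sketch (helper names are our own; a uniform chain serves merely as a placeholder Hamiltonian, not the models studied in this paper) computes $|\Psi_n(t)|^2$ and from it the mean square displacement $d(t)$ and a discrete-time approximation of $C(t)$:

```python
import numpy as np

def site_probabilities(H, n0, times):
    """|Psi_n(t)|^2 for a wave packet started at site n0, via Psi(t) = exp(-iHt) Psi(0)."""
    E, V = np.linalg.eigh(H)                 # H = V diag(E) V^T
    c0 = V[n0, :]                            # expansion coefficients of the initial state
    return np.array([np.abs(V @ (np.exp(-1j * E * t) * c0)) ** 2 for t in times])

N, n0 = 201, 100                             # placeholder: uniform chain, all t_k = 1
H = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
times = np.linspace(0.1, 40.0, 200)
P = site_probabilities(H, n0, times)

# mean square displacement d(t), with sites at integer positions r_n = n
d = np.sqrt(P @ (np.arange(N) - n0) ** 2)
# discrete-time approximation of the autocorrelation function C(t)
C = np.cumsum(P[:, n0]) / np.arange(1, len(times) + 1)
```

For the uniform chain this reproduces ballistic spreading, $d(t)\propto t$; replacing the constant hopping sequence by a quasiperiodic one yields the anomalous exponents discussed below.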
Zhong {\it et al.}\/ \cite{JR} observed a transition of $C(t)$ with the increase of the quasiperiodic modulation strength in simple higher-dimensional Fibonacci lattices. For small patches of the octagonal tiling, Passaro {\it et al.}\/ \cite{BCV} found $d(t)\sim t^\beta$ with $0<\beta<1$ even for the case of a band-like spectrum. However, one of us obtained $C(t)\sim t^{-1}$ for this case after analyzing the long-time behaviour.\cite{HJ} In fact, it is quite difficult to derive the exact long-time behaviour of $C(t)$ and $d(t)$ from the investigation of rather small systems. Therefore, a study of a large higher-dimensional quasiperiodic system will be significant. In this paper, we will mainly discuss the diffusion properties on a 2D quasiperiodic tiling related to the octagonal quasicrystals. The tiling is based on the octonacci chain and thus permits us to study large systems. Recent investigations show that the diffusion properties are connected with the multifractality of eigenstates and energy spectra.\cite{RGT,JR,HJ,CG,SD,HHTM,BCV,IG,FP,RKST,TT,BL,zhong} It can be rigorously proven that the exponent $\delta$ ruling the decay of the autocorrelation function $C(t)$ equals the correlation dimension $D_{2}$ of the local spectral measure associated with the initial site.\cite{RGT,JR} In 1D quasiperiodic systems, Guarneri\cite{IG} analytically deduced that $\beta \geq D_{1}$, where $D_{1}$ is the information dimension of the spectral measure. More recently, Ketzmerick {\it et al.}\/ \cite{RKST} argued that $\beta$ is also related to the multifractal properties of the eigenstates. We shall address the question whether these relations exist in different quasiperiodic systems, especially in higher-dimensional cases. This paper is organized as follows. In the next section, we describe the construction of the labyrinth tiling and its properties that are relevant for our analysis. 
Afterwards, in Sec.~\ref{sec2b}, we consider a tight-binding model on the labyrinth tiling and express the eigenstates and eigenvalues in terms of eigenstates and eigenvalues of a tight-binding Hamiltonian on the octonacci chain. In Sec.~\ref{sec3} we show the energy spectra and multifractal eigenstates for both these systems. Sec.~\ref{sec4} describes the diffusion properties of the octonacci chain. The diffusion properties of the labyrinth tiling will be emphasized in Sec.~\ref{sec5}. In Sec.~\ref{sec6} we discuss the fractal dimensions of eigenstates and eigenspectra and their relation to the diffusion properties. Finally, we conclude in Sec.~\ref{sec7}. \section{The labyrinth tiling} \label{sec2a} The labyrinth tiling\cite{SMS,CC} can be considered as a subset of the octagonal quasiperiodic tiling,\cite{octagonal} and vice versa.\cite{CC} One can build it directly from the octonacci chain. In order to construct the labyrinth tiling, we introduce the octonacci sequence which can be produced by iterating the inflation rule \begin{equation} \varrho:\;\begin{array}{lcl} S & \rightarrow & L \\ L & \rightarrow & LSL \end{array} \label{eq:subst} \end{equation} on the initial word $w_{0}=S$. The number of letters $g_{m}$ in the $m$th iterate $w_{m}=\varrho^{m}(w_{0})$ satisfies the recursion \begin{equation} g_{m}=2g_{m-1}+g_{m-2},\qquad g_{0}=g_{1}=1. \label{eq:gm} \end{equation} The numbers of letters $L$ and $S$ in $w_{m}$ are given by $f_{m}$ and $f_{m-1}$, respectively, which fulfill the same recursion relation with a different initial condition \begin{equation} f_{m}=2f_{m-1}+f_{m-2},\qquad f_{0}=0,\quad f_{1}=1, \label{eq:fm} \end{equation} such that $g_{m}=f_{m}+f_{m-1}$. 
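The substitution rule and the recursions for $g_m$ and $f_m$ are easily checked by direct iteration; a small sketch (function and variable names are our own):

```python
def octonacci(m):
    # iterate the inflation rule S -> L, L -> LSL on the initial word w_0 = S
    w = "S"
    for _ in range(m):
        w = "".join({"S": "L", "L": "LSL"}[c] for c in w)
    return w

g = [1, 1]                      # g_0 = g_1 = 1
f = [0, 1]                      # f_0 = 0, f_1 = 1
for m in range(2, 10):
    g.append(2 * g[-1] + g[-2])
    f.append(2 * f[-1] + f[-2])

for m in range(1, 10):
    w = octonacci(m)
    assert len(w) == g[m] == f[m] + f[m - 1]   # letter counts of w_m
    assert w.count("L") == f[m]
    assert w == w[::-1]                        # all w_m are palindromes

print(g[9] / g[8])              # -> 2.4142..., approaching the silver mean 1 + sqrt(2)
```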
Their ratio in the limit sequence $w_{\infty}$ \begin{equation} \lim_{m\rightarrow \infty} \frac{f_{m}}{f_{m-1}}= \lim_{m\rightarrow \infty} \frac{g_{m}}{g_{m-1}}=\lambda \end{equation} is given by the silver mean $\lambda=1+\sqrt{2}$ which is a root of the quadratic equation $\lambda^{2}=2\lambda+1$. As can be seen from Eq.~(\ref{eq:gm}), $g_m$ is odd for all $m$. Associating with the letters $S$ and $L$ intervals of length $1$ and $\lambda$, respectively, one obtains a linear chain ${\cal C}_{m}$ of $N_{m}=g_{m}+1$ sites, which is known as the octonacci or silver mean chain.\cite{CC} We note that all words $w_{m}$ obtained from the substitution rule (\ref{eq:subst}) are palindromes, thus the resulting chains are symmetric under reflection. The labyrinth tiling can be constructed from the Euclidean product ${\cal C}_{m}\times {\cal C}_{m}$ of two such chains. This product is a rectangular grid, thus its vertices can be classified into {\em even}\/ and {\em odd}\/ vertices according to whether they can be reached from the origin by an {\em even}\/ or {\em odd}\/ number of steps along the bonds, respectively. This is completely analogous to the even and the odd sublattice of the square lattice. Connecting the {\em even}\/ vertices by diagonal bonds, we obtain a finite approximant ${\cal L}_{m}$ of the labyrinth tiling ${\cal L}$. The {\em odd}\/ vertices, when connected by diagonal bonds, form another labyrinth tiling ${\cal L}_{m}^{\ast}$ that is dual to ${\cal L}_{m}$. We note that, due to the palindromicity of $w_{m}$, ${\cal L}_{m}$ and ${\cal L}_{m}^{\ast}$ just differ by a $90$ degree rotation.\cite{BGB} The finite labyrinth tiling ${\cal L}_m$ consists of $N_{m}^{2}/2$ atoms. An example is shown in Fig.~\ref{fig:Fig1}. By construction, the labyrinth tiling is symmetric with respect to reflection at the main diagonal.
Taking this diagonal as the $x$ axis and the direction orthogonal to it as the $y$ axis, the coordinates of the vertices of the labyrinth tiling, labeled by $k,l\in \mathbb{Z}$, can be written as\cite{CC} \begin{mathletters} \begin{eqnarray} x_{k,l} & = & u_{k}+u_{l}\\ y_{k,l} & = & u_{k}-u_{l} \end{eqnarray} \end{mathletters}% where the coordinates with {\em even}\/ values of $k+l$ belong to ${\cal L}$, those with {\em odd}\/ values of $k+l$ to ${\cal L}^{\ast}$. Here, \begin{equation} u_{k} = k/\sqrt{2}+\left[k/\sqrt{2}\right] \end{equation} where $\left[x\right]$ denotes the integer closest to $x$. It is easy to see that the sequence of long and short lengths given by $u_{k}$ again follows the octonacci sequence, but now the two intervals have lengths $(\lambda\pm1)/2$ which again have ratio $(\lambda+1)/(\lambda-1)=\lambda$. Thus, the diagonal of the labyrinth $x_{k,k}=2u_{k}$ is just a $\sqrt{2}$-scaled version of the original octonacci sequence. \section{Tight-binding model} \label{sec2b} The energy spectra for tight-binding Hamiltonians on the labyrinth tiling were investigated by Sire.\cite{CC} For properly chosen Hamiltonians, the analysis reduces to the one-dimensional case, and the energy spectrum can be derived directly from those of the corresponding Hamiltonian on the octonacci chain. However, the properties of eigenstates, which also factorize into the product of two eigenstates of the octonacci chain, were not discussed in Ref.~\onlinecite{CC}. Here, we follow the same route to study the eigenvalues and eigenstates. 
Consider two identical octonacci chains in the framework of a tight-binding model with zero on-site potentials \begin{mathletters} \begin{eqnarray} H^{(1)}\psi_{k}^{(1,i)} & = & t_{k}^{}\psi_{k-1}^{(1,i)}+t_{k+1}^{}\psi_{k+1}^{(1,i)} = E^{(1,i)}\psi_{k}^{(1,i)}\label{eq:toc1}\\ H^{(2)}\psi_{l}^{(2,j)} & = & t_{l}^{}\psi_{l-1}^{(2,j)}+t_{l+1}^{}\psi_{l+1}^{(2,j)} = E^{(2,j)}_{}\psi_{l}^{(2,j)} \label{eq:toc2} \end{eqnarray} \end{mathletters}% where superscripts $(1)$ and $(2)$ label the two chains and the indices $i$ and $j$ enumerate the eigenfunctions $\psi$ and eigenvalues $E$ of the two octonacci chains. Throughout the paper, we employ free boundary conditions, which formally corresponds to setting $\psi_{0}=\psi_{N_{m}+1}=0$. The hopping parameters $t_{k}$ and $t_{l}$ take values according to the octonacci sequence. We associate $t_{k,l}=1$ to a {\em long}\/ bond of length $\lambda$ and $t_{k,l}=v$ to a {\em short}\/ bond of length $1$, respectively. The eigenvalues of the octonacci chain are symmetric with respect to $E=0$; if $\psi$ is an eigenstate of $H$ with eigenvalue $E$, then the state $\tilde{\psi}$ with amplitudes \begin{equation} \tilde{\psi}_{k} = (-1)^{k}\psi_{k} \label{eq:tilde} \end{equation} is again an eigenstate, but has an eigenvalue $-E$. For $E=0$, the eigenvalue equation reduces to the recursion \begin{equation} \psi_{k+1} = -\frac{t_{k}}{t_{k+1}}\psi_{k-1} \end{equation} which always yields precisely {\em two}\/ linearly independent solutions $\psi^{\pm}$ which can be chosen to vanish on either {\em even}\/ or {\em odd}\/ sites. 
These have the form \begin{equation} \psi^{-}_{2r-1} = (-1)^{r-1}\psi^{-}_{1} \prod_{s=2}^{r}\frac{t_{2s-2}}{t_{2s-1}},\qquad \psi^{-}_{2r} = 0, \label{eq:psim} \end{equation} and \begin{equation} \psi^{+}_{2r-1} = 0, \qquad \psi^{+}_{2r} = (-1)^{r-1}\psi^{+}_{2} \prod_{s=2}^{r}\frac{t_{2s-1}}{t_{2s}}, \label{eq:psip} \end{equation} where $\psi^{-}_{1}\neq 0$ and $\psi^{+}_{2}\neq 0$ are determined, up to phases, by normalization. We note that one has to be careful if one employs periodic boundary conditions because these, for an odd length of the chain, couple the even and odd sublattices of the chain. Thus, while there are again two states (\ref{eq:psim}) and (\ref{eq:psip}) for a periodic chain of even length, only a single state at $E=0$ exists for an odd length of the chain. Multiplying the two Eqs.~(\ref{eq:toc1}) and (\ref{eq:toc2}), we obtain \begin{eqnarray} H^{(1,2)}\Phi_{k,l}^{(i,j)} & = & t_{k}^{}t_{l}^{}\Phi_{k-1,l-1}^{(i,j)}+ t_{k}^{}t_{l+1}^{}\Phi_{k-1,l+1}^{(i,j)}+ t_{k+1}^{}t_{l}^{}\Phi_{k+1,l-1}^{(i,j)}+ t_{k+1}^{}t_{l+1}^{}\Phi_{k+1,l+1}^{(i,j)}\nonumber\\ & = & E^{(1,i)}_{}E^{(2,j)}_{}\Phi_{k,l}^{(i,j)} \label{eq:prod} \end{eqnarray} where we defined \begin{equation} \Phi_{k,l}^{(i,j)} = \psi_{k}^{(1,i)}\psi_{l}^{(2,j)} \label{eq:prodef} \end{equation} as an eigenfunction on the product of the two chains with eigenvalue $E^{(1,i)}_{}E^{(2,j)}_{}$. In Eq.~(\ref{eq:prod}), only wave function amplitudes at positions $(k\pm 1,l\pm 1)$ contribute, thus the Hamiltonian $H^{(1,2)}$ corresponds to hopping along the {\em diagonals}\/ of the product grid ${\cal C}_{m}\times {\cal C}_{m}$. The corresponding hopping parameters are the product of two hopping parameters in the octonacci chain and thus take values $1$, $v$, and $v^2$ for diagonals of length $\lambda+1$, $\sqrt{2\lambda+2}$, and $\lambda-1$, respectively. 
Thus the system in Eq.~(\ref{eq:prod}) naturally separates into {\em two}\/ independent sets of equations with $k+l$ {\em even} or $k+l$ {\em odd}, respectively. In this paper, we restrict our investigation to the case with $k+l$ even as the other case is completely analogous. Thus, $H^{(1,2)}$ can be interpreted as a tight-binding Hamiltonian with zero on-site potential defined on the labyrinth tiling ${\cal L}$. Clearly, the eigenvalues for the labyrinth are just products of the eigenvalues for the octonacci chain, and all such products appear as eigenvalues because the spectra of the two dual labyrinth tilings ${\cal L}_{m}$ and ${\cal L}_{m}^{\ast}$ are identical. For the corresponding eigenfunctions on ${\cal L}$, we have to construct linear combinations of the product eigenfunctions $\Phi_{i,j}$ (\ref{eq:prodef}) which vanish on the vertices of the dual tiling ${\cal L}_{m}^{\ast}$. This can be done as follows. Suppose $\psi^{(1,i)}$ and $\psi^{(2,j)}$ are normalized eigenfunctions of the octonacci chain with eigenvalues $E^{(1,i)}$ and $E^{(2,j)}$, respectively. Then both $\Phi^{(i,j)}=\psi^{(1,i)}\psi^{(2,j)}$ and $\tilde{\Phi}^{(i,j)}=\tilde{\psi}^{(1,i)}\tilde{\psi}^{(2,j)}$ (\ref{eq:tilde}) are eigenfunctions of $H^{(1,2)}$ with the same eigenvalue $E^{(1,i)}E^{(2,j)}$, where we assume $E^{(1,i)}\neq 0$ and $E^{(2,j)}\neq 0$. But from Eq.~(\ref{eq:tilde}) we have \begin{equation} \tilde{\Phi}^{(i,j)}_{k,l} = (-1)^{k+l}\Phi^{(i,j)}_{k,l} \end{equation} and thus the linear combinations \begin{equation} {\Psi^{(i,j)}}^{\pm} = \frac{1}{\sqrt{2}}\left(\Phi^{(i,j)}\pm\tilde{\Phi}^{(i,j)}\right) \label{eq:lc1} \end{equation} are normalized eigenfunctions that vanish for {\em odd}\/ or for {\em even}\/ values of $k+l$, and thus on ${\cal L}^{\ast}$ or ${\cal L}$, respectively. 
If one or both eigenvalues $E^{(1,i)}$ and $E^{(2,j)}$ are zero, we can make use of the previously discussed eigenfunctions $\psi^{+}$ (\ref{eq:psip}) and $\psi^{-}$ (\ref{eq:psim}) to construct the desired wavefunctions. If one of the eigenvalues vanishes, say, without loss of generality, $E^{(1,i)}=0$ and $E^{(2,j)}\neq 0$, the appropriate four linear combinations are \begin{equation} {\Psi^{(i,j)}}^{\pm\pm} = \frac{1}{\sqrt{2}}{\psi^{(1)}}^{\pm} \left(\psi^{(2,j)}\pm\tilde{\psi}^{(2,j)}\right) \label{eq:lc2} \end{equation} where ${\psi^{(1)}}^{\pm}$ is the wavefunction of Eqs.~(\ref{eq:psip}) and (\ref{eq:psim}) on the first chain, and we also used the state $\tilde{\psi}^{(2,j)}$ which has an energy $-E^{(2,j)}$. Clearly, the wave functions ${\Psi^{(i,j)}}^{++}$ and ${\Psi^{(i,j)}}^{--}$ have support on ${\cal L}$, the other two linear combinations ${\Psi^{(i,j)}}^{+-}$ and ${\Psi^{(i,j)}}^{-+}$ live on ${\cal L}^{\ast}$. Finally, for $E^{(1,i)}=E^{(2,j)}=0$, we have four states \begin{equation} {\Psi^{(i,j)}}^{\pm\pm} = {\psi^{(1)}}^{\pm}{\psi^{(2)}}^{\pm} \label{eq:lc3} \end{equation} where again ${\Psi^{(i,j)}}^{++}$ and ${\Psi^{(i,j)}}^{--}$ are supported on ${\cal L}$, and the remaining two product states ${\Psi^{(i,j)}}^{+-}$ and ${\Psi^{(i,j)}}^{-+}$ on ${\cal L}^{\ast}$. In particular, this argument proves that $E=0$ is a $(2N_{m}\! -\! 2)$-fold degenerate eigenvalue for the labyrinth ${\cal L}_m$. Thus we find, as for simple tight-binding Hamiltonians on the Penrose\cite{confined} or the octagonal Ammann-Beenker tiling, a large degeneracy of states in the ``band'' center at $E=0$. 
However, in contrast to these well-known examples where the degeneracy stems from certain ``confined'' states\cite{confined} that occur as a consequence of the local topology of the tilings, the spectral measure carried by the states at $E=0$ vanishes for the labyrinth as $N_{m}\rightarrow\infty$, thus it is not a finite fraction of the eigenstates that contributes to $E=0$ in this case. In practice, having the complete knowledge of the eigenstates for the labyrinth tiling at our disposal, we do not need to care too much about the precise linear combinations of states derived above. Since the eigenvalues $E_{i}$, $i=1,\ldots,N$ of the octonacci chain are symmetric about zero, one can obtain the set of eigenvalues of the labyrinth tiling simply as \begin{equation} \left\{E_{i}E_{j} \mid 1\leq i\leq {\textstyle\frac{N}{2},\; j\leq i}\right\} \cup \left\{E_{i}E_{j} \mid {\textstyle\frac{N}{2}}<i\leq N,\; j\leq i-1\right\} \end{equation} where we assume that the eigenvalues of the octonacci chain are ordered as $E_{i}\geq E_{j}$ for $i>j$. The corresponding eigenvectors are most easily constructed by just restricting the products of eigenvectors to the sites of the labyrinth ${\cal L}$, and re-normalizing the resulting eigenstate. Eq.~(\ref{eq:tilde}) guarantees that this procedure yields the correct results, because the states $\psi$ and $\tilde{\psi}$ just differ by an alternating sign. \section{Energy spectra and wavefunctions} \label{sec3} Following the results of Sec.~\ref{sec2b}, one can easily calculate the density of states (DOS) and the integrated density of states (IDOS). For comparison, we show the DOS and the IDOS for the octonacci chain and the labyrinth tiling in Fig.~\ref{fig:Fig2}. For the octonacci chain, the IDOS is a devil's staircase even for $v$ close to $1$ and the DOS is singular continuous with zero Lebesgue measure. 
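The chain spectrum underlying this discussion, and, via the product construction of Sec.~\ref{sec2b}, an illustration of the labyrinth spectrum, can be generated in a few lines. In the following sketch (helper names are our own) we keep, for simplicity, all pairwise products of the chain eigenvalues instead of restricting to one sublattice:

```python
import numpy as np

def chain_hamiltonian(m, v):
    # octonacci word -> hopping sequence t_k: t = 1 on long, t = v on short bonds
    w = "S"
    for _ in range(m):
        w = "".join({"S": "L", "L": "LSL"}[c] for c in w)
    t = np.array([1.0 if c == "L" else v for c in w])
    return np.diag(t, 1) + np.diag(t, -1)            # free boundary conditions

E = np.linalg.eigvalsh(chain_hamiltonian(6, 0.5))    # N = g_6 + 1 = 100 sites
assert np.allclose(np.sort(E), np.sort(-E))          # chain spectrum symmetric about E = 0

# eigenvalues of the product system are the pairwise products E_i * E_j;
# all N^2 products are kept here as a simple illustration of the IDOS shape
E_lab = np.sort(np.outer(E, E).ravel())
idos = np.arange(1, E_lab.size + 1) / E_lab.size     # integrated density of states
```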
By more detailed analysis, one finds a self-similar energy spectrum for the octonacci chain with a hierarchical gap structure as described by the gap labelling theorem.\cite{gaplabel} In contrast, we observed a smooth IDOS without visible gaps as $v$ approaches $1$ in the labyrinth tiling. A more detailed analysis of the IDOS and the energy spectra shows that in the regime $0.6<v<1.0$ the energy spectrum contains either no gaps or only a finite number of gaps; for $v<0.6$ the spectrum is fractal-like and the IDOS is similar to a devil's staircase. Sire found that the spectrum is singular continuous with {\em finite}\/ Lebesgue measure for $v\geq 0.4$,\cite{CC} which may indicate that the spectrum is a mixture of band-like and fractal-like parts in the regime $0.4\leq v<0.6$. In Fig.~\ref{fig:Fig2}(b) one can see a peak at the center of the spectrum which is due to the degenerate states at $E=0$. But it differs from the localized states observed in the Penrose tiling\cite{confined} in the sense that no jump at $E=0$ is seen in the IDOS, in agreement with the results of the previous section. For varying parameter $v$, we find three regions with different behaviour of the DOS of the labyrinth tiling: a maximum around the center, distinct shoulders located between the spectral center and edge, and a tail at the band edge, which is similar to the behaviour observed for a tight-binding model on the icosahedral Ammann-Kramer tiling.\cite{TM} In order to characterize the eigenstates, we employ a multifractal analysis, which is based on the standard box-counting procedure.\cite{TM,ARM} In our numerical calculations, we determine the singularity strength $\alpha(q)$ and the corresponding fractal dimension $f(q)$ by a linear regression procedure, but prior to this we need to check the linearity of $\sum_{i}\mu_{i}\ln\mu_{i}$ versus $\ln{\varepsilon}$, where $\mu_{i}(q,\varepsilon)$ denotes the normalized $q$th moment of the box probability for boxes of linear size $\varepsilon L$.
A homogeneously extended wave function corresponds to $\alpha(q)=f(q)=d$, where $d$ denotes the spatial dimension. For critical eigenstates, the fractal dimension $f$ is a smooth convex function of $\alpha$, and $\alpha$ should be limited to a $q$-interval. Moreover, the generalized dimensions of the eigenstate $\psi$ are given by $D_{q}^{\psi}=\left[f(q)-q\alpha (q)\right]/(1-q)$ for $q\neq 1$ and $D_{1}^{\psi}=f(1)=\alpha(1)$. The singularity spectra $f(\alpha)$ of eigenstates for both the octonacci chain and the labyrinth tiling show the typical shape expected for multifractal states, thus we refrain from showing these here. For the octonacci chain, the eigenstates in the ``band'' center are more extended than those at the ``band'' edge. In this case, the curves $f(\alpha)$ become fairly narrow as $v$ approaches $1$. Generally, the eigenstates show stronger multifractal characteristics with decreasing parameter $v$. In contrast to the behaviour observed for the Penrose tiling,\cite{TM} for the labyrinth tiling we do not find that the multifractal behaviour of eigenstates becomes significantly stronger when moving from energies at the edge towards the center of the ``band''. We also calculated the scaling behaviour of the inverse participation number \begin{equation} P^{-1}(E,V)=\sum_{\bf r}|\psi({\bf r})|^{4} \label{eq:ipn} \end{equation} with respect to the size $V=L^{d}$ of the system,\cite{TM,ARM} i.e., \begin{equation} P^{-1}(E,V) \sim V^{-\gamma(E)} \label{eq:ipnscal} \end{equation} for large $V$. A fractal eigenstate is characterized by $0<\gamma<1$, whereas $\gamma=0$ corresponds to a localized state, and $\gamma=1$ to an extended state. In general, the scaling exponent $\gamma(E)$ depends on the energy.\cite{ARM} Numerically, one analyzes the scaling behaviour of $P^{-1}(E,V)$ at an energy $E$ by averaging over the eigenstates within a small energy interval $E\pm\Delta E/2$.
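This scaling analysis is easy to reproduce for the octonacci chain; the sketch below (helper names are our own) estimates an effective exponent $\gamma$ from just two system sizes, averaging over all eigenstates rather than over a small energy window:

```python
import numpy as np

def octonacci_hamiltonian(m, v):
    # hopping sequence from the octonacci word; t = 1 (long bond), t = v (short bond)
    w = "S"
    for _ in range(m):
        w = "".join({"S": "L", "L": "LSL"}[c] for c in w)
    t = np.array([1.0 if c == "L" else v for c in w])
    return np.diag(t, 1) + np.diag(t, -1)

def mean_ipn(H):
    # inverse participation number of each eigenstate, averaged over the spectrum
    _, V = np.linalg.eigh(H)
    return np.mean(np.sum(V ** 4, axis=0))

gammas = {}
for v in (1.0, 0.7, 0.3):
    p_small = mean_ipn(octonacci_hamiltonian(5, v))   # N = g_5 + 1 = 42 sites
    p_large = mean_ipn(octonacci_hamiltonian(7, v))   # N = g_7 + 1 = 240 sites
    # effective exponent from P^{-1}(V) ~ V^{-gamma}
    gammas[v] = -np.log(p_large / p_small) / np.log(240 / 42)
print(gammas)
```

For $v=1$ this recovers $\gamma$ close to $1$ (extended states); the effective exponent decreases with decreasing $v$, in line with the stronger multifractality described above.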
The result for eigenvectors from the center and from the lower edge of the spectrum is shown in Fig.~\ref{fig:Fig4}, which corroborates the multifractal nature of the eigenstates in both systems. The exponent $\gamma$, given by the slope, decreases, presumably continuously, from $\gamma=1$ for the periodic case $v=1$ to $\gamma=0$ for $v=0$. \section{Quantum diffusion for the octonacci chain} \label{sec4} In this section, we briefly present our numerical results for the autocorrelation function $C(t)$ and the mean square displacement $d(t)$ for the octonacci chain. Further discussion and comparison with the results for the labyrinth will be given below. Fig.~\ref{fig:Fig5} shows the autocorrelation function $C(t)$ of the octonacci chain. The initial site is located at the center of the system. The long-time behaviour of $C(t)$ follows $C(t)\sim t^{-\delta}$ with $0<\delta<1$ for different $v$. For small $v$, $C(t)$ displays strong oscillatory behaviour, which may result from level fluctuations. The result for $d(t)$ is displayed in Fig.~\ref{fig:Fig6}. Evidently, $d(t)\sim t^{\beta}$ and $\beta$ increases with increasing $v$, limited by $\beta<1$. For a given modulation parameter $v$, we observe the relation $\beta>\delta$ between the two exponents. Similar results have been obtained for 1D Fibonacci chains and at the mobility edge of the Harper model.\cite{RGT,JR,SD,HHTM} Therefore, in accordance with the singular continuous energy spectra and the multifractal eigenstates, the diffusion is usually anomalous in 1D quasiperiodic systems. \section{Quantum diffusion for the labyrinth tiling} \label{sec5} We now switch to the more interesting case of the labyrinth tiling. In Fig.~\ref{fig:Fig7}, we show the behaviour of $C(t)$ for the labyrinth tiling.
The number of sites in our system is $N^2/2=19\, 602^{2}/2=192\, 119\, 202$, which is much larger than for other 2D quasiperiodic systems discussed previously such as, for instance, Fibonacci lattices\cite{JR} and the octagonal tiling.\cite{HJ} Therefore, we can utilize this system to study the long-time behaviour of $C(t)$ more accurately than before. Apparently, Fig.~\ref{fig:Fig7} again exhibits a power law behaviour $C(t)\sim t^{-\delta}$. By a more detailed analysis, we surprisingly find a transition point at $v_{c}\approx 0.6$. For $v<v_c$ the slope of the curves decreases with decreasing $v$. In the regime $v>v_c$, the behaviour of $C(t)$ is the same as for a periodic system, i.e., $C(t)\sim t^{-1}$.\cite{JR} When compared to the results of Sec.~\ref{sec3}, we see that this regime corresponds to the region where one finds band-like energy spectra. Since $\delta$ equals the correlation dimension $D_{2}$ of the energy spectral measure,\cite{RGT,JR} $\delta=1$ is reasonable for the case of band-like spectra. Similar to the 1D case, one still has $0<\delta<1$ in the regime $v<v_c$ with fractal-like or mixed spectra for the labyrinth tiling. We expect that this is a general result for higher-dimensional quasiperiodic systems. Furthermore, we find that the behaviour of $C(t)$ is independent of the initial site, as can be observed from the example shown in Fig.~\ref{fig:Fig7}. Of course, as our analysis is based on numerical data for a finite system, we cannot possibly {\em prove}\/ the existence of a true transition point $v_{c}$, because we cannot rule out a rapid, but continuous change in $\delta$ around $v\approx 0.6$. The calculation of the mean square displacement $d(t)$ is numerically more expensive, thus we restrict ourselves to a smaller system of $N^2/2=578^{2}/2=167\, 042$ sites.
Nevertheless, this is still larger than the 2D octagonal quasicrystals studied previously.\cite{BCV} In Fig.~\ref{fig:Fig8}, we show that the long-time behaviour is described by a power law $d(t)\sim t^{\beta}$. In contrast to $C(t)$, we do not find a transition point for $d(t)$ as the parameter $v$ is varied. As for the octagonal tiling\cite{BCV} and for the octonacci chain, $0<\beta<1$ for the labyrinth tiling. Therefore, a band spectrum does not imply ballistic diffusion in quasicrystals. It can be argued that the exponent $\beta$ is associated with the correlation dimension $D_{2}^\psi$ of the eigenstates.\cite{RKST} In 1D quasiperiodic systems, or at the metal-insulator transition in the Anderson model of localization, the eigenstates are multifractal\cite{1D,MHT,ARM} and $0<\beta<1$.\cite{SD,HHTM,TT} Accordingly, the multifractal eigenstates in 2D quasicrystals may be expected to lead to anomalous diffusion with $0<\beta<1$. Possibly, ballistic diffusion can occur in 3D quasicrystals because their wavefunctions are more extended.\cite{TM} So far, we have assumed that the initial wave packet is a $\delta$-function, thus we start with an electron that is localized at a particular site $n_{0}=(k_{0},l_{0})$ and follow the spreading of its wave function $\Psi^{\{n_{0}\}}$ with time. This means that, in general, {\em all}\/ eigenstates contribute to the time evolution because the expansion in terms of the orthonormal basis of eigenstates $\Psi^{(i,j)}$ is \begin{equation} \Psi^{\{n_{0}\}}_{k,l} = \delta_{k,k_{0}}^{}\delta_{l,l_{0}}^{} = \sum_{i,j}\Psi^{(i,j)}_{k_{0},l_{0}}\Psi^{(i,j)}_{k,l} \label{eq:delta} \end{equation} and thus the entire energy spectrum is probed. For convenience, we dropped the superscripts $\pm$ on the wavefunctions of Eqs.~(\ref{eq:lc1})--(\ref{eq:lc3}), assuming that the proper linear combinations are used that are supported on the labyrinth ${\cal L}_{m}$.
Note that we do not need a complex conjugation in Eq.~(\ref{eq:delta}) because the Hamiltonian is real symmetric and we therefore can choose eigenvectors that form a real orthogonal matrix. In order to check for an energy dependence of the diffusion, we now consider different initial wave packets $\Psi^{\{n_{0},[E-\frac{\Delta E}{2},E+\frac{\Delta E}{2}]\}}$ which have a finite width and are constructed as linear combinations of eigenstates from a certain energy window $[E-\Delta E/2,E+\Delta E/2]$. The new normalized states can be written as \begin{equation} \Psi^{\{n_{0},[E-\frac{\Delta E}{2},E+\frac{\Delta E}{2}]\}}_{k,l} = \frac{\sum^{\prime}_{i,j}\Psi^{(i,j)}_{k_{0},l_{0}}\Psi^{(i,j)}_{k,l}} {\sqrt{\sum^{\prime}_{i,j}|\Psi^{(i,j)}_{k_{0},l_{0}}|^{2}}} \label{eq:wavepack} \end{equation} where the sum $\sum^{\prime}$ is restricted to the eigenstates $\Psi^{(i,j)}$ with eigenvalues $E^{(1,i)}E^{(2,j)}\in [E-\Delta E/2,E+\Delta E/2]$.\cite{Schw} Clearly, Eq.~(\ref{eq:wavepack}) becomes Eq.~(\ref{eq:delta}) if the energy interval contains the complete spectrum; this is nothing but the usual completeness condition of the basis of eigenvectors. We numerically checked different energy windows for the octonacci chain and the labyrinth tiling. Due to the high DOS around $E=0$, we choose smaller intervals in the band center. The results in Fig.~\ref{fig:Fig9} and Fig.~\ref{fig:Fig10} show that the long-time behaviour of $C(t)$ and $d(t)$ hardly depends on the selection of the energy window. However, it is more complex at small times due to the different shapes and widths of the initial wave packets. The various values of $d(t)$ at the initial time reflect the width of the initial wave packet. The smaller the energy interval under consideration, the wider the initial wave packet. In practice, in order to avoid the wave packet reaching the boundary too early, the energy interval must not be chosen too small.
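As a small self-contained check of Eq.~(\ref{eq:wavepack}), the following sketch constructs such a window packet for a toy open-chain Hamiltonian (our own choice, not one of the systems studied here) and verifies that a window covering the whole spectrum reproduces the $\delta$-packet of Eq.~(\ref{eq:delta}).

```python
import numpy as np

def window_packet(H, n0, E_lo, E_hi):
    """Wave packet at site n0 built from eigenstates of the real
    symmetric H whose eigenvalues lie in [E_lo, E_hi]."""
    E, V = np.linalg.eigh(H)              # columns V[:, i] are eigenstates
    sel = (E >= E_lo) & (E <= E_hi)
    psi = V[:, sel] @ V[n0, sel]          # restricted sum over eigenstates
    return psi / np.linalg.norm(psi)      # normalization as in the text

# toy open-chain hopping Hamiltonian (test case only, not the labyrinth)
N = 8
H = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
full = window_packet(H, n0=3, E_lo=-2.5, E_hi=2.5)  # window covers spectrum
print(np.allclose(np.abs(full), np.eye(N)[3]))      # → True
```

No complex conjugation appears here either, mirroring the remark above: the eigenvector matrix returned for a real symmetric Hamiltonian is real orthogonal.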
\section{Dynamical scaling and fractal dimensions} \label{sec6} In 1D quasiperiodic systems, it is known that the inequality $\beta\geq D_{1}$ relates the diffusion behaviour and the fractal properties of the energy spectrum.\cite{IG} In $d$ dimensions, this generalizes to the inequality $\beta\geq D_{1}/d$, which implies a superdiffusive behaviour $\beta\geq 1/2$ in two dimensions if $D_1\ge 1$. In Fig.~\ref{fig:Fig11}, the values of the exponent $\beta$ for the octonacci chain and the labyrinth tiling are shown for various values of the parameter $v$. In all cases, we find that this inequality holds. Apparently, the diffusion exponents $\beta$ for the octonacci chain and the labyrinth tiling are very close, which might be due to the product structure of the labyrinth and its wavefunctions. According to a conjecture by Pi\'{e}chon,\cite{FP} $\beta=D_{-1}$ for the global spectral measure of one-dimensional quasiperiodic models with a multifractal global spectral measure. In order to check this relation, we calculated $D_{-1}$, but it turns out that it is rather difficult to extract accurate values by a linear fit due to strong oscillations in the data. However, it appears that the relation does not hold for general parameter values $v$ in the octonacci chain, and it certainly cannot be valid for the two-dimensional system as it only involves a dimension that characterizes the spectral measure. Ketzmerick {\it et al.}\cite{RKST}\/ suggested an improved inequality $\beta \geq D_{2}/D_{2}^{\psi}$ which is numerically obeyed by 1D quasiperiodic models.\cite{RKST} As can be seen from Fig.~\ref{fig:Fig11}, this relation applies for the octonacci chain as well as for the labyrinth tiling. However, in the two-dimensional case the inequality is less sharp as $\beta$ is much larger than the ratio $D_{2}/D_{2}^{\psi}$, in particular for values of the parameter $v\ge 0.6$ where the energy spectrum is smooth and $D_{2}\approx 1$.
For multifractal wavefunctions at the Anderson transition or at quantum Hall transitions, one finds $D_{2}^{\psi}=d D_{2}$ for a $d$-dimensional system.\cite{BL,BR} Above, it has been demonstrated that $D_{2}=1$ for the band-like spectra in 2D quasiperiodic systems, but the corresponding eigenstates are multifractal with generalized dimension $D_{2}^{\psi}<2$. Although the eigenstates of 2D quasiperiodic tight-binding models are similar to the critical states at the Anderson transition, the equality $D_{2}^{\psi}=d D_{2}$ apparently does not apply to 2D quasicrystals. Recently, Zhong {\it et al.}\cite{zhong}\/ argued that one might interpret the superdiffusive behaviour in aperiodic systems as a ballistic behaviour in a space of effective dimension $D_{2}^{\psi}$, or that this should at least give an upper bound on the possible values of $\beta$. In Fig.~\ref{fig:Fig11}, we compare the ratio $D_{2}^{\psi}/d$ to $\beta$. It turns out that the values of $\beta$ and $D_{2}^{\psi}/d$ are rather close, but that there seems to be a systematic deviation with $\beta<D_{2}^{\psi}/d$ for small values of $v$ and $\beta>D_{2}^{\psi}/d$ for $v$ close to $1$. Therefore, at least for large values of the parameter $v$, it appears that this bound does not hold. Finally, we also included the values of $D_1^{\psi}/d$ in Fig.~\ref{fig:Fig11}, which apparently does give an upper bound on the values of $\beta$ for the models under consideration. So far, this is just an observation; we cannot present an argument that this should hold in general. \section{Conclusion} \label{sec7} In this paper, the energy spectra, wavefunctions and quantum diffusion for the octonacci chain and the labyrinth tiling are studied. The labyrinth tiling is based on the octonacci chain, which allows us to deal with very large systems. For the octonacci chain, the energy spectra are singular continuous and the eigenstates are critical.
The energy spectra of the labyrinth tiling presumably are also singular continuous, but they can be band-like (i.e., of finite Lebesgue measure) with either no gaps or a finite number of gaps, a mixture of band-like and fractal parts, or fractal-like upon increasing the modulation strength. However, the eigenstates are multifractal irrespective of the value of the modulation parameter. The propagation of an initial wave packet is discussed in terms of the autocorrelation function $C(t)$ and the mean square displacement $d(t)$. Numerical results show that $C(t)\sim t^{-\delta}$ and $d(t)\sim t^{\beta}$ for the octonacci chain and the labyrinth tiling. Corresponding to the multifractal eigenstates, we observe $0<\beta<1$ for both systems. In the case of fractal-like or mixed energy spectra and multifractal eigenstates, we find $0<\delta<1$. However, for a band-like spectrum, $C(t)\sim t^{-1}$ as in a periodic system, which causes a qualitative change of behaviour in $C(t)$ for the labyrinth tiling at a parameter value $v_c\approx 0.6$. Similar effects have also been observed for Fibonacci lattices\cite{JR} and for the octagonal tiling.\cite{HJ} We believe that the anomalous diffusion shown in $d(t)$ and the crossover of the autocorrelation $C(t)$ will be a common phenomenon in 2D quasiperiodic systems. Of course, to observe the crossover in $C(t)$ one needs a parameter that allows one to continuously move away from the periodic case, which is not easily at hand for the most commonly investigated quasiperiodic model systems such as the Penrose or the octagonal tiling. Finally, we also studied the influence of different initial wave packets by choosing the eigenstates from various energy windows. The results show that the behaviour of $C(t)$ and $d(t)$ does not depend significantly on the shape and the location of the initial wave packet.
Comparing the values of $\beta$ with several expressions involving the fractal dimensions of energy spectra and eigenstates that were proposed in the literature, we find that the inequality $\beta \geq D_{2}/D_{2}^{\psi}$ of Ref.~\onlinecite{RKST} holds true. However, it seems that the bound $\beta\le D_{2}^{\psi}/d$ proposed recently by Zhong {\it et al.}\cite{zhong}\/ may be violated for parameter values $v$ close to one, i.e., close to the periodic case. On the other hand, we find that the weaker condition $\beta\le D_1^{\psi}/d$ is always satisfied. Our present work corroborates that there are strong relations between fractal properties of energy spectra and wavefunctions on the one hand and the exponents describing the quantum diffusion on the other hand. However, it appears to be difficult to find relations that give quantitative agreement for one- and two-dimensional aperiodic systems. Here, a deeper understanding of the underlying physics is desirable. Higher-dimensional systems constructed as products of one-dimensional systems, such as the labyrinth tiling, provide useful toy examples for further investigations which can, at least, be treated numerically in an efficient way. \acknowledgements The authors thank J.\ X.\ Zhong for fruitful discussions. HQY is grateful for the kind hospitality in Chemnitz. Financial support from DFG (UG) and the NSF of China (HQY) is gratefully acknowledged.
\section{Introduction} In recent years, Ole E. Barndorff-Nielsen has been working on a class of stochastic models called integer-valued trawl processes. References include \cite{BarndorffNielsen(11)}, \cite{BarndorffNielsenBenthVeraart(12)} and \cite{BarndorffNielsenLundeShephardVeraart(14)}. These are flexible models whose core randomness is driven by Poisson random measures. Trawl processes are related to the up-stairs processes of \cite{WolpertTaqqu(05)} and the random measure processes of \cite{WolpertBrown(11)}. Both of these processes are stationary. \cite{BarndorffNielsenLundeShephardVeraart(14)} also brings out the relationship between their processes and $\mathrm{M}/G/\infty$ queues (e.g. \cite{Lindley(56)}, \cite{Reynolds(68)} and \cite[Ch.~6.31]{Bartlett(78)}) and mixed moving average processes (e.g. \cite{SurgailisRosinskiMandrekarCambanis(93)}). Related discrete time count models include \cite{CameronTrivedi(98)}, \cite{KedemFokianos(02)}, \cite{CuiLund(09)}, \cite{DavisWu(09)}, \cite{JungTremayne(11)}, \cite{McKenzie(03)}, \cite{ZhuJoe(03)}, \cite{JacobsLewis(78)} and \cite{Weiss(08)}. Trawl processes also fall within the wide class of the so-called ambit fields (e.g. \cite{BarndorffNielsenSchmiegel(07)} and \cite{BarndorffNielsenBenthVeraart(11)}). Recently, \cite{ShephardYang(14)} models high frequency financial data by using a trawl process to allow for fleeting movements in prices in addition to an integer-valued L\'{e}vy process proposed by \cite{BarndorffNielsenPollardShephard(12)}. As far as we know, there is no existing literature that directly and completely addresses likelihood inference for these trawl processes---or, equivalently, the prediction based upon it. Even though there are a large number of papers that focus on likelihood inference for marked point processes (see \cite{DaleyVere-Jones(08_Ch14)} for a survey), this literature only indirectly and partially describes trawl processes in terms of their jumps.
A thorough likelihood inference for trawl processes needs to include the information in the initial value of the process. In this Chapter, we provide a thorough analysis of likelihood inference for integer-valued trawl processes and demonstrate the core ideas---prediction decomposition, filtering, smoothing and the EM algorithm---by focusing on the so-called exponential trawl. It is not only a simplification of the modelling framework but also an intellectually interesting special case in its own right, as in this special case the resulting trawl process is a continuous time hidden Markov process with countable state space. The theoretical analysis of the filtering and smoothing problems for this type of process has been discussed in detail by \cite{Rudemo(73)} and \cite{Rudemo(75)}, using the classical theory of Kolmogorov's forward and backward differential equations. We particularly emphasize that the resulting EM algorithm in this special case is exact in the sense that there are no discretization errors in its computation. The major goal of this Chapter is to derive filtering and smoothing results in the framework of trawl processes, so that the analysis adopted here can easily be scaled up to other, more general trawls or even to the inclusion of a non-stationary component proposed in \cite{ShephardYang(14)}. These general discussions will be dealt with elsewhere, for they require a significantly more sophisticated particle filtering and smoothing device. We also discuss non-negative trawl processes, which are particularly easy to work with. The structure of this Chapter is as follows. In Section \ref{Section: Trawl Process}, we remind the reader how to construct trawl processes using the exponential trawl. Section \ref{Section: Fitlering and Smoothing} includes details of how to carry out filtering and smoothing for these models.
In Section \ref{Section: Likelihood inferences}, we develop likelihood inference for exponential-trawl processes based on these filters and smoothers. Section \ref{Section: Non-Negative case likelihood} discusses the important and analytically tractable special case of non-negative trawl processes. We finally conclude in Section \ref{Section: Conclude}. The Appendix contains the proofs and derivations of various results given in this Chapter. \section{Exponential-Trawl Processes\label{Section: Trawl Process}} In this Section, we build our notation, definitions and key structures for the exponential-trawl process that will be focused on throughout this Chapter. We also provide its log-likelihood function based on observed data. \subsection{Definition} Our model will be based on a homogeneous L\'{e}vy basis on $\left[ 0,1\right] \times \mathbb{R}$ taking values in $\mathbb{Z}\backslash \left\{ 0\right\} $, which models the discretely scattered events of integer size (with direction) $y\in \mathbb{Z}\backslash \left\{ 0\right\} $ at each point in time $s\in \mathbb{R}$ and height $x\in \left[ 0,1\right] $. It is defined by
\begin{equation*}
L\left( \mathrm{d}x,\mathrm{d}s\right) \triangleq \int_{-\infty }^{\infty }yN\left( \mathrm{d}y,\mathrm{d}x,\mathrm{d}s\right) ,\ \ \ \ \left( x,s\right) \in \left[ 0,1\right] \times \mathbb{R},
\end{equation*}
where $N$ is a three-dimensional Poisson random measure with intensity measure
\begin{equation*}
\mathbb{E}\left( N\left( \mathrm{d}y,\mathrm{d}x,\mathrm{d}s\right) \right) =\nu \left( \mathrm{d}y\right) \mathrm{d}x\mathrm{d}s.
\end{equation*}
Here $\mathrm{d}s$ means the arrival times are uniformly scattered (over $\mathbb{R}$), $\mathrm{d}x$ means the random heights are also uniformly scattered (over $\left[ 0,1\right] $) and $\nu \left( \mathrm{d}y\right) $ is a L\'{e}vy measure concentrated on the non-zero integers $\mathbb{Z}\backslash \left\{ 0\right\} $.
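A draw from this L\'{e}vy basis on a finite time window can be sketched as follows; the function name and the dictionary representation of $\nu$ are our own conventions, and the Skellam-type measure used in the usage line is only a test case.

```python
import numpy as np

def simulate_levy_basis(nu, T, rng):
    """Draw all events of the homogeneous Levy basis on [0,1] x [0,T].
    nu maps each nonzero integer size y to its mass nu(y)."""
    sizes = np.array(list(nu.keys()))
    masses = np.array(list(nu.values()), dtype=float)
    rate = masses.sum()                    # total arrival intensity
    n = rng.poisson(rate * T)              # number of events in the window
    s = rng.uniform(0.0, T, size=n)        # arrival times, uniform in ds
    x = rng.uniform(0.0, 1.0, size=n)      # heights, uniform in dx
    y = rng.choice(sizes, size=n, p=masses / rate)  # event sizes from nu
    return s, x, y

rng = np.random.default_rng(0)
s, x, y = simulate_levy_basis({1: 10.0, -1: 10.0}, T=50.0, rng=rng)
# the empirical event rate should be close to nu(1) + nu(-1) = 20
print(abs(len(s) / 50.0 - 20.0) < 3.0)
```

The finiteness assumption on $\nu$ below is exactly what makes the total arrival intensity in this sketch finite, so the event count in any bounded window is almost surely finite.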
With no risk of confusion, we abuse the notation $\nu \left( y\right) $ to denote the mass of the L\'{e}vy measure at $y$. Throughout this Chapter, we assume that
\begin{equation*}
\int_{-\infty }^{\infty }\nu \left( \mathrm{d}y\right) =\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }\nu \left( y\right) <\infty .
\end{equation*}
Following \cite{BarndorffNielsenLundeShephardVeraart(14)}, we think of dragging a fixed Borel measurable set $A\subseteq \lbrack 0,1]\times \left( -\infty ,0\right] $ through time
\begin{equation*}
A_{t}\triangleq A+(0,t),\ \ \ \ t\geq 0,
\end{equation*}
so the trawl process is defined by
\begin{equation*}
Y_{t}\triangleq L(A_{t})=\int_{\left[ 0,1\right] \times \mathbb{R}}1_{A}\left( x,s-t\right) L\left( \mathrm{d}x,\mathrm{d}s\right) .
\end{equation*}
Throughout the rest of this Chapter, we will focus on the exponential trawl
\begin{equation*}
A\triangleq \left\{ \left( x,s\right) :s\leq 0,\ 0\leq x<d\left( s\right) \triangleq \exp \left( s\phi \right) \right\} ,\ \ \ \ \phi >0,
\end{equation*}
to simplify our exposition of the key ideas. We leave results on more general trawls to another study.
\begin{example}
Suppose that
\begin{equation*}
\nu (\mathrm{d}y)=\nu ^{+}\delta _{\left\{ 1\right\} }\left( \mathrm{d}y\right) +\nu ^{-}\delta _{\left\{ -1\right\} }\left( \mathrm{d}y\right) ,\ \ \ \ \nu ^{+},\nu ^{-}>0,
\end{equation*}
where $\delta _{\left\{ \pm 1\right\} }\left( \mathrm{d}y\right) $ is the Dirac point mass measure centered at $\pm 1$. The corresponding $L\left( \mathrm{d}x,\mathrm{d}s\right) $ is called a Skellam L\'{e}vy basis, while the special case of $\nu ^{-}=0$ is called Poisson. The upper panel of Fig. \ref{Fig.: Trawl process illustration} shows events in $L$ using $\nu ^{+}=\nu ^{-}=10$, taking sizes $1$ and $-1$ with black and white dots respectively and with equal probability.
\begin{figure}[t]
\centering\includegraphics[width=11.7cm]{ExponentialTrawlProcess}
\caption{A moving trawl $A_{t}$ is joined by the Skellam L\'{e}vy basis $L(\mathrm{d}x,\mathrm{d}s)$, where the horizontal axis $s$ is time and the vertical axis $x$ is height. The shaded area is an example of the exponential trawl $A$, while we also show the outlines of $A_{t}$ when $t=1/2$ and $t=1$. Also shown below is the implied trawl process $Y_{t}=L(A_{t})$. Code: \texttt{EPTprocess\_Illurstration.R}}
\label{Fig.: Trawl process illustration}
\end{figure}
The lower panel of Fig. \ref{Fig.: Trawl process illustration} then illustrates the resulting Skellam exponential-trawl process $Y_{t}=L\left( A_{t}\right) $ using $\phi =2$, which sums up all the effects (both positive and negative) captured by the exponential trawl. Dynamically, $L\left( A_{t}\right) $ will move up by $1$ if the moving trawl $A_{t}$ either captures a new positive event or releases an old negative one; conversely, it will move down by $1$ in the opposite cases. Notice that $Y_{0}=L\left( A_{0}\right) $ is not necessarily zero and the path of $Y$ at negative times is not observed.
\end{example}
\subsection{Markovian Counting Process\label{Sect: Trawl process decomposition}}
For $y\in \mathbb{Z}\backslash \left\{ 0\right\} $, let $C_{t}^{\left( y\right) }\in \left\{ 0,1,2,...\right\} $ be the total count of surviving events of size $y$ in the trawl at time $t$, which also includes any event that arrives \emph{exactly} at time $t$, so each $C_{t}^{\left( y\right) }$ must be c\`{a}dl\`{a}g (right-continuous with left-limits). Then clearly the trawl process can be represented as
\begin{equation}
Y_{t}=\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }yC_{t}^{(y)},\ \ \ \ t\geq 0.
\label{Trawl process deompose to counting processes}
\end{equation}
Note that each $C_{t}^{\left( y\right) }$ is not only a Poisson exponential-trawl process with (different) intensity of arrivals $\nu \left( y\right) $ (and sharing the same trawl) but also an $\mathrm{M}/\mathrm{M}/\infty $ queue and hence a continuous time Markov process. Hence, for $\left\{ \mathcal{C}_{t}^{\left( y\right) }\right\} _{t\geq 0}$ being the natural filtration generated by the counting process $C_{t}^{\left( y\right) }$, i.e., $\mathcal{C}_{t}^{\left( y\right) }\triangleq \sigma \left( \left\{ C_{s}^{\left( y\right) }\right\} _{0\leq s\leq t}\right) $, it has (infinitesimal) transition probabilities (or rates or intensities)
\begin{equation}
\lim\limits_{\mathrm{d}t\rightarrow 0}\dfrac{\mathbb{P}\left( \left. C_{t}^{(y)}-C_{t-\mathrm{d}t}^{(y)}=j\right\vert \mathcal{C}_{t-\mathrm{d}t}^{\left( y\right) }\right) }{\mathrm{d}t}=\left\{
\begin{array}{cc}
\nu \left( y\right) , & \text{if }j=1 \\
\phi C_{t-}^{\left( y\right) }, & \text{if }j=-1 \\
0, & \text{if }j\in \mathbb{Z}\backslash \left\{ -1,1\right\}
\end{array}
\right. . \label{Transition intensities of C^(y)}
\end{equation}
The cases of $j=1$ or $-1$---which correspond to the arrival of a new event of size $y$ and the departure of an old one---are the only two possible infinitesimal movements of $C_{t}^{\left( y\right) }$ due to the point process nature of the L\'{e}vy basis. Note that the arrival rate and departure rate are controlled by the L\'{e}vy measure $\nu $ and the trawl parameter $\phi $ respectively. The derivation of (\ref{Transition intensities of C^(y)}) can be found in many standard references on queueing theory (e.g. \cite{Asmussen(03)}).
\begin{remark}
Let $\Delta X_{t}\triangleq X_{t}-X_{t-}$ denote the \emph{instantaneous} jump of any process $X$ at time $t$. Then the transition probability (\ref{Transition intensities of C^(y)}) can be conveniently written in differential form
\begin{equation*}
\mathbb{P}\left( \left.
\Delta C_{t}^{(y)}=j\right\vert \mathcal{C}_{t-}^{\left( y\right) }\right) =\left\{
\begin{array}{cc}
\nu \left( y\right) \mathrm{d}t, & \text{if }j=1 \\
\phi C_{t-}^{\left( y\right) }\mathrm{d}t, & \text{if }j=-1 \\
0, & \text{if }j\in \mathbb{Z}\backslash \left\{ -1,1\right\}
\end{array}
\right. .
\end{equation*}
Throughout this Chapter, our analysis will mainly be based on this infinitesimal point of view for ease of exposition. All of our arguments can be rephrased in a mathematically rigorous way.
\end{remark}
The independence property of the L\'{e}vy basis implies the independence between the $C_{t}^{\left( y\right) }$ for different $y\in \mathbb{Z}\backslash \left\{ 0\right\} $, so the joint count process
\begin{equation*}
\mathbf{C}_{t}\triangleq \left( ...,C_{t}^{\left( -2\right) },C_{t}^{\left( -1\right) },C_{t}^{\left( 1\right) },C_{t}^{\left( 2\right) },...\right)
\end{equation*}
is also Markovian, which serves as the unobserved state process for the observed \emph{hidden Markov process} $Y_{t}$ and will be the central target for the filter and smoother we discuss in a moment. Let $\mathcal{C}_{t}\triangleq \sigma \left( \left\{ \mathbf{C}_{s}\right\} _{0\leq s\leq t}\right) =\bigvee_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }\mathcal{C}_{t}^{\left( y\right) }$ be the joint filtration. Clearly, from (\ref{Transition intensities of C^(y)}), $\mathbf{C}_{t}$ has (infinitesimal) transition probabilities
\begin{equation}
\mathbb{P}\left( \left. \Delta \mathbf{C}_{t}=\mathbf{j}\right\vert \mathcal{C}_{t-}\right) =\left\{
\begin{array}{cc}
\nu \left( y\right) \mathrm{d}t, & \text{if }\mathbf{j}=\mathbf{1}^{\left( y\right) }\text{ for some }y \\
\phi C_{t-}^{\left( y\right) }\mathrm{d}t, & \text{if }\mathbf{j}=-\mathbf{1}^{\left( y\right) }\text{ for some }y \\
0, & \text{otherwise}
\end{array}
\right.
, \label{Transition probability of vector C}
\end{equation}
where $\mathbf{1}^{\left( y\right) }\in \mathbb{Z}^{\infty }$ is the vector that takes $1$ at the $y$-th component and $0$ otherwise. The trawl process $Y_{t}$ can also be written as
\begin{equation*}
Y_{t}=\sum_{y=1}^{\infty }yY_{t}^{(y)},\ \ \ \ Y_{t}^{(y)}\triangleq C_{t}^{(y)}-C_{t}^{(-y)},
\end{equation*}
where each $Y_{t}^{(y)}$ is a Skellam exponential-trawl process. Each $Y_{t}^{(y)}$ is observed from the path of $Y_{t}$ up to its initial value $Y_{0}^{\left( y\right) }$, for we can exactly observe all the jumps of $Y_{t}$ and hence allocate them to the appropriate $Y_{t}^{(y)}$. In other words, we can regard the observed trawl process as (i) a \emph{marked point process} $\Delta Y_{t}\in \mathbb{Z}\backslash \left\{ 0\right\} $, which consists of several independent (given all the $Y_{0}^{\left( y\right) }$) marked point processes $\Delta Y_{t}^{\left( y\right) }\in \left\{ -1,1\right\} $, plus (ii) the initial value $Y_{0}$. The missing components $Y_{0}^{\left( y\right) }$ will have some mild effects on $\Delta Y_{t}^{\left( y\right) }$. It is this initial value challenge that differentiates the likelihood analysis of trawl processes from that of marked point processes. The special case where $Y_{t}$ is always non-negative has an even simpler structure, as we must have $C_{t}^{\left( -y\right) }=0$ for all $y=1,2,...$ and hence $C_{t}^{\left( y\right) }=Y_{t}^{\left( y\right) }$ is directly observed up to its initial condition $C_{0}^{\left( y\right) }$, which can be well approximated if the observation period $T$ is large enough. We will go through these details in Section \ref{Section: Non-Negative case likelihood}.
\subsection{Conditional Intensities and Log-likelihood\label{Section: Observed data likelihood}}
Let $\left\{ \mathcal{F}_{t}\right\} _{t\geq 0}$ be the natural filtration generated by the observed trawl process $Y_{t}$, i.e.
$\mathcal{F}_{t}\triangleq \sigma \left( \left\{ Y_{s}\right\} _{0\leq s\leq t}\right) $. Define the c\`{a}dl\`{a}g conditional intensity process of the trawl process $Y$ as
\begin{equation}
\lambda _{t-}^{\left( y\right) }\triangleq \lim\limits_{\mathrm{d}t\rightarrow 0}\dfrac{\mathbb{P}\left( Y_{t}-Y_{t-\mathrm{d}t}=y|\mathcal{F}_{t-\mathrm{d}t}\right) }{\mathrm{d}t},\ \ \ \ y\in \mathbb{Z}\backslash \left\{ 0\right\} ,\ t>0 \label{Definition of conditional intensity}
\end{equation}
or, conveniently, in differential form
\begin{equation}
\lambda _{t-}^{\left( y\right) }\mathrm{d}t\triangleq \mathbb{P}\left( \Delta Y_{t}=y|\mathcal{F}_{t-}\right) . \label{Diff. Def. of conditional intensity}
\end{equation}
It is the (time-varying) predictive intensity of a size $y$ move of the trawl process at time $t$, conditional on information instantaneously before time $t$.
\begin{remark}
To emphasize the $\mathcal{F}_{t}$-predictability of $\lambda ^{\left( y\right) }$, i.e., being adapted to the left natural filtration $\mathcal{F}_{t-}$, we will keep the subscript $t-$ throughout this Chapter. This is particularly informative in the implementation of likelihood calculations, reminding us to take the \emph{left-limit} of the intensity process whenever there is a jump.
\end{remark}
For any two $\sigma $-fields $\mathcal{F}$ and $\mathcal{G}$, let the Radon-Nikodym derivative over $\mathcal{F}|\mathcal{G}$ between two probability measures $\mathbb{P}$ and $\mathbb{Q}$ be
\begin{equation*}
\left( \dfrac{\mathrm{d}\mathbb{P}}{\mathrm{d}\mathbb{Q}}\right) _{\mathcal{F}|\mathcal{G}}\triangleq \dfrac{\left( \dfrac{\mathrm{d}\mathbb{P}}{\mathrm{d}\mathbb{Q}}\right) _{\mathcal{F}\vee \mathcal{G}}}{\left( \dfrac{\mathrm{d}\mathbb{P}}{\mathrm{d}\mathbb{Q}}\right) _{\mathcal{G}}}.
\end{equation*}
In particular, when $\mathcal{G}=\sigma \left( X\right) $ for any random variable $X$, we will simply write the subscript as $\mathcal{F}|X$.
The following classical result serves as the foundation for all likelihood inference for jump processes.
\begin{theorem}
\label{Thm.: Point process Radon-Nykodym Derivative}Let $X_{t}$ be any integer-valued stochastic process and $\left\{ \mathcal{F}_{t}^{X}\right\} _{t\geq 0}$ be its associated natural filtration. Assume that, under both $\mathbb{P}$ and $\mathbb{Q}$, (i) it has a finite expected number of jumps during $(0,T]$, and (ii) the conditional intensities $\lambda _{t-}^{\left( y\right) ,\mathbb{P}}$ and $\lambda _{t-}^{\left( y\right) ,\mathbb{Q}}$ are well-defined using (\ref{Definition of conditional intensity}) and $\mathcal{F}_{t-}^{X}$. Then $\mathbb{P}\ll \mathbb{Q}$ over $\mathcal{F}_{T}^{X}|X_{0}$ if and only if $\lambda _{t-}^{\left( y\right) ,\mathbb{Q}}$ is strictly positive. In this case, the logarithmic Radon-Nikodym derivative over $\mathcal{F}_{T}^{X}|X_{0}$ is
\begin{eqnarray*}
\log \left( \dfrac{\mathrm{d}\mathbb{P}}{\mathrm{d}\mathbb{Q}}\right) _{\mathcal{F}_{T}^{X}|X_{0}} &=&\sum_{0<t\leq T}\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }\log \left( \dfrac{\lambda _{t-}^{\left( y\right) ,\mathbb{P}}}{\lambda _{t-}^{\left( y\right) ,\mathbb{Q}}}\right) 1_{\left\{ \Delta X_{t}=y\right\} } \\
&&-\int_{t\in (0,T]}\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }\left( \lambda _{t-}^{\left( y\right) ,\mathbb{P}}-\lambda _{t-}^{\left( y\right) ,\mathbb{Q}}\right) \mathrm{d}t.
\end{eqnarray*}
\end{theorem}
Proposition 14.4.I of \cite{DaleyVere-Jones(08_Ch14)} provides a complete and mathematically rigorous treatment of this Theorem. For completeness, we also provide an intuitive and heuristic derivation in the Appendix. A direct application of Theorem \ref{Thm.: Point process Radon-Nykodym Derivative} gives the following Corollary.
\begin{corollary}
\label{Cor.: log-likelihood for general trawl processes}The log-likelihood function of the (general) trawl process is (ignoring the constant)
\begin{equation}
l_{\mathcal{F}_{T}}\left( \mathbf{\theta }\right) =\sum_{0<t\leq T}\sum_{y\in \mathbb{Z} \backslash \left\{ 0\right\} }\log \lambda _{t-}^{\left( y\right) }1_{\left\{ \Delta Y_{t}=y\right\} }-\int_{t\in (0,T]}\sum_{y\in \mathbb{Z} \backslash \left\{ 0\right\} }\lambda _{t-}^{\left( y\right) }\mathrm{d}t+l_{Y_{0}}\left( \mathbf{\theta }\right) ,  \label{Log-Likelihood}
\end{equation}
where the parameters of interest $\mathbf{\theta }$ include the L\'{e}vy measure $\nu \left( \mathrm{d}y\right) $ (i.e. the $\nu \left( y\right) $'s) and the trawl parameter $\phi $.
\end{corollary}

The study of likelihood inference for trawl processes then reduces to the calculation of the conditional intensities $\lambda _{t-}^{\left( y\right) }$ for $y\in \mathbb{Z} \backslash \left\{ 0\right\} $. Now, by the law of iterated expectations and the fact that $\mathcal{C}_{t}\supseteq \mathcal{F}_{t}$ for all $t$ (because of (\ref{Trawl process deompose to counting processes})), we have
\begin{eqnarray*}
\lambda _{t-}^{\left( y\right) }\mathrm{d}t &=&\mathbb{E}\left( \mathbb{P}\left( \Delta Y_{t}=y|\mathcal{C}_{t-}\right) |\mathcal{F}_{t-}\right) \\
&=&\mathbb{E}\left( \left. \mathbb{P}\left( \left. \Delta \mathbf{C}_{t}=\mathbf{1}^{\left( y\right) }\right\vert \mathcal{C}_{t-}\right) \right\vert \mathcal{F}_{t-}\right) +\mathbb{E}\left( \left. \mathbb{P}\left( \left. \Delta \mathbf{C}_{t}=-\mathbf{1}^{\left( -y\right) }\right\vert \mathcal{C}_{t-}\right) \right\vert \mathcal{F}_{t-}\right) \\
&=&\nu \left( y\right) \mathrm{d}t+\phi \mathbb{E}\left( \left.
C_{t-}^{\left( -y\right) }\right\vert \mathcal{F}_{t-}\right) \mathrm{d}t,
\end{eqnarray*}
where the second line follows because the event $\Delta Y_{t}=y$ must come from either an arrival of a new size $y$ event or a departure of an old size $-y$ event; the third line follows from (\ref{Transition probability of vector C}). Thus,
\begin{equation}
\lambda _{t-}^{\left( y\right) }=\nu \left( y\right) +\phi \mathbb{E}\left( \left. C_{t-}^{\left( -y\right) }\right\vert \mathcal{F}_{t-}\right) ,\ \ \ \ y\in \mathbb{Z} \backslash \left\{ 0\right\} .  \label{conditional intensity}
\end{equation}
In the next Section, we will study an exact filtering scheme to numerically calculate $\mathbb{E}\left( \left. C_{t-}^{\left( -y\right) }\right\vert \mathcal{F}_{t-}\right) $. The non-negative exponential-trawl process, where we always have positive events, admits a further simplification
\begin{equation}
\lambda _{t-}^{\left( y\right) }=\nu \left( y\right) ,\ \ \ \ \lambda _{t-}^{\left( -y\right) }=\phi \mathbb{E}\left( \left. C_{t-}^{(y)}\right\vert \mathcal{F}_{t-}\right) ,\ \ \ \ y=1,2,...,  \label{exponential trawl non-negative}
\end{equation}
so likelihood inference for such a case is easier. In the Poisson case, all the impacts are of size one, so in particular $C_{0}^{(1)}=Y_{0}$ is also observed (as $C_{0}^{\left( y\right) }=0$ for all $y\neq 1$), which allows us to bypass the conditional expectation in (\ref{exponential trawl non-negative}) for $y=1$.

\section{Exact Filter and Smoother for Exponential-Trawl Processes\label{Section: Fitlering and Smoothing}}

\subsection{Filtering\label{Section: Filtering for exponential trawl}}

In general we need to solve the filtering problem for $\mathbf{C}_{t}$ to implement (\ref{Log-Likelihood}) and (\ref{conditional intensity}). Denote the filtering probability mass function as
\begin{equation*}
p_{t,s}\left( \mathbf{j}\right) \triangleq \mathbb{P}\left( \left.
\mathbf{C}_{t}=\mathbf{j}\right\vert \mathcal{F}_{s}\right) ,\ \ \ \ \mathbf{j}=\left( ...,j_{-2},j_{-1},j_{1},j_{2},...\right) ,\ j_{y}=0,1,2,...,\ t,s\geq 0.
\end{equation*}
Also, let $\left\Vert \mathbf{j}\right\Vert _{1}\triangleq \sum_{y\in \mathbb{Z} \backslash \left\{ 0\right\} }j_{y}$ and $D_{t}\triangleq \left\Vert \mathbf{C}_{t}\right\Vert _{1}=\sum_{y\in \mathbb{Z} \backslash \left\{ 0\right\} }C_{t}^{\left( y\right) }$. Our goal here is to sequentially update $p_{t-,t-}\left( \mathbf{j}\right) $, where the initial distribution is derived from
\begin{equation*}
C_{0}^{\left( y\right) }\overset{\text{indep.}}{\backsim }\mathrm{Poisson}\left( \dfrac{\nu \left( y\right) }{\phi }\right) \text{ subject to }\sum_{y\in \mathbb{Z} \backslash \left\{ 0\right\} }yC_{0}^{\left( y\right) }=Y_{0},
\end{equation*}
so, by letting $\mathrm{Poisson}\left( x|\lambda \right) \triangleq \lambda ^{x}e^{-\lambda }/x!$, we have
\begin{equation*}
p_{0,0}\left( \mathbf{j}\right) =\dfrac{\prod_{y\in \mathbb{Z} \backslash \left\{ 0\right\} }\mathrm{Poisson}\left( j_{y}|\nu \left( y\right) /\phi \right) }{\mathbb{P}\left( \sum_{y\in \mathbb{Z} \backslash \left\{ 0\right\} }yC_{0}^{\left( y\right) }=Y_{0}\right) },
\end{equation*}
where the denominator can be numerically calculated using the inverse fast Fourier transform \cite{ShephardYang(14)}. Notice that the filtering distribution updates not only at the times when the process jumps but also during the inactivity periods. We discuss these two cases separately.

\begin{theorem}[Forward Filtering]
\label{Thm.: Filtering}

\begin{enumerate}
\item \lbrack \textbf{Update by inactivity}] Assume that the last jump time is $\tau $ (or $\tau =0$) and the current time is $t-$, where $\Delta Y_{s}=0$ for $\tau <s<t$ (and $\Delta Y_{\tau }\neq 0$ if $\tau >0$).
Then
\begin{equation}
p_{t-,t-}\left( \mathbf{j}\right) =\dfrac{e^{-\phi \left\Vert \mathbf{j}\right\Vert _{1}\left( t-\tau \right) }p_{\tau ,\tau }\left( \mathbf{j}\right) }{\sum_{\mathbf{k}}e^{-\phi \left\Vert \mathbf{k}\right\Vert _{1}\left( t-\tau \right) }p_{\tau ,\tau }\left( \mathbf{k}\right) },  \label{Filtering update by inactivity}
\end{equation}
where $p_{\tau ,\tau }$ is the filtering distribution we already know at time $\tau $.

\item \lbrack \textbf{Update by jump}] Assume that the current time is $\tau -$ and $\Delta Y_{\tau }=y$ for some $y\in \mathbb{Z} \backslash \left\{ 0\right\} $. Then
\begin{equation}
p_{\tau ,\tau }\left( \mathbf{j}\right) =\dfrac{1}{\lambda _{\tau -}^{\left( y\right) }}\left( \nu \left( y\right) p_{\tau -,\tau -}\left( \mathbf{j}-\mathbf{1}^{\left( y\right) }\right) +\phi \left( j_{-y}+1\right) p_{\tau -,\tau -}\left( \mathbf{j}+\mathbf{1}^{\left( -y\right) }\right) \right) ,  \label{Filtering update by jumps}
\end{equation}
where $p_{\tau -,\tau -}$ is the filtering distribution we already know at time $\tau -$.
\end{enumerate}
\end{theorem}

Overall, the filtering procedures (\ref{Filtering update by inactivity}) and (\ref{Filtering update by jumps}) imply that $p_{t-,t-}\left( \mathbf{j}\right) $ can be updated in continuous time without discretization errors at any set of finitely many discrete time points, so we call it an exact filter.

\begin{example}
\label{Ex.: Skellam Filtering}For the Skellam exponential-trawl process with L\'{e}vy intensities $\nu ^{+}$ and $\nu ^{-}$, we always have
\begin{equation*}
Y_{t-}=C_{t-}^{\left( +\right) }-C_{t-}^{\left( -\right) },\ \ \ \ t>0,
\end{equation*}
so knowing $p_{t-,t-}\left( j\right) \triangleq \mathbb{P}\left( \left. C_{t-}^{\left( -\right) }=j\right\vert \mathcal{F}_{t-}\right) $ immediately gives us $p_{t-,t-}\left( j,k\right) $.
Hence, the filtering updating scheme reduces to the following: starting from $\tau =0$,
\begin{eqnarray*}
p_{t-,t-}\left( j\right) &\propto &e^{-\phi \left( 2j+Y_{\tau }\right) \left( t-\tau \right) }p_{\tau ,\tau }\left( j\right) \ \ \ \ \text{if }\Delta Y_{s}=0\text{ for }\tau <s<t, \\
p_{\tau ,\tau }\left( j\right) &\propto &\nu ^{+}p_{\tau -,\tau -}\left( j\right) +\phi \left( j+1\right) p_{\tau -,\tau -}\left( j+1\right) \text{\ \ \ \ if }\Delta Y_{\tau }=1, \\
p_{\tau ,\tau }\left( j\right) &\propto &\nu ^{-}p_{\tau -,\tau -}\left( j-1\right) +\phi \left( j+Y_{\tau -}\right) p_{\tau -,\tau -}\left( j\right) \text{\ \ \ \ if }\Delta Y_{\tau }=-1.
\end{eqnarray*}
We then renormalize $p_{t-,t-}\left( j\right) $ such that $\sum_{j=0}^{\infty }p_{t-,t-}\left( j\right) =1$ in each step of the updates. Knowing the filtering distributions $p_{t-,t-}\left( j\right) $ allows us to calculate
\begin{equation*}
\mathbb{E}\left( \left. C_{t-}^{\left( -\right) }\right\vert \mathcal{F}_{t-}\right) =\sum_{j=0}^{\infty }jp_{t-,t-}\left( j\right) ,\ \mathbb{E}\left( \left. C_{t-}^{\left( +\right) }\right\vert \mathcal{F}_{t-}\right) =\sum_{j=0}^{\infty }jp_{t-,t-}\left( j\right) +Y_{t-}.
\end{equation*}
Using the following settings, with the time unit being seconds,
\begin{equation}
\nu ^{+}=0.013,\ \nu ^{-}=0.011,\ \phi =0.034,\ T=21\times 60^{2}=75,600\text{\ (sec.)},  \label{Simulated Skellam OU True Setting}
\end{equation}
Fig. \ref{Fig.: Skellam Filtering} shows a simulated path of the trawl process $Y_{t}$ together with the filtering expectations of $C_{t}^{\left( +\right) }$, $C_{t}^{\left( -\right) }$ and $D_{t}=C_{t}^{\left( +\right) }+C_{t}^{\left( -\right) }$, the total number of surviving (both positive and negative) events in the trawl at time $t$.

\begin{figure}[t]
\centering
\includegraphics[width=11.7cm]{ExponentialTrawlProcessFiltering}
\caption{\emph{Top left}: A simulated path for the Skellam exponential-trawl process $Y_{t}$.
\emph{Top right}, \emph{Bottom left}, \emph{Bottom right}: Paths of the true hidden counting processes $C_{t}^{\left( +\right) }$, $C_{t}^{\left( -\right) }$ and $D_{t}=C_{t}^{\left( +\right) }+C_{t}^{\left( -\right) }$ of surviving events in the trawl along with their filtering estimates. Code: \texttt{EPTprocess\_FilteringSmoothing\_Illustration.R}}
\label{Fig.: Skellam Filtering}
\end{figure}
\end{example}

\subsection{Smoothing}

We now consider the smoothing procedure for the exponential-trawl process $Y_{t}$, which is necessary for the likelihood inference based on the EM algorithm we will see in a moment. Running the filtering procedure up to time $T$, we then start from $p_{T,T}$ to conduct the smoothing procedure.

\begin{theorem}[Backward Smoothing]
\label{Thm.: Smoothing}

\begin{enumerate}
\item \lbrack \textbf{Update by inactivity}] Assume that the (backward) last jump time is $\tau $ (or $\tau =T$) and the current time is $t$, where $\Delta Y_{s}=0$ for $t\leq s<\tau $ (and $\Delta Y_{\tau }\neq 0$ if $\tau <T$). Then
\begin{equation*}
p_{t,T}\left( \mathbf{j}\right) =p_{\tau -,T}\left( \mathbf{j}\right) ,
\end{equation*}
where $p_{\tau -,T}$ is the smoothing distribution we already know at time $\tau -$.

\item \lbrack \textbf{Update by jump}] Assume that the current time is $\tau $ and $\Delta Y_{\tau }=y$ for some $y\in \mathbb{Z} \backslash \left\{ 0\right\} $.
Then
\begin{equation}
p_{\tau -,T}\left( \mathbf{j}\right) =\dfrac{p_{\tau -,\tau -}\left( \mathbf{j}\right) }{\lambda _{\tau -}^{\left( y\right) }}\left( \nu \left( y\right) \dfrac{p_{\tau ,T}\left( \mathbf{j}+\mathbf{1}^{\left( y\right) }\right) }{p_{\tau ,\tau }\left( \mathbf{j}+\mathbf{1}^{\left( y\right) }\right) }+\phi j_{-y}\dfrac{p_{\tau ,T}\left( \mathbf{j}-\mathbf{1}^{\left( -y\right) }\right) }{p_{\tau ,\tau }\left( \mathbf{j}-\mathbf{1}^{\left( -y\right) }\right) }\right) ,  \label{Smoothing updating over jump}
\end{equation}
where $p_{\tau -,\tau -}$ and $p_{\tau ,\tau }$ are from the forward filtering procedure and $p_{\tau ,T}$ is the smoothing distribution we already know at time $\tau $.
\end{enumerate}
\end{theorem}

The two terms in (\ref{Smoothing updating over jump}) are
\begin{equation*}
\mathbb{P}\left( \left. \mathbf{C}_{\tau -}=\mathbf{j},\mathbf{C}_{\tau }=\mathbf{j}+\mathbf{1}^{\left( y\right) }\right\vert \mathcal{F}_{T}\right) \text{ and }\mathbb{P}\left( \left. \mathbf{C}_{\tau -}=\mathbf{j},\mathbf{C}_{\tau }=\mathbf{j}-\mathbf{1}^{\left( -y\right) }\right\vert \mathcal{F}_{T}\right)
\end{equation*}
respectively, so, in particular,
\begin{eqnarray}
\mathbb{P}\left( \left. \Delta C_{\tau }^{\left( y\right) }=1\right\vert \mathcal{F}_{T}\right) &=&\sum_{\mathbf{j}}\dfrac{p_{\tau -,\tau -}\left( \mathbf{j}\right) }{\lambda _{\tau -}^{\left( y\right) }}\left( \nu \left( y\right) \dfrac{p_{\tau ,T}\left( \mathbf{j}+\mathbf{1}^{\left( y\right) }\right) }{p_{\tau ,\tau }\left( \mathbf{j}+\mathbf{1}^{\left( y\right) }\right) }\right) ,  \label{Smoothing probability of arrival} \\
\mathbb{P}\left( \left.
\Delta C_{\tau }^{\left( -y\right) }=-1\right\vert \mathcal{F}_{T}\right) &=&\sum_{\mathbf{j}}\dfrac{p_{\tau -,\tau -}\left( \mathbf{j}\right) }{\lambda _{\tau -}^{\left( y\right) }}\left( \phi j_{-y}\dfrac{p_{\tau ,T}\left( \mathbf{j}-\mathbf{1}^{\left( -y\right) }\right) }{p_{\tau ,\tau }\left( \mathbf{j}-\mathbf{1}^{\left( -y\right) }\right) }\right) .  \label{Smoothing probability of departure}
\end{eqnarray}
These (total) weights in (\ref{Smoothing updating over jump}) will be recorded for every jump time $\tau $ as by-products of the smoothing procedure, for later they will play important roles in the EM algorithm introduced in Subsection \ref{Section: EM Algorithm}.

\begin{example}[Continued from Example \protect\ref{Ex.: Skellam Filtering}]
For the Skellam exponential-trawl process, the smoothing updating scheme reduces to the following: starting from $\tau =T$,
\begin{eqnarray*}
p_{t,T}\left( j\right) &=&p_{\tau -,T}\left( j\right) \ \ \ \ \text{if }\Delta Y_{s}=0\text{ for }t\leq s<\tau , \\
p_{\tau -,T}\left( j\right) &\propto &p_{\tau -,\tau -}\left( j\right) \left( \nu ^{+}\dfrac{p_{\tau ,T}\left( j\right) }{p_{\tau ,\tau }\left( j\right) }+\phi j\dfrac{p_{\tau ,T}\left( j-1\right) }{p_{\tau ,\tau }\left( j-1\right) }\right) \text{\ \ \ \ if }\Delta Y_{\tau }=1, \\
p_{\tau -,T}\left( j\right) &\propto &p_{\tau -,\tau -}\left( j\right) \left( \nu ^{-}\dfrac{p_{\tau ,T}\left( j+1\right) }{p_{\tau ,\tau }\left( j+1\right) }+\phi \left( Y_{\tau -}+j\right) \dfrac{p_{\tau ,T}\left( j\right) }{p_{\tau ,\tau }\left( j\right) }\right) \text{\ \ \ \ if }\Delta Y_{\tau }=-1.
\end{eqnarray*}
We also renormalize $p_{t,T}\left( j\right) $ in each step of the updates. Using the same simulated path and the same setting (\ref{Simulated Skellam OU True Setting}) as in Example \ref{Ex.: Skellam Filtering}, we show the smoothing expectations of $C_{t}^{\left( +\right) }$, $C_{t}^{\left( -\right) }$ and $D_{t}$ in Fig. \ref{Fig.: Skellam Smoothing}.
For most of the time, the smoothing expectations match the truth quite well and remove those peaks of the filtering expectations resulting from departures (such as the one close to $t=400$ in the plot for $C_{t}^{\left( -\right) }$).

\begin{figure}[th]
\centering
\includegraphics[width=11.7cm]{ExponentialTrawlProcessSmoothing}
\caption{\emph{Top left}: A simulated path for the Skellam exponential-trawl process $Y_{t}$. \emph{Top right}, \emph{Bottom left}, \emph{Bottom right}: Paths of the true hidden counting processes $C_{t}^{\left( +\right) }$, $C_{t}^{\left( -\right) }$ and $D_{t}=C_{t}^{\left( +\right) }+C_{t}^{\left( -\right) }$ of surviving events in the trawl along with their smoothing estimates. Code: \texttt{EPTprocess\_FilteringSmoothing\_Illustration.R}}
\label{Fig.: Skellam Smoothing}
\end{figure}
\end{example}

Now we are capable of conducting likelihood inference for exponential-trawl processes as one of the most important applications of the filtering and smoothing procedures we have built here.

\section{Likelihood Inference for General Exponential-Trawl Processes\label{Section: Likelihood inferences}}

It has been reported by \cite{BarndorffNielsenLundeShephardVeraart(14)} and \cite{ShephardYang(14)} that moment-based inference for the family of trawl processes can be easily performed, but such inference depends arbitrarily on the design of the procedure. In this Section, we focus on the maximum likelihood estimate (MLE) calculation for exponential-trawl processes with a general L\'{e}vy basis and demonstrate its correctness using several examples.
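The exact filtering recursions of Example \ref{Ex.: Skellam Filtering}, which underpin the likelihood evaluations below, can be sketched as follows. The Chapter's implementation is in R; this Python fragment, its truncation level \texttt{J\_MAX}, the assumed empty initial trawl, and the artificial event record are all purely illustrative:

```python
import math

J_MAX = 60  # truncation of the support of C^{(-)}; an assumption of this sketch

def normalize(p):
    s = sum(p)
    return [q / s for q in p]

def update_inactivity(p, y_tau, phi, gap):
    """p_{t-,t-}(j) proportional to exp(-phi*(2j + Y_tau)*(t - tau)) * p_{tau,tau}(j)."""
    return normalize([math.exp(-phi * (2 * j + y_tau) * gap) * q
                      for j, q in enumerate(p)])

def update_jump(p, y_before, dy, nu_pos, nu_neg, phi):
    """Jump update for Delta Y_tau = +1 or -1 (Skellam case, C^{(-)} marginal)."""
    if dy == 1:
        new = [nu_pos * p[j] + phi * (j + 1) * (p[j + 1] if j + 1 <= J_MAX else 0.0)
               for j in range(J_MAX + 1)]
    else:
        new = [nu_neg * (p[j - 1] if j >= 1 else 0.0) + phi * (j + y_before) * p[j]
               for j in range(J_MAX + 1)]
    return normalize(new)

# Start from C_0^{(-)} = 0 known (Y_0 = 0 with an empty trawl assumed).
p = [1.0] + [0.0] * J_MAX
y = 0
# A short artificial event record: (waiting time since last jump, jump size).
for gap, dy in [(3.0, 1), (1.5, 1), (2.0, -1), (10.0, -1)]:
    p = update_inactivity(p, y, 0.034, gap)
    p = update_jump(p, y, dy, nu_pos=0.013, nu_neg=0.011, phi=0.034)
    y += dy
print(sum(j * q for j, q in enumerate(p)))  # E(C_{t}^{(-)} | F_t)
```

The renormalization after each step mirrors the proportionality statements in the example, so the filter stays a proper probability mass function throughout.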
\subsection{MLE Calculation based on Filtering}

Recall that the evaluation of the log-likelihood (\ref{Log-Likelihood}) requires the calculation of the conditional intensities $\lambda _{t-}^{\left( y\right) }$ and their integrals
\begin{equation}
\int_{t\in (0,T]}\lambda _{t-}^{\left( y\right) }\mathrm{d}t=\nu \left( y\right) T+\phi \sum_{\mathbf{j}}j_{-y}\int_{t\in (0,T]}p_{t-,t-}\left( \mathbf{j}\right) \mathrm{d}t,  \label{Intensity integral}
\end{equation}
which follows from (\ref{conditional intensity}) and $\mathbb{E}\left( \left. C_{t-}^{\left( -y\right) }\right\vert \mathcal{F}_{t-}\right) =\sum_{\mathbf{j}}j_{-y}p_{t-,t-}\left( \mathbf{j}\right) $. However, we do not know the integral $\int_{t\in (0,T]}p_{t-,t-}\left( \mathbf{j}\right) \mathrm{d}t$ analytically, as the denominator in (\ref{Filtering update by inactivity}) also depends on $t$. Hence, we have to calculate (\ref{Filtering update by inactivity}) on a dense grid of time points---separated by a time gap $\delta _{\mathrm{inactivity}}$ during the inactivity periods---and approximate (\ref{Intensity integral}) by linear interpolation. Clearly, the smaller the time gap $\delta _{\mathrm{inactivity}}$, the smaller the numerical error in (\ref{Intensity integral}) but the larger the computational burden.

\begin{example}
\label{Ex.: MLE FIltering large vs small delta}Using the true parameters in (\ref{Simulated Skellam OU True Setting}) and simulating a $10$-day-long data set with $T=756,000$ (sec.), Fig. \ref{Fig.: Comparison btw. different delta} shows how an inappropriate choice of $\delta _{\mathrm{inactivity}}$ will depict a wrong log-likelihood surface no matter how much correctly simulated data we supply, where the comparison is made with respect to the first-day portion ($75,600$ (sec.)) of the $10$-day-long simulated data.
\begin{figure}[th]
\centering
\includegraphics[width=11.7cm]{EPTprocess_LogLikelihoodPlotsOverPhi_Small_vs_Large}
\caption{Log-likelihood plots over $\protect\phi $ (with $\protect\nu ^{+}$ and $\protect\nu ^{-}$ fixed at the truth) using different $\protect\delta _{\mathrm{inactivity}}$ and a simulated $10$-day-long ($T=756,000$ (sec.)) Skellam exponential-trawl process. The one-day-long data is the first tenth of the simulated data. The dashed lines indicate the true value of $\protect\phi $, while the solid lines indicate the optimal value of $\protect\phi $ in each plot. The $p$-values using the likelihood ratio test are $0.104\%$ (\emph{Top left}), $21.0\%$ (\emph{Bottom left}), $8.82\times 10^{-13}$ (\emph{Top right}) and $46.1\%$ (\emph{Bottom right}). Code: \texttt{EPTprocess\_MLE\_Inference\_Simulation\_Small\_vs\_Large.R}}
\label{Fig.: Comparison btw. different delta}
\end{figure}

Using the same one-day-long data, Fig. \ref{Fig.: Log-likelihood plots over intensities} also shows the corresponding log-likelihood function over $\nu ^{+}$ or $\nu ^{-}$ with other parameters fixed at the truth. Including the bottom left panel of Fig. \ref{Fig.: Comparison btw. different delta}, all of the MLE's (solid lines) are reasonably close to the true values (dashed lines) and the likelihood ratio tests suggest that $p$-values are all greater than $20\%$.

\begin{figure}[th]
\centering
\includegraphics[width=11.7cm]{EPTprocess_LogLikelihoodPlotsOverPsi}
\caption{Log-likelihood plots over either $\protect\nu ^{+}$ or $\protect\nu ^{-}$ for one simulated Skellam exponential-trawl process. The dashed lines indicate the true value, while the solid lines indicate the optimal value of $\protect\nu ^{+}$ or $\protect\nu ^{-}$ in the individual plot. The $p$-values using the likelihood ratio test are $40.5\%$ (\emph{Left}) and $33.4\%$ (\emph{Right}).
Code: \texttt{EPTprocess\_MLE\_Inference\_Simulation\_Small\_vs\_Large.R}}
\label{Fig.: Log-likelihood plots over intensities}
\end{figure}
\end{example}

\subsection{Complete-Data Likelihood Inference}

Even though in general it would be computationally expensive to calculate the MLE by direct filtering, the maximum complete-data likelihood estimate (MCLE) is much simpler. A comprehensive analysis of the complete-data likelihood inference is performed in the following. Let $N_{t}^{\left( y\right) ,\mathrm{A}}$ and $N_{t}^{\left( y\right) ,\mathrm{D}}$ be the counting processes of the arrivals of new size $y$ events and the departures of old size $y$ events during the period $(0,T]$. Also let
\begin{equation*}
N_{t}^{\mathrm{type}}\triangleq \sum_{y\in \mathbb{Z} \backslash \left\{ 0\right\} }N_{t}^{\left( y\right) ,\mathrm{type}},\text{\ \ \ \ }\mathrm{type}=\mathrm{A},\mathrm{D}.
\end{equation*}

\begin{theorem}
\label{Thm.: MCLE}The complete-data log-likelihood function of the exponential-trawl process is (ignoring the constant)
\begin{eqnarray}
l_{\mathcal{C}_{T}}\left( \mathbf{\theta }\right) &=&\sum_{y\in \mathbb{Z} \backslash \left\{ 0\right\} }\left( \log \left( \nu \left( y\right) \right) \left( N_{T}^{\left( y\right) ,\mathrm{A}}+C_{0}^{\left( y\right) }\right) -\nu \left( y\right) \left( T+\phi ^{-1}\right) \right)  \label{Complete-data log-likelihood with exp trawl} \\
&&+\log \left( \phi \right) \left( N_{T}^{\mathrm{D}}-D_{0}\right) -\phi \int_{t\in (0,T]}D_{t-}\mathrm{d}t,  \notag
\end{eqnarray}
so the corresponding MCLE's for the L\'{e}vy measure and the trawl parameter are
\begin{eqnarray}
\hat{\nu}_{\mathrm{MCLE}}\left( y\right) &=&\dfrac{N_{T}^{\left( y\right) ,\mathrm{A}}+C_{0}^{\left( y\right) }}{T+\hat{\phi}_{\mathrm{MCLE}}^{-1}},\ \ \ \ y\in \mathbb{Z} \backslash \left\{ 0\right\} ,  \label{Complete-data MLE} \\
\hat{\phi}_{\mathrm{MCLE}} &=&\frac{\Xi _{T}+\sqrt{\Xi _{T}^{2}+4\dfrac{N_{T}^{\mathrm{A}}+N_{T}^{\mathrm{D}}}{T}\int_{t\in (0,T]}D_{t-}\mathrm{d}t}}{2\int_{t\in (0,T]}D_{t-}\mathrm{d}t},  \notag \\
\Xi _{T} &\triangleq &N_{T}^{\mathrm{D}}-D_{0}-\dfrac{1}{T}\int_{t\in (0,T]}D_{t-}\mathrm{d}t.  \notag
\end{eqnarray}
Furthermore, the MCLE's above are strongly consistent: with probability $1$, as $T\rightarrow \infty $,
\begin{equation*}
\hat{\phi}_{\mathrm{MCLE}}\rightarrow \phi \text{ and }\hat{\nu}_{\mathrm{MCLE}}\left( y\right) \rightarrow \nu \left( y\right) ,\ \ \ \ y\in \mathbb{Z} \backslash \left\{ 0\right\} .
\end{equation*}
\end{theorem}

We note that $\widehat{\phi }_{\mathrm{MCLE}}$ depends on $\int_{t\in (0,T]}D_{t-}\mathrm{d}t$, the total number of possible departures at risk, weighted by time, during the period $(0,T]$.

\subsection{MLE Calculation based on EM Algorithm\label{Section: EM Algorithm}}

In this Subsection, we introduce an EM algorithm that is particularly suitable for exponential-trawl processes, as there are no discretization errors. The EM algorithm is also computationally efficient. Compared with generic optimization methods like \emph{limited-memory BFGS (L-BFGS)}, the updating scheme suggested by EM can converge to the MLE in fewer steps and with no error. Clearly, the use of EM needs some extra computations in each step for backward smoothing, but in aggregate EM performs much faster than L-BFGS as EM skips the intermediate filtering calculations during the inactivity periods.

\begin{description}
\item[$E$-Step] The linear form of the complete-data log-likelihood (\ref{Complete-data log-likelihood with exp trawl}) allows us to easily take its expectation with respect to $\mathbb{P}\left( \cdot |\mathcal{F}_{T}\right) $ (under a set of old estimated parameters $\mathbf{\hat{\theta}}_{\mathrm{old}}$), which then requires the calculation of the following quantities using the smoothing distribution $p_{t,T}$:
\begin{eqnarray}
\mathbb{E}\left( \left. N_{T}^{\left( y\right) ,\mathrm{A}}\right\vert \mathcal{F}_{T}\right) &=&\sum_{0<t\leq T}\mathbb{P}\left( \left.
\Delta C_{t}^{\left( y\right) }=1\right\vert \mathcal{F}_{T}\right) ,  \label{All the missing data smoothing expectation} \\
\mathbb{E}\left( \left. N_{T}^{\left( y\right) ,\mathrm{D}}\right\vert \mathcal{F}_{T}\right) &=&\sum_{0<t\leq T}\mathbb{P}\left( \left. \Delta C_{t}^{\left( y\right) }=-1\right\vert \mathcal{F}_{T}\right) ,  \notag \\
\mathbb{E}\left( \left. C_{0}^{\left( y\right) }\right\vert \mathcal{F}_{T}\right) &=&\sum_{\mathbf{j}}j_{y}p_{0,T}\left( \mathbf{j}\right) ,\ \mathbb{E}\left( \left. D_{0}\right\vert \mathcal{F}_{T}\right) =\sum_{\mathbf{j}}\left\Vert \mathbf{j}\right\Vert _{1}p_{0,T}\left( \mathbf{j}\right) ,  \notag \\
\mathbb{E}\left( \left. D_{t-}\right\vert \mathcal{F}_{T}\right) &=&\sum_{\mathbf{j}}\left\Vert \mathbf{j}\right\Vert _{1}p_{t-,T}\left( \mathbf{j}\right) ,  \notag
\end{eqnarray}
where (\ref{Smoothing probability of arrival}) and (\ref{Smoothing probability of departure}) will be used extensively. Note that $\mathbb{E}\left( D_{t-}|\mathcal{F}_{T}\right) $ will be a step function of $t$, so the calculation of $\int_{t\in (0,T]}\mathbb{E}\left( D_{t-}|\mathcal{F}_{T}\right) \mathrm{d}t$ is trivially exact.

\item[$M$-Step] Since the $E$-Step generates a $Q$ function that takes the same functional form in $\mathbf{\theta }$ as (\ref{Complete-data log-likelihood with exp trawl}), the solution to the $M$-Step takes the same form as the MCLE in (\ref{Complete-data MLE}), where we just replace each of the hidden-data-related terms by their smoothing expectations in (\ref{All the missing data smoothing expectation}). This can also be viewed as a representation of the plug-in principle for (\ref{Complete-data MLE}), i.e., replacing the unknown quantities (e.g. $1_{\left\{ \Delta C_{t}^{\left( y\right) }=1\right\} }$) by the known ones (e.g. $\mathbb{P}\left( \left. \Delta C_{t}^{\left( y\right) }=1\right\vert \mathcal{F}_{T}\right) $). We then use the solution of this $M$-Step for the next iteration.
\end{description}

\begin{example}[Continued from Example \protect\ref{Ex.: Skellam Filtering}]
\label{Comparison between EM and L-BFGS} Using the same simulated Skellam exponential-trawl process path, Table \ref{Tab.: LBFGS vs EM} compares the MLE derived from (i) the \texttt{L-BFGS-B} procedure in the \texttt{optim} function of the \texttt{R} language (using the default tolerance settings) with that from (ii) the EM algorithm (using the same initial parameter value), which stops once each parameter differs by less than a uniform tolerance $10^{-6}$.

\begin{table}[tbp]
\caption{The MLE calculations on one simulated Skellam exponential-trawl process using the \texttt{L-BFGS-B} procedure in \texttt{R} (with default settings) and the EM algorithm (with uniform tolerance $10^{-6}$ on the parameter space). The \texttt{R} elapsed time is $137.4$ (sec.) for \texttt{L-BFGS-B} and $3.3$ (sec.) for EM, which is about a 40-times speed-up. Code: \texttt{EPTprocess\_MLE\_Inference\_Simulation\_LBFGS\_vs\_EM.R}}
\label{Tab.: LBFGS vs EM}\input{LBFGS_vs_EM_table}
\end{table}

As expected, using the EM algorithm gives estimates that are very close to those from the direct optimization of the log-likelihood function (using $\delta _{\mathrm{inactivity}}=0.5$). An interesting feature here is that the MLE found by the EM algorithm has a slightly larger log-likelihood value (even for $\delta _{\mathrm{inactivity}}=0.01$) than that found by \texttt{L-BFGS-B}, which might be attributed to the numerical insufficiency of the default optimization tolerance setting of \texttt{R}. The \texttt{L-BFGS-B} procedure uses $27$ evaluations of the filtering procedure ($9$ of them for objective function evaluations and $18$ of them for numerical gradients); as a comparison, the EM algorithm takes $12$ evaluations of the filtering procedure plus $12$ more of the smoothing procedure. In aggregate, the EM algorithm is over $40$ times faster than \texttt{L-BFGS-B} in terms of computation time.
\end{example}

Starkly different from Example \ref{Ex.: MLE FIltering large vs small delta}, the EM algorithm does not require the fine evaluation of the integrals of $\lambda _{t-}^{\left( y\right) }$, so not only is the filtering procedure in each iteration of the EM faster (as it skips the grid calculations of $\lambda _{t-}^{\left( y\right) }$ during the inactivity periods) but also the convergent result of EM maximizes the \emph{numerically errorless} log-likelihood (as conducting EM has nothing to do with $\delta _{\mathrm{inactivity}}$). In conclusion, using the EM algorithm to search for the MLE of exponential-trawl processes dominates the direct optimization of the log-likelihood both in numerical quality and in computation speed.

\subsection{Likelihood Inference without the Initial Information}

If we consider the complete-data log-likelihood given the information $\mathbf{C}_{0}$, i.e. $l_{\mathcal{C}_{T}|\mathbf{C}_{0}}\left( \mathbf{\theta }\right) $, then the MCLE's are even simpler:
\begin{equation*}
\hat{\nu}_{\mathrm{MCLE}}\left( y\right) =\dfrac{N_{T}^{\left( y\right) ,\mathrm{A}}}{T},\ \hat{\phi}_{\mathrm{MCLE}}=\frac{N_{T}^{\mathrm{D}}}{\int_{t\in (0,T]}D_{t-}\mathrm{d}t}.
\end{equation*}
Note that these estimates are the most natural frequency estimates provided that we know the hidden state process $\mathbf{C}_{t}$: $\nu \left( y\right) $ is estimated by the sample intensity of all the arrivals of size $y$ events, while $\phi ^{-1}$ is estimated by the average lifetime among all the departures of the temporary events, for the lifetime of any temporary event is exponentially distributed with mean $1/\phi $.

However, there is a subtle statistical inconsistency if one wants to build an EM\ algorithm based on $l_{\mathcal{C}_{T}|\mathbf{C}_{0}}\left( \mathbf{\theta }\right) $.
In practice, all the initial values $C_{0}^{\left( y\right) }$'s are unknown, so the only way we can work with $l_{\mathcal{C}_{T}|\mathbf{C}_{0}}\left( \mathbf{\theta }\right) $ is to treat them as nuisance parameters. Thus, the EM\ $Q$ function is defined by
\begin{equation*}
Q\left( \mathbf{\theta }^{\prime },\mathbf{C}_{0}^{\prime }|\mathbf{\theta },\mathbf{C}_{0}\right) =\mathbb{E}_{\mathbf{\theta }}\left( \left. l_{\mathcal{C}_{T}|\mathbf{C}_{0}^{\prime }}\left( \mathbf{\theta }^{\prime }\right) \right\vert \mathbf{C}_{0},\mathcal{F}_{T}\right) ,
\end{equation*}
which not only requires the smoothing scheme based on $\mathbb{P}_{\mathbf{\theta }}\left( \cdot |\mathcal{F}_{T},\mathbf{C}_{0}\right) $---not $\mathbb{P}_{\mathbf{\theta }}\left( \cdot |\mathcal{F}_{T}\right) $---but also finally gives us the MLE of the \emph{joint} log-likelihood function $l_{\mathcal{F}_{T}}\left( \mathbf{\theta },\mathbf{C}_{0}\right) $---not the MLE of $l_{\mathcal{F}_{T}}\left( \mathbf{\theta }\right) $ nor of $l_{\mathcal{F}_{T}|Y_{0}}\left( \mathbf{\theta }\right) $.
On the other hand, one might also define the EM $Q$ function as
\begin{equation*}
Q\left( \mathbf{\theta }^{\prime }|\mathbf{\theta }\right) =\mathbb{E}_{\mathbf{\theta }}\left( l_{\mathcal{C}_{T}|\mathbf{C}_{0}}\left( \mathbf{\theta }^{\prime }\right) |\mathcal{F}_{T}\right) ,
\end{equation*}
but in this case
\begin{equation*}
Q\left( \mathbf{\theta }|\mathbf{\theta }\right) =l_{\mathcal{F}_{T}|Y_{0}}\left( \mathbf{\theta }\right) -\mathbb{E}_{\mathbf{\theta }}\left( l_{\mathbf{C}_{0}|Y_{0}}\left( \mathbf{\theta }\right) |\mathcal{F}_{T}\right) \neq l_{\mathcal{F}_{T}|Y_{0}}\left( \mathbf{\theta }\right) ,
\end{equation*}
which then \emph{breaks} the fundamental monotonicity that guarantees the validity of EM:
\begin{equation*}
l_{\mathcal{F}_{T}|Y_{0}}\left( \mathbf{\theta }^{\ast }\right) \geq Q\left( \mathbf{\theta }^{\ast }|\mathbf{\theta }\right) =\max\limits_{\mathbf{\theta }^{\prime }}Q\left( \mathbf{\theta }^{\prime }|\mathbf{\theta }\right) \geq Q\left( \mathbf{\theta }|\mathbf{\theta }\right) =l_{\mathcal{F}_{T}|Y_{0}}\left( \mathbf{\theta }\right) .
\end{equation*}
Therefore, even though direct filtering allows the calculation of the MLE whether or not we include the initial information $Y_{0}$ (i.e. whether we maximize $l_{\mathcal{F}_{T}}\left( \mathbf{\theta }\right) $ or $l_{\mathcal{F}_{T}|Y_{0}}\left( \mathbf{\theta }\right) $), a \emph{correct} EM-based inference automatically enforces the consideration of $Y_{0}$ (i.e. maximizing $l_{\mathcal{F}_{T}}\left( \mathbf{\theta }\right) $ using EM). This is a bit different from likelihood inference for marked point processes, which usually ignores the effect of the initial value $Y_{0}$. This mild difference clearly disappears asymptotically as $T\rightarrow \infty $, but here we still prefer to present a complete likelihood analysis for trawl processes instead of treating them the same as marked point processes.
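The closed-form MCLE of Theorem \ref{Thm.: MCLE}---and hence the $M$-Step, after the plug-in of smoothing expectations---is a one-line computation once the complete-data sufficient statistics are in hand. A minimal sketch with purely hypothetical sufficient statistics (all names and numbers illustrative):

```python
import math

def mcle(n_arrivals_by_size, n_arr_total, n_dep_total, d0, int_D, T):
    """Maximum complete-data likelihood estimates for an exponential-trawl
    process, from the sufficient statistics:
      n_arrivals_by_size[y] = N_T^{(y),A} + C_0^{(y)},
      n_arr_total = N_T^A, n_dep_total = N_T^D,
      d0 = D_0, int_D = integral of D_{t-} over (0, T].
    """
    xi = n_dep_total - d0 - int_D / T  # Xi_T in the theorem
    phi_hat = (xi + math.sqrt(xi ** 2 + 4 * (n_arr_total + n_dep_total) / T * int_D)) \
        / (2 * int_D)
    nu_hat = {y: n / (T + 1.0 / phi_hat) for y, n in n_arrivals_by_size.items()}
    return phi_hat, nu_hat

# Hypothetical sufficient statistics for illustration only.
phi_hat, nu_hat = mcle({1: 95, -1: 81}, n_arr_total=170, n_dep_total=168,
                       d0=6, int_D=5000.0, T=10000.0)
print(phi_hat, nu_hat)
```

The square-root expression is the positive root of the quadratic first-order condition in $\phi $, so the sketch can be checked by substituting $\hat{\phi}$ back into that quadratic.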
\section{Likelihood Inference for Non-negative Exponential-Trawl Processes \label{Section: Non-Negative case likelihood}}

In this Section, we focus on exponential-trawl processes that are always non-negative. All the negative movements of such a process must then be attributed to departures of positive events from the trawl, so it is natural to split up $Y$ into the counting processes of size $y$ jumps
\begin{equation*}
N_{t}^{\left( y\right) }\triangleq \sum_{0<s\leq t}1_{\left\{ \Delta Y_{s}=y\right\} },\ \ \ \ y\in \mathbb{Z}\backslash \left\{ 0\right\} ,
\end{equation*}
which relate to $C_{t}^{\left( y\right) }$ via
\begin{equation}
C_{t}^{\left( y\right) }=C_{0}^{\left( y\right) }+N_{t}^{\left( y\right) }-N_{t}^{\left( -y\right) }. \label{C counting process and N counting process}
\end{equation}
Then, as mentioned at the end of Subsection \ref{Sect: Trawl process decomposition},
\begin{equation*}
Y_{t}=\sum_{y=1}^{\infty }yC_{t}^{\left( y\right) }=\sum_{y=1}^{\infty }yC_{0}^{\left( y\right) }+\sum_{y=1}^{\infty }y(N_{t}^{\left( y\right) }-N_{t}^{\left( -y\right) }).
\end{equation*}
Clearly, the path of $Y_{t}$ reveals the path of each individual $N_{t}^{\left( y\right) }$ for $y\in \mathbb{Z}\backslash \left\{ 0\right\} $, so $N_{t}^{\left( y\right) }\in \mathcal{F}_{t}$. Thus, the only unknown objects here are the $C_{0}^{\left( y\right) }$'s, for we just see $Y_{0}=\sum_{y=1}^{\infty }yC_{0}^{\left( y\right) }$ and all the departures resulting from the $C_{0}^{\left( y\right) }$'s. If we knew the $C_{0}^{\left( y\right) }$'s, then we would see the complete path of each $C_{t}^{\left( y\right) }$ and hence likelihood inference would be particularly tractable.
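The bookkeeping above is purely mechanical and can be checked in a few lines of Python: from the observed jumps of $Y$ alone we recover every counting process $N_{t}^{\left( y\right) }$ and reconstruct $Y_{t}$ from $Y_{0}$. The jump record and initial value below are illustrative, not taken from the paper's simulations.

```python
# Mechanical check of the decomposition above: the observed jumps of Y
# determine every counting process N^{(y)}, and Y_t is recovered as
# Y_0 + sum_y y * (N_t^{(y)} - N_t^{(-y)}).
jumps = [(0.7, 2), (1.3, -2), (2.1, 1), (3.5, 3), (4.0, -1), (4.8, -3)]
Y0 = 5  # observed initial value Y_0 (illustrative)

def N(t, y):
    """Counting process of size-y jumps of Y up to and including time t."""
    return sum(1 for s, dy in jumps if s <= t and dy == y)

def Y(t):
    """Reconstruct Y_t from Y_0 and the counting processes."""
    sizes = {abs(dy) for _, dy in jumps}
    return Y0 + sum(y * (N(t, y) - N(t, -y)) for y in sizes)

# Direct bookkeeping along the path gives the same value.
assert Y(4.2) == Y0 + sum(dy for s, dy in jumps if s <= 4.2)  # both equal 8
```

The remaining unknowns are exactly the $C_{0}^{\left( y\right) }$'s, which cannot be read off the path in this way.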
\subsection{Partial Likelihood Inference}

We can specialize Corollary \ref{Cor.: log-likelihood for general trawl processes} using (\ref{exponential trawl non-negative}) and write down the log-likelihood for the non-negative case (ignoring the constant):
\begin{eqnarray*}
l_{\mathcal{F}_{T}}\left( \mathbf{\theta }\right) &=&\sum_{y=1}^{\infty }\left( \log \left( \nu \left( y\right) \right) N_{T}^{\left( y\right) }-\nu \left( y\right) T\right) \\
&&+\log \left( \phi \right) N_{T}^{\left( -\right) }-\phi \int_{t\in (0,T]}\mathbb{E}_{\mathbf{\theta }}\left( \left. C_{t-}^{\left( +\right) }\right\vert \mathcal{F}_{t-}\right) \mathrm{d}t \\
&&+\sum_{0<t\leq T}\sum_{y=1}^{\infty }\log \mathbb{E}_{\mathbf{\theta }}\left( \left. C_{t-}^{(y)}\right\vert \mathcal{F}_{t-}\right) 1_{\left\{ \Delta Y_{t}=-y\right\} }+l_{Y_{0}}\left( \mathbf{\theta }\right) ,
\end{eqnarray*}
where $N_{T}^{\left( -\right) }\triangleq \sum_{y=1}^{\infty }N_{T}^{\left( -y\right) }$ and $C_{t-}^{\left( +\right) }\triangleq \sum_{y=1}^{\infty }C_{t-}^{\left( y\right) }$. As in the general case we studied in Section \ref{Section: Likelihood inferences}, there are no analytic expressions available for the filtering expectations $\mathbb{E}_{\mathbf{\theta }}\left( \left. C_{t-}^{(y)}\right\vert \mathcal{F}_{t-}\right) $ and the initial likelihood $l_{Y_{0}}\left( \mathbf{\theta }\right) $, so finding $\mathbf{\hat{\theta}}_{\mathrm{MLE}}$ also requires the EM techniques we introduced before. However, the first part of $l_{\mathcal{F}_{T}}\left( \mathbf{\theta }\right) $, which involves the $\nu \left( y\right) $'s, is analytically tractable, so this leads us to consider the following maximum partial likelihood estimate (MPLE) of the L\'{e}vy measure:
\begin{equation*}
\hat{\nu}_{\mathrm{MPLE}}\left( y\right) =\dfrac{N_{T}^{\left( y\right) }}{T},\ \ \ \ y=1,2,3,...,
\end{equation*}
which is a non-parametric moment estimate that is natural in the non-negative setting.
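The non-parametric MPLE is trivial to compute once the jump sizes have been read off an observed path, as the following hedged sketch shows; only the upward jumps (the arrivals) enter the estimate, since in the non-negative setting the downward jumps are departures and carry information about $\phi $ instead. The jump record and horizon below are made up for illustration.

```python
# Minimal sketch of the non-parametric MPLE: nu_hat(y) = N_T^{(y)} / T.
T = 50.0
jump_sizes = [1, 2, 1, 1, 3, 2, 1, -1, -2, 1, -1, 2, -3, 1, -2, -1, -1, -2, -1, -1]

def nu_mple(y, T, jump_sizes):
    """MPLE of the Levy measure at y = 1, 2, 3, ..."""
    assert y >= 1
    return sum(1 for dy in jump_sizes if dy == y) / T

print(nu_mple(1, T, jump_sizes))  # 6 size-1 arrivals over T = 50 -> 0.12
```

In the parametric variant, the same upward-jump counts $N_{T}^{\left( y\right) }$ are the only data entering the partial likelihood to be maximised over $\eta $.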
Even though $\hat{\nu}_{\mathrm{MPLE}}$ is not $\hat{\nu}_{\mathrm{MLE}}$, it has several advantages. First, it is strongly consistent, i.e., with probability $1$, $\hat{\nu}_{\mathrm{MPLE}}\left( y\right) \rightarrow \nu \left( y\right) $ as $T\rightarrow \infty $. Second, it is asymptotically equivalent to the MCLE, because
\begin{equation*}
\hat{\nu}_{\mathrm{MCLE}}\left( y\right) =\dfrac{N_{T}^{\left( y\right) }+C_{0}^{\left( y\right) }}{T+\widehat{\phi }_{\mathrm{MCLE}}^{-1}}=\dfrac{\dfrac{N_{T}^{\left( y\right) }}{T}+\dfrac{C_{0}^{\left( y\right) }}{T}}{1+\dfrac{\widehat{\phi }_{\mathrm{MCLE}}^{-1}}{T}}\approx \hat{\nu}_{\mathrm{MPLE}},
\end{equation*}
where the MCLE of $\mathbf{\theta }$ is simply given by (\ref{Complete-data MLE}) with $D_{t-}$ replaced by $C_{t-}^{\left( +\right) }$. Third, it allows each component of the L\'{e}vy measure to be estimated separately from the others and from $\phi $, since, given a long enough path of $Y$, including the initial value $C_{0}^{\left( y\right) }$ and $\widehat{\phi }_{\mathrm{MCLE}}^{-1}$ brings no strong improvement in the estimation quality of $\hat{\nu}_{\mathrm{MPLE}}$. Alternatively, a parameterized common intensity function $\nu (y|\eta )$ can be used, where $\eta $ is some finite-dimensional parameter. Then the MPLE is found by solving
\begin{equation*}
\hat{\eta}_{\mathrm{MPLE}}\triangleq \limfunc{argmax}\limits_{\eta }\sum_{y=1}^{\infty }\left( \log \left( \nu \left( y|\eta \right) \right) N_{T}^{\left( y\right) }-\nu \left( y|\eta \right) T\right)
\end{equation*}
and letting $\hat{\nu}_{\mathrm{MPLE}}\left( y\right) =\nu \left( y|\hat{\eta}_{\mathrm{MPLE}}\right) $. To infer the trawl parameter $\phi $, we can simply plug in $\hat{\nu}_{\mathrm{MPLE}}$ (either parametric or non-parametric) and then run the filtering procedure to calculate $\mathbb{E}_{\mathbf{\theta }=\left( \hat{\nu}_{\mathrm{MPLE}},\phi \right) }\left( \left.
C_{t-}^{(y)}\right\vert \mathcal{F}_{t-}\right) $ for $y=1,2,...$ and $t\in (0,T]$. Combining this with a (one-dimensional) optimization procedure, we can find
\begin{equation*}
\hat{\phi}_{\mathrm{MPLE}}\triangleq \limfunc{argmax}\limits_{\phi }l_{\mathcal{F}_{T}}\left( \hat{\nu}_{\mathrm{MPLE}},\phi \right) .
\end{equation*}

\subsection{Estimating the Missing Initial Values}

Except in the Poisson case ($Y_{0}=C_{0}^{(1)}$, $C_{0}^{\left( y\right) }=0$ for all $y>1$ and hence in particular $\mathbf{\hat{\theta}}_{\mathrm{MCLE}}=\mathbf{\hat{\theta}}_{\mathrm{MLE}}$), all the $C_{0}^{\left( y\right) }$'s are missing, so in principle we need to estimate these initial values in order to get (an approximation of) $\mathbf{\hat{\theta}}_{\mathrm{MCLE}}$. Indeed, the EM algorithm also does so through the smoothing expectations
\begin{equation*}
\mathbb{E}\left( \left. C_{t-}^{\left( y\right) }\right\vert \mathcal{F}_{T}\right) =\mathbb{E}\left( \left. C_{0}^{\left( y\right) }\right\vert \mathcal{F}_{T}\right) +N_{t}^{\left( y\right) }-N_{t}^{\left( -y\right) },
\end{equation*}
but it just iterates (\ref{Complete-data MLE}) until convergence. Nevertheless, there is another, simpler estimate of $C_{0}^{\left( y\right) }$ thanks to the special non-negative feature. The following Proposition relies only on the fact that $Y_{t}$ is non-negative and in fact does not depend on the choice of the trawl.

\begin{proposition}
\label{Prop.: Bound the initial values}Assume that the (general) trawl process $Y_{t}$ is non-negative.
If
\begin{equation*}
C_{0,T}^{\left( y\right) ,\mathrm{L}}\triangleq \sup_{t\in \left[ 0,T\right] }\left( N_{t}^{(-y)}-N_{t}^{(y)}\right) ,\ C_{0,T}^{\left( y\right) ,\mathrm{U}}\triangleq \left\lfloor \frac{Y_{0}-\sum_{y^{\prime }\neq y}y^{\prime }C_{0,T}^{\left( y^{\prime }\right) ,\mathrm{L}}}{y}\right\rfloor ,
\end{equation*}
where $N_{0}^{\left( y\right) }\triangleq 0$ conventionally and $\left\lfloor x\right\rfloor $ means the integer part of $x$, then
\begin{equation*}
C_{0,T}^{\left( y\right) ,\mathrm{U}}\geq C_{0}^{\left( y\right) }\geq C_{0,T}^{\left( y\right) ,\mathrm{L}}.
\end{equation*}
Furthermore,
\begin{equation*}
\lim\limits_{T\rightarrow \infty }C_{0,T}^{\left( y\right) ,\mathrm{U}}=\lim\limits_{T\rightarrow \infty }C_{0,T}^{\left( y\right) ,\mathrm{L}}=C_{0}^{\left( y\right) }.
\end{equation*}
\end{proposition}

Thus, a straightforward and sharp estimate of $C_{0}^{\left( y\right) }$ is given by, e.g.,
\begin{equation*}
\hat{C}_{0}^{\left( y\right) }\triangleq \left\lfloor \dfrac{C_{0,T}^{\left( y\right) ,\mathrm{U}}+C_{0,T}^{\left( y\right) ,\mathrm{L}}}{2}\right\rfloor ,
\end{equation*}
so using this estimate in (\ref{Complete-data MLE}) will give an estimate of $\mathbf{\theta }$ that is almost as good as $\mathbf{\hat{\theta}}_{\mathrm{MCLE}}$.

\begin{example}
Figure \ref{FIg.: Initial Estimation} illustrates Proposition \ref{Prop.: Bound the initial values} with a non-negative geometric L\'{e}vy basis, where
\begin{eqnarray*}
\nu (y|\eta ) &=&\left\Vert \nu \right\Vert \eta \left( 1-\eta \right) ^{y-1},\ \ \ \ y=1,2,..., \\
\left\Vert \nu \right\Vert &=&3,\ \eta =0.5,\ \phi =0.5,\ T=100.
\end{eqnarray*}
The paths of the upper bound $C_{0,t}^{\left( y\right) ,\mathrm{U}}$ and the lower bound $C_{0,t}^{\left( y\right) ,\mathrm{L}}$ are shown as step functions of time $t$ in Fig. \ref{FIg.: Initial Estimation}.
We can observe a strong convergence pattern: all the bounds for the different $y$'s coincide after $t>15$, giving perfect estimates of the initial values $C_{0}^{\left( y\right) }$'s. Furthermore, as $Y_{0}=C_{0}^{\left( 1\right) }+2C_{0}^{\left( 2\right) }+4C_{0}^{\left( 4\right) }$ in this case, all the other $C_{0}^{\left( y\right) }$'s for $y\neq 1,2,4$ must be zero. We have then discovered all the initial values and can use them to conduct MCLE by (\ref{Complete-data MLE}).

\begin{figure}[th]
\centering
\includegraphics[width=11.7cm]{EPTprocess_NonNegativeInitialEstimation}
\caption{\emph{Top left}: A simulated path of the exponential-trawl process $Y_{t}$ using a non-negative geometric L\'{e}vy basis. \emph{Top right}, \emph{Bottom left}, \emph{Bottom right}: Paths of $C_{0,t}^{\left( y\right) ,\mathrm{U}}$ and $C_{0,t}^{\left( y\right) ,\mathrm{L}}$ along with the true $C_{0}^{\left( y\right) }$ for $y=1,2,4$. Code: \texttt{EPTprocess\_NonNegativeInitialEstimate.R}}
\label{FIg.: Initial Estimation}
\end{figure}
\end{example}

\section{Conclusion\label{Section: Conclude}}

In this Chapter, we studied likelihood-based inference for trawl processes by explicitly working out the filtering and smoothing procedures inherent in this model. The approach is tractable and practically implementable under the exponential trawl. We used some simulation examples to verify the correctness of our procedures. The major contribution of this Chapter is to provide a first, accessible step toward likelihood inference for all of the other, more general trawl processes, which might even allow the inclusion of a non-stationary L\'{e}vy process component. \cite{ShephardYang(14)} calls this a fleeting price process and extensively uses it for the study of high-frequency financial econometrics. The filters they proposed for the fleeting price process will allow an econometrically interesting decomposition of observed prices into equilibrium prices and market microstructure noise.
More empirical analysis along these lines will be addressed in future work.

\section*{Appendix: Proofs and Derivations}
\addcontentsline{toc}{section}{Appendix}

\subsection{Heuristic Proof of Theorem \protect\ref{Thm.: Point process Radon-Nykodym Derivative}}

Our heuristic derivation starts from the following prediction decomposition of the Radon-Nikodym derivative:
\begin{equation}
\log \left( \dfrac{\mathrm{d}\mathbb{P}}{\mathrm{d}\mathbb{Q}}\right) _{\mathcal{F}_{T}^{X}|X_{0}}=\int_{t\in (0,T]}\log \left( \dfrac{\mathrm{d}\mathbb{P}}{\mathrm{d}\mathbb{Q}}\right) _{X_{t}|\mathcal{F}_{t-}^{X}}, \label{Prediction probability decomposition}
\end{equation}
where the integral over $t\in (0,T]$ means a continuous sum of the integrand random variables. Thus,
\begin{eqnarray*}
\left( \dfrac{\mathrm{d}\mathbb{P}}{\mathrm{d}\mathbb{Q}}\right) _{X_{t}|\mathcal{F}_{t-}^{X}} &=&\left( \dfrac{\mathrm{d}\mathbb{P}}{\mathrm{d}\mathbb{Q}}\right) _{\Delta X_{t}|\mathcal{F}_{t-}^{X}} \\
&=&\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }\dfrac{\mathbb{P}\left( \Delta X_{t}=y|\mathcal{F}_{t-}^{X}\right) }{\mathbb{Q}\left( \Delta X_{t}=y|\mathcal{F}_{t-}^{X}\right) }1_{\left\{ \Delta X_{t}=y\right\} }+\dfrac{\mathbb{P}\left( \Delta X_{t}=0|\mathcal{F}_{t-}^{X}\right) }{\mathbb{Q}\left( \Delta X_{t}=0|\mathcal{F}_{t-}^{X}\right) }1_{\left\{ \Delta X_{t}=0\right\} } \\
&=&\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }\dfrac{\lambda _{t-}^{\left( y\right) ,\mathbb{P}}\mathrm{d}t}{\lambda _{t-}^{\left( y\right) ,\mathbb{Q}}\mathrm{d}t}1_{\left\{ \Delta X_{t}=y\right\} }+\dfrac{1-\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }\lambda _{t-}^{\left( y\right) ,\mathbb{P}}\mathrm{d}t}{1-\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }\lambda _{t-}^{\left( y\right) ,\mathbb{Q}}\mathrm{d}t}1_{\left\{ \Delta X_{t}=0\right\} },
\end{eqnarray*}
where the first equality follows because $X_{t-}$ is known in $\mathcal{F}_{t-}^{X}$; the third
equality follows from (\ref{Diff. Def. of conditional intensity}). Therefore, (\ref{Prediction probability decomposition}) can be rewritten as
\begin{eqnarray*}
\int_{t\in \left( 0,T\right] }\log \left( \dfrac{\mathrm{d}\mathbb{P}}{\mathrm{d}\mathbb{Q}}\right) _{X_{t}|\mathcal{F}_{t-}^{X}} &=&\sum_{0<t\leq T}\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }\log \left( \dfrac{\lambda _{t-}^{\left( y\right) ,\mathbb{P}}\mathrm{d}t}{\lambda _{t-}^{\left( y\right) ,\mathbb{Q}}\mathrm{d}t}\right) 1_{\left\{ \Delta X_{t}=y\right\} } \\
&&+\int_{\left\{ t\in \left( 0,T\right] :\Delta X_{t}=0\right\} }\log \left( \dfrac{1-\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }\lambda _{t-}^{\left( y\right) ,\mathbb{P}}\mathrm{d}t}{1-\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }\lambda _{t-}^{\left( y\right) ,\mathbb{Q}}\mathrm{d}t}\right) \\
&=&\sum_{0<t\leq T}\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }\log \left( \dfrac{\lambda _{t-}^{\left( y\right) ,\mathbb{P}}}{\lambda _{t-}^{\left( y\right) ,\mathbb{Q}}}\right) 1_{\left\{ \Delta X_{t}=y\right\} } \\
&&-\int_{t\in \left( 0,T\right] }\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }\left( \lambda _{t-}^{\left( y\right) ,\mathbb{P}}-\lambda _{t-}^{\left( y\right) ,\mathbb{Q}}\right) \mathrm{d}t,
\end{eqnarray*}
where the second equality follows from $\log \left( 1-x\right) \approx -x$ for small $x$ and the fact that $\left\{ t\in \left( 0,T\right] :\Delta X_{t}\neq 0\right\} $ has Lebesgue measure $0$.

\subsection{Heuristic Proof of Theorem \protect\ref{Thm.: Filtering}}

\subsubsection{Update by inactivity}

We want to update $p_{\tau ,\tau }\left( \mathbf{j}\right) $ by incorporating the information $\mathcal{F}_{\left( \tau ,t\right) }\triangleq \sigma \left( \left\{ \Delta Y_{s}=0,\ \tau <s<t\right\} \right) $ using Bayes' Theorem:
\begin{eqnarray*}
\mathbb{P}\left( \left. \mathbf{C}_{t-}=\mathbf{j}\right\vert \mathcal{F}_{t-}\right) &=&\mathbb{P}\left( \left.
\mathbf{C}_{\tau }=\mathbf{j}\right\vert \mathcal{F}_{t-}\right) =\mathbb{P}\left( \left. \mathbf{C}_{\tau }=\mathbf{j}\right\vert \mathcal{F}_{\tau },\mathcal{F}_{\left( \tau ,t\right) }\right) \\
&\propto &\mathbb{P}\left( \mathcal{F}_{\left( \tau ,t\right) }\left\vert \mathcal{F}_{\tau },\mathbf{C}_{\tau }=\mathbf{j}\right. \right) \mathbb{P}\left( \left. \mathbf{C}_{\tau }=\mathbf{j}\right\vert \mathcal{F}_{\tau }\right) ,
\end{eqnarray*}
where the first equality holds because there is no activity of $Y_{s}$ for $s\in \left( \tau ,t\right) $ and hence the hidden state $\mathbf{C}$ must stay the same. Using the prediction decomposition, we have
\begin{eqnarray*}
\log \mathbb{P}\left( \mathcal{F}_{\left( \tau ,t\right) }\left\vert \mathcal{F}_{\tau },\mathbf{C}_{\tau }=\mathbf{j}\right. \right) &=&\int_{s\in \left( \tau ,t\right) }\log \mathbb{P}\left( \Delta Y_{s}=0|\mathcal{F}_{\tau },\mathcal{F}_{\left( \tau ,s\right) },\mathbf{C}_{\tau }=\mathbf{j}\right) \\
&=&\int_{s\in \left( \tau ,t\right) }\log \left( 1-\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }\nu \left( y\right) \mathrm{d}s-\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }\phi j_{y}\mathrm{d}s\right) \\
&=&-\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }\nu \left( y\right) \left( t-\tau \right) -\phi \left\Vert \mathbf{j}\right\Vert _{1}\left( t-\tau \right) ,
\end{eqnarray*}
where the second equality intuitively holds because we know the instantaneous departure probability of a size $y$ event at time $s$ is $\phi C_{s-}^{\left( y\right) }\mathrm{d}s$ but $C_{s-}^{\left( y\right) }=C_{\tau }^{\left( y\right) }=j_{y}$ under $\mathcal{F}_{\left( \tau ,s\right) }$; the third equality follows from $\log \left( 1-x\right) \approx -x$ for small $x$. Therefore,
\begin{equation*}
\mathbb{P}\left( \left. \mathbf{C}_{t-}=\mathbf{j}\right\vert \mathcal{F}_{t-}\right) \propto e^{-\phi \left\Vert \mathbf{j}\right\Vert _{1}\left( t-\tau \right) }\mathbb{P}\left( \left.
\mathbf{C}_{\tau }=\mathbf{j}\right\vert \mathcal{F}_{\tau }\right) ,
\end{equation*}
where we drop the term $\exp \left( -\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }\nu \left( y\right) \left( t-\tau \right) \right) $ because it does not depend on $\mathbf{j}$. Normalizing the equation above leads to the desired result.

\subsubsection{Update by jump}

We want to update $p_{\tau -,\tau -}\left( \mathbf{j}\right) $ by incorporating the piece of information $\Delta Y_{\tau }=y$. First note that
\begin{eqnarray*}
\mathbb{P}\left( \mathbf{C}_{\tau }=\mathbf{j}|\mathcal{F}_{\tau }\right) &=&\mathbb{P}\left( \mathbf{C}_{\tau }=\mathbf{j}|\mathcal{F}_{\tau -},\Delta Y_{\tau }=y\right) \\
&=&\mathbb{P}\left( \left. \mathbf{C}_{\tau }=\mathbf{j},\mathbf{C}_{\tau -}=\mathbf{j}-\mathbf{1}^{\left( y\right) }\right\vert \mathcal{F}_{\tau -},\Delta Y_{\tau }=y\right) \\
&&+\mathbb{P}\left( \left. \mathbf{C}_{\tau }=\mathbf{j},\mathbf{C}_{\tau -}=\mathbf{j}+\mathbf{1}^{\left( -y\right) }\right\vert \mathcal{F}_{\tau -},\Delta Y_{\tau }=y\right) ,
\end{eqnarray*}
where the two terms correspond to the arrival of a new size $y$ event and the departure of an old size $-y$ event, respectively. For the first term,
\begin{eqnarray*}
&&\mathbb{P}\left( \left. \mathbf{C}_{\tau }=\mathbf{j},\mathbf{C}_{\tau -}=\mathbf{j}-\mathbf{1}^{\left( y\right) }\right\vert \mathcal{F}_{\tau -},\Delta Y_{\tau }=y\right) \\
&=&\dfrac{\mathbb{P}\left( \left. \mathbf{C}_{\tau }=\mathbf{j},\mathbf{C}_{\tau -}=\mathbf{j}-\mathbf{1}^{\left( y\right) },\Delta Y_{\tau }=y\right\vert \mathcal{F}_{\tau -}\right) }{\mathbb{P}\left( \left. \Delta Y_{\tau }=y\right\vert \mathcal{F}_{\tau -}\right) } \\
&=&\dfrac{\mathbb{P}\left( \mathbf{C}_{\tau }=\mathbf{j},\Delta Y_{\tau }=y\left\vert \mathbf{C}_{\tau -}=\mathbf{j}-\mathbf{1}^{\left( y\right) },\mathcal{F}_{\tau -}\right. \right) \mathbb{P}\left( \left. \mathbf{C}_{\tau -}=\mathbf{j}-\mathbf{1}^{\left( y\right) }\right\vert \mathcal{F}_{\tau -}\right) }{\mathbb{P}\left( \left.
\Delta Y_{\tau }=y\right\vert \mathcal{F}_{\tau -}\right) } \\
&=&\dfrac{\mathbb{P}\left( \Delta \mathbf{C}_{\tau }=\mathbf{1}^{\left( y\right) }\left\vert \mathbf{C}_{\tau -}=\mathbf{j}-\mathbf{1}^{\left( y\right) },\mathcal{F}_{\tau -}\right. \right) \mathbb{P}\left( \left. \mathbf{C}_{\tau -}=\mathbf{j}-\mathbf{1}^{\left( y\right) }\right\vert \mathcal{F}_{\tau -}\right) }{\mathbb{P}\left( \left. \Delta Y_{\tau }=y\right\vert \mathcal{F}_{\tau -}\right) } \\
&=&\dfrac{\nu \left( y\right) }{\lambda _{\tau -}^{\left( y\right) }}\mathbb{P}\left( \left. \mathbf{C}_{\tau -}=\mathbf{j}-\mathbf{1}^{\left( y\right) }\right\vert \mathcal{F}_{\tau -}\right) ,
\end{eqnarray*}
where the fourth equality follows from (\ref{Transition probability of vector C}) (using $\mathcal{C}_{\tau -}\supseteq \mathcal{F}_{\tau -}$) and (\ref{Diff. Def. of conditional intensity}). Using similar arguments, the second term is
\begin{equation*}
\mathbb{P}\left( \left. \mathbf{C}_{\tau }=\mathbf{j},\mathbf{C}_{\tau -}=\mathbf{j}+\mathbf{1}^{\left( -y\right) }\right\vert \mathcal{F}_{\tau -},\Delta Y_{\tau }=y\right) =\dfrac{\phi \left( j_{-y}+1\right) }{\lambda _{\tau -}^{\left( y\right) }}\mathbb{P}\left( \left. \mathbf{C}_{\tau -}=\mathbf{j}+\mathbf{1}^{\left( -y\right) }\right\vert \mathcal{F}_{\tau -}\right) .
\end{equation*}
Combining all of these gives us the required result.

\subsection{Heuristic Proof of Theorem \protect\ref{Thm.: Smoothing}}

The case of updating the smoothing distribution $p_{\tau -,T}\left( \mathbf{j}\right) $ due to inactivity is trivial, because the hidden configuration $\mathbf{C}$ must stay unchanged during the inactive period $[t,\tau )$.

\subsubsection{Update by jump}

We now consider the case of (backward) updating the smoothing distribution $p_{\tau ,T}\left( \mathbf{j}\right) $ due to the jump $\Delta Y_{\tau }=y$. Then
\begin{eqnarray*}
\mathbb{P}\left( \mathbf{C}_{\tau -}=\mathbf{j}|\mathcal{F}_{T}\right) &=&\mathbb{P}\left( \left.
\mathbf{C}_{\tau -}=\mathbf{j},\mathbf{C}_{\tau }=\mathbf{j}+\mathbf{1}^{\left( y\right) }\right\vert \mathcal{F}_{T}\right) +\mathbb{P}\left( \left. \mathbf{C}_{\tau -}=\mathbf{j},\mathbf{C}_{\tau }=\mathbf{j}-\mathbf{1}^{\left( -y\right) }\right\vert \mathcal{F}_{T}\right) \\
&=&\mathbb{P}\left( \mathbf{C}_{\tau -}=\mathbf{j}\left\vert \mathcal{F}_{T},\mathbf{C}_{\tau }=\mathbf{j}+\mathbf{1}^{\left( y\right) }\right. \right) \mathbb{P}\left( \left. \mathbf{C}_{\tau }=\mathbf{j}+\mathbf{1}^{\left( y\right) }\right\vert \mathcal{F}_{T}\right) \\
&&+\mathbb{P}\left( \mathbf{C}_{\tau -}=\mathbf{j}\left\vert \mathcal{F}_{T},\mathbf{C}_{\tau }=\mathbf{j}-\mathbf{1}^{\left( -y\right) }\right. \right) \mathbb{P}\left( \left. \mathbf{C}_{\tau }=\mathbf{j}-\mathbf{1}^{\left( -y\right) }\right\vert \mathcal{F}_{T}\right) .
\end{eqnarray*}
Note that
\begin{eqnarray}
\mathbb{P}\left( \mathbf{C}_{\tau -}=\mathbf{j}\left\vert \mathcal{F}_{T},\mathbf{C}_{\tau }=\mathbf{k}\right. \right) &=&\mathbb{P}\left( \mathbf{C}_{\tau -}=\mathbf{j}\left\vert \mathcal{F}_{\tau },\mathbf{C}_{\tau }=\mathbf{k}\right.
\right) \label{The smoothing subtleties} \\
&=&\dfrac{\mathbb{P}\left( \mathbf{C}_{\tau }=\mathbf{k}|\mathbf{C}_{\tau -}=\mathbf{j},\mathcal{F}_{\tau }\right) \mathbb{P}\left( \mathbf{C}_{\tau -}=\mathbf{j}|\mathcal{F}_{\tau }\right) }{\mathbb{P}\left( \mathbf{C}_{\tau }=\mathbf{k}|\mathcal{F}_{\tau }\right) } \notag \\
&=&\dfrac{\begin{array}{c}
\mathbb{P}\left( \mathbf{C}_{\tau }=\mathbf{k}|\mathbf{C}_{\tau -}=\mathbf{j},\mathcal{F}_{\tau }\right) \mathbb{P}\left( \Delta Y_{\tau }=y|\mathbf{C}_{\tau -}=\mathbf{j},\mathcal{F}_{\tau -}\right) \\
\times \mathbb{P}\left( \mathbf{C}_{\tau -}=\mathbf{j}|\mathcal{F}_{\tau -}\right)
\end{array}}{\mathbb{P}\left( \mathbf{C}_{\tau }=\mathbf{k}|\mathcal{F}_{\tau }\right) \mathbb{P}\left( \Delta Y_{\tau }=y|\mathcal{F}_{\tau -}\right) } \notag \\
&=&\dfrac{\mathbb{P}\left( \mathbf{C}_{\tau }=\mathbf{k},\Delta Y_{\tau }=y|\mathbf{C}_{\tau -}=\mathbf{j},\mathcal{F}_{\tau -}\right) }{\lambda _{\tau -}^{\left( y\right) }\mathrm{d}t}\dfrac{\mathbb{P}\left( \mathbf{C}_{\tau -}=\mathbf{j}|\mathcal{F}_{\tau -}\right) }{\mathbb{P}\left( \mathbf{C}_{\tau }=\mathbf{k}|\mathcal{F}_{\tau }\right) }, \notag
\end{eqnarray}
where the first equality holds due to the Markov property of $\mathbf{C}_{t}$ (a heuristic derivation is given later); the second and third equalities follow from Bayes' Theorem. Since
\begin{eqnarray*}
\mathbb{P}\left( \left. \mathbf{C}_{\tau }=\mathbf{j}+\mathbf{1}^{\left( y\right) },\Delta Y_{\tau }=y\right\vert \mathbf{C}_{\tau -}=\mathbf{j},\mathcal{F}_{\tau -}\right) &=&\mathbb{P}\left( \left. \Delta \mathbf{C}_{\tau }=\mathbf{1}^{\left( y\right) }\right\vert \mathbf{C}_{\tau -}=\mathbf{j},\mathcal{F}_{\tau -}\right) \\
&=&\nu \left( y\right) \mathrm{d}t, \\
\mathbb{P}\left( \left. \mathbf{C}_{\tau }=\mathbf{j}-\mathbf{1}^{\left( -y\right) },\Delta Y_{\tau }=y\right\vert \mathbf{C}_{\tau -}=\mathbf{j},\mathcal{F}_{\tau -}\right) &=&\mathbb{P}\left( \left.
\Delta \mathbf{C}_{\tau }=-\mathbf{1}^{\left( -y\right) }\right\vert \mathbf{C}_{\tau -}=\mathbf{j},\mathcal{F}_{\tau -}\right) \\
&=&\phi j_{-y}\mathrm{d}t,
\end{eqnarray*}
combining all of these gives us the required result.

\subsubsection{Derivation of (\protect\ref{The smoothing subtleties})}

Let $\mathcal{F}_{(\tau ,T]}\triangleq \sigma \left( \left\{ Y_{t}\right\} _{\tau <t\leq T}\right) $ and $\mathcal{C}_{(\tau ,T]}\triangleq \sigma \left( \left\{ \mathbf{C}_{t}\right\} _{\tau <t\leq T}\right) $. Note that, heuristically, Bayes' Theorem implies
\begin{eqnarray*}
\mathbb{P}\left( \mathbf{C}_{\tau -}=\mathbf{j}\left\vert \mathcal{F}_{T},\mathbf{C}_{\tau }=\mathbf{k}\right. \right) &=&\mathbb{P}\left( \mathbf{C}_{\tau -}=\mathbf{j}\left\vert \mathcal{F}_{\tau },\mathcal{F}_{(\tau ,T]},\mathbf{C}_{\tau }=\mathbf{k}\right. \right) \\
&=&\dfrac{\left( \dfrac{\mathrm{d}\mathbb{P}}{\mathrm{d}\mathbb{Q}}\right) _{\mathcal{F}_{(\tau ,T]}\left\vert \mathcal{F}_{\tau },\mathbf{C}_{\tau }=\mathbf{k},\mathbf{C}_{\tau -}=\mathbf{j}\right. }}{\left( \dfrac{\mathrm{d}\mathbb{P}}{\mathrm{d}\mathbb{Q}}\right) _{\mathcal{F}_{(\tau ,T]}\left\vert \mathcal{F}_{\tau },\mathbf{C}_{\tau }=\mathbf{k}\right. }}\mathbb{P}\left( \mathbf{C}_{\tau -}=\mathbf{j}\left\vert \mathcal{F}_{\tau },\mathbf{C}_{\tau }=\mathbf{k}\right. \right) .
\end{eqnarray*}
Since $\mathcal{F}_{(\tau ,T]}\subseteq \mathcal{C}_{(\tau ,T]}$ (each $Y_{t}=\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }yC_{t}^{\left( y\right) }$), the Markov property of $\mathbf{C}_{t}$ implies
\begin{equation*}
\left( \dfrac{\mathrm{d}\mathbb{P}}{\mathrm{d}\mathbb{Q}}\right) _{\mathcal{F}_{(\tau ,T]}\left\vert \mathcal{F}_{\tau },\mathbf{C}_{\tau }=\mathbf{k},\mathbf{C}_{\tau -}=\mathbf{j}\right. }=\left( \dfrac{\mathrm{d}\mathbb{P}}{\mathrm{d}\mathbb{Q}}\right) _{\mathcal{F}_{(\tau ,T]}\left\vert \mathcal{F}_{\tau },\mathbf{C}_{\tau }=\mathbf{k}\right.
},
\end{equation*}
because, given the current information $\mathbf{C}_{\tau }$, the information in the past $\mathbf{C}_{\tau -}$ is irrelevant. This then proves that
\begin{equation*}
\mathbb{P}\left( \mathbf{C}_{\tau -}=\mathbf{j}\left\vert \mathcal{F}_{T},\mathbf{C}_{\tau }=\mathbf{k}\right. \right) =\mathbb{P}\left( \mathbf{C}_{\tau -}=\mathbf{j}\left\vert \mathcal{F}_{\tau },\mathbf{C}_{\tau }=\mathbf{k}\right. \right) .
\end{equation*}

\subsection{Proof of Theorem \protect\ref{Thm.: MCLE}}

Since each $C_{t}^{\left( y\right) }$ is independent across different $y$, the complete-data log-likelihood can be written as
\begin{equation*}
l_{\mathcal{C}_{T}}\left( \mathbf{\theta }\right) =\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }l_{\mathcal{C}_{T}^{\left( y\right) }|C_{0}^{\left( y\right) }}\left( \mathbf{\theta }\right) +\sum_{y\in \mathbb{Z}\backslash \left\{ 0\right\} }l_{C_{0}^{\left( y\right) }}\left( \mathbf{\theta }\right) ,
\end{equation*}
where we recall that $\mathcal{C}_{t}^{\left( y\right) }$ is the natural filtration generated by $C_{t}^{\left( y\right) }$,
\begin{eqnarray*}
l_{\mathcal{C}_{T}^{\left( y\right) }|C_{0}^{\left( y\right) }}\left( \mathbf{\theta }\right) &=&\sum_{0<t\leq T}\left( \log \left( \nu \left( y\right) \right) 1_{\left\{ \Delta C_{t}^{\left( y\right) }=1\right\} }+\log \left( \phi C_{t-}^{\left( y\right) }\right) 1_{\left\{ \Delta C_{t}^{\left( y\right) }=-1\right\} }\right) \\
&&-\int_{t\in (0,T]}\left( \nu \left( y\right) +\phi C_{t-}^{\left( y\right) }\right) \mathrm{d}t \\
&=&\log \left( \nu \left( y\right) \right) N_{T}^{\left( y\right) ,\mathrm{A}}-\nu \left( y\right) T+\log \left( \phi \right) N_{T}^{\left( y\right) ,\mathrm{D}}-\phi \int_{t\in (0,T]}C_{t-}^{\left( y\right) }\mathrm{d}t,
\end{eqnarray*}
where the first equality follows directly from Theorem \ref{Thm.: Point process Radon-Nykodym Derivative} (ignoring the constant), and
\begin{equation*}
l_{C_{0}^{\left( y\right) }}\left( \mathbf{\theta }\right) =C_{0}^{\left(
y\right) }\left( \log \nu \left( y\right) -\log \phi \right) -\dfrac{\nu \left( y\right) }{\phi }
\end{equation*}
because $C_{0}^{\left( y\right) }\backsim \mathrm{Poisson}\left( \nu \left( y\right) /\phi \right) $. Thus, collecting terms will give us the required result (\ref{Complete-data log-likelihood with exp trawl}).

The derivations of the MCLE are elementary. Let
\begin{equation*}
\left\Vert \nu \right\Vert \triangleq \int \nu \left( \mathrm{d}y\right) =\sum_{y=1}^{\infty }\nu \left( y\right) .
\end{equation*}
The ergodicity of $D_{t-}$ implies that as $T\rightarrow \infty $,
\begin{equation*}
\dfrac{1}{T}\int_{t\in (0,T]}D_{t-}\mathrm{d}t\rightarrow \mathbb{E}\left( D_{t-}\right) =\dfrac{\left\Vert \nu \right\Vert }{\phi }.
\end{equation*}
Since $\dfrac{N_{T}^{\mathrm{D}}}{T}\approx \dfrac{N_{T}^{\mathrm{A}}}{T}\rightarrow \left\Vert \nu \right\Vert $, we have
\begin{equation*}
\dfrac{\Xi _{T}}{T}=\dfrac{N_{T}^{\mathrm{D}}}{T}-\dfrac{D_{0}+T^{-1}\int_{t\in (0,T]}D_{t-}\mathrm{d}t}{T}\rightarrow \left\Vert \nu \right\Vert \text{, too.}
\end{equation*}
Thus,
\begin{eqnarray*}
\hat{\phi}_{\mathrm{MCLE}} &=&\frac{\dfrac{\Xi _{T}}{T}+\sqrt{\left( \dfrac{\Xi _{T}}{T}\right) ^{2}+4T^{-1}\dfrac{N_{T}^{\mathrm{A}}+N_{T}^{\mathrm{D}}}{T}T^{-1}\int_{t\in (0,T]}D_{t-}\mathrm{d}t}}{2T^{-1}\int_{t\in (0,T]}D_{t-}\mathrm{d}t} \\
&\rightarrow &\frac{\left\Vert \nu \right\Vert +\sqrt{\left\Vert \nu \right\Vert ^{2}+0}}{2\dfrac{\left\Vert \nu \right\Vert }{\phi }}=\phi .
\end{eqnarray*}
Finally, for any $y\in \mathbb{Z}\backslash \left\{ 0\right\} $, $\dfrac{N_{T}^{\left( y\right) }}{T}\rightarrow \nu \left( y\right) $ and $\hat{\phi}_{\mathrm{MCLE}}^{-1}\rightarrow \phi ^{-1}<\infty $, so we easily have
\begin{equation*}
\hat{\nu}_{\mathrm{MCLE}}\left( y\right) =\dfrac{\dfrac{N_{T}^{\left( y\right) }}{T}+\dfrac{C_{0}^{\left( y\right) }}{T}}{1+\dfrac{\hat{\phi}_{\mathrm{MCLE}}^{-1}}{T}}\rightarrow \nu \left( y\right) \text{ as well.}
\end{equation*}

\subsection{Proof of Proposition \protect\ref{Prop.: Bound the initial values}}

As $C_{t}^{(y)}\geq 0$, (\ref{C counting process and N counting process}) implies that
\begin{equation*}
C_{0}^{\left( y\right) }\geq C_{0,T}^{\left( y\right) ,\mathrm{L}}=\sup_{t\in \left[ 0,T\right] }\left( N_{t}^{(-y)}-N_{t}^{(y)}\right) ,\ \ \ \ y=1,2,...,
\end{equation*}
where we set $N_{0}^{\left( y\right) }\triangleq 0$ conventionally. Now
\begin{equation*}
C_{0}^{\left( y\right) }=\frac{Y_{0}-\sum_{y^{\prime }\neq y}y^{\prime }C_{0}^{(y^{\prime })}}{y}\leq \left\lfloor \frac{Y_{0}-\sum_{y^{\prime }\neq y}y^{\prime }C_{0,T}^{\left( y^{\prime }\right) ,\mathrm{L}}}{y}\right\rfloor =C_{0,T}^{\left( y\right) ,\mathrm{U}},
\end{equation*}
so we have
\begin{equation*}
C_{0,T}^{\left( y\right) ,\mathrm{U}}\geq C_{0}^{\left( y\right) }\geq C_{0,T}^{\left( y\right) ,\mathrm{L}}.
\end{equation*}
Let $N_{t}^{\left( -y\right) ,\ast }$ be the counting process of $-y$ jumps resulting from the departures of those initial events of size $y$ that constitute $C_{0}^{\left( y\right) }$. Let $\tau $ be the time when $N^{\left( -y\right) ,\ast }$ achieves $C_{0}^{\left( y\right) }$.
Then we have
\begin{eqnarray*}
C_{0,T}^{\left( y\right) ,\mathrm{L}} &=&C_{0,\tau }^{\left( y\right) ,\mathrm{L}}\vee \sup\limits_{t\in (\tau ,T]}\left( N_{t}^{\left( -y\right) ,\ast }-\left( N_{t}^{\left( y\right) }-\left( N_{t}^{\left( -y\right) }-N_{t}^{\left( -y\right) ,\ast }\right) \right) \right) \\
&=&C_{0,\tau }^{\left( y\right) ,\mathrm{L}}\vee \left( C_{0}^{\left( y\right) }-\inf\limits_{t\in (\tau ,T]}\left( N_{t}^{\left( y\right) }-\left( N_{t}^{\left( -y\right) }-N_{t}^{\left( -y\right) ,\ast }\right) \right) \right) .
\end{eqnarray*}
Observe that $N_{t}^{\left( y\right) }-\left( N_{t}^{\left( -y\right) }-N_{t}^{\left( -y\right) ,\ast }\right) $ is an \textrm{M}/$G$/$\infty $ queue initiated at state $0$, so by ergodicity we must have, with probability $1$,
\begin{equation*}
\lim\limits_{T\rightarrow \infty }\inf\limits_{t\in (\tau ,T]}\left( N_{t}^{\left( y\right) }-\left( N_{t}^{\left( -y\right) }-N_{t}^{\left( -y\right) ,\ast }\right) \right) =0.
\end{equation*}
This then shows that actually
\begin{equation*}
\lim\limits_{T\rightarrow \infty }C_{0,T}^{\left( y\right) ,\mathrm{L}}=C_{0,\tau }^{\left( y\right) ,\mathrm{L}}\vee C_{0}^{\left( y\right) }=C_{0}^{\left( y\right) },
\end{equation*}
where the last equality follows because $C_{0,\tau }^{\left( y\right) ,\mathrm{L}}\leq C_{0}^{\left( y\right) }$. Correspondingly,
\begin{equation*}
\lim\limits_{T\rightarrow \infty }C_{0,T}^{\left( y\right) ,\mathrm{U}}=\left\lfloor \frac{Y_{0}-\sum_{y^{\prime }\neq y}y^{\prime }C_{0}^{\left( y^{\prime }\right) }}{y}\right\rfloor =C_{0}^{\left( y\right) }\text{.}
\end{equation*}

\input{referenc}

\end{document}
\section{Introduction} \label{sec:intro} In the search for physics beyond the Standard Model (SM), supersymmetry (SUSY) offers a very promising avenue. By relating bosons and fermions, it stabilises the Higgs boson mass with respect to quantum corrections, allows for the unification of the gauge couplings and provides a dark matter candidate, thus solving several long-standing problems of the SM simultaneously. Within the Minimal Supersymmetric Standard Model (MSSM)~\cite{Nilles:1983ge,Haber:1984rc}, each degree of freedom of the SM is associated with a superpartner that differs only by half a unit in spin. The scalar partners of the left- and right-handed quarks mix, as do the higgsino, bino and wino interaction eigenstates, which after electroweak symmetry breaking form neutral and charged so-called electroweakinos (neutralinos and charginos). Since these particles have not yet been discovered, it must be assumed that supersymmetry is broken, which increases the masses of the supersymmetric particles with respect to those of their SM partners. With the assumption of conserved $R$-parity, a neutral lightest supersymmetric particle (LSP) then has all the right properties to be one of the most promising dark matter candidates, {\it i.e.}~a weakly interacting massive particle (WIMP). Both the upcoming Run 3 of the Large Hadron Collider (LHC) and its planned extension to high luminosity (HL-LHC) will provide access to very massive new particles~\cite{hllhc, CidVidal:2018eel, Gianotti:2002xx}. In supersymmetry, the generally dominant production processes are those involving the strong interaction and thus concern the pair production of squarks and gluinos. As a consequence of the current and expected bounds on these states, they might be too massive to be pair-produced at the LHC. This restriction can be lifted by considering the single production of a squark or a gluino in association with a (typically lighter) electroweakino.
These processes then also have the advantage of providing insights not only into the supersymmetric masses, but also into the supersymmetric interactions~\cite{1108.1250,1907.04898,2110.04211}. Only precise theoretical predictions allow for a reliable comparison between theory and experiment. Already by going from leading order (LO) to next-to-leading order (NLO) in perturbative QCD, the theoretical uncertainty originating from the arbitrary choice of factorisation and renormalisation scales is reduced. However, with light SUSY particles being increasingly excluded by direct searches at the LHC, the current mass limits imply that in any SUSY production process the kinematic configuration approaches the production threshold. This results in large threshold logarithms ruining the convergence of the perturbative series, so that they must be resummed. This resummation procedure has been known for quite some time at leading-logarithmic (LL) and next-to-leading-logarithmic (NLL) accuracy, and in some cases beyond, and has been found to generally further reduce the theoretical uncertainty inherent in the perturbative calculation~\cite{Sterman:1986aj,Catani1989, Catani1991,Kidonakis:1997gm,hep-ph/9801268}. In the last decade, precise investigations beyond NLO were accomplished for slepton pair \cite{Bozzi:2004qq,Bozzi:2006fw,Bozzi:2007qr,Bozzi:2007tea,Fuks:2013lya,Fiaschi:2018xdm,Fiaschi2020}, electroweakino pair~\cite{Debove:2008nr,Debove:2009ia,Debove:2010kf,Debove:2011xj,Fuks:2012qx,Fuks:2013vua,1805.11322,Fiaschi:2020udf} and electroweakino-gluino \cite{1604.01023} production. Similarly, improved predictions in the strong sector exist for squark and gluino pair production~\cite{Kulesza:2008jb,Beenakker:2011sf,1601.02954,Beenakker:2009ha,Beenakker:2013mva,Beenakker:2014sma,1510.00375}. In this work we focus on the production of a squark and an electroweakino at hadron colliders.
Since this process involves both weak [${\cal O}(\alpha_\text{EM})$] and strong [${\cal O}(\alpha_s)$] interactions at leading order, the resulting cross section is of intermediate size. The simplest process in this category involves the production of a first- or second-generation squark together with a lightest neutralino. This process manifests itself through a hard jet originating from the squark decay and missing transverse energy from the two neutralinos leaving the detector invisibly, one of them being a decay product of the squark and the other one being directly produced in the hard process. Such a monojet signal is particularly well-studied in the context of dark matter production at colliders~\cite{Feng:2005gj,Bai:2010hh}. In recent analyses by the CMS collaboration, squark masses below \SI{1.6}{TeV} were excluded in four mass-degenerate squark flavour models, assuming production with a light neutralino ${\tilde \chi}^0_1$. This limit is reduced to \SI{1.1}{TeV} for a single kinematically reachable squark \cite{CMS:2017abv, 1908.04722, CMS:2021cox}. Similarly, the ATLAS collaboration gives limits of \SI{1.4}{TeV} and \SI{1.0}{TeV}~\cite{2010.14293, 2101.01629}. In this paper, we present a threshold resummation calculation for the associated production of squarks and electroweakinos at the NLO+NLL accuracy. The structure of this work is as follows: In \cref{sec:soft}, we compute the production cross section at leading and next-to-leading order and give a brief summary of the ingredients required for threshold resummation. The numerical validation of our NLO calculation and our new results up to NLO+NLL accuracy are given in \cref{sec:num} for various benchmark scenarios. We summarise our work and conclude in \cref{sec:concl}. \section{Soft gluon resummation} \label{sec:soft} We begin this work with a derivation of LO and NLO expressions for the associated production of a squark and an electroweakino at hadron colliders in \cref{sec:lo_and_nlo}. 
Then, \cref{sec:ref} explains refactorisation and resummation up to NLL accuracy and includes a computation of the soft anomalous dimension associated with the process considered. In \cref{sec:hard} we present the NLO hard matching coefficient, which is then used together with the soft anomalous dimension in \cref{sec:match} to consistently combine fixed-order and resummed predictions at the NLO+NLL accuracy. \subsection{Production of squarks and electroweakinos at leading and next-to-leading order} \label{sec:lo_and_nlo} To calculate the total hadronic cross section $\sigma_{AB}$ for the process considered, we convolute the partonic cross section ${\rm d}\sigma_{ab}$ with factorisation-scale dependent parton distribution functions (PDFs) $f_{i/h}(x_i,\mu_F^2)$ for a particle $i$ of momentum fraction $x_i$ in a hadron $h$, \begin{equation} \label{eq:sigma} \begin{split} M^2 \frac{\dd \sigma_{AB}}{\dd M^2}(\tau) =& \sum_{a,b} \int_0^1 \dd{x_a} \dd{x_b} \dd{z} \Big[x_a f_{a/A}(x_a,\mu_F^2)\Big]\Big[x_b f_{b/B}(x_b,\mu_F^2)\Big] \\ \times &\Big[z \dd \sigma_{ab}(z,M^2,\mu_R^2,\mu_F^2)\Big]\ \delta(\tau - x_a x_b z) \,, \end{split} \end{equation} where $\tau = M^2/S$ is the ratio of the squared invariant mass $M^2$ over the squared hadronic centre-of-mass energy $S$ \cite{hep-ph/0409313}. The partonic fraction $z=\tau/(x_ax_b) = M^2/s$ is defined by the ratio of the squared invariant mass to the squared partonic centre-of-mass energy $s=x_ax_bS$ and equals one at leading order. The partonic cross section \begin{equation} \sigma_{ab}(s) = \int_2 \dd \sigma_{ab} = \int \frac{1}{2s}\ \overline{|\mathcal M|^2}\ \dd \text{PS}^{(2)} \end{equation} is related to the squared and averaged matrix element $\overline{|\mathcal M|^2}$ by the usual flux factor $1/(2s)$ and the integration over the two-particle phase space dPS$^{(2)}$.
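For illustration, \cref{eq:sigma} can be implemented directly: the $\delta$-distribution eliminates the $z$ integration in favour of $z=\tau/(x_ax_b)$ with $x_ax_b\geq\tau$. The sketch below is a toy validation only, with $f_{a/A}=f_{b/B}=1$ and a constant partonic cross section $\sigma_0$, for which the convolution reduces analytically to $\sigma_0\,\tau\log^2\tau/2$:

```python
import math

def dsigma_dM2(tau, sigma_part, f_a, f_b, n=400):
    """M^2 dsigma/dM^2 from the PDF convolution; the delta distribution
    fixes z = tau/(x_a x_b) and yields a Jacobian 1/(x_a x_b), which
    cancels the explicit x_a x_b factors of the integrand."""
    total = 0.0
    ha = (1.0 - tau) / n
    for i in range(n):
        xa = tau + (i + 0.5) * ha
        lo = tau / xa                 # z <= 1 requires x_b >= tau/x_a
        hb = (1.0 - lo) / n
        for j in range(n):
            xb = lo + (j + 0.5) * hb
            z = tau / (xa * xb)
            total += f_a(xa) * f_b(xb) * z * sigma_part(z) * ha * hb
    return total

# toy inputs: unit PDFs and a constant partonic cross section sigma_0 = 1
tau = 0.01
numeric = dsigma_dM2(tau, lambda z: 1.0, lambda x: 1.0, lambda x: 1.0)
analytic = 0.5 * tau * math.log(tau) ** 2
assert abs(numeric - analytic) < 0.01 * analytic
```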
\begin{figure} \centering \includegraphics[height=.25\linewidth]{fig01a.pdf} \quad \includegraphics[height=.25\linewidth]{fig01b.pdf} \caption{Tree-level $s$- (left) and $u$-channel (right) Feynman diagrams for the associated production of a squark and an electroweakino at hadron colliders.} \label{fig:lo_feyn} \end{figure} The associated production of a squark and an electroweakino with masses $m_{\tilde q}$ and $m_{\tilde \chi}$ occurs at a hadron collider at LO through the fusion of a (massless) quark and a gluon. Charge conservation restricts the possible partonic processes to \begin{align}\label{eq:processes} q_{u,d}(p_a)\ g(p_b) \to {\tilde q}_{u,d} (p_1)\ {\tilde \chi}^0_{k} (p_2)\qquad\text{and}\qquad q_{u,d}(p_a)\ g(p_b) \to {\tilde q}_{d,u} (p_1)\ {\tilde \chi}^\pm_{k} (p_2) \,, \end{align} where $k$ identifies the neutralino (${\tilde \chi}^0_{k}$, $k=1,\dots,4$) or chargino (${\tilde \chi}^\pm_{k}$, $k=1,2$) mass eigenstate and $p_{a,b}$ and $p_{1,2}$ refer to the four-momenta of the initial- and final-state particles, respectively. The corresponding Born diagrams are shown in \cref{fig:lo_feyn}. The squared matrix elements associated with the $s$-channel quark exchange diagram (left), the $u$-channel squark exchange diagram (right) and their interference can be expressed as functions of the usual Mandelstam variables $s=(p_a+p_b)^2$, $t=(p_a-p_1)^2$ and $u=(p_a-p_2)^2$, {\it i.e.} as \begin{align} |\mathcal M_s |^2 &= \frac{g_s^2 C_A C_FB}{ s}\ 2 (m_{\tilde \chi}^2-t) \,,\\ |\mathcal M_u |^2 &= - \frac{g_s^2 C_A C_FB}{ (u-m_{\tilde q}^2)^2}\ 2 (m_{\tilde \chi}^2-u)(m_{\tilde q}^2+u)\qquad {\rm and}\\ 2 \Re[\mathcal M_s \mathcal M_u^\dagger] &=2\frac{g_s^2 C_AC_FB}{ s(u-m_{\tilde q}^2)} \Big(2(m_{\tilde \chi}^4-m_{\tilde q}^4) + m_{\tilde q}^2(2u-3s) -2m_{\tilde \chi}^2 ( 2m_{\tilde q}^2 + u)-su\Big)\,.
\end{align} They are all proportional to the squared electroweakino-squark-quark coupling \begin{equation} B\equiv R_{Ijk} L'_{Ijk}+L_{Ijk}R'_{Ijk} =R_{Ijk} R_{Ijk}^*+L_{Ijk}L_{Ijk}^* = |R_{Ijk}|^2 + |L_{Ijk}|^2 \,, \end{equation} where the capitalised index $I$ labels the quark generation, the lower-case index $j$ refers to the squark eigenstate and the index $k$ is related as above to the electroweakino eigenstate. The definitions of the various left- and right-handed couplings $L^{(')}$ and $R^{(')}$ are provided in Refs.~\cite{Haber:1984rc,Gunion:1984yn,1604.01023}. Using arbitrary squark mixings and mass eigenstates opens the possibility to study SUSY flavour violation~\cite{Bozzi:2005sy,Bozzi:2007me, Fuks:2008ab,Fuks:2011dg,DeCausmaecker:2015yca,Chakraborty:2018rpn}. The total spin- and colour-averaged squared amplitude then reads \begin{equation} \overline{|\mathcal M|^2} = \frac{1}{96} \left(|\mathcal M_s |^2 +|\mathcal M_u |^2 + 2 \Re[\mathcal M_s \mathcal M_u^\dagger]\right)\,. \end{equation} The NLO corrections to this cross section are well-known \cite{1108.1250,1907.04898,2110.04211}. They involve one-loop self-energy, vertex and box corrections interfering with tree-level diagrams, as well as squared real gluon and quark emission diagrams, from which intermediate on-shell squark and gluino resonant contributions have to be subtracted to avoid spoiling the convergence of the perturbative series~\cite{Gavin:2013kga,Hollik:2012rc}. We have calculated the full NLO cross section using dimensional regularisation of ultraviolet (UV) and infrared (IR) divergences, on-shell renormalisation for all squark and gluino masses and wave functions, and $\overline{\text{MS}}$ renormalisation for the couplings with the exception of the strong coupling $g_s$. 
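As a numerical illustration of the Born-level expressions above, the following sketch evaluates the spin- and colour-averaged squared amplitude and checks that it is positive across the physical angular range. The coupling combination $B$ is set to $1$ and the masses are purely illustrative, since both are model dependent:

```python
import math

C_A, C_F = 3.0, 4.0 / 3.0

def mandelstam(s, m1, m2, costh):
    """2 -> 2 kinematics with massless initial partons: s + t + u = m1^2 + m2^2."""
    E1 = (s + m1**2 - m2**2) / (2.0 * math.sqrt(s))
    p = math.sqrt(E1**2 - m1**2)
    t = m1**2 - math.sqrt(s) * (E1 - p * costh)
    return t, m1**2 + m2**2 - s - t

def msq_avg(s, t, u, mq, mchi, gs2, B):
    """Spin- and colour-averaged LO |M|^2 for q g -> squark + electroweakino."""
    Ms2 = gs2 * C_A * C_F * B / s * 2.0 * (mchi**2 - t)
    Mu2 = (-gs2 * C_A * C_F * B / (u - mq**2)**2
           * 2.0 * (mchi**2 - u) * (mq**2 + u))
    interf = (2.0 * gs2 * C_A * C_F * B / (s * (u - mq**2))
              * (2.0 * (mchi**4 - mq**4) + mq**2 * (2.0 * u - 3.0 * s)
                 - 2.0 * mchi**2 * (2.0 * mq**2 + u) - s * u))
    return (Ms2 + Mu2 + interf) / 96.0

# illustrative inputs: masses in GeV, alpha_s = 0.118, B = 1
mq, mchi, s = 1000.0, 200.0, 2500.0**2
gs2 = 4.0 * math.pi * 0.118
for costh in (-0.9, 0.0, 0.9):
    t, u = mandelstam(s, mq, mchi, costh)
    assert abs(s + t + u - mq**2 - mchi**2) < 1e-6 * s
    assert msq_avg(s, t, u, mq, mchi, gs2, 1.0) > 0.0
```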
The latter is renormalised by subtracting, at zero-momentum transfer, all massive particle contributions and $\overline{\text{MS}}$ contributions of all massless particles from the gluon self-energy~\cite{Collins:1978wz,Bardeen:1978yd,Marciano:1983pj}. In order to avoid the violation of SUSY invariance by the introduction of a mismatch between the strong coupling $g_s$ and the quark-squark-gluino Yukawa coupling $\hat g_s$ at one loop, we shift $\hat g_s$ by a finite contribution, allowing us to restore SUSY and to make use of the above-mentioned renormalisation scheme~\cite{hep-ph/9308222}. Real and virtual contributions are combined with the help of the dipole subtraction method to cancel infrared and collinear divergences~\cite{hep-ph/9605323,hep-ph/0201036,hep-ph/0011222}. This method splits the pure NLO cross section into separately finite virtual and real contributions and a collinear counterterm consisting of two insertion operators $\mathbf P$ and $\mathbf K$, \begin{equation} \label{eq:nlo} \begin{split} \sigma^\text{NLO} &= \int_{3} \Big[ \dd\sigma^{\text{R}} - \dd\sigma^{\text{A}}\Big]_{\epsilon=0} + \int_{2} \bigg[\dd\sigma^{\text{V}} + \int_1\dd\sigma^{\text{A}}\bigg]_{\epsilon=0} \\ &\qquad + \int_0^1\dd x \int_2 \Big[\dd\sigma^\text{B}(xp) \otimes (\mathbf P + \mathbf K)(x)\Big]_{\epsilon=0} \,. \end{split} \end{equation} In this expression the integration domain denotes the number of final-state particles. Moreover, the auxiliary cross section $\sigma^A$ shifts the infrared divergences such that the integrations over both the two- and three-particle phase spaces are numerically possible without changing the total result. \subsection{Refactorisation} \label{sec:ref} After the cancellation of soft and collinear divergences among the real and virtual corrections, large logarithms remain near threshold~\cite{Kinoshita1962,Lee1964}. 
They arise from the mismatch between the constrained integration over the real-emission phase space and the unconstrained virtual loop integration and take the form \begin{equation} \left( \frac {\alpha_s} {2 \pi}\right)^n \left[ \frac{\log^m(1-z)}{1-z}\right]_+ \,, \end{equation} with $m\leq 2n -1$. The variable $1-z = 1- M^2/s$ describes the energy fraction of an additional emitted gluon or massless quark and thus quantifies the distance to the partonic threshold. For soft emitted particles ($z \to 1$), truncating the perturbative calculation at a fixed order does not give a reliable prediction, so that the logarithms must be resummed to all orders in $\alpha_s$. To resum soft gluon emission to all orders, kinematic and dynamical factorisation are necessary. Kinematic factorisation is possible by transforming the constituents of \cref{eq:sigma} into Mellin space \begin{equation} F(N) = \int_0^1 \dd{y} y^{N-1} F(y)\,, \end{equation} with $F=\sigma_{AB},\, \sigma_{ab},\, f_{a/A},\, f_{b/B}$ and $y=\tau,\, z,\, x_a,\, x_b$, respectively. We obtain \begin{equation} M^2 \frac{\dd \sigma_{AB}}{\dd M^2} (N-1) = \sum_{a,b}\ f_{a/A}(N,\mu_F^2)\ f_{b/B}(N,\mu_F^2)\ \sigma_{ab}(N,M^2,\mu_F^2,\mu_R^2)\, , \end{equation} such that the phase space factorises. In this expression, the large logarithms now depend on the Mellin variable $N$. Dynamical factorisation can then be achieved by relying on eikonal Feynman rules. The partonic cross section can be refactorised and resummed to \begin{equation} \begin{split} \sigma_{ab\to ij}(N,M^2,\mu_F^2,\mu_R^2) &= \sum_I \mathcal H_{ab \to ij,I}(M^2,\mu_F^2,\mu_R^2)\ \Delta_a(N,M^2,\mu_F^2,\mu_R^2) \\ &\qquad \times \Delta_b(N,M^2,\mu_F^2,\mu_R^2)\ \Delta_{ab\to ij,I}(N,M^2,\mu_F^2,\mu_R^2)\,, \end{split}\label{eq:HtimesG} \end{equation} in which the hard function is given by \begin{equation} \mathcal H_{ab\to ij,I}(M^2,\mu_F^2,\mu_R^2) = \sum_{n=0}^\infty \left(\frac{\alpha_s}{2 \pi}\right)^n \mathcal H^{(n)}_{ab \to ij,I}(M^2,\mu_F^2,\mu_R^2) \,.
\end{equation} This quantity is further discussed in \cref{sec:hard}~\cite{hep-ph/0409313,Kidonakis:1997gm,hep-ph/9801268}. The irreducible colour representation index $I$ is dropped from now on, since squark-electroweakino associated production involves only a single colour tensor. The soft wide-angle function $\Delta_{ab\to ij}$ and the soft collinear radiation functions $\Delta_{a,b}$ exponentiate~\cite{hep-ph/0010146}, \begin{equation} \label{eq:exponentiation} \Delta_a \Delta_b \Delta_{ab \to ij} = \exp\left[ LG^{(1)}_{ab}(\lambda) + G^{(2)}_{ab \to ij}(\lambda,M^2,\mu_F^2,\mu_R^2) + \dots \right]\,, \end{equation} with $\lambda = \alpha_s\beta_0L/(2\pi)$, $L = \log{} \bar N$ and $\bar N = Ne^{\gamma_E}$. The above expression contains the leading-logarithmic $G_{ab}^{(1)}$ and next-to-leading logarithmic $G_{ab \to ij}^{(2)}$ contributions~\cite{Catani1989,Catani1991}. They are given by \begin{align} G^{(1)}_{ab}(\lambda) &= g_a^{(1)}(\lambda) + g_b^{(1)}(\lambda) \,, \\ G^{(2)}_{ab \to ij}(\lambda,M^2,\mu_F^2,\mu_R^2) &= g_a^{(2)}(\lambda,M^2,\mu_F^2,\mu_R^2) + g_b^{(2)}(\lambda,M^2,\mu_F^2,\mu_R^2) + h^{(2)}_{ab\to ij}(\lambda)\,, \label{eq:sudakov}\end{align} with \begin{align} g_a^{(1)} &= \frac{A_a^{(1)}}{2 \beta_0 \lambda} \left[ 2 \lambda + (1- 2 \lambda) \log (1- 2\lambda)\right] \,, \\ g_a^{(2)} &= \frac{A_a^{(1)} \beta_1}{2 \beta_0^3 } \left[ 2 \lambda + \log(1- 2 \lambda) + \frac 1 2 \log^2 (1- 2\lambda)\right] - \frac{A_a^{(2)}}{2 \beta_0^2} \left[ 2 \lambda + \log (1- 2 \lambda)\right] \nonumber\\&+ \frac{A_a^{(1)}}{2 \beta_0} \left[ \log (1- 2 \lambda ) \log\left(\frac{M^2}{\mu_R^2}\right) + 2 \lambda \log\left( \frac{\mu_F^2}{\mu_R^2}\right)\right] \,. \end{align} The resummation coefficients entering those quantities are \begin{align} A_a^{(1)} = 2 C_a \qquad\text{and}\qquad A_a^{(2)} = 2 C_a \left[ \left( \frac{67}{18} - \frac{\pi^2}{6} \right) C_A - \frac{5}{9} n_f\right]\,, \end{align} with $C_a=C_F$ for quarks and $C_a=C_A$ for gluons. 
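As a consistency check of the exponentiated LL coefficient, expanding $\log(1-2\lambda)$ shows that $g_a^{(1)} = A_a^{(1)}\lambda/\beta_0 + \mathcal O(\lambda^2)$ for small $\lambda$. A minimal numerical sketch of this limit (with five active flavours assumed for $\beta_0$):

```python
import math

def g1(lmbda, A1, beta0):
    """LL exponent g_a^(1) of the exponentiated radiation functions."""
    return A1 / (2.0 * beta0 * lmbda) * (
        2.0 * lmbda + (1.0 - 2.0 * lmbda) * math.log(1.0 - 2.0 * lmbda))

C_F, C_A = 4.0 / 3.0, 3.0
beta0 = 11.0 / 6.0 * C_A - 2.0 / 3.0 * 0.5 * 5   # T_R = 1/2, n_f = 5
A1_q = 2.0 * C_F                                  # initial-state quark

# small-lambda behaviour: g^(1) -> A^(1) lambda / beta0 + O(lambda^2)
lam = 1e-4
assert abs(g1(lam, A1_q, beta0) - A1_q * lam / beta0) < 1e-6
```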
The last term in \cref{eq:sudakov} consists of the process-dependent contributions related to large-angle soft-gluon emissions. It reads \begin{align} h^{(2)}_{ab \to ij}(\lambda) &= \frac{\log{(1-2\lambda)}}{2 \beta_0} D^{(1)}_{ab \to ij} = \frac{\log{(1-2\lambda)}}{2 \beta_0} \frac{2\pi}{\alpha_s} \Re(\bar\Gamma_{ab\to ij})\,. \end{align} The one-loop coefficient $D_{ab \to ij}^{(1)}$ does not vanish for squark-gaugino production, since soft gluons can be radiated off the final-state squark, and it depends on the modified soft anomalous dimension \begin{equation} \label{eq:soft:a:dim} \bar{\Gamma}_{q g \rightarrow {\tilde q} {\tilde \chi}} ={} \frac{\alpha_s}{2\pi}\bigg\{ C_F \left[ 2\log\left(\frac{m_{\tilde{q}}^2 - t}{\sqrt{s} m_{\tilde{q}}}\right) - 1 + i\pi \right] + C_A \log\left(\frac{m_{\tilde{q}}^2 - u}{m_{\tilde{q}}^2 - t}\right) \bigg\}\,, \end{equation} which is derived in \cref{app:soft}. \subsection{Hard matching coefficient} \label{sec:hard} The resummation of the logarithmic contributions as performed in \cref{eq:exponentiation} scales with the hard function $\mathcal H_{ab \to ij}(M^2,\mu_F^2,\mu_R^2)$, as shown in \cref{eq:HtimesG}. Including higher-order contributions in the hard function hence further improves the accuracy of the predictions. Therefore, in addition to the LO term \begin{equation} \mathcal H^{(0)}_{ab \to ij}(M^2,\mu_F^2,\mu_R^2) = \sigma^{(0)}_{ab\to ij}(M^2) \,, \end{equation} we include the $N$-independent parts of the NLO cross section in the one-loop hard matching coefficient, \begin{equation} \mathcal H^{(1)}_{ab \to ij}(M^2,\mu_F^2,\mu_R^2) = \sigma^{(0)}_{ab\to ij}(M^2)\ C^{(1)}_{ab\to ij}(M^2, \mu_F^2, \mu_R^2) \,. \end{equation} To compute this coefficient, we begin with the full NLO cross section of \cref{eq:nlo}. We first neglect the real emission contributions due to the three-particle phase space suppression close to threshold~\cite{Beenakker:2013mva,Beenakker:2011sf}.
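The real part of the soft anomalous dimension directly yields $D^{(1)}_{ab\to ij}=(2\pi/\alpha_s)\,\Re(\bar\Gamma_{ab\to ij})$. The sketch below (with illustrative masses and an illustrative kinematic point only) evaluates this coefficient and verifies that the $C_A$ term, being antisymmetric under $t\leftrightarrow u$, cancels in the sum of a kinematic point and its $t\leftrightarrow u$ mirror:

```python
import math

C_F, C_A = 4.0 / 3.0, 3.0

def D1(s, t, u, mq):
    """D^(1) = (2 pi / alpha_s) Re(Gamma-bar) for q g -> squark + electroweakino."""
    return (C_F * (2.0 * math.log((mq**2 - t) / (math.sqrt(s) * mq)) - 1.0)
            + C_A * math.log((mq**2 - u) / (mq**2 - t)))

# illustrative kinematic point (GeV^2); s + t + u = m_sq^2 + m_chi^2
mq, mchi, s = 1000.0, 200.0, 2500.0**2
t = -2.0e6
u = mq**2 + mchi**2 - s - t

cf_only = lambda x: C_F * (2.0 * math.log((mq**2 - x) / (math.sqrt(s) * mq)) - 1.0)
# the C_A term flips sign under t <-> u, so only the C_F pieces survive the sum
assert abs(D1(s, t, u, mq) + D1(s, u, t, mq) - cf_only(t) - cf_only(u)) < 1e-12
```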
The virtual contributions $\dd\sigma^\text{V}$ and the integrated dipoles $\int_1 \dd \sigma^\text{A}$ in \cref{eq:nlo} correspond to a contribution proportional to $\delta(1-z)$, that is thus constant in $N$ after a Mellin transform. The collinear remainder is split into two pieces related to the insertion operators $\bf P$ and $\bf K$, in which only the former depends on the factorisation scale $\mu_F$ \cite{ hep-ph/9605323,hep-ph/0201036,hep-ph/0011222}. After discarding any $\mathcal O(1/N)$ contribution that vanishes in the large-$N$ limit, only the diagonal terms survive. We obtain for the initial quark \begin{equation}\label{eq:k:quark}\begin{split} \Big\langle{\bf P}(N)\Big\rangle_q =&\ \frac{\alpha_s}{2\pi}\left(\log\bar{N} - \frac{3}{4}\right)\left(2 C_F \log\frac{\mu_F^2}{m_{\tilde{q}}^2 - t} - C_A \log\frac{s}{{m_{\tilde{q}}^2 - t}}\right)\,, \\ % \Big\langle{\bf K}(N)\Big\rangle_q =&\ \frac{\alpha_s}{2\pi}\Bigg\{ C_F \left(2\log^2\bar{N} + \frac{\pi^2}{2} - \frac{\gamma_q}{C_F} - \frac{K_q}{C_F}\right) \\ &+ \left(C_F - \frac{C_A}{2}\right)\left[2\log\bar{N}\left(1+\log\frac{m_{\tilde q}^2}{m^2_{{\tilde q}} - t}\right) + \mathcal{Q}\right]\Bigg\}\,, \end{split}\end{equation} with \begin{equation} \begin{aligned} \mathcal{Q} ={}& \frac{m^2_{{\tilde q}} - t}{2m^2_{{\tilde q}} - t } + \frac{3 m_{\tilde q}}{m_{\tilde q} + \sqrt{2m^2_{{\tilde q}} - t }} + \log\frac{m_{\tilde q}^2}{2m^2_{{\tilde q}} - t } \left(1 + 2\log\frac{m_{\tilde q}^2}{m^2_{{\tilde q}} - t}\right) \\ &-\frac{3}{2}\log\frac{3m^2_{{\tilde q}} - t - 2m_{\tilde q}\sqrt{2m^2_{{\tilde q}} - t }}{m^2_{{\tilde q}} - t} + 2{\rm Li}_2 \left(\frac{2m^2_{{\tilde q}} - t}{m_{\tilde q}^2}\right) - \frac{\gamma_{\tilde{q}}}{C_F} \,. 
\end{aligned} \end{equation} For the initial gluon, we find \begin{equation}\label{eq:k:gluon}\begin{split} \Big\langle{\bf P}(N)\Big\rangle_g =&\ \frac{\alpha_s}{2\pi}\left(C_A \log\bar{N} - \frac{\beta_0}{2}\right)\log\frac{\mu_F^4}{s(m_{\tilde{q}}^2 - u)} \,,\\ \Big\langle{\bf K}(N)\Big\rangle_g =&\ \frac{\alpha_s}{2\pi}\frac{C_A}{2}\left[ 4\log^2\bar{N} \!+\! 2\log\bar{N}\left(1\!+\!\log\frac{m_{\tilde q}^2}{m^2_{{\tilde q}} \!-\! u}\right) \!+\! \pi^2 \!-\! \frac{2\gamma_g}{C_A} \!-\! \frac{2K_g}{C_A} \!+\! \mathcal{G}\right]\,, \end{split}\end{equation} with \begin{equation} \begin{split} \mathcal{G} ={}& \frac{m^2_{{\tilde q}} - u}{2m^2_{{\tilde q}} - u } + \log\frac{m_{\tilde q}^2}{2m^2_{{\tilde q}} - u }\left(1 + 2 \log\frac{m_{\tilde q}^2}{m^2_{{\tilde q}} - u}\right) + 2{\rm Li}_2 \left(\frac{2m^2_{{\tilde q}} - u}{m_{\tilde q}^2}\right) - \frac{ \gamma_{\tilde q}}{C_F}\\ &+ \frac{\beta_0}{C_A} \left(\log\frac{3m^2_{{\tilde q}} - u-2m_{\tilde q}\sqrt{2m^2_{{\tilde q}} - u}}{m^2_{{\tilde q}} - u} + \frac{2m_{\tilde q}}{\sqrt{2m^2_{{\tilde q}} - u}+m_{\tilde q}}\right) \,. \end{split} \end{equation} In all these expressions, the two-body phase space Mandelstam variables are defined according to the particle ordering of \cref{eq:processes}, and the various constants are~\cite{hep-ph/0201036} \begin{equation} \begin{aligned} &\gamma_q = \frac{3}{2}C_F\,, \qquad &&K_q = \left(\frac{7}{2}-\frac{\pi^2}{6}\right)C_F \,,\\ &\gamma_g = \beta_0 = \frac{11}{6}C_A - \frac{2}{3}T_R N_f \,,\qquad &&K_g = \left(\frac{67}{18}-\frac{\pi^2}{6}\right)C_A - \frac{10}{9}T_R N_f \,,\\ &\gamma_{\tilde{q}} = 2 C_F \,,\qquad &&K_{\tilde{q}} = \left(4-\frac{\pi^2}{6}\right)C_F \,. \end{aligned} \end{equation} As we have ignored any $1/N$ terms in the above computation of the hard matching coefficient, we employ the standard collinear~unimproved resummation formalism as opposed to the collinear improved one of Refs.~\cite{Kramer:1996iq,Catani:2001ic,Kulesza:2002rh,Almeida:2009jt}. 
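As a quick numerical cross-check of the constants above (evaluated for five active flavours), the sketch below also verifies the identity $A_a^{(2)}=2\,C_a K_g$, which follows from comparing with the resummation coefficients of \cref{sec:ref} for $T_R=1/2$:

```python
import math

C_F, C_A, T_R, N_f = 4.0 / 3.0, 3.0, 0.5, 5

gamma_q  = 1.5 * C_F
K_q      = (3.5 - math.pi**2 / 6.0) * C_F
beta0    = 11.0 / 6.0 * C_A - 2.0 / 3.0 * T_R * N_f   # equals gamma_g
K_g      = (67.0 / 18.0 - math.pi**2 / 6.0) * C_A - 10.0 / 9.0 * T_R * N_f
gamma_sq = 2.0 * C_F
K_sq     = (4.0 - math.pi**2 / 6.0) * C_F

assert abs(beta0 - 23.0 / 6.0) < 1e-12     # five active flavours
assert abs(gamma_q - 2.0) < 1e-12
# A_a^(2) from the refactorised exponent coincides with 2 C_a K_g for T_R = 1/2
A2_q = 2.0 * C_F * ((67.0 / 18.0 - math.pi**2 / 6.0) * C_A - 5.0 / 9.0 * N_f)
assert abs(A2_q - 2.0 * C_F * K_g) < 1e-12
```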
While only the $N$-independent terms are necessary in practice, we included the logarithmic terms in the above expressions to be able to validate analytically the re-expansion of the resummed cross section at ${\cal O}(\alpha_s^2)$ in \cref{sec:match}. \subsection{Matching and expansion} \label{sec:match} So far we have computed a fixed order cross section $\sigma^\text{NLO}$ and a resummed cross section $\sigma^\text{Res.}$. As the latter is a good approximation near threshold and the former far from it, they should be consistently combined. Therefore, we sum up both contributions and remove the terms that are accounted for both in the resummed and the fixed-order predictions, thus avoiding any double counting. A consistent matching is achieved by re-expanding $\sigma^\text{Res.}$ at ${\cal O}(\alpha_s^2)$ and subtracting this quantity $\sigma^\text{Exp.}$ from the sum of the resummed and fixed-order results, \begin{equation} \sigma_{ab} = \sigma_{ab}^\text{NLO} + \sigma_{ab}^{\text{Res.}} - \sigma_{ab}^\text{Exp.}\,. \end{equation} The expansion is given in terms of the first- ($\mathcal H ^{(0)}$) and second-order ($\mathcal H^{(1)}$) hard function coefficients of \cref{sec:hard}, \begin{equation} \label{eq:exp} \begin{split} \sigma^\text{Exp.}_{ab} &= \mathcal H^{(0)}_{ab\to ij}(M^2,\mu^2) + \frac{\alpha_s}{2 \pi}\mathcal H^{(1)}_{ab\to ij}(M^2,\mu^2)+ \frac{\alpha_s}{2 \pi}\mathcal H^{(0)}_{ab\to ij}(M^2,\mu^2) \\ &\times \left( \Big(A_a^{(1)} + A_b^{(1)}\Big) \Big( \log \bar N + \log \frac {\mu_F^2}{s} \Big) - 2 D_{ab\to ij}^{(1)}\right) \log \bar N\,. \end{split} \end{equation} We can verify that the leading logarithmic terms in $\log^2 \bar N $ agree with those of the $\mathbf K$-operators for quarks ($A_q^{(1)}=2C_F$) and gluons ($A_g^{(1)}=2C_A$) in \cref{eq:k:quark} and \cref{eq:k:gluon}. 
Moreover, combining the next-to-leading logarithmic terms in $\log \bar N$ originating from the soft anomalous dimension in \cref{eq:soft:a:dim} and the terms in $\log (\mu_F^2/s)$ from \cref{eq:exp}, we recover the same terms as in the sum of the contributions of the $\mathbf P$ and $\mathbf K$ operators for quarks and gluons taken separately, \begin{align} 2C_F \left[ \log \frac{\mu_F^2 }{ m_{\tilde q}^2 -t} +\log \frac{ m_{\tilde q}^2}{ m_{\tilde q}^2 -t} +1 \right] &+ (u \leftrightarrow t), \\ 2C_A \left[ \log \frac{\mu_F^2}{s}+ \log \frac{m_{\tilde q}^2-t}{m_{\tilde q}^2-u}\right] &+ (u \leftrightarrow t) \,. \end{align} The partonic cross section in Mellin space must finally be multiplied by the $N$-moments of the PDFs, and an inverse Mellin transformation must be applied \cite{Catani:1996yz,Contopanagos:1993yq}, \begin{equation} \label{eq:inv:mass} M^2 \frac{\dd\sigma_{AB}}{\dd M^2}(\tau) = \frac{1}{2 \pi i} \int \dd N \tau^{-N} M^2 \frac{\dd \sigma_{AB}}{\dd M^2}(N)\,. \end{equation} Details can be found in Ref.~\cite{1604.01023}. \section{Numerical results} \label{sec:num} For our numerical predictions, we identify the Standard Model parameters with those determined by the Particle Data Group \cite{ParticleDataGroup:2020ssz}. The running of the strong coupling with five active quark flavours is chosen in agreement with the selected PDF set as provided by the \textsc{LHAPDF6} library \cite{lhapdf}. As our default choice of PDFs, we employ the sets of MSHT20~\cite{Bailey:2020ooq} unless stated otherwise. To be specific, we use at LO the set MSHT20LO130 with $\alpha_s(M_Z)=0.130$, and at NLO and NLO+NLL the set MSHT20NLO118 with $\alpha_s(M_Z)=0.118$. Again unless stated otherwise, we consider the dominant squark-electroweakino production channel at the LHC, {\it i.e.} the production of a left-handed or a right-handed up-type squark in association with the lightest neutralino.
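The contour integral of \cref{eq:inv:mass} can be evaluated numerically along a vertical line $\Re N = c$ to the right of all singularities, folding the integral onto the upper half of the contour via $F(N^*)=F(N)^*$. A minimal sketch with toy moments $F(N)=1/[N(N+1)]$, whose inverse is $1-\tau$, purely to validate the contour routine:

```python
import math

def inverse_mellin(F, tau, c=1.0, t_max=80.0, n=4000):
    """(1/2 pi i) int dN tau^(-N) F(N) along Re(N) = c, folded onto t > 0."""
    h = t_max / n
    total = 0.0
    for k in range(n):
        N = complex(c, (k + 0.5) * h)
        total += (tau ** (-N) * F(N)).real
    return total * h / math.pi

# toy check: the Mellin moments of f(y) = 1 - y are 1/(N(N+1))
tau = 0.3
val = inverse_mellin(lambda N: 1.0 / (N * (N + 1.0)), tau)
assert abs(val - (1.0 - tau)) < 1e-3
```

In practice the contour is deformed (e.g. bent towards the negative real axis) to improve convergence; the simple vertical line above is sufficient for the smooth toy moments used here.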
\subsection{Validation of our NLO implementation} In order to validate the numerical implementation of our NLO calculation, we compare our NLO predictions with those obtained with \textsc{MadGraph5$\_$aMC@NLO} \cite{1405.0301}, which relies on the MSSM model implementation described in Ref.~\cite{1907.04898}. In addition, we make use of the \textsc{MadSTR} plugin to subtract on-shell squark and gluino contributions from the real emission pieces. We find excellent numerical agreement between the two approaches. In addition, we validate our NLO implementation by studying the squark mass dependence of the total cross section and comparing our results to those of an independent, previously published automated calculation \cite{1108.1250} at the MSSM benchmark point SPS1a$_{1000}$. This benchmark is based on the point SPS1a \cite{hep-ph/0202233}, for which the physical particle spectrum is calculated with \textsc{SPheno} 3 \cite{Porod:2011nf}, and the gluino mass is then shifted to \SI{1}{TeV}. Our results are presented in \cref{fig:mass_plehn}. \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{fig02a.pdf} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{fig02b.pdf} \end{subfigure} \caption{ NLO cross sections $\sigma(pp \to\tilde u_{L,R} {\tilde \chi}_1^0)$ and their virtual and real components (top panels) as well as the associated $K$ factors (bottom panels) as a function of the squark mass $m_{{\tilde u}_{L,R}}$ for a fixed mass difference $m_{{\tilde u}_L}-m_{{\tilde u}_R} = \SI{20}{GeV}$. The remaining MSSM parameters are fixed to the benchmark point SPS1a$_{1000}$, the LHC energy is $\sqrt{S}=\SI{7}{TeV}$, and CTEQ6.6M PDFs are employed. } \label{fig:mass_plehn} \end{figure} The contributions from real and virtual corrections, made individually finite with the Catani-Seymour dipole formalism \cite{hep-ph/0201036}, are also shown separately.
The contributions from the integrated dipoles and the collinear counterterms $\mathbf{P} + \mathbf{K}$ have been combined with the virtual corrections. For the associated production of a right-handed up-type squark with the lightest neutralino (right), we observe the same sign flip in the real corrections as that found in Ref.~\cite{1108.1250}. Moreover, our $K$-factors for the real and virtual corrections, defined by their ratios with respect to the Born cross section, exhibit the same behaviour as those in the reference computation, both for $\tilde u_L\tilde\chi_1^0$ and $\tilde u_R\tilde\chi_1^0$ production. The remaining minor numerical differences between the two calculations can be traced back to the use of a slightly different PDF set, as \textsc{LHAPDF6} \cite{lhapdf} no longer supports CTEQ6M~\cite{Pumplin:2002vw}, and a benchmark scenario with small differences in the superpartner masses (below \SI{1}{\percent}). Next, we demonstrate in \cref{fig:scale_plehn} \begin{figure} \includegraphics[width=\linewidth]{fig03.pdf} \caption{ Profile of the renormalisation and factorisation scale dependence of the total cross section for the process $pp \to\tilde u_{R} {\tilde \chi}_1^0$. The plot covers $\mu_{F,R}\in (0.1-10)\mu_0$, where the central scale is $\mu_0= (m_{\tilde q} + m_{\tilde \chi})/2$. The bands show the scale uncertainties as obtained from the seven-point method. The predictions in this plot are for a centre-of-mass energy of $\sqrt{S}=$ 7 TeV and CTEQ6.6M PDFs. } \label{fig:scale_plehn} \end{figure} that the factorisation and renormalisation scale dependence of our result agrees with the findings of Ref.~\cite{1108.1250}.
In addition, we show as shaded bands the uncertainties obtained by varying the central scale $\mu_0\equiv\mu_R=\mu_F= (m_{\tilde q} + m_{\tilde \chi})/2$ with the seven-point method, {\it i.e.} by varying both scales independently by a factor of two up and down, but excluding relative factors of four between the two scales. As expected, the NLO corrections increase the total cross section and reduce the scale uncertainties. For illustrative purposes, we include also our new NLL+NLO predictions (green). While they do not systematically increase the NLO cross section for the masses chosen here, the scale uncertainty is reduced by about a factor of two relative to the NLO result. \subsection{Invariant mass distributions} In the following, we present results for two specific phenomenological MSSM scenarios with eleven parameters (pMSSM-11). The input parameters and the relevant resulting physical masses obtained with {\sc SPheno} 3 are listed in \cref{tab:scenarios}. \begin{table} \begin{center} \begin{tabular}{ |c|c|c| } \hline \multicolumn{3}{|c|}{pMSSM-11 scenario A } \\ \hline $M_1$ & $M_2$ & $M_3$ \\ $0.25$ & $0.25$ & $-3.86$ \\ \hline $M_{(U,D,Q)_{1,2}}$ & $M_{(U,D,Q)_{3}}$& $\mu$\\ $4.0$ & $1.7$ & $1.33$\\ \hline $M_{(L,E)_{1,2}}$ & $M_{(L,E)_{3}}$&$\tan \beta$ \\ $0.35$ & $0.47$ &$36$ \\ \hline $M_A$ & $A_0$ &\\ $4.0$ & $2.8$ & \\ \hline $m_{{\tilde \chi}_1^0}$ & $m_{{\tilde u}}$& $m_{\tilde g}$\\ $0.249$ & $ 4.07 $ & $ 3.90 $\\ \hline \end{tabular}\hspace{1cm} \begin{tabular}{ |c|c|c| } \hline \multicolumn{3}{|c|}{pMSSM-11 scenario B } \\ \hline $M_1$ & $M_2$ & $M_3$ \\ $0.51$ & $0.48$ & $3.00$ \\ \hline $M_{(U,D,Q)_{1,2}}$ & $M_{(U,D,Q)_{3}}$& $\mu$ \\ $0.9$ & $2.0$& $-9.4$ \\ \hline $M_{(L,E)_{1,2}}$ & $M_{(L,E)_{3}}$ &$\tan \beta$ \\ $1.85$ & $1.33$ &$33$ \\ \hline $M_A$ & $A_0$ &\\ $3.0$ & $-3.4$ &\\ \hline $m_{{\tilde \chi}_1^0}$& $m_{{\tilde u}}$& $m_{\tilde g}$ \\ $0.505$ & $ 0.96 $& $ 2.94 $ \\ \hline \end{tabular} \end{center} \caption{Higgs and soft 
SUSY breaking parameters in our pMSSM-11 benchmark models, together with the relevant resulting physical particle masses. All values, except for $\tan \beta$, are given in \si{TeV}. } \label{tab:scenarios} \end{table} First, we focus on a scenario featuring large squark masses of \SI{4}{TeV}, referred to as scenario~A. Second, scenario~B explores squark and gaugino masses expected to be within the reach of Run 3 of the LHC. Both scenarios are based on the global fits of Ref.~\cite{1710.11091}. Scenario A is derived from a fit that includes data from the anomalous magnetic moment of the muon~\cite{Muong-2:2021ojo}, while scenario B does not include it. In addition, we have lowered the parameters $M_1$ and $M_2$ in scenario B and have increased the parameter $M_3$ to bring the squark and gluino masses into agreement with the current SUSY limits from the LHC \cite{2101.01629,2010.14293,1908.04722}. In \cref{fig:invariant_mass}, \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{fig05ul.pdf} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{fig05ur.pdf} \end{subfigure} \newline \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{fig05ll.pdf} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{fig05lr.pdf} \end{subfigure} \caption{Invariant-mass distributions for the processes $pp \to {\tilde u}_{L,R} {\tilde \chi}_1^0$ (top panels). The uncertainties correspond to variations around the central scale $\mu_0=M$, and we additionally show NLL+NLO/NLO $K$-factors (bottom panels). The results are shown for both scenarios A and B with MSHT20 PDFs and a centre-of-mass energy of $\sqrt{S}= \SI{13}{TeV}$.
} \label{fig:invariant_mass} \end{figure} we show invariant mass distributions as derived from \cref{eq:inv:mass} for the associated production of left- and right-handed squarks with the lightest neutralino in both scenarios A and B. An invariant mass below the combined final state particle masses $M_0=m_{\tilde \chi} + m_{\tilde q}$ is kinematically forbidden. Very close to this lower limit there is a rapid increase of the cross section, until it peaks at about $M \approx 1.1 M_0$ and then falls off towards higher values of $M$. As the invariant mass $M$ increases, we get closer to the threshold region $z = M^2/s \to 1$, and NLL corrections contribute significantly more to the differential cross section. This behaviour is captured by the NLL+NLO/NLO $K$-factors shown in the lower panels of the figure. For scenario A, the increase of the NLO cross section goes from \SI{25}{\percent} in the region of the peak to more than \SI{50}{\percent} at large invariant masses. In contrast, scenario B receives smaller corrections of \SI{10}{\percent} to \SI{20}{\percent} due to the smaller invariant mass $M$ relevant for the bulk of the cross section. The lower panels display the relative seven-point scale uncertainty, when the scale is varied around a central scale choice of $\mu_0 = M$. Across the whole mass range, the scale uncertainty is first reduced when comparing LO rates to NLO ones, and next further reduced when considering NLO+NLL predictions. By performing a similar calculation with $\mu_0 = (m_{\tilde q} + m_{\tilde \chi})/2$ we recover the total cross section (see below) as the area under the invariant mass distribution. In our figures, results for the production of left-handed up-squarks in scenario A are scaled by a factor of 10. Due to the bino-like nature of the neutralino ${\tilde \chi}^0_1$ in this scenario, the coupling to the right-handed up-squark is dominant, and so is the cross section related to the production of a $\tilde u_R\tilde\chi_1^0$ pair. 
In scenario B, however, the composition of ${\tilde \chi}^0_1$ is roughly \SI{50}{\percent} wino and \SI{50}{\percent} bino, yielding cross sections of the same order for the two processes $pp\to \tilde u_R\tilde\chi_1^0$ and $pp\to \tilde u_L\tilde\chi_1^0$. \subsection{Total cross sections and their scale uncertainty} In \cref{fig:scale}, \begin{figure} \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\linewidth]{fig04a.pdf} \end{subfigure} \newline \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\linewidth]{fig04b.pdf} \end{subfigure} \caption{ Profiles of the renormalisation and factorisation scale dependence of the total cross section corresponding to the process $pp \to\tilde u_{L} {\tilde \chi}_1^0$ in scenarios A and B. The plots cover $\mu \in (0.1-10)\mu_0$ with a central scale $\mu_0= (m_{\tilde q} + m_{\tilde \chi})/2$. The bands correspond to scale uncertainties evaluated with the seven-point method, and we use $\sqrt{S}=\SI{13}{TeV}$ and MSHT20 PDFs. We present predictions at LO, NLO and NLO+NLL, as well as for the ${\cal O}(\alpha_s^2)$ expansion of the NLL result. } \label{fig:scale} \end{figure} we present predictions for the total cross section related to the process $pp\to\tilde u_L\tilde\chi_1^0$ in scenarios A and B with squark masses of \SI{4}{TeV} and \SI{1}{TeV}, respectively, together with the associated scale uncertainties. The results are shown at LO, NLO and NLO+NLL for a centre-of-mass energy of $\sqrt{S}=\SI{13}{TeV}$. Our predictions show a significant increase of the total cross section in scenario A when including NLO+NLL corrections as well as a reduction of the scale uncertainties in both scenarios.
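As a concrete illustration, the seven-point scale-variation envelope used throughout this section can be sketched in a few lines. This is a generic sketch, not code from the paper; `sigma` stands for any callable returning the cross section at given renormalisation and factorisation scales:

```python
from itertools import product

def seven_point_band(sigma, mu0):
    """Seven-point scale-variation envelope: mu_R and mu_F are varied
    independently by factors of two around mu0, dropping the two
    combinations with mu_F/mu_R = 4 or 1/4."""
    points = [(r * mu0, f * mu0) for r, f in product([0.5, 1.0, 2.0], repeat=2)
              if 0.25 < f / r < 4]          # keeps 7 of the 9 combinations
    values = [sigma(mur, muf) for mur, muf in points]
    central = sigma(mu0, mu0)
    # central prediction plus the upward/downward shifts of the envelope
    return central, max(values) - central, central - min(values)
```

With a toy cross section such as `sigma(mur, muf) = mur + muf`, the returned band is simply the spread of that function over the seven retained scale combinations.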
The uncertainty bands are again determined by the seven-point method, where the factorisation and renormalisation scales are both varied independently by factors of two up and down around the central scale $\mu_0=(m_{\tilde q}+ m_{\tilde \chi})/2$, excluding the cases where $\mu_{F}/\mu_{R} = 4$ or $1/4$. For both examined scenarios, we observe that the relative scale uncertainties are reduced from about \SI{\pm 20}{\percent} at LO to \SI{\pm 10}{\percent} at NLO, and finally fall below \SI{\pm 5}{\percent} at NLO+NLL. The kink at $\mu_F=\mu_R=0.1\mu_0$ is more prominent in scenario A than in scenario B. It originates from the subtraction of on-shell squark and gluino resonant contributions from the real emission component of the cross section. We also include predictions for the expansion of the NLL predictions at ${\cal O}(\alpha_s^2)$, following \cref{eq:exp} (solid red curve). As expected, for large scales the logarithmic terms become dominant, and the expansion consequently approximates the NLO result well. \subsection{Parton density uncertainties of the total cross section} So far we have only studied scale uncertainties associated with the total rates for squark-electroweakino production at the LHC. There is, however, a second important source of theoretical uncertainties, {\it i.e.}~those coming from the parton density fits. To compute them, we use the methods available in \textsc{LHAPDF}, which allow one to calculate the PDF uncertainties in two different ways \cite{pdfmonte}: \begin{description} \item[ - Eigenvectors:] Experimental uncertainties are parametrised by making computations for a set of orthogonal Hessian PDF eigenvectors. The uncertainty is calculated from \begin{equation} \Delta\sigma_{\text{PDF}\pm} = \sqrt{\sum_{i=1}^n \Big[\max(\pm \sigma_{+i} \mp \sigma_0,\ \pm \sigma_{-i}\mp \sigma_0,0 )\Big]^2}\,, \end{equation} where the index $i$ runs over all PDF eigenvectors, with $i=0$ representing the central best-fit set.
This method is the one to be used with CT18~\cite{Hou:2019efy} and MSHT20~\cite{Bailey:2020ooq} densities. \item[ - Replicas:] Monte Carlo PDF sets are provided with multiple replicas that need to be combined to get a symmetric statistical uncertainty on the predictions. This is achieved through the formula \begin{equation} \Delta\sigma_{\text{PDF}\pm} = \sqrt{\frac{1}{n-1}\sum_{i=1}^n \Big[\sigma_{i}-\langle\sigma\rangle\Big]^2}\,, \end{equation} where the index $i$ runs over the entire set of PDF replicas, with the central value being given by the mean cross section value $\langle \sigma \rangle = \frac{1}{n} \sum^n_{i=1} \sigma_i\simeq \sigma_0$. This method is used with NNPDF40 densities~\cite{Ball:2021leu}. \end{description} \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{fig06ul.pdf} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{fig06ur.pdf} \end{subfigure} \newline \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{fig06ll.pdf} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{fig06lr.pdf} \end{subfigure} \caption{ Relative PDF uncertainties of the total cross sections for the processes $pp \to\tilde q_{L,R} {\tilde \chi}_1^0$ at the LHC with a centre-of-mass energy of $\sqrt{S}=\SI{13}{TeV}$ and at NLO+NLL. The uncertainties are shown as a function of the squark mass $m_{{\tilde q}_{L,R}}$ and for four different choices of PDFs, namely MSHT20NLO118 (blue), CT18NLO (orange), NNPDF40NLO01180 (green) and NNPDF40NLOPCH01180 (red). } \label{fig:pdfs} \end{figure} For our predictions at NLO and NLO+NLL, we calculated the PDF uncertainties at the \SI{90}{\percent} confidence level. The resummation of large logarithms does not significantly alter the size of the relative PDF uncertainties, as the same set of PDFs is used in both calculations.
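The two combination rules above can be sketched as follows. This is an illustrative sketch (the function names and the input layout are ours), not part of \textsc{LHAPDF}:

```python
import numpy as np

def pdf_uncertainty_hessian(sigma0, sigma_plus, sigma_minus):
    """Asymmetric Hessian (eigenvector) uncertainty, as for CT18/MSHT20:
    sigma_plus[i] and sigma_minus[i] are the cross sections obtained with
    the +/- members of eigenvector i; sigma0 is the central prediction."""
    plus = np.asarray(sigma_plus, float)
    minus = np.asarray(sigma_minus, float)
    up = np.sqrt(np.sum(np.maximum.reduce(
        [plus - sigma0, minus - sigma0, np.zeros_like(plus)])**2))
    down = np.sqrt(np.sum(np.maximum.reduce(
        [sigma0 - plus, sigma0 - minus, np.zeros_like(plus)])**2))
    return up, down

def pdf_uncertainty_replicas(sigmas):
    """Symmetric Monte Carlo (replica) uncertainty, as for NNPDF40."""
    s = np.asarray(sigmas, float)
    return s.mean(), s.std(ddof=1)  # ddof=1 gives the 1/(n-1) normalisation
```

Note that confidence-level rescalings (e.g.\ converting a native \SI{68}{\percent} interval to the \SI{90}{\percent} level quoted here) are applied on top of these formulas and are omitted from the sketch.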
For scenario B we show in \cref{fig:pdfs} the PDF uncertainties associated with NLO+NLL total cross sections for the different choices of parton densities mentioned above, {\it i.e.}~for MSHT20, CT18 and NNPDF40. We consider the process $pp\to \tilde u_{L,R}\tilde\chi_1^0$ in the top row of the figure and present predictions as a function of the squark mass. While the uncertainty is of about \SI{5}{\percent} for \SI{1}{TeV} up-squarks, it increases up to \SI{10}{\percent} to \SI{15}{\percent} for squark masses of \SI{3}{TeV}. This increase is related to the large partonic momentum fractions $x$ relevant for such a large mass, where the PDFs are less constrained in their fitting procedure. The central cross section values obtained with the MSHT20 and CT18 sets agree consistently at the percent level in the explored mass range, the MSHT20 errors being slightly smaller as a consequence of this set being more recent than the CT18 one. On the other hand, the NNPDF40 predictions are a few percent lower, although they are still in reasonable agreement within their uncertainty intervals with the predictions achieved with other PDFs. In the lower two plots of \cref{fig:pdfs} we show results for the production of a charm squark in association with the lightest neutralino. We observe that the results obtained with the central NNPDF40NLO01180 set with $\alpha_s(M_Z)=0.118$ give cross sections that are larger by a factor of three to four with respect to those obtained with the CT18NLO set and with the MSHT20NLO118 set, both also with $\alpha_s(M_Z)=0.118$. In addition, the uncertainties associated with the NNPDF40 predictions are of about \SI{30}{\percent} to \SI{50}{\percent}, in contrast with predictions obtained with CT18 and MSHT20, which have much smaller uncertainties. This discrepancy can be traced back to the treatment of the charm quark in the NNPDF40 fit~\cite{2104.09174} and is expected to be even more significant in processes with two charm quarks or antiquarks in the initial state.
The cross sections estimated with the alternative NNPDF40NLOPCH01180 PDF fit (red) with $\alpha_s(M_Z)=0.118$, in which the treatment of the charm quark is kept purely perturbative, are, in contrast, in good agreement with CT18 and MSHT20 both for the central values and the uncertainties. \subsection{Squark and gaugino mass dependence of the total cross section} The dependence of the total cross section for associated squark-electroweakino production on the masses of the produced particles is important for estimating the sensitivity of Run 3 at the LHC to this process. A precise quantitative statement would of course require a detailed signal and background analysis, which is beyond the scope of this work. Therefore, we show in \cref{fig:mass} \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{fig07ul.pdf} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{fig07ur.pdf} \end{subfigure} \newline \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{fig07ll.pdf} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{fig07lr.pdf} \end{subfigure} \caption{Total cross sections for the processes $pp \to {\tilde u}_{L,R}{\tilde \chi}^0_1$ and their relative scale uncertainties (top panels) as well as (NLO+NLL)/NLO $K$-factors (bottom panels) in scenario B. In the first row, we vary the squark mass $m_{{\tilde u}_{L,R}}$, keeping a fixed distance between the left- and right-handed squark masses $m_{{\tilde u}_L}-m_{{\tilde u}_R} = \SI{100}{GeV}$. In the second row, we vary the electroweakino mass $m_{{\tilde \chi}}$. The other parameters defining scenario B are not modified.
The LHC energy is $\sqrt S = \SI{13}{TeV}$, and we use MSHT20 PDFs.} \label{fig:mass} \end{figure} the total cross sections and resulting relative scale uncertainties for scenario B as a function of the SUSY particle masses, both for $\tilde u_L\tilde\chi_1^0$ (left) and $\tilde u_R\tilde\chi_1^0$ (right) production. As expected, the cross sections fall steeply with either mass. Our predictions indicate that an integrated luminosity of \SI{350}{fb^{-1}} at $\sqrt{S} = \SI{13}{TeV}$ from the LHC Run 3~\cite{hllhc} will lead to the production of hundreds of squark-electroweakino events for a neutralino mass of \SI{0.5}{TeV} and squark masses ranging up to \SI{2}{TeV}. In the lower panels of the plots, we observe an improvement in the precision of the predictions over the whole mass range. Resummation effects reduce the scale dependence from \SI{\pm 10}{\percent} at NLO to below \SI{\pm 5}{\percent} at NLO+NLL. The black curves in the lower insets of the figures represent the ratio of the NLO+NLL predictions to the NLO ones and demonstrate the increasing impact of resummation with rising mass values. As in the previous sections, this demonstrates that resummation effects are larger near the hadronic threshold. While the central cross section values are enlarged by \SI{50}{\percent} when adding NLO corrections to the LO rates, the additional increase from NLL resummation reaches only about \SI{6}{\percent} for the mass ranges observable at the LHC in the near future. \section{Conclusion} \label{sec:concl} We have presented in this paper a threshold resummation calculation at NLO+NLL accuracy for the associated production of a squark and an electroweakino at the LHC. This process has the potential to become important in the near future, if squarks and gluinos turn out to be too heavy to be produced in pairs.
The semi-strong production of one electroweak and one strongly-interacting superpartner indeed offers cross sections of intermediate size and a larger available phase space thanks to the possibility of having a lighter electroweakino in the final state. Our investigations required the calculation of the full NLO corrections to the LO rate, as well as of the associated process-dependent soft anomalous dimension and hard matching coefficients. By matching fixed-order and resummed predictions, we consistently combined the resummation of large logarithms appearing close to threshold at NLL with NLO results. NLL resummation has been found to increase the NLO cross sections by up to \SI{6}{\percent} for squark masses expected to be in the reach of the LHC Run 3. In addition, the resummation procedure allowed for a stabilisation of the predictions relative to the scale dependence, the scale uncertainties being reduced to below \SI{5}{\percent} in the explored mass regime. Our calculation will be included in the next version of the public code \textsc{Resummino}~\cite{resumminourl}. \acknowledgments We thank M. Sunder for his collaboration in the early stages of this project. This work has been supported by the BMBF under contract 05P21PMCAA and by the DFG through the Research Training Network 2149 “Strong and Weak Interactions - from Hadrons to Dark Matter”. \paragraph{Open Access.} This article is distributed under the terms of the Creative Commons Attribution License (CC-BY-4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
\section{Introduction} Smartphones have now replaced many measuring instruments and devices, especially the ones that we use in our daily life. Examples are the alarm clock, calendar, calculator, maps etc. With the help of computer vision, we also have apps that can monitor a person's health and estimate blood pressure and heart beat rate accurately. Further, there are apps that can help the visually impaired to read, identify currency or navigate through the environment. This has been possible due to the fact that modern smartphones are packed with various sensors and offer ever increasing processing capabilities. The availability of affordable smartphones has put these apps to frequent use by more than 1 billion people every day. One similar application could be to use the smartphone to estimate the dimensions of various objects, sketches and drawings. It can be used as an accurate and reliable measuring device, and specifically to estimate 3D affine measurements from a single view. In order to estimate object dimensions we use single-view metrology. The single-view metrology method makes use of planes and parallel lines in the scene to extract the physical dimensions of structures from a single perspective view, given minimal prior knowledge about the scene. The prior knowledge usually involves the recognition of vanishing points of a reference plane, and a vanishing point of a direction not parallel to the reference plane. As a result, this method is especially suitable for scene structures that involve parallel lines, which can often be found in man-made structures, such as architecture and geometric elements. Concepts of the single-view metrology method were described in a number of papers \cite{Reid, Kim} and were later generalized by A. Criminisi et al. \cite{Criminisi}. To increase the usability and reliability of such a system, we use a predefined reference object which ensures that we have enough information about the scene.
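The core single-view height computation of \cite{Criminisi} that this approach builds on can be sketched in a few lines of NumPy. This is our illustrative sketch (function and variable names are ours), where image points and lines are homogeneous 3-vectors:

```python
import numpy as np

def height_from_single_view(l, v, b_r, t_r, Z_r, b_x, t_x):
    """Single-view height estimate (Criminisi-style metrology).
    l        : vanishing line of the reference plane (homogeneous 3-vector)
    v        : vanishing point of the reference (vertical) direction
    b_r, t_r : image of the bottom/top of the known reference height Z_r
    b_x, t_x : image of the bottom/top of the unknown height"""
    l, v = np.asarray(l, float), np.asarray(v, float)
    b_r, t_r = np.asarray(b_r, float), np.asarray(t_r, float)
    b_x, t_x = np.asarray(b_x, float), np.asarray(t_x, float)
    # metric factor:  alpha * Z_r = -||b_r x t_r|| / ((l . b_r) ||v x t_r||)
    alpha = -np.linalg.norm(np.cross(b_r, t_r)) / (
        l.dot(b_r) * np.linalg.norm(np.cross(v, t_r)) * Z_r)
    # unknown height: Z_x = -||b_x x t_x|| / (alpha (l . b_x) ||v x t_x||)
    return -np.linalg.norm(np.cross(b_x, t_x)) / (
        alpha * l.dot(b_x) * np.linalg.norm(np.cross(v, t_x)))
```

For example, with the synthetic 1D camera $y(Z)=Z/(Z+1)$ (vertical vanishing point at $y=1$), a reference of height 1 imaged at $y=1/2$ recovers an object imaged at $y=2/3$ as having height 2.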
\begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{images/usecase.jpg} \caption{A typical use-case for DimensionsApp.} \label{fig:use_case} \end{figure} \section{Problem Statement} Our goal is to estimate the dimensions of objects given a reference object in the scene. A typical use-case of the app is shown in Figure 1. In this project we consider measuring only one dimension at a time (the height of the boxes in Figure 1). Accuracy of measurement is naturally the first concern for such apps, followed by speed. The app should be accurate, fast and robust while using little memory and storage. Since we focus on the usability of the app, it should be easy to use and capable of handling inaccurate input points due to the touch interface. \section{Challenges and Design} Working on a mobile platform brings with it a number of unique challenges that need to be taken care of. Primarily, the restrictions are in the memory, the application size, and the processing time. The single-view metrology method can provide length information only in the reference direction. Given the casual nature of everyday use and the limited physical screen size, we need to align inaccurate input points to the reference direction. Since a projective transformation is involved while capturing the image, simply adjusting the slope of the line is not a solution. Further, since the app is to be used in all kinds of environments, finding parallel lines automatically in the scene is difficult. Moreover, parallel lines estimated this way are far from accurate, and the resulting inaccuracy leads to large errors in the final estimate. \section{Solution - Single view metrology} \subsection{Finding Parallel Lines in the Scene} As previously mentioned, automatically detecting parallel lines in the scene is a difficult task, and the results are also not reliable. To overcome this, we find parallel lines with the help of the reference object.
To estimate the planar transformation of surfaces, we compute the homography between the scene and an image of each side of the reference object (Figure 2.a). We used SIFT features to find correspondences (Figure 2.b); however, ORB-FREAK also works well with some restrictions, such as the size of the reference object and how skewed it is in the scene. The homography matrix $H$ is computed using the RANSAC implementation of OpenCV. Using $H$, we map the predefined parallel lines into the scene. \subsection{Finding Vanishing Line and Vanishing Point} Using the sets of parallel lines obtained previously, we can easily determine the vanishing line of the reference plane and the vanishing point in the reference direction (Figures 2.c and 2.d). The vanishing line \textbf{l} is computed using two pairs of parallel lines, while the vanishing point \textbf{v} is computed using one pair of parallel lines. \subsection{Aligning User's Input to the Reference Direction} Due to the nature of touch screens, the input points \textbf{b}$_x$ and \textbf{t}$_x$ (Figure 2.e) may deviate from the reference direction. This deviation can cause large errors in further steps. To overcome this, we align the user input points to the reference direction before computing metrics. After the user has provided the input points \textbf{b}$_x$ and \textbf{t}$_x'$, we project \textbf{t}$_x'$ onto the line $\textbf{v} \times \textbf{b}_x$ to get the point \textbf{t}$_x$. It can be easily verified that \textbf{b}$_x$ and \textbf{t}$_x$ are points in the reference direction, i.e.\ that the line joining \textbf{b}$_x$ and \textbf{t}$_x$ also passes through \textbf{v}.
\textbf{b}$_r$ and \textbf{t}$_r$ are the bottom and top of the reference height, respectively. The metric factor $\alpha$ can then be found from the following equation: \begin{equation} \alpha Z_r = - \frac{\lVert \textbf{b}_r \times \textbf{t}_r \rVert}{\left( \textbf{l}\cdot\textbf{b}_r \right)\lVert \textbf{v} \times \textbf{t}_r \rVert} \end{equation} For a given object on the reference plane, its height $Z_x$ can then be found from the following equation, where \textbf{b}$_x$ and \textbf{t}$_x$ are the bottom and the top of the object, respectively: \begin{equation} Z_x = - \frac{\lVert \textbf{b}_x \times \textbf{t}_x \rVert}{\alpha \left( \textbf{l}\cdot\textbf{b}_x \right)\lVert \textbf{v} \times \textbf{t}_x \rVert} \end{equation} \begin{figure*}[H] \centering \begin{subfigure}[]{\includegraphics[width=0.2\linewidth ]{images/a.jpg}} \end{subfigure} \begin{subfigure}[]{\includegraphics[width=0.2\linewidth ]{images/b.jpg}} \end{subfigure} \begin{subfigure}[]{\includegraphics[width=0.4\linewidth ]{images/c.jpg}}\\ \end{subfigure} \begin{subfigure}[]{\includegraphics[width=0.2\linewidth ]{images/d.jpg}} \end{subfigure} \begin{subfigure}[]{\includegraphics[width=0.2\linewidth ]{images/e.jpg}} \end{subfigure} \begin{subfigure}[]{\includegraphics[width=0.5\linewidth ]{images/svm.jpg}} \end{subfigure} \caption{(a) Images of the reference object used to compute the homography. (b) SIFT correspondences between the image of the reference object and the scene. (c) Determining the vanishing line \textbf{l} using two pairs of parallel lines. (d) Determining the vanishing point \textbf{v} using one pair of parallel lines. (e) Illustration of height measurement using vanishing points and a reference height (image taken from \cite{MITguy}).} \end{figure*} \section{Results} \subsection{DimensionsApp} Some salient features of DimensionsApp are: \begin{itemize} \item Automatically finds the vanishing line and vanishing point in the scene. \item Automatically aligns user input points to the reference direction.
\item Can be used with almost any predefined reference object. \item Instant, accurate results, with errors within +/- 5 mm. \item Small size (2.4 MB), using only 8.4 MB of RAM. \end{itemize} \begin{figure*}[H] \centering \begin{subfigure}[]{\includegraphics[width=0.32\linewidth]{images/r1.jpg}} \end{subfigure} \begin{subfigure}[]{\includegraphics[width=0.32\linewidth]{images/r2.jpg}} \end{subfigure} \begin{subfigure}[]{\includegraphics[width=0.32\linewidth]{images/r3.jpg}} \end{subfigure} \caption{A few examples of DimensionsApp. (a) Actual height = 5cm, estimated height = 5.2cm. (b) Actual height = 10cm, estimated height = 10.19cm. (c) Actual height = 17cm, estimated height = 17.41cm. All errors are within +/- 5mm.} \end{figure*} \begin{figure}[b] \centering \begin{subfigure}[]{\includegraphics[width=0.45\linewidth]{images/f1.jpg}} \end{subfigure} \begin{subfigure}[]{\includegraphics[width=0.45\linewidth]{images/f2.jpg}} \end{subfigure} \caption{Failure cases for DimensionsApp. (a) Actual depth = 10cm, estimated height = 6.5cm. (b) Failure due to bad correspondences while computing the homography.} \end{figure} { \small \bibliographystyle{ieee}
\section{Introduction and the main results} The paramount property of an analytic function is that it is completely determined by its value and the values of all its derivatives at a single point. Borel first perceived that there is a much larger class of smooth functions than that of analytic functions which has this magnificent property. He coined the term \textit{quasi-analytic} for such a class of functions. In exact terms, a subset of smooth functions on an interval $(a,b)$ is called a quasi-analytic class if for any function $f$ from that set and $x_0\in (a,b)$, $\frac{d^n}{dx^n}f(x_0)=0$ for all $n\in\mathbb{N}$ implies $f=0.$ Now recall that a smooth function on an interval $I$ is analytic provided its Taylor series converges to the function on $I$, which naturally restricts the growth of the derivatives of that function. In fact, if for every $n$, $\|\frac{d^n}{dx^n}f\|_{L^{\infty}(I)}\leq C n!A^n$ for some constants $C$ and $A$ depending on $f$, then the Taylor series of $f$ converges to $f$ uniformly, and the converse is also true. This drives an analytic mind to investigate whether relaxing the growth condition on the derivatives still generates a quasi-analytic class. In 1912 Hadamard proposed the problem of finding sequences $\{M_n\}_n$ of positive numbers such that the class $C\{M_n\}$ of smooth functions on $I$ satisfying $\|\frac{d^n}{dx^n}f\|_{L^{\infty}(I)}\leq A_f^nM_n$ for all $f\in C\{M_n\}$ is a quasi-analytic class. A solution to this problem is provided by a theorem of Denjoy and Carleman, where they showed that $C\{M_n\}$ is quasi-analytic if and only if $\sum_{n=1}^{\infty}M_n^{-1/n}=\infty$. As a matter of fact, Denjoy \cite{Den} first proved a sufficient condition and later Carleman \cite{Car} completed the theorem, giving a necessary and sufficient condition. A short proof of this theorem based on complex analytic ideas can be found in Rudin \cite{R}. A several variable analogue of this theorem has been obtained by Bochner and Taylor \cite{BT} in 1939.
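To see the Denjoy-Carleman criterion at work, two standard instances may be kept in mind (this worked example is ours, not part of the original text):

```latex
\begin{itemize}
\item $M_n = n!$: by Stirling's formula $(n!)^{1/n}\sim n/e$, so
  $\sum_{n=1}^{\infty} M_n^{-1/n} \sim \sum_{n=1}^{\infty} e/n = \infty$,
  and the class $C\{n!\}$, which coincides with the class of analytic
  functions, is quasi-analytic.
\item $M_n = (n!)^2$: here $M_n^{-1/n}\sim e^2/n^2$, so the series
  converges; accordingly, $C\{(n!)^2\}$ contains nonzero smooth functions
  with compact support and is not quasi-analytic.
\end{itemize}
```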
Later in 1950, instead of using all partial derivatives, Bochner used iterates of the Laplacian $\Delta$ and proved an analogue of the Denjoy-Carleman theorem which reads as follows: if $f\in C^{\infty}(\mathbb{R}^n)$ satisfies $\sum_{m=1}^{\infty}\|\Delta^mf\|^{-1/m}_{\infty}=\infty,$ then the condition $\Delta^mf(x)=0$ for all $m\geq 0$ and for all $x$ in a set $U$ of analytic determination implies $f=0.$ Building upon the works of Masson--McClary \cite{MM} and Nussbaum \cite{N}, in 1972 Chernoff \cite{C1} used operator theoretic arguments to study quasi-analytic vectors. As an application, he improved the above mentioned result of Bochner by proving the following very interesting result in 1975. \begin{thm}[Chernoff \cite{C}] Let $f$ be a smooth function on $\mathbb{R}^n.$ Assume that $\Delta^mf\in L^2(\mathbb{R}^n)$ for all $m\in \mathbb{N}$ and $\sum_{m=1}^{\infty}\|\Delta^mf\|_2^{-\frac{1}{2m}}=\infty.$ If $f$ and all its partial derivatives vanish at a point $ a \in \mathbb R^n$, then $f$ is identically zero. \end{thm} In this paper we prove an analogue of Chernoff's theorem for the Laplace-Beltrami operator on rank one symmetric spaces of both compact and noncompact types. In order to state our results we first need to introduce some notations. Let $G$ be a connected, noncompact semisimple Lie group with finite centre and $K$ a maximal compact subgroup of $G$. Let $ X=G/K$ be the associated symmetric space, which is assumed to have rank one. The origin $o$ in the symmetric space is given by the identity coset $eK$, where $e$ is the identity element in $G$. We know that $X$ is a Riemannian manifold equipped with a $G$-invariant metric on it. We denote by $\Delta_{X}$ the Laplace-Beltrami operator associated to $X$. The Iwasawa decomposition of $G$ reads as $G=KAN$, where $A$ is abelian and $N$ is a nilpotent Lie group. Let $\mathfrak{g}$ and $\mathfrak{a}$ stand for the Lie algebras corresponding to $G$ and $A$ respectively.
Here $\mathfrak{a}$ is one dimensional since $X$ is of rank one. It is well known that every element of $\mathfrak{g}$ gives rise to a left invariant vector field on $G$. Let $H$ be the left invariant vector field corresponding to a fixed basis element of $\mathfrak{a}.$ We will describe all these notations in detail in the next section. As an exact analogue of Chernoff's theorem for $ X $ we prove the following: \begin{thm} \label{C} Let ${X}=G/K$ be a rank one symmetric space of noncompact type. Suppose $f\in C^{\infty}(X)$ satisfies $\Delta_X^mf\in L^2(X)$ for all $m\geq 0$ and $\sum_{m=1}^{\infty}\|\Delta_{X}^mf\|_2^{-\frac{1}{2m}}=\infty.$ If ${H}^lf(eK)=0$ for all $l\geq 0$, then $f$ is identically zero. \end{thm} As an immediate consequence of the above result we obtain an analogue of the $L^2$ version of the classical Denjoy-Carleman theorem using iterates of the Laplace-Beltrami operator on $X = G/K$. \begin{cor} Let ${X}=G/K$ be a rank one symmetric space of noncompact type. Let $\{M_k\}_k$ be a log-convex sequence. Define $\mathcal{C}(\{M_k\}_k,\Delta_X,X)$ to be the class of all smooth functions $f$ on $X$ satisfying $\Delta_X^mf\in L^2(X)$ for all $ m\in \mathbb{N}$ and $\|\Delta_X^kf\|_2\leq M_k\lambda(f)^k$ for some constant $\lambda(f)$ depending on $f$. Suppose that $\sum_{k=1}^{\infty}M_k^{-\frac{1}{2k}}=\infty.$ Then $\mathcal{C}(\{M_k\}_k,\Delta_X,X)$ is a quasi-analytic class. \end{cor} As Chernoff's theorem is a useful tool in establishing uncertainty principles of Ingham's type, proving analogues of Theorem 1.1 in contexts other than Euclidean spaces has received considerable attention in recent years. Recently, an analogue of Chernoff's theorem for the sublaplacian on the Heisenberg group has been proved in \cite{BGST}. For noncompact Riemannian symmetric spaces $ X = G/K$, without any restriction on the rank, the following weaker version of Theorem 1.2 has been proved in Bhowmik-Pusti-Ray \cite{BPR}.
\begin{thm}[Bhowmik-Pusti-Ray] \label{thm-BPR} Let ${X}=G/K$ be a noncompact Riemannian symmetric space and let $ \Delta_X $ be the associated Laplace-Beltrami operator. Suppose $f\in C^{\infty}(X)$ satisfies $\Delta_X^mf\in L^2(X)$ for all $m\geq 0$ and $\sum_{m=1}^{\infty}\|\Delta_{X}^mf\|_2^{-\frac{1}{2m}}=\infty.$ If $ f $ vanishes on a nonempty open set, then $f$ is identically zero. \end{thm} In proving the above theorem, the authors have made use of a result of de Jeu \cite{J}. In the case of rank one symmetric spaces, a different proof was given by the first and third authors of this article by making use of spherical means and an analogue of Chernoff's theorem for the Jacobi transform proved in \cite{GT2}. In fact, we only need to use the one dimensional version of de Jeu's theorem, which is equivalent to the Denjoy-Carleman theorem. Our proof of Theorem 1.2 is built upon the ideas used in \cite{GT2}. In a very recent preprint, Bhowmik-Pusti-Ray have proved the following improvement of their Theorem \ref{thm-BPR}. In what follows let $D(G/K) $ denote the algebra of differential operators on $ G/K $ which are invariant under the (left) action of $ G.$ \begin{thm}[Bhowmik-Pusti-Ray] \label{thm-BPR1} Let ${X}=G/K$ be a noncompact Riemannian symmetric space and let $ \Delta_X $ be the associated Laplace-Beltrami operator. Let $f\in C^{\infty}(G/K)$ be a left $ K $-invariant function on $ X $ which satisfies $\Delta_X^mf\in L^2(X)$ for all $m\geq 0$ and $\sum_{m=1}^{\infty}\|\Delta_{X}^mf\|_2^{-\frac{1}{2m}}=\infty.$ If there is an $ x_0 \in X$ such that $ Df(x_0) $ vanishes for all $ D \in D(G/K)$, then $f$ is identically zero. \end{thm} \begin{rem} Observe that in the above theorem the function $ f $ is assumed to be $ K$-biinvariant. The problem of proving the same for all functions on $ X $ is still open. However, in the case of rank one symmetric spaces we have proved Theorem 1.2 for all functions $ f $.
Moreover, we only require that $ H^lf(eK) =0 $ for all $ l \geq 0.$ Here we can also take any $x_0\in X$ in place of $eK$ using the translation invariance of the Laplacian and $H.$ \end{rem} We remark that the condition $ H^lf(eK) =0 $ is the counterpart of $ (\frac{d}{dr})^kf(r\omega)|_{r=0} = 0 $ where $ x = r\omega, r> 0, \omega \in \mathbb{S}^{n-1} $ is the polar decomposition of $ x \in \mathbb R^n.$ Indeed, as can be easily checked, $$ \left(\frac{d}{dr}\right)^kf(r\omega) = \sum_{|\alpha|=k} \frac{k!}{\alpha!}\, \partial^\alpha f(r\omega)\, \omega^\alpha $$ and hence $(\frac{d}{dr})^kf(r\omega)|_{r=0} = 0 $ for all $ k $ and $ \omega $ if and only if $ \partial^\alpha f(0) = 0 $ for all $ \alpha.$ This observation plays an important role in formulating the right analogue of Chernoff's theorem for compact Riemannian symmetric spaces. In view of the above observation, Chernoff's theorem for the Laplacian on $ \mathbb R^n $ can be stated in the following form. \begin{thm} Let $f$ be a smooth function on $\mathbb{R}^n.$ Assume that $\Delta^mf\in L^2(\mathbb{R}^n)$ for all $m\in \mathbb{N}$ and $\sum_{m=1}^{\infty}\|\Delta^mf\|_2^{-\frac{1}{2m}}=\infty.$ If $(\frac{d}{dr})^kf(r\omega)|_{r=0} = 0 $ for all $ k $ and $ \omega \in \mathbb{S}^{n-1} ,$ then $f$ is identically zero. \end{thm} We can give a proof of the above theorem by reducing it to a theorem for Bessel operators.
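Let us briefly justify the identity used in the above remark. By the chain rule, $\frac{d}{dr}f(r\omega)=\omega\cdot\nabla f(r\omega)$, and hence by induction $$ \left(\frac{d}{dr}\right)^kf(r\omega) = (\omega\cdot\nabla)^kf(r\omega),$$ which upon expanding $(\omega_1\partial_1+\cdots+\omega_n\partial_n)^k$ by the multinomial theorem equals $\sum_{|\alpha|=k}\frac{k!}{\alpha!}\,\omega^\alpha\,\partial^\alpha f(r\omega).$ For instance, when $n=k=2$ this reads $$ \frac{d^2}{dr^2}f(r\omega)=\omega_1^2\,\partial_1^2f(r\omega)+2\,\omega_1\omega_2\,\partial_1\partial_2f(r\omega)+\omega_2^2\,\partial_2^2f(r\omega).$$ The `only if' part of the equivalence follows since a homogeneous polynomial in $\omega$ which vanishes on all of $\mathbb{S}^{n-1}$ vanishes identically, and the monomials $\omega^\alpha$ are linearly independent.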
Recall that written in polar coordinates the Laplacian takes the form \begin{equation}\label{polar-Lap} \Delta = \frac{\partial^2}{\partial r^2}+ \frac{n-1}{r} \frac{\partial}{\partial r}+\frac{1}{r^2} \Delta_{\mathbb{S}^{n-1}} \end{equation} where $ \Delta_{\mathbb{S}^{n-1}} $ is the spherical Laplacian on the unit sphere $ \mathbb{S}^{n-1}.$ By expanding the function $ F(r,\omega) = f(r\omega) $ in terms of spherical harmonics on $ \mathbb{S}^{n-1} $ and making use of the Hecke-Bochner formula, we can easily reduce Theorem 1.7 to a sequence of theorems for the Bessel operators $ \partial_r^2+ (n+2m-1)r^{-1} \partial_r $ for various values of $ m \in \mathbb{N}.$ This idea has already been used in the paper \cite{GT2}. A similar expansion in the case of noncompact Riemannian symmetric spaces leads to Jacobi operators as done in \cite{GT2} which will be used in proving Theorem 1.2. As the proof of the above theorem is similar to and easier than that of Theorem 1.2, we will not present it here. \begin{rem} We remark in passing that the above theorem can also be proved in the context of the Dunkl Laplacian on $ \mathbb R^n $ associated to root systems. We would also like to mention that analogues of Chernoff's theorem can be proved for the Hermite operator $ H $ on $ \mathbb R^n$ and the special Hermite operator $ L $ on $ {\mathbb C}^n.$ Again the idea is to make use of the Hecke-Bochner formula for the Hermite and special Hermite projections (associated to their spectral decompositions). \end{rem} So far we have only considered noncompact Riemannian symmetric spaces, but now we turn our attention to proving an analogue of Theorem 1.2 for compact, rank one symmetric spaces. We make use of the well known classification of such spaces in formulating and proving a Chernoff theorem for the Laplace-Beltrami operator. It turns out that we only need to prove such a result for the spherical Laplacian on spheres in Euclidean spaces.
Let $(U,K)$ be a compact symmetric pair and $S= U/K$ be the associated symmetric space. Here $U$ is a compact semisimple Lie group and $K$ is a connected subgroup of $U$. We assume that $S$ has rank one. Being a compact Riemannian manifold, $S$ admits a Laplace-Beltrami operator $\tilde{\Delta}_S. $ It is customary to add a suitable constant $\rho_S$ and work with $\Delta_S= -\tilde{\Delta}_S+\rho_S^2.$ This way we can arrange that $\Delta_S\geq \rho_S^2>0.$ In \cite{W} H.-C. Wang has completely classified all rank one compact symmetric spaces. To be more precise, $S$ is one of the following: the unit sphere $\mathbb{S}^q= SO(q+1)/SO(q)$, the real projective space $P_q(\mathbb{R})=SO(q+1)/O(q)$, the complex projective space $P_l(\mathbb{C})$, the quaternion projective space $P_l(\mathbb{H})$ and the Cayley projective space $P_2(\mathbb{C}ay)= F_4/Spin(9).$ In each case, $S$ comes with an appropriate polar form $(0,\pi)\times \mathbb{S}^{k_S}$ where $k_S$ depends on the symmetric space $S$. As a consequence, functions on $S$ can be identified with functions on the product space $ Y =(0,\pi)\times \mathbb{S}^{k_S},$ see Section 4 for more details. We prove the following analogue of Chernoff's theorem: \begin{thm} Let $S$ be a rank one Riemannian symmetric space of compact type. Suppose $f\in C^{\infty}(S)$ satisfies $\Delta_{S}^mf\in L^2(S)$ for all $m\geq 0$ and $\sum_{m=1}^{\infty}\|\Delta_{S}^mf\|_2^{-\frac{1}{2m}}=\infty.$ If the function $F$ on $(0,\pi)\times \mathbb{S}^{k_S}$ associated to $ f $ on $S$ satisfies $ \frac{\partial^m}{\partial\theta^m}\big|_{\theta=0}F(\theta,\xi)=0$ for all $m\geq 0$, then $f$ is identically zero.
\end{thm} In the context of Theorem 1.7, by identifying $ \mathbb R^n $ with $ (0,\infty) \times \mathbb{S}^{n-1} $ every function $ f $ on $ \mathbb R^n $ gives rise to a function $ F(r,\omega) $ on $ (0,\infty) \times \mathbb{S}^{n-1} $ and in view of \ref{polar-Lap}, the action of $ \Delta $ on $ f $ takes the form $$ \Delta f(r,\omega) = \frac{\partial^2}{\partial r^2} F(r,\omega)+ \frac{n-1}{r} \frac{\partial}{\partial r}F(r,\omega)+\frac{1}{r^2} \Delta_{\mathbb{S}^{n-1}}F(r,\omega).$$ There is a similar decomposition of $ \Delta_S $ as a sum of a Jacobi operator on $ (0,\pi) $ and the spherical Laplacian $ \Delta_{\mathbb{S}^{k_S}}$ and this justifies our formulation of Theorem 1.9. We complete this introduction with a brief description of the plan of the paper. In Section 2 we recall the requisite preliminaries on noncompact Riemannian symmetric spaces and in Section 3 we prove our version of Chernoff's theorem for the Laplace-Beltrami operator. In Section 4, after recalling necessary results from the theory of compact symmetric spaces and setting up the notations, we prove Theorem 1.9. We refer the reader to the papers \cite{GT1} and \cite{GT2} for related ideas. \section{Preliminaries on Riemannian symmetric spaces of non-compact type} In this section we describe the relevant theory regarding the harmonic analysis on rank one Riemannian symmetric spaces of noncompact type. General references for this section are the monographs of Helgason \cite{H1} and \cite{H2}. Let $G$ be a connected, noncompact semisimple Lie group with finite centre. Suppose $\mathfrak{g}$ denotes its Lie algebra. With respect to a fixed Cartan involution $\theta$ on $\mathfrak{g}$ we have the decomposition $\mathfrak{g}=\mathfrak{k}\oplus \mathfrak{p} .$ Here $\mathfrak{k}$ and $\mathfrak{p}$ are the $+1$ and $-1$ eigenspaces of $\theta$ respectively. Let $\mathfrak{a}$ be a maximal abelian subspace of $\mathfrak{p}$. Also assume that the dimension of $\mathfrak{a}$ is one.
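The simplest example to keep in mind is $G=SL(2,\mathbb{R})$ with the Cartan involution $\theta(Y)=-Y^{t}$: here $\mathfrak{k}=\mathfrak{so}(2)$, $\mathfrak{p}$ is the space of symmetric traceless $2\times 2$ matrices, and $\mathfrak{a}=\mathbb{R}H_0$ with $H_0=\mathrm{diag}(1,-1)$ is a maximal abelian subspace of $\mathfrak{p}$, so that the associated symmetric space is the hyperbolic plane $SL(2,\mathbb{R})/SO(2).$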
Now we know that the involution $\theta$ induces an automorphism $\Theta$ on $G$ and $K=\{g\in G:\Theta(g)=g\}$ is a maximal compact subgroup of $G$. We consider the homogeneous space ${X}=G/K$ which is a smooth manifold endowed with a $G$-invariant Riemannian metric induced by the restriction of the Killing form $\mathfrak{B}$ of $\mathfrak{g}$ to $\mathfrak{p}$. This turns $X$ into a rank one Riemannian symmetric space of noncompact type and every such space can be realised this way. Let $\mathfrak{a}^*$ denote the dual of $\mathfrak{a}$. Given $\alpha\in \mathfrak{a}^*$ we define $$\mathfrak{g}_{\alpha}:=\{X\in \mathfrak{g}:[Y,X]=\alpha(Y)X, \forall\ Y\in \mathfrak{a}\}.$$ Now $\Sigma:=\{\alpha\in \mathfrak{a}^*: \mathfrak{g}_{\alpha}\neq \{0\}\}$ is the set of all restricted roots of the pair $(\mathfrak{g},\mathfrak{a})$. Let $\Sigma_{+}$ denote the set of all positive roots with respect to a fixed Weyl chamber. It is known that $\mathfrak{n}:=\oplus_{\alpha\in \Sigma_{+}}\mathfrak{g}_{\alpha}$ is a nilpotent subalgebra of $\mathfrak{g}$ and we have the Iwasawa decomposition $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{a}\oplus\mathfrak{n}.$ Now writing $N=\exp \mathfrak{n}$ and $A=\exp \mathfrak{a}$ we obtain $G=KAN$ where $A$ is abelian and $N$ is a nilpotent subgroup of $G$. Moreover, $A$ normalizes $N$. In view of this decomposition every $g\in G$ can be uniquely written as $g=k(g)\ \exp H(g) n(g)$ where $H(g)$ belongs to $\mathfrak{a}$. Also we have $G=NAK$ and with respect to this decomposition we write $g= n \exp A(g)k$ where the functions $A$ and $H$ are related via $A(g)=-H(g^{-1}). $ Now in the rank one case when the dimension of $\mathfrak{a}$ is one, $\Sigma$ is given by either $\{\pm\gamma\} $ or $\{\pm \gamma, \pm 2\gamma\}$ where $\gamma$ belongs to $\Sigma_{+}$. Let $\rho:=(m_{\gamma}+2m_{2\gamma})/2$ where $m_{\gamma}$ and $m_{2\gamma}$ denote the multiplicities of the roots $\gamma$ and $2\gamma$ respectively.
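For instance, for the real hyperbolic space $X=\mathbb{H}^n= SO_0(n,1)/SO(n)$ one has $\Sigma=\{\pm\gamma\}$ with $m_{\gamma}=n-1$ and $m_{2\gamma}=0$, so that $\rho=(n-1)/2$; in this case the parameters $\alpha$ and $\beta$ introduced in \ref{ab} below are $\alpha=\frac{n-2}{2}$ and $\beta=-\frac12.$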
The Haar measure $dg$ on $G$ is given by $$\int_{G}f(g)dg=\int_K\int_{A}\int_N f(k a_t n)e^{2\rho t}dkdtdn. $$ The measure $dx$ on $X$ is induced from the Haar measure $dg$ via the relation $$\int_Gf(gK)dg=\int_X f(x)dx.$$ Suppose $M$ denotes the centralizer of $A$ in $K$. The polar decomposition of $G$ reads as $G=KAK$ in view of which we can write each $g\in G$ as $g=k_1a_rk_2$ with $k_1,k_2\in K$. Actually the map $(k_1,a_r,k_2)\rightarrow k_1a_rk_2$ of $K\times A\times K$ into $G$ induces a diffeomorphism of $K/M\times A_{+}\times K$ onto an open dense subset of $G$ where $A_{+}=\exp \mathfrak{a}_{+}$ and $\mathfrak{a}_{+}$ is the fixed positive Weyl chamber which basically can be identified with $(0,\infty)$ in our case. It is also well-known that each $\bf{X}\in \mathfrak{g}$ gives rise to a left invariant vector field on $G$ by the prescription $${\bf{X}}f(g)=\frac{d}{dt}\bigg|_{t=0}f(g.\exp(t{\bf{X}})),~g\in G.$$ Since $\mathfrak{a}$ is one-dimensional, we fix a basis $\{H\}$ of $\mathfrak{a}$. By an abuse of notation, we denote the left invariant vector field corresponding to this basis element by $H$. In fact, we can write $A=\{a_r=\exp(rH):r\in\mathbb{R}\}.$ \subsection{Helgason Fourier transform} Define the function $A:X\times K/M\rightarrow \mathfrak{a}$ by $A(gK,kM)=A(k^{-1}g).$ Note that $A$ is right $K$-invariant in $g$ and right $M$-invariant in $k$. In what follows we denote the elements of $X$ and $K/M$ by $x$ and $b$ respectively. Let $\mathfrak{a}^*$ denote the dual of $\mathfrak{a}$ and $\mathfrak{a}^*_{{\mathbb C}}$ be its complexification. Here in our case $\mathfrak{a}^*$ and $\mathfrak{a}^*_{{\mathbb C}}$ can be identified with $\mathbb R$ and ${\mathbb C}$ respectively.
For each $\lambda\in \mathfrak{a}^*_{{\mathbb C}}$ and $b\in K/M$, the function $x\rightarrow e^{(i\lambda+\rho)A(x,b)}$ is a joint eigenfunction of all invariant differential operators on $X.$ For $f\in C^{\infty}_c(X)$, its Helgason Fourier transform is a function $\widetilde{f}$ on $\mathfrak{a}^*_{{\mathbb C}}\times K/M$ defined by $$\tilde{f}(\lambda, b)= \int_X f(x)e^{(-i\lambda+\rho)A(x,b)}dx,~ \lambda\in \mathfrak{a}^*_{{\mathbb C}},~ b\in K/M . $$ Moreover, we know that if $f\in L^1(X)$ then $\widetilde{f}(\cdot,b)$ is a continuous function on $\mathfrak{a}^*$ which extends holomorphically to a domain containing $\mathfrak{a}^*.$ The inversion formula for $f\in C^{\infty}_c(X)$ says that $$f(x)=c_{X}\int_{-\infty}^{\infty}\int_{K/M}\widetilde{f}(\lambda,b)e^{(i\lambda+\rho)A(x,b)}|c(\lambda)|^{-2}dbd\lambda$$ where $d\lambda$ stands for the usual Lebesgue measure on $\mathbb R$ (i.e., $\mathfrak{a}^*$), $db$ is the normalised measure on $K/M$ and $c(\lambda)$ is the Harish-Chandra $c$-function. The constant $c_X$ appearing in the above formula is explicit and depends on the symmetric space $X$ (see e.g., \cite{H2}). Also for $f\in L^1(X)$ with $\widetilde{f}\in L^1(\mathfrak{a}^*\times K/M,|c(\lambda)|^{-2}dbd\lambda)$, the above inversion formula holds for a.e. $x\in X.$ Furthermore, the mapping $f\rightarrow \widetilde{f}$ extends as an isometry of $L^2(X)$ onto $L^2(\mathfrak{a}^*_{+}\times K/M, |c(\lambda)|^{-2}d\lambda db) $ which is known as the Plancherel theorem for the Helgason Fourier transform. We also need to use certain irreducible representations of $K$ with $M$-fixed vectors. Suppose $\widehat{K_0}$ denotes the set of all irreducible unitary representations of $K$ with $M$-fixed vectors. Let $\delta\in \widehat{K_0}$ and $V_{\delta}$ be the finite dimensional vector space on which $\delta $ is realised. We know that $V_{\delta}$ contains a unique normalised $M$-fixed vector $ v_1$ (see Kostant \cite{Ks}).
Consider an orthonormal basis $\{v_1,v_2,...,v_{d_{\delta}}\}$ for $V_{\delta}$. For $\delta\in \widehat{K_0}$ and $1\le j\le d_{\delta}$, we define $$Y_{\delta,j}(kM)=(v_j, \delta(k)v_1),~ kM\in K/M.$$ It can be easily checked that $Y_{\delta,1}(eM)=1$ and moreover, $Y_{\delta,1}$ is $M$-invariant. \begin{prop}[\cite{H2}] \label{H2} The set $\{Y_{\delta,j}:1\le j\le d_{\delta},\delta\in \widehat{K_0}\}$ forms an orthonormal basis for $L^2(K/M)$. \end{prop} We can get an explicit realisation of $\widehat{K_0}$ by identifying $K/M$ with the unit sphere in $\mathfrak{p}$. Letting $\mathcal{H}^m$ stand for the space of homogeneous harmonic polynomials of degree $m$ restricted to the unit sphere, we have the following spherical harmonic decomposition $$L^2(K/M)=\displaystyle\oplus_{m=0}^{\infty}\mathcal{H}^m .$$ Thus the functions $Y_{\delta,j}$ can be identified with the spherical harmonics. Given $\delta\in \widehat{K_0}$ and $\lambda\in \mathfrak{a}^*_{{\mathbb C}}$ (i.e., ${\mathbb C}$ in our case) we consider the spherical functions of type $\delta$ defined by $$\Phi_{\lambda,\delta}(x):=\int_K e^{(i\lambda+\rho)A(x,kM)}Y_{\delta,1}(kM)dk.$$ These are eigenfunctions of the Laplace-Beltrami operator $\Delta_X$ with eigenvalue $-{(\lambda^2+\rho^2)}.$ When $\delta$ is the trivial representation for which $Y_{\delta,1}=1,$ the function $\Phi_{\lambda,\delta}$ is called the elementary spherical function, denoted by $\Phi_\lambda$. More precisely, $$\Phi_\lambda(x)=\int_K e^{(i\lambda+\rho)A(x,kM)} dk.$$ Note that these functions are $K$-biinvariant. The spherical functions can be expressed in terms of Jacobi functions. In fact, if $x=gK$ and $g=ka_rk^{'}$ (polar decomposition), $\Phi_{\lambda,\delta}(x)=\Phi_{\lambda,\delta}(a_r)$. Suppose \begin{equation} \label{ab} \alpha=\frac12(m_{\gamma}+m_{2\gamma}-1),~ \beta=\frac12(m_{2\gamma }-1).
\end{equation} For each $\delta\in \widehat{K_0}$ there exists a pair of integers $(p,q)$ such that \begin{equation} \Phi_{\lambda,\delta}(x)=Q_{\delta}(i\lambda+\rho)(\alpha+1)_p^{-1}(\sinh r)^p(\cosh r)^q \varphi_{\lambda}^{(\alpha+p,\beta+q)}(r) \end{equation} where $\varphi_{\lambda}^{(\alpha+p,\beta+q)}$ are the Jacobi functions of type $(\alpha+p,\beta+q)$ and $Q_{ \delta}$ are the Kostant polynomials given by \begin{equation} \label{kos} Q_{\delta}(i\lambda+\rho)=\left(\frac{1}{2}(\alpha+\beta+1+i\lambda)\right)_{(p+q)/2}\left(\frac{1}{2}(\alpha-\beta+1+i\lambda)\right)_{(p-q)/2}. \end{equation} In the above we have used the notation $(z)_m=z(z+1)(z+2)\cdots(z+m-1).$ The following result proved in Helgason \cite{H2} will be very useful for our purpose: \begin{prop} \label{fsp} Let $\delta\in \widehat{K_0}$ and $1\le j\le d_{\delta}$. Then we have \begin{equation} \int_K e^{(i\lambda+\rho)A(x,k^{'}M)}Y_{\delta,j}(k^{'}M)dk^{'}=Y_{\delta,j}(kM)\Phi_{\lambda,\delta}(a_r),~x=ka_r\in X. \end{equation} \end{prop} We refer the reader to the papers \cite{Jo} and \cite{JW} for all the results recalled in this subsection. \subsection{Spherical Fourier transform} We say that a function $f$ on $G$ is $K$-biinvariant if $f(k_1gk_2)=f(g)$ for all $k_1,k_2\in K$. It can be checked that if $f$ is a $K$-biinvariant integrable function then its Helgason Fourier transform $\widetilde{f}(\lambda,b)$ is independent of $b\in K/M$ and by a little abuse of notation we write this as $$\tilde{f}(\lambda)=\int_{X}f(x)\Phi_{-\lambda}(x)dx.$$ This is called the spherical Fourier transform. Now since $f$ is $K$-biinvariant, using the polar decomposition $g=k_1a_rk_2$, we can view $f$ as a function on $A$ alone: $f(g)=f(a_r)$.
So the above integral takes the following polar form: $$\tilde{f}(\lambda)=\int_{ 0}^{\infty}f(a_r)\varphi_{\lambda}(r) w_{\alpha,\beta}(r)dr$$ where $ w_{\alpha,\beta}(r)=(2\sinh r)^{2\alpha+1}(2\cosh r)^{2\beta+1}$ and $\Phi_{-\lambda}(a_r)=\varphi_{\lambda}(r)$ is the Jacobi function $\varphi_{\lambda}^{(\alpha,\beta)}(r)$ of type $(\alpha,\beta).$ Here $\alpha$ and $\beta$ are associated to the symmetric space as mentioned above. So it is clear that the spherical Fourier transform is basically the Jacobi transform of type $(\alpha,\beta)$. In the rest of the section we describe certain results from the theory of Jacobi analysis. Let $\alpha,\beta,\lambda\in \mathbb{C}$ and $-\alpha\notin \mathbb{N}.$ The Jacobi functions $\varphi_{\lambda}^{(\alpha, \beta)}(r) $ of type $(\alpha,\beta)$ are solutions of the initial value problem \begin{align*} (\mathcal{L}_{\alpha,\beta}+ \lambda^2+ \varrho^2)\varphi_{\lambda}^{( \alpha ,\beta)}(r) =0,\,\,\, \varphi_{\lambda}^{( \alpha , \beta )}(0)=1 \end{align*} where $\mathcal{L}_{\alpha,\beta}$ is the Jacobi operator defined by $$\mathcal{L}_{\alpha,\beta}:=\frac{d^2}{dr^2}+((2\alpha+1)\coth r+(2\beta+1)\tanh r) \frac{d}{dr}$$ and $\varrho=\alpha+\beta+1.$ Thus the Jacobi functions $\varphi_{\lambda}^{(\alpha,\beta)}$ are eigenfunctions of $\mathcal{L}_{\alpha,\beta}$ with eigenvalues $-(\lambda^2+\varrho^2).$ These are even functions on $ \mathbb R $ and are expressible in terms of hypergeometric functions. For certain values of the parameters $ (\alpha, \beta) $ these functions arise naturally as spherical functions on Riemannian symmetric spaces of noncompact type.
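Two standard special cases may help fix the ideas. For $(\alpha,\beta)=(-\frac12,-\frac12)$ one has $\mathcal{L}_{-\frac12,-\frac12}=\frac{d^2}{dr^2}$ and $\varrho=0$, so that $\varphi_{\lambda}^{(-\frac12,-\frac12)}(r)=\cos \lambda r.$ For $(\alpha,\beta)=(\frac12,-\frac12)$, which corresponds to the three dimensional real hyperbolic space, $\mathcal{L}_{\frac12,-\frac12}=\frac{d^2}{dr^2}+2\coth r\, \frac{d}{dr}$ and $\varrho=1$, and one checks directly that $$\varphi_{\lambda}^{(\frac12,-\frac12)}(r)=\frac{\sin \lambda r}{\lambda \sinh r}$$ solves the above initial value problem with eigenvalue $-(\lambda^2+1).$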
The Jacobi transform of a suitable function $f$ on $\mathbb R^+$ is defined by $$J_{\alpha,\beta}f(\lambda)=\int_{ 0}^{\infty}f(r)\varphi_{\lambda}^{(\alpha,\beta)}(r) {w}_{\alpha,\beta}(r)dr.$$ This is also called the Fourier-Jacobi transform of type $(\alpha,\beta).$ It can be checked that the operator $\mathcal{L}_{\alpha,\beta}$ is selfadjoint on $L^2(\mathbb R^+, {w}_{\alpha,\beta}(r)dr)$ and that $$\widetilde{\mathcal{L}_{\alpha,\beta}f}(\lambda)=-(\lambda^2+\varrho^2)\tilde{f}(\lambda).$$ Under certain assumptions on $\alpha$ and $\beta$ the inversion and Plancherel formulas for this transform take a nice form as described below. \begin{thm}[\cite{K2}] \label{hpi} Let $\alpha,\beta\in\mathbb R$, $\alpha>-1$ and $|\beta|\leq\alpha+1.$ Suppose $ c_{\alpha,\beta}(\lambda) $ denotes the Harish-Chandra $c$-function defined by $$ c_{\alpha,\beta}(\lambda)=\frac{2^{\varrho-i\lambda}\Gamma(\alpha+1)\Gamma(i\lambda)}{\Gamma\left(\frac{1}{2}(i\lambda+\varrho)\right)\Gamma\left(\frac{1}{2}(i\lambda+\alpha-\beta+1)\right)}.$$ \begin{enumerate} \label{planjj} \item (Inversion) For $f\in C_0^{\infty}(\mathbb R)$ which is even we have $$f(r)=\frac{1}{2\pi}\int_{ 0}^{\infty}J_{\alpha,\beta}f(\lambda)\varphi_{\lambda}^{(\alpha,\beta)}(r)|c_{\alpha,\beta}(\lambda)|^{-2}d\lambda.$$ \item (Plancherel) For $f,g\in C^{\infty}_0(\mathbb R)$ which are even, the following holds $$\int_{ 0}^{\infty}f(r)\overline{ g(r)} {w}_{\alpha,\beta}(r)dr=\int_{ 0}^{\infty}J_{\alpha,\beta}f(\lambda)\overline{J_{\alpha,\beta}g(\lambda)}|c_{\alpha,\beta}(\lambda)|^{-2}d\lambda.$$ \end{enumerate} The mapping $f\mapsto J_{\alpha,\beta}f$ extends as an isometry from $L^2(\mathbb R^+, {w}_{\alpha,\beta}(r)dr)$ onto $L^2(\mathbb R^+,|c_{\alpha,\beta}(\lambda)|^{-2}d\lambda).$ \end{thm} We will make use of this theorem in proving an analogue of Chernoff's theorem for the Laplace-Beltrami operator $\Delta_{X}$ in the next section.
\section{Chernoff's theorem on noncompact symmetric spaces of rank one } In this section we prove our main theorem, i.e., an analogue of Chernoff's theorem for $\Delta_{X}$. The main idea of the proof is to reduce the result for $\Delta_{X}$ to a result for the Jacobi operator. So, first we indicate a proof of Chernoff's theorem for the Jacobi operator. It has already been discussed in the work of Ganguly-Thangavelu \cite{GT2}. \begin{thm} \label{chernoffJ} Let $\alpha,\beta\in\mathbb R$, $\alpha>-1$ and $|\beta|\leq\alpha+1.$ Suppose $ f \in L^2(\mathbb R^+, {w}_{\alpha,\beta}(r)dr) $ is such that $ \mathcal{L}_{\alpha,\beta}^mf \in L^2(\mathbb R^+, {w}_{\alpha,\beta}(r)dr) $ for all $ m \in \mathbb N $ and satisfies the Carleman condition $ \sum_{m=1}^\infty \| \mathcal{L}_{\alpha,\beta}^m f \|_2^{-1/(2m)} = \infty.$ If $\mathcal{L}_{\alpha,\beta}^mf(0)=0$ for all $m\geq 0$ then $f$ is identically zero. \end{thm} In \cite{GT2} the above result was proved under the assumption that $ f $ vanishes near $ 0 $ but a close examination of the proof reveals that the assumption is superfluous and the same is true as stated above. In order to prove our main result, the following estimate for the ratio of Harish-Chandra $c$-functions is also needed. \begin{lem} \label{estC} Let $\alpha,\beta$ be as in \ref{ab} and $(p,q)$ be the pair of integers associated to $\delta\in \widehat{K_0}$.
Then for any $\lambda\geq 0$ we have $$\frac{|c_{\alpha,\beta}(\lambda)|^2}{|c_{\alpha+p,\beta+q}(\lambda)|^2}|Q_{\delta}(i\lambda+\rho)|^{-2}\leq C$$ where $C$ is a constant independent of $ \lambda $ depending only on the parameters $ (\alpha,\beta) $ and $ (p,q).$ \end{lem} \begin{proof} First note that from the definition \ref{kos} of the Kostant polynomials we have $$|Q_{\delta}(i\lambda+\rho)|= \prod_{j=0}^{\frac{p+q}{2}-1}\left((B_1+j)^2+\frac14\lambda^2\right)^{\frac12}\prod_{j=0}^{\frac{p-q}{2}-1}\left((B_2+j)^2+\frac14\lambda^2\right)^{\frac12}$$ where $B_1=\frac12(\alpha+\beta+1)$ and $B_2=\frac12(\alpha-\beta+1)$. From the above expression, it can be easily checked that $|Q_{\delta}(i\lambda+\rho)|/(2^{-1}\lambda)^p\rightarrow 1$ as $\lambda\rightarrow\infty$ so that \begin{equation} \label{c1} |Q_{\delta}(i\lambda+\rho)|\sim 2^{-p} \lambda^p,\ \ \ \ \lambda\rightarrow\infty. \end{equation} Moreover, we also have $$|Q_{\delta}(i\lambda+\rho)|\geq \prod_{j=0}^{\frac{p+q}{2}-1}|B_1+j|\prod_{j=0}^{\frac{p-q}{2}-1}|B_2+j|=\text{constant}.$$ Now using \cite[Lemma 2.4]{BP} we have \begin{equation} \label{c2} \frac{|c_{\alpha,\beta}(\lambda)|^2}{|c_{\alpha+p,\beta+q}(\lambda)|^2} \sim \lambda^{2p},\ \ \ \lambda\rightarrow\infty \end{equation} which together with \ref{c1} implies that $$\frac{|c_{\alpha,\beta}(\lambda)|^2}{|c_{\alpha+p,\beta+q}(\lambda)|^2}|Q_{\delta}(i\lambda+\rho)|^{-2}\sim 1,\ \ \ \lambda\rightarrow\infty.$$ Also the ratio in \ref{c2}, being a continuous function of $\lambda$, is bounded near the origin. Hence the result follows. \end{proof} \textit{Proof of Theorem \ref{C}:} Let $f$ be as in the statement of Theorem \ref{C}.
We complete the proof in the following steps.\\ \textit{Step 1:} Using Proposition \ref{H2} we write \begin{equation} \widetilde{f}(\lambda, k)=\sum_{ \delta\in \widehat{K_0}}\sum_{j=1}^{d_{\delta}} F_{\delta,j}(\lambda)Y_{\delta,j}(k) \end{equation} where $ F_{\delta,j}(\lambda) $ are the spherical harmonic coefficients of $ \tilde{f}(\lambda, \cdot) $ defined by $$ F_{\delta,j}(\lambda)=\int_{K/M} \widetilde{f}(\lambda,k)Y_{\delta,j}(k)dk.$$ Fix $\delta\in \widehat{K_0}$ and $1\leq j\leq d_{\delta}.$ From the definition of the Helgason Fourier transform we have $$ F_{\delta,j}(\lambda)=\int_{K/M}\int_{G/K}f(x)e^{(-i\lambda+\rho)A(x,kM)}Y_{\delta,j}(kM)dxdk .$$ Now using Fubini's theorem, in view of Proposition \ref{fsp} the integral on the right-hand side above is equal to \begin{equation} \label{p2} \int_{G/K}f(x)Y_{\delta,j}(kM)\Phi_{\lambda,\delta}(a_r)dx. \end{equation} For $r>0$ we define $$ g_{\delta,j}(r)= \int_{K}f(k'a_r)Y_{\delta,j}(k'M)dk'.$$ Now performing the integral in \ref{p2} using polar coordinates we obtain \begin{equation} \label{p3} F_{\delta,j}(\lambda)=\int_{0}^{\infty} g_{\delta,j}(r)\Phi_{\lambda,\delta}(a_r) w_{\alpha,\beta}(r)dr. \end{equation} Now recall that for each $\delta\in \widehat{K_0}$ there exists a pair of integers $(p, q)$ such that $$\Phi_{\lambda,\delta}(x)=Q_{\delta}(i\lambda+\rho)(\alpha+1)_p^{-1}(\sinh r)^p(\cosh r)^q \varphi_{\lambda}^{(\alpha+p,\beta+q)}(r).$$ By defining \begin{equation} f_{\delta,j}(r)=\frac{4^{-(p+q)}}{(\alpha+1)_p} g_{\delta,j}(r)(\sinh r)^{-p}(\cosh r)^{-q} \end{equation} and recalling the definition of Jacobi transforms we obtain \begin{equation} \label{p4} F_{\delta,j}(\lambda)=Q_{\delta}(i\lambda+\rho) J_{\alpha+p,\beta+q}(f_{\delta,j})(\lambda). \end{equation} \textit{Step 2:} In this step we estimate the $L^2$ norm of powers of
the Jacobi operator applied to $f_{\delta,j}$ in terms of the $L^2$ norm of corresponding powers of $\Delta_{X}$ applied to $f$. Let $m\in \mathbb{N}.$ Note that the Plancherel formula \ref{planjj} for the Jacobi transform yields \begin{align*} &\|\mathcal{L}^m_{\alpha+p,\beta+q}(f_{\delta,j})\|_{L^2(\mathbb R^+, {w}_{\alpha+p,\beta+q}(r)dr)}\\&=\left(\int_{ 0}^{\infty}(\lambda^2+\rho_\delta^2)^{2m}|J_{\alpha+p,\beta+q}(f_{\delta,j})(\lambda)|^2|c_{\alpha+p,\beta+q}(\lambda)|^{-2}d\lambda\right)^{\frac12} \end{align*} where $ \rho_\delta=\alpha+\beta+p+q+1.$ In view of \ref{p4} the above integral reduces to $$\left(\int_{ 0}^{\infty}(\lambda^2+\rho_\delta^2)^{2m} \, | F_{\delta,j}(\lambda)|^2 \, |Q_{\delta}(i\lambda+\rho)|^{-2} \,|c_{\alpha+p,\beta+q}(\lambda)|^{-2}d\lambda\right)^{\frac12}$$ which after recalling the definition of $F_{\delta,j}(\lambda)$ reads as $$\left(\int_{ 0}^{\infty}(\lambda^2+\rho_\delta^2)^{2m}|Q_{\delta}(i\lambda+\rho)|^{-2}\left|\int_K \widetilde{f}(\lambda,k)Y_{\delta,j}(k)dk\right|^2|c_{\alpha+p,\beta+q}(\lambda)|^{-2}d\lambda\right)^{\frac12}.$$ By an application of Minkowski's integral inequality, the above integral is dominated by $$\int_{K}\left(\int_{ 0}^{\infty}(\lambda^2+\rho_\delta^2)^{2m}|Q_{\delta}(i\lambda+\rho)|^{-2} |\widetilde{f}(\lambda,k)|^2|c_{\alpha+p,\beta+q}(\lambda)|^{-2}d\lambda\right)^{\frac12}|Y_{\delta,j}(k)|dk.$$ Now using the Cauchy-Schwarz inequality along with the fact that $\|Y_{\delta,j}\|_{L^2(K/M)}=1,$ we see that the above integral is bounded by $$\left(\int_{K/M}\int_{ 0}^{\infty}(\lambda^2+\rho_\delta^2)^{2m}|Q_{\delta}(i\lambda+\rho)|^{-2} |\widetilde{f}(\lambda,k)|^2|c_{\alpha+p,\beta+q}(\lambda)|^{-2}d\lambda \ dk\right)^{\frac12}.$$ Since $\frac{\lambda^2+\rho_\delta^2}{\lambda^2+\rho^2} = 1+ \frac{\rho_\delta^2-\rho^2}{\lambda^2+\rho^2} $ is a decreasing function of $\lambda$ it follows that $\frac{\lambda^2+\rho_\delta^2}{\lambda^2+\rho^2}\leq C(\alpha,\beta)$ with $C(\alpha,\beta) =
\frac{(\alpha+\beta+p+q+1)^2}{(\alpha+\beta+1)^2}.$ This together with Lemma \ref{estC} yields the following estimate for the integral under consideration: for some constant $ C_1= C_1(\alpha,\beta)$ $$ C_1^m \left(\int_{K/M}\int_{ 0}^{\infty}(\lambda^2+\rho^2)^{2m} |\widetilde{f}(\lambda,k)|^2|c_{\alpha ,\beta }(\lambda)|^{-2}d\lambda \ dk\right)^{\frac12}.$$ Finally, from the series of inequalities above, we obtain \begin{equation} \|\mathcal{L}^m_{\alpha+p,\beta+q}(f_{\delta,j})\|_{L^2(\mathbb R^+, {w}_{\alpha+p,\beta+q}(r)dr)}\leq C_1^m \|\Delta_{X}^mf\|_2. \end{equation} Hence from the hypothesis of the theorem it follows that $$\sum_{m=1}^{\infty}\|\mathcal{L}^m_{\alpha+p,\beta+q}(f_{\delta,j})\|_{L^2(\mathbb R^+, {w}_{\alpha+p,\beta+q}(r)dr)}^{-\frac{1}{2m}}=\infty.$$ \textit{Step 3:} Finally in this step we prove that $\mathcal{L}^m_{\alpha+p,\beta+q}(f_{\delta,j})(0)=0$ for all $m\geq 0.$ First recall that $$f_{\delta,j}(r)=\frac{4^{-(p+q)}}{(\alpha+1)_p}(\sinh r)^{-p}(\cosh r)^{-q}\int_{K}f(ka_r)Y_{\delta,j}(kM)dk.$$ As $\sinh r$ has a zero at the origin and $\cosh 0=1$, if we can show that as a function of $r$, the integral $\int_{K}f(ka_r)Y_{\delta,j}(kM)dk$ has a zero of infinite order at $ 0 ,$ then we are done. Now note that for any $m\in \mathbb{N}$ $$\frac{d^m}{dr^m}\int_{K}f(ka_r)Y_{\delta,j}(kM)dk=\int_{K}\frac{d^m}{dr^m}f(ka_r)Y_{\delta,j}(kM)dk.$$ But by the definition of the vector fields on $G$, writing $a_r=\exp (rH)$ we have $$\frac{d^m}{dr^m}f(ka_r)|_{r=0}=\frac{d^m}{dr^m}f(k.\exp(rH))|_{r=0}= H^mf(k).$$ Hence by the hypothesis on $ f $ we obtain $\frac{d^m}{dr^m}f(ka_r)|_{r=0}=0$ for all $m$. Finally, proving $\mathcal{L}^m_{\alpha+p,\beta+q}(f_{\delta,j})(0)=0$ is a routine matter: repeated application of L'Hospital's rule gives the desired result.
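To elaborate on this last point: since all the derivatives of $g_{\delta,j}$ vanish at the origin, Taylor's theorem gives $g_{\delta,j}(r)=O(r^N)$ as $r\rightarrow 0$ for every $N$, and the same is true of its derivatives; consequently $f_{\delta,j}$ also vanishes to infinite order at the origin. As $\coth r=r^{-1}+O(r)$ near $0$, each application of $\mathcal{L}_{\alpha+p,\beta+q}$ lowers the order of vanishing by at most two, so that $\mathcal{L}^m_{\alpha+p,\beta+q}f_{\delta,j}$ again vanishes to infinite order at the origin for every $m$; in particular $\mathcal{L}^m_{\alpha+p,\beta+q}(f_{\delta,j})(0)=0.$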
Therefore, $f_{\delta,j}$ satisfies all the hypotheses of Theorem \ref{chernoffJ} which allows us to conclude that $f_{\delta,j}=0$, i.e., $F_{\delta,j}=0.$ As this is true for every $\delta\in \widehat{K_0}$ and $1\leq j\leq d_{\delta}$ we get $f=0$, completing the proof of Theorem 1.2. \section{Compact symmetric spaces} Our aim in this section is to prove an analogue of Chernoff's theorem on compact symmetric spaces of rank one. To begin with, we first recall briefly some necessary background material on rank one compact symmetric spaces. Let $S$ be a compact Riemannian manifold equipped with the Riemannian distance $d_{S}$. We say that $S$ is a two point homogeneous space if for any $x_j,y_j\in S,\ j=1,2$ with $d_{S}(x_1,x_2)=d_{S}(y_1,y_2)$, there exists $g\in I(S)$, the group of isometries of $S$, such that $g.x_1=y_1$ and $g.x_2=y_2$ where $g.x$ denotes the usual action of $I(S)$ on $S$. It is well known that compact rank one symmetric spaces are compact two point homogeneous spaces (see Helgason \cite{H3}). Also these two point homogeneous spaces are completely classified by H.-C. Wang \cite{W}. So, following Wang any compact rank one symmetric space $S$ is one of the following: \begin{enumerate} \item the sphere $\mathbb{S}^q \subset \mathbb R^{q+1},~q\geq 1$; \item the real projective space $P_q(\mathbb R),~q\geq 2;$ \item the complex projective space $P_l(\mathbb{C}), ~l\geq 2$; \item the quaternionic projective space $P_l(\mathbb{H}),~l\geq 2;$ \item the Cayley projective plane $P_2(\mathbb{C}ay).$ \end{enumerate} We describe the necessary preliminaries and prove Theorem 1.9 in each of the above five cases separately. We start with a brief description of Jacobi polynomial expansions in the following subsection.
\subsection{Jacobi polynomial expansion:} Let $\alpha,\beta>-1.$ The Jacobi polynomials $P_n^{(\alpha,\beta)}$ of degree $n \geq 0$ and type $(\alpha, \beta)$ are defined by \begin{align} \label{jacobi} (1-x)^{\alpha} \, (1+x)^{\beta}P_n^{(\alpha,\beta)}(x)=\frac{(-1)^n}{2^n n!} \frac{d^n}{dx^n}\{(1-x)^{n+\alpha} \, (1+x)^{n+\beta}\} ,~x\in (-1,1). \end{align} By making a change of variable $x=\cos\theta$, it is convenient to work with the Jacobi trigonometric polynomials \begin{align} \mathcal{P}_n^{(\alpha,\beta)}(\theta) =C(\alpha,\beta,n) P_n^{(\alpha,\beta)}(\cos \theta), \end{align} where $C(\alpha,\beta,n)$ is the normalising constant, explicitly given by \begin{align} \label{db} C(\alpha,\beta,n)^2=\frac{(2n+\alpha+\beta+1)\Gamma(n+1)\Gamma(n+\alpha+\beta+1) }{\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}. \end{align} Also it is worth pointing out that these polynomials are closely related to the Gegenbauer polynomials by the following formula \begin{equation} \label{gj} C_k^{\lambda}(t)=\frac{\Gamma(\lambda+\frac12) \Gamma(k+2\lambda)}{\Gamma(2\lambda)\Gamma(k+\lambda+\frac12)}P_k^{(\lambda-\frac12,\lambda-\frac12)}(t),~ \lambda >-\frac12,~ t\in(-1,1).
\end{equation} These Jacobi trigonometric polynomials are the eigenfunctions of the Jacobi differential operator given by \[\mathbb{L}_{\alpha,\beta}=-\frac{d^2}{d \theta^2}-\frac{\alpha-\beta+(\alpha+\beta+1) \cos \theta}{ \sin \theta}\,\frac{d}{d\theta}+\left(\frac{\alpha+\beta+1}{2}\right)^2\] with eigenvalues $(n+\frac{\alpha+\beta+1}{2})^2$, i.e., $$\mathbb{L}_{\alpha,\beta} \mathcal{P}_n^{(\alpha,\beta)}=\left(n+\frac{\alpha+\beta+1}{2}\right)^2\, \mathcal{P}_n^{(\alpha,\beta)},$$ and $\{\mathcal{P}_n^{(\alpha,\beta)}:n\geq0\}$ forms an orthonormal basis for the weighted $L^2$ space $L^2(\tilde{w}_{\alpha,\beta}):=L^2((0,\pi), \tilde{w}_{\alpha,\beta}(\theta)d\theta)$ where the weight is given by $$\tilde{w}_{\alpha,\beta}(\theta)=\left(\sin \frac{\theta}{2}\right)^{2\alpha+1} \, \left(\cos \frac{\theta}{2}\right)^{2\beta+1}.$$ As a consequence we have the following Plancherel formula valid for $f\in L^2(\tilde{w}_{\alpha,\beta})$ \begin{equation} \label{jplan} \int_{ 0}^{\pi}|f(\theta)|^2 \tilde{w}_{\alpha,\beta}(\theta)d\theta=\sum_{n=0}^{\infty}|\mathcal{J}_{\alpha,\beta}f(n)|^2 \end{equation} where $\mathcal{J}_{\alpha,\beta}f(n)$ denotes the Fourier-Jacobi coefficients defined by $$\mathcal{J}_{\alpha,\beta}f(n)=\int_0^{\pi} f(\theta) \, \mathcal{P}_n^{(\alpha,\beta)}(\theta) \, \tilde{w}_{\alpha,\beta}(\theta)d\theta,\ \ \ n\geq 0.$$ We have the following version of Chernoff's theorem using the iterates of the Jacobi operator proved in Ganguly-Thangavelu \cite{GT1}. \begin{thm} \label{chernoffJp} Let $\alpha,\beta>-1$. Suppose $ f \in L^2( \tilde{w}_{\alpha,\beta} ) $ is such that $ \mathbb{L}_{\alpha,\beta}^mf \in L^2( \tilde{w}_{\alpha,\beta} ) $ for all $ m \in \mathbb N $ and satisfies the Carleman condition $ \sum_{m=1}^\infty \| \mathbb{L}_{\alpha,\beta}^m f \|_2^{-1/(2m)} = \infty.$ If $\mathbb{L}_{\alpha,\beta}^mf(0)=0$ for all $m\geq 0$ then $f$ is identically zero.
\end{thm} This is the analogue of Theorem 3.1 for Jacobi polynomial expansions, which plays an important role in proving Theorem 1.9 for compact Riemannian symmetric spaces. \subsection{The unit sphere $\mathbb{S}^q$} Let $q\geq 2$. The unit sphere in $\mathbb{R}^{q+1}$ is given by $$\mathbb{S}^{q}:=\{\xi \in \mathbb R^{q+1}:\xi_1^2+\cdots+\xi_{q+1}^2=1\}.$$ The spherical harmonic decomposition reads $$L^2(\mathbb{S}^q)=\displaystyle\bigoplus_{n=0}^{\infty}\mathcal{H}_{n}(\mathbb{S}^q),$$ where $\mathcal{H}_{n}(\mathbb{S}^q)$ denotes the space of spherical harmonics of degree $n$. For our purposes it is more convenient to work with the geodesic polar coordinate system on $\mathbb{S}^q$. Note that given $\xi\in \mathbb{S}^q$, we can write $\xi= (\cos\theta) e_1+\xi_1^{'}(\sin\theta) e_2+\dots+\xi_{q}^{'}(\sin\theta) e_{q+1}$ for some $\theta \in (0,\pi)$ and $\xi^{'}=(\xi_1^{'},\dots,\xi_{q}^{'})\in \mathbb{S}^{q-1}$, where $\{e_1,e_2,\dots,e_{q+1}\}$ is the standard basis for $\mathbb{R}^{q+1}.$ This observation leads us to consider the map $\varphi:(0,\pi)\times \mathbb{S}^{q-1}\rightarrow \mathbb{S}^{q}$ defined by $$\varphi(\theta, \xi')=(\cos \theta, \xi_1' \sin \theta, \dots, \xi_q' \sin \theta),$$ which induces the geodesic polar coordinate system on $\mathbb{S}^q$.
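As a quick sanity check on this parametrisation, one can verify numerically that $\varphi$ maps into $\mathbb{S}^q$ and that $\theta$ is recovered as the geodesic distance $\arccos(\xi\cdot e_1)$ from the pole $e_1$. The following sketch is hypothetical (it uses \texttt{numpy}) and is, of course, not part of the paper:

```python
# Hypothetical numerical sanity check (not part of the paper) for the geodesic
# polar map phi(theta, xi') = (cos theta, xi'_1 sin theta, ..., xi'_q sin theta).
import math
import numpy as np

def phi(theta, xi_prime):
    # xi_prime is a point of S^{q-1} in R^q; the output lies in S^q in R^{q+1}
    return np.concatenate(([math.cos(theta)], np.sin(theta) * xi_prime))

rng = np.random.default_rng(1)
q = 5
xi_prime = rng.standard_normal(q)
xi_prime /= np.linalg.norm(xi_prime)   # a random point of S^{q-1}
theta = 2.0                            # any angle in (0, pi)

xi = phi(theta, xi_prime)
assert abs(np.linalg.norm(xi) - 1.0) < 1e-12   # phi lands on S^q
assert abs(math.acos(xi[0]) - theta) < 1e-12   # theta = arccos(<xi, e_1>)
```

The first assertion uses $\cos^2\theta+\sin^2\theta\,|\xi'|^2=1$, and the second that $\arccos$ inverts $\cos$ on $(0,\pi)$.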
This also provides a polar decomposition of the normalised measure $d\sigma_q$ on $\mathbb{S}^{q}$ as follows: given a suitable function $f$ on $\mathbb{S}^q$ we have \[\int_{\mathbb{S}^q}f(\xi) d\sigma_q(\xi)=\int_{0}^{\pi} \int_{\mathbb{S}^{q-1}}F(\theta,\xi') \, (\sin \theta)^{q-1} d\sigma_{q-1}(\xi^{'}) d\theta,\] where $F=f\circ \varphi.$ Also, in this coordinate system we have the following representation of the Laplace--Beltrami operator: $$\Delta_{\mathbb{S}^q}=-\frac{\partial^2}{\partial\theta^2}-(q-1)\cot\theta \frac{\partial}{\partial\theta}+\frac14(q-1)^2-\sin^{-2}\theta\,\tilde{\Delta}_{\mathbb{S}^{q-1}}.$$ The following theorem gives a representation of the spherical harmonics in this polar coordinate system. \begin{thm}\cite[Theorem 2.4]{K} \label{polarep} For $n\geq0$ we have the following orthogonal decomposition $$\mathcal{H}_n(\mathbb{S}^q)=\displaystyle\bigoplus_{l=0}^n\mathcal{H}_{n,l}(\mathbb{S}^q)$$ where the subspaces $\mathcal{H}_{n,l}(\mathbb{S}^q)$ are irreducible and invariant under $SO(q)$. Moreover, functions in $\mathcal{H}_{n,l}(\mathbb{S}^q)$ can be represented as $$S(\xi)=(\sin\theta)^lC^{q/2-1/2+l}_{n-l}(\cos\theta)S^{'}_l(\xi^{'})$$ where $\xi=\varphi(\theta,\xi^{'})$ and $S_{l}^{'}\in \mathcal{H}_l(\mathbb{S}^{q-1}).$ \end{thm} In view of the above theorem we have the orthogonal decomposition $$L^2(\mathbb{S}^q)=\bigoplus_{n=0}^{\infty}\bigoplus_{l=0}^n\mathcal{H}_{n,l}(\mathbb{S}^q).$$ Now we set $$S_{n,l,k}(\xi)=a_{n,l} \, (\sin \theta)^l \, C_{n-l}^{l+\frac{q-1}{2}}(\cos \theta) \, S^{'}_{k,l}(\xi')$$ where $\{S^{'}_{k,l}: 1\leq k\leq N(l)\}$ is an orthonormal basis for $\mathcal{H}_{l}(\mathbb{S}^{q-1}).$ Here $a_{n,l}$ is the normalising constant chosen so that $\|S_{n,l,k}\|_{L^2(\mathbb{S}^q)}=1$, and it is explicitly given by \begin{align} \label{anl} a_{n,l}=\frac{2^{-(l+\frac{q-1}{2})}\Gamma(2l+q-1)\Gamma(n+\frac q2) }{\Gamma(l+\frac q2)\Gamma(n+l+q-1)} \, C\left(l+\frac{q-2}{2},l+\frac{q-2}{2},n-l\right).
\end{align} \begin{thm} \label{chsphere} Let $f \in C^{\infty}(\mathbb{S}^q)$ be such that $\Delta^m_{\mathbb{S}^q}f\in L^2(\mathbb{S}^q)$ for all $m\geq0 $ and satisfies \[\sum_{m=1}^{\infty}\|\Delta^m_{\mathbb{S}^q}f\|_2^{-\frac{1}{2m}} =\infty.\] If $\frac{\partial^m}{\partial\theta^m} \big|_{\theta=0} F(\theta, \xi')=0$ for all $m\geq 0$ and for all $\xi' \in \mathbb{S}^{q-1},$ then $f$ is identically zero. \end{thm} \begin{proof} Let $f$ be as in the statement of the theorem. For $n\geq0$, let $P_nf$ denote the projection of $f$ onto the space $\mathcal{H}_n(\mathbb{S}^q).$ Then from the above observations we have \begin{equation} \label{sproj} P_nf= \sum_{l=0}^n\sum_{k=1}^{N(l)}(f, S_{n,l,k})_{L^2}S_{n,l,k}. \end{equation} Also, since $f\in L^2(\mathbb{S}^q)$ we have $$f=\sum_{n=0}^{\infty} P_nf=\sum_{n=0}^{\infty}\sum_{l=0}^n\sum_{k=1}^{N(l)} ( f, S_{n,l,k})_{L^2(\mathbb{S}^q)} S_{n,l,k}.$$ By interchanging the summations, we observe that \begin{align*} f&=\sum_{l=0}^{\infty}\sum_{n=l}^{\infty}\sum_{k=1}^{N(l)} ( f, S_{n,l,k})_{L^2(\mathbb{S}^q)} S_{n,l,k}\\ &=\sum_{l=0}^{\infty}\sum_{n=0}^{\infty}\sum_{k=1}^{N(l)} (f, S_{n+l,l,k})_{L^2(\mathbb{S}^q)} S_{n+l,l,k}. \end{align*} In view of this, to prove the theorem it is enough to show that $(f, S_{n+l,l,k})_{L^2(\mathbb{S}^q)} = 0 $ for all $n,l,k.$ Let us fix $n,l$ and $k$. Since $S_{n+l,l,k}\in \mathcal{H}_{n+l}(\mathbb{S}^q)$, from the expansion \eqref{sproj} we observe that \begin{equation} \label{a11} (P_{n+l}f, S_{n+l,l,k})_{L^2(\mathbb{S}^q)}=( f, S_{n+l,l,k})_{L^2(\mathbb{S}^q)}. \end{equation} Next we use the expression for $S_{n+l,l,k}$ to show that these coefficients are nothing but the Jacobi coefficients of a suitable function.
In order to do so, we write the integral on $\mathbb{S}^q$ in polar coordinates to obtain $$( f, S_{n+l,l,k})_{L^2(\mathbb{S}^q)}=\int_0^{\pi} \int_{\mathbb{S}^{q-1}} F(\theta,\xi') \, a_{n+l,l} \, (\sin \theta)^{l+q-1} \, C_{n}^{l+\frac{q-1}{2}}(\cos \theta) \, S^{'}_{k,l}(\xi') \, d\sigma_{q-1}({\xi'}) \, d\theta,$$ where $F:=f\circ \varphi.$ Now using \eqref{db}, \eqref{gj} and \eqref{anl}, a simple calculation yields \begin{align} a_{n+l,l}C_{n}^{l+\frac{q-1}{2}}(\cos \theta)=2^{-(l+\frac{q-1}{2})} C\left(l+\frac q2-1,l+\frac q2-1,n\right) P_n^{(l+\frac q2-1,l+\frac q2-1)}(\cos \theta), \end{align} which transforms the above equation into \begin{equation} \label{a111} ( f, S_{n+l,l,k})_{L^2(\mathbb{S}^q)}=2^{-(l+\frac{q-1}{2})}\int_{0}^{\pi}F_{k,l}(\theta) \,(\sin \theta)^{l+q-1} \,\mathcal{P}_n^{(l+\frac q2-1,l+\frac q2-1)}(\theta) \, d\theta, \end{equation} where we have defined $$F_{k,l}(\theta):=\int_{\mathbb{S}^{q-1}}F(\theta,\xi^{'})S^{'}_{k,l}(\xi') \, d\sigma_{q-1}({\xi'}).$$ Now letting $\alpha=l+\frac q2-1$ and writing $\sin\theta=2\sin\frac{\theta}{2}\cos\frac{\theta}{2}$ we see that $$(\sin\theta)^{l+q-1}=2^{2l+q-1} (\sin\theta)^{-l}\tilde{w}_{\alpha,\alpha}(\theta),$$ which together with \eqref{a111} yields \begin{equation} ( f, S_{n+l,l,k})_{L^2(\mathbb{S}^q)}= \mathcal{J}_{\alpha,\alpha}(g_{l,k})(n), \end{equation} where $g_{l,k}(\theta):= 2^{l+\frac{q-1}{2}}(\sin\theta)^{-l}F_{k,l}(\theta).$ In view of the Plancherel formula \eqref{jplan} and the relation \eqref{a11} we have \begin{align} \label{a1111} \|\mathbb{L}_{\alpha,\alpha}^mg_{l,k}\|_2^2&= \sum_{n=0}^{\infty}\left(n+\frac{2\alpha+1}{2}\right)^{4m} \, |\mathcal{J}_{\alpha,\alpha}(g_{l,k})(n)|^2\nonumber\\ &= \sum_{n=0}^{\infty}\left(n+\frac{2l+q-1}{2}\right)^{4m} \, \left| \int_{\mathbb{S}^q} P_{n+l}f(\xi) S_{n+l,l,k}(\xi) d \sigma_{q}(\xi)\right|^2. \end{align} By the Cauchy--Schwarz inequality we note that $$\left| \int_{\mathbb{S}^q} P_{n+l}f(\xi) S_{n+l,l,k}(\xi) d \sigma_{q}(\xi)\right|^2\leq \|P_{n+l}f\|_{L^2(\mathbb{S}^q)}^2.$$ Finally,
using the fact that $n+\frac12(2\alpha+1)= n+\frac12(2l+q-1)\leq \left(n+\frac{q-1}{2}\right)\left(1+\frac{2l}{q-1}\right),$ from \eqref{a1111} we get the estimate $$\|\mathbb{L}_{\alpha,\alpha}^mg_{l,k}\|_2^2\leq \left(1+\frac{2l}{q-1}\right)^{4m} \sum_{n=0}^{\infty}\left(n+\frac{q-1}{2}\right)^{4m} \, \|P_{n+l}f\|^2_{L^2(\mathbb{S}^q)}\leq \left(1+\frac{2l}{q-1}\right)^{4m}\|\Delta^m_{\mathbb{S}^q}f\|^2_{L^2(\mathbb{S}^q)}.$$ Therefore, we have proved \[\|\mathbb{L}_{\alpha,\alpha}^mg_{l,k}\|_2 \leq \left(1+\frac{2l}{q-1}\right)^{2m} \|\Delta^m_{\mathbb{S}^q}f\|_{L^2(\mathbb{S}^q)},\] which, by the hypothesis on the function $f$, implies that \begin{equation} \label{carcon} \sum_{m=1}^{\infty}\|\mathbb{L}_{\alpha,\alpha}^mg_{l,k}\|_2^{-\frac{1}{2m}} =\infty. \end{equation} Since $ g_{l,k}(\theta) $ is related to $ F(\theta,\xi') $ via the integral $$g_{l,k}(\theta)= 2^{l+\frac{q-1}{2}}(\sin\theta)^{-l}\int_{\mathbb{S}^{q-1}}F(\theta,\xi^{'})S^{'}_{k,l}(\xi') \, d\sigma_{q-1}({\xi'}),$$ the hypothesis $\frac{\partial^m}{\partial\theta^m} \big|_{\theta=0}F(\theta, \xi')=0$ for all $m\geq 0$ allows us to conclude that $\mathbb{L}_{\alpha,\alpha}^mg_{l,k}(0)=0$ for all $m\geq 0.$ Hence $g_{l,k}$ satisfies the hypotheses of Theorem \ref{chernoffJp}, so we conclude that $g_{l,k} = 0 $ and consequently $ ( f, S_{n+l,l,k})_{L^2(\mathbb{S}^q)}=0.$ As this is true for any $n,l,k$, we conclude that $f=0$, completing the proof of the theorem. \end{proof} \subsection{The real projective spaces $P_{q}(\mathbb{R})$} Let $O(q)$ denote the group of $q\times q$ orthogonal matrices. Then $P_q(\mathbb{R})$ can be identified with $SO(q+1)/ O(q)$, which makes it a compact symmetric space. It is well known that the real projective space $P_q(\mathbb{R})$ can be obtained from $\mathbb{S}^q$ by identifying the antipodal points, i.e., $P_q(\mathbb{R})=\mathbb{S}^q/\{\pm I\}$, and the projection map $s\mapsto \pm s$ from $\mathbb{S}^q$ to $P_q(\mathbb{R})$ is locally an isometry.
So, functions on $P_q(\mathbb{R})$ can be viewed as even functions on the corresponding sphere $\mathbb{S}^q$, and if $f_e$ is the even function on $\mathbb{S}^q$ corresponding to the function $f$ on $P_q(\mathbb{R})$, then $\Delta_{P_q(\mathbb{R})}f=\Delta_{\mathbb{S}^q}f_e.$ Hence the analogue of Chernoff's theorem on $P_q(\mathbb{R})$ follows directly from the case of the sphere. \subsection{The other projective spaces $P_l(\mathbb{C})$, $P_l(\mathbb{H}),$ and $P_2(\mathrm{Cay})$} As pointed out by T. O. Sherman in \cite{S}, analysis on these three projective spaces is quite similar. Closely following the notations of \cite{S} (see also \cite{CRS}), we first describe the appropriate polar coordinate representation of these spaces and then, as in the sphere case, prove Chernoff's theorem for the associated Laplace--Beltrami operators. To begin with, let $S$ denote any of the three spaces $P_l(\mathbb{C})$, $P_l(\mathbb{H}),$ and $P_2(\mathrm{Cay})$. Suppose $\tilde{\Delta}_S$ denotes the corresponding Laplace--Beltrami operator. Let $d\mu$ denote the normalised Riemannian measure on $S$. We have the following orthogonal decomposition: \[L^2(S,d\mu):=L^2(S)=\bigoplus_{n=0}^{\infty} \mathcal{H}_{n}(S),\] where the $\mathcal{H}_{n}(S)$ are finite-dimensional eigenspaces of $\tilde{\Delta}_S$ with eigenvalue $-n(n+k+q)$, where $q=2,4,8$ and $k=l-2,2l-3,3$ for $P_l(\mathbb{C}),~P_l(\mathbb{H})$ and $P_2(\mathrm{Cay}),$ respectively. However, it is convenient to work with $\Delta_S:=-\tilde{\Delta}_{S}+\rho_S^2$ where $\rho_S:=\frac12(k+q).$ As a result, $\mathcal{H}_n(S)$ becomes an eigenspace of $\Delta_{S}$ with eigenvalue $\left(n+\frac{k+q}{2}\right)^2.$ Let $\Omega:=\{x\in\mathbb{R}^{q+1}:|x|\leq1\}$ be the closed unit ball in $\mathbb{R}^{q+1}.$ We consider the weight function $w$ defined by $w(r):=r^{-1}(1-r)^k$ for $0<r\leq 1.$ With these notations we have the following result, proved in \cite[Lemma 4.15]{S}.
\begin{prop} \label{e} There is a bounded linear map $E:L^1(S) \to L^1(\Omega, w(|x|) dx)$ satisfying \begin{enumerate} \item For $f\in L^1(S)$, \[\int_S f d\mu=\int_{\Omega}E(f)(x) \, w(|x|) \, dx\] \item The norm of $E$ as a map from $L^p(S)$ to $L^p(\Omega, w(|x|) dx)$ is $1~(1\leq p\leq \infty).$ \end{enumerate} \end{prop} The integration formula in the above proposition is very useful. In fact, integrating the right hand side of that formula in polar coordinates we have $$\int_{\Omega}E(f)(x) \, w(|x|) \, dx=\int_{0}^{1}\int_{\mathbb{S}^q}E(f)(r\xi)w(r)r^qd\sigma_{q}(\xi)dr.$$ Now a change of variables $r= \sin^2 (\theta/2)$ allows us to conclude that \begin{align} \int_S f d\mu= \, \int_0^{\pi} \int_{S^q} F(\theta, \xi) \left(\sin \frac{\theta}{2}\right)^{2q-1} \, \left(\cos \frac{\theta}{2}\right)^{2k+1} \, d\theta \, d\sigma_{q}(\xi), \end{align} where $F(\theta, \xi)=E(f)(\sin^2 (\theta /2)\, \xi)$. In \cite{S} Sherman has described the image of $\mathcal{H}_n(S)$ under the map $E.$ It has been proved that $E(\mathcal{H}_n(S))=\mathcal{H}_{n}(\Omega,w)$ where $\mathcal{H}_{n}(\Omega,w)$ is the orthocomplement of $\mathbb{P}_{n-1}(\Omega)$ in $\mathbb{P}_n(\Omega)$ with respect to the inner product in $L^2(\Omega, w(|x|) dx).$ Here $\mathbb{P}_n(\Omega)$ denotes the set of all polynomials on $\Omega$ of degree up to $n$. 
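The substitution $r=\sin^2(\theta/2)$ above can be checked numerically on the radial part of the integral: for the weight $w(r)=r^{-1}(1-r)^k$, both sides reduce to the Beta integral $B(q,k+1)$. The following sketch is hypothetical (it relies on \texttt{scipy}) and is not part of the paper:

```python
# Hypothetical check (not part of the paper) of the change of variables
# r = sin^2(theta/2): for w(r) = r^{-1}(1-r)^k,
#   int_0^1 w(r) r^q dr = int_0^pi sin(t/2)^{2q-1} cos(t/2)^{2k+1} dt = B(q, k+1).
import math
from scipy.integrate import quad

def radial_side(q, k):
    # int_0^1 r^{q-1} (1-r)^k dr
    val, _ = quad(lambda r: r ** (q - 1) * (1 - r) ** k, 0.0, 1.0)
    return val

def trig_side(q, k):
    # int_0^pi sin(theta/2)^{2q-1} cos(theta/2)^{2k+1} dtheta
    val, _ = quad(lambda t: math.sin(t / 2) ** (2 * q - 1)
                  * math.cos(t / 2) ** (2 * k + 1), 0.0, math.pi)
    return val

def beta(q, k):
    # B(q, k+1) = Gamma(q) Gamma(k+1) / Gamma(q+k+1)
    return math.gamma(q) * math.gamma(k + 1) / math.gamma(q + k + 1)
```

For instance, with $q=4$ and $k=2$ (illustrative values only, not tied to a particular projective space) all three quantities agree up to quadrature error.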
Also note that in these trigonometric polar coordinates we can identify $\Omega$ with $\Omega_0:= (0,\pi)\times\mathbb{S}^q$, and $$d\omega(\theta,\xi):= \left(\sin \frac{\theta}{2}\right)^{2q-1} \, \left(\cos \frac{\theta}{2}\right)^{2k+1} \, d\theta \, d\sigma_{q}(\xi)$$ is the corresponding measure on $\Omega_0.$ In view of these trigonometric polar coordinates we have $\mathcal{H}_{n}(\Omega,w)=\mathcal{H}_n(\Omega_0, \omega).$ These spaces are eigenspaces of the differential operator $$\Lambda_{S}=-\frac{\partial^2}{\partial\theta^2}-\frac{(q-1-k)+(q+k)\cos\theta}{\sin\theta}\frac{\partial}{\partial\theta}- \frac{1}{\sin^2 (\theta/2)}\tilde{\Delta}_{\mathbb{S}^q} + \left(\frac{k+q}{2}\right)^2$$ with eigenvalues $(n+\frac{q+k}{2})^2.$ The relation between this operator and the Laplace--Beltrami operator is described in the following proposition. \begin{prop} \label{e1} Let $f\in C^{2}(S)$ and let $E$ be as in Proposition \ref{e}. Then we have $$E(\Delta_{S}f)= \Lambda_{S} E(f).$$ \end{prop} For a proof of this fact we refer the reader to \cite[Lemma 4.25]{S}. Thus we have the following orthogonal decomposition: $$L^2(\Omega_0, d\omega)=\bigoplus_{n=0}^{\infty}\mathcal{H}_{n}(\Omega_0,\omega).$$ Moreover, $\mathcal{H}_{n}(\Omega_0,\omega)$ admits the further decomposition $\mathcal{H}_{n}(\Omega_0,\omega)=\bigoplus_{j=0}^{n}\mathcal{H}_{n,j}(\Omega_0,\omega)$, where each $\mathcal{H}_{n,j}(\Omega_0,\omega)$ is irreducible and invariant under $SO(q+1)$ and spanned by $\{Q_{n,j,l}: 1\leq l\leq N(j)\}$ (see \cite[Theorem 4.22]{S}), where for $x=\sin^2(\theta/2) \,\xi,~\theta\in (0,\pi)~\text{and}~\xi\in \mathbb{S}^q$ \begin{align*} Q_{n,j,l}(x)&= b_{n,j}\left(\sin\frac{\theta}{2}\right)^{2j}P_{n-j}^{(k,q-1+2j)}\left(2\sin^2(\theta/2)-1\right)S_{j,l}(\xi)\\ &=(-1)^{n-j}b_{n,j}\left(\sin\frac{\theta}{2}\right)^{2j}P_{n-j}^{(q-1+2j,k)}(\cos\theta)S_{j,l}(\xi).
\end{align*} In the second equality we have used the symmetry relation for Jacobi polynomials, i.e., $P^{(\alpha,\beta)}_{n}(-x)=(-1)^nP^{(\beta,\alpha)}_{n}(x).$ Here $\{S_{j,l}:1\leq l\leq N(j)\}$ is an orthonormal basis for $\mathcal{H}_{j}(\mathbb{S}^q)$, the space of spherical harmonics of degree $j$ on $\mathbb{S}^q.$ The constants $b_{n,j}$ appearing in the above expression are chosen so that $\|Q_{n,j,l}\|_2=1$; in fact, it can be checked that $b_{n,j}= C(q-1+2j,k, n-j). $ So, clearly $\{Q_{n,j,l}: n\geq 0,\ 0\leq j\leq n,\ 1\leq l\leq N(j)\}$ forms an orthonormal basis for $L^2(\Omega_0, d\omega).$ Now we are ready to state and prove an analogue of Chernoff's theorem on $S$. \begin{thm} \label{chprojs} Let $f \in C^{\infty}(S)$ be such that $\Delta^m_{S}f\in L^2(S)$ for all $m\geq0.$ Assume that \[\sum_{m=1}^{\infty}\|\Delta^m_{S}f\|_2^{-\frac{1}{2m}} =\infty.\] If the function $ F $ defined by $F(\theta, \xi) =E(f)( \sin^2(\theta/2) \xi) $ satisfies $\frac{\partial^m}{\partial\theta^m} \big|_{\theta=0} F(\theta, \xi)=0$ for all $m\geq 0$ and for all $\xi \in \mathbb{S}^{q},$ then $f$ is identically zero. \end{thm} \begin{proof} Given a function $f$ with the properties stated in the theorem, we write $E(f)(\sin^2 (\theta/2)\, \xi)=F(\theta,\xi),~(\theta,\xi)\in \Omega_0.$ The analysis described above allows us to write the projection of $F$ onto $\mathcal{H}_{n}(\Omega_0, \omega)$ as $$P^{S}_nF= \sum_{j=0}^{n}\sum_{l=1}^{N(j)}(F, Q_{n,j,l})Q_{n,j,l}.$$ Now, as in the sphere case, it is not hard to check that \begin{equation} F=\sum_{j=0}^{\infty}\sum_{n=0}^{\infty}\sum_{l=1}^{N(j)} (F, Q_{n+j,j,l})_{L^2(\Omega_0,d\omega)} Q_{n+j,j,l}. \end{equation} Clearly, since $Q_{n+j,j,l}\in\mathcal{H}_{n+j}(\Omega_0,\omega)$, for each $n\geq 0$ we have \begin{equation} \label{b1} (P^S_{n+j}F, Q_{n+j,j,l})_{L^2(\Omega_0,d\omega)}= (F, Q_{n+j,j,l})_{L^2(\Omega_0,d\omega)}.
\end{equation} As in the case of the sphere, we will show that the right hand side of the above equation can be expressed as a Jacobi coefficient of a suitable function related to $F.$ By definition, we have \begin{equation} (F, Q_{n+j,j,l})=\, \int_0^{\pi} \int_{\mathbb{S}^q} F(\theta, \xi)Q_{n+j,j,l}((\sin^2\frac{\theta}{2})\xi ) \left(\sin \frac{\theta}{2}\right)^{2q-1} \, \left(\cos \frac{\theta}{2}\right)^{2k+1} \, d\theta \, d\sigma_{q}(\xi). \end{equation} Now using the expression for $Q_{n+j,j,l}$ we have \begin{equation} (F, Q_{n+j,j,l})= (-1)^{n}b_{n+j,j}\int_0^{\pi} F_{j,l}(\theta)\left(\sin\frac{\theta}{2}\right)^{2j}P_n^{(q-1+2j,k)}(\cos\theta)\left(\sin \frac{\theta}{2}\right)^{2q-1} \, \left(\cos \frac{\theta}{2}\right)^{2k+1} \, d\theta, \end{equation} where the $ F_{j,l} $ are defined by $$F_{j,l}(\theta):= \int_{\mathbb{S}^q}F(\theta,\xi)S_{j,l}(\xi)d\sigma_{q}(\xi).$$ Writing $g_{j,l}(\theta)=F_{j,l}(\theta)(\sin\frac{\theta}{2})^{-2j}$ and using the definition of the Jacobi coefficients we have $$(F, Q_{n+j,j,l})_{L^2(\Omega_0,d\omega)}=(-1)^n\,\mathcal{J}_{\alpha,\beta}(g_{j,l})(n),$$ where $\alpha=q-1+2j$ and $\beta=k$. Now using the Plancherel formula \eqref{jplan} along with \eqref{b1} we obtain \begin{align} \|\mathbb{L}_{\alpha,\beta}^mg_{j,l}\|_2^2&= \sum_{n=0}^{\infty}\left(n+\frac{\alpha+\beta+1}{2}\right)^{4m} \, |\mathcal{J}_{\alpha,\beta}(g_{j,l})(n)|^2\nonumber\\ &= \sum_{n=0}^{\infty}\left(n+\frac{q+2j+k}{2}\right)^{4m} \, |(P^S_{n+j}F, Q_{n+j,j,l})|^2.
\end{align} But $|(P^S_{n+j}F, Q_{n+j,j,l})|\leq \|P^S_{n+j}F\|_{L^2(\Omega_0,d\omega)}$ and $\left(n+\frac{q+2j+k}{2}\right)\leq C\left(n+\frac{q+k}{2}\right)$ for a constant $C$ depending only on $j$, so that we have \begin{align} \|\mathbb{L}_{\alpha,\beta}^mg_{j,l}\|_2^2\leq C^{4m}\sum_{n=0}^{\infty}\left(n+\frac{q+k}{2}\right)^{4m}\|P^S_{n+j}F\|_2^2 \leq C^{4m} \|\Lambda_{S}^mE(f)\|_2^2. \end{align} In view of Proposition \ref{e1} we have $E(\Delta_{S}^mf)= \Lambda_{S}^m E(f)$, and using the fact that the operator norm of $E$ is one (see Proposition \ref{e}) we obtain $$\|\mathbb{L}_{\alpha,\beta}^mg_{j,l}\|_2\leq C^{2m}\|\Delta^m_{S}f\|_2.$$ Hence the given condition $\sum_{m=1}^{\infty}\|\Delta^m_{S}f\|_2^{-\frac{1}{2m}} =\infty$ allows us to conclude that \begin{equation} \sum_{m=1}^{\infty}\|\mathbb{L}_{\alpha,\beta}^mg_{j,l}\|_2^{-\frac{1}{2m}} =\infty. \end{equation} Also, using the hypothesis $\frac{\partial^m}{\partial \theta^m} \big|_{\theta=0} F(\theta, \xi)=0$ for all $m\geq 0, \xi \in \mathbb{S}^q,$ a simple calculation shows that $\mathbb{L}_{\alpha,\beta}^mg_{j,l}(0)=0$ for all $m\geq0.$ Hence, by Theorem \ref{chernoffJp}, we have $g_{j,l}=0$, whence $(F, Q_{n+j,j,l})_{L^2(\Omega_0,d\omega)}=0.$ As this is true for all $n,j,l$ we conclude $f=0$. This completes the proof of the theorem. \end{proof} \section*{Acknowledgments} The first author is supported by an Int. Ph.D. scholarship from the Indian Institute of Science. The second author is thankful to DST-INSPIRE [DST/INSPIRE/04/2019/001914] for financial support. The third author is supported by a J. C. Bose Fellowship from the Department of Science and Technology, Govt. of India.
\section{#1}} \newtheorem{dfn}{Definition}[section] \newtheorem{thm}[dfn]{Theorem} \newtheorem{lmma}[dfn]{Lemma} \newtheorem{ppsn}[dfn]{Proposition} \newtheorem{crlre}[dfn]{Corollary} \newtheorem{xmpl}[dfn]{Example} \newtheorem{rmrk}[dfn]{Remark} \newcommand{\begin{dfn}}{\begin{dfn}} \newcommand{\begin{thm}}{\begin{thm}} \newcommand{\begin{lmma}}{\begin{lmma}} \newcommand{\begin{ppsn}}{\begin{ppsn}} \newcommand{\begin{crlre}}{\begin{crlre}} \newcommand{\begin{xmpl}}{\begin{xmpl}} \newcommand{\begin{rmrk}}{\begin{rmrk}} \newcommand{\end{dfn}}{\end{dfn}} \newcommand{\end{thm}}{\end{thm}} \newcommand{\end{lmma}}{\end{lmma}} \newcommand{\end{ppsn}}{\end{ppsn}} \newcommand{\end{crlre}}{\end{crlre}} \newcommand{\end{xmpl}}{\end{xmpl}} \newcommand{\end{rmrk}}{\end{rmrk}} \newcommand{{I\! \! A}}{{I\! \! A}} \newcommand{{I\! \! B}}{{I\! \! B}} \newcommand{{I\! \! \!\! C}}{\mathbb{C}} \newcommand{{I\! \! D}}{{I\! \! D}} \newcommand{{I\! \! E}}{{I\! \! E}} \newcommand{{I\! \! F}}{{I\! \! F}} \newcommand{{I\! \! G}}{{I\! \! G}} \newcommand{{I\! \! H}}{{I\! \! H}} \newcommand{{I\! \! I}}{{I\! \! I}} \newcommand{{I\! \! K}}{{I\! \! K}} \newcommand{{I\! \! L}}{{I\! \! L}} \newcommand{{I\! \! M}}{{I\! \! M}} \newcommand{{I\! \! N}}{{I\! \! N}} \newcommand{{I\! \! O}}{{I\! \! O}} \newcommand{{I\! \! P}}{{I\! \! P}} \newcommand{{I\! \! Q}}{\mathbb{Q}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{{I\! \! S}}{{I\! \! S}} \newcommand{\mathbb{T}}{\mathbb{T}} \newcommand{{I\! \! U}}{{I\! \! U}} \newcommand{{I\! \! V}}{{I\! \! V}} \newcommand{{I\! \! W}}{{I\! \! W}} \newcommand{{I\! \! X}}{{I\! \! X}} \newcommand{{I\! \! Y}}{{I\! \! Y}} \newcommand{{\ \! \! 
Z}}{\mathbb{Z}} \newcommand{\alpha}{\alpha} \newcommand{\beta}{\beta} \newcommand{\gamma}{\gamma} \newcommand{\Delta}{\Delta} \newcommand{\delta}{\delta} \newcommand{\varepsilon}{\varepsilon} \newcommand{\epsilon}{\epsilon} \newcommand{\kappa}{\kappa} \newcommand{\lambda}{\lambda} \newcommand{\Lambda}{\Lambda} \newcommand{\omega}{\omega} \newcommand{\Omega}{\Omega} \newcommand{\hat{\pi}}{\hat{\pi}} \newcommand{\sigma}{\sigma} \newcommand{\Sigma}{\Sigma} \newcommand{\theta}{\theta} \newcommand{\Theta}{\Theta} \newcommand{\vartheta}{\vartheta} \newcommand{\zeta}{\zeta} \newcommand{\partial}{\partial} \newcommand{\Gamma}{\Gamma} \newcommand{{\cal A}}{{\cal A}} \newcommand{{\cal B}}{{\cal B}} \newcommand{{\cal C}}{{\cal C}} \newcommand{{\cal D}}{{\cal D}} \newcommand{{\cal E}}{{\cal E}} \newcommand{{\cal F}}{{\cal F}} \newcommand{{\cal G}}{{\cal G}} \newcommand{{\cal H}}{{\cal H}} \newcommand{{\cal I}}{{\cal I}} \newcommand{{\cal J}}{{\cal J}} \newcommand{{\cal K}}{{\cal K}} \newcommand{{\cal L}}{{\cal L}} \newcommand{{\cal M}}{{\cal M}} \newcommand{{\cal N}}{{\cal N}} \newcommand{{\cal P}}{{\cal P}} \newcommand{{\cal Q}}{{\cal Q}} \newcommand{{\cal R}}{{\cal R}} \newcommand{{\cal S}}{{\cal S}} \newcommand{{\cal T}}{{\cal T}} \newcommand{{\cal U}}{{\cal U}} \newcommand{{\cal V}}{{\cal V}} \newcommand{{\cal W}}{{\cal W}} \newcommand{{\cal X}}{{\cal X}} \newcommand{{\cal Y}}{{\cal Y}} \newcommand{{\cal Z}}{{\cal Z}} \def\widetilde{A}{\widetilde{A}} \def\widetilde{B}{\widetilde{B}} \def\widetilde{C}{\widetilde{C}} \def\widetilde{D}{\widetilde{D}} \def\widetilde{E}{\widetilde{E}} \def\widetilde{F}{\widetilde{F}} \def\widetilde{G}{\widetilde{G}} \def\widetilde{H}{\widetilde{H}} \def\widetilde{I}{\widetilde{I}} \def\widetilde{J}{\widetilde{J}} \def\widetilde{K}{\widetilde{K}} \def\widetilde{L}{\widetilde{L}} \def\widetilde{M}{\widetilde{M}} \def\widetilde{N}{\widetilde{N}} \def\widetilde{O}{\widetilde{O}} \def\widetilde{P}{\widetilde{P}} \def\widetilde{Q}{\widetilde{Q}} 
\def\widetilde{R}{\widetilde{R}} \def\widetilde{S}{\widetilde{S}} \def\widetilde{T}{\widetilde{T}} \def\widetilde{U}{\widetilde{U}} \def\widetilde{V}{\widetilde{V}} \def\widetilde{W}{\widetilde{W}} \def\widetilde{X}{\widetilde{X}} \def\widetilde{Y}{\widetilde{Y}} \def\widetilde{Z}{\widetilde{Z}} \def\widehat{\widehat} \def{\cal A}_h{{\cal A}_h} \def\a*{{\cal A}_{h,*}} \def{\cal B}(h){{\cal B}(h)} \def{\cal B}_1(h){{\cal B}_1(h)} \def{\cal B}^{\rm s.a.}(h){{\cal B}^{\rm s.a.}(h)} \def{\cal B}^{\rm s.a.}_1(h){{\cal B}^{\rm s.a.}_1(h)} \def{\cal A}^{\perp}_{h}{{\cal A}^{\perp}_{h}} \def{\cal A}^{\perp}{{\cal A}^{\perp}} \newcommand{\int \limits}{\int \limits} \newcommand{\widehat}{\widehat} \newcommand{\Re}{\Re} \newcommand{\otimes}{\otimes} \newcommand{\dagger}{\dagger} \newcommand{\bigotimes}{\bigotimes} \newcommand{\rightarrow}{\rightarrow} \newcommand{\Rightarrow}{\Rightarrow} \newcommand{\Longrightarrow}{\Longrightarrow} \newcommand{\subset}{\subset} \newcommand{\subseteq}{\subseteq} \newcommand{\Longleftrightarrow}{\Longleftrightarrow} \newcommand{\underline}{\underline} \newcommand{\overline}{\overline} \newcommand{\langle}{\langle} \newcommand{\rangle}{\rangle} \newcommand{\nonumber}{\nonumber} \newcommand{\tnsr}{\mbox{$\bigcirc\hspace{-0.89em}\mbox{\raisebox% {-.43ex}{$\top$}}\;$}} \newcommand{\gtreqqless}{\gtreqqless} \newcommand{\lesseqqgtr}{\lesseqqgtr} \newcommand{\mbox{id}}{\mbox{id}} \newcommand{\frac{1}{2}}{\frac{1}{2}} \newcommand{1\!\!1}{1\!\!1} \newcommand{\mbox{{\boldmath $\eta$}}}{\mbox{{\boldmath $\eta$}}} \newcommand{\noindent}{\noindent} \newcommand {\CC}{\centerline} \def \mbox{}\hfill $\sqare$\vspace{1ex} {$\Box$} \newcommand{\displaystyle}{\displaystyle} \newcommand{\vskip 1em}{\vskip 1em} \begin{document} \[ \] \begin{center} {\large {\bf Quantum Isometry Groups: Examples and Computations}}\\ by\\ {\large Jyotishman Bhowmick {\footnote {The support from National Board of Higher Mathematics, India, is gratefully acknowledged.}} and Debashish 
Goswami{\footnote {partially supported by the project `Noncommutative Geometry and Quantum Groups' funded by the Indian National Science Academy.}}}\\ {\large Stat-Math Unit, Kolkata Centre,}\\ {\large Indian Statistical Institute}\\ {\large 203, B. T. Road, Kolkata 700 108, India}\\ {e-mails: jyotish\[email protected], [email protected] }\\ \end{center} \begin{abstract} In this follow-up of \cite{goswami}, where the quantum isometry group of a noncommutative manifold was defined, we explicitly compute such quantum groups for a number of classical as well as noncommutative manifolds including the spheres and the tori. It is also proved that the quantum isometry group of an isospectral deformation of a (classical or noncommutative) manifold is a suitable deformation of the quantum isometry group of the original (undeformed) manifold. \end{abstract} \section{Introduction} The idea of the quantum isometry group of a noncommutative manifold (given by a spectral triple), which was defined by one of the authors of the present article in \cite{goswami}, is motivated by the definition and study of quantum permutation groups of finite sets and finite graphs by a number of mathematicians (see, e.g., \cite{ban1}, \cite{ban2}, \cite{wang}, \cite{univ1} and references therein). The group of Riemannian isometries of a compact Riemannian manifold $M$ can be viewed as the universal object in the category of all compact metrizable groups acting on $M$, with smooth and isometric action.
Therefore, to define the quantum isometry group, it is reasonable to consider a category of compact quantum groups which act on the manifold (or more generally, on a noncommutative manifold given by a spectral triple) in a `nice' way, preserving the Riemannian structure in some suitable sense, which is precisely formulated in \cite{goswami}, where it is also proven that a universal object in the category of such quantum groups does exist if one makes some natural regularity assumptions on the spectral triple. Let us just sketch the definition of the quantum isometry group ${\cal Q} \equiv QISO({\cal A}^\infty, {\cal H},D)$ of a spectral triple $({\cal A}^\infty,{\cal H},D)$, without going into all the technical details, for which the reader is referred to \cite{goswami}. The main ingredient of the definition is the Laplacian ${\cal L}$ coming from the spectral triple (see \cite{goswami} for its construction), which coincides with the Hodge Laplacian $-d^\ast d$ (restricted to the space of smooth functions) in the classical case, where $d$ denotes the de-Rham differential. To define the Laplacian in the noncommutative case, it is assumed that the spectral triple $({\cal A}^\infty,{\cal H}, D)$ is of compact type and that there is some $p>0$ such that the operator $|D|^{-p}$ (interpreted as the inverse of the restriction of $|D|^p$ on the closure of its range, which has a finite co-dimension since $D$ has compact resolvent) has finite nonzero Dixmier trace, denoted by $Tr_\omega$ (where $\omega$ is some suitable Banach limit). Consider the canonical `volume form' $\tau $ coming from the Dixmier trace, i.e. $\tau : {\cal B}({\cal H}) \rightarrow {I\! \! \!\! C}$ defined by $\tau(A):=\frac{1}{Tr_\omega(|D|^{-p})} Tr_\omega(A |D|^{-p}).$ We also assume that the spectral triple is $QC^\infty$, i.e. ${{\cal A}^\infty}$ and $\{ [D,a]: ~a \in {{\cal A}^\infty} \}$ are contained in the domains of all powers of the derivation $[|D|, \cdot]$.
Under this assumption, $\tau$ is a positive faithful trace on the $C^*$-subalgebra generated by ${\cal A}^\infty$ and $\{ [D,a]: ~a \in {{\cal A}^\infty} \}$, and using this there is a canonical construction of the Hilbert space of forms, denoted by ${\cal H}^n_D$, $ n \geq 0$ (see \cite{fro} for details), with ${\cal H}^0_D=L^2({\cal A}^\infty, \tau)$. It is assumed that the unbounded densely defined map $d_D$ from ${\cal H}^0_D$ to ${\cal H}^1_D$ given by $d_D(a)=[D,a]$ for $a \in {\cal A}^\infty$, is closable, that ${\cal L}:=-d_D^*d_D$ has ${\cal A}^\infty$ in its domain, and that ${\cal A}^\infty$ is left invariant by ${\cal L}$. Moreover, we assume that ${\cal L}$ has compact resolvent, with its eigenvectors belonging to ${\cal A}^\infty$, and that the kernel of ${\cal L}$ is the one-dimensional subspace spanned by the identity $1$ of ${\cal A}^\infty$. The linear span of eigenvectors of ${\cal L}$, which is a subspace of ${\cal A}^\infty$, is denoted by ${\cal A}^\infty_0$, and it is assumed that ${\cal A}^\infty_0$ is norm-dense in the $C^*$-algebra ${\cal A}$ obtained by completing ${\cal A}^\infty$. The $\ast$-subalgebra of ${\cal A}^\infty$ generated by ${\cal A}^\infty_0$ is denoted by ${\cal A}_0$. It is clear that ${\cal L}({\cal A}^\infty_0) \subseteq {\cal A}^\infty_0$, and a compact quantum group $({\cal G},\Delta)$ which has an action $\alpha$ on ${\cal A}$ is said to act smoothly and isometrically on the noncommutative manifold $({\cal A}^\infty, {\cal H}, D)$ if $({\rm id} \otimes \phi) \circ \alpha({\cal A}^\infty_0) \subseteq {\cal A}^\infty_0$ for every state $\phi$ on ${\cal G}$, and also $({\rm id} \otimes \phi) \circ \alpha$ commutes with ${\cal L}$ on ${\cal A}^\infty_{0}$. One can consider the category of all compact quantum groups acting smoothly and isometrically on ${\cal A}$, where the morphisms are quantum group morphisms which intertwine the actions on ${\cal A}$.
It is proved in \cite{goswami} (under some regularity assumptions, which are valid for any compact connected Riemannian spin manifold with the usual Dirac operator) that there exists a universal object in this category, and this universal object is defined to be the quantum isometry group of $({\cal A}^\infty,{\cal H},D)$, denoted by $QISO({\cal A}^\infty, {\cal H}, D)$, or simply as $QISO({\cal A}^\infty)$ or even $QISO({\cal A})$ if the spectral triple is understood. In fact, we have considered a bigger category, namely the category of `quantum families of smooth isometries' (see \cite{goswami} for details), which is motivated by the ideas of Woronowicz and Soltan (\cite{woro_pseudo}, \cite{soltan}), and identified the underlying $C^*$-algebra of the quantum isometry group as a universal object in this bigger category. We believe that a detailed study of quantum isometry groups will not only give many new and interesting examples of compact quantum groups, but will also contribute to the understanding of quantum group covariant spectral triples. For this, it is important to explicitly describe the quantum isometry groups of sufficiently many classical and noncommutative manifolds, which is our aim in this paper. We have computed the quantum isometry groups of classical and noncommutative spheres and tori, and also obtained a general principle for computing such quantum groups, by proving that the quantum isometry group of an isospectral deformation of a (classical or noncommutative) manifold is a deformation of the quantum isometry group of the original (undeformed) manifold. Throughout the paper, we have denoted by ${\cal A}_1 \otimes {\cal A}_2$ the minimal (injective) $C^*$-tensor product between two $C^*$-algebras ${\cal A}_1$ and ${\cal A}_2$. The symbol $\otimes_{\rm alg}$ has been used to denote the algebraic tensor product between vector spaces or algebras.
For a compact quantum group ${\cal G}$, the dense unital $\ast$-subalgebra generated by the matrix coefficients of irreducible unitary representations has been denoted by ${\cal G}_0$. The coproduct of ${\cal G}$, say $\Delta$, maps ${\cal G}_0$ into the algebraic tensor product ${\cal G}_0 \otimes_{\rm alg} {\cal G}_0$, and there exist canonical antipode and counit defined on ${\cal G}_0$ which make it into a Hopf $\ast$-algebra (see \cite{woro} for details). \section{Computation of the quantum isometry groups of the sphere and tori} \subsection{Computation for the commutative spheres} Let ${\cal Q}$ be the quantum isometry group of $S^2$ and let $\alpha$ be the action of ${\cal Q}$ on $C(S^2)$. Let ${\cal L}$ be the Laplacian on $S^2$ given by $${\cal L}=\frac{\partial^2}{\partial \theta^2}+ {\rm cot}(\theta) \frac{\partial}{\partial \theta}+\frac{1}{{\rm sin}^2(\theta)}\frac{\partial^2}{\partial \psi^2},$$ and the Cartesian coordinates $x_1$, $x_2$, $x_3$ for $S^2$ are given by $x_1=r\cos{\psi} \sin{\theta}$, $x_2=r\sin{\psi} \sin{\theta}$, $x_3=r\cos{\theta}$. In Cartesian coordinates, ${\cal L}=\sum_{i=1}^3 \frac{\partial^2}{\partial x_i^2}.$ The eigenspaces of ${\cal L}$ on $S^2$ are of the form $$E_k={\rm Sp}\{(c_1x_1+c_2x_2+c_3x_3)^k~:~c_i\in {I\! \! \!\! C},i=1,2,3,~ \sum c_i^2=0\},$$ where $k \geq 1$. $E_k$ consists of harmonic homogeneous polynomials of degree $k$ on $\mathbb{R}^3$ restricted to $S^2$ (see \cite{Helgason}, pages 29--30). We begin with the following lemma, which says that any smooth isometric action by a quantum group must be `linear'. \begin{lmma} The action $\alpha$ satisfies $\alpha(x_i)=\sum_{j=1}^{3} x_j\otimes Q_{ij}$ where $Q_{ij} \in {\cal Q}, i,j=1,2,3$. \end{lmma} {\it Proof :}\\ Since $\alpha$ is a smooth isometric action of ${\cal Q}$ on $C(S^2)$, $\alpha$ has to preserve the eigenspaces of the Laplacian ${\cal L}$. In particular, it has to preserve $E_1={\rm Sp}\{ c_1x_1+c_2x_2+c_3x_3~:~c_i \in {I\! \! \!\!
C},i=1,2,3, \sum_{i=1}^{3}c^2_i=0\}.$ Now note that $x_1+ix_2, x_1-ix_2 \in E_1$, hence $x_1,x_2\in E_1.$ Similarly $x_3 \in E_1$ too. Therefore $E_1={\rm Sp}\{ x_1,x_2,x_3 \}$, which completes the proof of the lemma. \mbox{}\hfill $\sqare$\vspace{1ex} Now, we state and prove the main result of this section, which identifies ${\cal Q}$ with the commutative $C^*$ algebra of continuous functions on the isometry group of $S^2$, i.e. $O(3)$. \begin{thm} The quantum isometry group ${\cal Q}$ is commutative as a $C^*$ algebra. \end{thm} {\it Proof :}\\ We begin with the expression $$ \alpha(x_i)=\sum_{j=1}^3 x_j \otimes Q_{ij},~i=1,2,3,$$ and also note that $x_1,x_2,x_3$ form a basis of $E_1$ and $\{ x_1^2,x_2^2,x_3^2,x_1x_2,x_1x_3, x_2x_3 \}$ is a basis of $E_2$. Since $x_i^*=x_i$ for each $i$ and $\alpha$ is a $\ast$-homomorphism, we must have $Q_{ij}^*=Q_{ij} ~ \forall i,j=1,2,3$. Moreover, the condition $x^2_1+x^2_2+x^2_3=1$ and the fact that $\alpha$ is a homomorphism gives: $$ Q^2_{1j}+Q^2_{2j}+Q^2_{3j}=1,~\forall j=1,2,3.$$ Again,the condition that $x_i$,$x_j$ commutes $\forall i,j$ gives \begin{equation} \label{2c} Q_{ij}Q_{kj}=Q_{kj}Q_{ij} \forall i,j,k, \end{equation} \begin{equation} \label{3c} Q_{ik}Q_{jl}+Q_{il}Q_{jk}=Q_{jk}Q_{il}+Q_{jl}Q_{ik}.\end{equation} Now, it follows from the Lemma 2.12 in \cite{goswami} that $\tilde{\alpha}: C(S^2) \otimes {\cal Q} \rightarrow C(S^2) \otimes {\cal Q}$ defined by $\tilde{\alpha}(X \otimes Y)=\alpha(X)(1 \otimes Y)$ extends to a unitary of the Hilbert ${\cal Q}$-module $L^2 ( S^2 ) \otimes {\cal Q}$ (or in other words, $\alpha$ extends to a unitary representation of ${\cal Q}$ on $L^2(S^2)$). But $\alpha$ keeps $V={\rm Sp}\{x_1,x_2,x_3\}$ invariant. So $\alpha$ is a unitary representation of ${\cal Q}$ on $V$, i.e. $Q = (( Q_{ij} )) \in M_3 ( {\cal Q} )$ is a unitary, hence $Q^{-1}=Q^*=Q^T$, since in this case entries of $Q$ are self-adjoint elements. 
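For the reader's convenience, we record the coefficient comparison behind (\ref{2c}) and (\ref{3c}); it uses only that $\alpha$ is a homomorphism and that the monomials $x_kx_l$, $k \leq l$, are linearly independent:

```latex
\alpha(x_i)\alpha(x_j)
 = \sum_{k,l=1}^{3} x_k x_l \otimes Q_{ik} Q_{jl}
 = \sum_{k} x_k^2 \otimes Q_{ik} Q_{jk}
 + \sum_{k<l} x_k x_l \otimes \bigl( Q_{ik} Q_{jl} + Q_{il} Q_{jk} \bigr).
```

Since $x_ix_j=x_jx_i$, comparing this termwise with the analogous expansion of $\alpha(x_j)\alpha(x_i)$ gives (\ref{2c}) from the $x_k^2$ terms and (\ref{3c}) from the $x_kx_l$ ($k<l$) terms.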
Clearly, the matrix $Q$ is a $3$-dimensional unitary representation of ${\cal Q}$. Recall that (cf.\ \cite{VanDaele}) the antipode $\kappa$ on the matrix elements of a finite-dimensional unitary representation $U^\alpha \equiv ( u_{pq}^\alpha)$ is given by $\kappa (u_{pq}^\alpha ) =( u_{qp}^\alpha )^* .$ So we obtain \begin{equation} \label{5c} \kappa( Q_{ij} )= (Q^{-1})_{ij}=(Q^T)_{ij}=Q_{ji}.\end{equation} Now from (\ref{2c}) we have $Q_{ij}Q_{kj} = Q_{kj}Q_{ij}.$ Applying $\kappa$ to this equation and using the fact that $\kappa$ is an antihomomorphism along with (\ref{5c}), we have $Q_{jk}Q_{ji} = Q_{ji}Q_{jk}.$ Similarly, applying $\kappa$ to (\ref{3c}), we get $$ Q_{lj}Q_{ki} + Q_{kj}Q_{li} = Q_{li}Q_{kj} +Q_{ki}Q_{lj}~ \forall i,j,k,l.$$ Interchanging $k$ with $i$ and $l$ with $j$ gives \begin{equation} \label{6c} Q_{jl}Q_{ik} +Q_{il}Q_{jk} =Q_{jk}Q_{il} +Q_{ik}Q_{jl}~ \forall i,j,k,l.\end{equation} Now, subtracting (\ref{6c}) from (\ref{3c}), we have $$ [ Q_{ik},Q_{jl} ] =[ Q_{jl},Q_{ik} ],$$ hence $$ [ Q_{ik},Q_{jl} ] = 0.$$ Therefore the entries of the matrix $Q$ commute among themselves. However, by faithfulness of the action of ${\cal Q}$, it is clear that the $C^*$-subalgebra generated by the entries of $Q$ (which forms a quantum subgroup of ${\cal Q}$ acting on $C(S^2)$ isometrically) must be the same as ${\cal Q}$, so ${\cal Q}$ is commutative. \mbox{}\hfill $\sqare$\vspace{1ex} So ${\cal Q}=C( G )$ for some compact group $G$ which acts isometrically on $C (S^2 )$ and is universal in this category, i.e.\ ${\cal Q}=C( O( 3 )).$ \begin{rmrk} Similarly, it can be shown that $ QISO ( S^n ) $ is commutative for all $ n \geq 2. $ \end{rmrk} \subsection{The commutative one-torus} Let ${\cal C}=C(S^1)$ be the $C^*$-algebra of continuous functions on the one-torus $S^1$. Let us denote by $z$ and $\overline{z}$ the identity function (which is the generator of $C(S^1)$) and its conjugate, respectively.
The Laplacian coming from the standard Riemannian metric is given by ${\cal L}(z^n)=-n^2 z^n$ for $n \in \mathbb{Z}$, hence the eigenspace corresponding to the eigenvalue $-1$ is spanned by $z$ and $\overline{z}$. Thus, the action of a compact quantum group acting smoothly and isometrically (and faithfully) on $C(S^1)$ must be {\it linear} in the sense that it must map $z$ into an element of the form $z \otimes A +\overline{z} \otimes B$. However, we show below that this forces the quantum group to be commutative as a $C^*$-algebra, i.e.\ it must be the function algebra of some compact group. \begin{thm} Let $\alpha $ be a faithful, smooth and linear action of a compact quantum group $({\cal Q},\Delta)$ on $C ( S^1 )$ defined by $ \alpha ( z ) = z \otimes A + \overline{z} \otimes B$. Then ${\cal Q}$ is a commutative $C^*$-algebra. \end{thm} {\it Proof :}\\ By the assumption of faithfulness, it is clear that ${\cal Q}$ is generated (as a unital $C^*$-algebra) by $A$ and $B$. Moreover, recall that smoothness in particular means that $A$ and $B$ must belong to the algebra ${\cal Q}_0$ spanned by the matrix elements of irreducible representations of ${\cal Q}$. Since $z \overline{z} =\overline{z}z =1$ and $\alpha$ is a $\ast$-homomorphism, we have $ \alpha ( z ) \alpha ( \overline{z} ) = \alpha ( \overline{z} ) \alpha ( z ) = 1 \otimes 1 $. Comparing the coefficients of $z^2,{ \overline {z}}^2$ and $1$ on both sides of the relation $ \alpha ( z ) \alpha ( \overline{z} ) =1 \otimes 1$, we get \begin{equation} \label{1d} AB^* = BA^* = 0,~~~ AA^* + BB^* =1.\end{equation} Similarly, $ \alpha (\overline{z} ) \alpha ( z ) = 1 \otimes 1$ gives \begin{equation} \label{2d} B^*A = A^*B =0,~~~ A^*A +B^*B =1. \end{equation} Let $U =A+B$ and $P=A^*A$.
Then it follows from (\ref{1d}) and (\ref{2d}) that $U$ is a unitary, and that $P$ is a projection, since $P$ is self-adjoint and \begin{eqnarray*} P^2 &=& A^*AA^*A = A^*A( 1-B^*B ) = A^*A - A^*AB^*B = A^*A =P.\end{eqnarray*} Moreover, \begin{eqnarray*} UP &=& ( A + B ) A^*A =AA^*A + BA^*A =AA^*A \quad ({\rm since}~ BA^* =0~{\rm by}~(\ref{1d}))\\ &=& A ( 1-B^*B ) =A-AB^*B =A .\end{eqnarray*} Thus, $A=UP$, $B=U-UP=U(1-P)\equiv UP^\perp$, so ${\cal Q}=C^*(A,B)=C^*(U,P)$. We can rewrite the action $\alpha$ as follows: $$ \alpha ( z ) =z \otimes UP + \overline {z} \otimes UP^\bot.$$ The coproduct $\Delta$ can easily be calculated from the requirement $ (id \otimes \Delta )\alpha =( \alpha \otimes id ) \alpha$, and it is given by: \begin{equation} \label{6d} \Delta ( UP ) = UP \otimes UP +P^ \bot U^{-1 } \otimes UP^ \bot, \end{equation} \begin{equation} \label{7d} \Delta ( UP^ \bot ) = UP^\bot \otimes UP + PU^{-1} \otimes UP^ \bot. \end{equation} From this, we get \begin{equation} \label{8d} \Delta ( U ) =U \otimes UP +U^{-1} \otimes UP^ \bot, \end{equation} \begin{equation} \label{9d} \Delta ( P ) = \Delta (U^{ -1 }) \Delta ( UP ) =P \otimes P +UP^\bot U^{-1} \otimes P^\bot.\end{equation} It can be checked that $ \Delta $ given by the above expressions is coassociative. Let $h$ denote the right-invariant Haar state on ${\cal Q}$. By the general theory of compact quantum groups, $h$ must be faithful on ${\cal Q}_0$. By the right-invariance of $h$, we have $$ ({\rm id} \otimes h) ( P \otimes P + UP^\bot U^{-1} \otimes P^\bot ) =h( P )1.$$ That is, \begin{equation} \label{10d} h( P^\bot )UP^ \bot U^{-1} = h ( P )P^ \bot.\end{equation} Since $P$ is a positive element in ${\cal Q}_0$ and $h$ is faithful on ${\cal Q}_0$, $h(P)=0$ if and only if $P=0$. Similarly, $h(P^\bot)=0$, i.e.\ $h(P)=1$, if and only if $P=1$. However, if $P$ is either $0$ or $1$, then clearly ${\cal Q} =C^*( U,P )=C^*( U )$, which is commutative.
On the other hand, if we assume that $P$ is a nontrivial projection, then $h(P)$ is strictly between $0$ and $1$, and we have from (\ref{10d}) $$ UP^\bot U^{-1} = \frac{h( P )}{ 1-h( P )} P^\bot .$$ Since both $UP^ \bot U^{-1} $ and $P^ \bot $ are nontrivial projections, they can be scalar multiples of each other if and only if they are equal, so we conclude that $ UP^\bot U^{-1}=P^\bot$, i.e.\ $U$ commutes with $P^\bot$, hence with $P$, and ${\cal Q}$ is commutative. \mbox{}\hfill $\sqare$\vspace{1ex} \subsection{Commutative and noncommutative two-tori} Fix a real number $\theta$, and let $ {\cal A}_ \theta $ be the universal $ C^{*} $-algebra generated by two unitaries $ U $ and $ V $ such that $ U V = \lambda V U $, where $\lambda:=e^{2 \pi i \theta}$. It is well known (see \cite{con}) that the set $ \{ U^{m}V^{n} : m,n \in \mathbb{Z} \} $ is an orthonormal basis for $ L^{2} ( {\cal A}_{ \theta } , \tau ), $ where $ \tau$ denotes the unique faithful normalized trace on ${\cal A}_{\theta}$ given by $\tau ( \sum a_{m n} U^{m} V^{n} ) = a_{0 0} $. We shall denote by $ \left\langle A , B \right\rangle = \tau ( A^{*} B )$ the inner product on ${\cal H}_0:=L^2({\cal A}_\theta,\tau)$. Let ${\cal A}_\theta^{\rm fin}$ be the unital $\ast$-subalgebra consisting of finite complex linear combinations of $U^mV^n$, $m,n \in \mathbb{Z}$, and let $d_1,d_2$ be the maps on ${\cal A}^{\rm fin}_\theta$ defined by $d_1(U^mV^n)=mU^mV^n$, $d_2(U^mV^n)=nU^mV^n$. We consider the canonical spectral triple (see \cite{con} for details) $({\cal A}^{\rm fin}_\theta, {\cal H}, D)$, where ${\cal H}={\cal H}_0 \oplus {\cal H}_0$, $D=\left( \begin{array}{cc} 0 & d_1+id_2 \\ d_1-id_2 & 0 \end{array} \right),$ and the representation of ${\cal A}_\theta$ on ${\cal H}$ is the diagonal one, i.e.
$a \mapsto \left( \begin{array}{cc} a & 0 \\ 0 & a \end{array} \right).$ Clearly, the corresponding Laplacian ${\cal L}$ is given by ${\cal L}(U^mV^n)=-(m^2+n^2) U^mV^n,$ and it is also easy to see that the algebraic span of the eigenvectors of ${\cal L}$ is nothing but the space ${\cal A}^{\rm fin}_\theta$; moreover, all the assumptions in \cite{goswami} required for defining the quantum isometry group are satisfied. Let ${\cal Q}$ be the quantum isometry group of the above spectral triple, with the smooth isometric action on $ {\cal A}_ \theta $ given by $\alpha : {\cal A}_\theta \rightarrow {\cal A}_\theta \otimes {\cal Q}$. By definition, $ \alpha $ must keep invariant the eigenspace of $ {\cal L} $ corresponding to the eigenvalue $-1$, spanned by $ U,V,U^{-1},V^{-1} $. Thus, the action $ \alpha $ is given by: $$ \alpha ( U ) = U \otimes A_{1} + V \otimes B_{1} + U^{-1} \otimes C_{1} + V^{-1} \otimes D_{1},$$ $$ \alpha ( V ) = U \otimes A_{2} + V \otimes B_{2} + U^{-1} \otimes C_{2} + V^{-1} \otimes D_{2},$$ for some $ A_{i},B_{i},C_{i},D_{i} \in {\cal Q}$, $i=1,2 $, and by the faithfulness of the action of the quantum isometry group (see \cite{goswami}), the norm-closure of the unital $\ast$-algebra generated by $A_i,B_i,C_i,D_i$, $i=1,2$, must be the whole of ${\cal Q}$. Next we derive a number of conditions on $ A_{i},B_{i},C_{i},D_{i}$, $i=1,2$, using the fact that $ \alpha $ is a $ \ast $-homomorphism.
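As an illustration of the coefficient comparisons carried out repeatedly below, note that $VU^{-1}=\lambda U^{-1}V$ (multiply $UV=\lambda VU$ by $U^{-1}$ on both sides), so that, for instance,

```latex
\alpha(U)^*\alpha(U)
 = \bigl( U^{-1}\otimes A_1^* + V^{-1}\otimes B_1^* + U\otimes C_1^* + V\otimes D_1^* \bigr)
   \bigl( U\otimes A_1 + V\otimes B_1 + U^{-1}\otimes C_1 + V^{-1}\otimes D_1 \bigr),
```

in which the coefficient of $1$ is $A_1^*A_1+B_1^*B_1+C_1^*C_1+D_1^*D_1$, while the coefficient of $U^{-1}V$ is $A_1^*B_1+\lambda D_1^*C_1$ (the latter collecting the products $U^{-1}\cdot V$ and $V\cdot U^{-1}$). Setting $\alpha(U)^*\alpha(U)=1\otimes 1$ and equating coefficients yields the first relations of the next lemma.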
\begin{lmma} \label{Lemma 1} The condition $ U^{*} U = 1 = U U^{*} $ gives: \begin{equation} \label{lem1.1a} A^{*}_{1} A_{1} + B^{*}_{1}B_{1} + C^{*}_{1}C_{1} + D^{*}_{1}D_{1} = 1 \end{equation} \begin{equation} \label{lem1.2} A^{*}_{1} B_{1} + \lambda D^{*}_{1} C_{1} = A^{*}_{1} D_{1} + \overline { \lambda } B^{*}_{1} C_{1} = 0 \end{equation} \begin{equation} \label{lem1.3} C^{*}_{1} D_{1} + \lambda B^{*}_{1} A_{1} = C^{*}_{1} B_{1} + \overline { \lambda } D^{*}_{1} A_{1} = 0 \end{equation} \begin{equation} \label{lem1.4} A^{*}_{1} C_{1} = B^{*}_{1} D_{1} = C^{*}_{1} A_{1} = D^{*}_{1} B_{1} = 0 \end{equation} \begin{equation} \label{lem1.5} A_{1} A^{*}_{1} + B_{1} B^{*}_{1} + C_{1}C^{*}_{1} + D_{1}D^{*}_{1} = 1 \end{equation} \begin{equation} \label{lem1.6}A_{1}B^{*}_{1} + \lambda D_{1}C^{*}_{1} = A_{1} D^{*}_{1} + \overline{ \lambda } B_{1}C^{*}_{1} = 0 \end{equation} \begin{equation} \label{lem1.7}C_{1} D^{*}_{1} + \lambda B_{1}A^{*}_{1} = C_{1}B^{*}_{1} + \overline { \lambda } D_{1}A^{*}_{1} = 0 \end{equation} \begin{equation} \label{lem1.8} A_{1}C^{*}_{1} = B_{1}D^{*}_{1} = C_{1}A^{*}_{1} = D_{1}B^{*}_{1} = 0 \end{equation} \end{lmma} {\it Proof :}\\ We get (\ref{lem1.1a})--(\ref{lem1.4}) by using the condition $ U^{*} U = 1 $ along with the fact that $ \alpha $ is a homomorphism, and then comparing the coefficients of $ 1, U^{*}V ,{ U^{*} }^{2}, U^{*}V^{*} , U V^{*} ,{ V^{*} }^{2} , U^{2} , U V , V^{2} . $ Similarly, the condition $ U U^{*} = 1 $ gives (\ref{lem1.5})--(\ref{lem1.8}). \mbox{}\hfill $\sqare$\vspace{1ex} \begin{lmma} \label{Lemma 2} We have the analogues of (\ref{lem1.1a})--(\ref{lem1.8}) with $ A_{1},B_{1},C_{1},D_{1} $ replaced by $ A_{2},B_{2},C_{2},D_{2} $ respectively.
\end{lmma} {\it Proof :}\\ We use the condition $ V^{*}V = V V^{*} = 1 $. \mbox{}\hfill $\sqare$\vspace{1ex} \\ Now we note that if $ \alpha ( U^{m} V^{n} ) = \sum c_{kl} U^{k} V^{l} \otimes Q_{kl} $ for some $ Q_{kl} \in {\cal Q} $, then the condition that $ \alpha $ commutes with the Laplacian implies $ c_{kl} = 0 $ unless $ k^{2} + l^{2} = m^{2} + n^{2} $. We use this observation in the next lemma. \begin{lmma} \label{Lemma 3} Inspecting the terms with zero coefficient in $ \alpha ( U^{*} V ) , \alpha ( V U^{*} ) , \alpha ( U V ) , \alpha ( V U ) $, we get \begin{equation} \label{lem3.1} C^{*}_{1} A_{2} = 0 , D^{*}_{1} B_{2} = 0 , A^{*}_{1} C_{2} = 0 , B^{*}_{1}D_{2} = 0 \end{equation} \begin{equation} \label{lem3.2} A_{2}C^{*}_{1} = 0 , B_{2}D^{*}_{1} = 0 , C_{2} A^{*}_{1} = 0 , D_{2} B^{*}_{1} = 0 \end{equation} \begin{equation} \label{lem3.3} A_{1} A_{2} = 0 , B_{1} B_{2} = 0 , C_{1}C_{2} = 0 , D_{1} D_{2} = 0 \end{equation} \begin{equation} \label{lem3.4} A_{2} A_{1} = 0 ,B_{2} B_{1} = 0, C_{2} C_{1} = 0, D_{2} D_{1} = 0. \end{equation} \end{lmma} {\it Proof :}\\ The equation (\ref{lem3.1}) is obtained from the coefficients of $ U^{2}, V^{2} ,{ U^{*} }^{2} , { V^{*} }^{2} $ in $ \alpha ( U^{*} V ) $, while (\ref{lem3.2}), (\ref{lem3.3}), (\ref{lem3.4}) are obtained from the same coefficients in $ \alpha ( V U^{*} ) , \alpha ( U V ) , \alpha ( V U ) $ respectively.
\mbox{}\hfill $\sqare$\vspace{1ex} \begin{lmma} \label{Lemma 4} We have $$ A_{1} B_{2} + \overline{ \lambda } B_{1} A_{2} = \lambda A_{2} B_{1} + B_{2} A_{1}, $$ $$ A_{1} D_{2} + \lambda D_{1} A_{2} = \lambda A_{2} D_{1} + { \lambda }^2 D_{2} A_{1}, $$ $$ C_{1} B_{2} + \lambda B_{1} C_{2} = \lambda C_{2} B_{1} + { \lambda }^{2} B_{2} C_{1}, $$ $$ C_{1} D_{2} + \overline{ \lambda } D_{1} C_{2} = \lambda C_{2} D_{1} + D_{2} C_{1}. $$ \end{lmma} {\it Proof :}\\ This follows from the relation $ \alpha ( U V ) = \lambda \alpha ( V U ) $ by equating the nonzero coefficients of $ U V , U V^{-1} , U^{-1} V $ and $ U^{-1} V^{-1} $. \mbox{}\hfill $\sqare$\vspace{1ex} \vspace{4mm} Now, by Lemma 2.12 in \cite{goswami} it follows that $ \tilde{\alpha}: {\cal A}_{\theta} \otimes {\cal Q} \rightarrow {\cal A}_{ \theta } \otimes {\cal Q} $ defined by $\tilde{\alpha}(X \otimes Y)=\alpha(X)(1 \otimes Y)$ extends to a unitary of the Hilbert ${\cal Q}$-module $L^2 ( {\cal A}_{\theta},\tau ) \otimes {\cal Q}$ (or in other words, $\alpha$ extends to a unitary representation of ${\cal Q}$ on $L^2({\cal A}_{\theta},\tau)$). But $\alpha$ keeps $W = {\rm Sp}\{U,V,U^{*},V^{*}\}$ invariant (as observed at the beginning of this section). So $\alpha$ restricts to a unitary representation of ${\cal Q}$ on $W$. Hence the matrix (say $M$) corresponding to this $4$-dimensional representation of $ {\cal Q} $ on $W$ is a unitary in $ M_4 ( {\cal Q} )$.
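To see how the matrix $M$ displayed below is read off from the action, note that since $\alpha$ is a $\ast$-homomorphism,

```latex
\alpha(U^{*}) = \alpha(U)^{*}
  = U\otimes C_1^{*} + V\otimes D_1^{*} + U^{*}\otimes A_1^{*} + V^{*}\otimes B_1^{*},
```

and similarly for $\alpha(V^*)$; with respect to the ordered basis $\{U,V,U^*,V^*\}$ of $W$, the coefficients of $\alpha(U),\alpha(V),\alpha(U^*),\alpha(V^*)$ then form the four columns of $M$.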
From the definition of the action it follows that $ M =\left( \begin {array} {cccc} A_{1} & A_{2} & C^{*}_{1} & C^{*}_{2} \\ B_{1} & B_{2} & D^{*}_{1} & D^{*}_{2} \\ C_{1} & C_{2} & A^{*}_{1} & A^{*}_{2} \\ D_{1} & D_{2} & B^{*}_{1} & B^{*}_{2} \end {array} \right ). $ Since $ M $ is the matrix corresponding to a finite-dimensional unitary representation, $ \kappa ( M_{k l } )= (M^{-1})_{ k l }, $ where $ \kappa $ denotes the antipode of $ {\cal Q} $ (see \cite{VanDaele}). But $ M $ is a unitary, so $ M^{-1} = M^{*} $, and therefore $ ( ( \kappa ( M_{k l} ) ) ) = \left ( \begin {array} {cccc} A^{*}_{1} & B^{*}_{1} & C^{*}_{1} & D^{*}_{1} \\ A^{*}_{2} & B^{*}_{2} & C^{*}_{2} & D^{*}_{2} \\ C_{1} & D_{1} & A_{1} & B_{1} \\ C_{2} & D_{2} & A_{2} & B_{2} \end {array} \right ). $ \begin{lmma} \label{Lemma 5a} $ A_{1} $ is a normal partial isometry and hence has the same domain and range. \end{lmma} {\it Proof :}\\ From the relation $ A^{*}_{1} A_{1} + B^{*}_{1}B_{1} + C^{*}_{1}C_{1} + D^{*}_{1}D_{1} = 1 $ in Lemma \ref{Lemma 1}, we have, by applying $ \kappa $, $ A^{*}_{1} A_{1} + A^{*}_{2}A_{2} + C_{1}C^{*}_{1} + C_{2}C^{*}_{2} = 1 $.
Multiplying this equation by $ A_{1} $ on the right and using $ C^{*}_{1} A_{1} = 0 $ from Lemma \ref{Lemma 1}, and $ A_{2}A_{1} = A^{*}_{1} C_{2} = 0 $ from Lemma \ref{Lemma 3}, we have \begin{equation} \label{i} A^{*}_{1} A_{1} A_{1} = A_{1}. \end{equation} Again, from the relation $ A_{1} A^{*}_{1} + B_{1}B^{*}_{1} + C_{1}C^{*}_{1} + D_{1}D^{*}_{1} = 1 $ in Lemma \ref{Lemma 1}, applying $ \kappa $ and multiplying by $ A^{*}_{1} $ on the right, and then using $ C_{1} A^{*}_{1} = 0 $ from Lemma \ref{Lemma 1} and $ A_{1} A_{2} = C_{2} A^{*}_{1} = 0 $ from Lemma \ref{Lemma 3}, we have \begin{equation} \label{ii} A_{1} A^{*}_{1}A^{*}_{1} = A^{\ast}_{1}. \end{equation} From (\ref{i}), we have \begin{equation} \label{iii} ( A^{*}_{1} A_{1} ) ( A_{1} A^{*}_{1} ) = A_{1} A^{*}_{1}. \end{equation} Taking adjoints in (\ref{ii}), we have \begin{equation} \label{iv} A_{1} A_{1} A^{*}_{1} = A_{1}. \end{equation} So, multiplying by $ A^{*}_{1} $ on the left, we have \begin{equation} \label{v} ( A^{*}_{1} A_{1} )( A_{1} A^{*}_{1} ) = A^{*}_{1} A_{1}. \end{equation} From (\ref{iii}) and (\ref{v}), we have $ A_{1} A^{*}_{1} = A^{*}_{1} A_{1} $, i.e.\ $ A_{1} $ is normal. So $ A_{1} = A^{*}_{1} A_{1} A_{1} $ (from (\ref{i})) $ = A_{1} A^{*}_{1} A_{1} $ by normality. Therefore, $ A_{1} $ is a partial isometry which is normal and hence has the same domain and range. \mbox{}\hfill $\sqare$\vspace{1ex} \begin{rmrk} In an exactly similar way, it can be proved that $ D_{1} $ is a normal partial isometry and hence has the same domain and range. \end{rmrk} \begin{lmma} \label{Lemma 6} We have $ C^{*}_{1} B^{*}_{1} = C^{*}_{2} B^{*}_{2} = A_{1} D_{1} = A_{2} D_{2} = B_{1} C^{*}_{1} = B^{*}_{1} C^{*}_{1} = B_{1} A_{1} = A_{1} B^{*}_{1} = D_{1} A_{1} = A^{*}_{1} D_{1} = C^{*}_{1} B_{1} = D_{1} C^{*}_{1} = 0. $ \end{lmma} {\it Proof :}\\ Using $ A_{2} C^{*}_{1} = B_{2} D^{*}_{1} = C_{2} A^{*}_{1} = D_{2} B^{*}_{1} = 0 $ from Lemma \ref{Lemma 3} and applying $ \kappa $, we have the first four equalities.
But $ A_{1} D_{1} = 0 $. Hence \begin{equation} \label{lem6.1} {\rm Ran} ( D_{1} ) \subseteq {\rm Ker} ( A_{1} ). \end{equation} By the remark made above, \begin{equation} \label{lem6.2} {\rm Ran} ( D_{1} ) = {\rm Ran} ( D^{*}_{1} ). \end{equation} Now (\ref{lem6.1}) and (\ref{lem6.2}) imply $ {\rm Ran} ( D^{*}_{1} ) \subseteq {\rm Ker} ( A_{1} ) $, so $ A_{1} D^{*}_{1} = 0 $. But from Lemma \ref{Lemma 1} we have $ A_{1} D^{*}_{1} + \overline{ \lambda } B_{1} C^{*}_{1} = 0 $, which gives $ B_{1} C^{*}_{1} = 0 $. From Lemma \ref{Lemma 3}, we have $ C^{*}_{1} A_{2} = A_{2} A_{1} = 0 $, from which it follows, by applying $ \kappa $, that $ B^{*}_{1} C^{*}_{1} = A^{*}_{1} B^{*}_{1} = 0 $. So $ B_{1} A_{1} = 0 $, and hence $ {\rm Ran} ( A_{1} ) \subseteq {\rm Ker} ( B_{1} ) $. But by Lemma \ref{Lemma 5a}, $ A_{1} $ is a normal partial isometry and so has the same range and domain. Thus, $ {\rm Ran} ( A^{*}_{1} ) \subseteq {\rm Ker} ( B_{1} ) $, which implies $ B_{1} A^{*}_{1} = 0 $, i.e.\ \begin{equation} \label{lem6.3}A_{1} B^{*}_{1} = 0. \end{equation} Again, from Lemma \ref{Lemma 3}, $ A^{*}_{1} C_{2} = 0 $. Hence, by applying $ \kappa $, $ D_{1} A_{1} = 0 $, i.e.\ $ A^{*}_{1} D^{*}_{1} = 0 $. Since $ D_{1} $ is a normal partial isometry (by the remark following Lemma \ref{Lemma 5a}), we conclude that $ A^{*}_{1} D_{1} = 0 $. But by Lemma \ref{Lemma 1}, we have $ A^{*}_{1} D_{1} + \overline { \lambda } B^{*}_{1} C_{1} = 0 $, so $ A^{*}_{1} D_{1} = 0 $ implies $ B^{*}_{1} C_{1} = 0 $, i.e.\ $ C^{*}_{1} B_{1} = 0 $. Also, $ A_{1} B^{*}_{1} = 0 $ (from (\ref{lem6.3})) and $ A_{1} B^{*}_{1} + \lambda D_{1} C^{*}_{1} = 0 $ (by Lemma \ref{Lemma 1}), so $ D_{1} C^{*}_{1} = 0 $. \mbox{}\hfill $\sqare$\vspace{1ex} \begin{lmma} \label{Lemma 7} $ C_{1} $ is a normal partial isometry and hence has the same domain and range.
\end{lmma} {\it Proof :}\\ From the relation $ A^{*}_{1} A_{1} + B^{*}_{1}B_{1} + C^{*}_{1}C_{1} + D^{*}_{1}D_{1} = 1 $ in Lemma \ref{Lemma 1}, multiplying by $ C^{*}_{1} $ on the right and using $ A_{1} C^{*}_{1} = 0 $ from Lemma \ref{Lemma 1}, and $ B_{1}C^{*}_{1} = D_{1} C^{*}_{1} = 0 $ from Lemma \ref{Lemma 6}, we have \begin{equation} \label{lem7.1} C^{*}_{1} C_{1} C^{*}_{1} = C^{*}_{1}. \end{equation} Therefore, $ C^{*}_{1} $, and hence $ C_{1} $, is a partial isometry. Also, from Lemma \ref{Lemma 1}, $ A^{*}_{1} A_{1} + B^{*}_{1}B_{1} + C^{*}_{1}C_{1} + D^{*}_{1}D_{1} = 1 = A_{1} A^{*}_{1} + B_{1} B^{*}_{1} + C_{1}C^{*}_{1} + D_{1}D^{*}_{1} $. Applying the normality of $ A_{1} $ and $ D_{1} $ (obtained from Lemma \ref{Lemma 5a} and the remark following it) to this equation, we have \begin{equation} \label{lem7.2} B^{*}_{1}B_{1} + C^{*}_{1}C_{1} = B_{1} B^{*}_{1} + C_{1}C^{*}_{1}. \end{equation} Multiplying (\ref{lem7.2}) on the left by $ C^{*}_{1} $ and using $ C^{*}_{1} B^{*}_{1} = C^{*}_{1} B_{1} = 0 $ from Lemma \ref{Lemma 6}, we have $ C^{*}_{1} C^{*}_{1} C_{1} = C^{*}_{1} C_{1} C^{*}_{1}. $ But $ C^{*}_{1} C_{1} C^{*}_{1} = C^{*}_{1} $ (from (\ref{lem7.1})), hence $ C^{*}_{1} C^{*}_{1} C_{1} = C^{*}_{1} $. Multiplying by $ C_{1} $ on the left, we have \begin{equation} \label{lem7.3} ( C_{1} C^{*}_{1} ) ( C^{*}_{1} C_{1} ) = C_{1} C^{*}_{1}. \end{equation} Now multiplying (\ref{lem7.2}) on the right by $ C^{*}_{1} $ and using $ B_{1} C^{*}_{1} = B^{*}_{1} C^{*}_{1} = 0 $ from Lemma \ref{Lemma 6}, we have $ C^{*}_{1} C_{1} C^{*}_{1} = C_{1} C^{*}_{1} C^{*}_{1} $, and using (\ref{lem7.1}), we have $ C_{1} C^{*}_{1} C^{*}_{1} = C^{*}_{1} . $ Thus, $ C^{*}_{1} C_{1} =( C_{1} C^{*}_{1}) ( C^{*}_{1} C_{1} ) = C_{1} C^{*}_{1} $ (by (\ref{lem7.3})), hence $ C_{1} $ is a normal partial isometry and so has the same domain and range. \mbox{}\hfill $\sqare$\vspace{1ex} \begin{rmrk} 1.
In the same way, it can be proved that $ B_{1} $ is a normal partial isometry and hence has the same domain and range. 2. In an exactly similar way, it can be proved that $ A_{2} , B_{2} , C_{2} , D_{2} $ are normal partial isometries and hence have the same domain and range. \end{rmrk} \begin{lmma} \label{Lemma 8} We have $ A_{1} C_{2}= B_{1} D_{2} = C_{1} A_{2} = D_{1} B_{2} = 0. $ \end{lmma} {\it Proof :}\\ By Lemma \ref{Lemma 6}, we have $ A_{1} D_{1} = A_{2} D_{2} = C^{*}_{2} B^{*}_{2} = 0 $. Now, using the fact that $ D_{1}, D_{2} $ and $ B_{2} $ are normal partial isometries, we have $ A_{1} D^{*}_{1} = A_{2} D^{*}_{2} = C^{*}_{2} B_{2} = 0 $. Taking adjoints and applying $ \kappa $, we obtain the first, second and fourth equalities. To prove the third one, we take the adjoint of the relation $ C^{*}_{1} B_{1} = 0 $ obtained from Lemma \ref{Lemma 6} and then apply $ \kappa $. \mbox{}\hfill $\sqare$\vspace{1ex} \vspace{4mm} Now we define, for $ i = 1,2 $: $$ P_{i} = A^{*}_{i} A_{i}, \quad Q_{i} = B^{*}_{i} B_{i}, \quad R_{i} = C^{*}_{i} C_{i}, \quad S_{i} = 1 - P_{i} - Q_{i} - R_{i}, $$ $$ P^{{\prime}}_{i} = A_{i} A^{*}_{i}, \quad Q^{{\prime}}_{i} = B_{i} B^{*}_{i}, \quad R^{{\prime}}_{i} = C_{i} C^{*}_{i}, \quad S^{{\prime}}_{i} = 1 - P^{{\prime}}_{i} - Q^{{\prime}}_{i} - R^{{\prime}}_{i}. $$ By Lemmas \ref{Lemma 1} and \ref{Lemma 2}, we have $ D^{*}_{i} D_{i} = 1 - ( P_{i} + Q_{i} + R_{i} ) $ and $ D_{i} D^{*}_{i} = 1 - ( P^{{\prime}}_{i} + Q^{{\prime}}_{i} + R^{{\prime}}_{i} ) $. Also we note that, since $ A_{i}, B_{i}, C_{i}, D_{i} $ are normal, it follows that $ P_{i}= P^{{\prime}}_{i}, Q_{i} = Q^{{\prime}}_{i}, R_{i} = R^{{\prime}}_{i}, S_{i} = S^{{\prime}}_{i} $.
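A quick sketch of why, for instance, $P_1$ and $R_1$ are mutually orthogonal projections (the remaining orthogonality relations follow in the same way from Lemma \ref{Lemma 1} and Lemma \ref{Lemma 6}):

```latex
P_1^2 = A_1^* \,( A_1 A_1^* A_1 ) = A_1^* A_1 = P_1
  \quad (\mbox{$A_1$ is a partial isometry by Lemma \ref{Lemma 5a}}),
\qquad
P_1 R_1 = A_1^* \,( A_1 C_1^* )\, C_1 = 0
  \quad (\mbox{by (\ref{lem1.8})}).
```

Thus $P_i,Q_i,R_i,S_i$ are mutually orthogonal projections adding up to $1$.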
\begin{lmma} \label{Lemma 9} $ P_{1} + R_{1} = 1 - ( P^{{\prime}}_{2} + R^{{\prime}}_{2} ). $ \end{lmma} {\it Proof :}\\ From Lemma \ref{Lemma 3}, $ A_{1} A_{2} = B_{1} B_{2} = C_{1} C_{2} = D_{1} D_{2} = 0 $. From the first relation, we have $ A^{*}_{1} A_{1} A_{2} A^{*}_{2} = 0 $, which gives \begin{equation} \label{lem9.1} P_{1} P^{{\prime}}_{2} = 0. \end{equation} From the second relation, we have $ B^{*}_{1} B_{1} B_{2} B^{*}_{2} = 0 $, hence \begin{equation} \label{lem9.2} Q_{1} Q^{{\prime}}_{2} = 0. \end{equation} Similarly, the third and fourth relations imply \begin{equation} \label{lem9.3} R_{1} R^{{\prime}}_{2} = 0 \end{equation} and \begin{equation} \label{lem9.4} ( 1- ( P_{1} + Q_{1}+ R_{1} ) ) ( 1- ( P^{{\prime}}_{2} + Q^{{\prime}}_{2} + R^{{\prime}}_{2} ) ) = 0 \end{equation} respectively. Now applying the same method to the relations $ A_{1} C_{2} = B_{1} D_{2} = C_{1}A_{2} = D_{1}B_{2} = 0 $ obtained from Lemma \ref{Lemma 8}, we obtain \begin{equation} \label{lem9.5} P_{1} R^{{\prime}}_{2} = 0, \end{equation} \begin{equation} \label{lem9.6} Q_{1} ( 1 - ( P^{{\prime}}_{2} + Q^{{\prime}}_{2} + R^{{\prime}}_{2} ) ) = 0, \end{equation} \begin{equation} \label{lem9.7} R_{1} P^{{\prime}}_{2} = 0, \end{equation} \begin{equation} \label{lem9.8} ( 1 - ( P_{1} + Q_{1}+ R_{1} ) ) Q^{{\prime}}_{2} = 0. \end{equation} Expanding (\ref{lem9.4}), we get $$ 1 - ( P^{{\prime}}_{2} + Q^{{\prime}}_{2} + R^{{\prime}}_{2} ) - P_{1} + P_{1} ( P^{{\prime}}_{2} + Q^{{\prime}}_{2} + R^{{\prime}}_{2} ) - Q_{1} + Q_{1} ( P^{{\prime}}_{2} + Q^{{\prime}}_{2} + R^{{\prime}}_{2} ) - R_{1} + R_{1} ( P^{{\prime}}_{2} + Q^{{\prime}}_{2} + R^{{\prime}}_{2} ) = 0, $$ that is, $$ 1 - ( P^{{\prime}}_{2} + Q^{{\prime}}_{2} + R^{{\prime}}_{2} ) - P_{1} + P_{1} ( P^{{\prime}}_{2} + Q^{{\prime}}_{2} + R^{{\prime}}_{2} ) - Q_{1} ( 1 - ( P^{{\prime}}_{2} + Q^{{\prime}}_{2} + R^{{\prime}}_{2} ) ) - R_{1} + R_{1} ( P^{{\prime}}_{2} + Q^{{\prime}}_{2} + R^{{\prime}}_{2} ) = 0. $$ Applying (\ref{lem9.6}), we
have $$ 1 - ( P^{{\prime}}_{2} + Q^{{\prime}}_{2} + R^{{\prime}}_{2} ) - P_{1} + P_{1} ( P^{{\prime}}_{2} + Q^{{\prime}}_{2} + R^{{\prime}}_{2} ) - R_{1} + R_{1} ( P^{{\prime}}_{2} + Q^{{\prime}}_{2} + R^{{\prime}}_{2} ) = 0. $$ Now, using (\ref{lem9.2}), we write this as $$ - ( 1 - ( P_{1} + Q_{1} + R_{1} ) ) Q^{{\prime}}_{2} + 1 - P^{{\prime}}_{2} - R^{{\prime}}_{2} - P_{1} + P_{1} P^{{\prime}}_{2} + P_{1} R^{{\prime}}_{2} - R_{1} + R_{1} P^{{\prime}}_{2} + R_{1} R^{{\prime}}_{2} = 0. $$ Now using (\ref{lem9.1}), (\ref{lem9.5}), (\ref{lem9.7}), (\ref{lem9.3}) and (\ref{lem9.8}), we obtain $ 1 - P^{{\prime}}_{2} - R^{{\prime}}_{2} - P_{1} - R_{1} = 0 $, that is, $ P_{1} + R_{1} = 1 - ( P^{{\prime}}_{2} + R^{{\prime}}_{2} ). $ \mbox{}\hfill $\sqare$\vspace{1ex} \begin{rmrk} 1. From Lemma \ref{Lemma 9} and the fact that $ P_{i} = P^{{\prime}}_{i}, Q_{i} = Q^{{\prime}}_{i}, R_{i} = R^{{\prime}}_{i}$, $i = 1,2 $, we have \begin{equation} \label{1} P_{1} + R_{1} = 1 - ( P^{{\prime}}_{2} + R^{{\prime}}_{2} ), \end{equation} \begin{equation} \label{2} P_{1} + R_{1} = 1 - ( P_{2} + R_{2} ), \end{equation} \begin{equation} \label{3} P^{{\prime}}_{1} + R^{{\prime}}_{1} = 1 - ( P_{2} + R_{2} ), \end{equation} \begin{equation} \label{4} P^{{\prime}}_{1} + R^{{\prime}}_{1} = 1 - ( P^{{\prime}}_{2} + R^{{\prime}}_{2} ). \end{equation} 2. From the above results, we observe that if $ {\cal Q} $ is embedded in $ B( H ) $ for some Hilbert space $H$, then $H$ decomposes into two mutually orthogonal subspaces, one being the range of $ P_{1} + R_{1} $ and the other the range of $ Q_{1} + S_{1} $. Let $ p = P^{{\prime}}_1 + R^{{\prime}}_{1} $. Then $p$ is also equal to $ P_{1} + R_{1} = Q^{{\prime}}_{2} + S^{{\prime}}_{2} = Q_{2} + S_{2}$, and $ p^{ \bot } = Q^{{\prime}}_{1} + S^{{\prime}}_{1} = P_{2} + R_{2} = P^{{\prime}}_{2} + R^{{\prime}}_{2} = Q_{1} + S_{1} $.
\end{rmrk} \begin{lmma} \label{Lemma 10} We have $$ A_{1} B_{2} - B_{2} A_{1} = 0 = A_{2} B_{1} - { \overline{ \lambda }}^{2} B_{1} A_{2}, $$ $$ A_{1} D_{2} - { \lambda }^2 D_{2} A_{1} = 0 = A_{2} D_{1} - D_{1} A_{2}, $$ $$ C_{1} B_{2} - { \lambda }^{2} B_{2} C_{1} = 0 = B_{1} C_{2} - C_{2} B_{1}, $$ $$ C_{1} D_{2} - D_{2} C_{1} = 0 = D_{1} C_{2} - { \lambda }^{2} C_{2} D_{1}. $$ \end{lmma} {\it Proof :}\\ From Lemma \ref{Lemma 4}, we have $ A_{1} B_{2} + \overline{ \lambda } B_{1} A_{2} = \lambda A_{2} B_{1} + B_{2} A_{1}, $ so $ A_{1} B_{2} - B_{2} A_{1} = \lambda ( A_{2} B_{1} - { \overline{ \lambda }}^{2} B_{1} A_{2} ). $ Now, $ {\rm Ran} ( A_{1} B_{2} - B_{2} A_{1} ) \subseteq {\rm Ran} ( A_{1} ) + {\rm Ran} ( B_{2} ) = {\rm Ran} ( A_{1} A^{*}_{1} ) + {\rm Ran} ( B_{2} B^{*}_{2} ) = {\rm Ran} ( P^{{\prime}}_{1} ) + {\rm Ran} ( Q^{{\prime}}_{2} ) \subseteq {\rm Ran} ( p ). $ On the other hand, $ {\rm Ran} ( A_{2} B_{1} - { \overline{ \lambda }}^{2} B_{1} A_{2} ) \subseteq {\rm Ran} ( A_{2} ) + {\rm Ran} ( B_{1} ) = {\rm Ran} ( P^{{\prime}}_{2} ) + {\rm Ran} ( Q^{{\prime}}_{1} ) \subseteq {\rm Ran} ( p^{ \bot } ). $ So $ A_{1} B_{2} - B_{2} A_{1} = 0 = A_{2} B_{1} - { \overline{ \lambda }}^{2} B_{1} A_{2}. $ Similarly, the other three relations can be proved. \mbox{}\hfill $\sqare$\vspace{1ex} \\ Let us now consider a $C^*$-algebra ${\cal B}$ which has eight direct summands, four of which are isomorphic with the commutative algebra $C(\mathbb{T}^2)$, and the other four with irrational rotation algebras. More precisely, we take $${\cal B}=\oplus_{k=1}^8 C^*(U_{k1},U_{k2}), $$ where for odd $k$, $U_{k1},U_{k2}$ are the two commuting unitary generators of $C(\mathbb{T}^2)$, and for even $k$, $U_{k1}U_{k2}={\rm exp}(4 \pi i \theta)U_{k2}U_{k1}$, i.e.\ they generate ${\cal A}_{2 \theta}$.
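Since ${\cal B}$ is a direct sum, an element of the $k$-th summand is identified with the tuple having that element in the $k$-th slot and $0$ elsewhere; in particular, generators belonging to different summands multiply to zero, a fact used repeatedly below:

```latex
U_{ij}\, U_{kl} = 0 \quad (i \neq k), \qquad
U_{kj}^{*}\, U_{kj} = U_{kj}\, U_{kj}^{*} = 1_k \quad (j=1,2),
```

where $1_k$ denotes the unit of the $k$-th summand, so that $\sum_{k=1}^8 1_k = 1_{\cal B}$.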
We set the following: $$ \tilde{A_1}:=U_{11}+U_{41},~~\tilde{B_1}:=U_{52}+U_{61},~~\tilde{C_1}:=U_{21}+U_{31},~~\tilde{D_1}:=U_{71}+U_{81},$$ $$\tilde{A_2}:=U_{62}+U_{72},~~\tilde{B_2}:=U_{12}+U_{22},~~\tilde{C_2}:=U_{51}+U_{82},~~\tilde{D_2}:=U_{32}+U_{42}.$$ Denote by $\tilde{M}$ the $4 \times 4$ ${\cal B}$-valued matrix given by $$ \tilde{M}=\left( \begin {array} {cccc} \tilde{A_{1}} & \tilde{A_{2}} & {\tilde{C_{1}}}^* & {\tilde{C_{2}}}^* \\ \tilde{B_{1}} & \tilde{B_{2}} & {\tilde{D_{1}}}^* & {\tilde{D_{2}}}^* \\ \tilde{C_{1}} & \tilde{C_{2}} & {\tilde{A_{1}}}^* & {\tilde{A_{2}}}^* \\ \tilde{D_{1}} & \tilde{D_{2}} & {\tilde{B_{1}}}^* & {\tilde{B_{2}}}^* \end {array} \right ).$$ We have the following: \begin{lmma} \label{bhopf} (i) The $\ast$-subalgebra generated by the elements $\tilde{A_i},\tilde{B_i},\tilde{C_i},\tilde{D_i},i=1,2$ is dense in ${\cal B}$;\\ (ii) There is a unique compact (matrix) quantum group structure on ${\cal B}$, for which the corresponding coproduct $\Delta_0$, counit $\epsilon_0$ and antipode $\kappa_0$ (say) are given on the above generating elements by $$ \Delta_0({\tilde{M}}_{ij})=\sum_{k=1}^4 {\tilde{M}}_{ik} \otimes {\tilde{M}}_{kj},$$ $$ \kappa_0({\tilde{M}}_{ij})={\tilde{M}_{ji}}^*,~~~\epsilon_0({\tilde{M}}_{ij})=\delta_{ij}.$$ \end{lmma} The proof is a routine verification and hence is omitted. \vspace{2mm}\\ Moreover, we have an action of ${\cal B}$ on ${\cal A}_\theta$, as given by the following lemma.
\begin{lmma} \label{baction} There is a smooth isometric action of ${\cal B}$ on ${\cal A}_\theta$, given by: $$ \alpha_0(U)=U \otimes (U_{11}+U_{41})+V \otimes (U_{52}+U_{61})+U^{-1} \otimes (U_{21}+U_{31})+V^{-1} \otimes (U_{71}+U_{81}),$$ $$ \alpha_0(V)=U \otimes (U_{62}+U_{72})+V \otimes (U_{12}+U_{22})+U^{-1} \otimes (U_{51}+U_{82})+V^{-1} \otimes (U_{32}+U_{42}).$$ \end{lmma} {\it Proof :}\\ It is straightforward to verify that the above indeed defines a smooth action of the quantum group ${\cal B}$ on ${\cal A}_\theta$. To complete the proof, we need to show that $\alpha_0$ keeps the eigenspaces of ${\cal L}$ invariant. For this, we observe that, since $U_{ij}U_{kl}=0$ if $i \neq k$, we have $$ \alpha_0(U^m)=U^m \otimes (U_{11}+U_{41})^m+V^m \otimes (U_{52}+U_{61})^m+U^{-m} \otimes (U_{21}+U_{31})^m+V^{-m} \otimes (U_{71}+U_{81})^m,$$ $$ \alpha_0(V^n)=U^n \otimes (U_{62}+U_{72})^n+V^n \otimes (U_{12}+U_{22})^n+U^{-n} \otimes (U_{51}+U_{82})^n+V^{-n} \otimes (U_{32}+U_{42})^n.$$ From this, it is clear that in the expression of $\alpha_0(U^m) \alpha_0(V^n)$, only the coefficients of $U^iV^j$ survive, where $(i,j)$ is one of the following:\\ $(m,n), (m,-n), (-m,n), (-m,-n) , (n,m) , (n,-m) ,(-n,m) , (-n,-m).$ Each such pair satisfies $i^2+j^2=m^2+n^2$, so $\alpha_0$ preserves the eigenspaces of ${\cal L}$. This completes the proof that the action is isometric. \mbox{}\hfill $\sqare$\vspace{1ex} \\ Now we are in a position to describe ${\cal Q}=QISO({\cal A}_\theta)$ explicitly. \begin{thm} $ {\cal Q}= QISO ( {\cal A}_{ \theta } )$ is isomorphic (as a quantum group) with ${\cal B} = C ( \mathbb{T}^{2} ) \oplus {\cal A}_{ 2 \theta } \oplus C ( \mathbb{T}^{2} ) \oplus {\cal A}_{ 2 \theta } \oplus C ( \mathbb{T}^{2} ) \oplus {\cal A}_{ 2 \theta } \oplus C ( \mathbb{T}^{2} ) \oplus {\cal A}_{ 2 \theta } $, with the coproduct described before.
\end{thm} {\it Proof :}\\ Define $ \phi : {\cal B} \rightarrow {\cal Q} $ by $$ \phi ( U_{11} ) = A_{1}P^{\prime}_{1}Q^{\prime}_{2} , ~~\phi ( U_{12} ) = B_{2}P^{\prime}_{1}Q^{\prime}_{2} ,~~ \phi ( U_{21} ) = C_{1}{P^{\prime}_{1}}^ \bot Q^{\prime}_{2} ,~~ \phi ( U_{22} ) = B_{2}{P^{\prime}_{1}}^ \bot Q^{\prime}_{2} ,$$ $$ \phi ( U_{31} ) = C_{1}{P^{\prime}_{1}}^ \bot {Q^{\prime}_{2}}^ \bot ,~~ \phi ( U_{32} ) = D_{2}{P^{\prime}_{1}}^ \bot {Q^{\prime}_{2}}^ \bot,~~\phi ( U_{41} ) = A_{1}P^{\prime}_{1}{Q^{\prime}_{2}}^ \bot ,~~ \phi ( U_{42} ) = D_{2}P^{\prime}_{1}{Q^{\prime}_{2}}^ \bot ,$$ $$ \phi ( U_{51} ) = C_{2}{P^{\prime}_{2}}^ \bot Q^{\prime}_{1} ,~~ \phi ( U_{52} ) = B_{1}{P^{\prime}_{2}}^ \bot Q^{\prime}_{1} ,~~ \phi ( U_{61} ) = B_{1}P^{\prime}_{2}Q^{\prime}_{1} , ~~\phi ( U_{62} ) = A_{2}P^{\prime}_{2}Q^{\prime}_{1}, $$ $$ \phi ( U_{71} ) = D_{1}P^{\prime}_{2}{Q^{\prime}_{1}}^ \bot,~~\phi ( U_{72} ) = A_{2}P^{\prime}_{2}{Q^{\prime}_{1}}^ \bot ,~~ \phi ( U_{81} ) = D_{1}{P^{\prime}_{2}}^ \bot {Q^{\prime}_{1}}^ \bot,~~\phi ( U_{82} ) = C_{2}{P^{\prime}_{2}}^ \bot {Q^{\prime}_{1}}^ \bot.$$ We show that $ \phi $ is well-defined and indeed gives a $\ast$-homomorphism.
Using the facts that $A_1,B_2$ are commuting normal partial isometries, we have \begin{eqnarray*} \lefteqn{ A_{1} P^{\prime}_{1} Q^{\prime}_{2} \, B_{2} P^{\prime}_{1} Q^{\prime}_{2}}\\ &=& A_{1} A_{1} A^{*}_{1} B_{2} B^{*}_{2} B_{2} A_{1} A^{*}_{1} B_{2} B^{*}_{2} = A_{1} A^{*}_{1} A_{1} B_{2} A_{1} A^{*}_{1} B_{2} B^{*}_{2}\\ &=& A_{1} B_{2} A_{1} A^{*}_{1} B_{2} B^{*}_{2},\end{eqnarray*} and, on the other hand, \begin{eqnarray*} \lefteqn{ B_{2}{P^{\prime}}_{1} Q^{\prime}_{2} \, A_{1}{P^{\prime}}_{1} Q^{\prime}_{2}}\\ &=& B_{2} A_{1} A^{*}_{1} B_{2} B^{*}_{2} A_{1} A_{1} A^{*}_{1} B_{2} B^{*}_{2} = A_{1} B_{2} A^{*}_{1} B_{2} B^{*}_{2} A_{1} A^{*}_{1} A_{1} B_{2} B^{*}_{2}\\ &=& A_{1} B_{2} A^{*}_{1} B_{2} B^{*}_{2} A_{1} B_{2} B^{*}_{2} = A_{1} B_{2} A^{*}_{1} B_{2} A_{1} B^{*}_{2} B_{2} B^{*}_{2}\\ &=& A_{1} B_{2} A^{*}_{1} A_{1} B_{2} B^{*}_{2} = A_{1} B_{2} A_{1} A^{*}_{1} B_{2} B^{*}_{2}.\end{eqnarray*} So, $ \phi(U_{11})=A_{1} P^{\prime}_{1} Q^{\prime}_{2} $ and $ \phi(U_{12})=B_{2} P^{\prime}_{1} Q^{\prime}_{2} $ commute, and they are clearly unitaries when viewed as operators on the range of $P^\prime_1 Q^\prime_2$, which proves that there exists a unique $C^*$-homomorphism from $C(\mathbb{T}^2)\cong C^*(U_{11},U_{12})$ to ${\cal Q}$ which sends $U_{11}$ and $U_{12}$ to $ A_{1} P^{\prime}_{1} Q^{\prime}_{2}$ and $B_{2} P^{\prime}_{1} Q^{\prime}_{2} $ respectively. Again, using the facts that $ C_{1} $ and $ B_{2} $ are normal partial isometries satisfying the relation $ B_{2} C_{1} = \frac{1}{{ \lambda }^2} C_{1} B_{2} $, we have $ \phi ( U_{22} ) \phi ( U_{21} ) = B_{2} { P^{\prime}_{1} }^{\bot} Q^{\prime}_{2} \, C_{1} {P^{\prime}_{1}}^{\bot} Q^{\prime}_{2} = \frac{1}{\lambda^2} C_{1} {P^{\prime}_{1}}^{\bot} Q^{\prime}_{2} \, B_{2} {P^{\prime}_{1}}^{\bot} Q^{\prime}_{2} = \frac{1}{{\lambda}^2} \phi ( U_{21} ) \phi ( U_{22} ).
$ That is, $ \phi( U_{21} ) \phi( U_{22} ) = {\lambda}^2 \phi( U_{22} ) \phi( U_{21} ) $, and they are clearly unitaries on the range of $ {P^{\prime}_{1}}^{\bot} Q^{\prime}_{2} $, which proves that there exists a unique $C^*$-homomorphism from $ {\cal A}_{2 \theta} \cong C^*(U_{21},U_{22})$ to ${\cal Q}$ which sends $U_{21}$ and $U_{22}$ to $ C_{1} {{P^{\prime}}_{1}}^{\bot} Q^{\prime}_{2}$ and $B_{2} {{P^{\prime}}_{1}}^{\bot} Q^{\prime}_{2} $ respectively. The other cases can be worked out similarly, and thus it is shown that $ \phi $ defines a $C^{\ast}$-homomorphism from $ {\cal B} $ to $ {\cal Q} $. Moreover, it is easy to see that $\phi(\tilde{M}_{ij})=M_{ij}$, so that $\phi$ is a morphism of quantum groups, and it clearly satisfies $({\rm id} \otimes \phi) \circ \alpha_0=\alpha$. By the universality of the quantum isometry group ${\cal Q}$, this completes the proof that ${\cal Q} \cong {\cal B}$ as compact quantum groups. \mbox{}\hfill $\sqare$\vspace{1ex} \begin{rmrk} In particular, we note that if $ \theta $ is taken to be $ 1/2 $, then $ {\cal A}_{2 \theta} = {\cal A}_{1} $ is commutative, so we obtain a commutative compact quantum group as the quantum isometry group of a noncommutative $ C^{\ast} $-algebra. \end{rmrk} We conclude this section with an identification of the `quantum double torus' discovered and studied by Hajac and Masuda (\cite{hajac}) with an interesting quantum subgroup of $QISO({\cal A}_\theta)$. Consider the $C^*$-ideal ${\cal I}$ of ${\cal Q}$ generated by $\tilde{C_i}, \tilde{D_i},$ $i=1,2$. It is easy to verify that ${\cal I}=C^*(U_{ik}, i=2,3,4,5,7,8; k=1,2)$, hence ${\cal Q}/{\cal I} \cong C^*(U_{1k},U_{6k},k=1,2)$. Moreover, ${\cal I}$ is in fact a Hopf ideal, i.e. ${\cal Q}/{\cal I}$ is a quantum subgroup of ${\cal Q}$.
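To indicate the (routine) verification that ${\cal I}$ is a Hopf ideal, we sketch the computation for one generator, which is not needed in the sequel but may be helpful. Identifying ${\cal Q}$ with ${\cal B}$ as above and recalling that $\tilde{C_1}=\tilde{M}_{31}$, the formula for $\Delta_0$ in Lemma \ref{bhopf} gives

```latex
\Delta_0(\tilde{C_1}) \;=\; \tilde{C_1} \otimes \tilde{A_1} + \tilde{C_2} \otimes \tilde{B_1}
 + {\tilde{A_1}}^* \otimes \tilde{C_1} + {\tilde{A_2}}^* \otimes \tilde{D_1}
 \;\in\; {\cal I} \otimes {\cal Q} + {\cal Q} \otimes {\cal I},
```

while $\epsilon_0(\tilde{C_1})=\delta_{31}=0$ and $\kappa_0(\tilde{C_1})={\tilde{M}_{13}}^*=\tilde{C_1} \in {\cal I}$. The other generators $\tilde{C_2},\tilde{D_1},\tilde{D_2}$ are handled in exactly the same way, so that the coproduct, counit and antipode all descend to ${\cal Q}/{\cal I}$.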
Denoting by $A_0,B_0,C_0, D_0$ the elements $U_{11}, U_{61}, U_{62},U_{12}$ respectively, we can describe the structure of ${\cal Q}/{\cal I}$ as follows: \begin{thm} Consider the $C^*$-algebra ${\cal Q}^{\rm hol}=C(\mathbb{T}^2) \oplus {\cal A}_{2\theta}$, with generators $A_0,B_0,C_0,D_0$ (where $A_0,D_0$ correspond to $C(\mathbb{T}^2)$ and $B_0,C_0$ correspond to ${\cal A}_{2 \theta}$), and with the following coproduct: $$ \Delta_h(A_0)=A_0 \otimes A_0+C_0 \otimes B_0, ~~\Delta_h(B_0)=B_0 \otimes A_0+D_0 \otimes B_0,$$ $$ \Delta_h(C_0)=A_0 \otimes C_0+C_0 \otimes D_0,~~\Delta_h(D_0)=B_0 \otimes C_0+D_0 \otimes D_0.$$ Then $({\cal Q}^{\rm hol},\Delta_h)$ is a compact quantum group isomorphic with ${\cal Q}/{\cal I}$. It has an action $\beta_0$ on ${\cal A}_\theta$ given by $$ \beta_0(U)=U \otimes A_0+V \otimes B_0,~~\beta_0(V)=U \otimes C_0 + V \otimes D_0.$$ Moreover, ${\cal Q}^{\rm hol}$ is universal among the compact quantum groups acting `holomorphically' on ${\cal A}_\theta$ in the following sense: whenever a compact quantum group $({\cal S}, \Delta)$ has a smooth isometric action $\gamma $ on ${\cal A}_\theta$ satisfying the additional condition that $\gamma$ leaves the subalgebra generated by $\{ U^m V^n,~m,n \geq 0 \}$ invariant, then there is a unique morphism from ${\cal Q}^{\rm hol}$ to ${\cal S}$ which intertwines the respective actions. \end{thm} {\it Proof:}\\ We need to prove only the universality of ${\cal Q}^{\rm hol}$. Indeed, it follows from the universality of ${\cal Q}$ in the category of smooth isometric actions that there is a unique morphism $\phi$ (say) from ${\cal Q}$ to ${\cal S}$ such that $\gamma=({\rm id} \otimes \phi) \circ \alpha_0$. Evaluating this relation on $U$ and $V$, and noting that by the assumption on $\gamma$ the coefficients of $U^{-1},V^{-1}$ in the expressions of $\gamma(U), \gamma(V)$ are $0$, it is immediate that $\phi(\tilde{C_i})=\phi(\tilde{D_i})=0$ for $i=1,2$, i.e. $\phi({\cal I})=\{0 \}$.
Thus, $\phi$ induces a morphism $\tilde{\phi}$ (say) from ${\cal Q}/{\cal I}$ to ${\cal S}$ satisfying $\gamma=({\rm id} \otimes \tilde{\phi}) \circ \beta_0$. \mbox{}\hfill $\sqare$\vspace{1ex}\\ \section{Quantum isometry group of deformed spectral triples} In this section, we give a general scheme for computing quantum isometry groups by proving that the quantum isometry group of a deformed noncommutative manifold coincides (under reasonable assumptions) with a similar deformation of the quantum isometry group of the original manifold. To make this precise, we introduce some notation and terminology. We begin with some generalities on compact quantum groups. Given a compact quantum group $({\cal G},\Delta)$, recall that the dense unital $\ast$-subalgebra ${\cal G}_0$ of ${\cal G}$ generated by the matrix coefficients of the irreducible unitary representations has a canonical Hopf $\ast$-algebra structure. Moreover, given an action $\gamma : {\cal B} \rightarrow {\cal B} \otimes {\cal G}$ of the compact quantum group $({\cal G}, \Delta)$ on a unital $C^*$-algebra ${\cal B}$, it is known that one can find a dense, unital $\ast$-subalgebra ${\cal B}_0$ of ${\cal B}$ on which the action becomes an action of the Hopf $\ast$-algebra ${\cal G}_0$ (see, for example, \cite{wang}, \cite{podles}). We shall use the Sweedler convention of abbreviating $\gamma(b) \in {\cal B}_0 \otimes_{\rm alg} {\cal G}_0$ by $b_{(1)} \otimes b_{(2)}$, for $b \in {\cal B}_0$. This applies in particular to the canonical action of the quantum group ${\cal G}$ on itself, obtained by taking $\gamma=\Delta$. Moreover, for a linear functional $f$ on ${\cal G}_0$ and an element $c \in {\cal G}_0$, we shall define the `convolution' maps $f \diamond c :=(f \otimes {\rm id} ) \Delta ( c )$ and $c \diamond f := ({\rm id} \otimes f) \Delta ( c)$. We also define the convolution of two functionals $f$ and $g$ by $(f \diamond g)(c)=(f \otimes g)(\Delta(c))$.
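In Sweedler notation these definitions read $f \diamond c = f(c_{(1)})\, c_{(2)}$ and $c \diamond f = f(c_{(2)})\, c_{(1)}$. As an elementary illustration of this convolution calculus (a routine check, recorded here only for the reader's convenience), coassociativity of $\Delta$ yields

```latex
f \diamond (g \diamond c) \;=\; g(c_{(1)})\, f(c_{(2)})\, c_{(3)} \;=\; (g \diamond f) \diamond c,
\qquad
f \diamond (c \diamond g) \;=\; f(c_{(1)})\, g(c_{(3)})\, c_{(2)} \;=\; (f \diamond c) \diamond g,
```

so that left and right convolutions commute with each other and expressions of the form $f \diamond c \diamond g$ are unambiguous; identities of this kind are used tacitly in the computations below.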
Let us now consider a $C^*$-dynamical system $({\cal A}, \mathbb{T}^n, \beta)$, where $\beta$ is an action of $\mathbb{T}^n$, and assume that there exists a spectral triple $({\cal A}^\infty, {\cal H}, D)$ on the smooth subalgebra ${\cal A}^\infty$ w.r.t. the action of $\mathbb{T}^n$, such that the spectral triple satisfies all the assumptions of \cite{goswami} ensuring the existence of the quantum isometry group. Let ${\cal Q} \equiv QISO({\cal A})$ denote the quantum isometry group of the spectral triple $({\cal A}^\infty, {\cal H},D)$, with ${\cal L}$ denoting the corresponding Laplacian as in \cite{goswami}. By the assumptions of \cite{goswami}, ${\cal L}$ has a countable discrete set of eigenvalues, each with finite multiplicity; let ${\cal A}^\infty_0$ denote the complex linear (algebraic, not closed) span of its eigenvectors, and let ${\cal A}_0$ be the $\ast$-algebra generated by ${\cal A}^\infty_0$. As in \cite{goswami}, it is assumed that ${\cal A}^\infty_0$ is a subset of ${\cal A}^\infty$ and is norm-dense in ${\cal A}$. Moreover, we make the following assumptions:\\ (i) ${\cal A}_0$ is dense in ${\cal A}^\infty$ w.r.t. the Frechet topology coming from the action of $\mathbb{T}^n$.\\ (ii) $\bigcap_{n \geq 1 } {\rm Dom} ({{\cal L}}^n)={\cal A}^\infty.$\\ (iii) ${\cal L}$ commutes with the $\mathbb{T}^n$-action $\beta$, so that $C(\mathbb{T}^n)$ can be identified with a quantum subgroup of ${\cal Q}$.\\ Let $\pi$ denote the surjective map from ${\cal Q}$ to its quantum subgroup $C(\mathbb{T}^n)$, which is a morphism of compact quantum groups. We denote by $\alpha : {\cal A} \rightarrow {\cal A} \otimes {\cal Q}$ the action of ${\cal Q}=QISO({\cal A})$ on ${\cal A}$, and note that on ${\cal A}_0$ this action is algebraic, i.e. it is an action of the Hopf $\ast$-algebra ${\cal Q}_0$ consisting of matrix elements of finite dimensional unitary representations of ${\cal Q}$. We have $ ({\rm id} \otimes \pi ) \circ \alpha=\beta$.
We shall abbreviate $ e^{ 2 \pi iu}$ by $e(u)$ ($u \in \mathbb{R}$), and shall denote by $ \eta $ the canonical homomorphism from $\mathbb{R}^n$ to $\mathbb{T}^n$ given by $ \eta (x_1,x_2,\ldots,x_n ) =(e (x_1),e (x_2),\ldots, e (x_n) )$. For $u \in \mathbb{R}^n$, $\alpha_u$ will denote the $\mathbb{R}^n$-action on ${\cal A}$ given by $\alpha_u(a):=({\rm id} \otimes \Omega(u))(\alpha(a))$, where $ \Omega ( u ) := {\rm ev}_{\eta (u)} \circ \pi$ for $u \in \mathbb{R}^n$ (${\rm ev}_x$ being the state on $C(\mathbb{T}^n)$ obtained by evaluating a function at the point $x \in \mathbb{T}^n$). Let us now briefly recall Rieffel's formulation of deformation quantization (see, e.g. \cite{rieffel}). Let $J$ denote a skew symmetric $n \times n$ matrix with real entries. We define a `deformed' or `twisted' multiplication $\times_J : {\cal A}^\infty \times {\cal A}^\infty \rightarrow {\cal A}^\infty$ by $$ a \times_J b:=\int \int \alpha_{Ju}(a) \alpha_v(b) e(u.v) du dv,$$ where $u.v$ denotes the standard (Euclidean) inner product on $\mathbb{R}^n$ and the integral makes sense as an oscillatory integral, described in detail in \cite{rieffel} and the references therein. This defines an associative algebra structure on ${\cal A}^\infty$, the $\ast$ of ${\cal A}$ being an involution w.r.t. the new product $\times_J$ as well, and one can also get a $C^*$-algebra, denoted by ${\cal A}_J$, by completing ${\cal A}^\infty$ in a suitable norm, denoted by ${ \left\| \cdot \right\|}_{J} $ (see \cite{rieffel}), which is a $C^*$-norm w.r.t. the product $\times_J$. We shall denote by ${\cal A}^\infty_J$ the vector space ${\cal A}^\infty$ equipped with the $\ast$-algebra structure coming from $\times_J$.
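To get a feeling for $\times_J$, consider the following heuristic aside (treating the oscillatory integral as a distributional Fourier transform; the precise sign depends on the conventions of \cite{rieffel}). Suppose $a,b \in {\cal A}^\infty$ lie in single spectral subspaces of the $\mathbb{R}^n$-action, i.e. $\alpha_u(a)=e(p \cdot u)\,a$ and $\alpha_v(b)=e(q \cdot v)\,b$ for some $p,q \in \mathbb{Z}^n$. Then

```latex
a \times_J b \;=\; ab \int\!\!\int e(p \cdot Ju)\, e(q \cdot v)\, e(u \cdot v)\, du\, dv
 \;=\; ab \int e(p \cdot Ju)\, \delta(u+q)\, du \;=\; e(-p \cdot Jq)\, ab,
```

so that on spectral subspaces the deformed product is the original one twisted by a bicharacter of $\mathbb{Z}^n$; in particular, for ${\cal A}_\theta$ the products of the generators $U,V$ only pick up scalar phases under $\times_J$.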
One has a natural Frechet topology on ${\cal A}^\infty_J$, given by the family of seminorms $ \{ {\left\| \cdot \right\|}_{n,J} \} $, where $ {\left\| a \right\|}_{n,J} = \sum_{\left| \mu \right| \leq n} ( \left| \mu \right|!)^{-1} {\left\| \alpha_{X^{\mu}}( a ) \right\|}_{J} $ ($ \alpha_{X^{\mu}} $ as in \cite{rieffel}), in which ${\cal A}^\infty_J$ is complete. Moreover, it follows from the estimates in \cite{rieffel} (Proposition 4.10, page 35) that ${\cal A}^\infty={\cal A}^\infty_J$ as topological spaces, i.e. they coincide as sets and the corresponding Frechet topologies are equivalent. In view of this, we shall denote this space simply by ${\cal A}^\infty$, unless one needs to consider it as a Frechet algebra, in which case the suffix $J$ will be used. Assume furthermore that for each skew-symmetric matrix $J$, there exists a spectral triple on ${\cal A}^\infty_J$ satisfying the assumptions in \cite{goswami} for defining the quantum isometry group $QISO({\cal A}_J)$, and assume also that the corresponding Laplacian, say ${\cal L}_J$, coincides with ${\cal L}$ on ${\cal A}^\infty \subset {\cal A}_J$, so that the quantum isometry group $QISO ( {\cal A}_J )$ is the universal compact quantum group acting on ${\cal A}_J$ such that the action keeps each of the eigenspaces of ${\cal L}$ invariant. Note that the algebraic span of eigenvectors of ${\cal L}_J$ coincides with that of ${\cal L}$, i.e. with ${\cal A}^\infty_0$, which is already assumed to be Frechet-dense in ${\cal A}^\infty={\cal A}^\infty_J$, hence in particular norm-dense in ${\cal A}_J$. We now state and prove a criterion, to be used later, for extending positive maps defined on ${\cal A}_0$. \begin{lmma} \label{positive} Let ${\cal B}$ be another unital $C^*$-algebra equipped with a $\mathbb{T}^n$-action, so that we can consider the $C^*$-algebras ${\cal B}_J$ for any skew symmetric $n \times n$ matrix $J$.
Let $\phi : {\cal A}^\infty \rightarrow {\cal B}^\infty$ be a linear map satisfying the following:\\ (a) $\phi$ is positive w.r.t. the deformed products $\times_J$ on ${\cal A}_0$ and ${\cal B}^\infty$, i.e. $\phi(a^* \times_J a) \geq 0 $ (in ${\cal B}^\infty_J \subset {\cal B}_J$) for all $a \in {\cal A}_0$, and \\ (b) $\phi$ extends to a norm-bounded map (say $\phi_0$) from ${\cal A}$ to ${\cal B}$.\\ Then $\phi$ also has an extension $\phi_J$ as a $\|~\|_J$-bounded positive map from ${\cal A}_J$ to ${\cal B}_J$ satisfying $\| \phi_J \| =\| \phi(1) \|_J$. \end{lmma} {\it Proof :}\\ We can view $\phi$ as a map between the Frechet spaces ${\cal A}^\infty$ and ${\cal B}^\infty$, which is clearly closable, since it is continuous w.r.t. the norm-topologies of ${\cal A}$ and ${\cal B}$, which are weaker than the corresponding Frechet topologies. By the Closed Graph Theorem, we conclude that $\phi$ is continuous in the Frechet topology. Since ${\cal A}^\infty={\cal A}^\infty_J$ and ${\cal B}^\infty={\cal B}^\infty_J$ as Frechet spaces, we may consider $\phi$ as a continuous map from ${\cal A}^\infty_J$ to ${\cal B}^\infty_J$, and it follows from the Frechet-continuity of $\times_J$ and $\ast$ and the Frechet-density of ${\cal A}_0$ in ${\cal A}^\infty_J$ that the positivity (w.r.t. $\times_J$) of the restriction of $\phi$ to ${\cal A}_0 \subset {\cal A}^\infty_J$ is inherited by the extension on ${\cal A}^\infty={\cal A}^\infty_J$. Indeed, given $a \in {\cal A}^\infty_J={\cal A}^\infty$, choose a sequence $a_n \in {\cal A}_0$ such that $a_n \rightarrow a$ in the Frechet topology. We have $\phi(a^* \times_J a)=\lim_n \phi(a_n^* \times_J a_n)$ in the Frechet topology, so in particular $\phi(a_n^* \times_J a_n) \rightarrow \phi(a^* \times_J a)$ in the norm of ${\cal B}_J$, which implies that $\phi(a^* \times_J a)$ is a positive element of ${\cal B}_J$, since $\phi(a_n^* \times_J a_n)$ is so for each $n$.
Next, we note that ${\cal A}^\infty$ is closed under holomorphic functional calculus as a unital $\ast$-subalgebra of ${\cal A}_J$ (the identity of ${\cal A}^\infty_J$ is the same as that of ${\cal A}$), so any positive map defined on ${\cal A}^\infty$ admits a bounded extension (say $\phi_J$) to ${\cal A}_J$, which is still a positive map; in particular, the norm of $\phi_J$ equals $\| \phi_J(1)\|$. \mbox{}\hfill $\sqare$\vspace{1ex} \\ We shall also need the Rieffel-type deformation of compact quantum groups (due to Rieffel and Wang, see \cite{wang2}, \cite{toral} and references therein) w.r.t. the action by a quantum subgroup isomorphic to $C(\mathbb{T}^n)$ for some $n$. Indeed, for each skew symmetric $n \times n$ real matrix $J$, we can consider a $2n$-parameter action on the compact quantum group, and equip the corresponding Rieffel deformation with the structure of a compact quantum group. We will discuss this in more detail later on. For a fixed $J$, we shall work with several multiplications on the vector space ${{\cal A}_0} \otimes_{\rm alg} {\cal Q}_0$, where ${\cal Q}_0$ is the dense Hopf $\ast$-algebra generated by the matrix coefficients of irreducible unitary representations of the quantum isometry group ${\cal Q}$. We shall denote the counit and antipode of ${\cal Q}_0$ by $\epsilon$ and $\kappa$ respectively. Let us define the following: $$ x \odot y = \int_{\mathbb{R}^{4n}} e( -u.v )e( w.s )\,(\Omega ( -Ju )\diamond x \diamond \Omega ( Jw ) )\, (\Omega ( -v ) \diamond y \diamond \Omega( s ))\, du dv dw ds ,$$ where $x,y \in {\cal Q}_0$. This is clearly a bilinear map, and will be seen to be an associative multiplication later on. Moreover, we define two bilinear maps $\bullet$ and $\bullet_J$ by setting $(a \otimes x) \bullet (b \otimes y):=ab \otimes x \odot y$ and $(a \otimes x) \bullet_J (b \otimes y):=(a \times_J b) \otimes (x \odot y)$, for $a,b \in {{\cal A}_0}$, $x,y \in {\cal Q}_0$.
We have $ \Omega(u) \diamond ( \Omega(v) \diamond c ) = ( \Omega(u) \diamond \Omega(v) ) \diamond c $. \begin{lmma} \label{Lemma1} The map $\odot$ satisfies $$ \int_{\mathbb{R}^{2n}} ( \Omega( J u ) \diamond x ) \odot ( \Omega( v ) \diamond y )e(u.v ) du dv = \int_{\mathbb{R}^{2n}} ( x \diamond \Omega( J u ) )( y \diamond \Omega( v ) )e( u.v ) du dv ,$$ for $x,y \in {\cal Q}_0$. \end{lmma} {\it Proof :}\\ We have \begin{eqnarray*} \lefteqn{{\rm LHS}}\\ &=& \int_{\mathbb{R}^{2n}} ( \Omega(Ju^{\prime}) \diamond x ) \odot (\Omega( v^{\prime} ) \diamond y )\,e( u^{\prime}.v^{\prime} )\, du^{\prime}dv^{\prime}\\ & =& \int_{\mathbb{R}^{2n}}\{ \int_{\mathbb{R}^{4n}} e(-u.v )e(w.s )\,( \Omega(-Ju ) \diamond \Omega( Ju^{\prime}) \diamond x \diamond \Omega( Jw ) )\,( \Omega( -v ) \diamond \Omega( v^{\prime} ) \diamond y \diamond \Omega( s ) )\, du dv dw ds \}\,e(u^{\prime}.v^{\prime})\,du^{\prime}dv^{\prime} \\ & =& \int_{\mathbb{R}^{6n}} ( \Omega( J( u^{\prime}-u )) \diamond x \diamond \Omega( Jw ) )\,( \Omega( v^{\prime}- v ) \diamond y \diamond \Omega( s ) )\, e( u^{\prime}.v^{\prime}) e(-u.v )e ( w.s )\, du dv dw ds du^{\prime} dv^{\prime}\\ & = & \int_{\mathbb{R}^{2n}} e( w.s )dw ds \{ \int_{\mathbb{R}^{4n}} e ( u^{\prime}.v^{\prime} )e ( -u.v )\, ( \Omega( J ( u^{\prime} -u )) \diamond x_w )( \Omega( v^{\prime} - v ) \diamond y_s )\, du dv du^{\prime}dv^{\prime} \},\end{eqnarray*} where $ x_w = x \diamond \Omega( Jw ) ,~ y_s =y \diamond \Omega( s ) $.
The proof of the lemma will be complete if we show that $$ \int_{\mathbb{R}^{4n}} e ( u^{\prime}.v^{\prime} )e ( -u.v ) (\Omega( J( u^{\prime} -u )) \diamond x_w ) ( \Omega( v^{\prime} -v ) \diamond y_s ) du dv du^{\prime} dv^{\prime} = x_w y_s. $$ By changing variables in the above integral, with $z = u^{\prime} - u,~ t = v^{\prime} -v$, it becomes $ \int_{\mathbb{R}^{4n}} e( -u.v ) e ( ( u + z ).( v + t ) ) \phi ( z,t ) du dv dz dt $ $ = \int_{\mathbb{R}^{4n}} \phi ( z,t ) e ( u.t + z.v ) e ( z.t ) du dv dz dt, $ where $$ \phi ( z,t ) = (\Omega( Jz ) \diamond x_w )( \Omega( t ) \diamond y_s ) .$$ By taking $ ( z,t ) = X,~ ( v,u ) = Y,$ and $ F ( X ) = \phi ( z,t ) e( z.t )$, the integral can be written as \begin{eqnarray*} \lefteqn{ \int\int F ( X ) e ( X.Y )dX dY }\\ & = & F ( 0 ) ~({\rm ~by~ Corollary ~1.12 ~of~ \cite{rieffel},~ page ~9})\\ & =& ( \Omega( J 0 ) \diamond x_w ) ( \Omega( 0 ) \diamond y_s ) \\ & =& x_w y_s, \end{eqnarray*} since \begin{eqnarray*} \Omega( 0 )\diamond x_{w} = ( {\rm ev}_{\eta(0)} \circ \pi \otimes id ) \Delta ( x_{w} ) = ( \epsilon_{\mathbb{T}^n} \circ \pi \otimes id ) \Delta( x_{w} ) = ( \epsilon \otimes id ) \Delta( x_{w} ) = x_{w} \end{eqnarray*} and similarly $ \Omega(0) \diamond y_{s} =y_s$ (here $ \epsilon_{\mathbb{T}^{n}} $ denotes the counit of the quantum group $ C( \mathbb{T}^{n} ) $). This proves the claim and hence the lemma.
\mbox{}\hfill $\sqare$\vspace{1ex} \begin{lmma} \label{ Lemma2} We have, for $ a \in {{\cal A}_0}$, $$ \alpha ( \alpha _u ( a ) ) = a_{(1)} \otimes ( id \otimes \Omega( u ) ) ( \Delta ( a_{( 2 )} )).$$ \end{lmma} {\it Proof :}\\ We have \begin{eqnarray*} \lefteqn{ \alpha _u ( a )}\\ & =& ( id \otimes \Omega ( u ) ) \alpha ( a )\\ & =& ( id \otimes \Omega ( u ) ) ( a_{( 1 )} \otimes a_{( 2 )} )\\ &= & a_{( 1 )}\, \Omega ( u ) ( a_{( 2 )} ).\end{eqnarray*} This gives \begin{eqnarray*} \lefteqn{ \alpha ( \alpha _{ u } ( a ) )}\\ &=& \alpha ( a_{ ( 1 ) } )\, \Omega ( u ) ( a_{ ( 2 ) } )\\ & = & ( id \otimes id \otimes \Omega ( u ) ) ( \alpha ( a_{(1)} ) \otimes a_{ (2) } )\\ &= & ( id \otimes id \otimes \Omega ( u ) ) ( ( \alpha \otimes id ) \alpha ( a ) )\\ &= & ( id \otimes id \otimes \Omega ( u ) ) ( ( id \otimes \Delta ) \alpha ( a ) )\\ &= & a_{(1)} \otimes ( id \otimes \Omega ( u ) ) \Delta ( a_{(2)}).\end{eqnarray*} \mbox{}\hfill $\sqare$\vspace{1ex} \begin{lmma} \label{Lemma3} For $a,b \in {{\cal A}_0}$, we have $$ \alpha ( a \times _{J} b ) = a_{(1)}b_{(1)} \otimes \left( \int\int ( a_{(2)} \diamond \Omega( Ju ) ) ( b_{(2)} \diamond \Omega( v ) ) e ( u.v ) du dv \right).$$ \end{lmma} {\it Proof :}\\ Using the notation and definitions on pages 4--5 of \cite{rieffel}, we note that for any $ f: \mathbb{R}^{2} \rightarrow \mathbb{C} $ belonging to $ {I\! \! B}( \mathbb{R}^{2} ) $ and fixed $ x \in E $ (where $ E $ is a Banach algebra), the function $ F( u, v ) = xf(u,v )$ belongs to $ {I\! \!
B}^{E} ( \mathbb{R}^{2} ) $ and we have \begin{eqnarray*} \lefteqn{ x ~( \int \int f( u,v ) e( u.v ) du dv )}\\ & =& x ~( \lim_L \sum_{p \in L} \int \int ( f \phi_{p})( u,v ) e( u.v ) du dv )\\ &=& \lim_L \sum_{p \in L} \int \int x~( f \phi_{p})( u,v ) e( u.v ) du dv \\ &= & \int\int x~f(u,v ) e(u.v) du dv.\end{eqnarray*} Then, \begin{eqnarray*} \lefteqn{\alpha ( a \times _{J} b )}\\ &=& \alpha ( \int\int \alpha _{Ju} ( a ) \alpha _{v} ( b ) e( u.v ) du dv )\\ &=& \alpha ( \int\int a_{(1)} ( \Omega(Ju) )( a_{(2)} ) b_{(1)} ( \Omega(v) )(b_{(2)}) e(u.v) du dv ) \\ &=& \alpha (a_{(1)} b_{(1)} \int\int ( \Omega(Ju) )( a_{(2)} ) ( \Omega(v) )(b_{(2)}) e(u.v) du dv ) \\ &=& \alpha( a_{(1)} ) \alpha( b_{(1)} ) \int\int ( \Omega(Ju) )( a_{(2)} ) ( \Omega(v) )(b_{(2)}) e(u.v) du dv \\ &=& \int \int \alpha( a_{(1)} ) \alpha( b_{(1)} ) ( \Omega(Ju) )( a_{(2)} ) ( \Omega(v) )(b_{(2)}) e(u.v) du dv \\ &=& \int\int \alpha ( \alpha _{Ju}(a) )\alpha ( \alpha _{v} ( b ) ) e( u.v ) du dv \\ &= & \int\int ( a_{(1)} \otimes ( id \otimes \Omega(Ju) ) ( \Delta ( a_{( 2 )} )) ) ( b_{(1)} \otimes ( id \otimes \Omega ( v ) ) ( \Delta ( b_{( 2 )}) )) e ( u.v ) du dv\\ & & ({\rm ~using ~Lemma ~\ref{ Lemma2}})\\ &=& a_{(1)}b_{(1)} \otimes \int\int ( a _{(2)} \diamond \Omega( Ju ) ) ( b_{(2)} \diamond \Omega( v ) ) e ( u.v ) du dv .\end{eqnarray*} \mbox{}\hfill $\sqare$\vspace{1ex} \begin{lmma} \label{Lemma4} For $a,b \in {{\cal A}_0}$, $$ \alpha ( a ) \bullet_J \alpha ( b ) = a_{(1)}b_{(1)} \otimes \{ \int\int ( \Omega( Ju ) \diamond a_{(2)} ) \odot ( \Omega( v ) \diamond b_{(2)}) e( u.v ) du dv \}.$$ \end{lmma} {\it Proof :}\\ We have \begin{eqnarray*} \lefteqn{\alpha ( a )\bullet_J \alpha ( b )}\\ & = & ( a_{(1)} \otimes a_{(2)} ) \bullet_J ( b_{(1)} \otimes b_{(2)} )\\ &=& ( a_{(1)} \times _{J} b_{(1)} ) \otimes ( a_{(2)} \odot b_{(2)} )\\ &=& \int\int \alpha_{Ju} ( a_{(1)} ) \alpha _{v} ( b_{(1)} ) e( u.v ) du dv \otimes ( a_{(2)} \odot b_{(2)} ).
\end{eqnarray*} Recall that $\epsilon : {\cal Q}_0 \rightarrow \mathbb{C}$ denotes the counit of the compact quantum group ${\cal Q}$, so that $ ( id \otimes \epsilon ) \alpha = id .$ This gives \begin{eqnarray*} \lefteqn{ \alpha ( a ) \bullet_J \alpha ( b )}\\ & =& \int\int ( id \otimes \epsilon ) ( \alpha ( \alpha_{Ju} ( a_{(1)} ) ) )\, ( id \otimes \epsilon ) ( \alpha ( \alpha _{v} ( b_{(1)} ) ) )\, e( u.v ) du dv \otimes ( a_{(2)} \odot b_{(2)} ). \end{eqnarray*} Note that, by Lemma \ref{ Lemma2}, $ \int\int ( id \otimes \epsilon ) ( \alpha ( \alpha_{Ju} ( a_{(1)} ) ) )\, ( id \otimes \epsilon ) ( \alpha ( \alpha _{v} ( b_{(1)} ) ) )\, e(u.v ) du dv $ $ = \int\int ( id \otimes \epsilon ) ( a_{(1)(1)} \otimes ( id \otimes \Omega ( Ju) ) ( \Delta (a_{(1)(2)}) ) )\, (id \otimes \epsilon ) ( b_{(1)(1)} \otimes ( id \otimes \Omega ( v ) ) ( \Delta ( b_{(1)(2)} ) ) )\, e( u.v ) du dv $ $ = \int\int ( id \otimes \epsilon ) ( a_{(1)(1)} \otimes ( a_{(1)(2)} \diamond \Omega( Ju ) ) )\, ( id \otimes \epsilon ) ( b_{(1)(1)} \otimes ( b_{(1)(2)} \diamond \Omega( v ) ) )\, e( u.v ) du dv $ $ = \int\int a_{(1)(1)} b_{(1)(1)}\, \epsilon ( a_{(1)(2)} \diamond \Omega( Ju ) )\, \epsilon ( b_{(1)(2)} \diamond \Omega( v ) )\, e( u.v ) du dv.
$ Using the fact that $f \diamond \epsilon=\epsilon \diamond f=f$ for any functional $f$ on ${\cal Q}_0$, one has $ \epsilon ( a_{(1)(2)} \diamond \Omega( Ju ) ) = \Omega( Ju ) ( a_{(1)(2)} )$ and $ \epsilon ( b_{(1)(2)} \diamond \Omega( v ) ) = \Omega( v ) ( b_{(1)(2)} ) $, from which it follows that \begin{eqnarray*} \lefteqn{ \alpha ( a )\bullet_J \alpha ( b )}\\ &=& a_{(1)(1)}b_{(1)(1)} \int\int \Omega ( Ju ) ( a_{(1)(2)} ) \Omega ( v ) ( b_{(1)(2)} ) e(u.v ) du dv \otimes ( a_{(2)} \odot b_{(2)} )\\ &=& \int\int ( id \otimes \Omega ( Ju ) \otimes id ) ( a_{(1)(1)} \otimes a_{(1)(2)} \otimes a_{(2)} ) \bullet ( id \otimes \Omega ( v ) \otimes id ) ( b_{(1)(1)} \otimes b_{(1)(2)} \otimes b_{(2)} )\\ & & e( u.v ) du dv\\ &=& \int\int ( id \otimes \Omega ( Ju ) \otimes id ) ( a_{(1)} \otimes \Delta ( a_{(2)} ) ) \bullet ( id \otimes \Omega ( v ) \otimes id ) ( b_{(1)} \otimes \Delta ( b_{(2)} ) ) e( u.v ) du dv \\ & =& \int\int \{ a_{(1)} \otimes ( \Omega ( Ju ) \otimes id ) \Delta ( a_{(2)} ) \} \bullet \{ b_{(1)} \otimes ( \Omega ( v ) \otimes id ) \Delta ( b_{(2)} ) \} e( u.v ) du dv\\ &=& a_{(1)}b_{(1)} \otimes \int\int ( ( \Omega ( Ju ) \otimes id ) \Delta ( a_{(2)} ) ) \odot ( ( \Omega ( v ) \otimes id ) \Delta ( b_{(2)} ) ) e( u.v ) du dv\\ &=& a_{(1)}b_{(1)} \otimes \int\int ( \Omega( Ju ) \diamond a_{(2)} ) \odot ( \Omega( v ) \diamond b_{(2)} ) e( u.v ) du dv,\end{eqnarray*} where we have used the relation $(\alpha \otimes id) \alpha=(id \otimes \Delta) \alpha$ to get $a_{(1)(1)} \otimes a_{(1)(2)} \otimes a_{(2)} =a_{(1)} \otimes \Delta ( a_{(2)} )$ and similarly $ b_{(1)(1)} \otimes b_{(1)(2)} \otimes b_{(2)} = b_{(1)} \otimes \Delta ( b_{(2)} ) .$ \mbox{}\hfill $\sqare$\vspace{1ex} Combining Lemma \ref{Lemma1}, Lemma \ref{Lemma3} and Lemma \ref{Lemma4}, we conclude the following.
\begin{lmma} \label{Lemma5} For $a,b \in {{\cal A}_0},$ we have $\alpha(a) \bullet_J \alpha(b)=\alpha(a \times_J b).$ \end{lmma} We shall now identify $\odot$ with the multiplication of a Rieffel-type deformation of ${\cal Q}$. Since ${\cal Q}$ has a quantum subgroup isomorphic with $C(\mathbb{T}^n)$, we can consider the following canonical action $\lambda$ of $\mathbb{R}^{2n}$ on ${\cal Q}$, given by $$ \lambda_{( s,u )} = ( \Omega( -s ) \otimes id ) \Delta ( id \otimes \Omega ( u ) ) \Delta.$$ Now, let $ \widetilde J := -J \oplus J $, which is a skew-symmetric $2n \times 2n$ real matrix, so one can deform ${\cal Q}$ by defining the product of $x$ and $y$ ($x,y \in {\cal Q}_0$, say) to be the following: $$ \int\int \lambda_{\widetilde J ( u,w )}( x ) \lambda _{( v,s )} ( y ) e ( ( u,w ).( v,s ) ) d ( u,w ) d ( v,s ).$$ We claim that this is nothing but the product $\odot$ introduced before. \begin{lmma} $ x \odot y = x \times_{\widetilde{J}} y ~~ \forall x,y \in {\cal Q}_{0} .$ \end{lmma} {\it Proof :}\\ Let us first observe that \begin{eqnarray*} \lefteqn{ \lambda _{ \widetilde J ( u,w ) } ( x )}\\ &=& ( \Omega ( Ju ) \otimes id ) \Delta ( id \otimes \Omega ( Jw ) ) \Delta ( x )\\ &=& \Omega( Ju ) \diamond x \diamond \Omega( Jw ), \end{eqnarray*} and similarly $ \lambda_{(v,s)}(y)= \Omega( -v ) \diamond y \diamond \Omega( s ).$ Thus, substituting $u^{\prime}=-u$, we have \begin{eqnarray*} \lefteqn{ x \odot y}\\ & =& \int_{\mathbb{R}^{4n}} ( \Omega( -Ju ) \diamond x \diamond \Omega( Jw ) ) ( \Omega( -v ) \diamond y \diamond \Omega( s ) ) e ( -u.v ) e ( w.s ) du dv dw ds\\ &=& \int_{\mathbb{R}^{4n}} ( \Omega( J u^{\prime} ) \diamond x \diamond \Omega( J w ) ) ( \Omega( -v ) \diamond y \diamond \Omega( s ) ) e ( u^{\prime}.v ) e ( w.s ) du^{\prime} dv dw ds\\ &=& \int_{\mathbb{R}^{2n} } \int_{\mathbb{R}^{2n}} \lambda _{\widetilde J ( u,w )} ( x )\lambda _{( v,s )} ( y ) e ( ( u,w ).( v,s ) ) d( u,w ) d ( v,s ),\end{eqnarray*} which proves the claim.
\mbox{}\hfill $\sqare$\vspace{1ex} \\ Let us denote by ${\cal Q}_{\widetilde{J}}$ the $C^*$-algebra obtained from ${\cal Q}$ by the Rieffel deformation w.r.t. the matrix $\widetilde{J}$ described above. It has been shown in \cite{wang2} that the coproduct $\Delta$ on ${\cal Q}_0$ extends to a coproduct for the deformed algebra as well and that $({\cal Q}_{\widetilde{J}}, \Delta)$ is a compact quantum group. \begin{lmma} \label{Lemma5a} The Haar state (say $h$) of ${\cal Q}$ coincides with the Haar state of ${\cal Q}_{\widetilde{J}}$ (say $ h_{J} $) on the common subspace ${\cal Q}^{\infty}$, and moreover, $h(a \times_{\widetilde{J}} b)=h(ab)$ for $a,b \in {\cal Q}^{\infty}$. \end{lmma} {\it Proof :}\\ From \cite{wang2} (Remark 3.10(2)), we have that $h = h_{J}$ on ${\cal Q}_{0}$. By using $ h ( \Omega ( - s ) \otimes id ) = \Omega ( - s )( id \otimes h ) $ and $ h( id \otimes \Omega( u ) ) = \Omega ( u ) ( h \otimes id ) $, we have, for $a \in {\cal Q}_0$, \begin{eqnarray*} \lefteqn { h ( \lambda_{(s,u)} ( a ) ) }\\ & = & \Omega ( - s ) ( id \otimes h ) \Delta ( id \otimes \Omega ( u ) ) \Delta ( a ) \\ & = & \Omega ( -s ) ( h ( ( id \otimes \Omega ( u ) ) \Delta ( a ) )1 ) \\ & = & h ( ( id \otimes \Omega ( u ) ) \Delta ( a ) ) \\ & = & \Omega ( u ) ( h( a ).1 ) \\ & = & h( a ) .\end{eqnarray*} Therefore, $ h ( \lambda_{(s,u)} ( b ) ) = h( b ) ~ \forall b \in {\cal Q}_0 $. Now, \begin{eqnarray*} \lefteqn{ h( a {\times}_{\widetilde{J}} b )}\\ & =& \int\int h( \lambda_{\widetilde{J}u} ( a ) \lambda_{v} ( b ) ) e( u . v ) du dv \\ & =& \int\int h( \lambda_{v} ( \lambda_{\widetilde{J}u - v} ( a ) b ) ) e( u . v ) du dv \\ &=& \int\int h( \lambda_{t} ( a ) b ) e( s.t ) ds dt ,\end{eqnarray*} where $ s = - u,~ t = \widetilde{J}u - v ,$ which, by Corollary 1.12 of \cite{rieffel}, equals $ h( \lambda_{0} ( a ) b ) = h( a b ).
$ That is, we have proved \begin{equation} \label{etaseta} \langle a, b \rangle_J=\langle a, b \rangle ~~\forall a,b \in {\cal Q}_0, \end{equation} where $\langle \cdot, \cdot \rangle_J$ and $\langle \cdot, \cdot \rangle$ respectively denote the inner products of $L^2(h_J)$ and $L^2(h)$. We now complete the proof of the lemma by extending (\ref{etaseta}) from ${\cal Q}_0$ to ${\cal Q}^\infty$, using the facts that ${\cal Q}^\infty$ is a common subspace of the Hilbert spaces $L^2(h)$ and $L^2(h_J)$ and that ${\cal Q}_0$ is dense in both these Hilbert spaces. In particular, taking $a=1 \in {\cal Q}_0$, we have $h=h_J$ on ${\cal Q}^\infty$. \mbox{}\hfill $\sqare$\vspace{1ex} \begin{rmrk} \label{haarrem} Lemma \ref{Lemma5a} implies in particular that for every fixed $a_1,a_2 \in {\cal Q}_0$, the functional ${\cal Q}_0 \ni b \mapsto h(a_{1} \times_{\widetilde{J}} b \times_{\widetilde{J}} a_2)=h((\kappa^2(a_2)\times_{\widetilde{J}}a_1)b)$ extends to a bounded linear functional on ${\cal Q}$. \end{rmrk} \begin{lmma} \label{Lemma5b} If $h$ is faithful on $ {\cal Q} $, then $ h_{J} $ is faithful on ${\cal Q}_{\widetilde{J}}$. \end{lmma} {\it Proof :}\\ Let $ a \in {\cal Q}_{\widetilde{J}} $, $ a \geq 0 $, be such that $ h_{J}( a ) = 0$. Let $e$ be the identity of $ \mathbb{T}^{2n} $, let $U_{n}$ be a sequence of neighbourhoods of $e$ shrinking to $e$, and let $f_{n}$ be smooth, positive functions with support contained inside $U_{n}$ such that $ \int f_{n}(z) dz = 1 ~ \forall n$. Define $ \lambda_{f_{n}} ( a ) = \int_{\mathbb{T}^{2n}} \lambda_{z} ( a ) f_{n}( z ) dz $. It is clear that $ \lambda_{f_{n}} ( a ) \in {\cal Q}^{\infty}$ and is positive in $ {\cal Q}_{\widetilde{J}}$. Moreover, using the fact that the map $ z \mapsto \lambda_{z} ( a ) $ is continuous for every $ a $, we have $ \lambda_{f_{n}} ( a ) \rightarrow a $ as $ n \rightarrow \infty $.
Now \begin{eqnarray*} \lefteqn {h_{J}( \lambda_{f_{n}} ( a ) )} \\ & = & \int_{\mathbb{T}^{2n}} h_{J} ( \lambda_{z} ( a ) ) f_{n} ( z ) dz \\ & = & \int_{\mathbb{T}^{2n}} h_{J} ( a ) f_{n} ( z ) dz \\ & = & 0, \end{eqnarray*} so we have $ h( \lambda_{f_{n}} ( a ) ) = 0 $, since $h$ and $h_J$ coincide on ${\cal Q}^\infty$ by Lemma \ref{Lemma5a}. Now we fix some notation which we are going to use in the rest of the proof. Let $ L^{2}( h ) $ and $ L^{2}( h_{J} ) $ denote the GNS spaces of $ {\cal Q} $ and $ {\cal Q}_{\widetilde{J}} $ respectively with respect to the Haar states. Let $ i $ and $ i_{J} $ be the canonical maps from $ {\cal Q} $ and $ {\cal Q}_{\widetilde{J}} $ to $ L^{2}( h ) $ and $ L^{2}( h_{J} ) $ respectively. Also, let $ \Pi_{J} $ denote the GNS representation of $ {\cal Q}_{\widetilde{J}} $. Using the facts $ h ( b^{\ast} \times_{\widetilde{J}} b ) = h ( b^{\ast} b ) ~ \forall b \in {\cal Q}^{\infty} $ and $ h = h_{J} $ on $ {\cal Q}^{\infty} $, we get $ \left\|i_{J}( b ) \right\|^{2}_{L^{2}( h_{J} )} = \left\|i( b ) \right\|^{2}_{L^{2}( h ) } ~ \forall b \in {\cal Q}^{\infty} $. So the map sending $ i ( b ) $ to $ i_{J} ( b ) $ is an isometry from a dense subspace of $ L^{2}( h )$ onto a dense subspace of $ L^{2}( h_{J} ) $, hence it extends to a unitary, say $ \Gamma : L^{2} ( h ) \rightarrow L^{2} ( h_{J} ) $. We also note that the maps $ i $ and $ i_{J} $ agree on $ {\cal Q}^{\infty} $. Now, $ \lambda_{f_{n}} ( a ) = b^{\ast} \times_{\widetilde{J}} b $ for some $ b \in {\cal Q}_{\widetilde{J}} $. So $ h( \lambda_{f_{n}} ( a ) ) = 0 $ implies $ \left\|i_{J} ( b ) \right\|^{2}_{L^{2} ( h_{J} ) } = 0 $. Therefore, one has $ \Pi_{J}( b^{\ast} ) i_{J} ( b ) = 0$, and hence $ i_{J}( b^{\ast} b ) = i_{J} ( \lambda_{f_{n}} ( a ) ) = 0$. It thus follows that $ \Gamma ( i ( \lambda_{f_{n}}( a ) ) ) = 0$, which implies $ i ( \lambda_{f_{n}}( a ) ) = 0$. But the faithfulness of $h$ means that $ i $ is one-to-one, hence $ \lambda_{f_{n}}( a ) = 0$ for all $n$. 
Thus, $a=\lim_{n \rightarrow \infty} \lambda_{f_{n}} ( a ) =0 $, which proves the faithfulness of $h_J$. \mbox{}\hfill $\square$\vspace{1ex} \begin{thm} \label{premain} If the Haar state is faithful on ${\cal Q}$, then $\alpha : {{\cal A}_0} \rightarrow {{\cal A}_0} \otimes {\cal Q}_0$ extends to an action of the compact quantum group ${\cal Q}_{\widetilde{J}}$ on ${\cal A}_J$, which is isometric, smooth and faithful. \end{thm} {\it Proof :}\\ We have already seen in Lemma \ref{Lemma5} that $\alpha$ is an algebra homomorphism from ${\cal A}_0$ to ${\cal A}_0 \otimes_{\rm alg} {\cal Q}_0$ (w.r.t. the deformed products), and it is also a $\ast$-homomorphism since it is so for the undeformed case and the involution $\ast$ is the same for the deformed and undeformed algebras. It now suffices to show that $\alpha$ extends to ${\cal A}_J$ as a $C^*$-homomorphism. Let us fix any faithful imbedding ${\cal A}_J \subseteq {\cal B}({\cal H}_0)$ (where ${\cal H}_0$ is a Hilbert space) and consider the imbedding ${\cal Q}_{\widetilde{J}} \subseteq {\cal B}(L^2(h_J))$. By definition, the norm on ${\cal A}_J \otimes {\cal Q}_{\widetilde{J}}$ is the minimal (injective) $C^*$-norm, so it is equal to the norm inherited from the imbedding ${\cal A}_J \otimes_{\rm alg} {\cal Q}_{\widetilde{J}} \subseteq {\cal B}({\cal H}_0 \otimes L^2(h_J))$. Let us consider the dense subspace ${\cal D} \subset {\cal H}_0 \otimes L^2(h_J)$ consisting of vectors which are finite linear combinations of the form $\sum_i u_i \otimes x_i,$ with $u_i \in {\cal H}_0$, $x_i \in {\cal Q}_0 \subset L^2(h_J)$. Fix such a vector $\xi=\sum_{i=1}^k u_i \otimes x_i$ and consider ${\cal B}:={\cal A} \otimes M_k(\mathbb{C})$, with the $\mathbb{T}^n$-action $\beta \otimes {\rm id}$ on ${\cal B}$. 
Let $\phi: {\cal A}^\infty \rightarrow {\cal B}^\infty$ be the map given by $$ \phi(a):= \left( \left( ({\rm id} \otimes \phi_{(x_i,x_j)})(\alpha(a)) \right) \right)_{ij=1}^k,$$ where $\phi_{(x,y)}(z):=h(x^* \times_{\widetilde{J}} z \times_{\widetilde{J}} y)$ for $x,y,z \in {\cal Q}_0$. Note that the range of $\phi$ is in ${\cal B}^\infty={\cal A}^\infty \otimes M_k(\mathbb{C})$ since we have $\phi_{(x,y)}({\cal A}^\infty) \subseteq {\cal A}^\infty$ by Remark 2.16 of \cite{goswami}, using our assumption (ii) that $\bigcap_{n \geq 1}{\rm Dom}({\cal L}^n)={\cal A}^\infty$. Since $\alpha$ maps ${\cal A}_0$ into ${\cal A}_0 \otimes_{\rm alg} {\cal Q}_0$ and $h=h_J$ on ${\cal Q}_0$, it is easy to see that for $a \in {\cal A}_0$, $\phi(a^* \times_J a)$ is positive in ${\cal B}_J$. Moreover, by Remark \ref{haarrem}, $\phi_{(x_i,x_j)}$ extends to ${\cal Q}$ as a bounded linear functional, hence $\phi$ extends to a bounded linear (but not necessarily positive) map from ${\cal A}$ to ${\cal B}$. Thus, the hypotheses of Lemma \ref{positive} are satisfied and we conclude that $\phi$ admits a positive extension, say $\phi_J$, from ${\cal A}_J$ to ${\cal B}_J={\cal A}_J \otimes M_k(\mathbb{C})$. Thus, we have for $a \in {\cal A}_0$, \begin{eqnarray*} \lefteqn{\sum_{i,j=1}^k \langle u_i, \phi(a^* \times_J a)u_j \rangle}\\ & \leq & \| a \|_J^2 \sum_{ij} \langle u_i, \phi(1)u_j \rangle = \| a\|_J^2 \sum_{ij} \langle u_i,u_j \rangle h(x_i^* \times_{\widetilde{J}} x_j)\\ &=& \| a \|_J^2 \sum_{ij} \langle u_i \otimes x_i, u_j \otimes x_j \rangle =\|a\|_J^2 \| \sum_{i=1}^k u_i \otimes x_i \|^2.\end{eqnarray*} This implies $$ \| \alpha(a) \xi \|^2=\langle \xi, \alpha(a^* \times_J a) \xi \rangle \leq \| a \|^2_J \| \xi \|^2$$ for all $\xi \in {\cal D}$ and $a \in {\cal A}_0$, hence $\alpha$ admits a bounded extension which is clearly a $C^*$-homomorphism. 
\mbox{}\hfill $\square$\vspace{1ex} \\ Let $ {\cal C}_{J} $ be the category of compact quantum groups acting isometrically on $ {\cal A}_{J} $, with objects being pairs $ ( {\cal S},\alpha_{{\cal S}} ) $, where the compact quantum group $ {\cal S} $ acts isometrically on $ {\cal A}_{J} $ by the action $ \alpha_{{\cal S}} $. If the action is understood, we may simply write $ ( {\cal S},\alpha_{{\cal S}} ) $ as $ {\cal S} $. For any two compact quantum groups $ {\cal S}_1 $ and $ {\cal S}_2 $ in ${\cal C}_J$, we write $ {\cal S}_1 < {\cal S}_2 $ if there is a surjective $C^*$-homomorphism $\pi$ from $ {\cal S}_2 $ to $ {\cal S}_1 $ preserving the respective coproducts (i.e. ${\cal S}_1$ is a quantum subgroup of ${\cal S}_2$) and $\pi$ also satisfies $\alpha_{{\cal S}_1}=({\rm id} \otimes \pi) \circ \alpha_{{\cal S}_2}.$ \begin{rmrk} \label{pqrs} It can easily be seen that $ {\cal S}_1 < {\cal S}_2 $ implies $ ( {\cal S}_1 )_{\widetilde{J}} < ( {\cal S}_2 )_{\widetilde{J}} $. \end{rmrk} \begin{thm} \label{abcd} If the Haar state on $QISO( {\cal A} )$ is faithful, we have the isomorphism of compact quantum groups: $$( QISO ( {\cal A} ) )_{ \widetilde J } \cong QISO ( {\cal A}_{J} ). $$ \end{thm} {\it Proof :}\\ Let $ {\cal Q} ( {\cal A}_{J} ) $ be the universal object in $ {\cal C}_{J} $. 
By Theorem \ref{premain}, we have seen that $ ( {\cal Q} ( {\cal A} ) )_{\widetilde J} $ also acts faithfully, smoothly and isometrically on $ {\cal A}_{J} $, which implies $$ ( {\cal Q} ( {\cal A} ))_{\widetilde J} < {\cal Q} ( {\cal A}_{J} ) ~~{\rm in}~ {\cal C}_J .$$ So, by Remark \ref{pqrs}, $ ( ( {\cal Q} ( {\cal A} ) )_{\widetilde J})_{- \widetilde J} < ( {\cal Q} ( {\cal A}_{J} ) )_{- \widetilde J} $ in $ {\cal C}_{0} $, hence $ {\cal Q} ( {\cal A} ) < ( {\cal Q} ( {\cal A}_{J} ) )_{- \widetilde J}.$ Replacing $ {\cal A} $ by $ {\cal A}_{-J} $, we have \begin{eqnarray*} \lefteqn{ {\cal Q} ( {\cal A}_{-J} )}\\ & <& ( {\cal Q} ( ( {\cal A}_{- J} )_{J} ) )_{- \widetilde J} ~~({\rm in}~~ {\cal C}_{-J})\\ & \cong & ( {\cal Q} ( {\cal A} ) )_{- \widetilde J} ~~({\rm in}~~ {\cal C}_{-J}). \end{eqnarray*} Thus, $ {\cal Q} ( {\cal A}_{J} ) < ( {\cal Q} ( {\cal A} ) )_{\widetilde J} $ in $ {\cal C}_{J}, $ which implies $ {\cal Q} ( {\cal A}_{J} ) \cong ( {\cal Q} ( {\cal A} ) )_{\widetilde J} $ in $ {\cal C}_{J}. $ \mbox{}\hfill $\square$\vspace{1ex} \\ \begin{xmpl} We recall that ${\cal A}_{\theta}$ is a Rieffel-type deformation of $ C( \mathbb{T}^2 ) $ (see \cite{rieffel}, Example 10.2, page 69), and it can be easily verified that in this case the hypotheses of this section are satisfied. So Theorem \ref{abcd} can be applied to compute $ QISO ( {\cal A}_{\theta} ) .$ This gives an alternative way to prove the results obtained in subsection 2.3. \end{xmpl} \begin{xmpl} We can apply our result to the isospectral deformations of compact oriented Riemannian manifolds considered in \cite{CD}, in particular to the deformations $S^n_\theta$ of the classical $n$-sphere, with the spectral triple defined in \cite{CD}. Since we have proved that $QISO(S^n) \cong C(O(n))$, it will follow that $QISO(S^n_\theta) \cong O_\theta(n)$, where $O_\theta(n)$ is the compact quantum group obtained in \cite{CD} as the $\theta$-deformation of $C(O(n))$. 
\end{xmpl} \begin{rmrk} We would like to conclude this article with the following important and interesting open question: Does there exist a connected, compact manifold whose quantum isometry group is noncommutative as a $ C^{*} $-algebra? We have already observed that for $ S^{n}, \mathbb{T}^{1}, \mathbb{T}^{2} $, the answer is negative. \end{rmrk} {\noindent} {\bf Acknowledgement:}\\ The authors thank P. Hajac and S.L. Woronowicz for some stimulating discussions.
\section{Introduction} Since 2005, 14 satellite companions to the Milky Way have been discovered (see \citet{willman09} and references therein). Despite the fact that many of these objects are less luminous than a typical globular cluster ($-1.5 < M_V < -8.6$), these 14 objects have a range of properties that encompass the most extreme of any galaxies, including: the highest inferred dark matter content \citep[e.g.][]{simongeha,geha09}, the lowest [Fe/H] content \citep{Kirby08}, unusually elliptical morphologies \citep[e.g. Hercules; ][]{Coleman07,sandherc}, and in some cases evidence for severe tidal disturbance \citep[e.g. Ursa Major II; ][]{Zucker06,munoz09}. The varied properties of these lowest luminosity galaxies are valuable probes for understanding the physics of dark matter and galaxy formation on the smallest scales. Of the newly discovered Milky Way (MW) satellites, Leo~IV ($M_V = -5.5$, $r_h \sim 130$ pc) is among the least studied, despite several signs that it is an intriguing object. Leo~IV appears to be dominated by an old and metal-poor stellar population \citep{sdsssfh}. However, it also has an apparently complex color-magnitude diagram (CMD), with a 'thick' red giant branch, possibly caused by either multiple stellar populations and/or depth along the line of sight \citep{Belokurov07}. Leo~IV may also have a very extended stellar distribution, despite its apparently round ($\epsilon=0.22^{+0.18}_{-0.22}$) and compact \citep[$r_{h}=2.5^{+0.5}_{-0.7}$ arcmin;][]{sdssstruct} morphology. A search for variable stars was recently performed by \citet{Moretti09}, who used the average magnitude of three RR Lyrae stars to find a distance modulus of $(m-M)=20.94\pm0.07$ mag, corresponding to $154\pm5$ kpc. Interestingly, one of the three RR~Lyrae variables lies at a projected radius of $\sim$10 arcmin, roughly three times the half-light radius, leading to the suggestion that Leo~IV may actually possess a 'deformed morphology'. 
Based on Keck/DEIMOS spectroscopy of 18 member stars, Leo~IV has one of the smallest velocity dispersions of any of the new MW satellites, with $\sigma=3.3\pm1.7$ km/s \citep{simongeha}. A metallicity study of 12 of these spectra \citep{Kirby08} showed Leo~IV to be extremely metal poor, with $\langle [Fe/H] \rangle=-2.58$ and an intrinsic scatter of $\sigma_{[Fe/H]}=0.75$ -- the highest dispersion among the new dwarfs. Recently, the MW satellite Leo~V ($M_V \sim -4.3 \pm 0.3$) has been discovered, separated by only $\sim$2.8 degrees on the sky and $\sim$40 km/s from Leo~IV \citep{leov}. With a Leo~V distance of $\sim$180 kpc, this close separation in phase space led \citet{leov} to suggest that the Leo~IV/Leo~V system may be physically associated. This argument was bolstered by \citet{leovspec}, who spectroscopically identified two possible Leo~V members 13 arcminutes from the satellite's center (Leo~V's $r_{h}$ is $\sim$0.8 arcminutes) along the line connecting Leo~IV and Leo~V, suggesting that Leo~V is losing mass. A recent analysis by \citet{leoivleov} of two 1 square degree fields situated between Leo~IV and Leo~V shows tentative evidence for a stellar 'bridge' between the two systems with a surface brightness of $\sim$32 mag arcsec$^{-2}$. Motivated by all the above, we obtained deep photometry of Leo~IV with Megacam on the MMT. In this paper, we use these data to present a detailed analysis of both the structure and star formation history (SFH) of Leo~IV. We also search for any signs of disturbance in Leo~IV which may hint at a past interaction with the recently discovered, nearby Leo~V. In \S~\ref{sec:observations} we describe the observations, data reduction and photometry. We also present our final catalog of Leo~IV stars. In \S~\ref{sec:struct} we derive the basic structural properties of Leo~IV, and search for signs of extended structure. 
We quantitatively assess the stellar population of Leo~IV in \S~\ref{sec:starform} using both CMD-fitting software and an analysis of its blue plume population. We discuss and conclude in \S~\ref{sec:discuss}. \section{Observations and Data Reduction}\label{sec:observations} We observed Leo IV on April 21, 2009 (UT) with Megacam \citep{Megacam} on the MMT. MMT/Megacam has 36 CCDs, each with $2048\times4608$ pixels at 0\farcs08/pixel (which were binned $2\times2$), for a total field-of-view (FOV) of $\sim$24'$\times$24'. We obtained five 250~s dithered exposures in $g$ and in $r$ in clear conditions with 0\farcs9 seeing. We reduced the data using the Megacam pipeline developed at the Harvard-Smithsonian Center for Astrophysics by M. Conroy, J. Roll and B. McLeod, which is based in part on M. Ashby's Megacam Reduction Guide\footnote{http://www.cfa.harvard.edu/$\sim$mashby/megacam/megacam$\_$frames.html}. The pipeline includes standard image reduction tasks such as bias subtraction, flatfielding and cosmic ray removal. Precise astrometric solutions for each science exposure were derived using the Sloan Digital Sky Survey Data Release 6 \citep[SDSS-DR6;][]{sdsscite} and the constituent images were then resampled onto a common grid (using the lanczos3 interpolation function) and combined with SWarp\footnote{http://astromatic.iap.fr/software/swarp/} using the weighted average. Stellar photometry was determined on the final image stack using a nearly identical methodology as \citet{sandherc} with the command line version of the {\sc DAOPHOTII/Allstar} package \citep{Stetson94}. We allowed for a quadratically varying PSF across the field when determining our model PSF and ran {\sc Allstar} in two passes -- once on the final stacked image and then again on the image with the first round's stars subtracted, in order to recover fainter sources. 
We culled our {\sc Allstar} catalogs of outliers in $\chi^{2}$ versus magnitude, magnitude error versus magnitude and sharpness versus magnitude space to remove objects that were not point sources. In general, these cuts varied as a function of magnitude. We positionally matched our $g$ and $r$ band source catalogs with a maximum match radius of 0\farcs5, only keeping those point sources detected in both bands in our final catalog. Instrumental magnitudes were put onto a standard photometric system using stars in common with SDSS-DR6. We used all SDSS stars within the field of view with $17.5 < r < 21.5$ and $ -0.25 < g - r < 1.5 $ to perform the photometric calibration and simultaneously fit for a zeropoint and a linear color term in $g-r$. The linear color term slope was 0.11 in ($r - r_{MMT}$) versus $(g-r)$ and 0.086 in ($g-g_{MMT}$) versus $(g-r)$, consistent with that found in the MMT study of Bootes II by \citet{Walsh08}. Slight residual zeropoint gradients across the field of view were fit to a quadratic function and corrected for (see also Saha et al. 2010), resulting in a final overall scatter about the best-fit zeropoint of $\delta g \sim$0.05 and $\delta r \sim$0.05 mag. To calculate our photometric errors and completeness as a function of magnitude and color, we performed a series of artificial star tests using a technique nearly identical to that of \citet{sandherc}. Briefly, artificial stars were placed into our Leo IV images on a regular grid with spacing between ten and twenty times the image full width at half maximum with the {\sc DAOPHOT} routine {\sc ADDSTAR}. In all, ten iterations were performed on our Leo IV field for a total of $\sim$300,000 implanted artificial stars. The $r$ magnitude for a given artificial star was drawn randomly from 18 to 29 mag, with an exponentially increasing probability toward fainter magnitudes. The $g-r$ color was then randomly assigned over the range $-0.5$ to 1.5 with equal probability. 
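The zeropoint plus linear color-term calibration described above amounts to a two-parameter least-squares fit. The following sketch is purely illustrative (the function name and synthetic inputs are our own, not part of the actual pipeline):

```python
import numpy as np

def fit_zeropoint_and_color_term(m_inst, m_std, color):
    """Least-squares fit of m_std - m_inst = zp + slope * (g - r).

    m_inst : instrumental magnitudes of the calibration stars
    m_std  : standard (e.g. SDSS) magnitudes of the same stars
    color  : standard g - r colors
    Returns (zeropoint, color-term slope, rms scatter about the fit).
    """
    dm = np.asarray(m_std) - np.asarray(m_inst)
    color = np.asarray(color)
    # design matrix: constant term plus linear color term
    A = np.column_stack([np.ones_like(color), color])
    (zp, slope), *_ = np.linalg.lstsq(A, dm, rcond=None)
    resid = dm - (zp + slope * color)
    return zp, slope, np.sqrt(np.mean(resid ** 2))
```

In practice one would iterate this fit with outlier rejection before adopting the final zeropoint.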
These artificial star frames were then run through the same photometry pipeline as the unaltered science frames, applying the same $\chi^{2}$, sharpness and magnitude-error cuts. For reference, we are 50\% (90\%) complete in $g$ at $\sim$25.3 (23.6) mag and in $r$ at 24.8 (23.3) mag. When necessary, such as for calculating the SFH of Leo IV in \S~\ref{sec:starform}, the completeness as a function of both magnitude and color is taken into account. \subsection{The Color Magnitude Diagram and Final Leo IV Catalog}\label{sec:fincat} We present the CMD of Leo~IV in Figure~\ref{fig:CMD}. Plotted in the left panel are all stars within the half-light radius (as determined in \S~\ref{sec:structparams}), while in the right panel we present a Hess diagram of the same field with a scaled background subtracted using stars located outside a radius of 12 arcminutes. In both panels we highlight the possible stellar populations of Leo~IV. In the right panel we plot three theoretical isochrones from \citet{Girardi04}. The solid and dashed lines are old 14 Gyr isochrones with [Fe/H] of $-2.3$ and $-1.7$, respectively. We adjust these isochrones to have $(m-M)=20.94$, as found in the RR~Lyrae study of Leo~IV by \citet{Moretti09}, and they fit the ridgeline well. With the dotted line, we also plot a 1.6 Gyr isochrone with [Fe/H]$=-1.3$. This isochrone agrees relatively well with the 'blue plume' stars which are evident in Leo~IV's CMD. We mark several regions in the left panel of Figure~\ref{fig:CMD} for quantitative study in later sections. The solid box denotes the blue horizontal branch (BHB) stars, which are clearly defined in our CMD. The dashed and dotted regions are two different selection regions for blue plume stars. The dashed is the total blue plume population, although much of this region may be plagued by foreground stars or unresolved galaxies. 
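The completeness quoted above is simply the recovered fraction of artificial stars per input-magnitude bin. A minimal sketch (the function and binning are illustrative assumptions, not the actual pipeline code):

```python
import numpy as np

def completeness(mag_in, recovered, edges):
    """Fraction of artificial stars recovered per input-magnitude bin.

    mag_in    : input magnitudes of the artificial stars
    recovered : boolean array, True if the star was re-detected
    edges     : magnitude bin edges
    Returns an array of recovered fractions (NaN for empty bins).
    """
    mag_in = np.asarray(mag_in)
    recovered = np.asarray(recovered)
    idx = np.digitize(mag_in, edges) - 1
    frac = np.empty(len(edges) - 1)
    for i in range(len(edges) - 1):
        in_bin = idx == i
        frac[i] = recovered[in_bin].mean() if in_bin.any() else np.nan
    return frac
```

The same bookkeeping, done in two dimensions over magnitude and color, gives the completeness map used when modeling the SFH.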
We will thus also utilize the stars within the dotted box as a relatively contamination-free tracer of the blue plume star population in later sections. An open question is whether the blue plume stars are genuinely young stars, as is plausible based on the CMD, or are instead blue stragglers. We quantitatively assess the blue plume population in \S~\ref{sec:BP}. Given that the CMD of Leo~IV is visually in excellent agreement with the distance measurement of \citet{Moretti09}, and given the possible presence of multiple stellar populations, we do not attempt a CMD-fitting method for measuring the distance to Leo~IV \citep[e.g.,][]{sandherc}, and will adopt $(m-M)=20.94$ throughout this work. We will vary this quantity when necessary to determine how sensitive our results are to this assumption. We present our full Leo~IV catalog in Table~\ref{table:phot}, which includes our $g$ and $r$ band magnitudes (uncorrected for extinction) with their uncertainties, along with the Galactic extinction values derived for each star \citep{Schlegel98}. We also note whether the star was taken from the SDSS catalog rather than our MMT data, as was done for objects near the saturation limit of our Megacam data. Unless stated otherwise, all magnitudes reported in the remainder of this paper will be extinction corrected. \section{Leo IV Structural Properties}\label{sec:struct} We split our analysis of the structural properties of Leo~IV into two components. First, we fit parameterized models to the surface density profile of Leo~IV. Following this, we search for signs of extended structure in Leo~IV, especially in light of its proximity to Leo~V. \subsection{Parameterized Fit}\label{sec:structparams} It is common to fit the surface density profiles of both globular clusters and dSphs with King \citep{King66}, Plummer \citep{Plummer11}, and exponential profiles. 
This is an important task both to facilitate comparisons with other observational studies and for understanding the MW satellites as a population \citep[e.g.,][]{sdssstruct}. To do this, we use the CMD selection region shown in Figure~\ref{fig:CMDselect} for isolating likely Leo IV members. This CMD selection box was determined by first taking an M92 ridgeline at $(m-M)=20.94$, the distance to Leo~IV found by \citet{Moretti09}, and placing two bordering selection boundaries a minimum of 0.1 mag on either side in the $g-r$ color direction. These selection regions are increased to match the typical $g-r$ color uncertainty at a given $r$ magnitude (as determined with our artificial star tests) when that number exceeds 0.1 mag. A magnitude limit of $r$=24.8 mag was applied to correspond to our 50\% completeness limit. For the present we focus on Leo IV ridgeline stars, but we discuss the spatial properties of the blue plume and horizontal branch stars in \S~\ref{sec:BP}. We fit three parameterized density profiles to the stellar distribution of Leo IV: \begin{equation} \Sigma_{King}(r) = \Sigma_{0,K}\left( \left(1+\frac{r^2}{r_c^2}\right)^{-\frac{1}{2}}-\left(1+\frac{r_t^2}{r_c^2}\right)^{-\frac{1}{2}}\right)^2 \end{equation} \begin{equation} \Sigma_{Plummer}(r) = \Sigma_{0,P}\left(1+\frac{r^2}{r_P^2}\right)^{-2} \end{equation} \begin{equation} \Sigma_{exp}(r) = \Sigma_{0,E}\exp\left(-\frac{r}{\alpha}\right) \end{equation} \noindent where $r_P$ and $\alpha$ are the scale lengths for the Plummer and exponential profiles and $r_c$ and $r_t$ are the King core and tidal radii, respectively. For the Plummer profile, $r_P$ equals the half-light radius $r_h$, while for the exponential profile $r_h \approx 1.668\alpha$. We simultaneously fit a background surface density, $\Sigma_{b}$, while fitting the Plummer and exponential profiles. For the King profile, there is a degeneracy between the tidal radius and the background surface density. 
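The three surface density profiles above translate directly into code; this sketch (our own, for illustration) is also handy for numerically checking quoted properties such as the Plummer half-light radius:

```python
import numpy as np

def sigma_king(r, s0, rc, rt):
    # King (1966) profile; identically zero beyond the tidal radius r_t
    term = 1.0 / np.sqrt(1.0 + (r / rc) ** 2) - 1.0 / np.sqrt(1.0 + (rt / rc) ** 2)
    return np.where(r < rt, s0 * term ** 2, 0.0)

def sigma_plummer(r, s0, rp):
    # Plummer profile; r_p is the projected half-light radius
    return s0 * (1.0 + (r / rp) ** 2) ** -2

def sigma_exp(r, s0, alpha):
    # exponential profile; r_half ~ 1.668 * alpha
    return s0 * np.exp(-r / alpha)
```

For example, integrating $2\pi r\,\Sigma_{Plummer}(r)$ numerically confirms that half of the total light lies within $r_P$.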
We thus fix the background value to the average of that found for the Plummer and exponential profiles for our King profile fits \citep[e.g.][]{Walsh08}. We use a maximum likelihood (ML) technique for constraining structural parameters similar to that of \citet{sdssstruct}, which we have refined in \citet{sandherc}, and further refined in the current work. We point the reader to those works for further details concerning the expression of the likelihood function. Including the central position, $\alpha_{0}$ and $\delta_{0}$, position angle ($\theta$), and ellipticity ($\epsilon$) both the exponential and Plummer profiles have the same free parameters -- ($\alpha_{0}$,$\delta_{0}$,$\theta$,$\epsilon$,$r_{half}$,$\Sigma_{b}$), while the King profile free parameters are ($\alpha_{0}$,$\delta_{0}$,$\theta$,$\epsilon$,$r_{c}$,$r_{t}$). Uncertainties on structural parameters are determined through 1000 bootstrap resamples, from which a standard deviation is calculated. We have tested the robustness of our algorithm for dwarf galaxies with roughly the same number of stars as Leo IV in an Appendix, and will use these tests to inform our results in what follows. Our results are presented in Table~\ref{table:paramfits}. We show our best fit stellar profiles in Figure~\ref{fig:stellarprofile}. Although the plotted stellar profiles are not fit to the plotted binned data points, they do show excellent agreement. We note that the apparent slight overdensity at $R\sim$8 arcmin above the derived parameterized fits does not correspond to any single feature, as can be seen from the smoothed map of Leo~IV that we present in \S~\ref{sec:extend} and Figure~\ref{fig:smoothmap}, but is likely just the result of several fluctuations at roughly the same radius. Interestingly, Leo~IV appears to be particularly round, at least according to the parameterized model fit to the data. 
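The bootstrap uncertainty estimate described above can be sketched generically as follows (the names are placeholders and the estimator stands in for the full ML fit; this is not the actual fitting code):

```python
import numpy as np

def bootstrap_error(stars, estimator, n_boot=1000, seed=0):
    """Standard deviation of an estimator over bootstrap resamples.

    stars     : array with one entry (or row) per star
    estimator : function mapping a star array to a scalar parameter
    n_boot    : number of bootstrap resamples
    """
    rng = np.random.default_rng(seed)
    n = len(stars)
    vals = np.array([
        # resample n stars with replacement and re-run the estimator
        estimator(stars[rng.integers(0, n, n)])
        for _ in range(n_boot)
    ])
    return vals.std()
```

For the structural fits, `estimator` would re-run the maximum likelihood optimization on each resampled catalog and return the parameter of interest.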
In fact, our ML-derived ellipticity is consistent with zero (see also the Appendix), allowing us to only place an upper limit of $\epsilon \lesssim 0.23$ (at the 68\% confidence limit). Given this low ellipticity, we cannot place any meaningful constraints on the position angle of Leo IV, as indicated by both the tests in the Appendix and the bootstrap resamples of our Leo IV data. On a separate note, the tidal radius for the King profile fit, with a value of $r_t=18.55'$, is larger than the Megacam FOV. Thus, this value should be taken with caution. It is useful to compare our parameterized fit results with similar work in the literature. Using the original SDSS data, \citet{sdssstruct} fit Leo~IV with an exponential profile using a ML technique similar to the one utilized in the current work, and found results within 1-$\sigma$ of those presented here. More recently, \citet{leoivleov} used deeper data from the 3.5 m Calar Alto telescope around both Leo~IV and Leo~V, again applying a ML algorithm to measure their structure. In this case, the authors find $r_h=4.6^{+0.8}_{-0.7}$ and a well measured ellipticity of $\epsilon=0.49\pm0.11$, both statistically inconsistent with the results presented in the current work. As commented on by \citet{leoivleov}, this is likely because of the different stellar populations probed -- while the present work uses mostly main sequence and subgiant stars to determine the structure of Leo IV, \citet{leoivleov} use mostly brighter stars and objects on the blue horizontal branch. Also, Leo~IV lies on the edge of one of their pointings, which could have biased their results. Resolution of this discrepancy will require additional measurements. \subsection{Extended Structure Search}\label{sec:extend} We now search for signs of tidal disturbance and other anomalies -- which would not be picked up by our parameterized fits in \S~\ref{sec:structparams} -- based on the morphology of Leo~IV's isodensity contours. 
We do this with an eye towards determining if there is a current physical connection between Leo~IV and Leo~V. Our basic approach is similar to that of \citet{sandherc}. We included stars within the same CMD selection box used for our parameterized fits (Figure~\ref{fig:CMDselect}), placed those stars in $10''\times10''$ bins and spatially smoothed these pixels with three different Gaussians of $\sigma$=0.5, 1.0 and 1.5 arcminutes. The background level of stars in the CMD selection box, and its variance, was determined via the MMM routine in IDL\footnote{available at http://idlastro.gsfc.nasa.gov/}. To avoid the bulk of Leo~IV, these statistics were determined in two boxes of size $20'\times3'$ situated 9.5 arcminutes North and South of Leo~IV. We found that our smoothed maps were unaffected if only one of these boxes was used, or if we varied their sizes. We present our smoothed maps in Figure~\ref{fig:smoothmap}, with the contours representing regions that are 3, 4, 5, 6, 7, 10 and 15 standard deviations above the background. We focus on the $\sigma$=0.5 arcminute map, since it contains the most detail without loss of potential Leo~IV features. The $\sigma$=1.0 arcminute smoothing scale will be useful in \S~\ref{sec:fake}, as we explore our sensitivity to stellar streams. Outside of the main body of Leo~IV, there appears to be only a handful of compact, 3, 4 and 5 $\sigma$ overdensities. How significant are these overdensities, given the binning, smoothing and number of pixels that went into the making of Figure~\ref{fig:smoothmap}? To gauge their significance, we take our input photometric catalog, randomize the star positions and remake our smoothed maps (with the 0.5 arcmin Gaussian) for several realizations (Figure~\ref{fig:randompos}). While some of these maps have several 3 and 4 $\sigma$ overdensities and others have fewer, we find that the distribution of pixel values maintains a Poisson distribution. 
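The map-making step described above -- binning star positions, smoothing with a Gaussian, and expressing pixel values in standard deviations above a background measured in off-source regions -- can be sketched with numpy alone (the function names are our own, and the kernel is truncated at $4\sigma$ for simplicity):

```python
import numpy as np

def gaussian_smooth(img, sigma_pix):
    """Separable Gaussian smoothing of a 2D counts image."""
    t = np.arange(-int(4 * sigma_pix), int(4 * sigma_pix) + 1)
    k = np.exp(-0.5 * (t / sigma_pix) ** 2)
    k /= k.sum()  # unit-sum kernel preserves total counts
    sm = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, sm)

def significance_map(img, sigma_pix, bg_mask):
    """Smoothed map in units of standard deviations above the background,
    with the background mean and scatter measured in the bg_mask region."""
    sm = gaussian_smooth(img, sigma_pix)
    mu, sd = sm[bg_mask].mean(), sm[bg_mask].std()
    return (sm - mu) / sd
```

A compact overdensity injected into a Poisson background then stands out at high significance, while the background pixels scatter about zero by construction.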
We thus conclude that the majority of features outside the main body of Leo~IV in Figure~\ref{fig:smoothmap} are likely just noise -- with the possible exception of the 5-$\sigma$ overdensities at positions $(\sim-6',\sim12')$ and $(\sim-8',\sim-10')$ with respect to Leo~IV. Background-subtracted Hess diagrams of these two regions were made from our Leo~IV catalog, but they do not yield CMDs that are consistent with Leo~IV's stellar population (see \S~\ref{sec:fake}). Thus, our observations yield no strong evidence for substructure in the vicinity of Leo~IV. The main body of Leo~IV itself has some interesting features. There is a hint of an elongation or disturbance in the core of Leo~IV, along with two 'tendrils' -- one directed to the West and the other to the Southwest. Again, due to the small number of stars in Leo~IV, these irregularities may be an effect of small number statistics. To evaluate the significance of the morphology in Figure~\ref{fig:smoothmap}, we follow the path of \citet{Walsh08} and their evaluation of morphological irregularities in Bootes~II. We bootstrap resample the Leo~IV stars and replot our smoothed maps. The results of nine such resamples can be seen in Figure~\ref{fig:resampmaps}. While we cannot rule out that the tendrils seen in our Leo~IV map are genuine, they are not ubiquitous features in our resampled maps, and so we cannot with confidence claim they are real features. We finally point out that there appears to be no sign of interaction or disturbance in the direction of Leo~V, which we indicate in the middle panel of Figure~\ref{fig:smoothmap}. A Hess diagram of all stars farther than 5 arcminutes from the center of Leo~IV, and within 1.5 arcminutes of the line that connects Leo~IV and Leo~V, yields a differenced CMD that is consistent with noise (after proper background subtraction). Any such disturbance would have to be below our detection threshold, which we determine in the next section. 
Finally, we point out that these smoothed maps are sensitive to structures at the distance of both Leo~IV and Leo~V ($\sim$180 kpc; Belokurov et al. 2009) given that we are predominantly probing the CMD at magnitudes brighter than the subgiant branch. \subsubsection{Inserting Artificial Remnants}\label{sec:fake} In order to assess our surface brightness limit, and our sensitivity to structures of different sizes and morphologies, we insert fake 'nuggets' and 'streams' into our Leo~IV catalog with stellar populations drawn from one consistent with that of Leo~IV. As in \citet{sandherc}, we use the {\it testpop} program within the CMD-fitting package, StarFISH, to generate our artificial 'Leo~IV' CMDs and then remake our smoothed maps. By varying the number of stars (and, by extension, the surface brightness) in these structures, we can then assess whether our adopted search method for extended structure would have recovered them, and if so, what the resulting CMD would look like. This method of 'observing' and then examining these artificial remnants is more informative than traditional methods of simply quoting a '3$\sigma$' surface brightness limit, even if the result is a relatively ambiguous detection limit. For details of the procedure, we refer the reader to \citet{sandherc}, and to our presentation of Leo~IV's SFH in \S~\ref{sec:starform}. We inject both 'nuggets' -- Leo~IV-like stellar populations with an exponential profile having $r_{half}=1.0$ arcmin -- and 'streams' -- with a Gaussian density profile in the right ascension direction with $\sigma$=1.5 arcmin and a uniform distribution in the declination direction over the Northern half of the field -- into our final Leo~IV photometric catalog. In Figures~\ref{fig:nuginj} and \ref{fig:strinj}, we illustrate some results from our tests, along with their properties in Table~\ref{tab:fakeresult}. 
We show an example of a 35 star 'nugget' near what we believe is our detection threshold in Figure~\ref{fig:nuginj}. While this nugget has a peak detection at 6.4-$\sigma$, it is not particularly different in morphology or significance from the other random peaks in our Leo~IV field. This, however, changes when the resulting Hess diagram is examined, showing several stars along the red giant branch -- something that is not apparent in the true overdensities in our field. Taking this as our rough detection limit for compact remnants of Leo~IV, we are sensitive to objects with a central surface brightness of $\mu_{0,r}=27.9$ and $\mu_{0,g}=28.3$. Turning towards our artificial 'streams', both the 200 and 300 star streams are easily detectable in our smoothed maps (where we have used a 1 arcminute smoothing scale to better pick out the thick streams -- Figure~\ref{fig:strinj}). Further, the resulting Hess CMDs show a considerable red giant and BHB presence, and the beginnings of the main sequence for the 300 star case. The analogous 100 star stream is not convincingly detected. We thus suggest that we are reliably sensitive to streams with central surface brightness $\mu_{r}\sim29.6$ and $\mu_{g}\sim29.8$ (as measured along the center of the stream) with a geometry and morphology roughly similar to that simulated. The recent work of \citet{leoivleov} has presented tentative evidence for a stream connecting Leo~IV and Leo~V with a surface brightness of $\sim$32 mag arcsec$^{-2}$. Unfortunately, despite our much deeper data, we are unable to probe down to such faint surface brightness limits due to the relatively small area of our single pointing. \subsection{Structure along the line of sight}\label{sec:los} Because previous authors have suggested that the 'thick' red giant branch in Leo~IV may be due to elongation along the line of sight, it is worth searching for signs of such elongation in the width of the BHB \citep[e.g.,][]{klessen03}.
To do this, we have created a BHB star fiducial using data collected by J. Strader from SDSS encompassing 41 horizontal branch stars from M3 and M13, corrected for extinction and their relative distances. Using these stars, a third-order polynomial was fit with the IDL routine {\sc robust\_poly\_fit}, and we used this polynomial to define our BHB fiducial. For simplicity, we assume that all stars in our Megacam field within the solid box in Figure~\ref{fig:CMD} are BHB stars belonging to Leo~IV. Placing the BHB fiducial at our assumed distance to Leo~IV, $(m-M)=20.94$, leads to a visually excellent match. However, since we are interested in the scatter about this fiducial to put constraints on the width of Leo~IV, rather than the absolute distance to it, we adjust the fiducial BHB sequence so that the average $r$ magnitude deviation of the Leo~IV BHB stars against the fiducial is $\sim$0.0 (this adjustment was 0.06 mag, affirming the distance measurement of \citet{Moretti09}). The resulting root mean square deviation of the Leo~IV BHB stars about the fiducial is $\sim$0.2 mag, which corresponds to a deviation of $\sim$15 kpc at the distance of Leo~IV. While this limit is comparable to the quoted difference in distance between Leo~IV and Leo~V \citep{leov}, it is also roughly the measurement limit achievable using the spread in BHB magnitudes as a measure of line-of-sight depth. First, there are known RR~Lyrae variables among the BHB stars in Leo~IV \citep{Moretti09}, which we have observed at a random phase. Second, for a spread in metallicity of $\sigma_{[Fe/H]}=0.75$ in Leo~IV \citep{Kirby08}, one would expect a natural spread in BHB star magnitudes of $\sim$0.2 mag \citep{sandage90,olszewski96}. We therefore conclude that our data, while consistent with no elongation along the line of sight, cannot place strong constraints on the depth of Leo~IV.
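The conversion from the BHB magnitude scatter to a line-of-sight depth follows directly from the distance modulus relation $m-M=5\log_{10}(d/10\,{\rm pc})$. The short check below (our own arithmetic, not the paper's code) reproduces the quoted $\sim$15 kpc.

```python
import math

def dist_kpc(mu):
    """Distance in kpc from a distance modulus mu = 5*log10(d / 10 pc)."""
    return 10.0 ** (mu / 5.0 + 1.0) / 1000.0

mu_leo4 = 20.94       # adopted distance modulus of Leo IV
sigma_mag = 0.2       # rms scatter of the BHB stars about the fiducial
d = dist_kpc(mu_leo4)                               # ~154 kpc
# a magnitude scatter maps to a fractional distance scatter:
# sigma_d / d = (ln 10 / 5) * sigma_mag
sigma_d = (math.log(10.0) / 5.0) * sigma_mag * d    # ~14 kpc, i.e. ~15 kpc
```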
\section{Stellar population}\label{sec:starform} There is spectroscopic evidence that Leo~IV has a large metallicity spread, and according to \citet{Kirby08} it has the largest metallicity spread seen among the new dwarfs ($\langle [Fe/H] \rangle$=$-$2.58 with a spread of $\sigma_{[Fe/H]}$=0.75). Additionally, \citet{Belokurov07} have hinted that Leo~IV has a 'thick' red giant branch, indicating a spread in metallicity, and this appears to be the case (at least superficially) in the CMD presented in the current work (Figure~\ref{fig:CMD}). Also intriguing is the population of blue plume stars pointed out in Figure~\ref{fig:CMD} and \S~\ref{sec:fincat}. In this section, we first perform a CMD-fitting analysis of the stellar population of Leo~IV with StarFISH, and then go on to look at the blue plume population in detail. We end the section by combining the structural properties found in \S~\ref{sec:structparams} with the stellar population determined in the current section to calculate the total absolute magnitude of Leo~IV. \subsection{Star Formation History via CMD-fitting}\label{sec:starfish} Here we apply the CMD-fitting package StarFISH \citep{starfish} to our Leo~IV photometry within the half light radius to determine its SFH and metallicity evolution. As discussed in previous works, StarFISH uses theoretical isochrones (we use those of Girardi et al. 2004, although any may be used) to construct artificial CMDs with different combinations of distance, age, and metallicity. Once convolved with the observed photometric errors and completeness (using the artificial star tests of \S~\ref{sec:observations}), these theoretical CMDs are converted into realistic model CMDs which can be directly compared to the data on a pixel-to-pixel basis, after binning each into Hess diagrams. We use the Poisson statistic of \citet{match} as our fit statistic. 
The best fitting linear combination of model CMDs is determined through a modification of the standard AMOEBA algorithm \citep{Press88,starfish}. Several steps are taken to determine the uncertainties in StarFISH fits, which are discussed in detail in previous work -- see \citet{Harrislmc,leot}. Our StarFISH analysis is similar to that of \citet{sandherc}. We include isochrones with [Fe/H]=-2.3, -1.7 and -1.3 and ages between $\sim$10 Myr and $\sim$15 Gyr. Age bins of width $\Delta$log(t)=0.4 dex were adopted, except for the two oldest age bins centered at $\sim$10 Gyr and $\sim$14 Gyr, where the binning was $\Delta$log(t)=0.3 dex. A 'background/foreground' CMD, created by taking all stars outside an elliptical radius of 12 arcminutes (with ellipticity of $\epsilon$=0.05, as in our best-fitting exponential profile, see Table~\ref{table:paramfits}), was simultaneously fit along with our input stellar populations in order to correct for contamination by unresolved galaxies and foreground stars in our Leo~IV CMD. Stars with magnitudes $18.0<r<24.75$ (corresponding roughly to our 50\% completeness limit) and $-0.50 < g-r < 1.15$ were fit. After some experimentation, we used a Hess diagram bin size of 0.15 mag in magnitude and 0.15 mag in color. We assume a binary fraction of 0.5 and a Salpeter initial mass function. Because we correct our stellar catalog for Galactic extinction with the dust maps of \citet{Schlegel98}, we do not allow the mean extinction of our model CMDs to vary. As in the rest of this work, we chose to fix the distance modulus in the code to $(m-M)$=20.94 mag, although our results are robust with respect to this assumption, as we discuss below. We show the best-fit StarFISH solution in Figure~\ref{fig:sfh}, along with a comparison of the best model CMD with that of Leo~IV in Figure~\ref{fig:sfh_model}.
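The Poisson statistic of \citet{match} compares model counts $m_i$ and observed counts $n_i$ bin by bin across the Hess diagram. A minimal sketch of that statistic (our own illustration, not StarFISH code) is:

```python
import numpy as np

def poisson_fit_stat(model, data):
    """Poisson likelihood-ratio statistic, 2 * sum[m - n + n*ln(n/m)],
    summed over Hess-diagram bins.  Bins with n = 0 contribute 2*m;
    model counts m are assumed positive wherever n is nonzero."""
    m = np.asarray(model, dtype=float)
    n = np.asarray(data, dtype=float)
    term = m - n
    nz = n > 0
    term[nz] += n[nz] * np.log(n[nz] / m[nz])
    return 2.0 * term.sum()

# a model that mismatches the data gives a positive statistic
stat = poisson_fit_stat([4.0, 4.0], [3.0, 5.0])
```

The fitter then minimizes this statistic over the linear combination of model CMDs; the statistic is zero only when model and data counts agree exactly in every bin.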
Leo~IV is consistent with a single stellar population with an age of $\sim$14 Gyr and a [Fe/H]=-2.3, although the error bars and upper limits indicate that there is latitude for both a small, young stellar population and a mix of metallicities at older ages. Thus, despite the visual impression of a 'thick' giant branch, our analysis does not require a metallicity spread to match the observed CMD. Thorough spectroscopy of all stars in the red giant region will be required to determine Leo~IV membership and to properly quantify the metallicity spread. This result is robust to small changes in the distance modulus. If we alter the input distance modulus from $(m-M)$=20.94 to $(m-M)$=20.84, then a larger fraction of the best-fitting SFH comes from the [Fe/H]=-1.3 bin, although the [Fe/H]=-2.3 bin still dominates. Likewise, a distance modulus of $(m-M)$=21.04 yields an old, [Fe/H]=-2.3 stellar population with even less from the [Fe/H]=-1.7 and -1.3 bins. The model and observed CMDs match remarkably well (Figure~\ref{fig:sfh_model}) given that there are known mismatches between the theoretical isochrones and empirical, single population CMDs \citep[e.g.,][]{Girardi04}. Also, the available models do not span the metallicity range that is apparent in the new MW dwarfs; \citet{Kirby08} found $\langle [Fe/H] \rangle=-2.58$ with an intrinsic scatter of $\sigma_{[Fe/H]}=0.75$ in Leo~IV, while the Girardi isochrone set reaches down to [Fe/H]=-2.3. There is a slight mismatch in the BHB between model and data, with the best-fitting model CMD having a BHB which is $\sim$0.1-0.2 mag brighter than that observed (this is a factor of $\sim$2 larger than the small BHB magnitude correction we made in \S~\ref{sec:los}). This could be due to a slight error in our assumed distance modulus or a true stellar population that is even more metal poor than the most metal-poor model in the Girardi isochrone set, which we know to be the case from \citet{Kirby08}.
Nonetheless, our basic finding that Leo~IV is predominantly old ($\sim$14 Gyr) and metal poor is unaffected. \subsection{Evidence for Young Stars}\label{sec:BP} It is well known that dwarf spheroidals harbor a population of blue stragglers -- a hot, blue extension of objects which lie along the normal main sequence \citep[e.g.,][]{Mateo95}. Because the densities in dwarfs do not reach that necessary to produce collisional binaries (as they can in the cores of globular clusters), it is likely that they are primordial binary systems, coeval with the bulk of stars in the dwarf. Unfortunately, their position along the main sequence makes it difficult to disentangle blue straggler stars from young main sequence stars in the MW's dwarf spheroidals, which has been a continuous source of ambiguity \citep[e.g.,][among many others]{Mateo95,Mapelli07,Mapelli09}. It is very difficult to exclude the possibility that some of the blue plume stars in any given dwarf are actually young main sequence objects. We now articulate two arguments in support of the hypothesis that at least some of the stars in the blue plume of Leo~IV are young. First, the blue plume stars appear to be segregated within the body of Leo~IV. We plot the position of the 'high probability' blue plume stars with low background/foreground contamination, as identified within the dotted box in Figure~\ref{fig:CMD}, onto our smoothed map of Leo~IV (Figure~\ref{fig:showstars}). All of the selected high-probability blue plume stars within the body of Leo~IV are on one side. If Leo~IV is assumed to be spherically symmetric, the chances of all seven being on the same half of the galaxy are $\sim(1/2^{7})$ or less than 1 percent. This segregation would be difficult to explain if all of these objects were blue stragglers, which would presumably have the same distribution as the galaxy as a whole. 
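The quoted $\sim(1/2^{7})$ chance can be checked with a quick Monte Carlo draw (our own illustration; the number of trials and the choice of half are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_stars, n_trials = 7, 100_000
# drop n_stars stars at uniformly random position angles, as expected
# for a spherically symmetric system, repeated n_trials times
theta = rng.uniform(0.0, 2.0 * np.pi, size=(n_trials, n_stars))
# fraction of trials in which all stars land on one pre-specified half
frac = np.mean(np.all(np.cos(theta) > 0.0, axis=1))
analytic = 0.5 ** n_stars   # = 1/128, i.e. under 1 per cent
```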
We understand that this {\it a posteriori} argument is insufficient on its own, but it should be considered in the context of the high normalized blue plume fraction, which we now discuss. Our second argument in favor of a young population of stars in Leo~IV stems from the high blue plume frequency normalized by the BHB star counts, following the work of \citet{Momany07}. Briefly, \citet{Momany07} sought to explore the ambiguity between young main sequence stars and genuine blue straggler stars in a sample of MW dwarf galaxies by calculating the number of total blue plume objects with respect to a reference stellar population -- the BHB. The basic result of their work, whose data was kindly provided by Y. Momany, was that those dwarf spheroidals that do not have a true, young stellar component have a blue plume fraction that follows a relatively well defined $M_{V}$ vs log($N_{BP}/N_{BHB}$) anti-correlation, where the blue plume consists of only blue straggler stars (Figure~\ref{fig:bpfreq}). The physical origin of this anti-correlation is unclear \citep[see, however, ][for a plausible explanation for the anti-correlation in globular clusters]{Davies04}, although it is seen in both open clusters \citep{deMarchi08} and globular clusters \citep{Piotto04}. Carina, the one dwarf galaxy in their sample which does have a known, recent bout of star formation \cite[$\sim$1-3 Gyr;][]{hurleykeller98,monelli03}, was shown to have an excess of blue plume stars with respect to the aforementioned relation (shown as a lower limit in Figure~\ref{fig:bpfreq}, due to difficulties in accounting for blue stragglers associated with the older and fainter main sequence turn-off in that system), pointing to the fact that Carina's blue plume was populated with young main sequence stars in addition to a standard blue straggler population. More recently, \citet{Martin08} also found a high blue plume frequency in Canes Venatici I, which we also show in Figure~\ref{fig:bpfreq}. 
\citet{Martin08} used the combination of spatial segregation and blue plume frequency to argue that Canes Venatici I harbors a young stellar component. We take the ratio of blue plume to BHB stars as identified in Figure~\ref{fig:CMD} by the dashed and solid regions, respectively, utilizing those stars within the half light radius of Leo~IV and making background and completeness corrections. We calculate a blue plume frequency of log($N_{BP}/N_{BHB}$)=$0.56\pm0.13$ for Leo~IV and plot this ratio along with those of the MW dwarf spheroidals just discussed in Figure~\ref{fig:bpfreq}. As can be seen, Leo~IV lies off the standard $M_{V}$ vs log($N_{BP}/N_{BHB}$) anti-correlation just as Canes Venatici I and Carina do. Leo~IV lies $2-\sigma$ away from the linear relation fit by \citet{Momany07} for the non-star forming dwarfs. Taken together, the segregation of blue plume stars along with the high blue plume to BHB fraction points to at least {\it some} of the stars being young main sequence objects. We know that at least one of these blue plume stars is a blue straggler, due to the discovery of one SX Phoenicis variable in Leo~IV \citep{Moretti09}. This is only the third of the new dwarf spheroidals (excluding Leo~T, which appears to be a transition object) that harbors a young stellar population -- the others being the previously discussed Canes Venatici I \citep{Martin08} and Ursa Major II \citep{sdsssfh}. We note that our evidence for young stars is similar to that presented for Canes Venatici I, for which it was also found that the blue plume population is segregated and the blue plume frequency is high \citep{Martin08}. We measure the luminosity of the young stars in \S~\ref{sec:absmag} and discuss the implications of Leo~IV's young stellar component in \S~\ref{sec:discuss}.
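A quoted ratio such as log($N_{BP}/N_{BHB}$)=$0.56\pm0.13$ is consistent with simple Poisson error propagation on the corrected counts. The sketch below is illustrative only: the counts are invented numbers chosen to reproduce the published value, not the actual corrected counts.

```python
import math

def log_ratio_with_err(n_bp, n_bhb):
    """log10(N_BP / N_BHB) with Poisson counting errors propagated:
    sigma = (1 / ln 10) * sqrt(1/N_BP + 1/N_BHB)."""
    val = math.log10(n_bp / n_bhb)
    err = math.sqrt(1.0 / n_bp + 1.0 / n_bhb) / math.log(10.0)
    return val, err

# hypothetical corrected counts chosen to give ~0.56 +/- 0.13
val, err = log_ratio_with_err(51, 14)
```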
\subsection{Absolute Magnitude}\label{sec:absmag} As pointed out by \citet{sdssstruct}, measuring the total magnitudes of the new MW satellites is difficult due to the small number of stars at detectable levels. We account for this 'CMD shot noise' by mimicking our measurement of the total magnitude of Hercules \citep{sandherc}, which borrowed heavily from the original \citet{sdssstruct} analysis. We take the best SFH solution presented in \S~\ref{sec:starfish}, and create a well-populated CMD (of $\sim$200000 stars) incorporating our photometric completeness and uncertainties, using the {\it repop} program within the StarFISH software suite. We drew one thousand random realizations of the Leo~IV CMD with an identical number of stars as we found for our exponential profile fit, and determined the 'observed' magnitude of each realization above our 90\% completeness limit. We then accounted for those stars below our 90\% completeness level by using luminosity function corrections derived from \citet{Girardi04}, using an isochrone with a 15 Gyr age and [Fe/H]=-2.3. We take the median value of our one thousand random realizations as our measure of the absolute magnitude and its standard deviation as our uncertainty (Table~\ref{table:paramfits}). To convert from $M_{r}$ magnitudes to $M_{V}$ magnitudes we use $V-r=0.16$ \citep{Walsh08}. We find $M_{V}=-5.5\pm0.3$ and a central surface brightness, assuming our exponential profile fit, of $\mu_{0,V}=27.2\pm0.3$. Both our total absolute magnitude and central surface brightness measurements agree to within $1-\sigma$ with the measurement of \citet{sdssstruct}, which used only SDSS data. We use a similar technique for determining the approximate luminosity of Leo~IV's young stellar population only.
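The 'CMD shot noise' Monte Carlo described above can be sketched as follows. Everything here is hypothetical (a uniform toy luminosity function stands in for the synthetic StarFISH CMD, and the star counts are invented); it only illustrates the draw-sum-median procedure.

```python
import numpy as np

def mc_total_mag(parent_mags, n_stars, n_real=1000, rng=None):
    """Median and scatter of the total magnitude of n_stars drawn at
    random from a well-populated parent CMD, repeated n_real times
    to quantify CMD 'shot noise'."""
    rng = np.random.default_rng(rng)
    totals = []
    for _ in range(n_real):
        draw = rng.choice(parent_mags, size=n_stars, replace=True)
        flux = np.sum(10.0 ** (-0.4 * draw))   # add fluxes, not magnitudes
        totals.append(-2.5 * np.log10(flux))
    totals = np.array(totals)
    return np.median(totals), totals.std()

# toy parent population: absolute magnitudes drawn uniformly (hypothetical)
rng = np.random.default_rng(1)
parent = rng.uniform(2.0, 8.0, size=200_000)
m_med, m_err = mc_total_mag(parent, n_stars=150, rng=2)
```

The median over realizations gives the reported magnitude, and the standard deviation gives the shot-noise uncertainty; a completeness correction from a model luminosity function would then be added.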
We draw the same number of stars from our well-populated StarFISH CMD as observed in Leo~IV within our high probability blue plume box (see Figure~\ref{fig:CMD} and \S~\ref{sec:BP}), and correct for the luminosity of stars outside this region using a luminosity function derived from an isochrone with an age of 1.6 Gyr and [Fe/H]=-1.3 \citep{Girardi04}. We again draw 1000 random realizations to determine our uncertainties. We find that the young stellar population has $M_{V}=-2.1\pm0.5$, or roughly $\sim$5\% of the satellite's total luminosity or $\sim$2\% of its stellar mass. This magnitude and the resulting fraction of the young stellar population's luminosity should be taken with caution given our assumptions and the small number of stars involved. \section{Discussion \& Conclusions}\label{sec:discuss} In this work we have presented deep imaging of the Leo~IV MW satellite with Megacam on the MMT and studied this galaxy's structure and SFH. In particular, we assess reports in the literature concerning both its stellar population and its possible association with the nearby satellite, Leo~V. Leo~IV's SFH is dominated by an old ($>12$ Gyr), metal poor ($[Fe/H]\lesssim-2.0$) stellar population, although we uncover evidence for a sprinkling of young star formation 1-2 Gyr ago. Our best-fit StarFISH results indicate that a single metal poor population dominates, although the data is also compatible with a spread in metallicities. The old population is consistent with the emerging picture that the faintest MW satellites are 'reionization fossils' \citep[e.g.~][]{Ricotti05,Gnedin06}, which formed their stars before reionization and then lost most of their baryons due to photoevaporation. The apparent sprinkling of young stars raises the question of what has enabled Leo~IV to continue forming stars at a low level.
There is no sign of HI in Leo~IV, with an upper limit of 609 $M_{\odot}$ \citep{Grcevich09}, although we note that this limit is still a factor of $\sim$2 larger than the stellar mass associated with the young stellar population studied in \S~\ref{sec:BP}. One possible mechanism for late gas accretion, and subsequent star formation, among the faint MW satellites was recently discussed by \citet{Ricotti09} to help explain the complex SFH and gas content of Leo~T \citep{leot,leotgas}. In this scenario, the smallest halos stop accreting gas after reionization as expected, but as their temperature decreases and dark matter concentration increases with decreasing redshift they are again able to accrete gas from the intergalactic medium at late times, assuming they themselves are not accreted by their parent halo until $z\lesssim1-2$. This can lead to a bimodality in the SFH of the satellite, with both a $>12$ Gyr population and one that is $<10$ Gyr, as we see in Leo~IV. One stringent requirement of this model is that the satellite can not have been accreted by its host halo until $z\lesssim1-2$ (and thus not exposed to tidal stirring and ram pressure stripping until late times, allowing the satellite to retain its newly accreted gas). Future proper motion studies will be able to test if Leo~IV is compatible with this late gas accretion model. Another prediction of this model is the possible existence of gas-rich minihaloes that never formed stars, but could serve as fuel for star formation if they encountered one of the luminous dwarfs. More detailed study of this late gas accretion mechanism will be necessary to understand the possible diversity of SFHs in the faint MW satellites. Additionally, we note that if the apparent segregation of young stars in Leo~IV is real, then it is not an isolated case among the MW satellites. 
As has been mentioned in \S~\ref{sec:BP}, \citet{Martin08} noted that Canes~Venatici I has a compact star forming region clearly offset from the galaxy as a whole, with an age of 1-2 Gyr, similar to Leo~IV. Additionally, Fornax has several compact clumps and shells that house young stellar populations roughly $\sim$1.4 Gyr old \citep{coleman04,Coleman05,Olszewski06}. It has been suggested that these could have been the result of a collision between Fornax and a low-mass halo, which was possibly gas-rich. \citet{pennarubia09} investigated the disruption of star clusters in triaxial, dwarf-sized halos and found that segregated structures can persist depending on the orbital properties of the cluster, providing yet another viable mechanism. More work is needed to distinguish between all of the above scenarios and to properly model the emerging diversity of SFHs among the new, faint MW satellites. Structurally, Leo~IV appears to be very round, with $\epsilon \lesssim 0.23$ (at the 68\% confidence limit) and a half light radius ($\sim 130$ pc) which is typical of the new MW satellites. An exhaustive search for signs of extended structure in the plane of the sky has ruled out any associated streams with surface brightnesses of $\mu_{r}\lesssim29.6$. The extent of Leo~IV along the line of sight is less than $\sim$15 kpc, a limit that will be difficult to improve upon given the inherent limitations of using the spread in BHB magnitudes to measure depth. We find no evidence for structural anomalies or tidal disruption in Leo~IV. We do not have the combination of image depth and area necessary to confirm the stellar bridge, with a surface brightness of 32 mag arcsec$^{-2}$, recently reported in between Leo~IV and Leo~V \citep{leoivleov}. Indeed, Leo~V is almost certainly disrupting, as discussed by \citet{leovspec}, due to the presence of two member stars $\sim$13 arcminutes ($>13 r_{h}$) from Leo~V's center along the line connecting the putative Leo~IV/Leo~V system.
The nature of Leo~V is still very ambiguous, with the kinematic data being consistent with it being dark matter free -- suggesting that perhaps Leo~V is an evaporating star cluster \citep{leovspec}. It is thus critical to obtain yet deeper data on these two systems and the region separating them to uncover their true nature. The probable detection of a small population of young stars illustrates once again that it is crucial to obtain deep and wide-field follow-up for all of the newly detected MW satellites. Every new object has a surprise or two in store upon closer inspection. \section{Acknowledgements} We are grateful to the referee, Nicolas Martin, for his constructive report. Many thanks to Maureen Conroy, Nathalie Martimbeau, Brian McLeod and the whole Megacam team for the timely help in reducing our Leo~IV data. DJS is grateful to Jay Strader for his excellent scientific advice, and for providing his co-calibrated M3 \& M13 blue horizontal branch sequence. We are grateful to Evan Kirby, Josh Simon and Marla Geha for providing their kinematic and metallicity data on Leo IV. Yazan Momany kindly provided his blue plume frequency data. DJS is grateful to Matthew Walker and Nelson Caldwell for providing a careful reading of the paper, along with useful comments. EO was partially supported by NSF grant AST-0807498. DZ acknowledges support from NASA LTSA award NNG05GE82G and NSF grant AST-0307492. \section{Appendix} In this Appendix, we determine how well the maximum likelihood technique presented in \S~\ref{sec:structparams} can measure the structural parameters of a dwarf with a number of stars comparable to that of Leo~IV. We create mock models of Leo~IV-like systems having an exponential profile with $r_{h}$=3.0 arcmin and 400 stars in the input catalog (all on a footprint with the same area as our MMT pointing), while systematically varying the ellipticity and position angle.
A uniform background of 4.5 stars per square arcminute is randomly scattered across the field to mimic the actual observations. We utilize the same algorithm as in \S~\ref{sec:structparams}, with 1000 bootstrap resamples to determine our uncertainties. We can recover the ellipticity of our mock dwarf galaxies remarkably well, as can be seen in Figure~\ref{fig:ellinout}. Here we show our results on the recovered ellipticity as a function of input model ellipticity, between $\epsilon_{input}=$0.05 and 0.85. The data points are the median ellipticity found for the one thousand bootstrap resamples for each model, and the error bars encompass 68\% of the resamples around that median. Note that for models with an input ellipticity of $\epsilon \lesssim 0.25$ it is difficult for our algorithm to converge on the correct ellipticity value; in this regime we systematically overpredict the ellipticity with large error bars, and thus quote only upper limits (see \S~\ref{sec:structparams}). Larger values of the ellipticity are well measured. We also do a good job of measuring the position angle (as long as the ellipticity is high enough) and the half light radius of our mock Leo~IV-like systems, which we illustrate in a series of examples in Figure~\ref{fig:pamodelexample}. From the figure, one can see the gradual improvement in the measurement of the position angle as one goes from an ellipticity of 0.1 to 0.35, while the half light radius remains relatively well measured throughout. The slight systematic offset seen in the bottom left panel of Figure~\ref{fig:pamodelexample} can be explained by the difficulty in recovering the true ellipticity for $\epsilon_{input} \lesssim 0.25$ systems. If one slightly overestimates the true input ellipticity of the data, this leads to a slight overestimation of the half-light radius, a degeneracy that can be seen in Figure 9 of \citet{sdssstruct}.
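The mock catalogs described in this Appendix can be sketched as follows. This is a hedged reconstruction, not the paper's actual code: the rejection sampler, the way the ellipticity is applied, and the field geometry are our own illustrative choices.

```python
import numpy as np

def mock_dwarf(n_stars=400, r_h=3.0, ell=0.3, pa_deg=45.0,
               field=15.0, bg_density=4.5, rng=None):
    """Mock Leo IV-like catalog: n_stars following an elliptical
    exponential profile (half-light radius r_h arcmin; for an
    exponential, r_h ~ 1.678 scale radii), plus a uniform background
    of bg_density stars per sq. arcmin on a (2*field)^2 arcmin field."""
    rng = np.random.default_rng(rng)
    r_e = r_h / 1.678
    # rejection-sample radii from the 2-D profile r * exp(-r / r_e),
    # truncated at 10 scale radii (the tail beyond is negligible)
    r = []
    while len(r) < n_stars:
        cand = rng.uniform(0.0, 10.0 * r_e, 4 * n_stars)
        keep = rng.uniform(0.0, 1.0, cand.size) < (
            cand * np.exp(-cand / r_e)) / (r_e * np.exp(-1.0))
        r.extend(cand[keep])
    r = np.array(r[:n_stars])
    phi = rng.uniform(0.0, 2.0 * np.pi, n_stars)
    x = r * np.cos(phi)
    y = r * np.sin(phi) * (1.0 - ell)        # flatten one axis...
    pa = np.radians(pa_deg)                  # ...then rotate to the PA
    xr = x * np.cos(pa) - y * np.sin(pa)
    yr = x * np.sin(pa) + y * np.cos(pa)
    n_bg = rng.poisson(bg_density * (2.0 * field) ** 2)
    xb = rng.uniform(-field, field, n_bg)
    yb = rng.uniform(-field, field, n_bg)
    return np.concatenate([xr, xb]), np.concatenate([yr, yb])
```

Feeding such catalogs to the maximum likelihood fitter, while varying `ell` and `pa_deg`, reproduces the style of test shown in Figures~\ref{fig:ellinout} and \ref{fig:pamodelexample}.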
The take-away message is that one must treat any 'measurement' of the position angle with extreme caution for ellipticity values of $\epsilon \lesssim 0.25$, as we do in \S~\ref{sec:structparams}. In the future, we intend to present a more extensive series of tests of our maximum likelihood code in order to understand the effects that star number and imaging field of view have on the estimation of structural parameters for the new MW satellites. \clearpage \bibliographystyle{apj}
\section{Introduction} The $Q$-curvature arises naturally as a conformal invariant associated to the Paneitz operator. When $n=4$, the Paneitz operator is defined as: $$P_g=\Delta_g^2+\delta(\frac{2}{3}R_g\,g-2\, {\rm Ric_g})d,$$ where $\delta$ is the divergence operator, $d$ is the differential operator, $R$ is the scalar curvature of $g$, and ${\rm Ric}$ is the Ricci curvature tensor. Branson's $Q$-curvature \cite{Branson} is defined as $$Q_g=\frac{1}{12}\left\{-\Delta_g R_g +\frac{1}{4}R_g^2 -3|E_g|^2 \right\}, $$ where $E_g$ is the traceless part of ${\rm Ric_g}$, and $|\,\cdot\,|$ is the point-wise norm taken with respect to the metric $g$. Under the conformal change $g=e^{2u}g_0$, the Paneitz operator transforms by $P_{g}=e^{-4u}P_{g_0}$, and $Q_{g}$ satisfies the fourth order equation \begin{equation}\label{Paneitz4} P_{g_{0}}u+2Q_{g_0}=2Q_{g}e^{4u}.\end{equation} This is analogous to the transformation law satisfied by the Laplacian operator $-\Delta_g$ and the Gaussian curvature $K_g$ on surfaces, \begin{equation}-\Delta_{g_0}u+K_{g_0}=K_{g}e^{2u}.\end{equation} When the background metric $g_0$ is the flat metric $|dx|^2$, the transformation law \eqref{Paneitz4} that the $Q$-curvature satisfies becomes \begin{equation} \Delta^2_{g_0} u =2Q_{g}e^{4u}.\end{equation} The invariance of the integral of the $Q$-curvature in dimension $4$ is due to the Gauss-Bonnet-Chern formula for a closed manifold $M$: \begin{equation}\label{GBCEq}\chi(M)=\displaystyle\frac{1}{4\pi^2} \int_{M}\left(\frac{|W_g|^2}{8}+Q_g\right) dv_g,\end{equation} where $W_g$ denotes the Weyl tensor. For complete manifolds with conformally flat ends, the work of Chang, Qing, and Yang \cite{CQY1} proves a formula relating the asymptotic isoperimetric ratio to the integral of the $Q$-curvature. In their work \cite{CQY1, CQY2}, Chang, Qing and Yang used the important notion of a ``normal metric'' on conformally flat manifolds to prove the formula for the asymptotic isoperimetric ratio.
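As a standard consistency check of \eqref{GBCEq} (our own illustration, not part of the cited works): on the round unit sphere $\mathbb S^4$ one has $W_g=0$, $E_g=0$, $R_g=12$, and ${\rm Vol}(\mathbb S^4)=\frac{8\pi^2}{3}$, so

```latex
Q_g=\frac{1}{12}\cdot\frac{R_g^2}{4}=3,
\qquad
\frac{1}{4\pi^2}\int_{\mathbb S^4}\Big(\frac{|W_g|^2}{8}+Q_g\Big)\,dv_g
=\frac{1}{4\pi^2}\cdot 3\cdot\frac{8\pi^2}{3}
=2=\chi(\mathbb S^4),
```

in agreement with the Gauss-Bonnet-Chern formula.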
The notion of a normal metric was first introduced by Huber \cite{Huber}, and later used by Finn \cite{Finn} and Hartman \cite{Hartman}. Huber proved that in dimension two, for a surface with finite total Gauss curvature, the metric is always normal. A key observation in \cite{CQY1} is that if the scalar curvature at infinity is nonnegative, then the metric is normal. The proof mostly uses the maximum principle and properties of harmonic functions. In this paper, we generalize this result. We show that if the negative part of the scalar curvature is integrable, then the metric is normal. The main result is the following theorem. \begin{thm}\label{thm:main} Let $(M^n,g)=(\mathbb R^n,e^{2u} |dx|^2)$ be a noncompact complete conformally flat metric of even dimension, satisfying \begin{equation}\int_{M^n}|Q_g|d v_g<\infty, \end{equation} and \begin{equation}\label{totalR} \int_{\mathbb{R}^n}(R_{g}^-)^{\frac{n}{2}}dv_g<\infty, \end{equation} where $R_{g}^-$ is the negative part of the scalar curvature. Then the metric is normal. \end{thm} Therefore, as a direct corollary, we generalize Chang, Qing and Yang's work to the following. \begin{cor} Let $(M^n,g)=(\mathbb{R}^n, e^{2w} |dx|^2)$ be a noncompact complete conformally flat manifold of even dimension, satisfying \begin{equation}\int_{M^n}|Q_g|d v_g<\infty, \end{equation} and \begin{equation}\label{scalar} \int_{M^n} (R_g^-)^{\frac{n}{2}} dv_g< \infty. \end{equation} Then \begin{equation}\label{totalQ} \displaystyle \frac{1}{c_n}\int_{M^n} Q_g dv_g\leq \chi(\mathbb{R}^{n} )=1. \end{equation} Moreover, the difference of the two sides in the above inequality is given by the asymptotic isoperimetric ratio: \begin{equation} \displaystyle \chi(\mathbb{R}^{n} )-\frac{1}{c_n}\int_{\mathbb{R}^n} Q_g dv_g=\lim_{r\rightarrow \infty } \frac{{\rm Vol}_g(\partial B_{j}(r))^{n/(n-1)}}{n\omega_n^{\frac{1}{n}} \cdot {\rm Vol}_g(B_{j}(r))}.
\end{equation} \noindent Here $B_{j}(r)$ denotes the Euclidean ball with radius $r$ at the $j$-th end. \end{cor} Hereafter $c_n$ denotes the constant $2 ^ {n -2} (\frac {n -2} 2)! \pi ^ {\frac n 2}$. It is the value of the integral of the $Q$-curvature on the unit $n$-hemisphere $\mathbb S^n_+$. $\omega_n$ denotes the volume of the unit ball in $\mathbb R^n$. An orientation preserving homeomorphism $f: \mathbb R^n\rightarrow \mathbb R^n$ is called a quasiconformal map if $$ \sup_{x \in \mathbb R^n} H(x,f)<\infty, \quad \text{where}\quad H(x,f):= \limsup_{r\rightarrow 0+} \sup_{|u-x|=|v-x|=r }\frac{ |f(u)-f(x)|}{|f(v)-f(x)|}. $$ We denote the dilation of a quasiconformal map by $$H(f):= \sup_{x\in \mathbb R^n} H(x,f).$$ We say $f$ is $H$-quasiconformal if $H(f)\leq H$. In \cite{YW1}, the second author found a relation between the integral of the $Q$-curvature and the quasiconformal equivalence of two manifolds (metric spaces), and deduced an isoperimetric inequality. This is analogous to the results of Fiala \cite{Fiala} and Huber \cite{Huber} on two-dimensional surfaces with absolutely integrable Gauss curvature. \begin{thm}\cite{YW1} Suppose $(M^n, g)=(\mathbb R^n, e^{2u} |dx|^2)$ is a noncompact complete Riemannian manifold with normal metric. If its $Q$-curvature satisfies \begin{equation}\int_{M^n}|Q_g|d v_g<\infty \end{equation} and \begin{equation}\frac{1}{c_n}\int_{M^n} Q_gd v_g<1, \end{equation} then there is an $H$-quasiconformal map $f: \mathbb R^n\rightarrow \mathbb R^n$ and a constant $C$ such that \begin{equation} C^{-1}e^{nu}\leq J_f(x)\leq C e^{nu}\:\: \text{a.e. $x\in\mathbb R^n$}, \end{equation} where $H$ depends on $n$ and $\alpha$, and $C$ depends on $n$ and the metric $g$. \end{thm} In dimension 4, the definition of the $Q$-curvature is \begin{equation}\begin{split} Q_g=&\frac{1}{12}(-\Delta_g R_g+\frac{1}{4} R_g^2 -3|E_g|^2)\\ =&-\frac{1}{12}\Delta_g R_g+ 2\sigma_2 (A_g), \end{split} \end{equation} where $E_g$ denotes the traceless part of the Ricci curvature.
$$A_g:= \frac{1}{n-2} (Rc_g- \frac{1}{2(n-1)} R_g g)$$ denotes the Schouten tensor, where $Rc_g$ is the Ricci curvature. Thus in dimension 4, the $Q$-curvature differs from $2 \sigma_2(A_g)$ by a divergence term. By a similar argument as in \cite[Lemma 3.2]{LW}, the integral of $\Delta_g R_g$ vanishes if the $Q$-curvature is absolutely integrable and the scalar curvature is in $L^{\frac{n}{2}}$ (which in dimension 4 is $L^{2}$). From this, we can prove that there is a quasiconformal map from this manifold to the Euclidean space, and that the dilation $H$ of the quasiconformal map is controlled by the integral of $\sigma_2 (A_g)$. This is a new phenomenon regarding conformal invariants. Previously, from Theorems 1.1 and 1.2 in \cite{YW1}, we only knew that the integral of the $Q$-curvature may control the asymptotic behavior of conformally flat manifolds. We state this result in the following theorem. \begin{thm}\label{thm:quasi} Let $(M^4,g)=(\mathbb R^4,e^{2u} |dx|^2)$ be a noncompact complete conformally flat manifold, satisfying $$\int_{M^4}|Q_g|d v_g<\infty, $$ and \begin{equation}\label{R} \int_{M^4}|R_{g}|^{2}dv_g<\infty. \end{equation} If \begin{equation}\label{strictSigma} \displaystyle \frac{1}{2\pi^2}\int_{M^4} \sigma_2 (A_g) dv_g< 1, \end{equation} then there is an $H$-quasiconformal map $f$ between $(M^4, g)$ and the Euclidean space (with flat metric). The Jacobian $J_f$ of this map satisfies \begin{equation} C^{-1}e^{4u}\leq J_f(x)\leq C e^{4u}\:\: \text{a.e. $x\in\mathbb R^4$}. \end{equation} Here $C= C( g)$, and $\displaystyle H= H( 1-\frac{1}{2\pi^2} \int_{M^4} \sigma_2 (A_g) dv_g)$. Moreover, $M^4$ satisfies the isoperimetric inequality: for any smooth bounded domain $\Omega$, \begin{equation} \mathrm{Vol}_g( \Omega)^{\frac{3}{4}}\leq C\, \mathrm{Area}_g(\partial\Omega) \end{equation} with $C= C( g)$. \end{thm} \begin{rem} Previously we only knew that the integral of the $Q$-curvature may control the asymptotic behavior of conformally flat manifolds.
Theorem \ref{thm:quasi} indicates that with suitable integrability assumptions on the curvature, the value of the integral of $\sigma_2 (A_g)$ may also control the asymptotic behavior in a similar manner. This includes all the quasiconformal equivalence results and the isoperimetric inequality proved in \cite{YW1}. \end{rem} \begin{rem} One can prove a similar result for higher even-dimensional manifolds as well. But there are two differences, which would make the statement of the result more complicated. First, in higher dimensions, the condition \eqref{R} needs to be imposed on the Riemannian curvature tensor: $$\int_{M^n} |Rm|^{\frac{n}{2}} dv_g<\infty. $$ Second, by Spyros Alexakis' classification theorem of global conformal invariants \cite{Alex1, Alex2}, $\sigma_2 (A_g)$ in \eqref{strictSigma} should be replaced by the Pfaffian of the Riemannian curvature tensor $Pf(Rm)$ (up to a multiplicative constant). For simplicity, we only state the theorem in dimension 4. \end{rem} \begin{rem} Recall that $c_n$ denotes the constant $2 ^ {n -2} (\frac {n -2} 2)! \pi ^ {\frac n 2}$; in particular $c_4 = 2^2\cdot 1!\cdot\pi^2 = 4\pi^2$. The constant appearing on the left-hand side of \eqref{strictSigma} is equal to $\frac{2}{c_4}$. \end{rem} \hide{The Chang-Qing-Yang theorem asserts that for $4$-manifolds (in fact, their theorem is valid for all even dimensional manifolds) which is conformal to the Euclidean space, the integral of the $Q$-curvature controls the asymptotic isoperimetric ratio at the end of this complete manifold. This is analogous to the two-dimensional result by Cohn-Vossen \cite{Cohn-Vossen}, who studied the Gauss-Bonnet integral for a noncompact complete surface $M^2$ with analytic metric, and showed that if the manifold has finite total Gaussian curvature, then \begin{equation}\label{1.1} \displaystyle \frac{1}{2\pi}\int_{M} K_g dv_g\leq \chi(M), \end{equation} where $\chi(M)$ is the Euler characteristic of $M$.
Later, Huber \cite{Huber} and Hartman \cite{Hartman} extended this inequality to metrics with much weaker regularity. Huber also proved that such a surface $M$ is conformally equivalent to a closed surface with finitely many points removed. \begin{def}\label{normal} The metric is normal on an end $E_j \subset M ^ n$ of a locally conformally flat manifold $M^n$ if $(E_j,g)=(\mathbb{R}^n\setminus B, e^{2w}|dx|^2)$ and $$w(x)=\frac{1}{c_n}\int_{\mathbb{R}^n\setminus B}\log\frac{|y|}{|x-y|}P(y)dy+C $$ for some continuous and integrable function $P(y)$. The dimensional constant $c_n:= 2 ^ {n -2} (\frac {n -2} 2)! \pi ^ {\frac n 2}$ is the value that appears in the fundamental solution equation \begin{equation}\label{1} (-\Delta)^{n/2}\log\frac{1}{|x|}=c_n\delta_0(x), \end{equation} where $\Delta$ is the Laplacian on Euclidean space. \end{def} Consider the conformally flat metric of even dimensional manifold $(M^n, g)= (\mathbb R^n,e^{2u} |dx|^2)$ where $n=2k,k\in\mathbb N$. We denote by $Q_{g}$ the Q-curvature of the metric $g$, which is an $n$-th order curvature operator. The total Q-curvature $\int_{\mathbb R^n}Q_{g}(x)dv_g$ is a conformal invariant. In dimension 4, the Q-curvature has the explicit expression \begin{equation} Q_g=\frac{1}{12}(-\Delta_g R_g+\frac{1}{4}R_g^2-3|E_g|^2) \end{equation} where $R_g$ is the scalar curvature and $E_g$ is the traceless part of the Ricci curvature. The invariance of Q-curvature for a closed 4-manifold $(M^4,g)$ is actually related to the Chern-Gauss-Bonnet formula \begin{equation} \chi(M^4)=\frac{1}{4\pi^2}\int_M(\frac{|W_g|^2}{8}+Q_g)dv_g \end{equation} where $W_g$ is the Weyl tensor of $g$. For non-compact case, Chang-Qing-Yang proved in \cite{CQY1} that if a non-compact complete conformally flat 4-manifold $(\mathbb R^4,g=e^{2u} |dx|^2)$ has finite total Q-curvature, i.e. \begin{equation} \int_{M^4}|Q_{g}|dv_g<\infty, \end{equation} and that the metric is normal, i.e. 
\begin{equation} u(x)=\frac{1}{4\pi^2}\int_{\mathbb R^4}\log\frac{|y|}{|x-y|}Q_g(y)e^{4u(y)}dy+C, \end{equation} then \begin{equation} \frac{1}{4\pi^2}\int_{\mathbb{R}^4}Q_{g} e^{4u(x)}dx\leq\chi(\mathbb R^4)=1 \end{equation} and \begin{equation} \chi(\mathbb R^4)-\frac{1}{4\pi^2}\int_{\mathbb{R}^4}Q_{g}(x)e^{4u(x)}dx=\lim_{r\rightarrow\infty}\frac{(\mathrm{Vol}_g(\partial B_r))^{\frac{4}{3}}}{4(2\pi^2)^{\frac{1}{3}}\mathrm{Vol}_g(B_r)} \end{equation} where $B_r$ is the ball of radius $r$ and the right hand side is the isoperimetric ratio on the conformally flat end. In the same paper, they showed that non-negativity of the scalar curvature $R_g$ will imply the normality of the metric, thus the theorem will still holds. It was pointed out in \cite{LW} that the above argument holds for all even-dimensional conformally flat manifolds with the same assumption. Recently, Lu-Wang in \cite{LW} proved that \begin{thm}(Theorem 1.5 of \cite{LW})\label{lw1} Let $(M^n,g)$ be an even dimensional locally conformally flat complete manifold with finite total Q-curvature and finitely many conformally flat simple ends. Suppose that on each end, the metric is normal. If $M^n$ is immersed in $\mathbb R^{n+1}$ satisfying \begin{equation} \int_{M^n}|L|^ndv_g<\infty \end{equation} with $L$ being the second fundamental form, then \begin{equation} \int_{M^n}Q_gdv_g\in 2c_n\mathbb Z \end{equation} where $c_n=2^{n-2}(\frac{n-2}{2})!\pi^{\frac{n}{2}}$ is the integral of the Q-curvature on the standard n-hemisphere $\mathbb S^n_+$. \end{thm} The above theorem should be compared with the quantization of total scalar curvature for immersed surfaces in \cite{White}. The normality of metric on the end is a natural condition for quantization, it will give control of the asymptotic behavior for metrics at infinity. In this note, we want to generalize the result of \cite{CQY1}. We prove that the metric is normal if the negative part of the scalar curvature is $L^{n/2}$ integrable. 
} \noindent \textbf{Acknowledgments:} The second author would like to thank Matt Gursky for the question regarding Theorem 1.3, and for inspiring discussions during the 2015 Princeton-Tokyo conference on geometric analysis. \section{Normality of the conformally flat metric} We only need to prove that the integrability of the negative part of the scalar curvature implies the normality of the metric on the end. \begin{defn} We call a metric $(\mathbb R^n,e^{2u}\delta_{ij})$ normal if it satisfies \begin{equation} u(x)=\frac{1}{c_n}\int_{\mathbb R^n}\log{\frac{|y|}{|x-y|} }Q_g(y)e^{nu(y)}dy+C \end{equation} for some constant $C$, where $c_n$ is the dimensional constant defined above. \end{defn} Let $v(x):=\frac{1}{c_n}\int_{\mathbb R^n}\log{\frac{|y|}{|x-y|} }Q_g(y)e^{nu(y)}dy$ and $h(x):=u(x)-v(x)$. We want to show that $h$ is a constant function. By the conformal transformation law of the $Q$-curvature and the fact that $\log\frac{1}{|x|}$ is a fundamental solution of the operator $\frac{1}{c_n}(-\Delta)^{\frac{n}{2}}$, we have \begin{equation}\label{1} \Delta^{\frac{n}{2}}h=\Delta^kh=0. \end{equation} Moreover, we can use the scalar curvature equation and the integrability condition to get an asymptotic decay of $\Delta h$. \begin{lem}\label{l1} With the same assumptions as in Theorem \ref{thm:main}, we have \begin{equation}\label{2} \limsup_{r\rightarrow\infty}\fint_{B_r}\Delta h=\limsup_{r\rightarrow\infty}\fint_{B_{2r}\setminus B_r}\Delta h\leq0.
\end{equation} \end{lem} \begin{proof} By the scalar curvature equation of a conformally flat manifold, we have \begin{equation} \begin{split} &\int_{B_{2r}\setminus B_r}\Delta u\\ =&\int_{B_{2r}\setminus B_r}-\frac{R_ge^{2u} }{2(n-1) }-\frac{n-2}{2}|\nabla u|^2\\ \leq&\int_{B_{2r}\setminus B_r}\frac{R_g^-e^{2u}}{2(n-1)}\\ \leq& \frac{1}{2(n-1)}(\int_{B_{2r}\setminus B_r}(R_g^-)^{\frac{n}{2}}e^{nu(x)}dx)^{\frac{2}{n}}\,|B_{2r}\setminus B_r|^{\frac{n-2}{n}}\\ \leq&C(n)r^{n-2},\:(\text{by (\ref{totalR})}) \end{split} \end{equation} where $|B_{2r}\setminus B_r|$ denotes the Euclidean volume of the annulus. So \begin{equation} \fint_{B_{2r}\setminus B_r}\Delta u\leq C(n)r^{-2},\qquad\text{and hence}\qquad \limsup_{r\rightarrow\infty}\fint_{B_{2r}\setminus B_r}\Delta u\leq0. \end{equation} For $v$, we integrate by parts: \begin{equation} \begin{split} &|\int_{B_{2r}\setminus B_r}\Delta v|\\ =&|\int_{\partial B_{2r}}\nabla v\cdot\nu-\int_{\partial B_r}\nabla v\cdot\nu |\\ \leq& \int_{\partial B_{2r}}|\nabla v|+\int_{\partial B_r}|\nabla v|, \end{split} \end{equation} where $\nu$ is the outward unit normal vector field on the boundary.
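The estimates below repeatedly use an elementary kernel bound on spheres, which we record here for the reader's convenience (we assume $n\geq4$, as we may since $n$ is even and the two-dimensional case is classical): for every $y\in\mathbb R^n$,
$$\int_{\partial B_r}\frac{1}{|x-y|^2}\,d\sigma(x)\leq C(n)\,r^{n-3}.$$
Indeed, writing $x=rz$ with $z\in\partial B_1$ gives
$$\int_{\partial B_r}\frac{1}{|x-y|^2}\,d\sigma(x)=r^{n-3}\int_{\partial B_1}\frac{1}{|z-y/r|^2}\,d\sigma(z),$$
and the last integral is bounded uniformly, since $|z-w|^{-2}$ is integrable on the $(n-1)$-dimensional unit sphere uniformly in $w$ (the worst case being $w\in\partial B_1$, where the singularity has order $2<n-1$).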
Notice that \begin{equation} \begin{split} \int_{\partial B_r}|\nabla v| \leq& (\int_{\partial B_r}|\nabla v|^2d\sigma)^{\frac{1}{2}}\cdot|\partial B_r|^{\frac{1}{2}}\\ =&C(n)r^{\frac{n-1}{2}}( \int_{\partial B_r} | \int_{\mathbb R^n} \frac{x-y}{|x-y|^2}Q_{g}e^{nu(y)}dy |^2 dx )^{\frac{1}{2}}\\ \leq& C(n)r^{\frac{n-1}{2}}\{ \int_{\partial B_r} [(\int_{\mathbb R^n} \frac{1}{|x-y|^2}|Q_g|e^{nu(y)}dy )(\int_{\mathbb R^n}|Q_g|dv_g)]dx \}^{\frac{1}{2}}\\ \leq &C(n)r^{\frac{n-1}{2}}\{ \int_{\partial B_r} (\int_{\mathbb R^n} \frac{1}{|x-y|^2}|Q_g|e^{nu(y)}dy )dx \}^{\frac{1}{2}}\\ = &C(n)r^{\frac{n-1}{2}}\{ \int_{\mathbb R^n}|Q_g|e^{nu(y)} (\int_{\partial B_r} \frac{1}{|x-y|^2}dx )dy \}^{\frac{1}{2}}\\ \leq &C(n)r^{\frac{n-1}{2}}\{ \int_{\mathbb R^n}|Q_g|e^{nu(y)} dy \}^{\frac{1}{2}} \sup_{y\in\mathbb R^n}(\int_{\partial B_r} \frac{1}{|x-y|^2}dx )^{\frac{1}{2}}\\ \leq&C'(n)r^{\frac{n-1}{2}} (r^{n-3})^{\frac{1}{2}}\\ =&O(r^{n-2}). \end{split} \end{equation} Here $|\partial B_r|$ denotes the Euclidean area of the sphere, the third line follows from the Cauchy--Schwarz inequality, and in the last inequality we used the bound $\sup_{y\in\mathbb R^n}\int_{\partial B_r}\frac{1}{|x-y|^2}dx\leq C(n)r^{n-3}$, the worst case being $y\in\partial B_r$. So \begin{equation}\label{v} |\fint_{B_{2r}\setminus B_r}\Delta v|= O(r^{-2})\rightarrow0. \end{equation} Combining the above, we get \begin{equation} \limsup_{r\rightarrow\infty}\fint_{B_{2r}\setminus B_r}\Delta h=\limsup_{r\rightarrow\infty}\fint_{B_{2r}\setminus B_r}(\Delta u-\Delta v)\leq0. \end{equation} \end{proof} The next lemma was proved in \cite{LW} in dimension 4, but the same argument works in all dimensions. \begin{lem}\label{l2} With the same assumptions as in Theorem \ref{thm:main}, we have \begin{equation} (\fint_{B_{2r}\setminus B_r}|\nabla v|)^2\leq\fint_{B_{2r}\setminus B_r}|\nabla v|^2=O(r^{-2}). \end{equation} \end{lem} \begin{proof} The first inequality is just H{\"o}lder's inequality. We only need to prove the order of decay for $\fint_{B_{2r}\setminus B_r}|\nabla v|^2$.
\begin{equation} \begin{split} \int_{B_{2r}\setminus B_r}|\nabla v|^2 &=\int_{B_{2r}\setminus B_r} | \int_{\mathbb R^n} \frac{x-y}{|x-y|^2}Q_{g}e^{nu(y)}dy |^2 dx\\ &\leq\int_{B_{2r}\setminus B_r}( \int_{\mathbb R^n} \frac{1}{|x-y|^2}|Q_{g}|e^{nu(y)}dy )( \int_{\mathbb R^n} |Q_{g}|e^{nu(y)}dy )dx\\ &\leq C(n)\int_{B_{2r}\setminus B_r}\int_{\mathbb R^n}\frac{1}{|x-y|^2}|Q_{g}|e^{nu(y)}dydx\\ &= C(n)\int_{\mathbb R^n}\int_{B_{2r}\setminus B_r}\frac{1}{|x-y|^2}|Q_{g}|e^{nu(y)}dxdy\\ &\leq C(n) \int_{\mathbb R^n} |Q_{g}|e^{nu(y)} (\sup_{y\in\mathbb R^n} \int_{B_{2r}\setminus B_r}\frac{1}{|x-y|^2}dx) dy\\ &\leq C(n) \int_{\mathbb R^n} |Q_{g}|e^{nu(y)} (\int_{B_{3r}}\frac{1}{|x|^2}dx)dy\\ &=O(r^{n-2}). \end{split} \end{equation} Here we used that for every $y\in\mathbb R^n$, $\int_{B_{2r}\setminus B_r}\frac{1}{|x-y|^2}dx\leq\int_{B_{3r}}\frac{1}{|x|^2}dx=C(n)r^{n-2}$, the worst case being $y\in\partial B_r$. Dividing by the volume of the annulus, which is comparable to $r^n$, proves the lemma. \end{proof} We are now ready to prove the main theorem. We denote by $\omega_n$ the volume of the unit ball in $\mathbb R^n$. The area of the unit sphere in $\mathbb R^n$ is then equal to $n\omega_n$. \begin{proof}[Proof of Theorem \ref{thm:main}] From (\ref{1}), we have $\Delta^{k}h=0$, and by Lemma \ref{l1}, $\limsup_{r\rightarrow\infty}\fint_{B_r}\Delta h\leq0$. As the first step, we will prove $\Delta^{k-1}h=0$. Since $\Delta^{k-1}h$ is harmonic, the mean value property gives, for any $p\in\mathbb R^{2k}$, \begin{equation} \begin{split} \Delta^{k-1}h(p) &=\frac{1}{\omega_nr^n}\int_{B_r(p)}\Delta^{k-1}h(x)dx\\ &=\frac{1}{\omega_nr^n}\int_{\partial B_r(p)}\nabla\Delta^{k-2}h(x)\cdot\nu\, dx\\ &=\frac{n}{r}\fint_{\partial B_r(p)}\nabla\Delta^{k-2}h(x)\cdot\nu\, dx\\ &=\frac{n}{r}\partial_r\fint_{\partial B_r(p)}\Delta^{k-2}h(x)dx.\\ \end{split} \end{equation} In particular, \begin{equation} \frac{r}{n}\Delta^{k-1}h(p)\leq \partial_r\fint_{\partial B_r(p)}\Delta^{k-2}h(x)dx.
\end{equation} Integrating the above inequality in $r$, we get \begin{equation} \begin{split} &\frac{1}{2n}r^2\Delta^{k-1}h(p)+\Delta^{k-2}h(p)\leq\fint_{\partial B_r(p)}\Delta^{k-2}h(x) dx,\\ &\frac{\omega_n}{2}r^{n+1}\Delta^{k-1}h(p)+n\omega_nr^{n-1}\Delta^{k-2}h(p)\leq\int_{\partial B_r(p)}\Delta^{k-2}h(x) dx. \end{split} \end{equation} Integrating both sides again, we get \begin{equation} \begin{split} &\frac{\omega_n}{2(n+2)}r^{n+2}\Delta^{k-1}h(p)+\omega_nr^{n}\Delta^{k-2}h(p)\\ \leq&\int_{B_r(p)}\Delta^{k-2}h(x) dx\\ =&\int_{\partial B_r}\nabla\Delta^{k-3}h(x)\cdot\nu\\ =&n\omega_nr^{n-1}\fint_{\partial B_r}\nabla\Delta^{k-3}h(x)\cdot\nu\\ =&n\omega_nr^{n-1}\partial_r\fint_{\partial B_r}\Delta^{k-3}h(x).\\ \end{split} \end{equation} We can rewrite this inequality as \begin{equation} \frac{1}{2n(n+2)}r^{3}\Delta^{k-1}h(p)+\frac{1}{n}r\Delta^{k-2}h(p)\leq\partial_r\fint_{\partial B_r}\Delta^{k-3}h(x).\end{equation} Integrating both sides in $r$, we obtain \begin{equation} \frac{1}{8n(n+2)}r^{4}\Delta^{k-1}h(p)+\frac{1}{2n}r^2\Delta^{k-2}h(p)+\Delta^{k-3}h(p)\leq \fint_{\partial B_r}\Delta^{k-3}h(x). \end{equation} Repeating this procedure finitely many times, we get \begin{equation}\label{3} \begin{split} &a_{k-1}r^{2(k-2)}\Delta^{k-1}h(p)+a_{k-2}r^{2(k-3)}\Delta^{k-2}h(p)+\cdots+a_1\Delta h(p)\\ \leq&\fint_{\partial B_r}\Delta h(x), \end{split}\end{equation} where \begin{equation} \begin{split} a_{k-1}&=\frac{1}{[2\cdot4\cdots(2k-4)]\cdot[2k(2k+2)(2k+4)\cdots(4k-6)]}\\ a_{k-2}&=\frac{1}{[2\cdot4\cdots(2k-6)]\cdot[(2k)(2k+2)\cdots(4k-8)]}\\ &\cdots\\ a_{k-j}&=\frac{1}{[2\cdot4\cdots(2(k-j)-2)]\cdot[(2k)(2k+2)\cdots(4(k-1)-2j)]}\\ &\cdots\\ a_2&=\frac{1}{2\cdot2k}\\ a_1&=1. \end{split} \end{equation} Each $a_i$, for $i=1,\cdots, k-1$, is a positive constant. By (\ref{2}), the leading coefficient $a_{k-1}\Delta^{k-1}h(p)$ of the polynomial in (\ref{3}) must be non-positive.
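As a consistency check of these coefficient formulas, take $k-j=3$ and $k-j=2$ in the general expression for $a_{k-j}$: recalling $n=2k$,
$$a_3=\frac{1}{[2\cdot4]\cdot[(2k)(2k+2)]}=\frac{1}{8n(n+2)},\qquad a_2=\frac{1}{2\cdot2k}=\frac{1}{2n},$$
which are exactly the coefficients of $r^{4}$ and $r^{2}$ appearing after the third integration displayed above (the recursion producing the $a_i$ does not depend on which iterate of $\Delta$ is being averaged).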
Since $a_{k-1}>0$, we have $\Delta^{k-1}h\leq0$ on $\mathbb R^n$. As $\Delta^{k-1}h$ is harmonic and bounded from above, Liouville's theorem gives \begin{equation} \begin{split} \Delta^{k-1}h(x)=\mathrm{constant}\leq0,\\ \Delta^{k-1}\partial_ih(x)=0, \end{split} \end{equation} for every $i=1,\cdots,n$. Applying the mean value property to $\Delta^{k-2}\partial_ih$ and integrating by parts as above, we have \begin{equation}\label{4} \begin{split} &a_{k-1}r^{2(k-2)}\Delta^{k-2}\partial_ih(p)+a_{k-2}r^{2(k-3)}\Delta^{k-3}\partial_ih(p)+\cdots+a_1\partial_i h(p)\\ \leq&\fint_{\partial B_r}\partial_ih(x). \end{split} \end{equation} Next, we make use of the scalar curvature equation to see that \begin{equation} \begin{split} &\limsup_{r\rightarrow\infty}\fint_{B_{2r}\setminus B_r}\left(\Delta u+\frac{n-2}{2}|\nabla u|^2\right)\\ =&\limsup_{r\rightarrow\infty}\fint_{B_{2r}\setminus B_r}-\frac{R_ge^{2u}}{2(n-1)} \leq0. \end{split} \end{equation} By H{\"o}lder's inequality, \begin{equation} \begin{split} &\limsup_{r\rightarrow\infty}\left[\fint_{B_{2r}\setminus B_r}\Delta u+\frac{n-2}{2}(\fint_{B_{2r}\setminus B_r}|\nabla u|)^2\right]\leq0. \end{split} \end{equation} Combining the estimates in Lemma \ref{l1} and Lemma \ref{l2}, we have \begin{equation}\label{5} \begin{split} &\limsup_{r\rightarrow\infty}\left[\fint_{B_{2r}\setminus B_r}\Delta h+\frac{n-2}{2}(\fint_{B_{2r}\setminus B_r}|\nabla h|)^2\right]\leq0. \end{split} \end{equation} Thus \begin{equation} \begin{split} &\limsup_{r\rightarrow\infty}\left[\fint_{B_{2r}\setminus B_r}\Delta h+\frac{n-2}{2}(\fint_{B_{2r}\setminus B_r}|\partial_i h|)^2\right]\leq0 \end{split} \end{equation} for each $i=1,...,n$. Plugging (\ref{3}) and (\ref{4}) into (\ref{5}), we obtain a polynomial $P_i(r)$ satisfying \begin{equation} \limsup_{r\rightarrow\infty}P_i(r)\leq0. \end{equation} So the leading coefficient of $P_i(r)$ must be non-positive. Namely \begin{equation} a_{k-1}^2|\Delta^{k-2}\partial_ih(p)|^2\leq0 \end{equation} for each $p\in\mathbb R^n$.
This implies that \begin{equation}\label{intermediate} \Delta^{k-2}\partial_ih=0 \end{equation} for each $i=1,...,n$, and thus \begin{equation} \Delta^{k-1}h=0. \end{equation} Now we have reduced the problem to \begin{equation} \begin{split} &\Delta^{k-1}h=0,\\ &\text{with}\\ &\limsup_{r\rightarrow\infty}\fint_{B_r}\Delta h\leq0. \end{split} \end{equation} As an intermediate step, we obtained (\ref{intermediate}): $\Delta^{k-2}\partial_ih=0$. We can apply this argument inductively to get \begin{equation} \begin{split} \Delta^{k-3}\partial_ih=0,\:\text{for each $i$},\\ \Delta^{k-2}h=0. \end{split} \end{equation} After finitely many steps, we get \begin{equation}\label{6} \Delta\partial_ih=0 \end{equation} for each $i=1,...,n$, and thus \begin{equation} \Delta^2h=0. \end{equation} Notice that the induction argument makes use of (\ref{2}), so we cannot obtain $\Delta h=0$ directly from the induction. However, since $\Delta h$ is harmonic, by Liouville's theorem and (\ref{2}), we have \begin{equation} \Delta h=C_0\leq0. \end{equation} Arguing as in \cite{CQY1}, \begin{equation}\label{7} \begin{split} &\limsup_{r\rightarrow\infty}\fint_{B_{2r}\setminus B_r}|\nabla h|^2\\ \leq &\limsup_{r\rightarrow\infty}\fint_{B_{2r}\setminus B_r} (2|\nabla u|^2+2|\nabla v|^2)\\ = &\limsup_{r\rightarrow\infty}\fint_{B_{2r}\setminus B_r} \left(-\frac{4}{n-2}\Delta u-\frac{2R_ge^{2u}}{(n-1)(n-2)}+2|\nabla v|^2\right)\\ \leq& \limsup_{r\rightarrow\infty}\fint_{B_{2r}\setminus B_r} -\frac{4}{n-2}\Delta h \\ &+ \limsup_{r\rightarrow\infty}\fint_{B_{2r}\setminus B_r} \left(-\frac{4}{n-2}\Delta v +\frac{2R_g^-e^{2u}}{(n-1)(n-2)}+2|\nabla v|^2\right)\\ \leq &-\frac{4}{n-2}C_0+0\\ =&-\frac{4}{n-2}C_0. \end{split} \end{equation} In the last inequality we used Lemma \ref{l1}, Lemma \ref{l2} and the integrability of the negative part of the scalar curvature. Equation (\ref{6}) tells us that $\partial_i h$ is harmonic for each $i=1,...,n$.
By Liouville's theorem again (by \eqref{7} and the sub-mean value property of $|\partial_ih|^2$, each harmonic function $\partial_ih$ is bounded), we have \begin{equation} \partial_i h=\mathrm{constant}, \end{equation} and thus \begin{equation} \Delta h=0. \end{equation} In particular $C_0=0$, so \eqref{7} yields $\limsup_{r\rightarrow\infty}\fint_{B_{2r}\setminus B_r}|\nabla h|^2\leq0$; since each $\partial_i h$ is constant, this forces $\partial_i h=0$ for each $i=1,...,n$. Therefore we have proved that $h$ is a constant. This concludes the proof that the metric is normal. Now that the metric is normal, Theorem 1.1 and Theorem 1.3 of \cite{CQY1} give (\ref{totalQ}). \end{proof} \section{Quasiconformal map and the isoperimetric inequality} \begin{proof}[Proof of Theorem \ref{thm:quasi}] In this theorem, we use the $L^{2}$-integrability of the scalar curvature in two ways. We first need it in order to apply Theorem \ref{thm:main} and show that the metric is normal. We also need it to prove that the divergence term in the $Q$-curvature vanishes. Since \eqref{R} holds, Theorem \ref{thm:main} implies that the metric is normal. Suppose in addition that the strict inequality holds in \eqref{totalQ}, i.e. \begin{equation}\label{strictQ} \displaystyle \frac{1}{c_4}\int_{M^4} Q_g dv_g < \chi(\mathbb{R}^{4} )=1. \end{equation} Then it was already proved by the second author in \cite{YW1} that there exists an $H$-quasiconformal map between $M^4$ and the Euclidean space with Jacobian comparable to the volume form $e^{4u}$. Thus the problem reduces to proving \eqref{strictQ}. For this purpose, we first recall that \begin{equation} Q_g=-\frac{1}{12}\Delta_g R_g+ 2\sigma_2 (A_g). \end{equation} In \cite[Lemma 3.2] {LW}, it was shown that if the metric is normal and the second fundamental form $L$ of the isometric embedding $M^4 \hookrightarrow \mathbb R^5$ satisfies \begin{equation}\label{L} \int_{M^4}|L|^{4}dv_g<\infty, \end{equation} then \begin{equation}\label{R2} \int_{M^4} \Delta_g R_g dv_g =0.\end{equation} We can adopt a similar method to prove \eqref{R2} under the assumption \eqref{R}: \begin{equation*} \int_{M^4}|R_g|^{2}dv_g<\infty. \end{equation*} For completeness we give the proof of \eqref{R2} in the following.
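Granting \eqref{R2} for the moment, the deduction of \eqref{strictQ} from \eqref{strictSigma} is a one-line computation: since $c_4=4\pi^2$ and $Q_g=-\frac{1}{12}\Delta_g R_g+2\sigma_2(A_g)$,
$$\frac{1}{c_4}\int_{M^4}Q_g\,dv_g=\frac{1}{4\pi^2}\int_{M^4}2\sigma_2(A_g)\,dv_g=\frac{1}{2\pi^2}\int_{M^4}\sigma_2(A_g)\,dv_g<1=\chi(\mathbb{R}^4).$$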
\begin{lem}\label{vanishing} Let $(M^4,g)=(\mathbb R^4,e^{2w} |dx|^2)$ be a noncompact complete conformally flat manifold, satisfying $$\int_{M^4}|Q_g|d v_g<\infty $$ and \begin{equation} \int_{M^4}|R_{g}|^{2}dv_g<\infty, \end{equation} and suppose that the metric is normal on each end. Then $$\int_{M^4} \Delta_g R_g dv_g=0.$$ \end{lem} \begin{proof}[Proof of Lemma \ref{vanishing}] Let $B^0(0, \rho)$ be the ball centered at the origin, with radius $\rho$ with respect to the Euclidean metric. On the Euclidean space, there always exists a smooth cut-off function $\eta_\rho$ which is supported on $B^0(0, 2\rho)$. It is equal to $1$ on $B^0(0, \rho)$, and its $k$-th derivative is of order $O(1/\rho^k)$ over the annulus $B^0(0, 2\rho)\setminus B^0(0, \rho)$. Since the $Q$-curvature is absolutely integrable, so is $\Delta_gR_g$. Since $\eta_\rho=1$ on $B^0(0,\rho) $, \begin{equation}\label{2.3} \begin{split} &\int_{M^4}\Delta_gR_g d v_g\\ =&\lim_{\rho\rightarrow\infty} \int_{ B^0(0, 2\rho) }\Delta_gR_g \eta_\rho d v_g\\ =&\lim_{\rho\rightarrow\infty} \int_{B^0(0, 2\rho)\setminus B^0(0, \rho)}R_g \Delta_g \eta_\rho d v_g.\\ \end{split}\end{equation} Here the last equality holds because all boundary terms in the integration by parts formula vanish, and $\Delta_g\eta_\rho=0$ on the complement of $B^0(0, 2\rho)\setminus B^0(0, \rho)$.
Using $$ dv_g=e^{4w}dx,$$ $$\Delta_g\eta_\rho dv_g=\partial_i(e^{2w}\partial_i\eta_\rho) dx, $$ we have \begin{equation} \begin{split} &\int_{B^0(0, 2\rho)\setminus B^0(0, \rho)} R_g \Delta_g\eta_\rho dv_g\\ &= \int_{B^0(0, 2\rho)\setminus B^0(0, \rho)} R_g \partial_i(e^{2w}\partial_i\eta_\rho )dx \\ =& \int_{B^0(0, 2\rho)\setminus B^0(0, \rho)} R_g (\Delta_0\eta_\rho e^{2w}+ \partial_i(e^{2w})\partial_i\eta_\rho )dx \\ \leq& C\int_{B^0(0, 2\rho)\setminus B^0(0, \rho)} \frac{|R_g|}{\rho^2} e^{2w}dx\\ & + C \int_{B^0(0, 2\rho)\setminus B^0(0, \rho)} \frac{|R_g| |\partial_i w|}{\rho}e^{2w}dx \\ =:& I+II.\\ \end{split} \end{equation} The first term $I$ can be bounded by the $L^2$-norm of the scalar curvature: \begin{equation}\begin{split} | I|\leq&C (\int_{B^0(0, 2\rho)\setminus B^0(0, \rho)} | R_g|^2 e^{4w}dx)^{1/2}\cdot (\int_{B^0(0, 2\rho)\setminus B^0(0, \rho)} \frac{1}{\rho^4}dx)^{1/2}\\ &\rightarrow 0,\\ \end{split}\end{equation} as $\rho$ tends to $\infty$, since the second factor remains bounded while the first tends to zero. We will now study $II$ through the asymptotic behavior of the derivatives of $w$. We notice that the pointwise estimate of $\partial_i w$ is not known. But since we are taking the integral over the annulus (with respect to the Euclidean metric), it can be reduced to an integral estimate of $\partial_i w$ over spheres at the end of the manifold.
\begin{equation}\begin{split} |II|\leq&C \int_{B^0(0, 2\rho)\setminus B^0(0, \rho)} \frac{|R_g| |\partial_i w|}{\rho}e^{2w}dx\\ \leq&C (\int_{B^0(0, 2\rho)\setminus B^0(0, \rho)} | R_g|^2 e^{4w}dx)^{1/2}\cdot (\int_{B^0(0, 2\rho)\setminus B^0(0, \rho)} \frac{|\partial_i w|^2}{\rho^2}dx)^{1/2}.\\ \end{split}\end{equation} Notice that \begin{equation}\begin{split}\label{3.1} & \int_{B^0(0, 2\rho)\setminus B^0(0, \rho)} |\partial_i w|^2dx\\ =&\int_{B^0(0, 2\rho)\setminus B^0(0, \rho)}\left|\frac1{4\pi^2}\int_{\mathbb R^4}\frac{x_i-y_i}{|x-y|^2}Q e^{4w(y)} d y\right|^2 d v_0\\ \leq&C \int_{B^0(0, 2\rho)\setminus B^0(0, \rho)} \left( \int_{\mathbb{R}^4}\frac{1}{|x-y|}|Q(y)|e^{4w(y)} dy\right)^2dx\\ \leq&C \int_{B^0(0, 2\rho)\setminus B^0(0, \rho)}\int_{\mathbb{R}^4}\frac{1}{|x-y|^2} |Q(y)|e^{4w(y)} dydx \cdot \int_{\mathbb{R}^4} |Q(y)|e^{4w(y)} dy.\\ \end{split}\end{equation} Since for any $y\in \mathbb{R}^4$, we have $$\int_{x\in \partial B^0(0, r)} \frac{1}{|x-y|^2} d\sigma(x)=| \partial B^0(0, r)|\cdot O(\frac{1}{r^2}),$$ \begin{equation} \begin{split}\int_{B^0(0, 2\rho)\setminus B^0(0, \rho)}\frac{1}{|x-y|^2} dx=&\int_\rho^{2\rho} \int_{x\in \partial B^0(0, r)} \frac{1}{|x-y|^2} d\sigma(x)dr\\ =&\int_\rho^{2\rho}| \partial B^0(0, r)|\cdot O(\frac{1}{r^2}) dr=O(\rho^2).\\\end{split} \end{equation} Plugging this into (\ref{3.1}), and using the fact that $\int_{\mathbb{R}^4}|Q(y)|e^{4w(y)} dy<\infty$, we obtain \begin{equation}\begin{split} \int_{B^0(0, 2\rho)\setminus B^0(0, \rho)} |\partial_i w|^2dx \leq C (\int_{\mathbb{R}^4}|Q(y)|e^{4w(y)} dy)^2 \cdot O(\rho^2)= O(\rho^2). \end{split}\end{equation} Therefore, \begin{equation}\begin{split} | II|\leq&C(\int_{B^0(0, 2\rho)\setminus B^0(0, \rho)} | R_g|^2 dv_g)^{1/2}\cdot (\frac{1}{\rho^2}\int_{B^0(0, 2\rho)\setminus B^0(0, \rho)} |\partial_i w|^2dx)^{1/2}\\ \leq&C(\int_{B^0(0, 2\rho)\setminus B^0(0, \rho)} | R_g|^2 dv_g)^{1/2}\rightarrow 0\\ \end{split}\end{equation} as $\rho$ tends to $\infty$.
To conclude, \begin{equation}\begin{split} |\int_{M^4} \Delta_g R_g dv_g|=&\lim_{\rho\rightarrow\infty}| \int_{B^0(0, 2\rho)\setminus B^0(0, \rho)} R_g \Delta_g \eta_\rho dv_g| \\ \leq& \lim_{\rho\rightarrow\infty} (|I|+|II|)=0.\\ \end{split}\end{equation} This completes the proof of the lemma. \end{proof} From this and inequality \eqref{strictSigma}, we deduce \eqref{strictQ}, completing the proof of the existence of a quasiconformal map. The isoperimetric inequality is a direct consequence of the existence of such an $H$-quasiconformal map. \end{proof} \begin{bibdiv} \begin{biblist} \bib{Alex1}{article}{ author={Alexakis, Spyros}, title={On the decomposition of global conformal invariants. I}, journal={Ann. of Math. (2)}, volume={170}, date={2009}, number={3}, pages={1241--1306}, issn={0003-486X}, review={\MR{2600873}}, doi={10.4007/annals.2009.170.1241}, } \bib{Alex2}{book}{ author={Alexakis, Spyros}, title={The decomposition of global conformal invariants}, series={Annals of Mathematics Studies}, volume={182}, publisher={Princeton University Press, Princeton, NJ}, date={2012}, pages={x+449}, } \bib{Branson}{article}{ author={Branson, Thomas P.}, title={Sharp inequalities, the functional determinant, and the complementary series}, journal={Trans. Amer. Math. Soc.}, volume={347}, date={1995}, number={10}, pages={3671--3742}, issn={0002-9947}, review={\MR{1316845}}, doi={10.2307/2155203}, } \bib{CQY1}{article}{ author={Chang, Sun-Yung A.}, author={Qing, Jie}, author={Yang, Paul C.}, title={On the Chern-Gauss-Bonnet integral for conformal metrics on $\mathbf R^4$}, journal={Duke Math. J.}, volume={103}, date={2000}, number={3}, pages={523--544}, issn={0012-7094}, review={\MR{1763657}}, doi={10.1215/S0012-7094-00-10335-3}, } \bib{CQY2}{article}{ author={Chang, Sun-Yung A.}, author={Qing, Jie}, author={Yang, Paul C.}, title={Compactification of a class of conformally flat 4-manifold}, journal={Invent.
Math.}, volume={142}, date={2000}, number={1}, pages={65--93}, issn={0020-9910}, review={\MR{1784799}}, doi={10.1007/s002220000083}, } \hide{ \bib{Chern}{book}{ author={Chern, S. S.}, author={Chen, W. H.}, author={Lam, K. S.}, title={Lectures on differential geometry}, series={Series on University Mathematics}, volume={1}, publisher={World Scientific Publishing Co., Inc., River Edge, NJ}, date={1999}, pages={x+356}, }} \hide{\bib{Chern-Osserman}{article}{ author={Chern, Shiing-Shen}, author={Osserman, Robert}, title={Complete minimal surfaces in euclidean $n$-space}, journal={J. Analyse Math.}, volume={19}, date={1967}, pages={15--34}, } \bib{Cohn-Vossen}{article}{ author={Cohn-Vossen, Stefan}, title={K\"urzeste Wege und Totalkr\"ummung auf Fl\"achen}, language={German}, journal={Compositio Math.}, volume={2}, date={1935}, pages={69--133}, issn={0010-437X}, review={\MR{1556908}}, }} \hide{ \bib{Saloff}{article}{ author={Coulhon, Thierry}, author={Saloff-Coste, Laurent}, title={Isop\'erim\'etrie pour les groupes et les vari\'et\'es}, language={French}, journal={Rev. Mat. Iberoamericana}, volume={9}, date={1993}, number={2}, pages={293--314}, issn={0213-2230}, review={\MR{1232845}}, doi={10.4171/RMI/138}, }} \hide{ \bib{DS1}{article}{ author={David, Guy}, author={Semmes, Stephen}, title={Strong $A_\infty$ weights, Sobolev inequalities and quasiconformal mappings}, conference={ title={Analysis and partial differential equations}, }, book={ series={Lecture Notes in Pure and Appl. Math.}, volume={122}, publisher={Dekker, New York}, }, date={1990}, pages={101--111}, review={\MR{1044784}}, }} \hide{\bib{FeffermanGraham}{article}{ author={Fefferman, Charles}, author={Graham, C. 
Robin}, title={The ambient metric}, series={Annals of Mathematics Studies}, volume={178}, publisher={Princeton University Press, Princeton, NJ}, date={2012}, pages={x+113}, isbn={978-0-691-15313-1}, review={\MR{2858236}}, }} \bib{Fiala}{article}{ author={Fiala, F.}, title={Le probl\`eme des isop\'erim\`etres sur les surfaces ouvertes \`a courbure positive}, language={French}, journal={Comment. Math. Helv.}, volume={13}, date={1941}, pages={293--346}, issn={0010-2571}, review={\MR{0006422}}, } \bib{Finn}{article}{ author={Finn, Robert}, title={On a class of conformal metrics, with application to differential geometry in the large}, journal={Comment. Math. Helv.}, volume={40}, date={1965}, pages={1--30}, issn={0010-2571}, review={\MR{0203618}}, } \bib{Hartman}{article}{ author={Hartman, Philip}, title={Geodesic parallel coordinates in the large}, journal={Amer. J. Math.}, volume={86}, date={1964}, pages={705--727}, issn={0002-9327}, } \bib{Huber}{article}{ author={Huber, Alfred}, title={On subharmonic functions and differential geometry in the large}, journal={Comment. Math. Helv.}, volume={32}, date={1957}, pages={13--72}, issn={0010-2571}, review={\MR{0094452}}, } \hide{\bib{lu}{article}{ author={Lu, Zhiqin}, title={On the lower order terms of the asymptotic expansion of Tian-Yau-Zelditch}, journal={Amer. J. Math.}, volume={122}, date={2000}, number={2}, pages={235--273}, issn={0002-9327}, }} \bib{LW}{article}{ author={Lu, Zhiqin}, author={Wang, Yi}, title={On locally conformally flat manifolds with finite total Q-curvature}, journal={Calc. Var. Partial Differential Equations}, note={to appear}, } \hide{\bib{Stein}{book}{ author={Stein, Elias M.}, title={Harmonic analysis: real-variable methods, orthogonality, and oscillatory integrals}, series={Princeton Mathematical Series}, volume={43}, note={With the assistance of Timothy S.
Murphy; Monographs in Harmonic Analysis, III}, publisher={Princeton University Press, Princeton, NJ}, date={1993}, pages={xiv+695}, isbn={0-691-03216-5}, review={\MR{1232192}}, } \bib{Varopoulos}{article}{ author={Varopoulos, N. Th.}, title={Small time Gaussian estimates of heat diffusion kernels. I. The semigroup technique}, journal={Bull. Sci. Math.}, volume={113}, date={1989}, number={3}, pages={253--277}, issn={0007-4497}, review={\MR{1016211}}, }} \bib{YW1}{article}{ author={Wang, Yi}, title={The isoperimetric inequality and quasiconformal maps on manifolds with finite total $Q$-curvature}, journal={Int. Math. Res. Not. IMRN}, date={2012}, number={2}, pages={394--422}, issn={1073-7928}, review={\MR{2876387}}, } \bib{YW2}{article}{ author={Wang, Yi}, title={The isoperimetric inequality and $Q$-curvature}, journal={Adv. Math.}, volume={281}, date={2015}, pages={823--844}, } \hide{ \bib{Weyl}{book}{ author={Weyl, Hermann}, title={The Classical Groups. Their Invariants and Representations}, publisher={Princeton University Press, Princeton, N.J.}, date={1939}, pages={xii+302}, } \bib{White}{article}{ author={White, Brian}, title={Complete surfaces of finite total curvature}, journal={J. Differential Geom.}, volume={26}, date={1987}, number={2}, pages={315--326}, issn={0022-040X}, } \bib{WhiteErratum}{article}{ author={White, Brian}, title={Correction to: ``Complete surfaces of finite total curvature'' [J. Differential Geom.\ {\bf 26} (1987), no.\ 2, 315--326; MR0906393 (88m:53020)]}, journal={J. Differential Geom.}, volume={28}, date={1988}, number={2}, pages={359--360}, issn={0022-040X}, }} \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} \label{sec:intro} Some of the most interesting unsolved problems in general relativity require full dynamical solutions of Einstein's equations in three spatial dimensions. Such solutions have to be found numerically, and this is only barely becoming technically feasible. An important set of problems in this category is the binary coalescence of black holes and the binary coalescence of neutron stars. Such events are expected to be a significant source of gravitational waves that will be detectable by new generations of detectors such as LIGO. In Newtonian physics, binary stars can orbit in an equilibrium system. In general relativity, by contrast, a binary system loses energy by gravitational wave emission. The orbit shrinks, and the two stars ultimately coalesce. Though this is clearly not an equilibrium situation, the orbital decay occurs on a much longer timescale than an orbital period, at least up until the last plunging orbit when the stars are very close. Preliminary calculations of binary coalescence and gravitational collapse suggest that the amount of energy radiated gravitationally is small. Thus, even when the system becomes highly dynamical and far from equilibrium, one might expect that it is still the nonradiative part of the gravitational field that controls the evolution. Wilson\cite{Frontiers:Wilson,Texas_90:Wilson} has proposed an approximation scheme that tracks the evolution of coalescing binary neutron stars without solving the full dynamical Einstein field equations. The method may also be applicable to binary black hole systems\cite{Note1}. The scheme applies to systems that are either in or near equilibrium, in which case a reduced set of Einstein's equations should adequately describe the system. For example, a binary system is near equilibrium as long as the emission of gravitational radiation is small.
In strict equilibrium, as in the case of a single rotating star, there is a coordinate frame in which the first and second time derivatives of the metric are zero. In the $3+1$ formalism, this means in particular that the time derivatives of the 3-metric $\gamma_{ij}$ and the extrinsic curvature $K_{ij}$ are zero. In quasi-equilibrium, the time derivatives are small, and the metric and extrinsic curvature will not depart significantly from their initial values. Wilson's approximation consists in setting time derivatives exactly equal to zero in a selected subset of Einstein's equations, and ignoring the remaining dynamical equations. This approximation results in a smaller, more tractable set of field equations. Part of the strategy for selecting the subset of Einstein's equations is to guarantee that $\gamma_{ij}$ and $K_{ij}$ are solutions of the initial-value (or constraint) equations. Wilson has proposed evolving the system through a sequence of initial-value problems by solving the full dynamical equations for the {\it matter} in the instantaneous background metric, and then updating the metric quantities at each time step by re-solving the selected subset of Einstein's equations. We will outline below a simpler method to track the evolution, which exploits the near equilibrium of the matter as well. As compelling as this idea sounds, it is impossible to calibrate the approximation without comparing it with solutions to the exact equations. No such exact solutions exist for realistic, dynamical 3-dimensional cases. Only recently has it become possible to solve Einstein's equations numerically for interesting 2-dimensional problems. In fact, it is only in the last few years that as simple a problem as the equilibrium structure of a rapidly rotating relativistic star could be thoroughly investigated. In this paper, we use these rotating equilibrium solutions to calibrate Wilson's approximation scheme. 
This is the simplest case for which the approximation scheme is different from the exact equations. Because the system is a true equilibrium, it is clearly necessary that the approximation work well in this case. Only then will we have confidence that the method is at all useful in more complicated situations such as binary systems. \section{Basic equations} \label{sec:basic_eqns} A general metric may be written in $3+1$ form as \begin{equation} ds^2 = - \alpha^2 dt^2 + \gamma_{ij} (dx^i + \beta^i dt) (dx^j + \beta^j dt). \end{equation} The dynamical equation for $\gamma_{ij}$ is \begin{equation} \label{eqn:met_evol} \partial_t \gamma_{ij} = -2\alpha K_{ij} + D_i \beta_j + D_j \beta_i, \end{equation} where $D_i$ denotes a covariant derivative with respect to $\gamma_{ij}$. The trace of this equation is \begin{equation} \partial_t \ln \gamma^{1/2} = - \alpha K + D_i \beta^i, \end{equation} where $\gamma=\hbox{det}\gamma_{ij}$ and $K=K^i{}_i$. The trace-free part of Eq.~(\ref{eqn:met_evol}) is \begin{eqnarray} \label{eqn:met_evol_tf} \gamma^{1/3}\partial_t(\gamma^{-1/3} \gamma_{ij}) &=& -2\alpha (K_{ij} - {1 \over 3} \gamma_{ij} K) + \nonumber \\ &&\mbox{} + D_i \beta_j + D_j \beta_i - {2 \over 3} \gamma_{ij} D_k \beta^k. \end{eqnarray} We fix the six components of the extrinsic curvature $K_{ij}$ by demanding that each data slice be a maximal slice and that the left-hand side of Eq.~(\ref{eqn:met_evol_tf}) be equal to zero. This gives \begin{equation} \label{eqn:max_slice} K = 0, \end{equation} and \begin{equation} \label{eqn:K_def} 2 \alpha K_{ij} = D_i \beta_j + D_j \beta_i - {2\over 3} \gamma_{ij} D_k \beta^k. \end{equation} Note that $\partial_t\gamma\neq 0$ unless $D_i\beta^i=0$. To solve the Hamiltonian constraint equation, it is convenient to use a conformal decomposition of the spatial metric. 
To satisfy the demand that the left-hand side of Eq.~(\ref{eqn:met_evol_tf}) be zero, we choose the metric to be conformally flat\cite{Note2} so that $\gamma^{-1/3}\gamma_{ij} = f_{ij}$, where $f_{ij}$ is the flat metric in whatever coordinate system is used. Therefore, we decompose the spatial metric as \begin{equation} \gamma_{ij} = \Phi^4 f_{ij}. \end{equation} The conformal factor $\Phi$ is determined then by the Hamiltonian constraint \begin{equation} \label{eqn:ham_const} \nabla^2 \Phi = - {1 \over 8} \Phi^5 K^{ij} K_{ij} - 2 \pi \Phi^5 \rho, \end{equation} where the source term is \begin{equation} \label{eqn:rho_def} \rho = n^a n^b T_{ab}. \end{equation} Here $n^a$ is the normal vector to a $t={}$constant slice, $T_{ab}$ is the stress-energy tensor, and $\nabla^2$ is the flat-space Laplacian. Note that although indices $i,j,\ldots$ range over $1,\ldots,3$, indices $a,b,\ldots$ range over $0,\ldots,3$. The shift vector is determined by substituting Eq.~(\ref{eqn:K_def}) into the momentum constraint \begin{equation} \label{eqn:mom_const} D_j K^{ij} = 8 \pi S^i, \end{equation} where \begin{equation} \label{eqn:S_def} S^a = - \gamma^a{}_b n_c T^{bc}. \end{equation} We use the results that for a conformally flat metric we may write \begin{eqnarray} D^j \beta^i + D^i \beta^j - {2 \over 3} \gamma^{ij} D_k \beta^k = \hspace{1.25in} & \nonumber \\ \Phi^{-4} \left[\nabla^j \beta^i + \nabla^i \beta^j - {2 \over 3} f^{ij} \nabla_k \beta^k \right], \end{eqnarray} and for $K=0$, \begin{equation} D_j K^{ij} = \Phi^{-10} \nabla_j (\Phi^{10} K^{ij}), \end{equation} where $\nabla_j$ denotes the covariant derivative in flat space. Thus Eq.~(\ref{eqn:mom_const}) becomes \begin{eqnarray} \nabla^2 \beta^i + {1 \over 3} \nabla^i (\nabla_j \beta^j) = 16 \pi \alpha \Phi^4 S^i \hspace{1.05in} & \\ \mbox{} + \left({1 \over \alpha} \nabla_j \alpha - {6 \over \Phi} \nabla_j \Phi \right) \!\!\!\left( \nabla^j \beta^i + \nabla^i \beta^j - {2 \over 3} f^{ij} \nabla_k \beta^k \right). 
\hspace{-0.25in}\nonumber \end{eqnarray} This equation can be simplified to two equations, one involving a vector Laplacian and the other a scalar Laplacian, by setting \begin{equation} \label{eqn:shift_dec} \beta^i = G^i - {1 \over 4} \nabla^i B. \end{equation} Then the two equations that must be solved are \begin{eqnarray} \label{eqn:G_eqn} \nabla^2 G^i = 16 \pi \alpha \Phi^4 S^i \hspace{1.85in} & \\ \mbox{} + \left({1 \over \alpha} \nabla_j \alpha - {6 \over \Phi} \nabla_j \Phi \right) \left( \nabla^j \beta^i + \nabla^i \beta^j - {2 \over 3} f^{ij} \nabla_k \beta^k \right) \hspace{-0.25in} \nonumber \end{eqnarray} and \begin{equation} \label{eqn:B_eqn} \nabla^2 B = \nabla_i G^i. \end{equation} Though we are not imposing the full set of dynamical equations for the evolution of $K_{ij}$, we do have the freedom to preserve the maximal slicing condition (\ref{eqn:max_slice}) by requiring $\partial_t K=0$. The resulting equation can also be written with a simple Laplacian by using Eq.~(\ref{eqn:ham_const}). The result is the lapse equation \begin{equation} \label{eqn:lapse_eqn} \nabla^2 (\alpha \Phi) = (\alpha \Phi) \left[{7 \over 8} \Phi^4 K_{ij} K^{ij} + 2 \pi \Phi^4 (\rho + 2S) \right], \end{equation} where \begin{equation} \label{eqn:Strace_def} S=\gamma^{ij} T_{ij}. \end{equation} The above field equations, in combination with the matter equations to be discussed below, form a coupled nonlinear set that must be solved by iteration. The boundary conditions for the field quantities follow from asymptotic flatness; the specific form depends on the application. We are especially interested in uniformly rotating configurations such as binary neutron stars in synchronous orbit. For such systems we work in a corotating coordinate system so that there is no time variation of the fields (in the near equilibrium approximation of the method). 
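Returning briefly to the decomposition (\ref{eqn:shift_dec}), one can verify directly that it achieves the stated simplification: substituting $\beta^i = G^i - {1 \over 4} \nabla^i B$ into the left-hand side and commuting the flat-space derivatives gives
\begin{displaymath}
\nabla^2 \beta^i + {1 \over 3} \nabla^i (\nabla_j \beta^j)
= \nabla^2 G^i + \left(- {1 \over 4} + {1 \over 3} - {1 \over 12} \right) \nabla^i (\nabla_j G^j)
= \nabla^2 G^i,
\end{displaymath}
where Eq.~(\ref{eqn:B_eqn}), $\nabla^2 B = \nabla_i G^i$, has been used to eliminate $B$ from each gradient term.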
Following Wilson\cite{Texas_90:Wilson}, we can implement this by replacing Eq.~(\ref{eqn:shift_dec}) with \begin{equation} \beta^i = G^i - {1 \over 4} \nabla^i B + \left({\bf \Omega} \times {\bf r} \right)^i, \end{equation} which leaves Eqs.~(\ref{eqn:B_eqn}) and (\ref{eqn:G_eqn}) unchanged. Here $\bf \Omega$ is the constant angular velocity of the system. For the matter, we will consider a perfect fluid for which \begin{equation} T_{ab} = \left(\rho_0 + \rho_i + P\right) U_a U_b + Pg_{ab}. \end{equation} Here $\rho_0$ is the rest-mass density, $\rho_i$ is the internal energy density, $P$ is the pressure, and $U^a$ is the fluid 4-velocity. For this source, the density $\rho$ in Eq.~(\ref{eqn:rho_def}) is \begin{equation} \rho = \left(\rho_0 + \rho_i + P\right) \left(\alpha U^t\right)^2 - P, \end{equation} the momentum source $S^i$ in Eq.~(\ref{eqn:S_def}) is \begin{equation} S^i = \left(\rho_0 + \rho_i + P \right) \left(\alpha U^t\right) \gamma^{ij} U_j, \end{equation} and the source term $S$ in Eq.~(\ref{eqn:Strace_def}) is \begin{equation} S= \left(\rho_0 + \rho_i + P \right) \left[(\alpha U^t)^2 -1 \right] + 3P. \end{equation} We treat fluids that are in uniform rotation, for which the 4-velocity $U^a$ is given by \begin{equation} \label{eqn:U_def} \vec{U} = U^t \left({\partial \over \partial t} + \Omega {\partial \over \partial \phi} \right). \end{equation} The normalization condition $\vec U\cdot\vec U=-1$ gives \begin{equation} \alpha U^t = \left(1 + \Phi^{-4} f^{ij} U_i U_j \right)^{1/2}. \end{equation} Now consider the equations for the matter in the near equilibrium approximation. The key approximation is that in the corotating frame there is a Killing vector that is timelike everywhere. In the nonrotating coordinates, this vector can be written as \begin{equation} \vec\xi = {\partial \over \partial t} + \Omega {\partial \over \partial \phi}. 
\end{equation} Because the 4-velocity (\ref{eqn:U_def}) is proportional to a Killing vector, the matter equations may be integrated to give the hydrostatic equilibrium result\cite{problembook} \begin{equation} \label{eqn:enth_def} {U^t \over h} = {\rm constant}, \end{equation} where \begin{equation} \ln h \equiv \int {dP \over \rho_0 + \rho_i + P}. \end{equation} For a polytropic equation of state \begin{equation} P = K\rho^\Gamma_0, \end{equation} where $K$ and $\Gamma$ are constants, we have \begin{equation} \rho_i = {P \over \Gamma - 1},\qquad h = {\rho_0 + \rho_i + P \over \rho_0}. \end{equation} In this approximation, we have reduced all of the hydrodynamics to a single algebraic equation, Eq.~(\ref{eqn:enth_def}). \section{Axisymmetric Rotating Star: Equations} \label{sec:axi_eqns} To calibrate the method, we apply it to a true equilibrium system in axisymmetry and compare with the complete numerical solution found with no approximations. For this purpose, we use models of rotating neutron stars supported by a polytropic equation of state. Fully relativistic models have been constructed by several authors (see Refs.~\cite{cook92,cook94a,cook94c} and references therein). Solving Einstein's equations for these stars is nontrivial numerically. It is only the recent availability of such solutions that makes this calibration feasible. In spherical polar coordinates and axisymmetry, we find that Eqs.~(\ref{eqn:G_eqn}) and (\ref{eqn:B_eqn}) are satisfied by setting the quantity $B$ of Eq.~(\ref{eqn:shift_dec}) to zero and taking $\beta^\phi\equiv\beta$ to be the only nonzero component of the shift vector. Note that this implies not only that the left-hand side of Eq.~(\ref{eqn:met_evol_tf}) is zero, but also that $\partial_t\gamma = 0$. This means that we are finding a stationary solution of the approximate equations.
Given this solution for the shift vector, the term $K_{ij}K^{ij}$ appearing in Eqs.~(\ref{eqn:ham_const}) and (\ref{eqn:lapse_eqn}) is given by \begin{equation} K_{ij} K^{ij} = {\sin^2 \theta \over 2 \alpha^2} \left(r^2 \beta^2_{,r} + \beta^2_{,\theta}\right), \end{equation} where commas denote partial derivatives. Only the $\phi$-component of the vector Eq.~(\ref{eqn:G_eqn}) is nontrivial, and becomes the scalar equation \begin{eqnarray} \label{eqn:shift_eqn} \left[\nabla^2 + {2 \over r} {\partial \over \partial r} + {2 \cot \theta \over r^2} {\partial \over \partial \theta} \right] \!\beta = \left( {1 \over \alpha} {\partial \alpha \over \partial r} - {6 \over \Phi}{\partial \Phi \over \partial r} \right) \!{\partial \beta \over \partial r} & \nonumber \\ \mbox{} + {1 \over r^2} \left({1 \over \alpha} {\partial \alpha \over \partial \theta} - {6 \over \Phi} {\partial \Phi \over \partial \theta} \right) {\partial \beta \over \partial \theta} + {16 \pi \alpha \over r^2 \sin^2 \theta} S_{\phi}. \end{eqnarray} The 4-velocity components appearing in the matter sources are given by \begin{eqnarray} U^t &=& \left[\alpha^2 - \Phi^4 r^2 \sin^2 \theta (\beta + \Omega)^2 \right]^{-1/2}, \\ \nonumber U_\phi &=& \Phi^4 r^2 \sin^2 \theta\, U^t \left(\beta + \Omega \right). \end{eqnarray} The above equations turn out to be simplified versions of the exact equations for stationary, axisymmetric configurations given in Ref.~\cite{cook92} (henceforth CST\cite{Note3}). The exact metric has four nonzero metric coefficients, denoted by $\gamma$, $\rho$, $\alpha$ and $\omega$ in CST, though the approximate metric here has only three: $\alpha$, $\beta$, and $\Phi$. Thus even though there is no dynamics in the field, and even though the equation of hydrostatic equilibrium for the matter is rigorously obeyed, the Wilson scheme is still an approximation for this problem. 
The correspondence between the approximate and exact metric coefficients is given by \begin{eqnarray} \label{eqn:CST_coefs} \alpha^2 &=& e^{\gamma + \rho}, \\ \nonumber \Phi^4 &=& e^{\gamma - \rho} = e^{2 \alpha_{\rm CST}}, \\ \beta &=& - \omega. \nonumber \end{eqnarray} The fluid velocity $v$ in the ZAMO frame used in CST is given by \begin{equation} (\alpha U^t)^2 = {1 \over 1 - v^2}. \end{equation} In spherical symmetry, the approximate scheme reduces to the exact scheme, with two nonzero metric coefficients. We will now quantify the degree of error in the nonspherical axisymmetric case. We can take over the numerical scheme of CST to solve the approximate equilibrium equations. In fact, the structure of the equations is very close in that they involve the same differential operators on the left-hand sides. In particular, Eqs.~(\ref{eqn:ham_const}) and (\ref{eqn:lapse_eqn}) involve $\nabla^2$, as in Eq.~(3) of CST, and the operator in Eq.~(\ref{eqn:shift_eqn}) is the same as that in Eq.~(5) of CST. Therefore the solution is computed as in Eqs.~(27) and (29) of CST. 
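CST obtain the solutions of these elliptic equations from an integral (Green-function) representation. Purely as an illustration of the character of the boundary-value problems involved (a toy sketch of ours, not the method of CST), the vacuum, spherically symmetric Hamiltonian constraint $\nabla^2\Phi = 0$ can be rewritten with $u = r\Phi$, so that $u'' = 0$, and relaxed by Gauss-Seidel sweeps; with Dirichlet data taken from the analytic vacuum solution $\Phi = 1 + M/2r$, the relaxed solution must reproduce that profile:

```python
# Toy Gauss-Seidel relaxation (ours) for the vacuum, spherically symmetric
# Hamiltonian constraint nabla^2 Phi = 0, written via u = r*Phi as u'' = 0.
# Illustrative only; CST actually use an integral Green-function method.
M = 1.0                                  # mass, geometrized units (illustrative)
N = 51                                   # radial grid points
r_min, r_max = 1.0, 10.0
h = (r_max - r_min) / (N - 1)
r = [r_min + i * h for i in range(N)]

phi_exact = [1.0 + M / (2.0 * ri) for ri in r]   # analytic vacuum solution
u = [ri for ri in r]                     # initial guess: Phi = 1 everywhere
u[0] = r[0] * phi_exact[0]               # Dirichlet data from the analytic
u[-1] = r[-1] * phi_exact[-1]            # solution at both boundaries

for sweep in range(20000):               # Gauss-Seidel relaxation of u'' = 0
    for i in range(1, N - 1):
        u[i] = 0.5 * (u[i - 1] + u[i + 1])

phi = [ui / ri for ui, ri in zip(u, r)]
err = max(abs(p - q) for p, q in zip(phi, phi_exact))
```

Since the converged finite-difference solution of $u''=0$ is exactly linear in $r$, the relaxed $\Phi$ agrees with $1+M/2r$ at the grid points to iteration tolerance.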
The nondimensionalized source terms analogous to Eq.~(30) of CST are \begin{eqnarray} \tilde{S}_\Phi(s,\mu) = - {1 \over 16} {\Phi^7 \over \left(\alpha \Phi \right)^2} \left(1-\mu^2 \right) \left({s \over 1-s}\right)^2 \hspace{0.55in} &\\ \mbox{} \times \Bigl\{\left[ \left(1-s\right) s \hat \omega_{,s} \right]^2 + \left(1 - \mu^2 \right) \hat\omega^2_{,\mu}\Bigr\} \nonumber \\ \mbox{} - 2 \pi \Phi^5 \bar{r}_e^2 \left({s \over 1-s}\right)^2 \left[\left(\bar\rho_0 +\bar\rho_i + \bar{P} \right) {1 \over 1 -v^2} - \bar{P}\right], \hspace{-0.15in} \nonumber \end{eqnarray} \begin{eqnarray} \tilde{S}_{\alpha\Phi}(s,\mu) = \alpha\Phi \Bigg[\!\!\Bigg[ {7 \over 16} {\Phi^6 \over (\alpha \Phi)^2} \left(1 - \mu^2\right) \left({s \over 1-s}\right)^2 \hspace{0.26in} & \\ \mbox{}\times\Bigl\{[\left(1-s\right) s \hat\omega_{,s}]^2 + \left(1-\mu^2\right)\hat\omega^2_{,\mu} \Bigr\} \nonumber \\ \mbox{} + 2 \pi \Phi^4 \bar{r}^2_e \left({s\over 1-s}\right)^2 \left[\left(\bar\rho_0 + \bar\rho_i + \bar{P} \right) {1 \over 1-v^2} - \bar{P} \right] \hspace{-0.17in}\nonumber \\ \mbox{} + 4 \pi \Phi^4 \bar{r}^2_e \left({s \over 1-s}\right)^{\!2} \left[\left(\bar\rho_0 + \bar\rho_i +\bar{P}\right) {v^2 \over 1-v^2} + 3\bar{P} \right]\!\!\Bigg]\!\!\Bigg], \hspace{-0.3in} \nonumber \end{eqnarray} and the source term analogous to Eq.~(32) of CST is \begin{eqnarray} \tilde{S}_{\hat\omega}(s,\mu) &=& s^2 (1-s)^2 \left[{1 \over \alpha\Phi} \left(\alpha \Phi\right)_{,s} - {7 \over \Phi} \Phi_{,s} \right] \hat\omega_{,s} \nonumber \\ &&\mbox{} + (1 - \mu^2) \left[{1 \over \alpha \Phi} \left(\alpha \Phi\right)_{,\mu} - {7 \over \Phi} \Phi_{, \mu} \right] \hat\omega_{, \mu} \\ \nonumber &&\mbox{} - 16 \pi \Phi^4 \bar{r}^2_e \left({s \over 1-s}\right)^2 { \bar\rho_0 + \bar\rho_i+\bar{P} \over 1-v^2} \left(\hat\Omega - \hat\omega \right), \end{eqnarray} where $s$ is an auxiliary radial coordinate defined in CST. 
The entire iterative scheme used to solve the approximate equations is identical to the one in CST. To calibrate the approximation, we first compute an exact sequence of constant rest mass polytropes of increasing angular momentum. Each member of the sequence is specified by two parameters: the ratio of polar to equatorial radius, and the central rest-mass density. We next compute the approximate sequence using the same values for these two parameters for each model. We then compare the metric coefficients of corresponding models using the relationships in (\ref{eqn:CST_coefs}). We also compare global quantities such as the total mass and angular momentum. As a further diagnostic, we calculate two relativistic virial quantities\cite{gourgoulhon94,bonazzola94} whose values should be identically one for an exact equilibrium solution. In the notation of CST, these quantities are \begin{eqnarray} \lambda_{2d} = 32\pi\int\left[P + (\epsilon + P){v^2\over 1-v^2}\right] e^{2\alpha} r dr d \theta \Bigg/ \hspace{0.25in} &\\ \int\Biggl\{\left({\partial\gamma\over\partial r} + {\partial\rho\over\partial r}\right)^2 + {1 \over r^2} \left({\partial\gamma\over\partial\theta} + {\partial\rho\over\partial\theta}\right)^2 \hspace{0.7in}\nonumber \\ \mbox{} - 3e^{-2\rho}\sin^2\theta\left[ r^2 \left({\partial\omega\over\partial r}\right)^2 + \left({\partial\omega\over\partial\theta}\right)^2 \right]\Biggr\} r dr d\theta, \hspace{-0.3in}\nonumber \end{eqnarray} \begin{eqnarray} \lambda_{3d} = 16 \pi \int \left[3P + (\epsilon + P ) {v^2 \over 1-v^2} \right] \hspace{0.85in} \\ \mbox{}\times e^{2\alpha+(\gamma-\rho)/2} r^2 \sin\theta dr d\theta\Bigg/ &\nonumber \\ \int\bigg[\partial(\gamma + \rho)\partial(\gamma + \rho) - \partial\alpha\partial\gamma + \partial\alpha\partial\rho \hspace{0.75in} \nonumber \\ \mbox{} - {1\over 2r}(1-e^{2\alpha-\gamma+\rho}) \Bigl(4{\partial\alpha\over\partial r} + {4 \over r \tan\theta}{\partial\alpha\over\partial\theta} - {\partial\gamma\over\partial r}
\nonumber \\ \mbox{} - {1 \over r \tan\theta}{\partial\gamma\over\partial\theta} + {\partial\rho\over\partial r} + {1 \over r \tan\theta} {\partial\rho\over\partial\theta}\Bigr) \hspace{-0.25in} \nonumber \\ \mbox{} -{3\over2}e^{-2\rho}r^2\sin^2\theta \partial\omega\partial\omega\bigg] e^{(\gamma-\rho)/2} r^2 \sin \theta dr d\theta,\hspace{-0.35in}\nonumber \end{eqnarray} where \begin{equation} \partial \alpha \partial \rho \equiv {\partial \alpha \over \partial r} {\partial \rho \over \partial r} + {1 \over r^2} {\partial \alpha \over \partial \theta} {\partial \rho \over \partial \theta} \end{equation} and $\epsilon = \rho_0 + \rho_i$ is the total mass-energy density. Here $\lambda_{3d}$ involves an integration with a 3-dimensional volume element $r^2\sin\theta\,dr\,d\theta$ and is the relativistic generalization of the classical virial theorem \begin{equation} 2 E_{\rm kin} + 3(\Gamma-1) E_{\rm int} + U_{\rm grav} =0. \end{equation} The quantity $\lambda_{2d}$ involves an integration with a 2-dimensional volume element $r\,dr\,d\theta$. The discrepancy from unity is a measure of {\it numerical} error for our solutions of the exact equations. It is a measure of the larger {\it inherent} error for our solutions of the approximate equations. \section{Axisymmetric Rotating Star: Numerical Results} \label{sec:axi_results} To calibrate the approximate scheme against the exact solution, we choose the most stringent case, in which the configuration is very relativistic and rapidly rotating. When it is rotating rapidly, there are large deviations from spherical symmetry, so that the approximation is no longer exact. For polytropes, the largest rotation is attained for nearly incompressible matter, i.e., for large $\Gamma=1+1/n$ or small polytropic index $n$. We choose $n=0.5$.
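The classical virial theorem quoted above can be checked numerically for the Newtonian limit of the configurations considered here. The sketch below (ours, purely illustrative) integrates the Lane-Emden equation for a nonrotating $n=0.5$ polytrope, for which $E_{\rm kin}=0$ and $E_{\rm int}=\int P/(\Gamma-1)\,dV$, so the theorem reduces to $3\int P\,dV = -U_{\rm grav}$; the unit choice $G=\rho_c=P_c=1$ is ours:

```python
import math

# Check of 2 E_kin + 3(Gamma-1) E_int + U_grav = 0 for a nonrotating
# Newtonian n = 0.5 polytrope via the Lane-Emden equation; with E_kin = 0
# and E_int = int P/(Gamma-1) dV it reduces to 3 int P dV = -U_grav.
n = 0.5                                    # polytropic index used in the text
a = math.sqrt((n + 1) / (4.0 * math.pi))   # Lane-Emden length scale, G=rho_c=P_c=1

def lane_emden_rhs(x, th, dth):
    # theta'' = -theta^n - (2/x) theta'; clamp theta at 0 for RK4 substeps
    return dth, -max(th, 0.0) ** n - 2.0 * dth / x

h = 1.0e-4                                 # RK4 step in xi
xi = 1.0e-6                                # series start near the center
th, dth = 1.0 - xi * xi / 6.0, -xi / 3.0
xs, ths = [xi], [th]
while th > 0.0:                            # integrate out to the surface theta = 0
    k1 = lane_emden_rhs(xi, th, dth)
    k2 = lane_emden_rhs(xi + h / 2, th + h / 2 * k1[0], dth + h / 2 * k1[1])
    k3 = lane_emden_rhs(xi + h / 2, th + h / 2 * k2[0], dth + h / 2 * k2[1])
    k4 = lane_emden_rhs(xi + h, th + h * k3[0], dth + h * k3[1])
    th += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    dth += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    xi += h
    if th > 0.0:
        xs.append(xi)
        ths.append(th)

r = [a * x for x in xs]                    # r = a xi, rho = theta^n, P = theta^(n+1)
rho = [t ** n for t in ths]
P = [t ** (n + 1) for t in ths]
dmdr = [4 * math.pi * ri * ri * qi for ri, qi in zip(r, rho)]

m = [0.0] * len(r)                         # cumulative mass m(r), trapezoid rule
for i in range(1, len(r)):
    m[i] = m[i - 1] + 0.5 * (dmdr[i] + dmdr[i - 1]) * (r[i] - r[i - 1])

def trapz(y, x):
    return sum(0.5 * (y[i] + y[i - 1]) * (x[i] - x[i - 1])
               for i in range(1, len(x)))

int_P = trapz([4 * math.pi * ri * ri * pi for ri, pi in zip(r, P)], r)
U_grav = -trapz([mi / ri * di for mi, ri, di in zip(m, r, dmdr)], r)
ratio = 3.0 * int_P / (-U_grav)            # equals 1 for exact equilibrium
```

The surface lands at $\xi_1\approx 2.75$ for $n=0.5$, and the computed ratio is unity to the accuracy of the integration, the analogue of $\lambda_{3d}=1$ in the relativistic case.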
In constructing an exact sequence of rotating equilibria as a benchmark, we start with a nonrotating star having a central value of energy density $\bar\epsilon = 1$ (note that all ``barred'' quantities are nondimensional as defined in CST). This configuration is relativistic, with $M/R=0.298$ and rest mass $\bar{M}_0 = 0.148$, just below the maximum rest mass of a nonrotating star for this equation of state ($\bar{M}_0 = 0.151$). Holding the rest mass constant, we construct a sequence of increasing uniform rotation, up to the point of mass shedding. As described above, we then construct the corresponding models with the same central value of $\bar\epsilon$ and ratio of polar to equatorial radius using the approximate scheme. A comparison of some of the global quantities for the sequence is given in Table~\ref{tab:Normal_table}. The high values of polar redshift $Z_p$ and $T/W$ confirm that the sequence is both highly relativistic and rapidly rotating. As expected, the deviations are largest near the mass shed limit, but even there they are never worse than about 1\%. We can understand why the overall discrepancy is small by looking at Fig.~\ref{fig:conf_flatness}. Here we plot a measure of the deviation in the exact solution from conformal flatness, which is assumed in the approximate method. In the figure we plot the angular profile at selected radii of the quantity \begin{equation} \label{eqn:Delta_def} \Delta\equiv{\alpha_{\rm CST} - (\gamma - \rho)/2 \over \alpha_{\rm CST}} \end{equation} computed for the exact rotating model with $T/W = 0.159$. Note that this quantity is identically zero on the axis because of local flatness there. The maximum deviation occurs on the equator ($\bar{r}=0.48$), but is only about 5\%. Outside the star, $\Delta\to 0$ as $r\to\infty$. \begin{figure} \epsfxsize=3.4in\epsffile{Fig1.eps} \caption{Angular profile of the deviation of the exact solution from conformal flatness at selected radii. 
The deviation $\Delta$ is defined in Eq.~(\protect\ref{eqn:Delta_def}). The star is a rapidly rotating, highly relativistic polytrope with $n=0.5$ and rest mass just below the maximum rest mass of a nonrotating star for this equation of state. The radii $\bar{r}$ are in the nondimensional units of CST, and $\mu=\cos\theta$.} \label{fig:conf_flatness} \end{figure} In Fig.~\ref{fig:phi_error} we plot along an equatorial radius the fractional error in the conformal factor, \begin{equation} \delta\Phi = {\Phi - \Phi_{\rm exact} \over \Phi_{\rm exact}}, \end{equation} where $\Phi_{\rm exact}\equiv\exp[(\gamma-\rho)/4]$. Similarly, in Fig.~\ref{fig:omega_error} we plot the fractional error $\delta\omega$. Figure~\ref{fig:density} shows the mass-energy $\bar\epsilon$ along an equatorial radius for the two schemes. The two profiles are almost coincident. \begin{figure} \epsfxsize=3.4in\epsffile{Fig2.eps} \caption{Fractional error in the conformal factor $\Phi$ along an equatorial radius for the star in Fig.~\protect\ref{fig:conf_flatness}.} \label{fig:phi_error} \end{figure} \begin{figure} \epsfxsize=3.4in\epsffile{Fig3.eps} \caption{Fractional error in the metric coefficient $\omega$ along an equatorial radius for the star in Fig.~\protect\ref{fig:conf_flatness}.} \label{fig:omega_error} \end{figure} \begin{figure} \epsfxsize=3.4in\epsffile{Fig4.eps} \caption{Total mass-energy density $\bar\epsilon$ along an equatorial radius for the star in Fig.~\protect\ref{fig:conf_flatness}. The solid line shows the exact solution, the dotted line, the approximate solution.} \label{fig:density} \end{figure} A further comparison is provided by Fig.~\ref{fig:virial_normal}, which shows the virial quantities $\lambda_{2d}$ and $\lambda_{3d}$ along the sequence, computed for each of the two schemes. In the case of the exact method, the deviation from unity is a measure of numerical error, which is less than 0.1\%. 
The deviation for the approximate method measures the inherent error, which is about a factor of 10 bigger. \begin{figure} \epsfxsize=3.4in\epsffile{Fig5.eps} \caption{Virial quantities along the sequence in Table~\protect\ref{tab:Normal_table}. Results for the exact equations are shown by the solid line for $\lambda_{2d}$ and the dotted line for $\lambda_{3d}$. Results for the approximation are shown by the short dash line for $\lambda_{2d}$ and the long dash line for $\lambda_{3d}$.} \label{fig:virial_normal} \end{figure} To push the approximate scheme to the limit, we now consider a second equilibrium sequence, a ``supramassive'' sequence. This sequence has no nonrotating member, since its rest mass exceeds the maximum rest mass of a nonrotating star for this equation of state ($\bar{M}_0 = 0.151$). Thus the sequence exists only by virtue of rotation. We construct the supramassive sequence with $\bar{M}_0 = 0.176$. We expect the discrepancy between the approximate and exact methods to be somewhat larger for this sequence since it is everywhere far from spherical symmetry. This expectation is borne out in Table~\ref{tab:Supra_table} and Fig.~\ref{fig:virial_supra}. Nevertheless, the discrepancy is not very large. \begin{figure} \epsfxsize=3.4in\epsffile{Fig6.eps} \caption{Virial quantities along the supramassive sequence in Table~\protect\ref{tab:Supra_table}. Results for the exact equations are shown by the solid line for $\lambda_{2d}$ and the dotted line for $\lambda_{3d}$. Results for the approximation are shown by the short dash line for $\lambda_{2d}$ and the long dash line for $\lambda_{3d}$.} \label{fig:virial_supra} \end{figure} \section{Conclusion} \label{sec:conclusions} We have tested Wilson's approximation scheme on rapidly rotating relativistic stars. Since these are equilibrium objects, it is necessary that the scheme give reasonably accurate results if we are to believe its predictions for more complicated systems such as binaries. 
In fact, we have found that the method works remarkably well, even for highly relativistic objects far from spherical symmetry. The largest errors in any quantities we examined were around 5\%, and in general the errors were much smaller. Global measures such as virial quantities were in error by far less than 1\%. This agreement is very encouraging. \acknowledgments We thank E. Gourgoulhon for useful discussions about the work on relativistic virial quantities in Refs.~\cite{gourgoulhon94,bonazzola94}. This work was supported in part by NSF Grants Nos.~AST 91-19475 and PHY~94-08378, and by NASA Grant NAGW-2364 to Cornell University. We also acknowledge support from the Grand Challenge Grant NSF PHY 93-18152 / ASC 93-18152.
\section{Introduction} Many new charmonium-like states have been discovered by the B-factory experiments, typically via a prominent hadronic decay to a known charmonium state, such as $J/\psi$, $\psi(2S)$ or $\chi_{c1}$. Some have attracted particular interest because of their net electric charge~\cite{zplus}. Three of the new (neutral) states were discovered by Belle in the 3.90--3.95~GeV/$c^2$ mass region. The $X(3940)$ was found in the $e^+ e^- \to J/\psi X$ double charmonium production process, with a prominent decay to the $D\bar{D}^*$ final state~\cite{bellex}. The $Y(3940)$ was observed in the $B$ decay process $B^- \to Y(3940) K^-$ with $Y(3940) \to \omega J/\psi$~\cite{belley,babary}, and is a candidate for an exotic state, such as a hybrid meson ($c\bar{c}g$), or a $D^*\bar{D}^*$ bound state~\cite{branz}. The $Z(3930)$ was found in the $\gamma \gamma \to D\bar{D}$ process~\cite{bellez}, and is usually identified with the $\chi_{c2}(2P)$. These three states appear in different production and decay processes, and are usually considered to be distinct particles; however, there is no decisive evidence for this. The interpretation of these states has been discussed by many authors: see, {\it e.g.}, Ref.~\cite{godfrey}. It is important to search for a signature of the $Y(3940)$ or any other resonant state contributing to two-photon production of $\omega J/\psi$. This final state is the lightest combination of two vector mesons with definite $C$-even and $I=0$ quantum numbers that can be produced in two-photon processes via a hidden-charm state. In this paper we present measurements of the $\gamma \gamma \to \omega J/\psi$ process in the 3.9--4.2~GeV/$c^2$ mass region, in which we observe a resonant enhancement. The signal is from the two-photon process $e^+e^-\rightarrow e^+e^- \omega J/\psi$ in the ``zero-tag'' mode, where neither the final-state electron nor the positron recoiling from photon emission is detected.
We use experimental data recorded with the Belle detector~\cite{belle} at the KEKB $e^+e^-$ asymmetric-energy (3.5 on 8 GeV) collider~\cite{kekb}, corresponding to an integrated luminosity of 694~fb$^{-1}$. The data are accumulated mainly on the $\Upsilon(4S)$ resonance $(\sqrt{s} = 10.58~{\rm GeV})$ and 60~MeV below it. A small fraction of data from different beam energies near 10.36~GeV (the $\Upsilon(3S)$ mass) and 10.87~GeV (the $\Upsilon(5S)$ mass) is also included in the sample. A comprehensive description of the Belle detector is given elsewhere~\cite{belle}. Charged tracks are reconstructed in a central drift chamber (CDC) located in a uniform 1.5~T solenoidal magnetic field. The $z$ axis of the detector and the solenoid is along the positron beam, with the positrons moving in the $-z$ direction. Track trajectory coordinates near the collision point are measured by a silicon vertex detector (SVD). Photon detection and energy measurements are provided by a CsI(Tl) electromagnetic calorimeter (ECL). A combination of silica-aerogel Cherenkov counters (ACC), a time-of-flight counter (TOF) system consisting of a barrel of 128 plastic scintillation counters, and specific ionization ($dE/dx$) measurements in the CDC provides $K/\pi$ separation for charged tracks over a wide momentum range. The magnet return iron is instrumented to form a $K_L$ detection and muon identification (KLM) system that detects muon tracks. Signal candidates are triggered by a variety of track triggers that require two or more CDC tracks associated with TOF hits, ECL clusters, a total energy deposit in the ECL above a threshold (0.5~GeV), or a muon track in the KLM detector. In addition, events with a total ECL energy above 1.1~GeV are triggered by a separate logic. Because of the presence of a lepton pair in the final state of the signal processes, a combination of the above triggers provides a high overall trigger efficiency, ($98\pm 2$)\%. 
We select signal event candidates by reconstructing all the final state particles from $\omega \to \pi^+\pi^-\pi^0$ and $J/\psi \to l^+l^-$ ($l = e \ {\rm or}\ \mu$). Twelve selection criteria are imposed: (1) there are just 4 charged tracks with transverse momentum $p_t > 0.1$~GeV/$c$ originating in the beam collision region; (2) the net charge of the tracks is zero; (3) none of the tracks is identified as a kaon (we require a likelihood ratio ${\cal L}(K)/({\cal L}(K)+{\cal L}(\pi)) < 0.8$, which is satisfied by 99.6\% of pions but only 5\% of kaons, for momenta below 0.8~GeV/$c$); (4) there is a net-charge-zero combination of two tracks whose invariant mass is in the $J/\psi$ mass region, $|M(2~{\rm tracks}) - M_{J/\psi}|<0.2$~GeV/$c^2$, where $M_{J/\psi}=3.0969$~GeV/$c^2$ and we assume the pion mass for each of the two tracks; (5) there are one or more neutral pion candidates, each formed by a mass-constrained fit of two photons with $\chi^2 < 4$; (6) the number of $\pi^0$'s with $p_t > 0.1$~GeV/$c$ must not exceed one in an event. If there is a $\pi^0$ satisfying the $p_t$ condition, it is accepted as the only $\pi^0$ candidate in that event. If no $\pi^0$ satisfies the $p_t$ condition, we retain all the $\pi^0$ candidates (with $p_t < 0.1$~GeV/$c$) at this stage; (7) events in the kinematic region of initial-state-radiation processes, $e^+e^- \to \gamma X$, where the photon is emitted very close to the direction of the incident $e^-$ beam, are eliminated. We reject events satisfying the following condition on the $z$-component of the laboratory momentum for the final-state system $X$, $P_z < (M_5^2-49.0~{\rm GeV}^2/c^4)/14.0~{\rm GeV}/c^3+ 0.6~{\rm GeV}/c$, where $P_z$ and $M_5$ are the $z$-component of the momentum and the invariant mass of the system constructed from the four tracks and a neutral pion candidate, respectively.
For further analysis we examine only events with $W<4.3$~GeV, where $W$ is defined by $W = M_5 - M(l^+l^-) + M_{J/\psi}$, using a refined two-lepton invariant mass ($M(l^+l^-)$) based on the lepton flavor identified by the following criteria: (8) if either of the tracks is identified as an electron, based on the ECL energy deposit, the tracks are identified as $e^+e^-$. Otherwise, if either track is identified as a muon based on KLM information, the tracks are identified as $\mu^+\mu^-$. An event that fails both tests is rejected. If one or more photons with energy between 20 and 200 MeV are found within $3^\circ$ of either the $e^+$ or $e^-$ track, the energy of the most energetic photon near the track is added to the track momentum. Following this correction, (9) we refine the $J/\psi$ selection with a more stringent requirement for the lepton-pair invariant mass, 3.07~GeV/$c^2 < M(l^+l^-) < 3.12$~GeV/$c^2$; (10) we suppress $\psi(2S) \pi^0$ events with the mass difference requirement, $|M(l^+l^-\pi^+\pi^-) - M(l^+l^-) - 0.589~{\rm GeV}/c^2| > 0.01$~GeV/$c^2$; (11) to select an $\omega$ candidate, a condition on the $\pi^+\pi^-\pi^0$ invariant mass, 0.753~GeV/$c^2 < M(3\pi) < 0.813$~GeV/$c^2$, is imposed. If there are multiple $\omega$ candidates due to multiple $\pi^0$'s in an event, we choose the one with the smallest $\chi^2$ in the $\pi^0$ mass-constrained fit, in order to avoid multiple entries in the final $\omega J/\psi$ mass spectrum. Finally, (12) we require transverse momentum balance for the 5-body system, $|\sum \mbox{\boldmath$p$}_t^*| < 0.1$~GeV/$c$, where $\mbox{\boldmath$p$}_t^*$ is the momentum of a particle in the $e^+e^-$ c.m. frame, in the plane perpendicular to the beam direction~\cite{foot1}. Figures 1(a) and 1(b) show the distributions of $M(l^+l^-)$ just after requirement (8) and the mass difference $M(l^+l^-\pi^+\pi^-)- M(l^+l^-)$ just after requirement (9), respectively.
The $\psi(2S)$ contribution is effectively removed by criterion (10) above. \begin{figure} \centering {\epsfig{file=ojpff1.eps,width=82mm}} \caption{(a) The $M(l^+l^-)$ distribution just after the dilepton selection. (b) The $M(l^+l^-\pi^+\pi^-)-M(l^+l^-)$ distribution after the tight $J/\psi$ selection. Events between the arrows are rejected as consistent with $\psi(2S)$ production.} \label{fig:csrat} \end{figure} The main background process is multi-pion production from two-photon processes. However, after all selection requirements are applied, non-$\omega J/\psi$ backgrounds are rather small, as shown in the scatter plot in Fig.~2(a) for the samples where all the selections except those for $M(l^+l^-)$ and $M(3\pi)$ are applied. Figures~2(b) and 2(c) show the distributions of $M(l^+l^-)$ and $M(3\pi)$, respectively, in the selection bands for the opposite-side particle; for clarity, we exclude events somewhat below the $\omega J/\psi$ threshold, $W \leq 3.85$~GeV. In Figs.~2(b) and 2(c), the experimental $M(l^+l^-)$ and $M(3\pi)$ distributions are compared with those from the signal Monte Carlo (MC) events, which are generated assuming spin-parity ($J^P$) and mass ($W$) of the $\omega J/\psi$ system to be $0^+$ and 3.93~GeV/$c^2$, respectively. Details of the signal MC generation are given below. We confirm that the experimental mass distributions are consistent with those of signal MC events. We find that there are two events in the signal region with multiple $\omega$ candidates, out of 73 events in total; we choose only one combination in each event, according to criterion (11). The fraction is consistent with the 1--2\% multiple candidate rate expected from the signal MC sample. We show the $W$ distribution for the final $\gamma \gamma \to \omega J/\psi$ candidate events in Fig.~3. There is a prominent resonance-like peak around 3.92~GeV. 
It is far above the non-$\omega J/\psi$ background contribution, which is estimated from the events in the $\omega$ and $J/\psi$ mass sidebands (shown as shaded histograms for comparison); we define eight sideband regions in the plane of Fig.~2(a) with the same dimensions as the signal region, {\it i.e.}, each region centered at 3.035~GeV, 3.095~GeV or 3.155~GeV with a width of 0.05~GeV in the $M(l^+l^-)$ direction and centered at 0.693~GeV, 0.783~GeV and 0.873~GeV with a width of 0.06~GeV in the $M(3\pi)$ direction, and average the distribution over the eight regions. We modify the $W$ value of each sideband event plotted in Fig.~3, shifting it by the difference between the sum of the mass coordinates of the central point of the signal region (3.878~GeV) and that of the sideband region in which the event is found, for comparison with the signal-event distribution. Figure~4(a) shows a scatter plot of the transverse momentum balance {\it vs.} $W$ after requirement (11). A prominent concentration of events near $W=3.89$--$3.95$~GeV and $|\sum \mbox{\boldmath$p$}_t^*| < 0.05$~GeV/$c$ is visible; a comparison of the $|\sum \mbox{\boldmath$p$}_t^*|$ projection with signal MC is shown in Fig.~4(b). Based on these results, and the shape in $W$ (Fig.~3), we conclude that the concentration of events is due to a resonance formed in two-photon collisions. \begin{figure} \centering {\epsfig{file=ojpff2.eps,width=80mm}} \caption{(a) Scatter plot of $M(3\pi)$ {\it vs.} $M(l^+l^-)$ for the experimental data, with all the other cuts applied. The rectangle shows the signal region. (b) The $M(l^+l^-)$ and (c) $M(3\pi)$ distributions with all the other cuts applied, and requiring $W > 3.85$~GeV. Entries due to multiple $\omega$ candidates in a given event are included. Points with error bars show the data; the dashed histograms show signal MC events, normalized to the data yield in the selection area, with the scaled sideband yield subtracted.
The arrows show the selection regions.} \label{fig:csrat} \end{figure} The $W$ distribution for the final candidate events is fitted by an incoherent sum of resonant and background components. We adopt an S-wave Breit-Wigner function with a variable width for the resonant component, $(2N_R/\pi)M^2\Gamma'/\{(W^2 - M^2)^2 + M^2\Gamma'^2\}$ and $\Gamma' = \Gamma (p^*/p_0^*)$, where $p^*$ is the momentum of the two-body decay to $\omega J/\psi$, in the rest frame of a parent particle of mass $W$; $p^*_0$ is the value for $W=M$~\cite{belley}. The nominal mass ($M$), width ($\Gamma$) and yield parameter ($N_R$) are treated as fit parameters. We represent the background component by a quadratic function of $p^*$ that vanishes at the nominal $\omega J/\psi$ threshold, $M_{\rm th}=3.8796$~GeV/$c^2$. We also add a constant term, to represent the high $W$ tail, which, as the sideband study suggests, is dominated by non-$\omega J/\psi$ events. The sum of the two components has a functional form, $\{(a p^* + b p^{*2}) + c\}\theta(W-M_{\rm th})$, where $\theta(x)$ is a unit step function that is non-zero only for $x>0$. The parameters $a$, $b$ and $c$ are floated within the constraint that each of the two background components must be non-negative throughout the fitting region. The fit takes into account the $W$ resolution in the measurement, which is approximated by a double-Gaussian function from the signal MC events (59\% of the signal has a resolution $\sigma$ of 4.5~MeV, while the remainder has $\sigma=16$~MeV with the peak position displaced by $-4$~MeV). We perform an unbinned maximum likelihood fit in the region 3.875~GeV$<W<4.2~$GeV. The signal candidates with the smallest $W$ are the two events with $W$ between 3.879 and 3.880~GeV. The $W$ dependences of the efficiency and luminosity function are taken into account in the fitting function. The efficiency is determined using signal MC events as described in detail later.
We use the $W$ dependence of the efficiency for $J^P=0^+$ for the nominal fit. Between the threshold and 3.96 GeV, the $W$-dependence is weak: the efficiency varies by 10\% only, and has a minimum near $W=3.92$~GeV. The obtained resonance parameters for the mass and the width are as follows: \begin{eqnarray} M &=& (3915 \pm 3 \pm 2)~{\rm MeV}/c^2, \nonumber \\ \Gamma &=& (17 \pm 10 \pm 3)~{\rm MeV}, \nonumber \end{eqnarray} \noindent where the first and second errors are statistical and systematic, respectively. The estimated yield from the resonant component in the fit is $49 \pm 14 \pm 4$ events in the region below 4.2~GeV. The statistical significance of the resonant peak is $7.7 \sigma$, which is determined from the difference of the logarithmic likelihoods, $-2 {\rm ln}(L/L_0)$, taking the difference of the number of degrees of freedom in the fits into account, where $L_0$ and $L$ are the likelihoods of the fits with and without a resonant component, respectively. The relevant fit curves are shown in Fig.~3. The $\chi^2$ of the nominal fit, determined using 10-MeV-width binning in the range 3.85--4.2~GeV, is 27.3, for 29 degrees of freedom. The systematic errors quoted above are determined from a study of alternate fits: we use a Breit-Wigner function with a constant width; we enlarge the invariant-mass resolution by 20\% (an over-estimate of the data-Monte Carlo difference allowed by the fit); we change the upper limit of the fit region in $W$ to 4.1~GeV and 4.3~GeV, respectively. The changes in the central values of the corresponding resonance parameter are combined in quadrature. We also take into account the uncertainty of the mass scale, estimated to be 1~MeV/$c^2$, in the measurement of $M$. There is no significant change in the parameters if $J^P=2^+$ is assumed; the changes of mass and width are less than 0.1~MeV/$c^2$ and 0.3~MeV, respectively. The resonant contribution for the $J^P=2^+$ assumption is 1.0~event smaller than that for $J^P=0^+$. 
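The resonant line shape used in the fit, an S-wave Breit-Wigner with the mass-dependent width $\Gamma' = \Gamma\,(p^*/p_0^*)$, can be sketched numerically. The following Python snippet is an illustration only, not the analysis code: detector resolution, efficiency, and the two-photon luminosity function are deliberately omitted, and the $\omega$ and $J/\psi$ masses are rounded reference values. It evaluates the shape quoted in the text and checks that, for a narrow width, it integrates to roughly the yield parameter $N_R$ over the fit region.

```python
import math

M_OMEGA, M_JPSI = 0.78265, 3.09690  # GeV/c^2, rounded reference values (assumption)

def p_star(w, m1=M_OMEGA, m2=M_JPSI):
    # Break-up momentum of a two-body decay of a parent with mass w.
    s = w * w
    term = (s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2)
    return math.sqrt(max(term, 0.0)) / (2.0 * w)

def bw(w, mass=3.915, width=0.017, n_r=1.0):
    # (2 N_R / pi) M^2 Gamma' / ((W^2 - M^2)^2 + M^2 Gamma'^2)
    # with the variable width Gamma' = Gamma * p*(W) / p*(M).
    ps = p_star(w)
    if ps == 0.0:
        return 0.0
    g = width * ps / p_star(mass)
    return (2.0 * n_r / math.pi) * mass**2 * g / (
        (w * w - mass * mass) ** 2 + mass * mass * g * g)

# Coarse trapezoidal integral over the fit region, threshold to 4.2 GeV.
lo, hi, n = 3.8796, 4.2, 20000
h = (hi - lo) / n
area = 0.5 * (bw(lo) + bw(hi)) * h + sum(bw(lo + i * h) for i in range(1, n)) * h
```

The integral is close to, but somewhat below, $N_R$ because the threshold cuts off part of the low-mass tail and the region above 4.2~GeV is excluded.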
\begin{figure} \centering {\epsfig{file=ojpff3.eps,width=80mm}} \caption{The $W$ distribution of the final candidate events (dots with error bars). The shaded histogram is the distribution of non-$\omega J/\psi$ backgrounds estimated from the sideband distributions. The bold solid, thinner solid and dashed curves are the total, resonance and background contributions, respectively, from the standard fit (see the text). The dot-dashed curve is the fit without a resonance.} \label{fig:csrat} \end{figure} \begin{figure} \centering {\epsfig{file=ojpff4.eps,width=87mm}} \caption{(a) Scatter plot of $p_t$ balance {\it vs.} $W$ for the final candidate events in which only requirement (12) is omitted. (b) The projection onto the $p_t$ balance axis for events with $W<3.95$~GeV. The dashed histogram is the expectation from signal MC events, normalized to the number of signal candidates in the selected region. The $p_t$ balance requirement is indicated by the arrow. } \label{fig:csrat} \end{figure} \begin{table}[t] \caption{Sources and sizes of systematic error in the efficiency determination} \label{tab:systerr} \begin{tabular}{c|c} \hline \hline Source & Syst.~ error (\%) \\ \hline Trigger efficiency & 2 \\ Track reconstruction & 4 \\ $\pi^0$ reconstruction & 3 \\ Particle identification cuts & 2 \\ Effect of background hits & 3 \\ $J/\psi$ selection & 3 \\ $\omega$ selection & 6 \\ $W$-dependence, effect of background, etc. & 3 \\ Luminosity function, integrated luminosity & 5 \\ \hline Total & 11\% \\ \hline \hline \end{tabular} \end{table} The efficiency for selecting $\gamma \gamma \to \omega J/\psi$ events is determined using signal MC events generated by TREPS code~\cite{treps}. We generate $10^5$ MC events for both $e^+e^-$ and $\mu^+\mu^-$ decays of $J/\psi$, at nine different $W$ points between 3.89 and 4.15~GeV. 
The efficiency for the signal process at $W=3.92$~GeV is determined to be $(1.85 \pm 0.20)\%$ ($(1.26 \pm 0.14)\%$) for the $J^P=0^+$ ($2^+$) assumption; the efficiency is defined for the range $Q^2 < 1.0$~GeV$^2$, for each incident photon. We assume production in helicity-2 for $J^P=2^+$~\cite{bellez} and decay to $\omega J/\psi$ in an S-wave for both $0^+$ and $2^+$. The other two possible $J^P$ assumptions, $0^-$ and $2^-$, are similar and give an efficiency close to that for $J^P=0^+$. Based on the efficiencies calculated for the two $J/\psi$ decay modes, the fraction of signal in the $e^+e^-$ mode is expected to be 36\%. This is consistent with the fraction in the data: 27 $J/\psi \to e^+e^-$ events among the 73 signal candidates. Sources of systematic errors in the efficiency determination and their contributions are listed in Table I. We confirm that the inefficiency due to each of the particle identification cuts, (3) and (8), is very small, less than 1\%, for signal events. The uncertainties in the efficiencies of the invariant mass cuts are estimated by varying the selection regions near $M(l^+l^-) = M_{J/\psi}$ and $M(3\pi) = M_{\omega}$ by $\pm 20\%$ in the MC. We sum the uncertainties in quadrature, and find 11\% in total. Treating the observed structure as a resonance denoted by $X(3915)$, we derive the product of the two-photon decay width and the branching fraction to $\omega J/\psi$, using the yield parameter $N_R$ from the fit and the selection efficiency. We obtain \begin{eqnarray} \Gamma_{\gamma \gamma}(X(3915)) {\cal B}(X(3915) \to \omega J/\psi)\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \nonumber \\ \ \ \ \ \ \ \ \ \ \ \ = \left\{ \begin{array}{ll} (61 \pm 17 \pm 8)~ {\rm eV} & \mbox{for $J^P=0^+$} \\ (18 \pm 5 \pm 2)~ {\rm eV} & \mbox{for $J^P=2^+$, helicity-2.} \end{array} \right. 
\nonumber \end{eqnarray} Based on this result, and the measured width $\Gamma$, the product of the two partial widths of the $X(3915)$, $\Gamma_{\gamma\gamma}(X) \Gamma_{\omega J/\psi}(X)$ is of order $10^3$~keV$^2$. If we assume $\Gamma_{\gamma\gamma} \sim {\cal O}(1$~keV), typical for an excited charmonium state, this implies $\Gamma_{\omega J/\psi} \sim {\cal O}(1$~MeV): a rather large value for a charmonium-transition partial width of such a state. This value of the product of the partial decay widths is roughly compatible with the prediction assuming the $D^*\bar{D}^*$ bound-state model~\cite{branz}. To conclude, we have observed a resonance-like enhancement in the $\gamma \gamma \to \omega J/\psi$ process with a statistical significance of $7.7\sigma$, which contains $49 \pm 14 \pm 4$ events in the peak component. The mass and width have been measured to be $M = (3915 \pm 3 \pm 2)~{\rm MeV}/c^2$ and $\Gamma = (17 \pm 10 \pm 3)~{\rm MeV}$, respectively. These values are consistent with those of the $Y(3940)$, which is seen in the $\omega J/\psi$ final state~\cite{belley, babary}, and close to those of the $Z(3930)$, which is seen in $\gamma \gamma \to D\bar{D}$~\cite{bellez}. We thank the KEKB group for excellent operation of the accelerator, the KEK cryogenics group for efficient solenoid operations, and the KEK computer group and the NII for valuable computing and SINET3 network support. We acknowledge support from MEXT, JSPS and Nagoya's TLPRC (Japan); ARC and DIISR (Australia); NSFC (China); DST (India); MEST, KOSEF, KRF (Korea); MNiSW (Poland); MES and RFAAE (Russia); ARRS (Slovenia); SNSF (Switzerland); NSC and MOE (Taiwan); and DOE (USA).
\section{Introduction} The prime counting function $\pi(x)$ denotes the number of primes less than or equal to $x$. In 1872, Lionnet \cite{lionnet} raised the question whether the inequality \begin{equation} \pi(2n) - \pi(n) \leq \pi(n) \tag{1.1} \label{1.1} \end{equation} holds for every integer $n \geq 2$. This means that for each integer $n \geq 2$ the interval $(n,2n]$ contains at most as many prime numbers as the interval $[2,n]$. The first progress on this question was made by Landau \cite[p.\:215--216]{landau}. He used the Prime Number Theorem, i.e. \begin{displaymath} \pi(x) = \frac{x}{\log x} + O \left( \frac{x}{\log^2x} \right) \end{displaymath} as $x \to \infty$, to show that \eqref{1.1} holds for every sufficiently large positive integer $n$. In 1923, Hardy and Littlewood \cite{hardy} conjectured that \begin{displaymath} \limsup_{n \to \infty} (\pi(x+n) - \pi(n)) \leq \pi(x) \end{displaymath} for every $x \geq 2$. From this, the well-known conjecture (in the following denoted by HLC) that \begin{equation} \pi(m+n) \leq \pi(m) + \pi(n) \qquad \forall \, m,n \in \mathbb{N} \setminus \{ 1 \} \tag{1.2} \label{1.2} \end{equation} has been derived. Clearly, the HLC is a generalization of Lionnet's question \eqref{1.1}. Although the HLC has so far neither been proved nor disproved in general, some special cases are known. As a consequence of their explicit estimates for $\pi(x)$, Rosser and Schoenfeld \cite{rosser1975} stated without proof that \begin{equation} \pi(2x) - \pi(x) \leq \pi(x) \tag{1.3} \label{1.3} \end{equation} for every real $x \geq 3$. A detailed proof was finally given by Kopetzky and Schwarz \cite{kopetzky}. If we combine \eqref{1.3} with $\pi(4) - \pi(2) = \pi(2)$, it turns out that Lionnet's inequality \eqref{1.1} indeed holds for every integer $n \geq 2$. Erd\H{o}s \cite{erdos} reported that Ungar verified the HLC for every pair of integers $(m,n)$ satisfying $2 \leq \min(m,n) \leq 41$.
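Statement \eqref{1.2} is easy to probe by machine for small arguments. The following Python sketch (a numerical sanity check only, not part of any proof; the bound $2000$ is an arbitrary illustrative choice) exhaustively verifies the HLC for all pairs with $m+n \leq 2000$:

```python
def prime_pi_table(limit):
    # Sieve of Eratosthenes; returns pi[0..limit] with pi[x] = #{primes <= x}.
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    pi, count = [0] * (limit + 1), 0
    for x in range(limit + 1):
        count += sieve[x]
        pi[x] = count
    return pi

# Check pi(m+n) <= pi(m) + pi(n) for all 2 <= n <= m with m + n <= LIMIT.
LIMIT = 2000
pi = prime_pi_table(LIMIT)
violations = [(m, n) for m in range(2, LIMIT - 1)
              for n in range(2, min(m, LIMIT - m) + 1)
              if pi[m + n] > pi[m] + pi[n]]
```

As expected from the computations cited below, `violations` is empty.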
One year later, Schinzel and Sierpi\'nski \cite{schinzel} showed that the inequality is fulfilled for every pair of integers $(m,n)$ with $2 \leq \min(m,n) \leq 132$. In a later paper, Schinzel \cite{schinzel2} extended this range to $2 \leq \min(m,n) \leq 146$. The current best result in this direction was given by Gordon and Rodemich \cite{rodemich}. They found that the HLC is fulfilled for every pair of integers $(m,n)$ satisfying \begin{equation} 2 \leq \min(m,n) \leq 1731. \tag{1.4} \label{1.4} \end{equation} The next result is due to Panaitopol \cite[Theorem 1]{pana1}. He showed that the HLC is true for every pair of integers $(m,n)$ satisfying $m, n \geq 2$ and \begin{displaymath} \frac{m}{29} \leq n \leq m. \end{displaymath} In \cite[Proposition 3]{dusart2}, Dusart improved the result of Panaitopol by showing that the HLC is true for every pair of positive real numbers $(x,y)$ satisfying $x, y \geq 3$ and \begin{equation} \frac{x}{109} \leq y \leq x. \tag{1.5} \label{1.5} \end{equation} Using explicit estimates for the prime counting function $\pi(x)$, we find the following improvement. \begin{thm} \label{thm101} Let $m$ and $n$ be integers satisfying $m, n \geq 2$ and $m/1950 \leq n \leq m$. Then we have \begin{displaymath} \pi(m+n) \leq \pi(m) + \pi(n). \end{displaymath} \end{thm} In 1975, Udrescu \cite{udrescu} found the following generalization. Under the assumption that $n$ satisfies $\varepsilon m \leq n \leq m$, where $\varepsilon$ is a real number with $0 < \varepsilon \leq 1$, he showed that the HLC holds for every sufficiently large positive integer $m$. Dusart \cite{dusart2} showed that Udrescu's result holds for every integer $m \geq e^{3.1/\log(1+\varepsilon)}$. We give the following improvement. \begin{thm} \label{thm102} Udrescu's result holds for every integer \begin{displaymath} m \geq e^{\sqrt{0.3426/\log(1+\varepsilon)}}.
\end{displaymath} \end{thm} In \cite{pana2}, Panaitopol used explicit estimates for the prime counting function $\pi(x)$ to show that the HLC is true for all positive integers $m,n \geq 2$ with \begin{equation} \pi(m) \leq n \leq m. \tag{1.6} \label{1.6} \end{equation} Since $\pi(x) \sim x/\log x$ as $x \to \infty$, the last result yields an improvement of Theorem \ref{thm101} for all sufficiently large values of $m$. In this paper, we find the following refinement of \eqref{1.6}. \begin{thm} \label{thm103} Let $c_0 = 0.70881678090424862707121$. Then we have $\pi(m+n) \leq \pi(m) + \pi(n)$ for all integers $m \geq n \geq 2$ with $n \geq c_0m/\log^2 m$. \end{thm} In the case where $m+n \leq 10^{20}$, we can use some recent results concerning the distance between $\pi(x)$ and the \emph{logarithmic integral} $\text{li}(x)$, which is defined for every real $x > 1$ as \begin{displaymath} \text{li}(x) = \int_0^x \frac{\text{d}t}{\log t} = \lim_{\varepsilon \to 0+} \left \{ \int_{0}^{1-\varepsilon}{\frac{\text{d}t}{\log t}} + \int_{1+\varepsilon}^{x}{\frac{\text{d}t}{\log t}} \right \}, \end{displaymath} to get the following result. \begin{thm} \label{thm104} Let $c_1 = 2(1-\log 2) = 0.6137 \ldots$. Then we have $\pi(m+n) \leq \pi(m) + \pi(n)$ for all integers $m \geq n \geq 2$ satisfying $m + n \leq 10^{20}$ and \begin{displaymath} n \geq 2\sqrt{m} \left( 1 - \frac{2c_1}{\log m + c_1} \right). \end{displaymath} \end{thm} Finally, we find the following result, which depends on the correctness of the Riemann hypothesis. \begin{thm} \label{thm105} Let $c_2 = 1/(4\pi)$. If the Riemann hypothesis is true, then $\pi(m+n) \leq \pi(m) + \pi(n)$ for all integers $m \geq n \geq 2$ satisfying $n \geq c_2 \sqrt{m}\log m \log(m\log^8m)$. \end{thm} \section{On a result of Segal} In 1962, Segal \cite[Theorem I]{segal} obtained the following inequality condition involving only prime numbers which is equivalent to the HLC. Here, as usual, $p_r$ denotes the $r$th prime number.
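Segal's reduction of the HLC to an inequality among primes, stated as Lemma \ref{lem201} below, is straightforward to probe by machine on a small scale. The following Python sketch (a sanity check only, not a substitute for the cited computations; the bound $K = 2000$ is an illustrative choice) tests the inequality $p_k \geq p_{k-q} + p_{q+1} - 1$ for all $k \leq K$:

```python
def primes_up_to(limit):
    # Simple sieve of Eratosthenes; returns the list of primes <= limit.
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [x for x in range(limit + 1) if sieve[x]]

# 1-based indexing as in the text: p[1] = 2, p[2] = 3, ...
p = [0] + primes_up_to(110_000)

# Segal's inequality p_k >= p_{k-q} + p_{q+1} - 1
# for 3 <= k <= K and 1 <= q <= (k-1)/2.
K = 2000
failures = [(k, q) for k in range(3, K + 1)
            for q in range(1, (k - 1) // 2 + 1)
            if p[k] < p[k - q] + p[q + 1] - 1]
```

Consistent with Segal's own computation quoted below (valid up to $k = 9679$), `failures` comes out empty.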
\begin{lem}[Segal] \label{lem201} The \emph{HLC} is true if and only if \begin{equation} p_k \geq p_{k-q} + p_{q+1} - 1 \tag{2.1} \label{2.1} \end{equation} for all integers $k,q$ satisfying $k \geq 3$ and $1 \leq q \leq (k-1)/2$. \end{lem} Then, Segal \cite[Theorem II]{segal} used this equivalence to get the following result. \begin{lem}[Segal] \label{lem202} If the \emph{HLC} is false for some positive integer $m+n$, then the smallest such value of $m+n$ is the smallest value of $p_k$ for which \eqref{2.1} is false. \end{lem} He used a computer to verify that the inequality \eqref{2.1} holds for every positive integer $k \leq 9679$; i.e. for every prime number $p_k \leq 101\,081$. Now it follows from Lemma \ref{lem202} that the HLC holds for all integers $m,n \geq 2$ with $m+n \leq 101\,081$. In 2001, Panaitopol \cite{pana1} improved Lemma \ref{lem201} by showing the following \begin{lem}[Panaitopol] \label{lem203} The \emph{HLC} is true if and only if the inequality \eqref{2.1} holds for all integers $k,q$ satisfying $k \geq 9680$ and $34 \leq q \leq (k-1)/27$. \end{lem} Using Lemmata \ref{lem202} and \ref{lem203} and a computer, Panaitopol \cite{pana1} found that the HLC is true for all integers $m,n \geq 2$ with $m+n \leq 3\,497\,861 = p_{250000}$. Extending this computation, we get the following \begin{prop} \label{prop204} Let $N_0 = 1.7 \times 10^9$. Then the \emph{HLC} holds for all integers $m,n \geq 2$ satisfying $m+n \leq 39\,708\,229\,123 = p_{N_0}$. \end{prop} \section{A Proof of Theorem \ref{thm101}} First, we set \begin{equation} f_c(t) = \frac{t}{\log t - 1 - c/\log t}. \tag{3.1} \label{3.1} \end{equation} By \cite[Corollary 3.5]{axler16}, we have $\pi(t) \geq f_1(t)$ for every $t \geq 468049$. Let $b$ be a real number with $b \in (1,2)$ and let $B$ be a positive real number so that $\pi(t) \leq f_b(t)$ for every $t \geq B$. Further, let $r$ and $s$ be positive real numbers with $r \geq s \geq 1$.
We set \begin{equation} \lambda_b(r,s) = \frac{(b-1)(r+1) - \log(s+1)\log s}{2r \log(1+\frac{1}{r}) + 2\log(s+1)} + \frac{1}{2} \left( \log r - \log \left( 1 + \frac{1}{r} \right) \right) \tag{3.2} \label{3.2} \end{equation} and \begin{displaymath} \eta_b(r,s) = \frac{r\log r - \log s - (1+\log(s+1)\log s)\log(1+\frac{1}{r}) - br\log s}{r \log(1+\frac{1}{r}) + \log(s+1)} + \log \left( 1 + \frac{1}{r} \right) \log s. \end{displaymath} Then we get the following result. \begin{prop} \label{prop301} Let $r$ and $s$ be real numbers with $r \geq s \geq 1$. Let $\chi_b(r,s) = \lambda_b(r,s)^2 + \eta_b(r,s)$ and \begin{displaymath} \varphi_b(r,s) = \frac{\operatorname{sgn}(\chi_b(r,s)) + 1}{2} \cdot \chi_b(r,s). \end{displaymath} Then we have $\pi(x+y) \leq \pi(x) + \pi(y)$ for every pair of real numbers $(x,y)$ satisfying $x \geq y \geq 3$, \begin{equation} x \geq \max \left\{ \exp(\lambda_b(r,s) + \sqrt{\varphi_b(r,s)}), 468049r, \frac{B}{1+1/r} \right\}, \tag{3.3} \label{3.3} \end{equation} and $x/r \leq y \leq x/s$. \end{prop} \begin{proof} By \eqref{1.5}, we can assume that $r \geq s \geq 109$. Let $h(x,y) = \pi(x) + \pi(y) - \pi(x+y)$. We need to show that $h(x,y) \geq 0$. First, we note that \begin{equation} \log \left( 1 + \frac{x}{y} \right) - \frac{b}{\log(x+y)} + \frac{1}{\log y} \geq \log 110 - \frac{2}{\log 468049} \geq 0. \tag{3.4} \label{3.4} \end{equation} Since $\log(x/y) \geq \log s$, we have \begin{equation} \frac{1}{\log y - 1 - \frac{1}{\log y}} \geq \frac{1 + \frac{\log s}{\log y}}{\log x - 1 - \frac{1}{\log x}}. \tag{3.5} \label{3.5} \end{equation} From \eqref{3.3}, it follows that $(\log x - \lambda_b(r,s))^2 \geq \lambda_b(r,s)^2 + \eta_b(r,s)$. Substituting the definition of $\eta_b(r,s)$ into the last inequality, we see that \begin{align*} \log^2x & - 2\lambda_b(r,s)\log x - \log \left( 1 + \frac{1}{r} \right) \log r \geq \frac{r\log r - \log s - (1+\log(s+1)\log s)\log(1+\frac{1}{r}) - br\log s}{r \log(1+\frac{1}{r}) + \log(s+1)}.
\end{align*} Let $\kappa(r,s) = r \log(1+1/r) + \log(s+1)$. Then we can use \eqref{3.2} to get that the last inequality is equivalent to \begin{align*} \kappa(r,s)\log \left(x + \frac{x}{r} \right)\log \frac{x}{r} & - b(r+1)\log \frac{x}{s} + r \log \frac{x}{r} \\ & \geq (b-1)\log s - (1+\log(s+1)\log s)\log \left( x+\frac{x}{r} \right). \end{align*} Since $x/r \leq y \leq x/s$, we get \begin{displaymath} \frac{\kappa(r,s)}{r} - \frac{b(1+\frac{1}{r})}{\log(x+y)} + \frac{1}{\log(x+y)} \geq \frac{(b-1)\log s}{r\log(x+y)\log y} - \frac{1}{r\log y} - \frac{\log(s+1)\log s}{r\log y}. \end{displaymath} Hence \begin{displaymath} \frac{\kappa(r,s)}{r} - \frac{b(1+\frac{1}{r})}{\log(x+y)} + \frac{1}{\log x} \geq \frac{b\log s}{r\log(x+y)\log y} - \frac{\log s}{r\log^2 y} - \frac{1}{r\log y} - \frac{\log(s+1)\log s}{r\log y}. \end{displaymath} Now we substitute the definition of $\kappa(r,s)$ to obtain the inequality \begin{displaymath} \log \left( 1+\frac{1}{r} \right) - \frac{b}{\log(x+y)} + \frac{1}{\log x} + \frac{1}{r} \left( \log \left( 1 + \frac{x}{y} \right) - \frac{b}{\log(x+y)} + \frac{1}{\log y} \right) \left( 1 + \frac{\log s}{\log y} \right) \geq 0. \end{displaymath} Therefore, \begin{equation} \log \left( 1+\frac{y}{x} \right) - \frac{b}{\log(x+y)} + \frac{1}{\log x} + \frac{1}{r} \left( \log(s+1) - \frac{b}{\log(x+y)} + \frac{1}{\log y} \right) \left( 1 + \frac{\log s}{\log y} \right) \geq 0. \tag{3.6} \label{3.6} \end{equation} Next, we note that $y \geq x/r \geq 468049$ and $x+y \geq x(1+1/r) \geq B$. Hence $h(x,y) \geq f_1(x) + f_1(y) - f_b(x+y)$, where $f_c(t)$ is defined as in \eqref{3.1}. Setting $g_c(t) = \log t - 1 - c/\log t$, we see that \begin{displaymath} h(x,y) \geq x \left( \frac{\log(1 + \frac{y}{x}) - \frac{b}{\log(x+y)} + \frac{1}{\log x}}{g_1(x)g_b(x+y)} \right) + y \left( \frac{\log(1 + \frac{x}{y}) - \frac{b}{\log(x+y)} + \frac{1}{\log y}}{g_1(y)g_b(x+y)} \right).
\end{displaymath} Now we can use \eqref{3.4} and \eqref{3.5} to get the inequality \begin{displaymath} h(x,y) \geq x \left( \frac{\log(1 + \frac{y}{x}) - \frac{b}{\log(x+y)} + \frac{1}{\log x} + \frac{1}{r}(\log(1 + \frac{x}{y}) - \frac{b}{\log(x+y)} + \frac{1}{\log y})(1 + \frac{\log s}{\log y})}{g_1(x)g_b(x+y)} \right). \end{displaymath} Finally it suffices to apply the inequality \eqref{3.6}. \end{proof} Now we use Propositions \ref{prop204} and \ref{prop301} to give the following proof of Theorem \ref{thm101}. \begin{proof}[Proof of Theorem \ref{thm101}] We set $b = 1.15$. By \cite[Corollary 1]{axler2017}, we can choose $B = 38\,284\,442\,297$. In addition, we substitute the following explicit values for $r$ and $s$ into Proposition \ref{prop301} to get $\pi(x+y) \leq \pi(x) + \pi(y)$ for every $x \geq x_0$ and $x/r \leq y \leq x/s$, where $x_0$ is equal to the least integer greater than or equal to the right-hand side of \eqref{3.3}: \begin{center} \begin{tabular}{|l||c|c|c|c|c|} \hline $r$ \rule{0mm}{4mm} & $1950$ & $1949.9652$ & $1949.8838$ & $1949.6933$ & $1949.2476$ \\ \hline $s$ \rule{0mm}{4mm} & $1949.9652$ & $1949.8838$ & $1949.6933$ & $1949.2476$ & $1948.2049$ \\ \hline $x_0$\rule{0mm}{4mm} & $38\,284\,409\,814$ & $38\,284\,393\,330$ & $38\,284\,407\,670$ & $38\,284\,394\,575$ & $38\,284\,419\,151$ \\ \hline \hline $r$\rule{0mm}{4mm} & $1948.2049$ & $1945.7667$ & $1940.0707$ & $1926.7942$ & $1896.0125$ \\ \hline $s$\rule{0mm}{4mm} & $1945.7667$ & $1940.0707$ & $1926.7942$ & $1896.0125$ & $1825.5323$ \\ \hline $x_0$\rule{0mm}{4mm} & $38\,284\,398\,522$ & $38\,284\,417\,850$ & $38\,284\,399\,116$ & $38\,284\,426\,596$ & $38\,284\,405\,535$ \\ \hline \hline $r$\rule{0mm}{4mm} & $1825.5323$ & $1668.8817$ & $1344.8932$ & $ 785.8821$ & $ 189.9788$ \\ \hline $s$\rule{0mm}{4mm} & $1668.8817$ & $1344.8932$ & $ 785.8821$ & $ 189.9788$ & $ 109 $ \\ \hline $x_0$\rule{0mm}{4mm} & $38\,284\,440\,640$ & $38\,284\,412\,784$ & $38\,284\,406\,728$ & $38\,284\,305\,355$ 
& $38\,083\,977\,941$ \\ \hline \end{tabular} \end{center} In particular, we see that $\pi(m+n) \leq \pi(m) + \pi(n)$ for every $m \geq 38\,284\,440\,640$ and $m/1950 \leq n \leq m/109$. If $m \leq 38\,284\,440\,640$ and $m/1950 \leq n \leq m/109$, we get $m+n \leq (1+1/109)m \leq 39\,708\,229\,123$ and the result follows from Proposition \ref{prop204}. The remaining case where $m,n \geq 2$ and $m/109 \leq n \leq m$ is a direct consequence of \eqref{1.4} and \eqref{1.5}. \end{proof} \section{A Proof of Theorem \ref{thm102}} In this section, we use explicit estimates for the prime counting function $\pi(x)$ to give the following proof of Theorem \ref{thm102}. \begin{proof}[Proof of Theorem \ref{thm102}] If $\varepsilon \in [1/1950, 1]$, the result follows from Theorem \ref{thm101}. So, let $\varepsilon \in (0, 1/1950)$ and let $m,n \geq 2$ be integers with $\varepsilon m \leq n \leq m$ and $m \geq e^{\sqrt{0.3426/\log(1+\varepsilon)}}$. Then $m \geq 168\,527\,259\,431$. Hence, \begin{displaymath} \log(1+\varepsilon) \geq \frac{0.3426}{\log^2m} \geq \frac{0.3}{\log^2m} + \frac{1.1}{\log^3m}. \end{displaymath} It follows that \begin{displaymath} \log(m+n) - 1 - \frac{1}{\log(m+n)} - \frac{3.15}{\log^2 m} - \frac{14.25}{\log^3 m} \geq \log m - 1 - \frac{1}{\log m} - \frac{2.85}{\log^2 m} - \frac{13.15}{\log^3 m}. \end{displaymath} Now we can use \cite[Corollary 3]{axler2017} to see that \begin{equation} \frac{m}{\log(m+n) - 1 - \frac{1}{\log(m+n)} - \frac{3.15}{\log^2(m+n)} - \frac{14.25}{\log^3(m+n)}} \leq \pi(m). \tag{4.1} \label{4.1} \end{equation} Since $\log 2 > 3.15/\log^2m + 14.25/\log^3m$, we get \begin{equation} \log(m+n) - 1 - \frac{1}{\log(m+n)} - \frac{3.15}{\log^2(m+n)} - \frac{14.25}{\log^3(m+n)} \geq \log n - 1 - \frac{1}{\log n}. \tag{4.2} \label{4.2} \end{equation} Note that the function $f(x) = xe^{\sqrt{0.3426/\log(1+x)}}$ is decreasing on the interval $(0, 1/1950)$. Hence we get $n \geq \varepsilon m \geq f(1/1950) \geq 86\,424\,235$.
If we combine the inequality \eqref{4.2} with Corollary 3.5 of \cite{axler16}, it turns out that the inequality \begin{equation} \frac{n}{\log(m+n) - 1 - \frac{1}{\log(m+n)} - \frac{3.15}{\log^2(m+n)} - \frac{14.25}{\log^3(m+n)}} \leq \pi(n) \tag{4.3} \label{4.3} \end{equation} holds. By \cite[Corollary 1]{axler2017}, we have \begin{displaymath} \pi(m+n) \leq \frac{m+n}{\log(m+n) - 1 - \frac{1}{\log(m+n)} - \frac{3.15}{\log^2(m+n)} - \frac{14.25}{\log^3(m+n)}} \end{displaymath} and it suffices to apply \eqref{4.1} and \eqref{4.3}. \end{proof} \section{A Proof of Theorem \ref{thm103}} Let $k$ be a positive integer and let $\varepsilon$ be a positive real number. By Panaitopol \cite{pan3}, there exist positive real numbers $a_1, \ldots, a_k$ and two positive real numbers $\alpha_k$ and $\beta_k = \beta_k(\varepsilon)$ so that \begin{equation} \pi(x) \geq \frac{x}{\log x - 1 - \sum_{j=1}^k \frac{a_j}{\log^jx}} \qquad (x \geq \alpha_k) \tag{5.1} \label{5.1} \end{equation} and \begin{equation} \pi(x) \leq \frac{x}{\log x - 1 - \sum_{j=1}^k \frac{a_j}{\log^jx} - \frac{\varepsilon}{\log^kx}} \qquad (x \geq \beta_k). \tag{5.2} \label{5.2} \end{equation} Further, let $\gamma_k = \gamma_k(\varepsilon)$ be the smallest positive integer so that \begin{displaymath} \log 2 \geq \frac{\varepsilon}{\log^kx} + \sum_{j=1}^k \frac{a_j}{\log^jx} \end{displaymath} for every $x \geq \gamma_k$. Then we obtain the following result. \begin{prop} \label{prop401} Let $k$ be a positive integer and $\varepsilon, c$ be positive real numbers with $c > \varepsilon$. Then $\pi(x+y) \leq \pi(x) + \pi(y)$ for all real numbers $x,y \geq 2$ with $x \geq \max \{\alpha_k, \beta_k, \gamma_k, \exp(\sqrt[k]{c^2/(2(c - \varepsilon))}) \}$ and \begin{displaymath} \max \left\{ 5393, \frac{cx}{\log^k x} \right\} \leq y \leq x.
\end{displaymath} \end{prop} \begin{proof} Since $x \geq \exp(\sqrt[k]{c^2/(2(c - \varepsilon))})$, we have \begin{displaymath} \frac{c}{\log^kx} - \frac{c^2}{2\log^{2k}x} \geq \frac{\varepsilon}{\log^kx}. \end{displaymath} Using the inequality $\log(1+t) \geq t - t^2/2$, which holds for every $t \geq 0$, we see that \begin{displaymath} \log (x+y) - \log x \geq \log \left( 1+ \frac{c}{\log^kx} \right) \geq \frac{\varepsilon}{\log^kx}. \end{displaymath} If we combine the last inequality with \eqref{5.1}, it turns out that \begin{equation} \frac{x}{\log (x+y) - 1 - \sum_{j=1}^k \frac{a_j}{\log^j(x+y)} - \frac{\varepsilon}{\log^k(x+y)}} \leq \frac{x}{\log x - 1 - \sum_{j=1}^k \frac{a_j}{\log^j x}} \leq \pi(x). \tag{5.3} \label{5.3} \end{equation} On the other hand, we have $y \leq x$ and $x \geq \gamma_k$. Hence \begin{equation} \frac{y}{\log (x+y) - 1 - \sum_{j=1}^k \frac{a_j}{\log^j(x+y)} - \frac{\varepsilon}{\log^k(x+y)}} \leq \frac{y}{\log y - 1}. \tag{5.4} \label{5.4} \end{equation} By Dusart \cite[p.\:55]{dusart}, we have $\pi(t) \geq t/(\log t - 1)$ for every $t \geq 5393$. Applying this to \eqref{5.4}, we get \begin{equation} \frac{y}{\log (x+y) - 1 - \sum_{j=1}^k \frac{a_j}{\log^j(x+y)} - \frac{\varepsilon}{\log^k(x+y)}} \leq \pi(y). \tag{5.5} \label{5.5} \end{equation} By \eqref{5.2}, we have \begin{displaymath} \pi(x+y) \leq \frac{x+y}{\log (x+y) - 1 - \sum_{j=1}^k \frac{a_j}{\log^j(x+y)} - \frac{\varepsilon}{\log^k(x+y)}} \end{displaymath} and it suffices to apply \eqref{5.3} and \eqref{5.5}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm103}] Let $k=2$. We set $a_1 = 1$ and $a_2 = 2.85$. By \cite[Corollary 3]{axler2017}, we can choose $\alpha_2 = 38\,099\,531$. Further, we set $\varepsilon = 0.70863503301170907614119$. Then we can use \cite[Theorem 2]{axler2017} to see that \eqref{5.2} holds for every $x \geq \beta_2 = 14\,000\,264\,036\,190\,262$. A simple calculation shows that $\gamma_2 = 23$. Now let $c = c_0$. 
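The simple calculation giving $\gamma_2 = 23$ can be reproduced numerically; the sketch below (an aside, not from the paper) uses that every term on the right-hand side of the defining inequality is decreasing in $x$ for $x > 1$, so the first integer satisfying the inequality is $\gamma_2$:

```python
import math

# gamma_k is the least positive integer from which
#   log 2 >= eps/log^k(x) + sum_{j<=k} a_j/log^j(x)   holds for all x >= gamma_k.
# With k = 2, a_1 = 1, a_2 = 2.85 and the eps fixed in the proof, each term
# on the right is decreasing for x > 1, so a linear scan suffices.
eps = 0.70863503301170907614119
a1, a2 = 1.0, 2.85

def rhs(x):
    L = math.log(x)
    return eps / L**2 + a1 / L + a2 / L**2

gamma2 = next(x for x in range(2, 1000) if rhs(x) <= math.log(2))
assert rhs(gamma2 - 1) > math.log(2)  # the inequality fails one step earlier
print(gamma2)  # 23
```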
Substituting these values into Proposition \ref{prop401}, it turns out that the inequality $\pi(m+n) \leq \pi(m) + \pi(n)$ holds for all integers $m \geq n \geq 2$ satisfying $m \geq 14\,000\,264\,036\,190\,263$ and $n \geq cm/\log^2 m$. If $m \leq 14\,000\,264\,036\,190\,262$, the claim follows from Theorem \ref{thm101}. \end{proof} \section{A Proof of Theorem \ref{thm104}} First, we note some results of Dusart \cite{dusart2018} concerning the distance between $\pi(x)$ and $\text{li}(x)$. \begin{prop}[Dusart] For every real $x$ with $2 \leq x \leq 10^{20}$, we have \begin{equation} \pi(x) \leq \emph{li}(x), \tag{6.1} \label{6.1} \end{equation} and for every real $x$ satisfying $1\,090\,877 \leq x \leq 10^{20}$, we have \begin{equation} \emph{li}(x) - \frac{2\sqrt{x}}{\log x} \leq \pi(x). \tag{6.2} \label{6.2} \end{equation} \end{prop} \begin{proof} See \cite[Lemma 2.2]{dusart2018}. \end{proof} Now we use this result to give the following proof of Theorem \ref{thm104}. \begin{proof}[Proof of Theorem \ref{thm104}] By Theorems \ref{thm101} and \ref{thm103}, it suffices to consider the case where $n$ satisfies \begin{displaymath} 2 \sqrt{m} \leq n \leq m \times \min \left \{ \frac{1}{1950}, \frac{c_0}{\log^2 m} \right \}, \end{displaymath} where $c_0$ is given as in Theorem \ref{thm103}. If $m \leq 39\,687\,876\,365$, we get $m+n \leq (1+1/1950)m \leq 39\,708\,229\,123$ and the result follows from Proposition \ref{prop204}. So we can assume that $m \geq 39\,687\,876\,366$. Using \eqref{6.1}, we see that $\pi(m+n) \leq \text{li}(m+n)$. Now we can use the mean value theorem to see that $\pi(m+n) \leq \text{li}(m) + n/\log m$. Applying \eqref{6.2} to this inequality, we get \begin{displaymath} \pi(m+n) \leq \pi(m) + \frac{2\sqrt{m}}{\log m} + \frac{n}{\log m}, \end{displaymath} which is equivalent to \begin{equation} \pi(m+n) \leq \pi(m) + \frac{2\sqrt{m}}{\log m} + \frac{n}{\log n - 1} - \frac{n(\log(m/n) + 1)}{\log m (\log n - 1)}.
\tag{6.3} \label{6.3} \end{equation} Since $m \geq 39\,687\,876\,366$, we have $n \geq 887\,293$. So we can apply the inequality $\pi(x) \geq x/(\log x - 1)$ given in \cite[p.\:55]{dusart} to \eqref{6.3} and get \begin{equation} \pi(m+n) \leq \pi(m) + \pi(n) + \frac{2\sqrt{m}}{\log m} - \frac{n(\log(m/n) + 1)}{\log m (\log n - 1)}. \tag{6.4} \label{6.4} \end{equation} In order to prove the theorem, we consider the following four cases. \textit{Case} 1. $\sqrt{m}\log m/\log \log m \leq n \leq c_0m/\log^2 m$. \newline In this first case, the inequality \eqref{6.4} implies that \begin{equation} \pi(m+n) \leq \pi(m) + \pi(n) + \frac{2\sqrt{m}}{\log m} - \frac{n(2 \log \log m - c_3)}{\log m (\log m - 2\log \log m + c_3)}, \tag{6.5} \label{6.5} \end{equation} where $c_3 = \log(c_0) - 1 = -1.17409\ldots$. The assumption $n \geq \sqrt{m} \log m/\log \log m$ implies that \begin{displaymath} \frac{n(2 \log \log m - c_3)}{\log m (\log m - 2\log \log m + c_3)} \geq \frac{2\sqrt{m}}{\log m}. \end{displaymath} Applying this to \eqref{6.5}, we obtain the inequality $\pi(m+n) \leq \pi(m) + \pi(n)$. \textit{Case} 2. $2 \sqrt{m} (1 + 4 \log \log m/\log m) \leq n \leq \sqrt{m}\log m/\log \log m$. \newline Here, the inequality \eqref{6.4} implies that \begin{equation} \pi(m+n) \leq \pi(m) + \pi(n) + \frac{2\sqrt{m}}{\log m} - \frac{n(\log(\sqrt{m}\log \log m/\log m) + 1)}{\log m (\log(\sqrt{m}\log m/\log \log m) - 1)}. \tag{6.6} \label{6.6} \end{equation} We have \begin{displaymath} n \geq 2\sqrt{m} \left( 1 + \frac{4 \log \log m}{\log m}\right) \geq 2\sqrt{m} \times \frac{\log (\sqrt{m}\log m/\log \log m) - 1}{\log(\sqrt{m}\log \log m/\log m) + 1}. \end{displaymath} Applying this to \eqref{6.6}, we see that the inequality $\pi(m+n) \leq \pi(m) + \pi(n)$ holds. \textit{Case} 3. $2 \sqrt{m} \leq n \leq 2 \sqrt{m} (1 + 4 \log \log m/\log m)$. \newline Let $r(x) = 1 + 4\log \log x/\log x$.
In this case, a simple calculation shows that \begin{displaymath} n \geq 2\sqrt{m} \geq 2\sqrt{m} \times \frac{\log(2\sqrt{m}r(m)) - 1}{\log(\sqrt{m}/(2r(m))) + 1} \geq 2 \sqrt{m} \times \frac{\log n - 1}{\log(m/n) + 1}. \end{displaymath} Now we apply this to \eqref{6.4} to get the required inequality. \textit{Case} 4. $2 \sqrt{m}(1-2c_1/(\log m + c_1)) \leq n \leq 2 \sqrt{m}$. \newline In this final case, we have \begin{displaymath} n \geq 2\sqrt{m} \left( 1 - \frac{2c_1}{\log m + c_1} \right) = 2\sqrt{m} \times \frac{\log(2\sqrt{m}) - 1}{\log(\sqrt{m}/2) + 1} \geq 2 \sqrt{m} \times \frac{\log n - 1}{\log(m/n) + 1}. \end{displaymath} Finally, we apply this to \eqref{6.4} to arrive at the end of the proof. \end{proof} \section{A Proof of Theorem \ref{thm105}} Under the assumption that the Riemann hypothesis is true, Schoenfeld \cite[Corollary 1]{schoenfeld} showed that \begin{displaymath} |\pi(x) - \text{li}(x)| \leq \frac{\sqrt{x}}{8 \pi} \, \log x \end{displaymath} for every $x \geq 2657$. In 2018, Dusart \cite[Proposition 2.6]{dusart2018} found the following refinement. \begin{prop}[Dusart] \label{prop701} If the Riemann hypothesis is true, then \begin{displaymath} |\pi(x) - \emph{li}(x)| \leq \frac{\sqrt{x}}{8 \pi} \, \log \left( \frac{x}{\log x} \right) \end{displaymath} for every real $x \geq 5639$. \end{prop} We use Proposition \ref{prop701} to give the following proof of Theorem \ref{thm105}. \begin{proof}[Proof of Theorem \ref{thm105}] If $m \leq 5 \times 10^{19}$, then $m + n \leq 2m \leq 10^{20}$ and the result follows directly from Theorem \ref{thm104}. So it suffices to consider the case where $m \geq 5\times 10^{19}$. In order to prove the theorem, we consider the following three cases. \textit{Case} 1. $n \geq c_2\sqrt{m}\log^3 m$. \newline If $n \geq c_0m/\log^2m$, where $c_0$ is given as in Theorem \ref{thm103}, the result follows directly from Theorem \ref{thm103}. Hence we can assume that $c_2\sqrt{m}\log^3 m \leq n \leq c_0m/\log^2 m$.
By Proposition \ref{prop701}, we have $\pi(m+n) \leq \text{li}(m+n) + f(m+n)$, where $f(t) = (1/(8\pi)) \sqrt{t} \log(t/\log t)$. Now we use the mean value theorem to get \begin{displaymath} \pi(m+n) \leq \text{li}(m) + \frac{n}{\log m} + f(m) + \frac{n}{16\pi\sqrt{m}} \left( \log \left( \frac{m}{\log m} \right) + 2 \right). \end{displaymath} Next we apply Proposition \ref{prop701} to obtain the inequality \begin{displaymath} \pi(m+n) \leq \pi(m) + \frac{n}{\log m} + 2f(m) + \frac{n}{16\pi\sqrt{m}} \left( \log \left( \frac{m}{\log m} \right) + 2 \right), \end{displaymath} which is equivalent to \begin{displaymath} \pi(m+n) \leq \pi(m) + 2f(m) + \frac{n}{16\pi\sqrt{m}} \left( \log \left( \frac{m}{\log m} \right) + 2 \right) + \frac{n}{\log n - 1} - n \,\frac{\log(m/n) + 1}{\log m (\log n - 1)}. \end{displaymath} Since $m \geq 5 \times 10^{19}$, we have $n \geq c_2 \sqrt{m} \log^3m \geq 52\,511\,298\,895\,885$. So we can apply the inequality $\pi(x) \geq x/(\log x - 1)$ given in \cite[p.\:55]{dusart} to the last inequality and get \begin{equation} \pi(m+n) \leq \pi(m) + \pi(n) + 2f(m) + \frac{n}{16\pi\sqrt{m}} \left( \log \left( \frac{m}{\log m} \right) + 2 \right) - n \, \frac{\log(m/n) + 1}{\log m (\log n - 1)}. \tag{7.1} \label{7.1} \end{equation} Since $c_2\sqrt{m}\log^3 m \leq n \leq c_0m/\log^2m$, we see that \begin{displaymath} \pi(m+n) \leq \pi(m) + \pi(n) + 2f(m) + \frac{c_0\sqrt{m}}{16\pi\log^2 m} \left( \log \left( \frac{m}{\log m} \right) + 2 \right) - n \, \frac{2 \log \log m - c_3}{\log m (\log m - 2\log \log m + c_3)}, \end{displaymath} where $c_3 = \log(c_0) - 1 = -1.17409\ldots$. Now we substitute the definition of $f(t)$ to get $\pi(m+n) \leq \pi(m) + \pi(n) + g(m,n)$, where \begin{displaymath} g(m,n) = \frac{\sqrt{m}}{4\pi} \left( \left( 1 + \frac{c_0}{4\log^2m} \right) \log \left( \frac{m}{\log m} \right) + \frac{c_0}{2\log^2 m} \right) - n \, \frac{2 \log \log m - c_3}{\log m (\log m - 2\log \log m + c_3)}.
\end{displaymath} Clearly, it suffices to show that $g(m,n) \leq 0$. This inequality is equivalent to \begin{equation} n \geq \frac{\sqrt{m}}{4\pi} \left( \left( 1 + \frac{c_0}{4\log^2m} \right) \log \left( \frac{m}{\log m} \right) + \frac{c_0}{2\log^2 m} \right) \frac{\log m (\log m - 2\log \log m + c_3)}{2 \log \log m - c_3}. \tag{7.2} \label{7.2} \end{equation} Since \begin{displaymath} - \log \log m + \frac{c_0}{4\log^2m} \log \left( \frac{m}{\log m} \right) + \frac{c_0}{2\log^2 m} \leq 0, \end{displaymath} the inequality $n \geq c_2 \sqrt{m} \log^3 m$ implies \eqref{7.2} and we get $\pi(m+n) \leq \pi(m) + \pi(n)$. \textit{Case} 2. $c_2 \sqrt{m}\log m \log(m\log^{13}m) \leq n \leq c_2 \sqrt{m} \log^3 m$. \newline From \eqref{7.1}, it follows that the inequality $\pi(m+n) \leq \pi(m) + \pi(n)$ holds if \begin{equation} n \geq \left( c_2 \sqrt{m} \log \left( \frac{m}{\log m} \right) + \frac{c_2}{16\pi} \left( \log \left( \frac{m}{\log m} \right) + 2 \right)\log^3 m \right) \frac{\log m (\log(c_2\sqrt{m}\log^3m) - 1)}{\log(\sqrt{m}/(c_2\log^3 m)) + 1}. \tag{7.3} \label{7.3} \end{equation} We have \begin{displaymath} \frac{c_2}{16\pi} \left( \log \left( \frac{m}{\log m} \right) + 2 \right)\log^3 m \leq c_2 \sqrt{m} \log \log m \end{displaymath} and \begin{displaymath} \frac{\log(c_2\sqrt{m}\log^3m) - 1}{\log(\sqrt{m}/(c_2\log^3 m)) + 1} \leq 1 + \frac{13\log \log m}{\log m}. \end{displaymath} So if $n$ fulfills the inequality $n \geq c_2 \sqrt{m}\log m \log(m\log^{13}m)$, we get the inequality \eqref{7.3}. Hence we have $\pi(m+n) \leq \pi(m) + \pi(n)$. \textit{Case} 3. $c_2 \sqrt{m}\log m \log(m\log^8m) \leq n \leq c_2 \sqrt{m}\log m \log(m\log^{13}m)$. 
\newline We use \eqref{7.1} to see that the inequality $\pi(m+n) \leq \pi(m) + \pi(n)$ holds if \begin{equation} n \geq \left( c_2 \sqrt{m} \log \left( \frac{m}{\log m} \right) + \frac{c_2}{16\pi} \left( \log \left( \frac{m}{\log m} \right) + 2 \right)\log m \log(m\log^{13}m) \right) h(m) \log m, \tag{7.4} \label{7.4} \end{equation} where \begin{displaymath} h(m) = \frac{\log(c_2\sqrt{m}\log m \log(m\log^{13}m)) - 1}{\log(\sqrt{m}/(c_2\log m \log(m\log^{13}m))) + 1}. \end{displaymath} Note that \begin{displaymath} \frac{c_2}{16\pi} \left( \log \left( \frac{m}{\log m} \right) + 2 \right)\log m \log(m\log^{13}m) \leq c_2 \sqrt{m} \log \log m \end{displaymath} and $h(m) \leq 1 + 8\log \log m/\log m$. So the inequality $n \geq c_2 \sqrt{m}\log m \log(m\log^8m)$ implies the inequality \eqref{7.4} and we arrive at the end of the proof. \end{proof} \section{Appendix: The Incompatibility of the HLC and the Prime $k$-tuples Conjecture} To formulate the Prime $k$-tuples Conjecture, we first introduce the following definition. \begin{defi} A $k$-tuple of distinct integers $b_1, \ldots, b_k$ is \textit{admissible} if for each prime $p$, there is some congruence class mod $p$ which contains none of the $b_i$. \end{defi} \begin{prim} Let $b_1, \ldots, b_k$ be an admissible $k$-tuple of integers. Then there exist infinitely many positive integers $n$ for which all of the values $n + b_1, \ldots, n + b_k$ are prime. \end{prim} \begin{rema} The Prime $k$-tuples Conjecture is a special case of Schinzel's Hypothesis H \cite[p.\:188]{schinzel}. \end{rema} In order to show that the HLC and the Prime $k$-tuples Conjecture are incompatible, Hensley and Richards \cite{hensley} used the following function which was introduced by Schinzel and Sierpi\'nski \cite[p.\:201]{schinzel}. \begin{defi} Let the function $\rho^{\ast}: \mathds{N} \to \mathds{N}$ be defined by \begin{displaymath} \rho^{\ast}(m) = \max_{n \in \mathds{N}} |\{ k \in \mathds{N} \mid n < k \leq m+n, \text{gcd}(k, m!) 
= 1 \}|. \end{displaymath} This function describes the maximum, taken over all $n$, of the number of positive integers in the interval $(n,m+n]$ that are relatively prime to all positive integers less than or equal to $m$. \end{defi} Under the assumption that the Prime $k$-tuples Conjecture is true, Schinzel and Sierpi\'nski \cite[pp.\: 204--205]{schinzel} found the identity \begin{equation} \rho^{\ast}(m) = \limsup_{n \to \infty} (\pi(m+n) - \pi(n)). \tag{8.1} \label{8.1} \end{equation} Hensley and Richards \cite[p.\:380]{hensley} proved that for every $\varepsilon > 0$ there exists an $m_0(\varepsilon)$ so that \begin{displaymath} \rho^{\ast}(m) - \pi(m) \geq (\log 2 - \varepsilon) \times \frac{m}{\log^2 m} \end{displaymath} for every $m \geq m_0(\varepsilon)$. In particular, the last inequality gives \begin{equation} \lim_{m \to \infty} (\rho^{\ast}(m) - \pi(m)) = \infty. \tag{8.2} \label{8.2} \end{equation} So if the Prime $k$-tuples Conjecture is true, we can combine \eqref{8.1} and \eqref{8.2} to see that for all sufficiently large values of $m$ there exist infinitely many positive integers $n$ such that the inequality \begin{displaymath} \pi(m+n) > \pi(m) + \pi(n) \end{displaymath} holds, which contradicts the HLC. \section*{Acknowledgement} I would like to express my great appreciation to Thomas Leßmann for writing a C++ program in order to verify Proposition \ref{prop204}. I would also like to thank Pierre Dusart whose PhD thesis inspired me to deal with this subject. Furthermore, I thank R. for being a never-ending inspiration.
\section*{\refname}} \usepackage{amssymb} \usepackage{amsmath} \usepackage{graphicx} \usepackage{graphics} \usepackage{dcolumn} \usepackage[dvipsnames]{xcolor} \usepackage{fancyhdr} \usepackage{graphicx} \usepackage{float} \usepackage{multirow} \usepackage{subfig} \usepackage[colorlinks=true,linktocpage=true,urlcolor=blue,citecolor=blue,linkcolor=blue]{hyperref} \usepackage[utf8]{inputenc} \usepackage{dsfont} \usepackage[justification=centerlast, format=plain, labelfont=bf]{caption} \makeatletter \renewcommand\tableofcontents{% \@starttoc{toc}% } \makeatother \DeclareCaptionJustification{justified}{\leftskip=0pt \rightskip=0pt \parfillskip=0pt plus 1fil} \captionsetup{justification=justified} \setlength{\parindent}{0pt} \defP^{\rm tot}{P^{\rm tot}} \defP^{\rm n}{P^{\rm n}} \def\mathbf{k}{\mathbf{k}} \def\mathbf{K}{\mathbf{K}} \def\mathbf{x}{\mathbf{x}} \def\mathbf{\hat n}{\mathbf{\hat n}} \def\mathrm{X}{\mathrm{X}} \def\mathrm{T}{\mathrm{T}} \def\mathrm{B}{\mathrm{B}} \def\mathrm{E}{\mathrm{E}} \def\mathrm{A}{\mathrm{A}} \def\mathrm{n}{\mathrm{n}} \def\VEV#1{\left\langle #1 \right\rangle} \newcommand{{\vec\nabla_{\vec\theta}}}{{\vec\nabla_{\vec\theta}}} \newcommand{{\tau_{\mathrm{reion}}}}{{\tau_{\mathrm{reion}}}} \newcommand{{g_{\mathrm{reion}}}}{{g_{\mathrm{reion}}}} \newcommand{{g_{\mathrm{rec}}}}{{g_{\mathrm{rec}}}} \newcommand{\mathbf{x}}{\mathbf{x}} \newcommand{\mathbf{k}}{\mathbf{k}} \newcommand{\omega}{\omega} \newcommand{\Gamma}{\Gamma} \newcommand{\alpha}{\alpha} \newcommand{\epsilon}{\epsilon} \newcommand{M_{pl}}{M_{pl}} \newcommand{\Lambda}{\Lambda} \newcommand{f_{\mathrm{NL}}}{f_{\mathrm{NL}}} \newcommand{\ddd}[1]{\dfrac{\mathrm d^3 #1}{(2\pi)^3}} \newcommand{\dbar}[1]{\dfrac{\mathrm d #1}{(2\pi)}} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\begin{align}}{\begin{align}} \newcommand{\end{align}}{\end{align}} \newcommand{T_{\mathrm{gas}}}{T_{\mathrm{gas}}} \newcommand{T_{\rm CMB}}{T_{\rm CMB}} 
\newcommand{\mathrm{sinc}}{\mathrm{sinc}} \newcommand{f_{\rm sky}}{f_{\rm sky}} \newcommand{{T^{(21)}}}{{T^{(21)}}} \newcommand{v_{\chi}}{v_{\chi}} \newcommand{V_{\chi}}{V_{\chi}} \newcommand{V_{\chi b}}{V_{\chi b}} \newcommand{m_{\chi}}{m_{\chi}} \newcommand{\mathrm{Re}}{\mathrm{Re}} \newcommand{M_{\rm total}}{M_{\rm total}} \newcommand{M_{\odot}}{M_{\odot}} \newcommand{ {\rm Mpc}^{-1} }{ {\rm Mpc}^{-1} } \newcommand{ f_{\rm NL} }{ f_{\rm NL} } \newcommand{\db}[1]{{\bf \color{blue}{[DB: #1]}}} \newcommand{\jbm}[1]{{\bf \color{ForestGreen}{[JBM: #1]}}} \newcommand{\NS}[1]{{\bf \color{red}{[NS: #1]}}} \newcommand{\boldsymbol{x}}{\boldsymbol{x}} \newcommand{\boldsymbol{k}}{\boldsymbol{k}} \newcommand{\boldsymbol}{\boldsymbol} \newcommand{\textrm{e}}{\textrm{e}} \newcommand{\wigner}[6]{ \begin{pmatrix} #1 & #2 & #3 \\ #4 & #5 & #6 \end{pmatrix}} \newcommand{\sixj}[6]{ \begin{Bmatrix} #1 & #2 & #3 \\ #4 & #5 & #6 \end{Bmatrix}} \newcommand{ {v_{\rm cb}} }{ {v_{\rm cb}} } \def\cleb#1#2#3#4#5#6{{\cal C}^{#1#2}_{#3 #4 \,\, #5 #6}} \def\ALMt#1#2#3#4{A^{#1 #2}_{#3 #4}} \def\ALMt{L}{M}{l}{l'}{\ALMt{L}{M}{l}{l'}} \def{A^{\oplus}}^{LM}_{ll'}{{A^{\oplus}}^{LM}_{ll'}} \def{A^{\oplus}}^{L \,-M}_{ll'}{{A^{\oplus}}^{L \,-M}_{ll'}} \def{A^{\ominus}}^{LM}_{ll'}{{A^{\ominus}}^{LM}_{ll'}} \def{A^{\ominus}}^{L \, -M}_{ll'}{{A^{\ominus}}^{L \, -M}_{ll'}} \def\wigner#1#2#3#4#5#6{ \left( \begin{array}{ccc} #1 & #3 & #5 \\ #2 & #4 & #6 \\ \end{array} \right)} \makeatletter \def\doauthor#1#2#3{% \ignorespaces#1\unskip \begingroup #3% \@if@empty{#2}{\@listcomma\endgroup{}{}}{\endgroup{\comma@space}{}\frontmatter@footnote{#2}}% \space \@listand }% \makeatother \makeatletter \def\@ssect@ltx#1#2#3#4#5#6[#7]#8{% \def\H@svsec{\phantomsection}% \@tempskipa #5\relax \@ifdim{\@tempskipa>\z@}{% \begingroup \interlinepenalty \@M #6{% \@ifundefined{@hangfroms@#1}{\@hang@froms}{\csname @hangfroms@#1\endcsname}% {\hskip#3\relax\H@svsec}{#8}% }% \@@par \endgroup \@ifundefined{#1smark}{\@gobble}{\csname 
#1smark\endcsname}{#7}% }{% \def\@svsechd{% #6{% \@ifundefined{@runin@tos@#1}{\@runin@tos}{\csname @runin@tos@#1\endcsname}% {\hskip#3\relax\H@svsec}{#8}% }% \@ifundefined{#1smark}{\@gobble}{\csname #1smark\endcsname}{#7}% \addcontentsline{toc}{#1}{\protect\numberline{}#8}% }% }% \@xsect{#5}% }% \makeatother \begin{document} \preprint{KCL-2020-41} \title{First Constraints on Small-Scale Non-Gaussianity from\\ UV Galaxy Luminosity Functions} \author{Nashwan Sabti$^{\mathds{S},}$} \affiliation{Department of Physics, King's College London, Strand, London WC2R 2LS, UK} \author{Julian B. Mu\~{n}oz$^{\mathds{M},}$} \affiliation{Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138, USA} \affiliation{Department of Physics, Harvard University, 17 Oxford St., Cambridge, MA 02138, USA} \author{Diego Blas$^{\mathds{B},}$} \affiliation{Department of Physics, King's College London, Strand, London WC2R 2LS, UK} \def\arabic{footnote}{$\mathds{S}$\hspace{0.7pt}}\footnotetext{\href{mailto:[email protected]}{[email protected]}} \def\arabic{footnote}{$\mathds{M}$\hspace{-0.9pt}}\footnotetext{\href{mailto:[email protected]}{[email protected]}} \def\arabic{footnote}{$\mathds{B}$}\footnotetext{\href{mailto:[email protected]}{[email protected]}} \setcounter{footnote}{0} \def\arabic{footnote}{\arabic{footnote}} \begin{abstract} \noindent UV luminosity functions provide a wealth of information on the physics of galaxy formation in the early Universe. Given that this probe indirectly tracks the evolution of the mass function of dark matter halos, it has the potential to constrain alternative theories of structure formation. One such scenario is the existence of primordial non-Gaussianity at scales beyond those probed by observations of the Cosmic Microwave Background. Through its impact on the halo mass function, such small-scale non-Gaussianity would alter the abundance of galaxies at high redshifts.
In this work we present an application of UV luminosity functions as measured by the Hubble Space Telescope to constrain the non-Gaussianity parameter $f_\mathrm{NL}$ for wavenumbers above a cut-off scale $k_{\rm cut}$. After marginalizing over the unknown astrophysical parameters and accounting for potential systematic errors, we arrive at a $2\sigma$ bound of $f_{\rm NL}=71^{+426}_{-237}$ for a cut-off scale $k_{\rm cut}=0.1\,\mathrm{Mpc}^{-1}$ in the bispectrum of the primordial gravitational potential. Moreover, we perform forecasts for the James Webb Space Telescope and the Nancy Grace Roman Space Telescope, finding an expected improvement of a factor $3-4$ upon the current bound. \end{abstract} \maketitle \tableofcontents \section{Introduction} \vspace{-0.05cm} Cosmological surveys over the last few decades have provided us with an unprecedented understanding of the Universe. These include measurements of the Cosmic Microwave Background (CMB)~\cite{Akrami:2018vks}, as well as of the large-scale structure (LSS) of the Universe~\cite{Abazajian:2008wr, Abbott:2005bi}. Nevertheless, a large swath of our cosmos, corresponding to the cosmic dawn and reionization eras, remains largely unprobed. These two eras are the next frontier of precision cosmology. \vspace{6pt} Progress has been made by obtaining indirect information on the epoch of reionization (EoR) through its effect on the CMB anisotropies~\cite{Hu:1999vq, Adam:2016hgk}, the spectra of distant quasars~\cite{Barkana:2000fd, Becker:2001ee}, as well as the redshifted 21-cm line~\cite{Morales:2009gs}. These observables track the transition from a mostly neutral intergalactic medium to an ionized one. A more direct approach, however, involves observing the redshifted emission of the galaxies at that time. 
For this, our main handle is the (rest-frame) UV luminosity function (LF) observed by the {\it Hubble Space Telescope} (HST)~\cite{Bouwens:2014fua,Finkelstein_2015,Atek:2015axa,Livermore:2016mbs,Bouwens_2017asdasd,Mehta_2017,Ishigaki_2018,Oesch_2018,Atek:2018nsc}. Data collected by the HST over the last decades have provided us with an increasingly detailed galactic census at high redshifts, which has dramatically enhanced our understanding of early stellar formation~\cite{Tacchella:2012ih}. Besides providing key insights on the astrophysics of reionization, these LFs open a window towards probing different aspects of our cosmological models. In particular, the UV LFs probe cosmological {\it small} scales, which are otherwise difficult to access by current data sets. New features of the fundamental model of our Universe may lie at these scales, e.g.~\cite{Chevallard:2014sxa, Dayal:2014nva, Corasaniti:2016epp, Menci:2017nsr, Yue:2017hbz, Lovell:2017eec, Unal:2018yaa, Irsic:2019iff, Yoshiura:1809192}. The main purpose of this work is to illustrate these exciting possibilities. We do this by exploiting LF observations to learn about the physics of inflation in the form of primordial non-Gaussianity~\cite{Maldacena:2002vr, Celoria:2018euj}. \newpage The most accepted paradigm to explain the currently observed features of the Universe is that it went through an inflationary period at early times~\cite{Guth:1980zm, Linde:1981mu, Baumann:2009ds}. This framework is, however, quite broad in terms of determining which fundamental mechanism was actually operating. A promising strategy to unearth the physics of the inflationary era consists of exploring observables that can differentiate between families of inflationary models, grouped for instance according to effective-field-theory criteria~\cite{Cheung:2007st, Arkani-Hamed:2015bza}. 
A key feature of many non-minimal models is a deviation in the primordial fluctuations from the simplest Gaussian prediction, a feature known as primordial non-Gaussianity (PNG, see e.g.~\cite{Biagetti:2019bnp} for a recent review). This PNG can be scale dependent, for instance in models in which there is a relevant scale during the inflationary period. {\it Scale-dependent non-Gaussianity} is thus a powerful probe into the physics of the primordial Universe~\cite{Verde:2000vr, Komatsu:2009kd, Byrnes:2010ft}. \vspace{6pt} A departure from Gaussianity in the primordial fluctuations alters the abundance of halos, and thus the UV LF measured by the HST. In particular, local-type PNG has been shown to affect the rarest objects (such as the heaviest halos), as they lie the furthest from the peak of the distribution of overdensities (see e.g.~\cite{Pillepich:2008ka} and references therein). It is in this region where deviations from Gaussianity would be more apparent. This makes galaxy clusters a good probe of local-type PNG in the local ($z\sim 0$) Universe~\cite{Mana:2013qba, LoVerde:2007ri,Jimenez:2009us,LoVerde:2011iz,Shandera:2012ke}. Interestingly, however, the galaxies that the HST observes are hosted in halos which were very rare at their own redshift. This is because, despite their lower overall mass (thousands of times smaller than those of clusters today), they corresponded to large overdensities due to the smaller size of matter fluctuations at that time. Here we show that this makes the HST UV LFs a powerful probe of local-type PNG, enabling us to search for it at scales corresponding to wavenumbers $k\gtrsim 0.1\, {\rm Mpc}^{-1} $, which are difficult to access by CMB~\cite{Akrami:2019izv} and LSS observations~\cite{Shirasaki:2012sx,Leistedt:2014zqa}. 
PNG at even smaller scales can be accessed through other probes, for instance, through spectral distortions of the CMB anisotropies~\cite{Naruko:2015pva,Emami:2015xqa,Khatri:2015tla,Cabass:2018jgj} (although current bounds are at the level of $f_{\rm NL}\lesssim 10^5$ for scale-independent PNG). \vspace{6pt} In our main analysis we use the LFs from the Hubble Legacy Fields (HLF) catalog~\cite{Bouwens:2014fua}. In particular, we cover the redshift range $z=4-8$ and rest-frame UV magnitudes $M_{\rm UV}$ between $-22.7$ and $-16.4$ to find constraints on the amplitude $f_{\rm NL}$ of primordial non-Gaussianities at small scales $k > k_\mathrm{cut} = 0.1\,\mathrm{Mpc}^{-1}$. We fit a semi-analytical model to the shape of the UV LFs based on that of~\cite{Gillet:2019fjd} and use corrections to the halo mass function induced by primordial non-Gaussianity. By accounting for possible systematic errors in the UV LF data and marginalizing over the astrophysical parameters in our model, we find a bound of $f_{\rm NL}=71^{+426}_{-237}$ at $2\sigma$ for $k_{\rm cut}=0.1\, {\rm Mpc}^{-1} $. This is the first constraint on primordial non-Gaussianities from LF data and covers smaller scales than currently probed, as illustrated in Figure~\ref{fig:current_status}. Our approach is complementary to forecasts using future CMB spectral distortion data~\cite{Emami:2015xqa, Dimastrogiovanni:2016aul}, as well as those proposed for observations of fast radio bursts~\cite{Reischke:2020cgd}. As a cross-check, we have derived constraints using different UV LFs, including those from the lensing-based Hubble Frontier Fields (HFF)~\cite{Atek:2015axa,Livermore:2016mbs,Bouwens_2017asdasd,Ishigaki_2018,Oesch_2018,Atek:2018nsc}, where we find comparable results. Moreover, we perform forecasts for the upcoming {\it James Webb Space Telescope} (JWST) and {\it Nancy Grace Roman Space Telescope} (NGRST), showing that they will improve upon our HST constraints by a factor of $3-4$. 
\vspace{6pt} In what follows, we will assume a cosmological model with base parameters as measured by Planck~\cite{Aghanim:2018eyx}: $h = 0.6727,\, \Omega_\mathrm{b}h^2 = 0.02236,\, \Omega_\mathrm{c}h^2 = 0.1202,\, n_\mathrm{s} = 0.9649,\, \tau = 0.0544,\, A_\mathrm{s} = 2.101\times 10^{-9}\ \mathrm{and}\ k_\mathrm{pivot}=0.05\,\mathrm{Mpc}^{-1}$. This paper is structured as follows: Section~\ref{sec:UVLF} lays out our semi-analytical model for the UV LF and the HST data used in the analysis. In Section~\ref{sec:primordial_nonGauss} we summarize the formalism of small-scale non-Gaussianity and its impact on the UV LF. In Section~\ref{sec:results} the results of this work are presented. In Section~\ref{sec:forecasts} we make forecasts for JWST and NGRST. Finally, we present our conclusions in Section~\ref{sec:conclusions}. Complementary details are included in the Appendices~\ref{app:comparison_previous_literature} $-$\ref{subsec:21cm_forecast}. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{Figures/current_status.pdf} \caption{Illustration of the current $1\sigma$ constraints on $f_\mathrm{NL}$ as a function of comoving wavenumber $k$ from LSS~\cite{Castorina:2019wmr}, CMB~\cite{Akrami:2019izv} and LF observations, together with forecasts for JWST and NGRST. The smallest scale in our bounds, $k_\mathrm{max}\sim 2\,\mathrm{Mpc}^{-1}$, corresponds to the smallest halo mass probed by the Hubble fields. We set the cut-offs for the LSS and CMB observables following~\cite{LoVerde:2007ri,Castorina:2019wmr}, although we note that these are approximate and for illustration purposes only. The forecasts for JWST and NGRST are based on a wide-field and high-latitude survey mode respectively (see Section~\ref{sec:forecasts}). 
While forecasts for 21-cm experiments are not included, they could potentially reach scales $k\sim 50\,\mathrm{Mpc}^{-1}$ (see Section~\ref{subsec:21cm_forecast}).} \label{fig:current_status} \end{figure} \section{The UV Luminosity Function} \label{sec:UVLF} \subsection{UV LF Model} \label{subsec:LF_definitions} The abundance of galaxies in the early Universe can be tracked through their luminosity function, which describes the relation between the observed number density of galaxies and their flux (or magnitude) in a particular band. In the early Universe, galaxies contain young stars that emit in the UV part of the spectrum. This radiation gets redshifted due to the expansion of the Universe and can be observed today with optical or IR-band telescopes, such as the HST. An interesting application of the UV emission is to track the star-formation rate (SFR) across the cosmos~\cite{Kennicutt:1998zb, Robertson:2010an}. We are interested in the UV LF of galaxies around the epoch of reionization. In order to employ HST data, we ought to model the abundance of such galaxies, as well as their properties. This process can be separated into two parts. The first is the halo mass function, which describes how many halos of each mass there are, and is chiefly influenced by cosmology. The second is the halo-galaxy connection, driven by astrophysical processes, which allows us to relate the halo mass to the observed emission. While different parts of this calculation can be directly simulated (see e.g.~\cite{2016MNRAS.462..235L,Tacchella:2018qny,Yung_2018}), throughout this work we will use a semi-analytical numerical method based on simulation results. \vspace{-0.1cm} \subsubsection{Halo Mass Function} Massive, highly luminous galaxies tend to be hosted by heavy halos. While massive halos are better able to form such galaxies, they are found more rarely than lower-mass halos.
The abundance of halos has been extensively studied in the literature, e.g.~\cite{Jenkins:2000bv, Reed:2006rw}. Here we follow the excursion-set approach based on ellipsoidal gravitational collapse of dark matter halos (which results in a better agreement with numerical simulations than spherical-collapse models). In such models the barrier height, i.e., the threshold above which a density perturbation will collapse, depends on the mass of the object. We adopt the collapse formalism developed by Sheth\,\&\,Tormen~\cite{Sheth:2001dp}, where the halo mass function is of the following form: \begin{align} \label{eq:ST_HMF} \frac{\mathrm{d}n}{\mathrm{d}M_\mathrm{h}} = \frac{\rho_\mathrm{m}}{M_\mathrm{h}}\frac{\mathrm{d}\ln\sigma^{-1}}{\mathrm{d}M_\mathrm{h}}f_\mathrm{ST}\ , \end{align} with \begin{align} f_\mathrm{ST} = & A_\mathrm{ST}\sqrt{\frac{2a_\mathrm{ST}}{\pi}}\left[1+\left(\frac{\sigma_M^2}{a_\mathrm{ST}\delta_\mathrm{ST}^2}\right)^{p_\mathrm{ST}}\right]\frac{\delta_\mathrm{ST}}{\sigma_M}\times\nonumber\\ &\times\exp\left(-\frac{a_\mathrm{ST}\delta_\mathrm{ST}^2}{2\sigma_M^2}\right)\ , \end{align} and where $A_\mathrm{ST} = 0.3222$, $a_\mathrm{ST} = 0.707$, $p_\mathrm{ST} = 0.3$, $\delta_\mathrm{ST} = 1.686$ and $\sigma_M$ is the root mean square of the density field smoothed over a mass scale $M$ (see Eq.~\eqref{eq:sigmasq_M}). \subsubsection{Halo-galaxy Connection} We follow a simple phenomenological approach to link host halos to the properties of galaxies that reside in them. We assume that each dark-matter halo hosts one galaxy on average (see, e.g.,~\cite{Wechsler:2018pic} for a detailed review on the halo occupation distribution). The efficiency with which this galaxy will form stars depends on the mass of the host halo and is expected to exhibit a peak at halo masses $10^{11}-10^{12}\, M_\odot$ (at $z = 4$)~\cite{Tacchella:2018qny}, similar to that of our own Milky Way.
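As a concrete numerical check, the Sheth-Tormen multiplicity function above can be evaluated with a short routine (a minimal sketch; the parameter values are those quoted in the text, and the function name is ours):

```python
import numpy as np

# Sheth-Tormen parameters as quoted in the text
A_ST, LITTLE_A_ST, P_ST, DELTA_ST = 0.3222, 0.707, 0.3, 1.686

def f_sheth_tormen(sigma_m):
    """Multiplicity function f_ST as a function of the smoothed rms sigma_M."""
    # scaled (squared) peak height a_ST * delta_ST^2 / sigma_M^2
    nu2 = LITTLE_A_ST * DELTA_ST**2 / sigma_m**2
    return (A_ST * np.sqrt(2.0 * nu2 / np.pi)
            * (1.0 + nu2**(-P_ST))
            * np.exp(-0.5 * nu2))
```

Multiplying this by $(\rho_\mathrm{m}/M_\mathrm{h})\,\mathrm{d}\ln\sigma^{-1}/\mathrm{d}M_\mathrm{h}$ then gives the halo mass function of Eq.~\eqref{eq:ST_HMF}.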
A simple analytic model that captures this behaviour relates the mass of the host halo $M_\mathrm{h}$ to the typical stellar mass $M_*$ inside the halo via a double power-law\footnote{Note that the usual power-law is expressed in terms of $M_\mathrm{h}$, and our expression can be seen as an approximation to the inverse of that function.}: \begin{align} \label{eq:Mh_Mstar_doublepower_approx} M_\mathrm{h} = \left(\frac{\epsilon_*M_\mathrm{c}^{\alpha_*}}{M_*}\right)^{\frac{1}{\alpha_*-1}} + \left(\frac{\epsilon_*M_\mathrm{c}^{\beta_*}}{M_*}\right)^{\frac{1}{\beta_*-1}}\ , \end{align} where $\epsilon_*$, $\alpha_*$ and $\beta_*$ are free parameters that we will fit for with data and $M_\mathrm{c} = 1.6\times10^{11}\, M_\odot$. We take the fitting parameters to be redshift-independent, as suggested by the results of~\cite{Tacchella:2018qny} (see also~\cite{Trenti:2010sz, Sitwell:2013fpa, Mason:2015cna, Yung_2018}). We have explicitly tested that varying these parameters independently at each redshift does not change our constraints significantly. The UV emission is dominated by massive, young stars and thus tracks the SFR ($\dot M_*$), rather than $M_*$. These two quantities can be related via~\cite{Gillet:2019fjd}: \begin{align} \label{eq:Mstardot_Mstar_relation} \dot{M}_* = \frac{M_*}{t_*H^{-1}(z)}\ , \end{align} where $t_*$ is a (dimensionless) parameter that corrects the stellar-formation time-scale with respect to the cosmic Hubble rate $H(z)$. While this parameter ought to be fit from data, in practice $t_*$ and $\epsilon_*$ have identical effects on the UV LF, and thus we will fix $t_*$ to unity hereafter without any loss of generality. 
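A minimal numerical sketch of this halo-galaxy connection follows (illustrative parameter values close to the best fits found later in this work; the function names are ours):

```python
import numpy as np

M_C = 1.6e11  # Msun, fixed in the text

def halo_mass(m_star, eps=0.23, alpha=-1.14, beta=0.20):
    """Double power law relating the typical stellar mass M_* (Msun) to the
    host halo mass M_h; eps, alpha, beta are the free parameters fit to data."""
    term_a = (eps * M_C**alpha / m_star) ** (1.0 / (alpha - 1.0))
    term_b = (eps * M_C**beta / m_star) ** (1.0 / (beta - 1.0))
    return term_a + term_b

def sfr(m_star, hubble_rate, t_star=1.0):
    """Star-formation rate Mdot_* = M_* / (t_* H^-1(z)); with hubble_rate in
    1/yr the result is in Msun/yr. t_* is fixed to unity in the text."""
    return m_star * hubble_rate / t_star
```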
The star formation rate in the rest-frame can be expressed in terms of the UV luminosity $L_\mathrm{UV}$ as~\cite{Sun_2016}: \begin{align} \dot{M}_* = \kappa_\mathrm{UV}L_\mathrm{UV}\ , \end{align} where $\kappa_\mathrm{UV} = 1.15\times 10^{-28}\ M_\odot \, \mathrm{s}\,\mathrm{erg^{-1} yr^{-1}}$ is a conversion factor, and \begin{align} \label{eq:LUV} \log_{10}\left(\frac{L_\mathrm{UV}}{\mathrm{erg \, s^{-1}}}\right) = 0.4(51.63 + \langle A_\mathrm{UV}\rangle-M_\mathrm{UV})\ , \end{align} with $M_\mathrm{UV}$ the absolute UV magnitude and $\langle A_\mathrm{UV}\rangle$ a dust correction term. The observed UV luminosity can experience significant attenuation by dust extinction, especially at high luminosities and low redshifts~\cite{Yung_2018}. We model this extinction following~\cite{Tacchella:2018qny} (similar to the case of Lyman-break galaxies). For galaxies with a spectrum given by $f\sim\lambda^\beta$, the attenuation is assumed to follow $A_\mathrm{UV} = 4.43 + 1.99\beta$ \cite{Meurer:1999jj,Smit:2012nf}. We use the observations of the $\beta$ parameter at $z \leq 8$ reported in~\cite{Bouwens:2013hxa} and fit it following the prescription in~\cite{Trenti:2014hka}: \begin{align} \langle\beta(z,M_\mathrm{UV})\rangle = \begin{cases} a(z)e^{-\frac{b(z)}{a(z)}}+c & M_\mathrm{UV}\geq M_0\\ a(z) + b(z) + c & M_\mathrm{UV} < M_0 \end{cases}\ , \end{align} where $a(z) = \beta_{M_0}(z)-c$, $b(z) = \frac{\mathrm{d}\beta}{\mathrm{d}M_0}(z)(M_\mathrm{UV}-M_0)$, $c = -2.33$, $M_0 = -19.5$ and the values for $\beta_{M_0}$ and $\mathrm{d}\beta/\mathrm{d}M_0$ are taken from~\cite{Bouwens:2013hxa}. The exponential fit at $M_\mathrm{UV}\geq M_0$ prevents the dust extinction from becoming negative. 
At any given $M_\mathrm{UV}$ a Gaussian distribution with standard deviation $\sigma_\beta = 0.34$~\cite{Bouwens:2011yy} is assigned to $\beta$, which then gives the desired average extinction~\cite{Smit:2012nf}: \begin{align} \label{eq:dust} \langle A_\mathrm{UV}\rangle = 4.43 + 0.79\ln(10)\sigma_\beta^2+1.99\langle\beta\rangle\ . \end{align} At $z > 8$ the dust extinction quickly vanishes~\cite{Yung_2018} and thus we neglect it. In Appendix~\ref{app:dust}, we explore the impact of alternative fitting parameters for the dust extinction on our results. \vspace{6pt} Finally, with all the ingredients combined, the luminosity function can be computed as: \begin{align} \label{eq:UVLF_definition} \phi_\mathrm{UV} = \frac{\mathrm{d}n}{\mathrm{d}M_\mathrm{UV}} = \frac{\mathrm{d}n}{\mathrm{d}M_\mathrm{h}}\frac{\mathrm{d}M_\mathrm{h}}{\mathrm{d}M_\mathrm{UV}}\ . \end{align} Note that in this approach the stellar properties of galaxies only depend on the halo mass, rather than the unique formation history of the host halo. As such, this model is not applicable at the level of each individual galaxy, but should be thought of as describing the {\it average} evolution of stellar properties in galaxies. We illustrate the dependence of the LF on the different parameters in Figure~\ref{fig:UVLF_params_dependence}. It is clear that the effects of $\epsilon_*$ and $f_\mathrm{NL}$ (Section~\ref{subsec:HMF_fNL_corrections}) on the UV LF are strongly degenerate. However, as we will show later on, using a combination of UV LF data at different redshifts will break this degeneracy to a reasonable degree, allowing the UV LF to be a strong probe of primordial non-Gaussianity.
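Schematically, the chain from observed magnitude to star-formation rate described above reads as follows (a sketch; we take the mean UV slope $\langle\beta\rangle$ as an input rather than computing it from the fit, and clamp the extinction at zero, which is an assumption of this sketch):

```python
import numpy as np

KAPPA_UV = 1.15e-28  # Msun s erg^-1 yr^-1, conversion factor from the text
SIGMA_BETA = 0.34    # scatter of the UV continuum slope beta

def mean_attenuation(mean_beta):
    """Average dust extinction <A_UV> for a Gaussian-distributed beta."""
    a_uv = 4.43 + 0.79 * np.log(10.0) * SIGMA_BETA**2 + 1.99 * mean_beta
    return max(a_uv, 0.0)  # no negative extinction (assumption in this sketch)

def luminosity_uv(m_uv, a_uv=0.0):
    """Intrinsic UV luminosity in erg/s from the absolute magnitude M_UV."""
    return 10.0 ** (0.4 * (51.63 + a_uv - m_uv))

def sfr_from_magnitude(m_uv, mean_beta):
    """Star-formation rate in Msun/yr implied by M_UV: Mdot_* = kappa_UV L_UV."""
    return KAPPA_UV * luminosity_uv(m_uv, mean_attenuation(mean_beta))
```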
\begin{figure} \centering \includegraphics[width=\linewidth]{Figures/params_dependence.pdf} \caption{An illustration of the dependence of the UV luminosity function on the fitting parameters in Eq.~\eqref{eq:Mh_Mstar_doublepower_approx} and the amplitude $f_\mathrm{NL}$ of the small-scale PNG (for $k_{\rm cut}=0.1\, {\rm Mpc}^{-1} $). The changes in the UV LF are exaggerated for descriptive purposes only. The dark (light) shades indicate an increase (decrease) of the corresponding parameter. The plots cover the magnitude range $-24\leq M_\mathrm{UV}\leq-16$ and parameter ranges $-5\leq \alpha_*\leq 0$, $0\leq\beta_*\leq 0.35$, $0.08\leq\epsilon_*\leq 0.31$ and $-3000\leq f_\mathrm{NL}\leq 2000$.} \label{fig:UVLF_params_dependence} \end{figure} \subsection{UV LF Data} \label{subsec:UVLF_data} The high-redshift UV LF has been observed by the Hubble Space Telescope over a decades-long endeavour. This has resulted in two main data catalogs dubbed the Hubble Legacy Fields (HLF) and the Hubble Frontier Fields (HFF). The first consists of several deep-field surveys and has robustly probed the UV LF at the bright end, while the latter consists of observations of six cluster lenses, where faint background galaxies are magnified enough to become observable. Both methods have their own advantages and systematics~\cite{Maizy:2009df}. For instance, the HFF can reach fainter objects, as those are strongly magnified by the cluster lenses, whereas lensing can introduce important uncertainties~\cite{Bouwens_2017asdasd}. On the other hand, the deep blank fields from the HLF catalog have the advantage of being easier to model and, in addition, can better probe the bright end of the LF given the relatively large observed areas~\cite{Maizy:2009df}. 
As will be discussed in Section~\ref{sec:primordial_nonGauss}, and can be readily seen in the lower right panel of Figure~\ref{fig:UVLF_params_dependence}, the impact of primordial non-Gaussianities will be mainly visible at the bright end of the LF. Therefore, we perform our main analysis with the data obtained from the HLF (data set 1 below) and summarize the results obtained from other data sets in Appendix~\ref{app:alternative_dataset_results}. In particular, we make use of the measured LFs reported by the following references: \begin{itemize} \item \textbf{Data set 1}: Bouwens et al. 2015 ($z = 4+5+6+7+8$)~\cite{Bouwens:2014fua}. \item \textbf{Data set 2}: Atek et al. 2018 ($z = 6$)~\cite{Atek:2018nsc}, Atek et al. 2015 ($z = 7$)~\cite{Atek:2015axa}, Ishigaki et al. 2018 ($z = 8$)~\cite{Ishigaki_2018} and Oesch et al. 2018 ($z = 10$)~\cite{Oesch_2018}. \item \textbf{Data set 3}: Livermore et al. 2017 ($z = 6+7+8$)~\cite{Livermore:2016mbs} and Oesch et al. 2018 ($z = 10$)~\cite{Oesch_2018}. \end{itemize} The first data set derives the UV LF from the HLF catalog, while the latter two use HFF data. Note that in some references the UV LF is reported using either a 1500 or {1600\,\AA} UV band filter. This induces a shift of $|M_{1500} - M_{1600}| \lesssim 0.05$~\cite{Williams_2018}, which we have explicitly checked to not change our results. Hence, from this point onward, we will simply use $M_\mathrm{UV}$ to denote the UV magnitude. Next, we note that while the UV LF at $z = 10$ is also reported in~\cite{Bouwens:2014fua}, we do not include it in our analyses, as nearly all search fields contain zero galaxy candidates at that redshift. A final important point to bear in mind is that the faint end of the quasar LF and the bright end of the galaxy LF overlap, see e.g.~\cite{Matsuoka:2017frx}. The subtracted result, i.e. the galaxy LF, may then present a power-law feature at the bright end~\cite{Ono:2017wjz}. 
This remains an experimental challenge, as it is difficult to cleanly separate the two contributions. We show data set 1 in Figure~\ref{fig:UVLF_Bouwens2015_fit}, along with our best-fit model (in the absence of primordial non-Gaussianity). \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Figures/Best_fit_Bouwens2015.pdf} \caption{Global fits of our UV luminosity function model to the data from~\cite{Bouwens:2014fua} in the absence of non-Gaussianity (i.e., fixing $f_\mathrm{NL}=0$). A minimum relative error of 30\% is imposed on the data (see Section~\ref{sec:results} for details). The best-fit parameters are $\{\alpha_*, \beta_*, \epsilon_*\} = \{-1.14, 0.20, 0.23\}$.} \label{fig:UVLF_Bouwens2015_fit} \end{figure} \section{Primordial Non-Gaussianity} \label{sec:primordial_nonGauss} Here we explore the phenomenology of the primordial non-Gaussianity models considered in this work and how they affect the distribution of matter and thus the halo mass function. \subsection{Formalism and Models} We begin by laying down our formalism. Assuming statistical homogeneity and isotropy, we can generically write the 2- and 3-point correlation functions of the gravitational potential $\Phi$ as: \begin{align} \label{eq:correlation_function} \langle\Phi(k_1)\Phi(k_2)\rangle =&\ P_\Phi(k_1)(2\pi)^3\delta_D^3(\mathbf{k_1}+\mathbf{k_2})\\ \label{eq:bispectrum} \langle\Phi(k_1)\Phi(k_2)\Phi(k_3)\rangle =&\ B_\Phi (k_1,k_2,k_3) (2\pi)^3 \nonumber\\ &\ \times \delta_D^3(\mathbf{k_1}+\mathbf{k_2}+\mathbf{k_3})\ , \end{align} where \begin{align} P_\Phi(k) = \frac{2\pi^2\Delta_\Phi^2(k)}{k^3} = \frac{2\pi^2}{k^3}\frac{9}{25}A_\mathrm{s}\left(\frac{k}{k_\mathrm{pivot}}\right)^{n_\mathrm{s}-1} \end{align} is the power spectrum of $\Phi$ and $B_\Phi$ its bispectrum. The factor ${9}/{25}$ comes from the relation $\Phi = {3}\zeta/{5}$ between the gravitational potential and the comoving curvature perturbation $\zeta$. 
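For reference, this power spectrum can be coded up directly with the Planck parameters quoted in the introduction (a sketch; the function name is ours):

```python
import numpy as np

A_S, N_S, K_PIVOT = 2.101e-9, 0.9649, 0.05  # Planck 2018 values used in this work

def power_spectrum_phi(k):
    """Power spectrum of the gravitational potential Phi, in Mpc^3 for k in 1/Mpc:
    P_Phi = (2 pi^2 / k^3) (9/25) A_s (k / k_pivot)^(n_s - 1)."""
    return 2.0 * np.pi**2 / k**3 * (9.0 / 25.0) * A_S * (k / K_PIVOT) ** (N_S - 1.0)
```

The dimensionless amplitude $\Delta_\Phi^2(k)=k^3P_\Phi(k)/(2\pi^2)$ then equals $(9/25)A_\mathrm{s}$ at the pivot scale.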
Both the power spectrum and bispectrum have been measured to great precision by CMB observations, as well as galaxy surveys, confirming that the large-scale fluctuations of the Universe are Gaussian (and $B_\Phi$ is consistent with zero within current errors), e.g. \cite{Akrami:2019izv,Castorina:2019wmr}. The situation is different at smaller scales, however, where much less is known. We will, therefore, consider deviations from Gaussianity at small scales only, which affect the formation of the galaxies that we will probe, as they have relatively low masses ($M_\mathrm{h}\lesssim 10^{11} M_\odot$). In order to probe non-Gaussianity at those smaller scales, while not altering the CMB predictions, we introduce a cut-off scale $k_\mathrm{cut} = 0.1\, \mathrm{Mpc^{-1}}$ in the bispectrum, below which it vanishes. More concretely, throughout this work we will focus on local-type primordial non-Gaussianity for simplicity, although our analysis could be extended to other models. The most minimal model of this family simply alters the initial gravitational perturbation $\Phi$ by a series expansion around a Gaussian field $\Phi_\mathrm{G}$, which to linear order reads: \begin{align} \label{eq:phi_nonGaussian} \Phi(x) = \Phi_\mathrm{G}(x) + f_\mathrm{NL}\left(\Phi_\mathrm{G}^2(x)-\langle\Phi_\mathrm{G}^2\rangle\right). \end{align} In this case, the bispectrum is given by: \begin{align} \label{eq:phi_nonGaussian_bispectrum} B_\Phi = f_\mathrm{NL}P_\Phi(k_1)P_\Phi(k_2) \prod_{i=1}^3 K(k_i) + \left(5\ \mathrm{perms.} \right)\ , \end{align} where $K(k_i) = \Theta(k_i-k_{\rm cut})$ ensures that only modes above the cut-off contribute to the bispectrum. This is the main shape we will use throughout this work. Examples of theoretical models that may generate small-scale non-Gaussianity\footnote{These models may not generate a sharp cut-off at large scales. 
As such, our approach should be considered as a phenomenological first step to approximate small-scale PNG.} could involve inflationary scenarios with a changing speed of sound (see e.g.~\cite{LoVerde:2007ri} and references therein). Moreover, it has been shown that small-scale PNG may impact the formation and abundance of primordial black holes~\cite{Young:2013oia, Atal:2018neu, Atal:2019erb, DeLuca:2019qsy}, opening up a door to exciting new possibilities. We note that we are not considering changes to the power spectrum $P_\Phi$ due to primordial non-Gaussianity, as those vanish to first order in $ f_{\rm NL} $, although some inflationary models might directly affect $P_\Phi$ and produce a richer phenomenology. \pagebreak \subsection{Cumulants} We are mostly interested in quantities that are coarse-grained over a region which will collapse into a halo of mass $M$. We define the density perturbation $\delta_M$ smoothed over mass scale $M\equiv M_\mathrm{h}$ as: \begin{align} \label{eq:density_perturbation} \delta_M(z) = \int\frac{d^3k}{(2\pi)^3}W_M(k)T_\Phi(k,z)\Phi(k)\ , \end{align} where $T_\Phi(k,z)$ is the linear transfer function and \begin{align} W_M(k) =& \frac{3\sin\left(kR\right)}{\left(kR\right)^3} - \frac{3\cos\left(kR\right)}{\left(kR\right)^2}\ \end{align} is a top-hat window function with comoving radius $R(M) = (3M/(4\pi\rho_\mathrm{m}))^{1/3}$. Note that $k$ and $\rho_\mathrm{m}$ are also comoving quantities. The transfer function $T_\Phi(k,z)$ is computed as $T_\Phi(k,z) = 5D(z)T_\zeta(k,0)/3$, where $D(z)$ is the linear growth factor and $T_\zeta(k,0)$ is the transfer function for the curvature perturbation at redshift $z = 0$, which we obtain from the \texttt{CLASS} code~\cite{Blas:2011rf}. The deviation from Gaussianity is usually parametrized in terms of higher-order cumulants of the field $\Phi$. 
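Before specializing to the skewness, the smoothing just defined can be sketched numerically as follows (a schematic implementation, assuming tabulated arrays for $T_\Phi$ and $P_\Phi$, e.g.~from \texttt{CLASS}; the comoving matter density value is approximate for the adopted cosmology):

```python
import numpy as np

RHO_M = 4.0e10  # comoving matter density in Msun / Mpc^3 (approximate)

def tophat_window(k, radius):
    """Fourier-space top-hat window W_M(k) = 3 (sin x - x cos x) / x^3, x = kR."""
    x = k * radius
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def lagrangian_radius(mass):
    """Comoving radius R(M) = (3 M / (4 pi rho_m))^(1/3)."""
    return (3.0 * mass / (4.0 * np.pi * RHO_M)) ** (1.0 / 3.0)

def sigma_squared(mass, k, p_phi, t_phi):
    """sigma_M^2 = int dk/k [k^3/(2 pi^2)] W^2 T_Phi^2 P_Phi, by the
    trapezoidal rule on a logarithmic k grid."""
    w = tophat_window(k, lagrangian_radius(mass))
    integrand = k**3 / (2.0 * np.pi**2) * w**2 * t_phi**2 * p_phi
    lnk = np.log(k)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(lnk)))
```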
We will work to first order in $f_\mathrm{NL}$, where only the skewness is relevant, and we define: \begin{align} \label{eq:kappathree} \kappa_3(M) &= \frac{\langle\delta_M^3\rangle}{\sigma_M^3}\ , \end{align} with $\sigma_M^2$ the mass variance smoothed over mass scale $M$: \begin{align} \label{eq:sigmasq_M} \sigma^2_M &= \int\frac{d^3k}{(2\pi)^3}W_M^2(k)T^2_\Phi(k,z)P_\Phi(k)\ . \end{align} It is obvious from Eq.~\eqref{eq:bispectrum} that $\kappa_3$ itself is proportional to $f_\mathrm{NL}$. In practice, we make use of a fitting function for $\kappa_3$ to ease the computational load, which we calibrate explicitly for $k_\mathrm{cut}=0.1\,\mathrm{Mpc}^{-1}$ in Appendix~\ref{app:kappa3_fit}. We show $\kappa_3$ as a function of halo mass in Figure~\ref{fig:kappa3_fit} for different choices of the cut-off scale $k_{\rm cut}$. Increasing $k_{\rm cut}$ produces an overall suppression of $\kappa_3$. The most striking effect is, however, the vanishing of $\kappa_3$ for halos much heavier than $M_{\rm cut} = 4\pi \rho_\mathrm{m} k_{\rm cut}^{-3}/3$. For $k_{\rm cut}=0.1\, {\rm Mpc}^{-1} $ this corresponds to $M_{\rm cut}\approx 2 \times 10^{14}\, M_{\odot}$, roughly the mass of galaxy clusters. Furthermore, a more stringent cut of $k_{\rm cut}=1\, {\rm Mpc}^{-1} $ does not alter the abundance of halos above ${\sim}2 \times 10^{11}\, M_{\odot}$, which encompasses halos smaller than those hosting our own Milky Way. This shows that the PNG models that we consider would leave no signature in usual searches, such as cluster abundance or CMB analyses, whereas they will affect the UV LF, as well as the 21-cm signal during cosmic dawn. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{Figures/kappa3.pdf} \caption{The normalized skewness as a function of halo mass in the presence of local-type non-Gaussianity for different cut-off scales $k_\mathrm{cut}$. 
The scatter points are obtained directly from Eq.~\eqref{eq:kappathree}, whereas the solid lines are based on the fit in Eq.~\eqref{eq:kappa3_fit}. The dashed vertical lines roughly represent -- from left to right -- the mass of atomic cooling halos (relevant to the 21-cm signal from cosmic dawn), the heaviest halos probed in the Hubble Legacy/Frontier Fields and halos in which clusters reside.} \label{fig:kappa3_fit} \end{figure} \subsection{Effect on the HMF} \label{subsec:HMF_fNL_corrections} The deviation from Gaussianity in the PDF for $\delta_M$ will become imprinted onto the abundance and distribution of galaxies~\cite{LoVerde:2007ri, LoVerde:2011iz}. For the derivation of the correction to the HMF induced by non-Gaussianities, we will use the Press-Schechter (PS) formalism\footnote{While the HMF we use is not the PS HMF, the moving barrier in the ST formalism adds an {\it extra} term to the constant barrier in the PS formalism~\cite{Sheth:2001dp}, which in turn would make our bounds stronger. Since we do not include this extra term in the computation of the HMF correction, this will make our constraints conservative.}~\cite{LoVerde:2007ri}. In this framework, the volume fraction that has collapsed into halos of mass $M$ is given by the integral of the 1-point PDF of the scaled density perturbation $\nu \equiv \delta_M/\sigma_M$: \begin{align} \label{eq:collapse_fraction} F(M) &= \int_{\nu_\mathrm{c}(M)}^{\infty}\mathrm{d}\nu\rho(\nu,M)\ , \end{align} where $\nu_\mathrm{c}(M) = \sqrt{a_{\rm ST}} \, \delta_\mathrm{crit}/\sigma_M = 1.42 / \sigma_M$. The (differential) halo mass function can then be directly obtained from: \begin{align} \label{eq:HMF_definition} \frac{\mathrm{d}n(M)}{\mathrm{d}M} &= -2\frac{\rho_\mathrm{m}}{M}\frac{\mathrm{d}F(M)}{\mathrm{d}M} = -2\frac{\rho_\mathrm{m}}{M}F'(M)\ , \end{align} where the prime denotes differentiation with respect to halo mass $M$. 
In a Gaussian cosmology, the PDF is given by $\rho(\nu,M) = \left(2\pi\right)^{-1/2}\exp(-\nu^2/2)$. Now, in a non-Gaussian cosmology any deviations from this distribution can be described by making use of the Edgeworth series expansion~\cite{Juszkiewicz:1993hm, Bernardeau:1994aq}. In this case, the PDF is written as a series in the higher-order cumulants of the distribution. Given that non-Gaussianities manifest themselves in deviations of these cumulants, this makes the Edgeworth expansion a useful tool. Note that even though this is an asymptotic series and its convergence is not guaranteed, we have checked the required truncation order for parameter ranges relevant to our purpose: modelling UV luminosity functions. As we will show in Appendix~\ref{app:higher_order_terms}, we find that a reasonable accuracy (up to $|f_\mathrm{NL}|\sim\mathcal{O}(10^3)$) can be obtained by cutting off the series already at first order, and our constraints do not change when including higher-order corrections. In explicit form, the 1-point PDF then reads: \begin{align} \label{eq:PDF} \rho(\nu, M) =& \frac{\exp(-\nu^2/2)}{(2\pi)^{1/2}}\left(1 + \frac{\kappa_3(M)H_3(\nu)}{6}\right)\ , \end{align} with \begin{equation} H_n(\nu) = (-1)^n\exp(\nu^2/2)\frac{\mathrm{d}^n}{\mathrm{d}\nu^n}\exp(-\nu^2/2) \end{equation} the (probabilists') Hermite polynomials. This expression can then be inserted into Eq.~\eqref{eq:collapse_fraction} to obtain the collapse fraction to first order in $f_\mathrm{NL}$, which can be written as $F_\mathrm{NG}(M) = F_0(M) + F_1(M)$, with: \begin{align} F_0 &= \frac{1}{2}\mathrm{erfc}\left(\frac{\nu_\mathrm{c}}{\sqrt{2}}\right)\\ F_1 &= \frac{\exp(-\nu_\mathrm{c}^2/2)}{(2\pi)^{1/2}}\frac{\kappa_3H_2(\nu_\mathrm{c})}{6}\ .
\end{align} The derivatives of these quantities with respect to halo mass $M_\mathrm{h}$ read: \begin{align} F_0' &= -\frac{\exp(-\nu_\mathrm{c}^2/2)}{(2\pi)^{1/2}}\nu_\mathrm{c}'\\ \frac{F_1'}{F_0'} &= \frac{\kappa_3H_3(\nu_\mathrm{c})}{6} - \frac{H_2(\nu_\mathrm{c})}{6}\frac{\kappa_3'}{\nu_\mathrm{c}'}\ . \end{align} The non-Gaussian mass function up to first order in $f_\mathrm{NL}$ is then: \begin{align} \label{eq:Edgeworth_correction} \frac{n_\mathrm{NG}'}{n_\mathrm{G}'} = \frac{F'_\mathrm{NG}}{F'_\mathrm{G}} \approx 1 + \frac{F_1'}{F_0'}\ , \end{align} where $n_\mathrm{G}$ indicates the Gaussian HMF. The Sheth-Tormen HMF in Eq.~\eqref{eq:ST_HMF} is then multiplied by this correction to obtain the luminosity function dependence on $f_\mathrm{NL}$. For negative values of $f_\mathrm{NL}$ one must proceed with caution, as the correction can lead to an unphysical (negative) value of the HMF. Instead, we set the correction equal to 0 for all masses where it is negative. As discussed in~\cite{LoVerde:2011iz}, this issue can be circumvented by using the log-Edgeworth expansion. However, while for negative $f_\mathrm{NL}$ this can prove to be a useful trick, we find that its convergence for positive $f_\mathrm{NL}$ is far worse. We also compared with the Edgeworth mass function in this same reference at low redshifts and found good agreement with both the semi-analytical results and the results from N-body simulations. We are not aware of any simulations of the HMF at high redshifts including PNG. Nevertheless, as the halos we probe at high redshift are comparable in rarity to the clusters studied in~\cite{LoVerde:2011iz}, and our HMF has been tested (assuming Gaussianity) against simulations in~\cite{Tacchella:2018qny}, we expect our approximations to be valid in the entire redshift range we consider. 
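The resulting multiplicative correction can be sketched as follows (assuming $\nu_\mathrm{c}$, $\kappa_3$ and their derivatives with respect to halo mass are precomputed on a mass grid; function names are ours):

```python
def hermite2(nu):
    """Probabilists' Hermite polynomial H_2."""
    return nu**2 - 1.0

def hermite3(nu):
    """Probabilists' Hermite polynomial H_3."""
    return nu**3 - 3.0 * nu

def non_gaussian_hmf_factor(nu_c, kappa3, dkappa3_dM, dnu_c_dM):
    """First-order correction n'_NG / n'_G = 1 + F_1'/F_0'. Negative values
    (possible for f_NL < 0) are clamped to zero, as done in the text."""
    ratio = (kappa3 * hermite3(nu_c) / 6.0
             - hermite2(nu_c) / 6.0 * dkappa3_dM / dnu_c_dM)
    return max(1.0 + ratio, 0.0)
```

This factor then multiplies the Sheth-Tormen mass function of Eq.~\eqref{eq:ST_HMF}.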
\section{Results} \label{sec:results} With the UV LF and non-Gaussianity formalisms established in the previous sections, we present here constraints on the amplitude $f_\mathrm{NL}$ of non-Gaussianity at small scales. We focus mainly on the results obtained by using data set 1 and include results obtained from using the two other data sets with additional remarks in Appendix~\ref{app:alternative_dataset_results}. We start by constructing a $\chi^2$ to assess deviations from the data due to the presence of small-scale non-Gaussianity: \begin{align} \label{eq:chisq} \chi^2(z,\boldsymbol{\theta}) = \underset{M_\mathrm{UV}}{\sum}\left(\frac{\phi_\mathrm{model}(z,M_\mathrm{UV};\boldsymbol{\theta})-\phi_\mathrm{data}(z,M_\mathrm{UV})}{\sigma_\phi^\mathrm{data}(z,M_\mathrm{UV})}\right)^2\ , \end{align} where $\boldsymbol{\theta} = \{\alpha_*,\beta_*,\epsilon_*,f_\mathrm{NL}\}$ represents a vector of the free parameters in our model and the sum goes over all data points. In order to account for cosmic variance, as well as any potential systematic errors in estimations of the UV LF, we impose a minimum relative error of 30\% in the data for all data sets (i.e., $\sigma_\phi^\mathrm{data}$ is at least $0.3\times \phi_\mathrm{data}$ at each $z$ and $M_{\rm UV}$). This is a more conservative approach than used in~\cite{Gillet:2019fjd, Bouwens_2017asdasd}, where the minimum error was set at 20\%. We determined this noise floor by solving for $\chi^2_\text{best-fit} / g_\mathrm{dof}\approx 1$, where $\chi^2_\text{best-fit}$ is the best-fit value of the $\chi^2$ distribution and $g_\mathrm{dof}$ is its number of degrees of freedom. We begin by reproducing the analysis of~\cite{Gillet:2019fjd} in the absence of primordial non-Gaussianity. That work only used a subset of our data, and under the same assumptions we find good agreement. 
Details of the comparison can be found in Appendix~\ref{app:comparison_previous_literature}, which acts as a consistency check of our model assumptions. \vspace{6pt} Next, we obtain constraints on $ f_{\rm NL} $ in two different ways. The first is by directly marginalizing Eq.~\eqref{eq:chisq} over the parameters $\{\alpha_*,\beta_*,\epsilon_*\}$ for each $f_\mathrm{NL}$, and thus finding the marginalized $\chi^2_{\rm marg}( f_{\rm NL} )$. The second is by performing a joint MCMC analysis of all parameters. While both methods result in similar $ f_{\rm NL} $ constraints, they have different benefits. The first method will allow us to quickly find constraints on $ f_{\rm NL} $ under different assumptions. The advantage of the second method is that any correlations between the different parameters will be clear. We impose broad flat priors: $\alpha_*\in [-2,0]$, $\beta_*\in [0, 0.9]$, and $ f_{\rm NL} \in [-2000,1000]$. The negative prior on $\alpha_*$ (and positive one on $\beta_*$) reflects our understanding of feedback, whereby galaxies at both the low- and high-mass ends form stars less efficiently than those in between~\cite{Tacchella:2018qny,Yung_2018}. Moreover, since the parameter $\epsilon_*$ determines the fraction of baryons in a dark matter halo, we include an upper limit on its prior of $\epsilon_*^\mathrm{max} = 2{M_*}/{M_\mathrm{h}} \sim 2{\Omega_\mathrm{b}}/{\Omega_\mathrm{m}}\approx 0.31$~\cite{Tacchella:2018qny, Aghanim:2018eyx}, while we set its lower bound to 0.001 for convenience. Additionally, we fix our cosmological parameters to the Planck 2018 best fits. \vspace{6pt} Our constraints, obtained through the marginalized-$\chi^2$ method, are summarized in Table~\ref{tab:HLF_bounds}. The $1\sigma$ and $2\sigma$ limits are obtained by determining for which $f_\mathrm{NL}$ the quantity $\Delta \chi^2( f_{\rm NL} ) \equiv \chi_{\rm marg}^2 - \chi_\text{best-fit}^2$ is equal to 1 and 4 respectively.
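Schematically, this statistical procedure amounts to the following (a sketch, assuming the model LF and a tabulated marginalized $\chi^2$ curve; function names are ours):

```python
import numpy as np

NOISE_FLOOR = 0.30  # minimum relative error imposed on the UV LF data

def chi_squared(phi_model, phi_data, sigma_data):
    """Chi-squared of the model LF against the data, with the 30% noise floor
    applied to the reported uncertainties."""
    sigma = np.maximum(sigma_data, NOISE_FLOOR * phi_data)
    return float(np.sum(((phi_model - phi_data) / sigma) ** 2))

def fnl_interval(fnl_grid, chi2_marg, delta=1.0):
    """f_NL range where Delta chi^2 <= delta (delta = 1 for 1 sigma, 4 for 2 sigma)."""
    dchi2 = chi2_marg - np.min(chi2_marg)
    allowed = fnl_grid[dchi2 <= delta]
    return float(allowed.min()), float(allowed.max())
```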
The full $\Delta\chi^2( f_{\rm NL} )$ curves are included in Appendix~\ref{app:alternative_dataset_results}. Using data set 1 we find that $ f_{\rm NL} $ is consistent with zero, and has a one-sided error of $\sigma( f_{\rm NL} )=235$ at 1$\sigma$ (and 343 at 2$\sigma$). This error is not symmetric around the mean, as negative values of $ f_{\rm NL} $ have a more marked effect on the HMF (Section~\ref{sec:UVLF}). While this error is significantly larger than the one obtained with Planck data (where $\sigma( f_{\rm NL} )=5.1$ for local-type PNG~\cite{Akrami:2019izv}), it places a constraint on smaller scales, beyond the natural reach of CMB data, and is thus complementary to such bounds. If instead of a 30\% minimum relative error in the data we set this error equal to 20\% (as in~\cite{Gillet:2019fjd, Bouwens_2017asdasd}), the bounds would become stronger by approximately 25\%. If this error is not included at all, the improvement would be roughly 50\%. \begin{figure*}[hbtp!] \centering \includegraphics[width=0.7\textwidth]{Figures/MCMC_Bouwens2015.pdf} \caption{Posteriors for $\alpha_*,\, \beta_*,\, \epsilon_*$ and $f_\mathrm{NL}$ using data set 1 (Section~\ref{subsec:UVLF_data}) and a cut-off scale in the bispectrum of $k_\mathrm{cut} = 0.1\,\mathrm{Mpc}^{-1}$. A minimum relative error of 30\% is imposed on the UV LF data points to account for cosmic variance and other systematic errors. The 2D contours depict the $1\sigma$ and $2\sigma$ confidence levels. The titles and vertical lines in the 1D posteriors represent the maximum-likelihood best fit (central line) and the $\pm1\sigma$ quantiles (outer lines).} \label{fig:MCMC_Bouwens2015zAll} \end{figure*} Table~\ref{tab:HLF_bounds} also shows the results for the other two data sets we consider, which use the HFF instead of the HLF.
These are also consistent with no small-scale PNG within 2$\sigma$ and have roughly comparable error-bars, showing that the specific data used, including the range of redshifts and magnitudes accessible, does not dramatically alter our conclusions. \begin{table}[h!] \centering {\def1.35{1.35} \begin{tabular}{c|c|c} \hline\hline \textbf{Data set} & $\boldsymbol{1\sigma}$& $\boldsymbol{2\sigma}$ \\ \hline\hline \textbf{1} (HLF) & $73^{+277}_{-192}$ & $73^{+430}_{-256}$\\ \hline \textbf{2} (HFF) & $-155^{+185}_{-126}$ & $-155^{+297}_{-169}$ \\ \hline \textbf{3} (HFF)& $-302^{+262}_{-351}$ & $-302^{+412}_{-662}$ \\ \hline\hline \end{tabular} } \caption{Constraints on $f_\mathrm{NL}$ at 68\% and 95\% C.L.~using the HLF and HFF data sets described in Section~\ref{subsec:UVLF_data}. A minimum relative error of 30\% is used in the data and the cut-off scale in the bispectrum is set equal to $k_\mathrm{cut} = 0.1\,\mathrm{Mpc}^{-1}$. These bounds are obtained by directly marginalizing the $\chi^2$ in Eq.~\eqref{eq:chisq} over the parameters $\alpha_*,\,\beta_*$ and $\epsilon_*$, although a very similar result is obtained with a direct MCMC search. } \label{tab:HLF_bounds} \end{table} In order to study degeneracies between the parameters, we now perform an MCMC analysis using data set 1 and show the posteriors in Figure~\ref{fig:MCMC_Bouwens2015zAll}. Note that while at a single redshift the impact of $f_\mathrm{NL}$ and $\epsilon_*$ on the UV LF is highly degenerate (see Figure~\ref{fig:UVLF_params_dependence}), this degeneracy is lifted when combining data at different redshifts, as is clear in Figure~\ref{fig:MCMC_Bouwens2015zAll}. This is because different redshift slices have slightly different $f_\mathrm{NL}-\epsilon_*$ degeneracy directions, making their combination break the degeneracy and yielding a nearly Gaussian posterior. 
The MCMC best fit at $2\sigma$ reads: \begin{align} \label{eq:main_bound} f_\mathrm{NL} = 71^{+426}_{-237}\ , \end{align} in excellent agreement with our result reported in Table~\ref{tab:HLF_bounds}. At $1\sigma$ (see top panels in Figure~\ref{fig:MCMC_Bouwens2015zAll}), the agreement is{\parfillskip=0pt\par} reasonable and the deviations could be due to the implicit assumption in the marginalized-$\chi^2$ method that the data is Gaussian distributed. Since the MCMC method is free of any such assumptions, we consider Eq.~\eqref{eq:main_bound} our main result. \subsection*{Results for other cut-off scales} \label{subsec:other_kcuts} The main analysis in this work uses a cut-off scale of $k_\mathrm{cut} = 0.1\,\mathrm{Mpc}^{-1}$, which roughly denotes the smallest scale that can be probed by the CMB and below which we set the bispectrum equal to zero. While in principle $k_\mathrm{cut}$ ought to be included as a free parameter in the analysis, it is computationally expensive to do so. Therefore, we devote this section to illustrating the sensitivity of our bounds to the cut-off scale $k_\mathrm{cut}$ for a few cases. In a similar fashion as before, we calculate the marginalized $\chi^2$ using different values for $k_\mathrm{cut}$ and display these in Figure~\ref{fig:bounds_kcut}. This figure shows that the bounds on small-scale non-Gaussianity mainly come from scales between $0.1$ and $1\,\mathrm{Mpc}^{-1}$. The choice of $k_\mathrm{cut} = 0.1\,\mathrm{Mpc}^{-1}$ is close to optimal, as it nearly maximizes the constraining power of the UV LF for small-scale non-Gaussianity. This is because such scales correspond to masses around $10^{11}-10^{13}\,M_\odot$, which coincide with mass scales at the bright end of the UV LF as probed by the HST. Therefore, when increasing the cut-off to smaller scales, the UV LF quickly loses its constraining power.
\vspace{6pt} Interestingly, however, for $k_\mathrm{cut} = 1\,\mathrm{Mpc}^{-1}$ the best-fit value of $f_\mathrm{NL}$ moves away from 0 (and we note that for larger $k_{\rm cut}$ the constraints widen significantly). This hints at the existence of a bump-like feature in the data (possibly due to non-Gaussian HMF corrections only present at $M_\mathrm{h}\lesssim 10^{11}\,M_\odot$), which would favour a negative $f_\mathrm{NL}$ over $f_\mathrm{NL} = 0$ by ${\sim}1.7\,\sigma$. This behaviour persists when performing an MCMC analysis, even with redshift-dependent astrophysical parameters. Moreover, we find this deviation in HFF data as well, at the level of ${\sim}1.9\sigma$ for data set 2 and ${\sim}2.5\sigma$ for data set 3. It should be noted, however, that this deviation from Gaussianity can disappear if a different dust correction is employed (see Appendix~\ref{app:dust}). In the next section we will study the potential of the upcoming JWST and NGRST in resolving whether this excess has a physical origin. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{Figures/kcut.pdf} \caption{Marginalized $\Delta\chi^2$ as a function of $f_\mathrm{NL}$ and its dependence on the cut-off scale $k_\mathrm{cut}$ of the PNG. These constraints are obtained using data set 1 in Section~\ref{subsec:UVLF_data} at redshifts $z=4+5+6+7+8$ with a minimum relative error of 30\% in the data.} \label{fig:bounds_kcut} \end{figure} \section{Future Data} \label{sec:forecasts} Here we study how well future data from the epochs of cosmic dawn and reionization will be able to constrain small-scale PNG. We will focus on two experiments: the James Webb Space Telescope and the Nancy Grace Roman Space Telescope, both of which will significantly improve upon the UV LFs of HST. We also briefly explore how small-scale non-Gaussianity would affect complementary 21-cm experiments in Appendix~\ref{subsec:21cm_forecast}.
The JWST is expected to improve upon Hubble mainly at the faint end~\cite{Yung_2018, Behroozi:2020jhj}, leaving the reach at the bright end (at fixed coverage) mostly unchanged. We will consider the Wide Field survey mode from~\cite{Mason:2015cna} (see also~\cite{Williams_2018} for a discussion), which covers an area of ${\sim}4000\,\mathrm{arcmin}^2$ and has a total exposure time of 800 hours. This is far less observing time than that of the surveys available in the HLF catalog~\cite{Beckwith:2006qi}. As such, with this configuration fewer faint galaxies can be observed, although galaxies at higher redshifts will be found more efficiently due to a different wavelength coverage~\cite{Gardner:2006ky}. With a higher exposure time, we expect JWST to probe fainter galaxies and thus smaller scales. Moreover, it will allow us to refine our phenomenological model, since it will help to accurately determine the star formation efficiency of high-redshift galaxies~\cite{Tacchella_2020}. Here we assume that JWST observes galaxies with magnitudes above $M_\mathrm{UV}^\mathrm{min} = -22.75$, which roughly corresponds to the brightest galaxies in the HLF catalog~\cite{Bouwens:2014fua}. The lowest-brightness galaxies the JWST can detect correspond to an apparent magnitude of approximately $m_\mathrm{lim} = 29.3$~\cite{Mason:2015cna}. This quantity is translated into an absolute magnitude $M_\mathrm{UV}^\mathrm{max}$ via $M_\mathrm{UV}^\mathrm{max} = m_\mathrm{lim}+ 5-5\log_{10}\left({D_\mathrm{L}}/{\mathrm{pc}}\right)$, where $D_\mathrm{L}$ is the luminosity distance. \vspace{6pt} The NGRST, on the other hand, is expected to cover a significantly larger area than both the HST and JWST. In particular, the High-Latitude Survey~\cite{Spergel:2015sza} that we consider here will image an area of ${\sim}2000\,\mathrm{deg}^2$ over a 2-year period, which will allow us to further probe the bright end of the UV LF.
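The conversion from limiting apparent magnitude to $M_\mathrm{UV}^\mathrm{max}$ can be sketched as follows. The flat-$\Lambda$CDM parameter values below are assumptions for illustration, and, following the formula in the text, no K-correction is applied.

```python
import numpy as np

H0 = 67.7        # km/s/Mpc (assumed value)
OMEGA_M = 0.31   # flat LCDM: Omega_Lambda = 1 - Omega_m (assumed)
C_KMS = 299792.458

def luminosity_distance_mpc(z, n=4096):
    """D_L = (1+z) x comoving distance in flat LCDM, via a trapezoid rule."""
    zz = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(OMEGA_M * (1.0 + zz) ** 3 + (1.0 - OMEGA_M))
    d_c = (C_KMS / H0) * 0.5 * np.sum((inv_e[1:] + inv_e[:-1]) * np.diff(zz))
    return (1.0 + z) * d_c

def absolute_uv_mag(m_lim, z):
    """M_UV = m_lim + 5 - 5 log10(D_L / pc), as in the text."""
    d_l_pc = luminosity_distance_mpc(z) * 1.0e6   # Mpc -> pc
    return m_lim + 5.0 - 5.0 * np.log10(d_l_pc)

# JWST-like limiting magnitude from the text, evaluated at z = 8
print(f"M_UV^max(z=8) = {absolute_uv_mag(29.3, 8.0):.2f}")
```

With these assumed parameters, $m_\mathrm{lim} = 29.3$ at $z = 8$ corresponds to $M_\mathrm{UV}^\mathrm{max}$ of roughly $-20$, and the limit becomes fainter (larger $M_\mathrm{UV}^\mathrm{max}$) at lower redshift.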
We determine $M_\mathrm{UV}^\mathrm{min}$ for NGRST by requiring that at least one galaxy is contained in the brightest magnitude bin. This gives a value of $-25 \lesssim M_\mathrm{UV}^\mathrm{min} \lesssim -24$\footnote{The dust attenuation at such bright magnitudes can suppress the UV LF significantly. Given that the dust extinction parameters have not been measured at $z > 8$ yet, the absence of this effect at these redshifts and magnitudes is an important caveat in our method of generating mock data.}. The galaxies of lowest brightness observable correspond to an apparent magnitude of $m_\mathrm{lim} = 26.5$~\cite{Mason:2015cna}, which is translated to an absolute magnitude using the formula above. We make simple forecasts for both JWST and NGRST by generating a set of mock data at redshifts $z=\{4,5,6,7,8,9,10\}$ through the following procedure: \begin{enumerate} \item We define a fiducial luminosity function based on the global fit to data set 1 at $z=4+5+6+7+8$ (fixing $f_\mathrm{NL} = 0$ here, see also Figure~\ref{fig:UVLF_Bouwens2015_fit}). The fiducial parameters of this model are $\{\alpha_*,\beta_*,\epsilon_* ,f_\mathrm{NL}\} = \{-1.14,0.20,0.23,0\}$. \item Next, we bin the luminosity function from $M_\mathrm{UV}^\mathrm{min}$ to $M_\mathrm{UV}^\mathrm{max}$. We use a bin size of $\Delta M_\mathrm{UV}\approx 0.5$ and calculate the average (comoving) number of galaxies inside each bin at each redshift. For the NGRST mock data, the value of $M_\mathrm{UV}^\mathrm{min}$ is determined by requiring that at least one galaxy is contained in the brightest magnitude bin at each redshift. \item Then we draw from a Poisson distribution whose mean is the average number of galaxies inside each bin. The central values of the mock data are then these randomly sampled numbers, placed at the center of each bin. If the Poisson sampling gives 0, we set the average value to 0.5 galaxies (so as to obtain a rough upper bound of 1 galaxy).
\item The errors are obtained in two steps: Firstly, we draw from a Poisson distribution whose mean is the average number of galaxies inside each bin. If 0 was obtained in the previous step, then the error is also set to 0.5 galaxies. Secondly, as in our main analysis, we impose a minimal relative error of 10\%, to account for cosmic variance~\cite{Trenti:2007dh}. Since 10\% could be regarded as a conservative value, depending on the sky coverage, we also report forecasts using 0\% and 5\% for comparison. The errors are placed symmetrically around each mock data point. \item Lastly, we translate back to the luminosity function by dividing by volume and the bin size $\Delta M_\mathrm{UV}$. \end{enumerate} \begin{table}[b!] \centering {\def\arraystretch{1.35} \begin{tabular}{c|c|c|c|c|c|c} \hline\hline \multirow{2}{*}{\textbf{Min. error}} & \multicolumn{2}{c|}{\textbf{JWST}} & \multicolumn{2}{c|}{\textbf{NGRST}} & \multicolumn{2}{c}{\textbf{Combined}}\\ & \multicolumn{1}{c}{$1\sigma$} & $2\sigma$ & \multicolumn{1}{c}{$1\sigma$} & $2\sigma$ & \multicolumn{1}{c}{$1\sigma$} & \multicolumn{1}{c}{$2\sigma$} \\ \hline\hline \textbf{None}$^\dagger$& $\pm 20$ & $\pm 28$ & $\pm 9$ & $\pm 13$ & $\pm 14$ & $\pm 20$ \\ \hline \textbf{5\%}& $\pm 37$ & $\pm 53$ & $\pm 49$ & $\pm 70$ & $\pm 24$ & $\pm 34$ \\ \hline \textbf{10\%}& $\pm 56$ & $\pm 79$ & $\pm 99$ & $\pm 156$ & $\hspace{0.79mm} \pm 39$\hspace{0.79mm} & $\pm 56$\\ \hline\hline \end{tabular} } \caption{Forecasted sensitivities to $f_\mathrm{NL}$ by JWST and NGRST, using $k_\mathrm{cut}=0.1\,\mathrm{Mpc}^{-1}$ and different minimal errors in the mock data. These bounds are obtained by marginalizing the $\chi^2$ in Eq.~\eqref{eq:chisq} over the parameters $\alpha_*,\,\beta_*$ and $\epsilon_*$. The $\dagger$ indicates that only Poisson error is included (although unrealistic due to cosmic variance).
} \label{tab:JWST} \end{table} Given the JWST and NGRST mock data, we construct a $\chi^2$ in a similar fashion as for the HST data and follow the same two approaches as in Section~\ref{sec:results} to forecast constraints on $ f_{\rm NL} $. Here, we again use a cut-off scale in the bispectrum of $k_\mathrm{cut} = 0.1\,\mathrm{Mpc}^{-1}$. We summarise our forecasts in Table~\ref{tab:JWST}, where we directly marginalized the $\chi^2$ to obtain the bounds on $ f_{\rm NL} $. The values in this table are calculated with respect to the median marginalized $\Delta\chi^2$. NGRST is found to give slightly weaker constraints than JWST, which is mainly due to the lower value of $m_\mathrm{lim}$, i.e., only the brightest galaxies can be observed at high redshifts. When combining the two mock data sets, without overlapping the covered magnitude ranges, the bounds can be further improved. We also ran an MCMC simulation with the JWST mock data (since it is expected to give the strongest constraints) and show the posteriors in Appendix~\ref{app:JWST_posteriors}. All in all, JWST and NGRST would be able to improve upon our current bounds based on HST observations roughly by a factor $3-4$ under conservative assumptions (10\% minimum error) and up to an order of magnitude for more optimistic assumptions. Moreover, given this factor of $3-4$ improvement, JWST and NGRST will be able to either alleviate or further strengthen (to ${\sim}3\sigma$) the deviation from zero in our bounds on $f_\mathrm{NL}$ for $k_\mathrm{cut} \sim 1\,\mathrm{Mpc}^{-1}$ (see Figure~\ref{fig:bounds_kcut} and Section~\ref{subsec:other_kcuts} for details). \section{Conclusions} \label{sec:conclusions} In this work we have demonstrated the ability of UV luminosity functions to probe small-scale non-Gaussianity. This opens a window into the physics of the highest energies known, cosmic inflation, as well as other primordial phenomena happening at {\it small scales}, such as the production of PBHs. 
We focused on non-Gaussianity manifested at scales smaller than those probed by the CMB and LSS, for which there are no other current bounds, cf. Figure~\ref{fig:current_status}. We have shown that constraints can already be obtained from HST observations. By using UV LF data from the Hubble Legacy Fields and Hubble Frontier Fields catalogs, we have put bounds on the non-Gaussianity parameter $f_\mathrm{NL}$ and examined their robustness with regard to several assumptions in our analysis. The approach of this work is described in Sections~\ref{sec:UVLF} and \ref{sec:primordial_nonGauss}, the results are presented in Section~\ref{sec:results} and forecasts are made in Section~\ref{sec:forecasts}. We conclude that: \begin{itemize} \item Small-scale non-Gaussianity affects the UV luminosity function mostly at the bright end. While there are degeneracies between $f_\mathrm{NL}$ and some astrophysical parameters, these can be broken by combining data at different redshifts. \item Current observations of the UV luminosity function can provide robust bounds on small-scale non-Gaussianity. Our main analysis is performed using UV LF data from the HLF catalog and assuming a cut-off scale in the bispectrum of $0.1\,\mathrm{Mpc}^{-1}$. We obtain constraints on $f_\mathrm{NL}$ of $71^{+221}_{-119}$ at $1\sigma$ and $71^{+426}_{-237}$ at $2\sigma$. These are comparable to the results obtained with HFF data or under different assumptions regarding the astrophysical parameters. \item JWST and NGRST can further improve upon these bounds by a factor $3-4$. A set of forecasts shows that such experiments would be able to reduce the error on $f_\mathrm{NL}$ down to $\Delta f_\mathrm{NL}\sim100$ at $2\sigma$.
\end{itemize} Having established the formalism of the UV luminosity function as a probe of small-scale non-Gaussianity, it is important to consider the origin of the non-zero best-fit amplitude that we find for a cut-off scale of $k_\mathrm{cut} = 1\,\mathrm{Mpc}^{-1}$ in the bispectrum (Section~\ref{subsec:other_kcuts}). This anomaly persists for all UV LF data sets considered. We have shown that JWST and NGRST will be able to address this issue. Another promising way forward would be to study the impact of such small-scale PNG on the 21-cm cosmic-dawn signal measured by upcoming global-signal and interferometric 21-cm experiments (see Appendix~\ref{subsec:21cm_forecast}). In addition, a more sophisticated forecast for JWST would give a better picture of the smallest scales that can be affected by PNG. This is particularly the case when a combination of different observational configurations and longer exposure times is considered. \vspace{6pt} In conclusion, our work establishes the UV LF as a powerful probe of the fundamental processes that were at play in the early Universe. Upcoming surveys will offer an exciting opportunity to unveil the origin of structures in our cosmos, a quest in which the UV LF will play a prominent role. \section*{Acknowledgements} We thank Sandro Tacchella and Andrei Mesinger for insightful comments on the draft of this work. We are also grateful to Gonzalo Palma for discussions on PNG, and to the anonymous referee for providing useful feedback on this paper. We acknowledge the use of the packages \texttt{emcee}~\cite{ForemanMackey:2012ig} and \texttt{corner}~\cite{corner}. NS is a recipient of a King's College London NMS Faculty Studentship. JBM is supported by NSF grant AST-1813694 at Harvard and the Clay Fellowship at the Smithsonian Astrophysical Observatory. \bibliographystyle{apsrev4-1}
\section{Introduction} Network data has become a ubiquitous part of daily life and spans diverse areas: from social networks, emails, and online forums, to scientific citation networks and protein or chemistry interactions. Accordingly, there has been a recent push to develop methods for knowledge extraction and representation learning for networks. When it comes to representation learning for graphs, the main area of focus has been node-level representation learning at different scales, with relatively sparse attention given to methods for analyzing whole networks. For instance, a fundamental problem in analyzing whole networks is to determine whether two graphs (or networks) are identical; this is also called the graph isomorphism problem. Babai \cite{babai2016graph} has shown that this problem can be solved in quasipolynomial time. In real-world applications, however, instead of determining whether two graphs are identical, we care about the similarity between graphs. A typical application of this approach involves classifying graphs based on their similarity. Note that this is a generalization of the graph isomorphism problem, as two graphs that are identical will be labeled the same. One approach to solving the graph classification problem is to learn a representation of the graph as a vector, called whole graph embedding, which is invariant under graph isomorphism, and then adopt downstream classifiers. In this paper, we propose a whole graph embedding method that considers node features as random variables and examines the distribution of node features in sub-graphs. Intuitively, the correlation between node features is related to the role similarity \cite{struc2vec} between them. For example, the nodes at the centers of networks representing companies are likely to all be CEOs. Based on this, we calculate the characteristic functions in k-hop sub-graphs and aggregate and sample the characteristic functions to generate the graph-level embedding.
To capture topological similarity, we propose a diffusion-wavelet-based method. We then use the minimum difference of pair assignments (MDPA) \cite{MDPA}, a special case of the earth mover's distance (EMD) \cite{rubner1998metric}, to measure the distance between the energy distributions of two nodes. Specifically, we make the following contributions in this paper: \vspace{-4.5pt} \begin{itemize} \item We present a framework for depicting the distribution of node features in sub-graphs based on diffusion wavelets and propose a graph-level embedding method based on the aggregation of characteristic functions. \item We mathematically prove that our embedding method produces identical embeddings for isomorphic graphs. We further provide theoretical proof of the robustness of our method to feature noise. \item We evaluate our method on the task of graph classification using four real-world networks. Our experiments show that our framework outperforms existing methods in learning whole graph representations. \end{itemize} \section{Related Work} Much prior work has explored node representation learning \cite{node2vec, deepwalk, line, wang2021embedding, wang2020embedding, wang2021stress, wang2021hyperbolic,wang_tois_2021}. However, these methods do not work well on graph-level classification problems. The methods for graph classification can be grouped into several categories. A classic family of methods involves graph kernels, with representative methods like the Weisfeiler-Lehman kernel \cite{shervashidze2011weisfeiler}, random walk kernel \cite{gartner2003graph}, shortest path kernel \cite{borgwardt2005shortest} and deep graph kernel \cite{yanardag2015deep}. Another family of methods relies on graph embedding to learn a vector representing a graph as a whole. Some of these methods are built upon graph kernels.
For example, Graph2Vec \cite{narayanan2017graph2vec} first uses the Weisfeiler-Lehman kernel to extract rooted subgraph features, which are then passed to a doc2vec \cite{le2014distributed} model to get embeddings. GL2Vec \cite{chen2019gl2vec} extends Graph2Vec by incorporating line graphs and can thereby deal with edge features. Other methods like SF \cite{de2018simple}, NetLSD \cite{tsitsulin2018netlsd}, and FGSD \cite{verma2017hunt} use information from the Laplacian matrix and eigenvalues of the graph to generate embeddings. Finally, Geo-Scatter \cite{gao2019geometric} and FEATHER \cite{feather} utilize powers of normalized adjacency matrices to capture the probability distribution of neighborhoods. \section{Framework} In this section, we formally introduce our framework. Let $G=(V, E, A)$ be an undirected and unweighted graph, where $V$ is the set of vertices, $E \subseteq V \times V $ is the set of unweighted edges between vertices in $V$, and $A \in \mathbb{R}^{N \times m}$ describes the attributes of each of the $N = |V|$ nodes in the network. We consider the problem of representing the whole graph as one $d$-dimensional vector $X \in \mathbb{R}^d$, with $d \ll |V|$. Our framework combines the unique advantages of GraphWave \cite{graphwave} and FEATHER \cite{feather}, and consists of two parts: (1) topological wavelet similarity calculation and (2) sub-graph feature distribution characterization. We calculate node topological similarity based on diffusion wavelets, and we use it to capture the distribution of node features in sub-graphs. After aggregating the characteristic functions of k-hop sub-graphs, we pick representative sampling points and concatenate the results to get the graph-level embedding. Below we describe these two parts in greater detail. \subsection{Topological Wavelet Similarity} \subsubsection{Diffusion Wavelets \cite{hammond2011wavelets}} The Laplacian matrix $L$ is the difference between the degree matrix and the adjacency matrix of a graph.
Assume $\lambda_1 \leq \lambda_2 \leq ...\leq \lambda_N$ are the eigenvalues of $L$; then $L$ can be decomposed as $L=U \Lambda U^T$, with $\Lambda = \mathrm{diag}(\lambda_1,...,\lambda_N)$. These eigenvalues describe the temporal frequencies of a signal on the graph. In order to discount larger eigenvalues and smooth the signals, a filter kernel $g_{\tau}$ with scaling parameter $\tau$ is introduced. Here, we use the heat kernel $g_{\tau}(\lambda) = e^{-\lambda{\tau}}$. The spectral wavelet coefficient matrix $\Psi$ is defined as: \begin{equation} \Psi = U\,\mathrm{diag}(g_{\tau}(\lambda_1),...,g_{\tau}(\lambda_N))\,U^T \end{equation} For a given node $v_i$, the element $\Psi_{ji}$ represents how much energy comes from node $v_j$ to node $v_i$. Therefore, the $i$-th column of the wavelet coefficient matrix $\Psi$ describes a distribution of energy from the other nodes. It has been proved that nodes with similar energy distribution patterns have similar structural roles in the network \cite{graphwave}. Therefore, the difference between the wavelet distributions of two nodes represents their topological distance. \subsubsection{Topological Similarity} The minimum difference of pair assignments (MDPA) can quickly measure the distance between two histograms \cite{MDPA}. It seeks the one-to-one assignment between two lists that minimizes the sum of the differences within each pair. Under certain conditions, the MDPA problem can be solved with linear time complexity. \begin{theorem} Given two sets of \,$n$\, elements $X=(x_1, ..., x_n)$ and $Y=(y_1, ..., y_n)$, with $x_i<x_j$ and $y_i<y_j$ for all $i<j$, the MDPA between $X$ and $Y$ is $\sum_{i=1}^n {|x_i-y_i|}$.
\end{theorem} \begin{proof} For any one-to-one assignment $((x_{i_1},y_{j_1}),...,(x_{i_n},y_{j_n}))$ between $X$ and $Y$, if there exist $s$ and $t$ such that $x_{i_s}<x_{i_t}$ and $y_{j_s}>y_{j_t}$ (or $x_{i_s}>x_{i_t}$ and $y_{j_s}<y_{j_t}$), we can always decrease the sum of the differences by switching $y_{j_s}$ and $y_{j_t}$. Therefore, the sum of differences achieves its minimum if and only if the $\{x_i\}$ and $\{y_j\}$ are matched in the same (ascending) order. \end{proof} We use the notation $\Psi_i$ for the spectral wavelet coefficients at a specific node $v_i$. In this way, to calculate the MDPA distance between $\Psi_i$ and $\Psi_j$, we just need to sort both $\Psi_i$ and $\Psi_j$ in ascending order and calculate the pairwise distance. Finally, after calculating the MDPA distance between a pair of nodes $v_i$ and $v_j$, we define the topological node similarity as follows: \begin{equation} s(v_i,v_j)=e^{-MDPA(\Psi_i,\Psi_j)} \end{equation} \subsection{Sub-graph Feature Distribution} We assume that the features of node $v_i$ form a random vector $\hat{a}_i \in \mathbb{R}^m$, and $a_i$ in the attribute matrix $A$ can be considered as an observation of it. We use the distribution of features in sub-graphs to recover the characteristic function of $\hat{a}_i$. Since the correlation between attributes is negatively related to the distance between nodes \cite{cohen2014distance}, for a given node $v_i$, we consider the feature distribution in the k-hop sub-graph $G_k(v_i)$. The characteristic function of $\hat{a}_i$ in $G_k(v_i)$ is \begin{equation} \label{eqn:charfunc} \phi_{v_i}^{(k)}(t)=\mathbb{E}[e^{it{\hat{a}_i}}\vert G_k(v_i)]=\sum_{v_j \in G_k(v_i)}{\mathbb{P}(v_j|v_i)}e^{it{a_j}} \end{equation} The transition probability $\mathbb{P}(v_j|v_i)$ should be proportional to two factors: the similarity between nodes $v_j$ and $v_i$ and the influence of node $v_i$. We use the normalized topological node similarity and the normalized degree to calculate these values, respectively.
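The wavelet matrix and the MDPA-based similarity defined above can be sketched in a few lines, assuming a NetworkX graph and a dense eigendecomposition (adequate for the small graphs in our datasets; scalable approximations exist as in GraphWave \cite{graphwave}):

```python
import numpy as np
import networkx as nx

def wavelet_coefficients(G, tau=0.5):
    """Psi = U diag(e^{-lambda tau}) U^T for the Laplacian L = D - A."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    lam, U = np.linalg.eigh(L)                 # dense; fine for small graphs
    return U @ np.diag(np.exp(-tau * lam)) @ U.T

def topological_similarity(psi, i, j):
    """s(v_i, v_j) = exp(-MDPA(Psi_i, Psi_j)); for sorted lists the MDPA is
    the sum of element-wise absolute differences (Theorem 1)."""
    mdpa = np.abs(np.sort(psi[:, i]) - np.sort(psi[:, j])).sum()
    return np.exp(-mdpa)

G = nx.barbell_graph(4, 2)      # two 4-cliques joined by a 2-node path
psi = wavelet_coefficients(G)
# Nodes 0 and 9 are mirror images, so their similarity should be ~1;
# node 4 sits on the bridge and should be less similar to node 0.
print(topological_similarity(psi, 0, 9), topological_similarity(psi, 0, 4))
```

Because automorphic nodes have identical wavelet columns up to a permutation, sorting before taking differences makes their MDPA vanish and their similarity equal to one.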
The normalized topological node similarity is: \begin{equation} \label{eqn:normsimilarity} \tilde{s}(v_i,v_j)=\frac {s(v_i,v_j)} {\sum_{v_r \in G_k(v_i)}{s(v_i,v_r)}}. \end{equation} Based on equation \ref{eqn:normsimilarity} and Euler's formula, we can expand equation \ref{eqn:charfunc} as: \begin{equation} \label{eqn:charfunc_euler} \phi_{v_i}^{(k)}(t)=\sum_{v_j \in G_k(v_i)}{\tilde{s}(v_i,v_j)(\cos(ta_j)+i\sin(ta_j))}. \end{equation} By aggregating the characteristic function over all nodes, we are then able to represent the graph-level characteristic as: \begin{equation} \label{eqn:graphchar} \phi_G^{(k)}(t)=\frac 1 {\lvert V \rvert}\sum_{v_i \in G}{\phi_{v_i}^{(k)}(t)}. \end{equation} We can sample equation \ref{eqn:graphchar} at $d$ evenly spaced points $t_1,...,t_d$ and concatenate them together to get the k-hop embedding: \begin{equation} \chi_{G_k}=[Re(\phi_G^{(k)}(t_i)),Im(\phi_G^{(k)}(t_i))]_{t_1,...,t_d} \end{equation} Concatenating the k-hop embeddings, we get the graph-level embedding based on topological similarity: \begin{equation} \chi_G=\left[\chi_{G_1},\chi_{G_2},...,\chi_{G_{k_{max}}}\right] \end{equation} We repeat this process with the transition probability given by the normalized node influence. The final embedding $X$ is constructed by concatenating the embedding whose transition probability uses the normalized topological similarity with the one whose transition probability uses the normalized node influence.\newline \subsection{Theoretical Properties} The following theorem shows that our method produces the same embedding for isomorphic graphs. \begin{theorem} Given two isomorphic graphs $G$ and $G'$, with the same sampling points $t_1, t_2, ..., t_d$, we have \begin{equation} \chi_G=\chi_{G'} \end{equation} \end{theorem} \begin{proof} \small According to the definition of $\chi_G$, what we need to prove is that, $\forall k$, \begin{equation} \phi_G^{(k)}(t)=\phi_{G'}^{(k)}(t).
\end{equation} We introduce a matrix $H^{(k)}$, where \begin{equation} H^{(k)}_{i,j}=\left\{ \begin{aligned} & 1 \,\,\, & v_j \in G_k(v_i) \\ & 0 \,\,\, & v_j \not\in G_k(v_i) \\ \end{aligned}\,\,\,. \right. \end{equation} Clearly, $H^{(k)}$ is a symmetric matrix. We introduce another matrix $S$ with entries $S_{i,j}=\tilde{s}(v_i,v_j)$. Based on equation \ref{eqn:charfunc_euler} and equation \ref{eqn:graphchar}, \begin{equation} \label{eqn:phi_G_k} \begin{aligned} \phi_G^{(k)}(t)&=\frac 1 {\lvert V \rvert}\sum_{v_i \in G}{\phi_{v_i}^{(k)}(t)}\\ &=\frac 1 {\lvert V \rvert}\sum_{v_i \in G}{\sum_{v_j \in G}{H^{(k)}_{i,j}S_{i,j}(\cos(ta_j)+i\sin(ta_j))}} \end{aligned} \end{equation} $G$ and $G'$ are isomorphic, which means that there exist bijections $\Pi$ and $\pi$ such that \begin{equation} \begin{aligned} &\Pi:G \to G' &v_i \mapsto v'_{i'},\\ &\pi:\{1,...,N\} \to \{1,...,N\} &i \mapsto i'.\\ \end{aligned} \end{equation} We use $H'$ and $S'$ to denote the $H$ and $S$ matrices for $G'$. In this way, \begin{equation} \label{eqn:phi_Gprime_k} \begin{aligned} Re\left(\phi_{G'}^{(k)}(t)\right)&=\frac 1 {\lvert V \rvert}\sum_{v'_{i'} \in G'}{\sum_{v'_{j'}\in G'}{H'^{(k)}_{i',j'}S'_{i',j'}\cos(ta_{j'})}}\\ &=\frac 1 {\lvert V \rvert}\sum_{v_i \in G}{\sum_{v_j \in G}{H'^{(k)}_{\pi(i),\pi(j)}S'_{\pi(i),\pi(j)}\cos(ta_{\pi(j)})}}\\ &=\frac 1 {\lvert V \rvert}\sum_{v_i \in G}{\sum_{v_j \in G}{H^{(k)}_{i,j}S_{i,j}\cos(ta_j)}}\\ &=Re\left(\phi_{G}^{(k)}(t)\right) \end{aligned} \end{equation} Similarly, $Im\left(\phi_{G'}^{(k)}(t)\right)=Im\left(\phi_{G}^{(k)}(t)\right)$. \end{proof} In addition to preserving the embedding for isomorphic graphs, another advantage of our method is its robustness against noisy features. This property is very useful in real-world scenarios.
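Before turning to robustness, the characteristic-function pipeline above can be condensed into a short sketch. For brevity it uses uniform transition probabilities inside each k-hop neighbourhood in place of the similarity and degree weightings; with uniform weights the isomorphism invariance of Theorem 2 still holds.

```python
import numpy as np
import networkx as nx

def graph_embedding(G, feats, k_max=2, d=5):
    """Sampled characteristic-function embedding, with uniform P(v_j | v_i)
    inside each k-hop neighbourhood (a simplification of our weighting).
    feats: dict node -> scalar feature."""
    t = np.linspace(0.5, 2.5, d)                 # sampling points t_1..t_d
    chi = []
    for k in range(1, k_max + 1):
        phi = np.zeros(d, dtype=complex)
        for v in G:
            hood = nx.single_source_shortest_path_length(G, v, cutoff=k)
            a = np.array([feats[u] for u in hood])
            # phi_v^(k)(t) = sum_j P(v_j | v_i) e^{i t a_j}
            phi += np.exp(1j * np.outer(t, a)).mean(axis=1)
        phi /= G.number_of_nodes()               # average over nodes
        chi.append(np.concatenate([phi.real, phi.imag]))  # sample Re and Im
    return np.concatenate(chi)                   # concatenate the hops

G = nx.karate_club_graph()
feats = {v: np.log(1.0 + G.degree(v)) for v in G}
emb = graph_embedding(G, feats)
print(emb.shape)    # (k_max * 2 * d,) = (20,)
```

Since each $\phi_{v_i}^{(k)}$ is a convex combination of unit-modulus terms, every component of the embedding lies in $[-1, 1]$.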
\begin{theorem} Given an undirected and attributed graph $G=(V,E,A)$ and its variant $G'=(V,E,A')$ with noise in the features of $v_{j_0}$, for fixed sampling points $t_1, t_2, ..., t_d$, if $\left \Vert a_{j_0}-a'_{j_0} \right \Vert_{\infty} < \frac \epsilon {t_d}$ and $a_j=a'_j$ for any $j \neq {j_0}$, we must have $\left \Vert\chi_G-\chi_{G'}\right \Vert_\infty < \epsilon$. \end{theorem} \begin{proof} By the definition of the $\infty$-norm, we have \begin{equation} \left \Vert\chi_G-\chi_{G'}\right \Vert_\infty = \max_p\left\{\lvert \chi_G^{(p)}-\chi_{G'}^{(p)} \rvert \right\}, \end{equation} where $\chi_G^{(p)}$ denotes the $p$-th component of $\chi_G$. Since $\phi_G^{(k)}$ is an average of the node-level functions $\phi_{v_i}^{(k)}$, it suffices to prove that $\forall k$, $\forall p \leq m$ ($m$ is the dimension of the attributes), $\forall v_i \in V$, \begin{equation} \label{eqn:perturbation_re} \lvert Re\left(\phi_{v_i}^{(k)}(t)\right)-Re\left(\phi_{v'_i}^{(k)}(t)\right)\rvert^{(p)} < \epsilon, \end{equation} \begin{equation} \label{eqn:perturbation_im} \lvert Im\left(\phi_{v_i}^{(k)}(t)\right)-Im\left(\phi_{v'_i}^{(k)}(t)\right)\rvert^{(p)} < \epsilon \end{equation} Here, we only show the proof of equation \ref{eqn:perturbation_re}; the proof of equation \ref{eqn:perturbation_im} is similar. Plugging in equation \ref{eqn:charfunc_euler}, what we need to prove becomes \begin{equation} \label{eqn:eq_forref} \lvert \sum_{v_j \in G_k(v_i)}{\tilde{s}(v_i,v_j)\cos(ta_j^{(p)})}-\sum_{v_j' \in G_k(v_i')}{\tilde{s}(v_i',v_j')\cos(ta_j'^{(p)})}\rvert < \epsilon. \end{equation} Note that since $G$ and $G'$ share the same topological structure, $\forall v_i,v_j \in V$, \begin{equation} \tilde{s}(v_i,v_j)=\tilde{s}(v_i',v_j'), \end{equation} where $v_i'$ and $v_j'$ are the corresponding nodes in $G'$.
We have \begin{equation} \begin{aligned} \text{LHS of }(\ref{eqn:eq_forref}) &=\lvert \tilde{s}(v_i,v_{j_0})\cos(ta_{j_0}^{(p)})-\tilde{s}(v_i',v'_{j_0})\cos(t{a'}_{j_0}^{(p)}) \rvert\\ &=\lvert \tilde{s}(v_i,v_{j_0})\left(\cos(ta_{j_0}^{(p)})-\cos(t{a'}_{j_0}^{(p)})\right) \rvert\\ &=\lvert \tilde{s}(v_i,v_{j_0})\sin(ta_{j_0}^{(p)}+\theta)t\left({a'}_{j_0}^{(p)}-a_{j_0}^{(p)}\right) \rvert\\ &\leq \lvert \tilde{s}(v_i,v_{j_0})\sin(ta_{j_0}^{(p)}+\theta)t \rvert \left \Vert a_{j_0}-a'_{j_0} \right \Vert_{\infty}\\ &< \epsilon \end{aligned} \end{equation} where the third equality follows from the mean value theorem for some intermediate $\theta$, and the last inequality uses $\tilde{s}\leq 1$, $\lvert\sin\rvert\leq 1$ and $t \leq t_d$. \end{proof} The same idea can easily be applied to the embedding with transition probability given by the normalized node influence. \section{Experiment} In this section, we use the classic task of graph classification to evaluate our method. We first introduce the details of each dataset used in our experiment, then compare our method to 12 well-known baselines (including the current state-of-the-art, ``FEATHER'' \cite{feather}), and finally provide a parameter sensitivity analysis. \subsection{Datasets} We use four publicly available social graph datasets to evaluate our method. These datasets are from the Karate Club GitHub \cite{karateclub}: \begin{itemize} \item \textbf{GitHub Repos}: This dataset consists of networks of developers who starred GitHub repositories until August 2019. The nodes are GitHub users and the edges are follower relationships. The label of this dataset is whether a network belongs to web or machine learning developers. \item \textbf{Reddit Threads}: This dataset consists of networks of threads from Reddit collected in May 2018. The nodes are Reddit users and the edges are replies between them. The label of this dataset is whether a thread is discussion-based or not. \item \textbf{Twitch Egos}: This dataset consists of networks of ego-nets of users who participated in the partnership program in April 2018. The nodes are Twitch users and the edges are friendships.
The label of this dataset is whether a user plays a single game or multiple games. \item \textbf{Deezer Egos}: This dataset consists of networks of ego-nets of users collected from Deezer in February 2020. The nodes are Deezer users and the edges are mutual follower relationships. The label of this dataset is the gender of the ego user. \end{itemize} Descriptive statistics of these datasets are shown in Table \ref{dataset}. Similar to prior work (e.g., Rozemberczki et al. \cite{feather}), for datasets without node features, we manually create two features for each node corresponding to the log degree and the clustering coefficient of the node. \begin{table} \centering \small \begin{tabular}{cccccccc} \Xhline{2\arrayrulewidth} & & \multicolumn{2}{c} { Nodes } & \multicolumn{2}{c} { Density } & \multicolumn{2}{c} { Diameter } \\ \Xhline{2\arrayrulewidth} Dataset & Graphs & Min & Max & Min & Max & Min & Max \\ \Xhline{2\arrayrulewidth} GitHub Repos & 12,725 & 10 & 957 & $0.003$ & $0.561$ & 2 & 18 \\ Reddit Threads & 203,088 & 11 & 97 & $0.021$ & $0.382$ & 2 & 27 \\ Twitch Egos & 127,094 & 14 & 52 & $0.038$ & $0.967$ & 1 & 2 \\ Deezer Egos & 9,629 & 11 & 363 & $0.015$ & $0.909$ & 2 & 2 \\ \Xhline{2\arrayrulewidth} \end{tabular} \caption{Descriptive statistics of the four datasets used in the graph classification experiments. This table is taken from Rozemberczki et al. \cite{feather}.} \label{dataset} \end{table} \subsection{Graph classification} To better compare our model to prior work, we use the same settings specified by Rozemberczki et al. \cite{feather}: we run the graph classification task on each dataset with ten $80 \% / 20 \%$ train-test splits obtained from different random seeds (from 0 to 9) and use an off-the-shelf logistic regression model (from scikit-learn) with the default parameters and the SAGA solver for classification. The averaged AUC scores with corresponding standard errors are reported.
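The evaluation protocol just described can be reproduced in a few lines of scikit-learn. The embeddings below are synthetic stand-ins, since the point here is the splitting and scoring recipe rather than the data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))          # stand-in graph embeddings
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)  # toy labels

aucs = []
for seed in range(10):                  # ten 80/20 splits, seeds 0..9
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=seed)
    clf = LogisticRegression(solver="saga", max_iter=1000).fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

print(f"AUC = {np.mean(aucs):.3f} +/- {np.std(aucs) / np.sqrt(len(aucs)):.3f}")
```

Averaging over the ten seeded splits and reporting the standard error of the mean matches the numbers quoted in Table \ref{graphclassification}.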
For all the baselines -- GL2Vec, Graph2Vec, SF, NetLSD, FGSD, Geo-Scatter, FEATHER, Mean Pool, Max Pool, Sort Pool \cite{zhang2018end}, Top K Pool \cite{gao2019graph}, and SAG Pool \cite{lee2019self} -- we show the results reported by Rozemberczki et al. \cite{feather}. As shown in Table \ref{graphclassification}, our method outperforms all the baselines on all the datasets.
\begin{table}
\centering
\begin{tabular}{lllll}
\Xhline{2\arrayrulewidth}
& \thead{GitHub\\Repos} & \thead{Reddit\\Threads} & \thead{Twitch\\Egos} & \thead{Deezer\\Egos} \\
\Xhline{2\arrayrulewidth}
GL2Vec & .532$\pm$.002 & .754$\pm$.001 & .670$\pm$.001& .500$\pm$.001 \\
Graph2Vec & .563$\pm$.002 & .808$\pm$.001 & .698$\pm$.001& .510$\pm$.001 \\
SF & .535$\pm$.001 & .819$\pm$.001 & .642$\pm$.001& .503$\pm$.001 \\
NetLSD & .614$\pm$.002 & .817$\pm$.001 & .630$\pm$.001& .525$\pm$.001 \\
FGSD & .650$\pm$.002 & .822$\pm$.001 & .699$\pm$.001& .528$\pm$.001 \\
Geo-Scatter & .532$\pm$.001 & .800$\pm$.001 & .695$\pm$.001& .524$\pm$.001 \\
FEATHER & .728$\pm$.002 & .823$\pm$.001 & .719$\pm$.001& .526$\pm$.001 \\
Mean Pool & .599$\pm$.003 & .801$\pm$.002 & .708$\pm$.001& .503$\pm$.001 \\
Max Pool & .612$\pm$.013 & .805$\pm$.001 & .713$\pm$.001& .515$\pm$.001 \\
Sort Pool & .614$\pm$.010 & .807$\pm$.001 & .712$\pm$.001& .528$\pm$.001 \\
Top K Pool & .634$\pm$.001 & .807$\pm$.001 & .706$\pm$.002& .520$\pm$.003 \\
SAG Pool & .620$\pm$.001 & .804$\pm$.001 & .705$\pm$.002& .518$\pm$.003 \\
Our method & \textbf{.772$\pm$.002} & \textbf{.835$\pm$.001} & \textbf{.722$\pm$.001} & \textbf{.538$\pm$.003} \\
\Xhline{2\arrayrulewidth}
\end{tabular}
\caption{Average AUC scores (and standard errors) of our model versus all the baselines for the graph classification task. The baseline results are taken from Rozemberczki et al.
\cite{feather}.}
\label{graphclassification}
\end{table}
\subsection{Parameter Sensitivity Analysis}
\begin{figure}[htbp]
\centerline{\includegraphics[width=1.05\linewidth]{khop_combine.pdf}}
\caption{Parameter sensitivity for the graph classification task on the GitHub Repos dataset.}
\label{para}
\end{figure}
In this section, we study the sensitivity of our method to the choice of hyper-parameters. The hyper-parameters used in our model are:
\begin{itemize}
\item \textbf{$k_{max}$ -} (default: 5) The maximum scale of the $k$-hop sub-graphs used to capture topological similarity.
\item \textbf{$d$ -} (default: 25) The number of sampling points.
\item \textbf{$\tau$ -} (default: 0.5) The scaling parameter of the filter kernel.
\end{itemize}
The parameter sensitivity analysis for the graph classification task on the GitHub Repos dataset is shown in Fig. \ref{para}. We tune each parameter separately while fixing the other parameters to their default values. Overall, our model is not sensitive to its parameters: the results are stable across all tested values of the number of sampling points ($d$) and of $\tau$, and across all scales ($k_{max}$) larger than two.
\section{Conclusion}
In this paper, we introduced a novel framework to depict the distribution of node features in $k$-hop sub-graphs based on diffusion wavelets and proposed a graph-level embedding method based on the aggregation of the characteristic functions. We also provided theoretical proofs that our embedding method produces identical embeddings for isomorphic graphs and that it is robust to feature noise. We evaluated our method on the task of graph classification using four real-world networks and compared it against 12 baselines. Our method outperformed them all in all of our experiments, achieving the current state-of-the-art.
\noindent \paragraph{Code \& Data Availability:}The code and data for this paper will be made available upon request.
\bibliographystyle{ACM-Reference-Format}
\balance
\section{Introduction}
Throughout this paper, $M^n$ denotes an $n$-dimensional, compact, connected, oriented, smooth manifold with nonempty boundary $\Sigma$ and $n\geq3.$ We say that a Riemannian metric $g$ is a gradient Einstein-type metric on $M^{n}$ or that $(M^n,g)$ is a gradient Einstein-type manifold if there exists a smooth function $u$ on $M^{n}$ satisfying
\begin{equation}\label{1-1}
\left\{\begin{array}{rcl}
\nabla^{2}u &=& \frac{\mu}{\beta}u\big(\Lambda g-\frac{\alpha}{\beta}Ric\big)+\gamma g,\\
u&>& 0\quad \hbox{in}\quad int(M),\\
u&=& 0\quad \hbox{on}\quad \Sigma,
\end{array}\right.
\end{equation}
for some smooth function $\Lambda$ on $M^{n}$ and constants $\alpha, \beta, \mu, \gamma \in \mathbb{R}$, with $\beta\neq 0.$ Here $\nabla^2u$ denotes the Hessian of $u,$ and $Ric$ stands for the Ricci tensor of $g.$
We observe that the $\mu=0$ and $\gamma\neq0$ case has already been covered by Reilly's classical result~\cite[Lemma~3 and Theorem~B]{reilly2}, from which we know that if a compact Riemannian manifold $(M^n,g)$ with nonempty connected boundary $\Sigma$ admits a smooth function $f$ on $M^n$ and a nonzero constant $L$ such that $\nabla^2f=Lg$ and $f|_\Sigma$ is constant, then it is isometric to a (metric) ball in a Euclidean space, while in the $\mu=\gamma=0$ case the function $u$ must vanish identically by the maximum principle, which contradicts $u>0$ in $int(M).$ So, for our purposes, it is enough to consider $\mu\neq0.$ Furthermore, for the $\gamma\neq 0$ case, we have to assume that $0$ is a regular value of $u$, while for $\gamma=0$, this is a consequence of \eqref{1-1}, see Proposition~\ref{prop1}. We highlight that the sign of $\alpha\mu$ is of crucial significance for compact gradient Einstein-type manifolds of constant scalar curvature, see Section~\ref{RbtEtm}. For instance, in dimension three, we give a complete topological classification of the boundary by means of the sign of $\alpha\mu,$ see Section~\ref{3-dimension}.
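To spell out the $\mu=\gamma=0$ case (an elementary verification): there \eqref{1-1} reduces to
\begin{equation*}
\nabla^{2}u=0\quad \hbox{in}\quad M,\qquad u=0\quad \hbox{on}\quad \Sigma,
\end{equation*}
so $u$ is harmonic and, by the maximum principle, $u\equiv0$ on $M^n$, which is incompatible with $u>0$ in $int(M)$.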
The class of metrics defined by \eqref{1-1} is closely related to two particular cases of metrics studied in both mathematical and physical frameworks. The first one relies on the theory of static and $V$-static metrics, which have applications to General Relativity, black hole problems and critical metrics of the volume functional, see, e.g.,~\cite{hawking,HMRa,MT}. Indeed, for $\alpha=\beta=-\mu$ and $\gamma=0$ or $\gamma=1$, if we additionally suppose that $\Lambda$ is a constant and $-\Delta u=\Lambda u$, then we recover the special cases of static and $V$-static metrics, respectively. The second one arises from the construction of warped product Einstein metrics through the equation for the Ricci tensor of the base space with nonempty boundary, the so-called $(\lambda,n+m)$-Einstein equation by He, Petersen and Wylie~\cite{HPW1}. They observed that such an equation is a natural case to consider since a manifold without boundary often occurs as a warped product over a manifold with nonempty boundary.
In the setting of manifolds without boundary, we highlight the general family of gradient Einstein-type metrics $g$, namely:
\begin{equation}\label{1-2}
\alpha Ric + \beta \nabla^2f+ \mu df\otimes df = (\rho S +\lambda)g,
\end{equation}
where $\alpha,\beta,\mu,\rho$ are constants with $(\alpha,\beta,\mu)\neq(0,0,0)$, $f,\lambda$ are smooth functions and $S=tr(Ric)$ is the scalar curvature of $g$. This definition, originally introduced by Catino et al.~\cite{CMMR}, unifies various particular cases of metrics studied in the current literature, such as Ricci solitons, $\rho$-Einstein solitons and Yamabe solitons, among others; see also the paper by the second author~\cite{nazareno}. A metric as defined in \eqref{1-2} is nondegenerate if $\beta^2\neq(n-2)\alpha\mu$ and $\beta\neq0$. Otherwise, if $\beta^2=(n-2)\alpha\mu$ and $\beta\neq0$, we have a degenerate gradient Einstein-type metric.
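To make the correspondence with static metrics concrete, we record the elementary computation (a routine check; the sign conventions for the static and $V$-static equations follow, e.g., \cite{MT}). For $\alpha=\beta=-\mu$ one has $\frac{\mu}{\beta}=-1$ and $\frac{\alpha}{\beta}=1$, so \eqref{1-1} reads
\begin{equation*}
\nabla^{2}u=u\,Ric-\Lambda u\,g+\gamma g \quad \hbox{in}\quad int(M).
\end{equation*}
If moreover $-\Delta u=\Lambda u$, then
\begin{equation*}
-(\Delta u)g+\nabla^{2}u-u\,Ric=\Lambda u\,g-\Lambda u\,g+\gamma g=\gamma g,
\end{equation*}
which for $\gamma=0$ is the static equation and for $\gamma=1$ is the $V$-static equation.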
In~\cite{CMMR} the authors give a justification for this terminology through an equivalence of a degenerate gradient Einstein-type metric with a conformally Einstein metric. In Section~\ref{Sec-DNDC}, we observe that PDE~\eqref{1-1} in $int(M)$ is equivalent to an equation like~\eqref{1-2}; this allows us to characterize the degenerate condition in our case. In~\cite{nazareno} the second author focused his analysis on the case $\beta\neq0$, which includes both the degenerate and nondegenerate cases. In particular, the nondegenerate condition was crucial for him, see~\cite[Eq.~(3.10)]{nazareno}. For it, he used that equation~\eqref{1-2} is equivalent to
\begin{equation}\label{1-3}
\frac{\alpha}{\beta}Ric + \frac{\beta}{\mu u}\nabla^2 u=\Lambda g,
\end{equation}
where $u=e^{\frac{\mu}{\beta}f}$ and $\Lambda=\frac{1}{\beta}(\rho S+\lambda)$. This was indeed one of our key insights to approach this class of metrics on a smooth manifold with nonempty boundary. Here, we assume the existence of a nonnegative solution $u$ of PDE~\eqref{1-1} and investigate its topological consequences.
In this paper, we first focus attention on basic properties of the boundary of a gradient Einstein-type manifold and give some examples, see Section~\ref{bpBEx}. Next, we address the Einstein case; in this setting, we highlight Proposition~\ref{sinalgamma}, which provides an important inequality in terms of $\gamma.$ Furthermore, we prove two interesting rigidity results characterizing geodesic balls and hemispheres, see Theorems~\ref{teogammaneq0} and~\ref{gamma0}. Coming back to the general case, we obtain an integral relationship between a gradient Einstein-type manifold and its boundary, see Proposition~\ref{thmA}. As an application, we prove a Chru\'{s}ciel-type inequality and discuss its consequences for gradient Einstein-type manifolds, see Theorems~\ref{corovstatic} and~\ref{corostatic}.
Also, we obtain upper bounds for the boundary area in arbitrary dimension, see Theorem~\ref{areangeneral}. In particular, we get topological constraints and classifications for the boundary of a gradient Einstein-type manifold in the three- and five-dimensional cases, see Proposition~\ref{topsphere} and Corollaries~\ref{coro1}, \ref{coro3} and~\ref{teofivedimensional}. Finally, Theorem~\ref{HCGEtM} gives a rigidity result for geodesic balls on a class of homogeneous gradient Einstein-type manifolds satisfying the nondegenerate condition in their topological interior.
\section{Boundary properties and some examples}\label{bpBEx}
Here, we discuss basic properties of the boundary of a gradient Einstein-type manifold by means of $\gamma$. We observe that such properties have been proven in particular cases by He-Petersen-Wylie~\cite{HPW1} and Miao-Tam~\cite{MT}. Our proof follows as in the latter two papers. For the sake of clarity, we have adapted the proof to our case.
\begin{proposition}\label{prop1}
Let $(M^n,g)$ be a gradient Einstein-type manifold with boundary $\Sigma$.
\begin{itemize}
\item[(i)] The boundary $\Sigma$ is a totally umbilical hypersurface, with second fundamental form $\mathcal{A}=-\gamma|\nabla u|^{-1}g_{\Sigma}.$ In particular, if $\gamma=0$, then $\Sigma$ is a totally geodesic hypersurface.
\item[(ii)] Let $\Sigma=\cup\Sigma_{i}$ be the disjoint union of all connected components $\Sigma_i$ of $\Sigma$. Then, the restriction of $\xi_i:=|\nabla u|$ to each $\Sigma_i$ is a positive constant.
\end{itemize}
\end{proposition}
\begin{proof}
In the case where $\gamma\neq 0$, we already assume that $0$ is a regular value for $u,$ so $\nabla u(x)\neq 0$ for all $x\in\Sigma$. Let us now assume that $\gamma=0$ and prove that, without any additional hypothesis, $\nabla u(x)\neq 0$ for all $x\in\Sigma$.
For it, we take a unit speed geodesic $\sigma:[0,1]\to M^n$ so that $\sigma(0)=x\in\Sigma$ and $\sigma(1)\in int(M)$; thus the function $\theta:[0,1]\to\mathbb{R}$ given by $\theta(t)= u(\sigma(t))$ satisfies $\theta(0)=u(x)=0$ and
\begin{eqnarray*}
\theta''(t) = \nabla^2u(\sigma'(t),\sigma'(t)) =\frac{\mu}{\beta}\big(\Lambda g-\frac{\alpha}{\beta}Ric\big)(\sigma'(t),\sigma'(t))\theta(t).
\end{eqnarray*}
If $\nabla u(x)=0$, then $\theta$ would solve the initial value problem $\theta''(t)=f(t)\theta(t)$, with $f(t)=\frac{\mu}{\beta}\big(\Lambda g-\frac{\alpha}{\beta}Ric\big)(\sigma'(t),\sigma'(t))$, $\theta(0)=0$ and $\theta'(0)=0$. Hence, by uniqueness of solutions, $u$ would vanish along $\sigma$, which contradicts $u(\sigma(1))>0$. Since $\Sigma=u^{-1}(0)$ we can choose $\nu=-\frac{\nabla u}{|\nabla u|}$ to be a unit normal vector field on $\Sigma.$ Note that from \eqref{1-1}, we have $\nabla^2u=\gamma g$ on $\Sigma.$ Thus, for any vector fields $X,Y\in\mathfrak{X}(\Sigma)$, we get
\begin{equation}\label{Cal-A}
\gamma g_{\Sigma}(X,Y)=\nabla^{2}u(X,Y)=-\langle \nabla_{X}Y,\nabla u\rangle=|\nabla u|\langle \nabla_{X}Y,\nu\rangle= - |\nabla u|\mathcal{A}(X,Y),
\end{equation}
where $\mathcal{A}$ is the second fundamental form of $\Sigma.$ From identity~\eqref{Cal-A} and the fact that $\nabla u\neq 0$ on $\Sigma$, we conclude (i). For assertion (ii), we again use that $\nabla^2u=\gamma g$ on $\Sigma$ and that $\nabla u$ is normal to $\Sigma$; hence
\begin{equation*}
X(|\nabla u|^{2})=2\nabla^{2}u(X,\nabla u)=0,
\end{equation*}
for all $X\in\mathfrak{X}(\Sigma).$ So, we conclude that the restriction of $|\nabla u|=:\xi_i$ to each connected component $\Sigma_i$ of $\Sigma$ is a positive constant.
\end{proof}
Next, we give some examples of gradient Einstein-type metrics on geodesic balls in simply connected space forms.
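As a quick consistency check of item (i), anticipating Example~\ref{geodesicballRn} below: on the unit ball $B^{n}\subset\mathbb{R}^{n}$ with $u(x)=\frac{1}{2}-\frac{1}{2}|x|^{2}$ and $\gamma=-1$, one has $\nabla u=-x$, so $|\nabla u|=1$ on $\Sigma=\mathbb{S}^{n-1}$ and
\begin{equation*}
\mathcal{A}=-\gamma|\nabla u|^{-1}g_{\Sigma}=g_{\Sigma},
\end{equation*}
which is precisely the second fundamental form of the unit sphere computed with respect to $\nu=-\nabla u/|\nabla u|$.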
\begin{example}\label{hemisferio}
Let $\mathbb{S}^{n}_{+}\subset\mathbb{R}^{n+1}$ be the upper hemisphere, whose boundary is the unit round sphere $\mathbb{S}^{n-1}.$ The round metric $g$ induced from $\mathbb{R}^{n+1}$ is an Einstein-type metric on $\mathbb{S}^{n}_{+}.$ Indeed, given a constant $\alpha$ and nonzero constants $\beta$ and $\mu$, consider $\Lambda=\frac{\alpha}{\beta}(n-1)-\frac{\beta}{\mu}$ and the height function $h_{v}(x)=\langle \Vec{x},v\rangle$ with respect to the vector $v=e_{n+1}.$ Now, it is enough to note that $h_{v}>0$ in $\mbox{int}(\mathbb{S}^{n}_{+})$, $h_{v}=0$ on $\mathbb{S}^{n-1}$, $Ric_{g}=(n-1)g$ and $\nabla^{2}h_{v}=-h_{v}g;$ moreover, $\gamma$ must be zero.
\end{example}
\begin{example}\label{geodesicballRn}
Let $B^{n}$ be the unit geodesic ball in $\mathbb{R}^{n}$. It is easy to see that the Euclidean metric is an Einstein-type metric on $B^{n}.$ For this, consider the function $u(x)=\frac{1}{2}-\frac{1}{2}|x|^{2}.$ Thus, given a constant $\alpha$ and nonzero constants $\beta$ and $\mu$, we can choose $\Lambda=0$ and $\gamma=-1$.
\end{example}
\begin{example}\label{geodesicballsn}
Let $\mathbb{S}^n$ be the unit round sphere with its canonical metric $g$, and take a point $p\in\mathbb{S}^n.$ Let $M^{n}$ be the geodesic ball $B(p,r)$ of radius $r<\frac{\pi}{2}$ centered at $p$ with boundary $\Sigma=\partial B(p,r).$ Consider the radial smooth function
\begin{equation*}
u(\exp_p(sv))=\frac{\cos s}{\cos r}-1,\quad \mbox{with} \quad 0\leq s\leq r\quad \mbox{and} \quad |v|=1.
\end{equation*}
Note that $u>0$ in $int(M)$ and $u=0$ on $\Sigma$ (where $s=r$).
Thus, given a constant $\alpha$ and nonzero constants $\beta$ and $\mu$, we can choose constants $\Lambda$ and $\gamma$ such that $g$ is an Einstein-type metric on $M^n.$ In fact, since $Ric=(n-1)g$, from \eqref{1-1} we have
\begin{equation*}
-\frac{\cos s}{\cos r}=\frac{\mu}{\beta}\left(\frac{\cos s}{\cos r}-1\right)\Big(\Lambda-\frac{\alpha}{\beta}(n-1)\Big)+\gamma
\end{equation*}
and, as $\gamma$ is a constant, we can evaluate the previous equation on $\Sigma$ to get $\gamma=-1$. We also obtain that
\begin{equation*}
\Lambda=\frac{\alpha}{\beta}(n-1)-\frac{\beta}{\mu}.
\end{equation*}
Furthermore, consider the level sets of $u$, which we denote by $\Sigma_{s}$, $0<s\leq r$. By a straightforward computation, the mean curvature of $\Sigma_s$ is $H_s=(n-1)\cot s$; besides, the area element $A(s)$ of $\Sigma_{s}$ satisfies the initial value problem
\begin{equation*}
A'(s)=(n-1)\cot s A(s), \quad A(0)=0.
\end{equation*}
\end{example}
\begin{example}\label{geodesicballHn}
Let $\mathbb{H}^{n}$ be the hyperbolic space with its canonical metric $g$, and take a point $p\in \mathbb{H}^{n}.$ If $M^{n}$ is a geodesic ball $B(p,r)$ of radius $r$ centered at $p$, then, in an analogous way to Example~\ref{geodesicballsn}, we can prove that $g$ is an Einstein-type metric on $M^{n}$. For it, define the radial smooth function
\begin{equation*}
u(\exp_p(sv))=1-\frac{\cosh s}{\cosh r}, \quad \mbox{with} \quad 0\leq s\leq r\quad \mbox{and} \quad |v|=1,
\end{equation*}
so that $u>0$ in $int(M)$ and $u=0$ on $\Sigma$ (where $s=r$). Take a constant $\alpha$ and two nonzero constants $\beta$ and $\mu$. Next, note that $Ric=-(n-1)g$, hence
\begin{equation*}
\gamma=-1 \quad \mbox{and} \quad \Lambda= \frac{\beta}{\mu}-\frac{\alpha}{\beta}(n-1).
\end{equation*}
Furthermore, denote the level sets of $u$ by $\Sigma_{s}$, $0<s\leq r.$ By a straightforward computation, the mean curvature of $\Sigma_s$ is $H_s=(n-1)\coth s$; besides, the area element $A(s)$ of $\Sigma_{s}$ satisfies the initial value problem
\begin{equation*}
A'(s)=(n-1)\coth s A(s), \quad A(0)=0.
\end{equation*}
\end{example}
These four examples are Einstein manifolds (i.e., $Ric=\frac{S}{n}g$) with connected boundaries and are inspired by two particular types of metrics discussed in our introduction, namely, static and $V$-static metrics. We highlight that a vast area of research has tried to obtain the rigidity of such metrics to one of these examples, given certain geometric constraints. For instance, a long-standing and famous conjecture proposed by Boucher-Gibbons-Horowitz~\cite{BGH} asks whether the only $n$-dimensional compact static metric with positive scalar curvature and connected boundary is given by the standard round hemisphere from Example~\ref{hemisferio} by taking $(\alpha,\beta,\mu)=(1,1,-1)$. See also \cite{Ambrozio} and \cite{MT2} for some rigidity results on static and $V$-static metrics, respectively. For examples having disconnected boundary, we refer the reader to~\cite[Theorem~1]{Ambrozio}. For more details on Examples~\ref{geodesicballsn} and \ref{geodesicballHn}, see the proof of Proposition~\ref{sinalgamma}.
\section{Einstein manifold case}
It is known that the existence of concircular fields on a Riemannian manifold is an important geometric assumption. The term concircular originates from conformal transformations that preserve geodesic circles, themselves called concircular transformations, see Yano~\cite{yano2}. Indeed, this condition has been heavily studied, especially in the case of manifolds without boundary, see, e.g., Tashiro~\cite{Tashiro}. For Riemannian manifolds with nonempty boundary, similar problems have also been considered, e.g., in Reilly's work~\cite{reilly1,reilly2}.
A remarkable fact is that, when $\alpha=0$ and $\Lambda$ is a constant, the gradient of the function $u$ provides a concircular field, see \eqref{1-1}. In a more general way, under the Einstein assumption, we prove that this function provides a special scalar concircular field on $(M^n,g)$. After this, we work to prove the rigidity of hemispheres and geodesic balls in the class of gradient Einstein-type manifolds which are Einstein manifolds.
\begin{lemma}\label{lemma1}
Let $(M^n,g)$ be a gradient Einstein-type manifold with boundary $\Sigma$. If $(M^n,g)$ is an Einstein manifold, then the function $u$ provides a special concircular field on $(M^n,g)$. More precisely,
\begin{equation}\label{EqConCirc}
\nabla^2 u = \Big(-\frac{S}{n(n-1)}u+\gamma\Big)g,
\end{equation}
with $u>0$ in $int(M)$ and $u=0$ on $\Sigma.$
\end{lemma}
\begin{proof}
If $(M^n,g)$ is an Einstein manifold, then from~\eqref{1-1} we have
\begin{equation}\label{1-4}
\nabla^2 u = \left[\frac{\mu}{\beta}\Big(\Lambda u-\frac{\alpha}{\beta}\frac{S}{n}u\Big)+\gamma\right] g \quad\mbox{and}\quad \Delta u = \left[\frac{\mu}{\beta}\Big(\Lambda u-\frac{\alpha}{\beta}\frac{S}{n}u\Big)+\gamma\right]n.
\end{equation}
Using the classical Bochner formula
\begin{equation*}
Ric(\nabla \psi)+\nabla\Delta \psi = div\nabla^2\psi
\end{equation*}
and \eqref{1-4}, we obtain
\begin{equation*}
\frac{S}{n}\nabla u+\frac{\mu}{\beta}\nabla\Big(\Lambda n u-\frac{\alpha}{\beta}S u\Big)= \frac{\mu}{\beta}\nabla\Big(\Lambda u-\frac{\alpha}{\beta}\frac{S}{n}u\Big).
\end{equation*}
Then, by connectedness of $M^n$ (note that $S$ is constant by Schur's lemma, since $n\geq3$),
\begin{equation*}
\frac{S}{n} u+\frac{\mu}{\beta}\left[\Lambda(n-1) -\frac{\alpha}{\beta}\frac{(n-1)S}{n}\right]u = C,
\end{equation*}
for some constant $C$, and as $u=0$ on $\Sigma$, $C$ must be zero. Thus, since $u>0$ in $\mbox{int}(M)$, we get
\begin{equation}\label{eqespecial}
\frac{\mu}{\beta}\Big(\Lambda-\frac{\alpha}{\beta}\frac{S}{n} \Big)=-\frac{S}{n(n-1)}.
\end{equation}
Replacing \eqref{eqespecial} into the first equation of \eqref{1-4}, we conclude the proof.
\end{proof}
Now, let $p\in M^n$ be an interior maximum point of $u$ so that $\nabla u(p)=0$, and take a unit speed geodesic $\sigma(s)$ emanating from $p$. Moreover, without loss of generality, we assume $Ric=\kappa (n-1)g$, $\kappa\in\left\{-1,0,1\right\}$, in Lemma~\ref{lemma1} to see that
\begin{equation*}
u''(\sigma(s))=\nabla^2u(\sigma'(s),\sigma'(s))=-\kappa u(\sigma(s))+\gamma, \quad u(\sigma(0))=u(p).
\end{equation*}
Solving this initial value problem, one has
\begin{equation}\label{edo}
u(\sigma(s))=\left\{\begin{array}{lll}
\frac{\gamma}{2}s^{2}+u(p),& \mbox{if} & \kappa=0,\\
(u(p)-\gamma)\cos s+\gamma,& \mbox{if} & \kappa=1,\\
(u(p)+\gamma)\cosh s-\gamma,& \mbox{if} & \kappa=-1.
\end{array}\right.
\end{equation}
Take $r_{0}=d(p,\Sigma)$ and the geodesic ball $B(p,r_0)\subset M^n$ of radius $r_0$ centered at $p$; recall that $r_0$ must be less than $\pi$ in the $\kappa=1$ case, see \cite[Theorem~11.16]{lee}. For each $s_{0}\in(0,r_{0}]$ consider the geodesic sphere $\Sigma_{s_{0}}$ of radius $s_{0}$. In the next result, we characterize the constant $\gamma$ by means of $u(p)$, and compute the mean curvature $H_{s_0}$ of each $\Sigma_{s_{0}}.$
\begin{proposition}\label{sinalgamma}
Let $(M^n,g)$ be a gradient Einstein-type manifold with boundary $\Sigma$. If $(M^n,g)$ is an Einstein manifold with $Ric=\kappa(n-1)g$, $\kappa\in\left\{-1,0,1\right\}$, and $p\in M^n$ is an interior maximum point of $u$, then
$$\left\{\begin{array}{lll}
\gamma< 0& \mbox{if} & \kappa=0,\\
\gamma<u(p)& \mbox{if} & \kappa=1,\\
\gamma<-u(p)& \mbox{if} & \kappa=-1.
\end{array}\right.$$
Furthermore, the mean curvature of each $\Sigma_{s_{0}}$ is
$$H_{s_{0}}=\left\{\begin{array}{lll}
\frac{n-1}{s_{0}}& \mbox{if} & \kappa=0,\\
(n-1)\cot s_{0}& \mbox{if} & \kappa=1,\\
(n-1)\coth s_{0}& \mbox{if} & \kappa=-1.
\end{array}\right.$$
\end{proposition}
\begin{proof}
For the first part, we initially consider the $\kappa=1$ case.
Since $p$ is a maximum point of $u$ and $r_0<\pi$, analysing $u(\sigma(s))$ from \eqref{edo} we get the strict inequality
\begin{equation*}
u(p)>u(\sigma(s))=(u(p)-\gamma)\cos s+\gamma,
\end{equation*}
for all $s\in(0,r_{0}].$ So,
\begin{equation*}
u(p)(1-\cos s)>\gamma(1-\cos s),
\end{equation*}
and as $1-\cos s>0$, we conclude the result. For the case $\kappa=-1$, again from \eqref{edo} one has
\begin{equation*}
u(p)(1-\cosh s)>-\gamma(1-\cosh s),
\end{equation*}
for all $s\in(0,r_{0}]$, and as $1-\cosh s<0$, we obtain the result. The $\kappa=0$ case is very similar.
For the second part, we consider $\Sigma_{s_{0}}$ as above, and we prove the $\kappa=-1$ case, since the other cases are obtained by similar arguments. From \eqref{edo} we get
\begin{equation}\label{RC}
\langle\nabla u(\sigma(s_0)),\sigma'(s_0)\rangle=(u(p)+\gamma)\sinh(s_{0})\neq 0,
\end{equation}
for all $s_{0}\in(0,r_{0}]$. It follows from the Gauss lemma that $\Sigma_{s_{0}}$ is a hypersurface of $M^n$ and the vector fields $\nabla u(\sigma(s_0))$ and $\sigma'(s_0)$ are proportional; besides, $\nu=-\frac{\nabla u(\sigma(s_0))}{|\nabla u(\sigma(s_0))|}$ is a unit normal vector field on $\Sigma_{s_{0}}.$ Let $\mathcal{A}_{s_0}$ be the second fundamental form of $\Sigma_{s_0}.$ By Lemma~\ref{lemma1} one has $\nabla^{2}u=(u+\gamma)g$; then we conclude (analogously to~\eqref{Cal-A}) that
\begin{equation*}
\mathcal{A}_{s_0}(X,Y)=-|\nabla u(\sigma(s_0))|^{-1}(u(\sigma(s_{0}))+\gamma)g_{\Sigma_{s_0}}(X,Y),
\end{equation*}
for all $X,Y\in\mathfrak{X}(\Sigma_{s_{0}})$. Hence
\begin{equation}\label{meancurvature}
H_{s_{0}}=-(n-1)|\nabla u(\sigma(s_0))|^{-1}(u(\sigma(s_{0}))+\gamma).
\end{equation}
As $\nabla u(\sigma(s_0))$ and $\sigma'(s_0)$ are proportional, and using the characterization of $\gamma$ by means of $u(p)$ obtained in the first part of this proposition, we have from \eqref{RC} that
\begin{equation}\label{mean1}
|\nabla u(\sigma(s_0))|=|u(p)+\gamma|\sinh(s_{0})=-(u(p)+\gamma)\sinh(s_{0}).
\end{equation}
On the other hand, by \eqref{edo} we have
\begin{equation}\label{mean2}
u(\sigma(s_{0}))+\gamma=(u(p)+\gamma)\cosh s_{0}.
\end{equation}
Replacing \eqref{mean1} and \eqref{mean2} into \eqref{meancurvature}, we obtain the desired result.
\end{proof}
Now, we study the cases $\gamma\neq 0$ and $\gamma=0$ separately. The next two theorems provide a complete classification of gradient Einstein-type manifolds that are Einstein manifolds. In particular, we obtain new characterizations of hemispheres and geodesic balls in simply connected space forms. The first of them is an extension, to a more general class of metrics, of a known result in the context of $V$-static metrics by Miao and Tam~\cite{MT2}.
\begin{theorem}\label{teogammaneq0}
Let $(M^{n},g)$ be a compact gradient Einstein-type manifold with connected boundary $\Sigma$. If $(M^{n},g)$ is an Einstein manifold and $\gamma\neq 0$, then it is isometric to a geodesic ball in a simply connected space form.
\end{theorem}
\begin{proof}
Without loss of generality, we can suppose that $Ric=\kappa (n-1)g$, with $\kappa\in\left\{-1,0,1\right\}$. For $\kappa=0$, Lemma~\ref{lemma1} guarantees that $\nabla^{2} u=\gamma g;$ thus, from Reilly's result~\cite{reilly2}, we know that $(M^{n},g)$ is isometric to a geodesic (metric) ball in a Euclidean space. For $\kappa\in\{-1,1\}$, we follow the approach of Miao and Tam~\cite{MT2}. For it, let $p\in M^n$ be an interior maximum point of $u$, consider $r_{0}=d(p,\Sigma)=d(p,q_{0})$, for some $q_{0}\in\Sigma$, and a geodesic ball $B(p,r_0)$ of radius $r_0$ centered at $p$. First, we claim that $\partial B(p,r_0)\subset \Sigma$. Indeed, if $q\in\partial B(p,r_0)$, then $q=\sigma(r_{0})$ for some minimizing unit speed geodesic $\sigma:[0,r_{0}]\to M^n$ emanating from $p$.
By the solution of ODE~\eqref{edo}, the value $u(\sigma(s))$ depends only on the parameter $s$ (it does not depend on the geodesic $\sigma$); then $u(q)=u(\sigma(r_{0}))=u(q_{0})=0$ and $q\in \Sigma=u^{-1}(0)$, which proves the claim. Since $M^n$ is compact and connected, $M^n=B(p,r_0)$. Now, we compare the volume of $B(p,r_0)$ with the volume of a geodesic ball in a space form. Let $A(s)$ be the area element of a level surface $\Sigma_{s}\subset B(p,r_0)$, $s\in(0,r_{0}]$. We prove the $\kappa=1$ case. By Proposition~\ref{sinalgamma} and the first variation of area formula (see, e.g., Li's book~\cite{Li}), we get the initial value problem:
\begin{equation*}
A'(s)=(n-1)\cot s A(s) \ \ \mbox{and} \ \ A(0)=0.
\end{equation*}
By uniqueness of the solution of this ODE, we have that the area of each level surface $\Sigma_{s}$ is equal to the area of a level surface in a geodesic ball in $\mathbb{S}^{n},$ see Example~\ref{geodesicballsn}. Since
\begin{equation*}
vol(B(p,r_0))=\int_{0}^{r_{0}}A(s)ds,
\end{equation*}
we obtain that this volume is equal to the volume of the latter mentioned geodesic ball. Using the Bishop-Gromov comparison theorem (see, e.g., Lee~\cite[Theorem~11.19]{lee}) we conclude our proof. The $\kappa=-1$ case is very similar.
\end{proof}
It is worth mentioning that Examples~\ref{geodesicballRn}, \ref{geodesicballsn} and \ref{geodesicballHn} are explicit cases where Theorem~\ref{teogammaneq0} manifests. Next, a new characterization of hemispheres is obtained by considering $\gamma=0.$
\begin{theorem}\label{gamma0}
Let $(M^{n},g)$ be a compact gradient Einstein-type manifold with connected boundary $\Sigma.$ If $(M^{n},g)$ is an Einstein manifold and $\gamma=0$, then it is isometric to a hemisphere of a round sphere.
\end{theorem}
\begin{proof}
Taking $\gamma=0$ in Lemma~\ref{lemma1}, we get
\begin{equation*}
\nabla^{2} u=-\frac{S}{n(n-1)} u g.
\end{equation*} It follows immediately from Proposition~\ref{sinalgamma} that $S>0$. Alternatively, since $u$ is nonconstant we have that $S$ is a nonzero eigenvalue of the Laplacian with Dirichlet boundary condition and, therefore, $S>0.$ As $u=0$ on $\Sigma$ (which is totally geodesic) we can use \cite[Lemma~3]{reilly1} to conclude that $(M^n,g)$ is isometric to a hemisphere of a round sphere $\mathbb{S}^n(c),$ with sectional curvature $c=\frac{S}{n(n-1)}$. \end{proof} It is worth mentioning that Example~\ref{hemisferio} is an explicit case where Theorem~\ref{gamma0} manifests. The next step is to study conditions for a gradient Einstein-type manifold to be an Einstein manifold. This is the content of Sections~\ref{RbtEtm} and \ref{Sec-DNDC}. In this case, we immediately obtain from Theorems~\ref{teogammaneq0} and \ref{gamma0} rigidity results for geodesic balls and hemispheres by considering connectedness of the boundary. \section{Rigidity and boundary topology of Einstein-type manifolds}\label{RbtEtm} Our purpose here is, under some geometric assumptions, to prove rigidity results on a special class of Einstein-type manifolds. The first step is to establish an integral identity that provides a relation between the geometry of $(M^n,g)$ and its boundary $\Sigma$. We start by observing that \begin{align*} \stackrel\circ{\nabla^2u}&=\nabla^{2}u-\frac{\Delta u}{n}g =\frac{\mu}{\beta}u\big(\Lambda g-\frac{\alpha}{\beta}Ric\big)+\gamma g-\big[\frac{\mu}{\beta}u\big(\Lambda n-\frac{\alpha}{\beta}S\big)+\gamma n\big]\frac{g}{n}. 
\end{align*}
If we define $\stackrel\circ{Ric}=Ric-\frac{S}{n}g,$ then the previous equation becomes
\begin{equation}\label{conformal}
\stackrel\circ{\nabla^2}u=-\frac{\alpha\mu}{\beta^{2}}u\stackrel\circ{Ric}.
\end{equation}
Now, we consider $\Sigma$ as the union $\cup_{i}\Sigma_{i}$ of its connected components $\Sigma_{i}.$ By Proposition~\ref{prop1}, we know that each function $\xi_{i}=|\nabla u|$ is a positive constant on $\Sigma_{i}.$ Let us denote the outward pointing unit normal vector field by $\nu,$ which satisfies $\nu=-\frac{\nabla u}{\xi_i}$ on $\Sigma_{i}.$ Notice that in the case of constant scalar curvature $S$, without loss of generality, we assume $S=\kappa n(n-1)$, $\kappa\in\left\{-1,0,1\right\}.$
\begin{proposition}\label{thmA}
Let $(M^n,g)$ be a compact gradient Einstein-type manifold of constant scalar curvature $S=\kappa n(n-1)$, $\kappa\in\left\{-1,0,1\right\}$, and with boundary $\Sigma = \cup_{i}\Sigma_{i}$. Then
\begin{equation*}
\sum_i\xi_i\int_{\Sigma_i}\big({\rm Ric}(\nu,\nu)-\kappa(n-1)\big)d\Sigma_i =\frac{\alpha\mu}{\beta^2}\int_{M}u\|\stackrel\circ{Ric}\|^2 dM.
\end{equation*}
\end{proposition}
\begin{proof}
First, we proceed as in Gomes~\cite{bjng} to prove the identity
\begin{equation}\label{EqMain}
div\big(\stackrel\circ{Ric}(\nabla u)\big)=\frac{n-2}{2n}\mathscr{L}_{\nabla u}S-\frac{\alpha\mu}{\beta^2}u\|\stackrel\circ{Ric}\|^2,
\end{equation}
which holds for any gradient Einstein-type metric (\ref{1-1}) with both $\beta$ and $\mu$ nonzero. Indeed, by the product rule for the divergence, we have
\begin{equation}\label{eqm1}
div\big(\stackrel\circ{Ric}(\nabla u)\big)=div(\stackrel\circ{Ric})(\nabla u) + \langle \nabla^2 u,\stackrel\circ{Ric}\rangle.
\end{equation}
From the second contracted Bianchi identity, we get
\begin{equation}\label{eqm2}
div(\stackrel\circ{Ric})(\nabla u)= \frac{n-2}{2n}\langle \nabla S,\nabla u\rangle.
\end{equation}
Since $\langle g,\stackrel\circ{Ric}\rangle=0$, from \eqref{conformal} we get
\begin{equation}\label{eqm3}
\langle \nabla^2 u,\stackrel\circ{Ric}\rangle = \langle\stackrel\circ{\nabla^2u},\stackrel\circ{Ric}\rangle =-\frac{\alpha\mu}{\beta^2}u\|\stackrel\circ{Ric}\|^2.
\end{equation}
Inserting \eqref{eqm2} and \eqref{eqm3} into \eqref{eqm1}, we immediately get equation~\eqref{EqMain}. We now assume that $S=\kappa n(n-1)$ is constant and $M^n$ is compact. So, integrating equation~\eqref{EqMain}, we have
\begin{equation*}
\frac{\alpha\mu}{\beta^2}\int_{M}u\|\stackrel\circ{Ric}\|^2=-\int_{\Sigma}\stackrel\circ{Ric}(\nabla u,\nu).
\end{equation*}
Using that $\stackrel\circ{Ric}=Ric-\frac{S}{n}g$ and $\xi_i\nu=-\nabla u$ on ${\Sigma_{i}},$ we complete our proof.
\end{proof}
Proposition~\ref{thmA} is important in order to obtain rigidity results for $(M^n,g)$ using only suitable hypotheses on the boundary. In particular, for $\alpha\mu>0$ one has
\begin{equation*}
\sum_i\xi_i\int_{\Sigma_i}\big(Ric(\nu,\nu)-\kappa(n-1)\big)d\Sigma_i \geq 0
\end{equation*}
with equality holding if and only if $\stackrel\circ{Ric} = 0$. Note that the previous inequality is reversed in the $\alpha\mu<0$ case. Thus, we immediately get the next proposition.
\begin{proposition}
Consider the same setup as in Proposition~\ref{thmA}. Then, $(M^{n},g)$ is an Einstein manifold provided that:
\begin{itemize}
\item[(i)] $\mbox{Ric}(\nu,\nu)\leq \kappa(n-1)$ along $\Sigma$ and $\alpha\mu> 0$, or
\item[(ii)] $\mbox{Ric}(\nu,\nu)\geq \kappa(n-1)$ along $\Sigma$ and $\alpha\mu< 0$.
\end{itemize}
\end{proposition}
The next two theorems can be viewed as Chru\'{s}ciel-type inequalities.
\begin{theorem}\label{corovstatic}
Let $(M^n,g)$ be a compact gradient Einstein-type manifold of constant scalar curvature $S=\kappa n(n-1)$, $\kappa\in\left\{-1,0,1\right\}$, and with boundary $\Sigma=\cup_{i}\Sigma_{i}$.
If $\alpha\mu<0$ ($\alpha\mu>0$), then \begin{equation}\label{statineq} \sum\limits_{i}\xi_i\int_{\Sigma_i}\Big(S_{\Sigma_i}-\kappa(n-2)(n-1)-\frac{n-2}{n-1}H_i^2\Big)d{\Sigma_i}\geq 0\,(\leq 0) \end{equation} with equality holding if and only if $(M^n,g)$ is an Einstein manifold. \end{theorem} \begin{proof} By Proposition~\ref{prop1} each $\Sigma_i\subset M$ is a totally umbilical hypersurface with mean curvature $H_{i}$. So, using the Gauss equation, one has \[ 2\left(Ric(\nu,\nu)-\kappa(n-1)\right)=\kappa(n-2)(n-1)+\frac{n-2}{n-1}H_i^2-S_{\Sigma_i}. \] The result then follows from Proposition~\ref{thmA}. \end{proof} In particular, since $\Sigma$ is totally geodesic when $\gamma=0$ (see Proposition~\ref{prop1}), we have the following result. \begin{theorem} \label{corostatic} Let $(M^n,g)$ be a compact gradient Einstein-type manifold of constant scalar curvature $S=\kappa n(n-1)$, $\kappa\in\left\{-1,0,1\right\}$, and with boundary $\Sigma=\cup_{i}\Sigma_{i}$. If $\alpha\mu<0$ (or $\alpha\mu>0$) and $\gamma=0$, then \begin{equation*} \sum\limits_{i}\xi_i\int_{\Sigma_i}\Big(S_{\Sigma_i}-\kappa(n-2)(n-1)\Big)d{\Sigma_i}\geq0\,(\leq0) \end{equation*} with equality holding if and only if $(M^n,g)$ is an Einstein manifold. \end{theorem} It is worth mentioning that Theorem~\ref{corostatic} characterizes the standard hemisphere as the only gradient Einstein-type manifold $(M^n,g)$ with constant scalar curvature $n(n-1)$ whose boundary is isometric to a unit sphere.
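For the reader's convenience, we sketch how the Gauss-equation identity used in the proof of Theorem~\ref{corovstatic} above can be checked. Since $\Sigma_i$ is totally umbilical with mean curvature $H_i$, its second fundamental form is $A_i=\frac{H_i}{n-1}\,g_{\Sigma_i}$, so $\|A_i\|^2=\frac{H_i^2}{n-1}$, and the twice-contracted Gauss equation with $S=\kappa n(n-1)$ gives

```latex
\begin{align*}
S_{\Sigma_i} &= S - 2\,Ric(\nu,\nu) + H_i^2 - \|A_i\|^2
            = \kappa n(n-1) - 2\,Ric(\nu,\nu) + \frac{n-2}{n-1}H_i^2,\\
2\big(Ric(\nu,\nu)-\kappa(n-1)\big)
            &= \kappa(n-2)(n-1) + \frac{n-2}{n-1}H_i^2 - S_{\Sigma_i},
\end{align*}
```

where the second line follows from the first by using $\kappa n(n-1)-2\kappa(n-1)=\kappa(n-2)(n-1)$.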
\begin{remark}\label{Remark2} Analogous results to Theorems~\ref{corovstatic} and \ref{corostatic} are obtained under the weaker assumption that $\int_{M}\mathscr{L}_{\nabla u}S\,\geq 0,$ for $\alpha\mu<0,$ or $\int_{M}\mathscr{L}_{\nabla u}S\,\leq 0,$ for $\alpha\mu>0.$ Indeed, we can proceed as in the proof of Proposition~\ref{thmA} to get \begin{equation*} \frac{\alpha\mu}{\beta^2}\int_{M}u\|\stackrel\circ{Ric}\|^2=\frac{n-2}{n}\int_{M} \mathscr{L}_{\nabla u}S+\sum_{i}\xi_{i}\int_{\Sigma_{i}}\stackrel\circ{Ric}(\nu,\nu). \end{equation*} Now, we can argue as in the proofs of Theorems~\ref{corovstatic} and \ref{corostatic} to obtain the corresponding results. We point out that this weaker assumption is closely related to the $P$ tensor, introduced in the context of $m$-quasi Einstein manifolds with nonempty boundary by He, Petersen and Wylie, see~\cite[Proposition~5.2]{HPW1}. However, even with this weaker assumption, in the case of equality we again obtain that the scalar curvature is constant. \end{remark} The next result provides an upper bound for the boundary area $|\Sigma|$ of a gradient Einstein-type manifold. In this case, we use the weaker assumption given in Remark~\ref{Remark2}. \begin{theorem}\label{areangeneral} Let $(M^n,g)$ be a compact gradient Einstein-type manifold with boundary $\Sigma$ and $\alpha\mu<0$. If $(\Sigma,g_\Sigma)$ is an Einstein manifold of scalar curvature $S_\Sigma$, with $\min S_\Sigma>0$, and the scalar curvature $S$ of $(M^n,g)$ satisfies $\int_{M}\mathscr{L}_{\nabla u}S\geq 0,$ then \begin{equation}\label{area1} |\Sigma|^{a}\left(\min_{\Sigma} S+\frac{n}{n-1}H^{2}\right)\leq n(n-1)\frac{\max S_{\Sigma}}{\min S_{\Sigma}}\omega_{n-1}^{a}, \end{equation} where $a=2/(n-1)$ and $\omega_{n-1}$ is the volume of the unit $(n-1)$-sphere, with equality holding if and only if $S_\Sigma$ is constant and $(M^n,g)$ is an Einstein manifold.
Moreover, for $n\geq 4$ estimate~\eqref{area1} reduces to \begin{equation}\label{area1-1} |\Sigma|^{a}\left(\min_{\Sigma} S+\frac{n}{n-1}H^{2}\right)\leq n(n-1)\omega_{n-1}^{a}. \end{equation} \end{theorem} \begin{proof} Let $Ric_{\Sigma}$ be the Ricci tensor of the metric $g_{\Sigma}$ on $\Sigma$ and $S_{\Sigma}$ its scalar curvature. Since $\min S_{\Sigma}>0$ and $\Sigma$ is compact, we can take the positive constants $\delta=\frac{\min S_\Sigma}{(n-1)(n-2)}$ and $\varepsilon = \frac{\max S_\Sigma}{(n-1)(n-2)}$ so that \begin{equation}\label{epsilon} \delta (n-1)(n-2)\leq S_{\Sigma}\leq \varepsilon (n-1)(n-2). \end{equation} Then \begin{equation}\label{delta} Ric_{\Sigma}=\frac{S_\Sigma}{n-1}g_\Sigma\geq \delta (n-2)g_{\Sigma}. \end{equation} Hence, by Bishop's theorem, see, e.g., Chavel~\cite[Theorem~6 p.~74]{Chavel}, it is true that \begin{equation}\label{area} |\Sigma|\leq\delta^{-\frac{1}{a}}\omega_{n-1}, \end{equation} where $\omega_{n-1}$ is the volume of an $(n-1)$-dimensional unit sphere. Since $\Sigma$ is connected, we can define $\xi=|\nabla u||_{\Sigma}$, which is a positive constant. Thus, using Remark~\ref{Remark2} and Proposition~\ref{prop1}, we obtain \begin{equation}\label{ineq2'} \frac{\alpha\mu}{\beta^{2}}\int_{M}u\|\stackrel\circ{Ric}\|^2\geq \frac{1}{2}\int_{\Sigma}\xi \Big(-S_{\Sigma}+\frac{n-2}{n}S+\frac{n-2}{n-1}H^{2}\Big), \end{equation} where we have used the Gauss equation to compute $\stackrel\circ{Ric}(\nu,\nu)$. Then, since $\alpha\mu<0$, \begin{equation}\label{inequality2} \int_{\Sigma}\left(\frac{n-2}{n}S+\frac{n-2}{n-1}H^{2}\right)\leq \int_{\Sigma}S_{\Sigma}. \end{equation} From \eqref{inequality2} and \eqref{epsilon} we get \begin{equation}\label{inequality3} \min_{\Sigma} S+\frac{n}{n-1}H^{2}\leq n(n-1)\varepsilon. \end{equation} Thus, by using \eqref{inequality3} and \eqref{area} we obtain \eqref{area1}.
Furthermore, equality holds if and only if $\Sigma$ is an Einstein manifold (see \eqref{ineq2'}) and the result follows from Theorem~\ref{gamma0}. Moreover, if $n\geq 4$, then $S_{\Sigma}$ is constant by Schur's lemma, which implies~\eqref{area1-1}. \end{proof} As an application of Theorem~\ref{areangeneral}, one can obtain new characterizations for hemispheres and geodesic balls in simply connected space forms. For this purpose, it suffices to assume that the boundary is connected and to apply Theorems~\ref{teogammaneq0} and \ref{gamma0}. \subsection{The dimension three case}\label{3-dimension} Here, we obtain some topological constraints and classifications for the boundary. Again, we observe that in the case of constant scalar curvature $S$, without loss of generality, we can assume that $S=6\kappa$, $\kappa\in\left\{-1,0,1\right\}.$ In what follows $\chi(\Sigma)$ stands for the Euler characteristic of $\Sigma.$ We begin by showing how the sign of $\alpha\mu$ can lead to topological constraints for the boundary of a three-dimensional gradient Einstein-type manifold. \begin{proposition}\label{topsphere} Let $(M^3,g)$ be a compact gradient Einstein-type manifold with boundary $\Sigma=\cup_{i}\Sigma_{i}$ and constant scalar curvature $S=6\kappa$, $\kappa\in\left\{-1,0,1\right\}$. \begin{itemize} \item[(i)] If $\gamma\neq 0$ and $\alpha\mu<0$ (for $\kappa=-1$, additionally suppose that $H_{i}>2$ in each $\Sigma_i$), then there exists a connected component $\Sigma_{i}$ diffeomorphic to a $2$-sphere. \item[(ii)] If $\gamma= 0$, $\alpha\mu<0$ and $\kappa=1$, then there exists a connected component $\Sigma_{i}$ diffeomorphic to a $2$-sphere. \item[(iii)] If $\gamma\neq 0$, $\alpha\mu>0$, $\kappa=-1$ and $H_{i}\leq 2$ in each $\Sigma_i$, then there exists a connected component $\Sigma_{i}$ diffeomorphic to a torus. \item[(iv)] If $\gamma= 0$, $\alpha\mu>0$ and $\kappa\in\{0,-1\}$, then there exists a connected component $\Sigma_{i}$ diffeomorphic to a torus.
\end{itemize} \end{proposition} \begin{proof} The condition $\alpha\mu<0$ together with Theorem~\ref{corovstatic} and the Gauss-Bonnet theorem immediately implies \begin{equation}\label{bghet} 4\pi\sum\limits_{i}\xi_i\chi(\Sigma_{i})\geq \sum\limits_{i}\xi_i\left(2\kappa+\frac{1}{2}H_{i}^{2}\right)|\Sigma_{i}|. \end{equation} Notice that, in all cases of $\gamma$ and $\kappa$ in (i) and (ii), we get $\chi(\Sigma_{i})>0$ for some $i$ and then $\Sigma_{i}$ is homeomorphic to a $2$-sphere. The proofs of (iii) and (iv) are analogous. \end{proof} The previous topological restrictions for the boundary become more rigid when it is connected. \begin{corollary}\label{coro1} Let $(M^3,g)$ be a compact gradient Einstein-type manifold of constant scalar curvature $S=6\kappa$, $\kappa\in\left\{-1,0,1\right\}$, with $\gamma\neq 0$, $\alpha\mu<0$, and connected boundary $\Sigma$. Then $\Sigma$ is diffeomorphic to the $2$-sphere (assume $H>2$ for $\kappa=-1$) and \[ |\Sigma|\leq 4\pi\big(\kappa+\frac{1}{4}H^2\big)^{-1} \] with equality holding if and only if $(M^3,g)$ is isometric to a geodesic ball in a simply connected space form. \end{corollary} \begin{proof} Since $\Sigma$ is connected, Proposition~\ref{topsphere} implies that $\Sigma$ is homeomorphic to a $2$-sphere, so $\chi(\Sigma)=2$. Substituting this into \eqref{bghet}, we obtain the required area estimate. From Theorem~\ref{corovstatic}, we conclude that equality holds if and only if $(M^3,g)$ is isometric to a geodesic ball in a simply connected space form. \end{proof} The next corollary gives a topological classification and an upper bound for the boundary area. The proof follows from analogous arguments as in Corollary~\ref{coro1}. \begin{corollary}\label{coro3} Let $(M^3,g)$ be a compact gradient Einstein-type manifold of constant scalar curvature $S=6\kappa$, $\kappa\in\left\{-1,0,1\right\},$ with $\gamma=0$, $\alpha\mu<0,$ and connected boundary $\Sigma.$ Then, \begin{equation*} 2\pi\chi(\Sigma)\geq \kappa|\Sigma|.
\end{equation*} In particular, if $\kappa=1$, then $\Sigma$ is diffeomorphic to a sphere and \begin{equation*} |\Sigma|\leq 4\pi \end{equation*} with equality holding if and only if $(M^3,g)$ is isometric to a hemisphere of a round sphere. \end{corollary} \begin{remark} It is worth mentioning that similar results can be obtained in the same way as in Corollaries~\ref{coro1} and \ref{coro3} by analysing the case $\alpha\mu>0$. \end{remark} \subsection{The dimension five case} Here, we prove an upper bound for the boundary area in terms of its Euler characteristic. In fact, using the Gauss-Bonnet-Chern formula, we have the following result. \begin{corollary}\label{teofivedimensional} Let $(M^5,g)$ be a compact gradient Einstein-type manifold with connected boundary $\Sigma$ and $\alpha\mu<0$. If $(\Sigma,g_\Sigma)$ is an Einstein manifold, and the scalar curvature $S$ of $(M^5,g)$ satisfies $\int_{M}\mathscr{L}_{\nabla u}S\geq 0$ and $\displaystyle\min_{\Sigma}S+\frac{5}{4}H^{2}>0$, then \[ 8\pi^2\chi(\Sigma)\geq \frac{1}{24}\left(\frac{3}{5}\min_{\Sigma}S+\frac{3}{4}H^{2}\right)^2|\Sigma|\] with equality holding if and only if $(M^5,g)$ is an Einstein manifold. \end{corollary} \begin{proof} Notice that we can use inequality~\eqref{inequality2}, from which we deduce \[ \left(\frac{3}{5}\min_{\Sigma}S+\frac{3}{4}H^{2}\right)|\Sigma|\leq\int_{\Sigma}S_{\Sigma}. \] Using the H\"older inequality, we obtain \begin{equation}\label{equacaodimensao5} \left(\frac{3}{5}\min_{\Sigma}S+\frac{3}{4}H^{2}\right)^2|\Sigma|\leq\int_{\Sigma}S^2_{\Sigma}. \end{equation} Now recall the Gauss-Bonnet-Chern formula: \[ 8\pi^2\chi(\Sigma)= \frac{1}{4}\int_{\Sigma}\|W\|^2 +\frac{1}{24}\int_{\Sigma}S^2_{\Sigma}-\frac{1}{2} \int_{\Sigma}\|\stackrel{\circ}{\rm Ric}_{\Sigma}\|^2, \] where $W$ is the Weyl tensor of $g_\Sigma$.
So, under the Einstein assumption and~\eqref{equacaodimensao5}, we get \[ 8\pi^2\chi(\Sigma)\geq \frac{1}{24}\int_{\Sigma}S^2_{\Sigma}\geq \frac{1}{24}\left(\frac{3}{5}\min_{\Sigma}S+\frac{3}{4}H^{2}\right)^2|\Sigma|. \] We conclude our proof from Theorem~\ref{areangeneral}, since equality must then hold in \eqref{inequality2}. \end{proof} \begin{remark} It is worth mentioning that similar results can be obtained in the same way as in Theorem~\ref{areangeneral} and Corollary~\ref{teofivedimensional} by analysing the case $\alpha\mu>0$. \end{remark} \section{Degenerate and nondegenerate conditions}\label{Sec-DNDC} This section consists of two parts: degenerate and nondegenerate conditions. We start by assuming the degenerate condition: $\beta^2=(n-2)\mu\alpha$ and $\beta\neq0.$ Of course, the parameters $\mu$ and $\alpha$ are nonzero, so we can consider the nonconstant smooth function $f=\frac{\beta}{\mu}\ln(u)$ in $int(M).$ Note that \begin{equation*} df=\frac{\beta}{\mu u}du \quad\mbox{and}\quad \nabla^2\ln u=\frac{1}{u}\nabla^2u-\frac{1}{u^2}du\otimes du, \end{equation*} thus by \eqref{1-1}, one has \begin{eqnarray*} \nabla^2f &=& \frac{\beta}{\mu u}\nabla^2u-\frac{\beta}{\mu u^2}du\otimes du\\ &=&\Lambda g -\frac{\alpha}{\beta}Ric -\frac{\mu}{\beta}df\otimes df + \frac{\beta}{\mu u}\gamma g. \end{eqnarray*} Hence, PDE~\eqref{1-1} in $int(M)$ is equivalent to \begin{equation}\label{1-1Int} \alpha Ric+\beta \nabla^2f+\mu df\otimes df = \Big(\Lambda\beta+\frac{\beta^2}{\mu}\gamma e^{-\frac{\mu}{\beta}f}\Big)g. \end{equation} In the case where the boundary is empty, we consider $\gamma=0$ and $\Lambda =\rho S +\lambda$, for some constant $\rho$ and some smooth function $\lambda$ on $M^n$, so that we recover Eq.~\eqref{1-2} by Catino et al.~\cite{CMMR}. Now we follow the approach in \cite{CMMR} in order to show an expected characterization for degenerate gradient Einstein-type manifolds.
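The pointwise identity for $\nabla^2\ln u$ used above can be sanity-checked in one variable, where the Hessian reduces to an ordinary second derivative; the following snippet is purely illustrative:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)

# d^2/dx^2 (ln u) should equal u''/u - (u'/u)^2, the one-variable
# analogue of  Hess(ln u) = Hess(u)/u - du (x) du / u^2.
lhs = sp.diff(sp.log(u), x, 2)
rhs = sp.diff(u, x, 2)/u - sp.diff(u, x)**2/u**2

assert sp.simplify(lhs - rhs) == 0
print("identity verified")
```

The same Leibniz computation, applied covariantly, gives the tensorial statement.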
Recall that a manifold $(M^n,g)$ is conformally Einstein if its metric $g$ can be pointwise conformally deformed to an Einstein metric $\tilde g.$ \begin{lemma}\label{Justify-DC} A gradient Einstein-type manifold $(M^n,g)$ is degenerate in $int(M)$ if and only if it is conformally Einstein in $int(M).$ \end{lemma} \begin{proof} If $\tilde{g}=e^{2a\varphi}g$, for some real constant $a$ and some smooth function $\varphi$ on $M^n,$ then the Ricci tensor $\tilde{Ric}$ of $\tilde{g}$ is related to $g$ by the well-known formula (see Besse~\cite{besse}) \begin{equation*} \tilde{Ric}=Ric-(n-2)a\nabla^2\varphi+(n-2)a^2d\varphi\otimes d\varphi - [(n-2)a^2|\nabla\varphi|^2+a\Delta\varphi]g. \end{equation*} By choosing $\varphi=f$ satisfying \eqref{1-1Int} in $int(M)$ and $a=-\frac{\beta}{(n-2)\alpha}$, one has \begin{equation*} \tilde{Ric}=Ric+\frac{\beta}{\alpha}\nabla^2 f+\frac{\beta^2}{(n-2)\alpha^2}df\otimes df - \Big[\frac{\beta^2}{(n-2)\alpha^2}|\nabla f|^2-\frac{\beta}{(n-2)\alpha}\Delta f\Big]g. \end{equation*} Under the degenerate condition $\beta^2=(n-2)\mu\alpha$, the previous equation becomes \begin{equation*} \tilde{Ric}=Ric+\frac{\beta}{\alpha}\nabla^2 f+\frac{\mu}{\alpha}df\otimes df - \Big[\frac{\mu}{\alpha}|\nabla f|^2-\frac{\beta}{(n-2)\alpha}\Delta f\Big]g. \end{equation*} From \eqref{1-1Int}, we have \begin{equation*} \tilde{Ric}=\frac{1}{\alpha}\Big(\Lambda\beta+\frac{\beta^2}{\mu}\gamma e^{-\frac{\mu}{\beta}f} - \mu|\nabla f|^2 + \frac{\beta}{n-2}\Delta f\Big)g. \end{equation*} On the other hand, tracing \eqref{1-1Int}, we get \begin{equation*} \Lambda\beta+\frac{\beta^2}{\mu}\gamma e^{-\frac{\mu}{\beta}f}=\frac{1}{n}\Big(\alpha S+\beta \Delta f+\mu |\nabla f|^2\Big). \end{equation*} So, by a straightforward computation \begin{equation*} \tilde{Ric}=\frac{1}{n}\Big(S + 2\frac{\beta}{\alpha}\frac{n-1}{n-2}\Delta f-\frac{\beta^2}{\alpha^2}\frac{n-1}{n-2}|\nabla f|^2\Big) e^{\frac{2\beta}{(n-2)\alpha}f}\tilde{g}. 
\end{equation*} Thus, $\tilde{g}=e^{\frac{-2\beta}{(n-2)\alpha}f}g$ is an Einstein metric in $int(M)$, see also \cite[Section~2]{CMMR} and \cite[Theorem~1.159]{besse}. Conversely, if $\tilde{g}=e^{\frac{-2\beta}{(n-2)\alpha}f}g$ is an Einstein metric, i.e., $\tilde{Ric}=C\tilde{g}$ for some constant $C$, with $f$ satisfying \eqref{1-1Int} in $int(M),$ then \begin{equation*} Ric+\frac{\beta}{\alpha}\nabla^2 f+\frac{\beta^2}{(n-2)\alpha^2}df\otimes df = \Big[\frac{\beta^2}{(n-2)\alpha^2}|\nabla f|^2-\frac{\beta}{(n-2)\alpha}\Delta f + Ce^{\frac{-2\beta}{(n-2)\alpha}f}\Big]g. \end{equation*} Tracing, we obtain \begin{equation*} S+\frac{\beta}{\alpha}\Delta f= -\frac{\beta^2}{(n-2)\alpha^2}|\nabla f|^2 +\Big[\frac{\beta^2}{(n-2)\alpha^2}|\nabla f|^2-\frac{\beta}{(n-2)\alpha}\Delta f + Ce^{\frac{-2\beta}{(n-2)\alpha}f}\Big]n. \end{equation*} On the other hand, again from \eqref{1-1Int} \begin{equation*} S+\frac{\beta}{\alpha}\Delta f=\frac{1}{\alpha}\Big[-\mu |\nabla f|^2 + \Big(\Lambda\beta+\frac{\beta^2}{\mu}\gamma e^{-\frac{\mu}{\beta}f}\Big)n\Big]. \end{equation*} Hence \begin{eqnarray*} \Lambda &=& \frac{1}{n\beta}\Big[\Big(\mu -\frac{\beta^2}{(n-2)\alpha}\Big)|\nabla f|^2\Big] +\frac{\beta}{(n-2)\alpha}|\nabla f|^2-\frac{1}{n-2}\Delta f + \frac{\alpha}{\beta}Ce^{\frac{-2\beta}{(n-2)\alpha}f}\\ &&- \frac{\beta}{\mu}\gamma e^{-\frac{\mu}{\beta}f}. \end{eqnarray*} Now, we choose the function \begin{equation*} \Lambda = \frac{\beta}{(n-2)\alpha}|\nabla f|^2-\frac{1}{n-2}\Delta f + \frac{\alpha}{\beta}Ce^{\frac{-2\beta}{(n-2)\alpha}f} - \frac{\beta}{\mu}\gamma e^{-\frac{\mu}{\beta}f}, \end{equation*} so that $\mu -\frac{\beta^2}{(n-2)\alpha}=0$, which implies $\beta^2= (n-2)\alpha\mu$. \end{proof} Now we assume the nondegenerate condition: $\beta^2\neq(n-2)\mu\alpha$, with all parameters nonzero. Here, we use the approach by the second author in~\cite{nazareno}.
First of all, we observe that \eqref{1-1} in $int(M)$ is equivalent to \begin{equation}\label{3.5-[7]} Ric + h\nabla^2u = \ell g \end{equation} where $h=\frac{\beta^2}{\alpha\mu u}$ and $\ell=\frac{\Lambda\beta}{\alpha}+\frac{\gamma\beta^2}{\alpha\mu}\frac{1}{u}.$ Thus, we can use Eq.~(3.10) in~\cite{nazareno}, namely, \begin{equation}\label{3.10-[7]} [\beta^2-(n-2)\alpha\mu]du\wedge d\ell=0. \end{equation} Eq.~\eqref{3.10-[7]} immediately shows that if $(M^n,g)$ is a gradient Einstein-type manifold and $g$ is nondegenerate in $int(M)$, then $\nabla\ell=\psi\nabla u$ for some smooth function $\psi$ in $int(M).$ We observe that for a gradient Einstein-type manifold to be an Einstein manifold it is necessary that the function $\Lambda$ be constant, see~\eqref{eqespecial}. Besides, when $\Lambda$ is constant, a necessary condition for $\ell$ to be nonconstant is that $\gamma\neq 0$. An appropriate setting to show a rigidity result for Einstein manifolds on the class of nondegenerate gradient Einstein-type manifolds $(M^n,g)$ is by means of equations~\eqref{3.5-[7]} and \eqref{3.10-[7]}, both in $int(M)$ with a nonconstant function $\ell$. This is the content of the main theorem of this section. With Lemma~\ref{Justify-DC} in mind, we observe that the nondegenerate assumption in this theorem is indeed needed, due to the existence of homogeneous conformally Einstein metrics which are not Einstein metrics, see Besse~\cite{besse}. \begin{theorem}\label{HCGEtM} Let $(M^n,g)$ be a homogeneous compact gradient Einstein-type manifold with $\alpha$ and $\gamma$ nonzero and $\Lambda$ being a constant. If $g$ is nondegenerate in $int(M)$ and $\beta^2\neq-\mu\alpha,$ then $(M^n,g)$ is an Einstein manifold. In particular, it is isometric to a geodesic ball in a simply connected space form when its boundary is connected.
\end{theorem} \begin{proof} First we observe that the theorem still holds under the weaker condition that the Ricci curvatures of $(M^n, g)$ are constant instead of the homogeneity assumption. For the proof, note that equations~$(3.11)$, $(3.12)$ and $(3.13)$ of~\cite{nazareno} still hold in $int(M)$. Since $\ell$ is nonconstant, we are in a position to apply the same argument as in \cite[Theorem~3]{nazareno} to conclude that $\|\mathring{Ric}\|^2$ vanishes in $int(M)$, and the result follows by continuity. For the sake of completeness we shall present a brief sketch of the last claim. Indeed, from the second contracted Bianchi identity and \eqref{3.5-[7]}, we obtain \begin{equation*} \frac{\beta^2+\mu\alpha}{\mu\alpha}Ric(\nabla u) = -(n-1)u\nabla\ell -[(n-1)\ell-S]\nabla u. \end{equation*} The nondegeneracy condition implies that $\nabla\ell=\psi\nabla u$ and then \begin{equation*} \frac{\beta^2+\mu\alpha}{\mu\alpha}Ric(\nabla u) = -[(n-1)u\psi+(n-1)\ell-S]\nabla u. \end{equation*} So, the assumption $\beta^2\neq-\mu\alpha$ implies that $\nabla u$ is an eigenvector of the Ricci tensor with constant eigenvalue, since we have assumed that the Ricci curvatures of $(M^n, g)$ are constant. Hence, \begin{equation*} \mathring{Ric}(\nabla u)=C\nabla u, \quad C= -\frac{\mu\alpha}{\beta^2+\mu\alpha}[(n-1)u\psi+(n-1)\ell-S]-\frac{S}{n}. \end{equation*} Combining this latter equation with \eqref{EqMain} and \eqref{3.5-[7]}, we get \begin{equation*} C(n\ell-S)=C\frac{\beta^2}{\alpha\mu u}\Delta u=-\|\mathring{Ric}\|^2. \end{equation*} Since $\|\mathring{Ric}\|^2$ is constant and $\ell$ is nonconstant, we conclude, by a simple analysis of this latter equation, that $\|\mathring{Ric}\|^2$ vanishes in $int(M),$ and then the result follows by continuity and Theorem~\ref{teogammaneq0}.
\end{proof} Finally, we observe that by taking $\alpha=0$ and $\Lambda$ to be constant in \eqref{1-1}, we know from Reilly's result~\cite[Theorem~B, Parts (I) and (II)]{reilly2} that: \begin{itemize} \item[(i)] If $\Lambda=0$, then $\gamma$ must be nonzero by the maximum principle, and $g$ is flat; \item[(ii)] If $\Lambda\neq0$ and $\gamma=0$, then $g$ is of constant sectional curvature $-\frac{\mu\Lambda}{\beta}.$ \end{itemize} \section*{Acknowledgements} Both authors have been partially supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), of the Ministry of Science, Technology and Innovation of Brazil, Grants 316080/2021-7 and 310458/2021-8, respectively. The first author has also been partially funded by the public call n.~03 Produtividade em Pesquisa proposal code PIA13495-2020 and Programa Primeiros Projetos proposal code - FAPESQ/PB and MCTIC/CNPq.
\section[Introduction]{Introduction} \label{sec:intro} Outflows in the form of winds are commonly associated with various astrophysical sources like AGNs, X-ray binaries, YSOs etc. In radio quiet AGNs, blue-shifted iron lines are frequently reported. This blue shift is believed to be generated from resonance absorption of Fe-xxv or Fe-xxvi by winds propagating away from the source. The speeds of these winds are found to be relativistic and may reach up to $0.4c$ \citep{CH02, CH03, Mrk06, DM05, C09, RJN09}. Similarly, this blue shift is observed in the case of X-ray binaries as well \citep{ML07, TB16, GP12}. These winds are observed in nearly half of these sources \citep{TF2010}, indicating that the feature is quite general. Further, their short variability timescales ($\sim100$ks) suggest that the winds might be outflowing from a region within 100 Schwarzschild radii ($r_{\rm \small S}$) of the central source. In X-ray binaries, the winds are observed in the soft state only, where the observed spectra are mostly dominated by thermal emission. In the soft state, the accretion discs are well described by the standard thin disc model, where the discs are optically thick but geometrically thin and emit thermally distributed radiation \citep{SS73}. Although these discs are theoretically highly stable against most perturbations, they are susceptible to magneto-rotational instabilities \citep{BH91, SUZ2009, Yuan12}, which, apart from providing an origin of shear viscosity, may also contribute to outflows. Independent of such instabilities, magnetic fields can remove energy and angular momentum from the accretion disc, such that a centrifugally driven outflow along the magnetic field is possible \citep{BP82}. Not only can magnetic fields drive winds; winds can also be generated by thermal and radiation pressure from the accretion discs \citep{BMS83}.
It may be noted that outflows within the sub-Eddington limit from optically thick discs were also studied by performing MHD simulations \citep{OH2009,OH2011}. \cite{LD2019} also studied MHD outflows from accretion discs in the general relativistic limit. As the winds are found to travel up to mildly relativistic speeds, they need driving agents. The radiation driving of outflows (jets or winds) has been studied by various authors through semi-analytic works \citep{F96, TF96,CC00a,CC00b,cc02, IC05,kcm14, VKMC15, VC18, VC19}, and radiation was shown to be an effective agent to accelerate jets and winds up to relativistic speeds. Similarly, simulations were also carried out to study the effects of radiation on outflows \citep{PR97, PSD98, Y18, PR00, NO2017, PR03}. \cite{PR03b} and \cite{NO2017} studied line driven winds. However, the line force may not be that effective when the temperature of the wind exceeds the ionization temperature significantly ($>10^5$ K). Hence, for winds driven by the radiation from the inner region of the accretion disc, especially in microquasars where the temperatures are higher, one may need other mechanisms. In that case, the radiation drives the winds directly by depositing momentum and/or energy. \citet{Y18} studied winds driven from a hot corona by the radiation force of the underlying Keplerian disc (hereafter KD). It may be noted that \citet{Y18} did not consider the role of radiation drag, although they maintained the optically thin condition throughout their simulation. Moreover, \citet{Y18} considered the outflow from a hot corona. In most of the previous attempts mentioned above, radiation drag was not part of the analysis. We would like to investigate whether the continuum emission of a thin accretion disc can radiatively drive matter to form a wind, in the presence of radiation drag. Apart from this, we investigate how much angular momentum of the accretion disc is transmitted to the winds above it.
We would also like to study the effect of radiation drag on the wind solution, and will discuss the role of angular momentum removal in the winds due to radiation. As this is an exploratory study, we intend to study how the accretion rate affects these aspects of wind generation. In section \ref{sec:assump}, we discuss the underlying assumptions; then we present the set of governing equations and the radiation field in section \ref{sec_equations}. Afterwards, we describe the simulation setup, the initial and boundary conditions for the simulations, the method of solving the equations, and the numerical technique in section \ref{sec_numeri_approach}. We then proceed to the results (section \ref{sec_results}) and conclude the paper (section \ref{sec_conclusions}) with the significance of the analysis. \section{Assumptions} \label{sec:assump} We perform hydrodynamic simulations in the cylindrical coordinate system $r,\phi,z$. Axisymmetry in the system is assumed. In our simulation, the source of the wind is a KD around a $10 M_\odot$ black hole, which occupies the equatorial plane, and winds are launched from the KD in the $r-z$ plane. The radiation field above the KD interacts with the out-flowing wind through Thomson scattering and drives the wind by depositing momentum onto the matter. We restrict ourselves to the non-relativistic regime and hence, while calculating the radiation field above the disc, relativistic transformations are ignored. However, to take care of strong gravity near the central source, we have assumed the Paczy\'nski \& Wiita potential \citep{PW}, which mimics the general relativistic effects. All ten independent components of the moments of the radiation field are calculated, hence the effect of radiation drag is also incorporated. Throughout the paper, distances are scaled and shown in Schwarzschild units.
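To fix the scales involved, the short sketch below (illustrative only, in CGS units; not part of the simulation code) evaluates the Schwarzschild radius $r_{\rm s}=2GM/c^2$ for the assumed $10M_\odot$ black hole, together with the Paczy\'nski \& Wiita potential $\Phi_{\rm PW}=-GM/(R-r_{\rm s})$:

```python
# Illustrative sketch: Schwarzschild radius and Paczynski-Wiita potential
# for the 10 solar-mass black hole assumed in the text (CGS units).
G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
c = 3.0e10          # speed of light [cm s^-1]
M_sun = 2.0e33      # solar mass [g], as adopted in the text
M = 10.0 * M_sun    # black hole mass

r_s = 2.0 * G * M / c**2   # Schwarzschild radius [cm], ~3e6 cm here

def phi_pw(R):
    """Paczynski-Wiita potential at spherical radius R (> r_s), in erg/g."""
    return -G * M / (R - r_s)

print(f"r_s = {r_s:.3e} cm")
# At R = 10 r_s the potential is exactly -c^2/18, independent of M:
print(f"phi_pw(10 r_s) = {phi_pw(10.0 * r_s):.3e} erg/g")
```

The $-c^2/18$ value at $R=10r_{\rm s}$ follows from substituting $r_{\rm s}=2GM/c^2$, and is a convenient check that the potential is implemented correctly.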
\section{Governing equations} \label{sec_equations} The equations of motion for a fluid in the radiation-hydrodynamic regime \citep[correct up to first order in $v_i$; see][for details]{mm84,kfm98} with density $\rho$, pressure $p$, propagating with velocity components $v_i\equiv (v_r, v_\phi, v_z)$, are given by \begin{equation} \frac{\partial{\rho}}{\partial t} + \frac{1}{r}\frac{\partial (r\rho v_r)}{\partial r} + \frac{\partial{(\rho v_z)}}{\partial z} = 0 \label{eq.cont} \end{equation} \begin{eqnarray} \label{eq.momentum_r} \frac{\partial{(\rho v_r)}}{\partial t}&+& \frac{1}{r}\frac{\partial (r\rho v_r^2)}{\partial r} + \frac{\partial p}{\partial r} + \frac{\partial{(\rho v_r v_z)}}{\partial z} \\ \nonumber &=& \rho v_{\phi}^2/r + \rho f_{{\rm g},r} + \frac{\rho k}{c} {\cal F}_r \end{eqnarray} \begin{eqnarray} \label{eq.momentum_phi} \frac{\partial{(\rho v_{\phi})}}{\partial t} &+& \frac{1}{r}\frac{\partial (r\rho v_r v_{\phi})}{\partial r} + \frac{\partial{(\rho v_z v_{\phi})}}{\partial z} \\ \nonumber &=& - \frac{\rho v_r v_{\phi}}{r} + \frac{\rho k}{c}{\cal F}_\phi \end{eqnarray} \begin{eqnarray} \label{eq.momentum_z} \frac{\partial{(\rho v_z)}}{\partial t} &+& \frac{1}{r}\frac{\partial (r\rho v_r v_z)}{\partial r} + \frac{\partial{(\rho v_z^2+p)}}{\partial z} \\ \nonumber &=& \rho f_{{\rm g},z} + \frac{\rho k}{c}{\cal F}_z \end{eqnarray} \begin{equation} \frac{\partial \cal E}{\partial t} + \frac{1}{r}\frac{\partial [r({\cal E}+p)v_r]}{\partial r} + \frac{\partial{[({\cal E}+p)v_z}]}{\partial z} = -\rho\left(\frac{k}{c}v_i{\cal F}_i+v_if_{{\rm g},i}\right) \label{eq.energy} \end{equation} In the above equations, ${\cal E}=\rho v^2/2+e$ is the energy density of the fluid and $e=p/(\Gamma -1)$ is the thermal energy density, where $\Gamma=5/3$ is the adiabatic index of the fluid. The local sound speed is defined as $c_s=(\Gamma p/\rho)^{1/2}$. The fluid is driven by the radiation field of the underlying thin accretion disc.
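As an aside, the conservative form of the continuity equation~(\ref{eq.cont}) can be illustrated with a toy first-order upwind update on a radial grid (this sketch is purely illustrative and is not the scheme used for the simulations in this paper; the test velocity field $v_r=A/r$ makes $r\rho v_r$ uniform for constant $\rho$, so the exact solution is stationary):

```python
import numpy as np

# Toy 1-D (radial) upwind update for d(rho)/dt + (1/r) d(r rho v_r)/dr = 0.
# Illustrative only. With v_r = A/r the combination r*rho*v_r is uniform
# for constant rho, so the density should not evolve.
nr = 100
r = np.linspace(1.0, 2.0, nr)   # cell centres (arbitrary units)
dr = r[1] - r[0]
A = 0.1
v = A / r                        # radial velocity, v_r = A/r > 0
rho = np.ones(nr)                # uniform initial density

dt = 0.4 * dr / v.max()          # CFL-limited time step
for _ in range(200):
    flux = r * rho * v           # F = r * rho * v_r at cell centres
    # upwind (v > 0): use the flux difference with the left neighbour;
    # the inner boundary cell (i = 0) is held fixed
    rho[1:] -= dt * (flux[1:] - flux[:-1]) / (r[1:] * dr)

print(float(rho.min()), float(rho.max()))  # stays ~1 to round-off
```

Because the discrete fluxes are all equal for this velocity field, the scheme preserves the stationary solution to machine precision, which is a convenient smoke test for any conservative update.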
The components of the radiation terms are: \begin{equation} {\cal F}_i= F_i - E v_i - v_r P_{ir} - v_{\phi} P_{i \phi} - v_z P_{iz}, \label{radcomp.eq} \end{equation} where $i\equiv (r,\phi,z)$. The various moments of the radiation field are $E$, $F_i$s and $P_{ij}$s (here, $i,j \rightarrow r,\phi, z$), which are the radiation energy density, the components of the radiative flux and the components of the radiation pressure tensor, respectively. The scattering opacity is $k=\sigma_{\rm \small T}/m_{\rm p}$, with $\sigma_{\rm \small T}$ being the Thomson scattering cross section and $m_{\rm p}$ the proton mass. Moreover, $f_{{\rm g},r}$ and $f_{{\rm g},z}$ are the $r$ and $z$ components of the gravitational force, given by \begin{equation} f_{{\rm g},z} = \frac{GM}{r_{\rm s}^2} \frac{z}{R(R/r_{\rm s} - 1)^2}, \end{equation} \begin{equation} f_{{\rm g},r} = \frac{GM}{r_{\rm s}^2} \frac{r}{R(R/r_{\rm s} - 1)^2}, \end{equation} where $G$ and $M$ are the universal constant of gravity and the mass of the black hole, respectively. The Schwarzschild radius of the black hole is defined as $r_{\rm s}=2GM/c^2$. All the lengths mentioned in the paper are in terms of $r_{\rm s}$ and we will refer to them in dimensionless form afterwards. Further, $R$ is the radial distance from the centre of the black hole, defined as \begin{equation} R = \sqrt{r^2 + z^2} \end{equation} On the R.H.S. of the momentum equations (\ref{eq.momentum_r}, \ref{eq.momentum_phi} and \ref{eq.momentum_z}), the components of the radiative flux accelerate the flow, while the other velocity-dependent terms, containing $E$ and $P_{ij}$, have negative sign and therefore decelerate it. These are called radiation-drag terms, and they show that radiation can also reduce the momentum of the flow. As the radiation drag depends upon the various components of the fluid velocity, it becomes effective as the flow speed increases.
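As a quick consistency check (illustrative only), the two force components above combine to the Paczy\'nski \& Wiita magnitude $GM/(R-r_{\rm s})^2$; in scaled units with $G=M=r_{\rm s}=1$:

```python
import math

# Components of the gravitational force as written above, in units with
# G = M = r_s = 1 (illustrative check, not simulation code).
def f_g(r, z):
    R = math.hypot(r, z)
    common = 1.0 / (R * (R - 1.0)**2)
    return r * common, z * common   # (f_gr, f_gz)

r, z = 3.0, 4.0                      # so R = 5
fr, fz = f_g(r, z)
mag = math.hypot(fr, fz)
print(mag, 1.0 / (5.0 - 1.0)**2)     # both equal 1/16
```

On the equatorial plane ($z=0$) the vertical component vanishes, as expected by symmetry.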
The radiative acceleration and deceleration depend upon the relative strengths of the various radiative moments and the components of the flow velocity; therefore, the effect of radiation on the outflow can behave in a very nonlinear manner. We will show in section \ref{sec_results} that radiative acceleration drives the winds to infinity. However, consideration of the radiative drag terms reduces the outflow speed, to the extent that it can even disrupt the ejected winds. Below we discuss the accretion disc and the radiative moments computed from its radiation field. \subsection{Accretion disc properties} An accretion disc around a black hole, on one hand, supplies matter to the black hole and, on the other hand, also supplies the matter that flows out as the outflow. In the present case, the outflow is driven by the disc radiation. Since the KD lies in the equatorial plane, the dynamical coordinates of the KD are represented by $R_{\rm d}\equiv (r_{\rm d},\phi, 0)$. From the mass conservation equation, we have the expression for the accretion rate \begin{equation} \label{eqn:masscon} \dot{M} = 2\pi r_{\rm d} \rho v_{r{\rm K}} (2H), \end{equation} where $H$ is the height of the disc from the equatorial plane and $v_{r{\rm K}}$ is the radial inflow speed due to accretion. The KD rotation velocity is \citep{PW,kfm98} \begin{equation} v_{\rm K} = \sqrt{\frac{GMr_{\rm d}}{(r_{\rm d}-r_{\rm s})^2}} \label{vfi.eq} \end{equation} In a KD, $v_{\rm K}\gg v_{r{\rm K}}$, and the radial velocity distribution is given by \citep{kfm98} \begin{equation} v_{r{\rm K}} = 3.1 \times 10^6 \alpha^{\frac{4}{5}} \dot{m}^{\frac{2}{5}} m^{-\frac{1}{5}}x^{-\frac{2}{5}} \left(1-\sqrt{\frac{3}{x}}\right)^{-\frac{3}{5}}, \label{radvel.eq} \end{equation} where $x=r_{\rm d}/r_{\rm s}$ and $\alpha$ is the viscosity parameter ($\dot{m}$ and $m$ are the dimensionless accretion rate and black hole mass defined below).
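A quick numerical illustration of $v_{\rm K}\gg v_{r{\rm K}}$ (in CGS units; the fiducial values $m=10$, $\dot m=0.5$, $\alpha=0.1$ at $x=10$ are our illustrative choices, not values fixed by the text):

```python
import math

G, c, M_sun = 6.674e-8, 3.0e10, 2.0e33    # CGS constants
m, mdot, alpha, x = 10.0, 0.5, 0.1, 10.0  # fiducial (illustrative) parameters

M = m * M_sun
r_s = 2.0 * G * M / c**2
r_d = x * r_s

# Keplerian rotation speed in the Paczynski-Wiita potential, eq. (vfi.eq)
v_K = math.sqrt(G * M * r_d) / (r_d - r_s)

# Radial drift speed of the Keplerian disc, eq. (radvel.eq) [cm/s]
v_rK = (3.1e6 * alpha**0.8 * mdot**0.4 * m**-0.2 * x**-0.4
        * (1.0 - math.sqrt(3.0 / x))**-0.6)

print(f"v_K  = {v_K:.2e} cm/s")   # ~0.25 c at x = 10
print(f"v_rK = {v_rK:.2e} cm/s")  # several orders of magnitude smaller
```

At $x=10$ one finds $v_{\rm K}=c\sqrt{5}/9\approx0.25c$, while the drift speed is of order $10^5$ cm s$^{-1}$, confirming the thin-disc ordering assumed in the text.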
Now, the distribution of the equatorial density along $r_{\rm d}$ is obtained as \citep{SS73} \begin{equation} \rho = 4.423 \times 10^4 m^{-1} \dot{m} x^{-1} v_{r{\rm K}}^{-1}. \label{dens.eq} \end{equation} The distribution of density along $z$ is $\tilde{\rho}=\rho e^{-{(z/r_{\rm s})}^2}$ \citep{SS73}. It may be noted that at high accretion rates the inner part of the disc may become radiation-pressure dominated, in which case the disc thickness is controlled by the vertical radiative pressure rather than by the gas pressure; the density profile would then change and instability might set in. However, since the radiation pressure drives winds from the inner region, we assume that the density profile of the disc does not depart significantly from equation (\ref{dens.eq}). For a KD, viscosity is required to transport angular momentum in such a manner that the matter occupies successive Keplerian orbits. Viscosity heats up the matter, and the dissipated heat is locally radiated as blackbody emission at each radius of the disc. Taking the surface temperature ($T_{\rm disc}$) as the temperature of each annulus, its radial distribution is given by \begin{equation} \sigma {T_{\rm disc}}^4 = \frac{3GM\dot{M}}{8\pi r_{\rm d}^3}\left(1-\sqrt{\frac{r_{\rm in}}{r_{\rm d}}}\right), \label{temp1.eq} \end{equation} where $\sigma$ is the Stefan--Boltzmann constant, $r_{\rm in} =3$ (in units of $r_{\rm s}$) is the inner radius of the disc, and the disc extends up to an outer boundary $r_{\rm o}=512$. Expressing the accretion rate in units of the Eddington accretion rate and the black hole mass in solar units, the previous equation becomes \begin{equation} T_{\rm disc} = 4.35 \times 10^7 \dot{m}^\frac{1}{4} m^{-\frac{1}{4}} x^{-\frac{3}{4}}\left(1-\sqrt{\frac{3}{x}}\right)^{1/4}, \label{temp2.eq} \end{equation} where ${\dot m}={\dot M}/{\dot M}_{\rm Edd}$ and $M=m~M_\odot$; moreover, the Eddington accretion rate is ${\dot M}_{\rm Edd}= 1.44\times 10^{17}m$ (gm s$^{-1}$) and $M_\odot=2\times10^{33}$ gm.
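The disc structure equations (\ref{dens.eq}) and (\ref{temp2.eq}) can be coded directly. The sketch below (our own helper names; $v_{r{\rm K}}$ from eq. \ref{radvel.eq} is repeated so the block is self-contained) returns the equatorial density in gm cm$^{-3}$ and the surface temperature in K. Note that $T_{\rm disc}$ vanishes at the inner edge $x=3$ and peaks at $x=49/12\approx 4.1$, which is why the radiative flux maximizes just outside the inner edge.

```python
import numpy as np

def v_radial(x, mdot, m, alpha=0.01):
    """Radial inflow speed (eq. radvel.eq), cm/s."""
    return (3.1e6 * alpha**0.8 * mdot**0.4 * m**-0.2
            * x**-0.4 * (1.0 - np.sqrt(3.0 / x))**-0.6)

def disc_density(x, mdot, m, alpha=0.01):
    """Equatorial density of the disc (eq. dens.eq), gm cm^-3."""
    return 4.423e4 / m * mdot / x / v_radial(x, mdot, m, alpha)

def disc_temperature(x, mdot, m):
    """Surface temperature of the annulus at x = r_d/r_s (eq. temp2.eq), K."""
    return (4.35e7 * mdot**0.25 * m**-0.25 * x**-0.75
            * (1.0 - np.sqrt(3.0 / x))**0.25)
```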
\subsection{Radiation field above a thin accretion disc} \begin{figure*} \caption{Contours of radiative moments computed at each point in the $r$--$z$ plane around the black hole, which resides at the origin with the disc on the equatorial plane. Radiation energy density (a); radiative flux terms $F_r$, $F_z$ and $F_\phi$, (b)--(d) respectively; and the components of the radiation pressure tensor $P_{rr}$, $P_{r\phi}$, $P_{rz}$, $P_{\phi\phi}$, $P_{\phi z}$, $P_{zz}$, (e) to (j), respectively. Only the inner $51.2r_{\rm s} \times 51.2r_{\rm s}$ region is shown.} \label{lab_rad_field} {\includegraphics[height=21cm,width=10cm]{fig1.eps}} \end{figure*} In the following we present the expressions for the various radiative moments. For convenience of representation, we define the radiative moments in the following forms, $$ \frac{kE}{c} = {E}_{0}{\varepsilon};~~~~\frac{kF_i}{c} = F_{0} f_i~~~{\rm and}~~~\frac{kP_{ij}}{c} = P_{0} p_{ij} $$ with $$ {E}_{0}=F_{0}=P_{0}= \frac{3GM\dot{M} \sigma_{\rm T}}{8 \pi^2 r_{\rm s}^3 m_{\rm p} c}. $$ The dimensionless radiation energy density ($\varepsilon$), the three components of the radiative flux ($f_i$), as well as the six components of the pressure tensor ($p_{ij}$) are given by \citep{IC05}: \begin{eqnarray} {\varepsilon} = {\int}^{r_{\rm o}}_{r_{\rm in}} {\int}^{2 \pi}_0\frac{z(r^{-2}_{\rm d}-{\sqrt {3}}r^{-5/2}_{\rm d})d{\phi}^{\prime} }{(r^2+z^2+r^2_{\rm d}-2rr_{\rm d}{\rm cos}{\phi}^{\prime})^{3/2}(1-v_il_i)^4 }dr_{\rm d}, \label{eq:radeng} \end{eqnarray} \begin{eqnarray} f_i = {\int}^{r_{\rm o}}_{r_{\rm in}} {\int}^{2 \pi}_0 \frac{z(r^{-2}_{\rm d}-{\sqrt {3}}r^{-5/2}_{\rm d}){\hskip 0.1cm} l_id{\phi}^{\prime}} {(r^2+z^2+r^2_{\rm d}-2rr_{\rm d}{\rm cos}{\phi}^{\prime})^{3/2}(1-v_il_i)^4 } dr_{\rm d}, \label{eq:radflux} \end{eqnarray} \begin{eqnarray} p_{ij} = {\int}^{r_{\rm o}}_{r_{\rm in}} {\int}^{2 \pi}_0 \frac{z(r^{-2}_{\rm d}-{\sqrt {3}}r^{-5/2}_{\rm d}){\hskip 0.1cm} l_i{\hskip 0.1cm}l_jd{\phi}^{\prime}} {(r^2+z^2+r^2_{\rm d}-2rr_{\rm d}{\rm
cos}{\phi}^{\prime})^{3/2}(1-v_il_i)^4 }dr_{\rm d}, \label{eq:radpres} \end{eqnarray} where the $l_i$ are the direction cosines from the disc to the field point. Since the accretion disc is not a static radiator but the disc matter is in motion, the radiation field is Doppler beamed by this disc motion. It can be shown that the frequency-integrated radiation intensity measured by the comoving observer ($I_0$) has the following transformation relation with that measured by an inertial observer ($I$) \citep{kfm98}: \begin{equation} \frac{I_0}{I}=\gamma^4(1-v_il_i)^4 \approx (1-v_il_i)^4. \label{eq:dopfac} \end{equation} The Lorentz factor $\gamma \approx 1$ for a KD. This factor appears in the expressions of the moments and affects the radiation field. In particular, the disc motion along the $\phi$ direction generates a non-zero $f_\phi$ and also various components of $p_{i \phi}$. The coordinates of the thin Keplerian disc are ($r_{\rm d},\phi^\prime$) and the integration limits of the accretion disc are $r_{\rm in}= 3$ and $r_{\rm o}= 512$. We plot the dimensionless radiation moments, {\em i.e. } $E$ (Figure \ref{lab_rad_field}a), the radiative fluxes $F_r$ (Figure \ref{lab_rad_field}b), $F_z$ (Figure \ref{lab_rad_field}c), $F_\phi$ (Figure \ref{lab_rad_field}d) and the six independent components of the radiative pressure, $P_{rr}$ (Figure \ref{lab_rad_field}e), $P_{r\phi}$ (Figure \ref{lab_rad_field}f), $P_{rz}$ (Figure \ref{lab_rad_field}g), $P_{\phi \phi}$ (Figure \ref{lab_rad_field}h), $P_{\phi z}$ (Figure \ref{lab_rad_field}i) and $P_{zz}$ (Figure \ref{lab_rad_field}j). The moments are plotted in the $r$--$z$ plane. Each panel zooms into the inner $51.2~\times ~51.2$ region in order to resolve the contours of the radiative moments. The radiative moments are distinctly anisotropic, especially close to the black hole. Since the inner edge of the KD is at $3$, and the KD flux maximizes at $r\sim 4$, the radiative moments maximize at around $4$--$5$.
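The double integrals (\ref{eq:radeng})--(\ref{eq:radpres}) can be evaluated by straightforward quadrature. The Python sketch below (our own illustration, not the integrator used for Figure \ref{lab_rad_field}) computes $\varepsilon$, $f_r$ and $f_z$ for a static observer, i.e. $v_i=0$ so the Doppler factor $(1-v_il_i)^4\rightarrow 1$, using a midpoint rule in both $r_{\rm d}$ and $\phi^\prime$:

```python
import numpy as np

R_IN, R_OUT = 3.0, 512.0   # disc inner/outer edges in units of r_s

def moments(r, z, n_rd=400, n_phi=128):
    """Midpoint-rule evaluation of the dimensionless moments
    (eps, f_r, f_z) of eqs. radeng/radflux for v_i = 0."""
    rd = np.linspace(R_IN, R_OUT, n_rd + 1)
    rd = 0.5 * (rd[:-1] + rd[1:])                  # radial midpoints
    drd = (R_OUT - R_IN) / n_rd
    phi = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi
    dphi = 2.0 * np.pi / n_phi
    RD, PHI = np.meshgrid(rd, phi, indexing="ij")
    D2 = r**2 + z**2 + RD**2 - 2.0 * r * RD * np.cos(PHI)
    src = z * (RD**-2 - np.sqrt(3.0) * RD**-2.5) / D2**1.5
    D = np.sqrt(D2)
    l_r = (r - RD * np.cos(PHI)) / D               # direction cosines
    l_z = z / D
    w = drd * dphi
    return np.sum(src) * w, np.sum(src * l_r) * w, np.sum(src * l_z) * w
```

On the axis $f_r$ vanishes by symmetry, $\varepsilon$ exceeds $f_z$ since $|l_z|<1$, and far from the disc the moments fall off as $R^{-2}$, as expected for an effectively point-like source.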
$E$, the radiative energy density, is by far the most dominant of all the moments. Close to the axis $F_r\approx F_\phi \approx 0$, while $F_z$ is very important. In general, $|F_z| \lower.5ex\hbox{$\; \buildrel > \over \sim \;$} |F_r|$, and both dominate over $F_\phi$. In addition, $P_{\phi \phi}$ is quite strong; hence the azimuthal velocity gained by the wind due to $F_\phi$ will also be reduced by the radiation drag along the $\phi$ direction. Moreover, none of the components of the radiative pressure exceeds all of the radiative flux components, which limits the effect of the radiative drag. This augurs well for the wind: it can be driven away from the KD, but will not spread over a very large angle despite the angular momentum gained from the radiation field. We have plotted the radiative moments in a region very close to the horizon ($\leq 51.2$), where the radiation field comes from an extended source (the KD), and therefore the moments show a complicated space dependence. At large distances, the radiation field falls off as $\sim R^{-2}$, although not within the computational domain we have chosen. \section{Numerical approach} \label{sec_numeri_approach} \subsection{The numerical scheme and simulation set up} The hydrodynamic equations (\ref{eq.cont}-\ref{eq.energy}) are solved in this paper using the Total Variation Diminishing (TVD) scheme, introduced and developed by \cite{AH83}. The scheme (or its modified versions) is applicable to hydrodynamic problems and has been used extensively in relevant astrophysical applications \citep{DR93,rbol95,rjf95,lrc11,IC12,lckhr16}. The TVD scheme is an Eulerian, second-order accurate, nonlinear, finite difference scheme, which accurately captures shocks.
The temporal and spatial evolution of the conserved quantities $\rho$, $\rho v_i$, and ${\cal E}$ is computed using an approximate Roe-type Riemann solver to solve the differential equations, followed by the application of a non-oscillatory, first-order accurate scheme to the modified flux functions to achieve second-order accuracy \citep[see,][]{ROE81, DR93, AH83}. The equations of motion (\ref{eq.cont}-\ref{eq.energy}) are similar to those solved in \citet{IC12}. In \citet{IC12} the galactic outflow was powered by the radiation from the galactic disc, while being decelerated by the gravity of the galactic disc, the halo and the bulge matter. In contrast, in this paper, the accretion disc outflow is powered by the radiative fluxes and the centrifugal force from the KD, and is decelerated by the radiative drag terms as well as the gravity of the central black hole. To solve the equations of motion (\ref{eq.cont}-\ref{eq.energy}), we considered the TVD scheme \citep[see,][for details]{IC12} at a resolution of $512 \times 512$. A schematic representation of the computational arrangement is presented in Figure \ref{lab_grid}, which marks the ghost cells, where the boundary conditions are implemented, as well as the computational domain. We employed a continuous boundary condition at the $z=0$ boundary, and outflow boundary conditions at the outer $r$ and $z$ boundaries (i.e., no inflow, but continuous if ${\bf v}>0$). At $r=0$, the axis of symmetry, a reflection boundary condition has been employed. The types of boundary conditions employed are also indicated in Figure (\ref{lab_grid}). We simulate a region of $512$ from the black hole in each of the $r$ and $z$ directions; therefore the dimension of each cell is equivalent to $1$. The gravity of the black hole is described by the Paczy\'nski \& Wiita potential \citep{PW}.
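The boundary conditions just described can be sketched for a cell-centred grid with two ghost cells per boundary, as in Figure \ref{lab_grid}. The fragment below is only illustrative of the logic (function name, array layout and the `r_sign` switch are our own; a real solver applies this to every conserved variable each step):

```python
import numpy as np

NG = 2  # two ghost cells on each boundary

def apply_bc(q, r_sign=1.0):
    """Fill ghost zones of a 2D cell-centred array q[ir, iz].
    Axis (r = 0): reflection, with r_sign = -1.0 for v_r so it is
    mirrored with opposite sign.  Outer r and z boundaries:
    zero-gradient outflow.  z = 0: continuous (zero-gradient); the
    disc itself is re-imposed separately at every time step."""
    q[:NG, :] = r_sign * q[2 * NG - 1:NG - 1:-1, :]  # reflect about axis
    q[-NG:, :] = q[-NG - 1:-NG, :]                   # outer r: copy last cell
    q[:, :NG] = q[:, NG:NG + 1]                      # z = 0: continuous
    q[:, -NG:] = q[:, -NG - 1:-NG]                   # outer z: copy last cell
    return q
```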
In order to avoid the coordinate singularity at the horizon, the black hole is covered by a sink region of radius $3$ around the origin, which does not affect the physics since the inner edge of the KD is at $3$. The KD lies on the equatorial plane, ranging from $3$ to $512$; the density, pressure and velocity distributions given by equations (\ref{eqn:masscon}-\ref{temp2.eq}) are maintained in the region described by $r\rightarrow 3$---$512$ and $z\rightarrow 0$---$3$. We supply the dynamical variables of the KD at every time step within a height of $3$ above the equatorial plane. Therefore, the KD acts as a boundary condition and is not dynamically sustained, so one may call it a quasi-KD. The moments of the radiation field are computed in the region outside the KD. We compute the outflowing winds generated by the action of the radiation field of the KD. The speed of light in vacuum, $c$, is the unit of velocity in the code, and the unit of length is $r_{\rm o}=512$. Since the KD flux used in the code is for $m=10$, the unit of time is $5.12\times 10^{-2}$ s. The reference density is $\rho_{\rm ref}=10^{-5}$ gm cm$^{-3}$. The ambient medium filling the computational domain is kept initially ($t=0$) tenuous enough with respect to the accretion disc, with constant parameters. The ambient density is as low as $10^{-8}$ and the pressure $10^{-9}$ in code units, so the outflow from the disc is not suppressed artificially by the initial distribution right above the disc. \begin{figure} \centering \includegraphics[width=80mm]{fig2.eps} \caption{Uniform grids in the computational domain, with two ghost cells at each boundary. Respective boundary conditions are indicated. Grids drawn are not to scale.} \label{lab_grid} \end{figure} \begin{figure*} \begin{minipage}[c]{\textwidth} \begin{center} \includegraphics[width=180mm]{fig3.eps} \caption{Contours of density $\log_{10}(\rho)$ overplotted with respective net velocity vector arrows.
These profiles are for $\dot{m} = 3$. Panels correspond to snapshots at run times $t=2, 6, 62, 72, 82$ and $92$, (a) to (f).} \label{lab_den_md3} \end{center} \end{minipage} \end{figure*} \section{Results} \label{sec_results} \subsection{Wind propagation above the disc: density and velocity evolution} In Figure \ref{lab_den_md3}(a)-(f), we overplot velocity vectors ($v_{\rm p} \equiv \sqrt{v_r^2+v_z^2}$) on the density contours of radiatively driven winds from a KD, for ${\dot m}=3$ and at different times $t=2$ (a), $t=6$ (b), $t=62$ (c), $t=72$ (d), $t=82$ (e), and $t=92$ (f). Arrows represent velocity vectors in the $r$--$z$ plane, where the magnitude of the velocity ($v_{\rm p}$) is proportional to the length of the arrows. All densities in this paper are scaled by $\rho_{\rm ref}$. The KD is hotter and denser near the inner edge, and the radiative flux maximizes at around $4$. Therefore, both the thermal gradient force and the radiative force drive matter in the form of a wind from the inner parts of the disc. Very little matter is ejected from the region $r_{\rm d} > 100$, even if the simulation is run for a longer time. As the wind emerges from the inner regions of the disc, its general direction of motion is away from the axis of symmetry [Figure \ref{lab_den_md3}(b)]. However, at a later time, a part of the wind moves towards the axis of symmetry [Figure \ref{lab_den_md3}(d)], before moving away from the axis again. The entire wind-fan oscillates as a whole, somewhat like a flame dancing in a breeze. Not all the ejected matter flows out; a tiny fraction of it falls back and hits the wind base, which causes a perturbation propagating along the wind. Moreover, $F_r$ near the wind base is directed towards the axis, but higher up it is directed away from the axis. The inner radius of the KD is $r=3$, so there is no source of radiation for $r < 3$.
Hence, close to the axis of symmetry and just above the disc, the $r$ component of the radiative flux points inward, {\em i.e. } $F_r<0$. The centrifugal force is always directed away from the axis. $F_\phi$, which is weaker than the fluxes in the other two directions, will spin up the wind, but the stronger pressure components boost the drag in the $\phi$ direction. Additionally, the radiative force along $z$ powers the wind upwards, and finally gravity attracts every part of the wind towards the black hole. All these factors together interact with the ejected matter and generate a wind which originates from the inner region of the KD but fans out in the $r$--$z$ plane. This effect is quite clearly seen in various panels (Figure \ref{lab_den_md3}c-f). It may also be noted that not all the matter coming out of the KD becomes a wind; some of it sits above the KD. \subsection{Angular momentum transport} \label{sec_results_angular_momentum} \begin{figure} \centering \includegraphics[width=80mm]{fig4.eps} \caption{Radial variation of $v_\phi$ at a height $z=6$ above the equatorial plane, for the three run times $30$, $50$ and $80$ shown in the legend, for $\dot{m} = 3$. The disc $v_\phi$ is plotted for comparison.} \label{lab_v_phi} \end{figure} \begin{figure*} \begin{minipage}[c]{\textwidth} \begin{center} \includegraphics[width=150mm]{fig5.eps} \caption{Contours of $v_\phi$ at time $t = 92$, for winds generated from accretion discs with (a) $\dot{m}=3$ and (b) $\dot{m}=4$.} \label{lab_angmom} \end{center} \end{minipage} \end{figure*} Due to the high rotational speed of the inner region of the disc, winds produced from this region propagate with a fraction of the rotational speed of the disc. Hence matter ejected from the KD carries a part of the disc angular momentum along with it. This can be seen as one of the ways in which the disc removes its angular momentum.
In Figure (\ref{lab_v_phi}), we plot $v_{\phi}$ as a function of $r$ at a height of $z=6$ above the equatorial plane, measured at three different run times, $t=30$ (solid), $50$ (dashed) and $80$ (dashed-dotted). The disc $v_\phi$ (dotted) is also included for comparison. It can be seen that near the disc inner edge, almost $64\%$ of the azimuthal component of velocity is effectively removed by the wind, which therefore reduces the angular momentum too. All the terms with negative sign in the last term of equation (\ref{eq.momentum_phi}) resist rotation, thereby removing angular momentum (and $v_\phi$) from the wind. The rotational speed of the wind can be as high as $0.3$ near the axis of symmetry, but it is much less than the disc rotational velocity. Since the radiative moments are weak above the outer part of the disc, the rotation velocity just above the disc there is similar to that on the disc at the same $r$. Since all the curves (solid, dashed, dash-dotted) almost overlap each other in Figure (\ref{lab_v_phi}), we conclude that the $v_\phi$ distribution close to the disc is almost steady. Further, we plot the contours of $v_\phi$ of winds generated from accretion discs with accretion rates $\dot{m}=3$ and $4$ (Figures \ref{lab_angmom}a \& \ref{lab_angmom}b, respectively). As the winds are stronger for higher accretion rates, matter with higher $v_\phi$ is injected into the outflow driven by radiation from a disc with higher $\dot{m}$. Winds from an accretion disc with $\dot{m}=4$ possess higher azimuthal velocities over a larger region above the accretion disc, compared to the winds from a disc of lower ${\dot m}$. It may be noted that a fraction of the outflowing matter near the axis of symmetry falls back, and at some height above the disc interacts with the outflowing wind, making it bend away.
For KDs with higher $\dot{m}$, which eject faster and more rapidly rotating matter, this relatively high-$v_\phi$ material is trapped in the region where the inner boundary of the wind bends away. As the matter moves further away, $v_\phi$ is reduced by radiation drag. \begin{figure*} \begin{minipage}[c]{\textwidth} \begin{center} \includegraphics[width=150mm]{fig6.eps} \caption{Density distribution for $\dot{m}$ $\approx$ 1.3 (a \& b), $\dot{m}$ $\approx$ 2.5 (c \& d) and $\dot{m}$ $\approx$ 3 (e \& f); snapshots are at run time $t = 72$. Frames in the left column are generated considering the radiation drag, and frames in the right column are without the drag effect. It is evident that for a low mass accretion rate (here, $\dot{m}\approx 1.3$), the winds cannot be driven away in the presence of the radiation drag.} \label{lab_rad_drag1} \end{center} \end{minipage} \end{figure*} \subsection{Effect of radiation drag} In section (\ref{sec_equations}), the expression for the radiation term (equation \ref{radcomp.eq}) contains both positive and negative terms. The flux terms ($F_i$) are positive and therefore accelerate the flow along their direction. However, the terms containing the radiation energy density and the pressure components appear with a negative sign and are also proportional to the various velocity components. The negative terms cause deceleration and reduce the relevant components of the momentum density. These negative terms are called radiative drag terms. For example, the radial component of the momentum density equation (\ref{eq.momentum_r}) will be increased by $F_r$, but will be reduced if any or all of the terms containing $E$, $P_{rr}$, $P_{r\phi}$, $P_{rz}$ are dominant.
It may be noted that the radiative drag terms are highly non-linear; for example, $P_{\phi r}$ couples with $v_r$ and hinders the growth of the azimuthal momentum density ($\rho v_\phi$), but it also couples with $v_\phi$ and opposes the growth of $\rho v_r$ (refer to equation (\ref{radcomp.eq}) for the radiative terms and the equations of motion \ref{eq.momentum_r}-\ref{eq.momentum_phi}). To show the impact of radiation drag on the dynamics of the winds, we compare solutions with and without the drag terms. We plot the density contours and velocity field of wind solutions with drag terms in the left panels (a, c, e) of Figure (\ref{lab_rad_drag1}), while solutions without drag terms are plotted in the right panels (b, d, f) of the same figure. The accretion rates for each pair of comparable panels are ${\dot m}=1.3$ (Figures \ref{lab_rad_drag1}a \& b), ${\dot m}=2.5$ (Figures \ref{lab_rad_drag1}c \& d) and ${\dot m}=3$ (Figures \ref{lab_rad_drag1}e \& f). All the plots are obtained at run time $t=72$. For lower ${\dot m}$ the wind is launched, but as it accelerates to higher velocities the drag terms suppress it. In the absence of radiation drag, however, the wind freely propagates outwards. For a slightly higher $\dot{m}\,(=2.5)$, a weaker wind is generated in the presence of drag terms (Figure \ref{lab_rad_drag1}c); without drag terms the wind is relatively stronger (Figure \ref{lab_rad_drag1}d). A similar effect can be seen for $\dot{m}=3$, in which the wind in the presence of drag terms (Figure \ref{lab_rad_drag1}e) is weaker than the one in the absence of drag terms ({\em i.e. } Figure \ref{lab_rad_drag1}f). It may be noted that, in the absence of the radiation drag terms, a lower-luminosity disc will produce stronger winds than a more luminous disc in which drag terms are considered (compare Figures \ref{lab_rad_drag1}d \& \ref{lab_rad_drag1}e). Here Figure \ref{lab_rad_drag1}(e) is identical to Figure \ref{lab_den_md3}(d).
Radiative driving ($F_r,~F_z$) is weaker above low-luminosity accretion discs. As the wind is launched from such a disc, it starts with a low poloidal velocity ($v_{\rm p}=\sqrt{v_r^2+v_z^2}$) but a high $v_\phi$. This means radiation drag is not important along the $r,~z$ directions, but $v_\phi$, being high, boosts the drag terms in all three directions. Therefore, matter ejected from the disc will not flow out as a wind but will be smothered by the drag terms. Above a luminous disc, however, winds are strongly driven in the $r,~z$ directions and can overcome the radiation drag caused by the high $v_\phi$. At larger distances, where the poloidal velocity increases, the drag becomes more effective and limits the terminal speed of the outflow. This figure illustrates the effect of the radiative drag. \subsection{Terminal speeds of the winds} \begin{figure} \centering \includegraphics[width=8.5cm]{fig7.eps} \caption{Terminal speeds as a function of $\dot{m}$ for winds with radiation drag terms (solid curve) and without radiation drag terms (dashed) at run time $t=100$.} \label{lab_rad_speeds} \end{figure} Once the winds leave the computational domain and escape to infinity, the maximum speed they acquire while escaping is what we call the terminal speed ($v_{\rm T}$). These are the speeds that correspond to the observed blue shifts in the spectra from these sources. In this paper, we take the maximum speed at the outer boundary as $v_{\rm T}$, i.e., $v_{\rm T}=v_{\rm p}(r,512)$. In Figure (\ref{lab_rad_speeds}), we plot the terminal speeds as a function of $\dot{m}$ at run time $t={100}$. The solid curve corresponds to $v_{\rm T}$ when all the radiative terms are effective (including the drag terms). To show the effect of radiation drag, we overplot the corresponding terminal speeds obtained without the radiation drag terms (dashed). The winds are faster for higher accretion rates and reach mildly relativistic values.
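The drag-limited terminal speed can be illustrated with a deliberately simplified one-dimensional toy model (our own construction, not the simulation): keeping only the vertical radiation term of equation (\ref{radcomp.eq}) and dropping gravity and pressure gradients, $dv/dt = F_z - (E+P_{zz})\,v$, so the acceleration vanishes at the equilibrium speed $v_{\rm eq}=F_z/(E+P_{zz})$, whereas without drag the speed grows without bound.

```python
def integrate_wind(f_z, e_plus_p, n_steps=20000, dt=1e-3, drag=True):
    """Toy 1-D velocity evolution: dv/dt = f_z - (E + P_zz) v with
    drag on, versus dv/dt = f_z with drag off.  Forward-Euler steps;
    gravity and pressure gradients are omitted (illustrative only)."""
    v = 0.0
    for _ in range(n_steps):
        a = f_z - (e_plus_p * v if drag else 0.0)
        v += a * dt
    return v
```

With drag, the speed saturates at $v_{\rm eq}$; without it, the same driving yields a far larger speed, mirroring the comparison shown in Figure \ref{lab_rad_speeds}.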
In the absence of radiation drag, the terminal speeds are overestimated by about an order of magnitude. \subsection{Mass outflow properties} \label{sec_outflow} \begin{figure} \centering \includegraphics[width=90mm]{fig8a.eps} \includegraphics[width=90mm]{fig8b.eps} \includegraphics[width=90mm]{fig8c.eps} \caption{Radial variation of the mass outflow rate $d\dot{M}_{\rm out}(r)$, (a) just above the accretion disc and (b) at the outer $z$ boundary. (c) Variation of the mass outflow rate ${\dot M}_{\rm out}$ at the outer $z$ boundary with ${\dot m}$, with and without the effect of radiation drag on the outflows. All the outflow rates are measured in Eddington units, at run time $t = 100$.} \label{lab_mdout} \end{figure} \begin{figure} \centering \includegraphics[width=90mm]{fig9.eps} \caption{The percentage of the total outflow which is transonic, ${\dot R}_{\rm out}$, as a function of time, for two accretion rates, $\dot m=3.0$ (dotted, blue) and ${\dot m}=3.5$ (solid, red).} \label{lab_transwind} \end{figure} As the matter is ejected from the upper disc surface, the mass flux due to the emission can be represented in differential form as \begin{equation} d\dot{M}_{\rm out} (r) = 2\pi r \rho v_z dr. \end{equation} The total value of $\dot{M}_{\rm out}$, integrated along the radial direction, can be written as \begin{equation} \dot{M}_{\rm out} = \int_{r_{i}}^{r_{o}} 2\pi r \rho v_z dr. \end{equation} Here the spatial resolution is $dr=1$. We calculate the radial variation of the outflow at a certain height $z$ above the disc, as well as the net integrated outflow along $r$ at that specific $z$, for a particular time step. In Figure \ref{lab_mdout}(a), we plot $d{\dot M}_{\rm out}$ (in Eddington units) as a function of $r$, calculated just above the disc for $\dot{m} = 2.5, 3$ \& $4$ at run time $t=100$, signifying the outflow at the launching point of the wind. Further, in Figure \ref{lab_mdout}(b), we estimate the radial variation of the outflow rate at the outer $z$ boundary of the computational domain.
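The discrete form of the outflow integral above is a simple weighted sum over the cells of a constant-$z$ surface. A minimal Python sketch (helper name is ours; $dr=1$ in code units as in the text):

```python
import numpy as np

def mass_outflow_rate(r, rho, v_z, dr=1.0):
    """Discrete form of Mdot_out = int 2 pi r rho v_z dr across a
    constant-z surface, with cell size dr = 1 in code units.
    r, rho, v_z are 1-D arrays of cell-centre values along r."""
    return np.sum(2.0 * np.pi * r * rho * v_z) * dr
```

As a sanity check, for uniform $\rho v_z = 1$ over a surface of radius $N$ the sum reduces to the area $\pi N^2$.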
The outflow at the ejection base mostly comes from the inner regions of the accretion disc, and with time it gradually covers the entire numerical domain diagonally, leaving through the outer boundaries. Further, the outflow rates at the outer boundary are significantly less than the outflow at launching, indicating that not all the matter ejected from the disc leaves the domain; only a fraction of it escapes. In Fig. \ref{lab_mdout}(c), we show the integrated outflow rates ${\dot M}_{\rm out}$ (black dashed) from the disc, calculated at the outer $z$ boundary ($z=512$). These values are obtained by integrating the outflow rates along $r$ at run time $t=100$. For comparison, we show the outflow rates without drag (blue solid) and observe that, as expected, the radiation drag has a significant effect in suppressing matter ejection in the form of winds. Furthermore, the very small magnitudes of the outflow rates compared to the accretion rates justify our assumption that the disc can remain mostly in steady state and is not affected by the matter ejection. In other words, the accretion rates remain time-independent. The computational domain in this paper is just $512\times 512$, so it is intriguing to wonder what fraction of the computed outflow will actually escape the gravity of the black hole. Since the wind is also a fluid, if the wind is transonic then it will definitely escape the black hole's gravity. In Fig. \ref{lab_transwind} we plot the percentage of the calculated mass outflow rate at $z=512$ which is transonic, or $v_{\rm T}(r,512)/c_s(r,512) > 1$, measured as $${\dot R}_{\rm out}=\frac{{\dot M}_{\rm out}(\mbox{trans})}{{\dot M}_{\rm out}(\mbox{total})},$$ where both ${\dot M}_{\rm out}(\mbox{trans})$ and ${\dot M}_{\rm out}(\mbox{total})$ are measured at $z=512$, the upper boundary of the computational box.
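The transonic fraction ${\dot R}_{\rm out}$ defined above amounts to masking the outflow sum by the local Mach number. A sketch of this bookkeeping (our own helper; here a single velocity array stands in for the wind speed entering both the flux and the Mach criterion):

```python
import numpy as np

def transonic_fraction(r, rho, v, c_s, dr=1.0):
    """Fraction R_out of the mass outflow rate through a constant-z
    surface carried by transonic cells (v / c_s > 1), following the
    definition in the text."""
    flux = 2.0 * np.pi * r * rho * v * dr   # per-cell outflow rate
    return np.sum(flux[v / c_s > 1.0]) / np.sum(flux)
```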
Figure \ref{lab_transwind} shows that of all the matter ejected from the computational domain, only a fraction is transonic and can therefore actually leave the gravitational attraction of the central black hole. For $\dot{m}=3$ (dotted, blue) the transonic mass outflow rate is about 10\% of the total mass leaving the computational domain as winds. For $\dot{m}=3.5$ (solid, red) it varies between 30-70\% of the total outflow. We compute the wind flowing out of the computational domain only through the upper $z$ boundary, because the mass outflow rate through the outer $r$ boundary is about an order of magnitude less than that through the upper $z$ boundary. Moreover, the mass flowing out through the outer $r$ boundary is subsonic and would not contribute significantly to the net outflow rate. The wind outflow rate is also variable. \section{Conclusions} \label{sec_conclusions} In this paper, we have studied the generation mechanism and properties of the winds around black hole accretion discs. The winds are generated by a Shakura--Sunyaev Keplerian accretion disc, which is steady in nature and acts as the source of the wind, and by the radiation field which drives it. The radiation field is controlled by $\dot{m}$. We computed all the components of the radiative moments numerically. The radiation field generated by the steady KD is also steady. It may be noted that our assumption of an optically thin medium above the KD is justified, since the cumulative optical depth along the $z$-direction is much less than 1 (see appendix A). Although we keep our analysis in the non-relativistic regime, we have used a pseudo-Newtonian gravitational potential to take care of the strong gravity near the black hole. We show that the radiation pressure inside the disc, along with the thermal pressure, is able to push the matter out of the disc. The winds are mostly generated from the inner region of the accretion disc ($r<30$).
The matter emitted not only carries mass with it but also removes angular momentum from the disc. Rapidly rotating winds are driven by a combined effect of the thermal pressure, the radiation field and the centrifugal force, and typically for $\dot{m}>1.8$ the winds escape to infinity, while for smaller accretion rates we showed that the winds fall back to the disc as they do not have sufficient radiative drive to escape. One curious fact these simulations revealed is that a part of the ejected matter does not become a wind but may accumulate above the disc. We showed that the radiation drag limits the wind speed. In fact, below a certain luminosity, the wind is destroyed by the drag terms. Only for a luminous disc can radiation generate a wind against gravity and its own drag. So radiation drag is a significant factor in determining the dynamical properties of the winds. The $\phi$ component of the radiation drag is also capable of reducing the angular momentum of the wind. The work of \citet{Y18} is similar to ours, except that they considered outflows from a corona and did not consider radiation drag. These authors considered a more luminous disc (up to 0.75 Eddington luminosity), while we considered discs only up to 0.66 Eddington luminosity ($\equiv 4 {\dot M}_{\rm Edd}$). However, the maximum terminal speeds are somewhat similar for luminous discs, although we predict a lower cutoff of the disc luminosity required to drive a wind from a KD. We analyzed the terminal properties of the winds and found that the terminal velocities of the disc winds are sub-relativistic, and that a higher accretion rate leads to higher wind speeds. The wind speeds are found to be mildly relativistic, which is consistent with observations.
We show that if radiation drag is ignored, the terminal speeds are overestimated significantly. A detailed study of the mass outflow rate shows that the mass loss from the disc is indeed a very small fraction of the disc mass, and hence we may conclude that the radiative properties of the KD will not be significantly affected by the radiatively driven winds. The inclusion of radiation drag significantly suppresses the mass outflow rates at the outer boundary of our computational domain, so one needs to take care of radiation drag effects while analysing radiative driving of winds. This is a non-relativistic study of the disc wind dynamics under the impact of a radiation field in the Thomson scattering regime. In upcoming works, we will examine the role of Compton scattering in driving such winds. \section*{Acknowledgments} The authors would like to thank the anonymous reviewer for insightful comments and suggestions that helped us to improve the manuscript. SR acknowledges the hospitality extended by ARIES during her many academic visits. MKV acknowledges his brief postdoctoral tenure at ARIES, where this work was initiated. \section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author.
\section{Introduction} Machine learning models are often deployed in a target domain that differs from the one in which they were trained and validated. This leads to the practical challenges of adapting and evaluating the performance of models on a new domain without costly labeling of the dataset of interest. For example, in the Inclusive Images challenge \citep{shankar2017No}, the training data largely consists of images from developed countries in North America and Western Europe. If a model trained on this data is presented with images from developing countries in Africa and Asia, then (i) it is likely to perform poorly, and (ii) its performance in the training (source) domain may not mirror its performance in the target domain. However, due to the presence of a small fraction of images from developing countries in the source data, it may be possible to reweigh the source samples to mimic the target domain. In this paper, we consider the problem of learning a set of importance weights so that the reweighted source samples closely mimic the distribution of the target domain. We pose an exponential tilt model of the distribution shift between the training and the target data and an accompanying method that leverages unlabeled target data to fit the model. Although similar methods are widely used in statistics \cite{rosenbaum1983central} and machine learning \cite{sugiyama2012Density} to train and evaluate models \emph{under covariate shift}, one of the main benefits of our approach is that it allows \emph{concept drift} \citep{cai2019Transfer,gama2014survey} between the source and the target domains. We summarize our contributions below: \begin{itemize} \item We develop a model and an accompanying method for learning source importance weights to mimic the distribution of the target domain \emph{without} labeled target samples.
\item We establish theoretical guarantees on the quality of the weight estimates and their utility in the downstream tasks of fine-tuning and model selection. \item We demonstrate applications of our method on \textsc{Waterbirds}\ \citep{sagawa2019Distributionally} and \textsc{Breeds}\ \citep{santurkar2020breeds} datasets. \end{itemize} \section{Related work} \textbf{Out-of-distribution generalization} is essential for safe deployment of ML models. There are two prevalent problem settings: domain generalization and subpopulation shift \citep{koh2020WILDS}. Domain generalization typically assumes access to several datasets during training that are related to the same task, but differ in their domain or environment \citep{blanchard2011Generalizing,muandet2013Domain}. The goal is to learn a predictor that can generalize to unseen related datasets via learning invariant representations \citep{ganin2016Domainadversarial,sun2016Deep}, invariant risk minimization \citep{arjovsky2019Invariant,krueger2021out}, or meta-learning \citep{dou2019Domain}. Domain generalization is a very challenging problem, and recent benchmark studies demonstrate that it rarely improves over vanilla empirical risk minimization (ERM) on the source data unless given access to labeled target data for model selection \citep{gulrajani2020Search,koh2020WILDS}. The subpopulation shift setting assumes that both train and test data consist of the same groups with different group fractions. This setting is typically approached via distributionally robust optimization (DRO) to maximize worst-group performance \citep{duchi2016Statistics,sagawa2019Distributionally} or various reweighting strategies \citep{shimodaira2000Improving,byrd2019What,sagawa2020Investigation,idrissi2021simple}. These methods require group annotations, which can be expensive to obtain in practice.
Several methods have been proposed to sidestep this limitation; however, they still rely on a validation set with group annotations for model selection to obtain good performance \citep{hashimoto2018Fairness,liu2021Just,zhai2021DORO,creager2021Environment}. Our method is most appropriate for the subpopulation shift setting (see Section \ref{sec:exponential-tilt}); however, it differs in that it does not require group annotations, but requires unlabeled target data. \textbf{Model selection} on out-of-distribution (OOD) data is an important and challenging problem as noted by several authors \citep{gulrajani2020Search,koh2020WILDS,zhai2021DORO,creager2021Environment}. \citet{xu2022estimation,chen2021mandoline} propose solutions specific to covariate shift based on parametric bootstrap and reweighting; \citet{garg2022leveraging} align model confidence and accuracy with a threshold; \citet{jiang2021assessing,chen2021detecting} train several models and use their ensembles or disagreement. Our importance weighting approach is computationally simpler than the latter and is more flexible in comparison to the former, as it allows for concept drift and can be used in downstream tasks beyond model selection, as we demonstrate both theoretically and empirically. \textbf{Domain adaptation} is another closely related problem setting. Domain adaptation (DA) methods require access to labeled source and unlabeled target data during training and aim to improve target performance via a combination of distribution matching \citep{ganin2016Domainadversarial,sun2016Deep,shen2018wasserstein}, self-training \citep{shu2018DIRTT,kumar2020Understanding}, data augmentation \citep{cai2021Theory,ruan2021optimal}, and other regularizers. DA methods are typically challenging to train and require retraining for every new target domain.
On the other hand, our importance weights are easy to learn for a new domain, allowing for efficient fine-tuning, similar to test-time adaptation methods \citep{sun2020TestTime,wang2020Tent,zhang2020Adaptive}, which adjust the model based on the target unlabeled samples. Our importance weights can also be used to define additional regularizers to enhance existing DA methods. \textbf{Importance weighting} has often been used in the domain adaptation literature on label shift \cite{lipton2018Detecting,azizzadenesheli2019Regularized,maity2020Minimax} and covariate shift \cite{sugiyama2007covariate,hashemi2018weighted}, but it has seen little use with concept drift models \cite{cai2019Transfer,maity2021linear}, because it is generally impossible to estimate the weights without seeing labeled data from the target. In this paper, we introduce an exponential tilt model which accommodates concept drift while allowing us to estimate the importance weights for the distribution shift. \section{Conclusion} In this paper, we developed an importance weighting method for approximating expectations of interest on new domains by leveraging unlabeled samples (in addition to a labeled dataset from the source domain). We demonstrated the applicability of our method on downstream tasks such as model evaluation/selection and fine-tuning, both theoretically and empirically. Unlike other importance weighting methods that only allow covariate shift between the source and target domains, we permit concept drift between the source and target. Despite its benefits, the exponential tilt model does suffer from a few limitations. Implicit in the exponential tilt assumption is that the supports of the target class conditionals are subsets of the corresponding source class conditionals. Although these assumptions are typically satisfied in the subpopulation shift setting, they are violated in domain generalization problems \cite{koh2020WILDS}.
An important avenue for future study is to accommodate support alignment in the distribution shift model, {\it i.e.}\ to align the supports for class conditioned feature distributions in source and target domains. One way to approach this is to utilize distribution matching techniques from the domain adaptation literature \citep{ganin2016Domainadversarial,sun2016Deep,shen2018wasserstein}, similarly to \citet{cai2021Theory}. We hope aligning supports via distribution matching will allow our method to succeed on domain generalization problems. \section{Proofs} \begin{proof}[Proof of Lemma \ref{lemma:eqiv-rep}] The statement \emph{1.} is immediate. To establish \emph{2.} we note that with $\alpha_y = - \log \left(\int p_y(x) e^{ \beta_y ^\top \Phi(x)} dx \right)$ for $y = 0$ and $1$ we have $\int p_y(x) e^{\alpha_y +\beta_y ^\top \Phi(x)}dx = 1. $ This means \[ \begin{aligned} 1 &= \int q_X(x)dx \\ & = \int \left(p_0(x) e^{a_0 +\beta_0 ^\top \Phi(x)} + p_1(x) e^{a_1 +\beta_1 ^\top \Phi(x)} \right) dx\\ & = e^{a_0 - \alpha_0} + e^{a_1 - \alpha_1}\,. \end{aligned}\] Letting $\pi_Q = e^{a_1 - \alpha_1}$ we have \emph{2}. \end{proof} \begin{proof}[Proof of Lemma \ref{lemma:uniqueness}] Define \[ F(\alpha_0, \alpha_1, \beta_0, \beta_1) \triangleq p_0(x) e^{\alpha_0 +\beta_0 ^\top \Phi(x)} + p_1(x) e^{\alpha_1 +\beta_1 ^\top \Phi(x)}\,.
\] For any two solutions $(a_0, a_1, b_0, b_1)$ and $(\alpha_0, \alpha_1, \beta_0, \beta_1)$ we note that $F(a_0, a_1, b_0, b_1) - F(\alpha_0, \alpha_1, \beta_0, \beta_1) = 0$ implies \[ \begin{aligned} p_1(x) e^{a_1 + b_1^\top \Phi(x)} + p_0(x) e^{a_0 + b_0^\top \Phi(x)} = p_1(x) e^{\alpha_1 + \beta_1^\top \Phi(x)} + p_0(x) e^{\alpha_0 + \beta_0^\top \Phi(x)} \end{aligned} \] or \[ \frac{p_1(x)}{p_0(x)} = \frac{e^{\alpha_0 + \beta_0^\top \Phi(x)} - e^{a_0 + b_0^\top \Phi(x)}}{e^{a_1 + b_1^\top \Phi(x)} - e^{\alpha_1 + \beta_1^\top \Phi(x)}} \] which implies \[ \frac{p_1(x)}{p_0(x)} e^{u_0 + u^\top \Phi(x)} = \frac{e^{v_0 + v^\top \Phi(x)} - 1}{e^{w_0 + w^\top \Phi(x)} - 1} \] for $u_0 = \alpha_1 - a_0$, $u = \beta_1 - b_0$, $v_0 = \alpha_0 - a_0$, $v = \beta_0 - b_0$, $w_0 = a_1 - \alpha_1$ and $w = b_1 - \beta_1$. Since $\frac{p_1(x)}{p_0(x)} e^{u_0 + u^\top \Phi(x)} > 0$ we have \[ \big(v_0 + v^\top \Phi(x)\big)\big(w_0 + w^\top \Phi(x)\big) > 0 \ \text{for any }x. \] If the range of $\Phi(x)$ is unbounded then $v \parallel w$ in a way that $v = \alpha w$ for $\alpha > 0$, which also implies $v_0 = \alpha w_0$. From Assumption \ref{assmp:non-linear} we conclude $v = w = 0$ or $\beta_0 = b_0$ and $\beta_1 = b_1$. Furthermore, we conclude $\alpha_0 = a_0$ and $\alpha_1 = a_1$. If Assumption \ref{assmp:non-linear} does not hold, then there exist $\theta_0, \delta_0\in {\bR}$, $\alpha>0$ and $\theta, \delta \in{\bR}^d$ such that for any $x$ we have \[p_1(x) e^{\theta_0 + \theta^\top \Phi(x)}\left(e^{\delta_0 + \delta^\top \Phi(x)} - 1\right) = p_0(x) \left(e^{\alpha\delta_0 + \alpha\delta^\top \Phi(x)} - 1\right)\,. \] If $\delta \neq 0$ then \[ p_1(x) e^{(\theta_0 + \delta_0) + (\theta + \delta)^\top \Phi(x)} + p_0(x) = p_1(x)e^{\theta_0 + \theta^\top \Phi(x)} + p_0(x)e^{\alpha \delta_0 + \alpha \delta^\top \Phi(x)} \] which gives non-unique values for the parameters.
If $\delta = 0$ then \[ \frac{p_1(x)}{p_0(x)} e^{\theta_0 + \theta^\top \Phi(x)} = \frac{e^{\alpha \delta_0}-1}{e^{\delta_0} - 1} > 0 \text{ or } {p_1(x)} e^{\theta_0' + \theta^\top \Phi(x)} = p_0(x) \] where $\theta_0' = \theta_0 - \log\big(\frac{e^{\alpha\delta_0}-1}{e^{\delta_0}-1}\big)$, which implies that, after replacing $p_1(x) \gets p_0(x)e^{-\theta_0' - \theta^\top \Phi(x)}$ and $p_0(x)\gets {p_1(x)} e^{\theta_0' + \theta^\top \Phi(x)}$, we get another representation with different parameter values. \end{proof} \begin{proof}[Proof of Lemma \ref{lemma:unique-reg-fn}] If the solution to the equation is unique then these quantities are unique. To establish the converse we see that any two solutions $(a_0, a_1, b_0, b_1)$ and $(\alpha_0, \alpha_1, \beta_0, \beta_1)$ of the equation must satisfy \[ \beta_0 - b_0 = \alpha(b_1 - \beta_1) \text{ and }\alpha_0 - a_0 = \alpha(a_1 - \alpha_1) \] for some $\alpha>0.$ If $a_1 - a_0 = \alpha_1 - \alpha_0$ and $b_1 - b_0 = \beta_1 - \beta_0$ we shall establish that $(a_0, a_1, b_0, b_1)$ and $(\alpha_0, \alpha_1, \beta_0, \beta_1)$ are the same. We only establish it for $\beta_0 = b_0$ and $\beta_1 = b_1$, and the other proofs follow similarly. Replacing $b_1 \gets b_0 + \beta_1 - \beta_0$ in $\beta_0 - b_0 = \alpha(b_1 - \beta_1)$ we see that \[ \beta_0 - b_0 = \alpha (b_0 - \beta_0) \text{ or } (1+\alpha) (\beta_0 - b_0) = 0. \] This implies $\beta_0 = b_0 $ and $\beta_1 = b_1$ since $\alpha> 0.$ \end{proof} \begin{theorem}[Theorem \ref{th:multi-class-exponential}] Suppose $p_j(x), ~ q_j(x); ~ j = 1, 2, \dots, K$ are exponential family distributions with statistic $x$, and suppose the following are satisfied. \begin{enumerate} \item $\text{span} \Big\{ \log\big(\frac{q_j(x)}{p_j(x)}\big); ~ j = 1, \dots, K \Big\} \subseteq \text{span} \big\{T(x), 1\big\}$. \item There does not exist a pair $(j, k)$, a vector $u \in {\bR}^{|T(x)|}$, and a scalar $v\in {\bR}$ such that $\frac{p_j(x)}{p_k(x)} = e^{u^\top T(x) + v}$.
\end{enumerate} Then there exists a unique solution to the equation \begin{equation} \label{eq:target-marginal-multi} \sum_{j = 1}^K p_j(x) \alpha_j e^{\beta_j^\top T(x) } = q_X(x)\,, \end{equation} such that $\alpha_j \ge 0$. The conditions are also necessary for the existence of a unique solution. \end{theorem} \begin{proof}[Proof of Theorem \ref{th:multi-class-exponential}] We will show that if the equation \eqref{eq:target-marginal-multi} has two solutions $\{\alpha_j, \beta_j\}_{j = 1}^K$ and $\{a_j, b_j\}_{j = 1}^K$ then either $a_j = \alpha_j = 0$ or $a_j = \alpha_j, ~ b_j = \beta_j $ for all $j$. For the two solutions we have \[ \sum_{j = 1}^K p_j(x) \alpha_j e^{\beta_j^\top T(x) } = \sum_{j = 1}^K p_j(x) a_j e^{b_j^\top T(x) } \] Since $p_j(x) = \gamma_j e^{\delta_j^\top x}$ for some $\{\gamma_j, \delta_j\}_{j = 1}^K$ we have \begin{equation} \sum_{j = 1}^K \alpha_j \gamma_j e^{\beta_j^\top T(x) + \delta_j^\top x } = \sum_{j = 1}^K a_j \gamma_j e^{b_j^\top T(x) + \delta_j^\top x } \label{eq:non-id} \end{equation} We require the following lemma. \begin{lemma} \label{lemma:linear-indep} Let $b_1, \dots, b_T\in {\bR}^d$ be pairwise distinct vectors. Then $\big\{e^{x^\top b_j}\big\}_{j = 1}^T$ are linearly independent. \end{lemma} Condition 2 ensures that for any $j \neq k$ as a function of $x$ we have $\beta_j^\top T(x) + \delta_j^\top x \neq b_k^\top T(x) + \delta_k^\top x$, $\beta_k^\top T(x) + \delta_k^\top x \neq b_j^\top T(x) + \delta_j^\top x$ and $b_j ^\top T(x) + \delta_j ^\top x \neq b_k^\top T(x) + \delta_k^\top x$. Indeed, if any of these fails, then for some $j \neq k$ we have $(\delta_j - \delta_k)^\top x = u^\top T(x)$ for some $u$ and this implies \[ \begin{aligned} \log \big(\frac{p_j(x)}{p_k(x)}\big) &= (\delta_j - \delta_k)^\top x + \log\big(\frac{\gamma_j}{\gamma_k}\big)\\ & =u^\top T(x) + \log\big(\frac{\gamma_j}{\gamma_k}\big)\,.
\end{aligned} \] This is a direct contradiction to Condition 2 in Theorem \ref{th:multi-class-exponential}. Recalling that for a full row-rank matrix $\mathbf{A}$ the transformation is $T(x) = \mathbf{A} x$, Condition 2 implies that for any $j \neq k$ we have $\mathbf{A}^\top \beta_j + \delta_j \neq \mathbf{A}^\top b_k + \delta_k$, $\mathbf{A}^\top b_j + \delta_j \neq \mathbf{A}^\top \beta_k + \delta_k$ and $\mathbf{A}^\top b_j + \delta_j \neq \mathbf{A}^\top b_k + \delta_k$. Using the transformation $T(x) = \mathbf{A} x$, Equation \eqref{eq:non-id} can be rewritten as \begin{equation} \begin{aligned} 0 & = \sum_{j= 1}^K \alpha_j \gamma_j e^{ (\mathbf{A}^\top \beta_j + \delta_j)^\top x} - \sum_{j= 1}^K a_j \gamma_j e^{ (\mathbf{A}^\top b_j + \delta_j)^\top x}\\ & = \sum_{j: \beta_j = b_j} \gamma_j (\alpha_j - a_j) e^{(\mathbf{A}^\top b_j + \delta_j)^\top x} + \sum_{j: \beta_j \neq b_j} \gamma_j \alpha_j e^{(\mathbf{A}^\top \beta_j + \delta_j)^\top x}\\ & ~~~~ - \sum_{j: \beta_j \neq b_j} \gamma_j a_j e^{(\mathbf{A}^\top b_j + \delta_j)^\top x}\,. \end{aligned} \end{equation} Denoting $I \triangleq \{j: \beta_j = b_j\}$ we see that the set \[ \{\mathbf{A}^\top b_j + \delta_j\}_{j \in I} \cup \{ \mathbf{A}^\top b_j +\delta_j \}_{j \in I^\complement} \cup \{ \mathbf{A}^\top \beta_j +\delta_j \}_{j \in I^\complement} \] is a set of distinct vectors. We now use Lemma \ref{lemma:linear-indep} to conclude that $\gamma_j (\alpha_j- a_j) = 0$ for $j \in I$ and $\gamma_j \alpha_j = \gamma_j a_j = 0$ for $j \in I^\complement$. Since $\gamma_j \neq 0$, we conclude that \[ \begin{cases} \alpha_j = a_j & \text{if} ~~ \beta_j = b_j,\\ \alpha_j = a_j = 0 & \text{if} ~~ \beta_j \neq b_j. \end{cases} \] If $\beta_j \neq b_j$ for some class $j$ we have $a_j = \alpha_j = 0$, which implies the index $j$ can be eliminated from the left-hand sum in equation \eqref{eq:target-marginal-multi}. This implies that the solution to equation \eqref{eq:target-marginal-multi} is unique.
\end{proof} \begin{proof}[Proof of Lemma \ref{lemma:linear-indep}] We shall prove that the only solution to the equation \begin{equation} \label{eq:li} \sum_{j = 1}^T \alpha_j e^{x^\top b_j} = 0 \end{equation} is $\alpha_1 = \dots = \alpha_T = 0$. Without loss of generality we assume $\|b_T\|_2 = \max_{j = 1,\dots, T} \|b_j\|_2$. Restricting to $x = u \frac{b_T}{\|b_T\|_2}, ~ u \in {\bR}$, we see that equation \eqref{eq:li} becomes the following. \[ \begin{aligned} & \sum_{j = 1}^T \alpha_j e^{u\frac{b_T^\top b_j}{\|b_T\|_2}} = 0\,,\\ \text{or,} ~ & \sum_{j = 1}^T \alpha_j e^{u\Big(\frac{b_T^\top b_j}{\|b_T\|_2} -\|b_T\|_2 \Big)} = 0\,. \end{aligned} \] Fix $j \neq T$. If $\frac{b_T^\top b_j}{\|b_T\|_2} < \|b_j\|_2$ then $ \frac{b_T^\top b_j}{\|b_T\|_2} < \|b_j\|_2 \le \|b_T\|_2$. If $\frac{b_T^\top b_j}{\|b_T\|_2} = \|b_j\|_2$ then $b_j = \rho b_T$ for some $-1 \le \rho <1$ ($-1 \le \rho <1$ holds because $b_j \neq b_T$ and $\|b_j\|\le \|b_T\| $) which again implies $\frac{b_T^\top b_j}{\|b_T\|_2} = \rho \|b_T\|_2 < \|b_T\|_2$. Since $\frac{b_T^\top b_j}{\|b_T\|_2} - \|b_T\|_2 < 0$ for any $j \neq T$, letting $u \to \infty$ we get $\alpha_T = 0$. This also implies \begin{equation} \sum_{j = 1}^{T-1} \alpha_j e^{x^\top b_j} = 0\,. \end{equation} Iterating this argument concludes the lemma. \end{proof} \section{Theoretical properties of exponential tilting} \subsection{Identifiability of the exponential tilt model} \label{sec:identifiability} To show that the $\theta_k$'s and $\alpha_k$'s are identifiable from \eqref{eq:distribution-matching}, we must show that there is a unique solution to \eqref{eq:distribution-matching}. Unfortunately, this is not the case without additional assumptions.
For example, consider a linear discriminant analysis (LDA) problem in which the class conditionals drift between the source and target domains: \[ \begin{aligned} p\{x,Y=k\} &= \pi_k\phi(x-\mu_{P,k}), ~~~ q\{x,Y=k\} &= \pi_k\phi(x-\mu_{Q,k})\,, \end{aligned} \] where $\phi$ is the standard multivariate normal density, $\pi_k\in(0,1)$ are the class proportions in both source and target domains, and $\mu_{P,k}$'s (resp.\ the $\mu_{Q,k}$'s) are the class conditional means in the source (resp.\ target) domains. We see that this problem satisfies the exponential tilt model with $T(x) = x$: \[ \textstyle \log\frac{q\{x,Y=k\}}{p\{x,Y=k\}} = (\mu_{Q,k} - \mu_{P,k})^\top x - \frac12\|\mu_{Q,k}\|_2^2 +\frac12\|\mu_{P,k}\|_2^2\,. \] Unfortunately, this instance of the exponential tilt model is not identifiable; any permutation of the class labels $\sigma:[K]\to[K]$ also leads to the same (marginal) distribution of inputs: \[ \begin{aligned} &\textstyle\sum_{k=1}^Kp\{x,Y=k\}\textrm{exp}\left(\textstyle(\mu_{Q,k} - \mu_{P,k})^\top x + \frac12\|\mu_{P,k}\|_2^2 - \frac12\|\mu_{Q,k}\|_2^2\right) \\ &\textstyle\quad= \sum_{k=1}^Kp\{x,Y=k\}\textrm{exp}\left(\textstyle(\mu_{Q,\sigma(k)} - \mu_{P,k})^\top x + \frac12\|\mu_{P,k}\|_2^2 - \frac12\|\mu_{Q,\sigma(k)}\|_2^2\right). \end{aligned} \] From this example, we see that the non-identifiability of the exponential tilt model is closely related to the label switching problem in clustering. Intuitively, the exponential tilt model in the preceding example is too flexible because it can tilt any $p\{x,Y=k\}$ to $q\{x,Y=l\}$. Thus there is ambiguity in which $p\{x,Y=k\}$ tilts to which $q\{x,Y=l\}$. In the rest of this subsection, we present an identification restriction that guarantees the exponential tilt model is identifiable. A standard identification restriction in related work on domain adaptation is a clustering assumption. 
For example, \citet{tachet2020Domain} assume there is a partition of $\mathcal{X}$ into disjoint sets $\mathcal{X}_k$ such that $\textrm{supp}(P\{\cdot\mid Y=k\}),\ \textrm{supp}(Q\{\cdot\mid Y=k\}) \subset \mathcal{X}_k$ for all $k\in[K]$. This assumption is strong: it implies there is a perfect classifier in the source and target domains. Here we consider a weaker version of the clustering assumption: there are sets $\mathcal{S}_k$ such that \[ P\{Y=k\mid X\in\mathcal{S}_k\} = Q\{Y=k\mid X\in\mathcal{S}_k\} = 1. \] We note that the $\mathcal{S}_k$'s can be much smaller than the $\mathcal{X}_k$'s; this permits the supports of $P\{\cdot\mid Y=k\}$ and $P\{\cdot\mid Y=l\}$ to overlap. \begin{definition}[anchor set] \label{def:anchor-set} A set $\mathcal{S}_k\subset\mathcal{X}$ is an \textbf{anchor set} for class $k$ if $p\{x,Y=k\} > 0$ and $p\{x,Y=l\} = 0$, $l\ne k$ for all $x\in\mathcal{S}_k$. \end{definition} \begin{proposition}[identifiability from anchor sets] \label{prop:anchor-points} If there are anchor sets $\mathcal{S}_k$ for all $K$ classes (in the source domain) and $T(\mathcal{S}_k)$ is $p$-dimensional, then there is at most one set of $\theta_k$'s and $\alpha_k$'s that satisfies \eqref{eq:distribution-matching}. \end{proposition} This identification restriction is also closely related to the linear independence assumption in \citet{gong2016Domain}. Inspecting the proof of Proposition \ref{prop:anchor-points}, we see that the anchor set assumption implies linear independence of $\{p_k(x)\textrm{exp}(\theta_k^\top T(x) + \alpha_k)\}_{k=1}^K$ for any set of $\theta_k$'s and $\alpha_k$'s. \subsection{Consistency in estimation of the tilt parameters and the importance weights} Here, we establish a convergence rate for the estimated tilt parameters from Lemma \ref{lemma:KL-distribution-matching} and for the ExTRA importance weight estimates from \eqref{eq:weights-estimate}.
To simplify the notation, we let $S(x) = (1, T(x)^\top)^\top$ be the extended sufficient statistics for the exponential tilt and denote the corresponding tilt parameters as $\xi_k = (\alpha_k, \theta_k^\top)^\top$. We let $\xi_k^\star = (\alpha_k^\star, {\theta_k^\star}^\top )^\top$ denote the true values of the tilt parameters $\xi_k$ and let $\xi = (\xi_1^\top, \dots, \xi_K^\top)^\top\in {\bR}^{K(p+1)}$ be the long vector containing all the tilt parameters. We recall that estimating the parameters from the optimization stated in Lemma \ref{lemma:KL-distribution-matching} requires a classifier $\widehat{\eta}_P$ on the source data. So, we define our objective for estimating $\xi$ through a generic classifier $\eta : \mathcal{X} \rightarrow \Delta^{K}$. Denoting by $\eta_k(x)$ the $k$-th coordinate of $\eta(x)$, we define the expected log-likelihood objective as: \[ \textstyle \mathfrak{L}(\eta, \xi) = {\bE}_{Q_X}[\log \{ \sum_{k = 1}^K \eta_k(X) \textrm{exp}(\xi_k^\top S(X)) \}] - \log [{\bE}_{P} \{ \textrm{exp}(\xi_Y^\top S(X)) \}]\,, \] and its empirical version as \[ \textstyle \hat{\mathfrak{L}}(\eta, \xi) = {\bE}_{\widehat{Q}_X}[\log \{ \sum_{k = 1}^K \eta_k(X) \textrm{exp}(\xi_k^\top S(X)) \}] - \log [{\bE}_{\widehat{P}} \{ \textrm{exp}(\xi_Y^\top S(X)) \}]\,. \] To establish the consistency of the MLE, we first assume that the loss $\xi \mapsto -\mathfrak{L}(\eta^\star_P, \xi)$ is strongly convex at the true parameter value. \begin{assumption} \label{assmp:strong-convexity} The loss $\xi \mapsto -\mathfrak{L}(\eta^\star_P, \xi)$ is strongly convex at $\coef^\star$, {\it i.e.}, there exists a constant $\mu>0$ such that for any $\xi$: \[ \textstyle -\mathfrak{L}(\eta^\star_P, \xi) \ge -\mathfrak{L}(\eta^\star_P, \coef^\star) - \partial_\xi\mathfrak{L}(\eta^\star_P, \coef^\star) ^\top (\xi - \coef^\star) +\frac{\mu}{2}\|\xi - \coef^\star\|_2^2\,.
\] \end{assumption} We note that the assumption is a restriction on the distribution $Q$ rather than the objective itself. For technical convenience we next assume that the feature space is bounded. \begin{assumption} \label{assump:bounded-covariate} $\mathcal{X}$ is bounded, {\it i.e.}, there exists an $M>0$ such that $\mathcal{X} \subset B(0, M)$. \end{assumption} Recall, from Lemma \ref{lemma:KL-distribution-matching}, that we need a fitted source classifier $\widehat{\eta}_P$ to estimate the tilt parameter: $\coef^\star$ is estimated by maximizing $\hat{\mathfrak{L}}(\widehat{\eta}_P, \xi)$ rather than the unknown $\hat{\mathfrak{L}}(\eta_P^\star, \xi)$. While analyzing the convergence of ${\hat \coef}$ we need to control the difference $\hat{\mathfrak{L}}(\hat \eta_P, \xi) - \hat{\mathfrak{L}}(\eta_P^\star, \xi)$. To ensure that the difference is small, we assume that the pilot estimate of the source regression function $\widehat{\eta}_P$ is consistent at some rate $r_{n_P}$. \begin{assumption} \label{assmp:source-classifier} Let $f_{P, k}^\star(x) = \log\{\eta_{P, k}^\star(x)\} - \frac 1K \sum_{j = 1}^K \log\{\eta_{P, j}^\star(x)\}$. We assume that there exist estimators $\{\hat f _{P, k}(x)\}_{k =1 }^K$ for $\{f^\star_{P, k}(x)\}_{k = 1}^K$ such that the following holds: there exist a constant $c>0$ and a sequence $r_{n_P} \to 0$ such that, almost surely $[\mathbb{P}_X]$, \[\textstyle {\bP}(\|\hat f_P(x) - f_P^\star(x)\|_{2} > t) \le \textrm{exp}(-c t^2 /r_{n_P}^2), ~ t > 0\,. \] \end{assumption} We use the estimated logits $\{\hat f _{P, k}(x)\}_{k =1 }^K$ to construct the regression functions as $\widehat{\eta}_{P, k}(x) = \textrm{exp}(\hat f_{P, k}(x))/\{\sum_{j = 1}^K \textrm{exp}(\hat f_{P, j}(x))\}$, which we use in the objective stated in Lemma \ref{lemma:KL-distribution-matching} to analyze the convergence of the tilt parameter estimates and the ExTRA weights \eqref{eq:weights-estimate}.
With the above assumptions we are now ready to state concentration bounds for ${\hat \coef} - \coef^\star$ and $\hat \omega - \omega^\star$, where the true importance weight $\omega^\star$ is defined as $\omega^\star(x, y) = \textrm{exp}({\xi_y^\star}^\top S(x))$. \begin{theorem} \label{thm:tilt-concentration} Let Assumptions \ref{assmp:strong-convexity}, \ref{assump:bounded-covariate} and \ref{assmp:source-classifier} hold. For the sample sizes $n_P, n_Q$ define $\alpha_{n_P, n_Q} = r_{n_P} \sqrt{\log(n_Q)}+ {\{(p+1)K/{n_P}\} }^{1/2} + {\{(p+1)K/n_Q\} }^{1/2}$. There exist constants $k_1, k_2>0$ such that for any $\delta>0$ with probability at least $1 - (2K+ 1)\delta$ the following hold: \[ \|{\hat \coef} - \coef^\star \|_2 \le k_1 \alpha_{n_P, n_Q}\sqrt{\log(1/\delta)}, \text{ and } \|\hat \omega - \omega^\star\|_{1, P} \le k_2 \alpha_{n_P, n_Q}\sqrt{\log(1/\delta)}. \] \end{theorem} In Theorem \ref{thm:tilt-concentration} we notice that as long as $r_{n_P} \log(n_Q) \to 0$ for $n_P, n_Q\to \infty$ we have $\alpha_{n_P,n_Q} \to 0$. This implies both the estimated tilt parameters and the ExTRA weights converge to their true values as $n_P, n_Q\to \infty$. We next provide theoretical guarantees for the downstream tasks of (1) fine-tuning and (2) target performance evaluation described in Section \ref{sec:exponential-tilt}. \paragraph{Fine-tuning} We establish a generalization bound for the fitted model \eqref{eq:weighted-ERM} using weighted ERM on the source domain. We denote by $\mathcal{F}$ the classifier hypothesis class. For $f \in \mathcal{F}$ and a weight function $\omega: \mathcal{X} \times \mathcal{Y} \to {\bR}_{\ge 0} $ define the weighted loss function and its empirical version on the source data as: \[ \textstyle \mathcal{L}_P(f, \omega) = {\bE}_{P} [\omega(X, Y)\ell(f(X), Y)], ~ \hat{\mathcal{L}}_P (f, \omega) = {\bE}_{\widehat{P}} [\omega(X, Y)\ell(f(X), Y)]\,.
\] We also define the loss function on the target data as: $ \mathcal{L}_Q(f) = {\bE}_{Q} \big[\ell(f(X), Y)\big]\,.$ If $\{(\theta_k^\star, \alpha_k^\star)\}_{k = 1}^K$ is the true value of the tilt parameters in \eqref{eq:exponential-tilt}, {\it i.e.}, the following holds: \[ \textstyle q\{x, Y = k\} = p\{x, Y = k\} \textrm{exp}\{\alpha_k^\star + (\theta_k^\star)^\top T(x)\}; \ k \in [K]\,, \] then defining $\omega^\star(x, k) = \textrm{exp}\{\alpha_k^\star + (\theta_k^\star)^\top T(x) \}$ as the true weight we notice that $\mathcal{L}_P(f, \omega^\star) = \mathcal{L}_Q(f)$, which is easily observed by setting $g(x, y) = \ell(f(x), y)$ in the display \eqref{eq:true-weighted-expectation}. We next define the Rademacher complexity \cite{bartlett2002rademacher}, which is frequently used in the machine learning literature to establish generalization bounds. Instead of considering the Rademacher complexity on $\mathcal{F}$ we define the class of weighted losses $\mathcal{G}(\ell, \mathcal{F}) = \{g_f(x, y) = \omega^\star(x, y) \ell(f(x), y): \ f \in \mathcal{F}\}$ and for $n \in \mathbb{N}$ we define its Rademacher complexity measure as \[ \textstyle \mathcal{R}_n(\mathcal{G}) \triangleq {\bE}_{\{(u_i, v_i)\}_{i = 1}^n\stackrel{\textrm{IID}}{\sim} P} \big[{\bE}_{\{\sigma_i\}_{i = 1}^n}[\sup_{f \in \mathcal{F}} \frac1{n} \sum_{i = 1}^n \sigma_i \omega^\star(u_i, v_i) \ell \{f(u_i), v_i\}]\big] \] where $\{\sigma_i\}_{i = 1}^n $ are \textrm{IID} \ symmetric Rademacher random variables. To establish our generalization bound we need the following assumption on the loss function. \begin{assumption} \label{assump:bounded-loss} The loss function $\ell$ is bounded, {\it i.e.}, there exists $B>0$ such that for any $f\in \mathcal{F}$, $x \in \mathcal{X}$ and $y \in [K]$ it holds: $|\ell\{f(x), y\}| \le B$. \end{assumption} With the above definitions and the assumption we are now ready to establish our generalization bound.
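As a concrete illustration of the weighted-ERM objective $\hat{\mathcal{L}}_P(f, \omega)$, the following sketch fits a binary logistic model by gradient descent on the weighted empirical loss and then evaluates the weighted performance estimate ${\bE}_{\widehat{P}}[\omega(X,Y)\,\mathbb{I}\{f(X)=Y\}]$. The synthetic samples and the weights (stand-ins for estimated ExTRA weights, not produced by the estimator itself) are illustrative assumptions.

```python
import numpy as np

def weighted_erm_logreg(X, y, w, lr=0.1, epochs=500):
    """Minimize the weighted empirical loss
    (1/n) sum_i w_i * log-loss(f(x_i), y_i)
    for binary logistic regression by gradient descent."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ theta))  # predicted P(Y=1 | x)
        grad = X.T @ (w * (p - y)) / n        # gradient of the weighted log-loss
        theta -= lr * grad
    return theta

def weighted_accuracy(X, y, w, theta):
    """Estimate target accuracy E_Q[1{f(X)=Y}] by E_P[w(X,Y) 1{f(X)=Y}]."""
    pred = (X @ theta > 0).astype(int)
    return np.mean(w * (pred == y))

rng = np.random.default_rng(1)
n, d = 400, 3
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
# hypothetical importance weights (stand-ins for the estimated ExTRA weights),
# normalized so that their empirical mean is 1
w = np.exp(0.5 * X[:, 1])
w /= w.mean()

theta = weighted_erm_logreg(X, y, w)
print(weighted_accuracy(X, y, w, theta))
```

When the weights equal the true $\omega^\star$, the weighted accuracy computed on the source samples is an unbiased estimate of the target accuracy, which is what the bound below quantifies for estimated weights.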
\begin{lemma} \label{lemma:generalization-bound} For a weight function $\omega$ and the source samples $\{(X_{P, i}, Y_{P, i})\}_{i = 1}^{n_P}$ of size ${n_P}$ let $\hat f_{\omega} = {\arg\min}_{f\in\mathcal{F}} \hat{\mathcal{L}}_P(f, \omega)$. There exists a constant $c> 0$ such that the following generalization bound holds with probability at least $1 -\delta$ \begin{equation}\textstyle \label{eq:gen-bound} \mathcal{L}_Q(\hat f_{ \omega}) - \min_{f\in\mathcal{F}}\mathcal{L}_Q (f) \le 2 \mathcal{R}_{n_P}(\mathcal{G}) + B \| \omega - \omega^\star \|_{1, P} + c \sqrt{\frac{\log(1/\delta)}{{n_P}}} \,. \end{equation} \end{lemma} In Theorem \ref{thm:tilt-concentration} we established an upper bound for the estimated weights $\hat \omega$, which implies that $\hat f_{\hat \omega}$ has the following generalization bound: for any $\delta > 0$, with probability at least $1 - (2K+ 2)\delta$ \[ \textstyle \mathcal{L}_Q(\hat f_{ \hat \omega}) - \min_{f\in\mathcal{F}}\mathcal{L}_Q (f) \le 2 \mathcal{R}_{n_P}(\mathcal{G}) + k_2\alpha_{{n_P}, {n_Q}} \sqrt{\log(1/\delta)} + c \sqrt{\log(1/\delta)/{n_P}} \,, \] where $k_2$ is the constant in Theorem \ref{thm:tilt-concentration} and $c$ is the constant in Lemma \ref{lemma:generalization-bound}. \paragraph{Target performance evaluation} We provide a theoretical guarantee for the target performance evaluation \eqref{eq:target-performance} using our importance weights. Here we only consider the functions $g:\mathcal{X} \times \mathcal{Y} \to {\bR}$ which are bounded by some $B>0$, {\it i.e.}\ $|g(x, y)| \le B$ for all $x\in \mathcal{X}$ and $y \in \mathcal{Y}$. The simplest and most frequently used example is model accuracy, which corresponds to the $0\text{-}1$ loss: for a model $f$ the function $g(x, y) = \mathbb{I}\{f(x) = y\}$ is bounded with $B = 1$. For such functions we notice that ${\bE}_{Q}[g(X, Y)] = {\bE}_{P}[g(X, Y)\omega^\star(X, Y)]$, as observed in display \eqref{eq:true-weighted-expectation}.
This implies the following bound on the target performance evaluation error \[ \begin{aligned} \big|{\bE}_{Q}[g(X, Y)] - {\bE}_{P}[g(X, Y)\hat \omega(X, Y)]\big|& = \big|{\bE}_{P}[g(X, Y)\omega^\star(X, Y)] - {\bE}_{P}[g(X, Y)\hat \omega(X, Y)]\big|\\ & \le B {\bE}_{P}[|\hat \omega(X, Y)- \omega^\star (X, Y)|] \le B \|\hat \omega - \omega^\star\|_{1, P}\,. \end{aligned} \] We recall the concentration bound for $\|\hat \omega - \omega^\star\|_{1, P}$ that we established in Theorem \ref{thm:tilt-concentration} and conclude that the estimated target performance in \eqref{eq:target-performance} converges to the true target performance at rate $\alpha_{{n_P}, {n_Q}}$. \section{Proofs} \subsection{Proof of Lemma \ref{lemma:KL-distribution-matching}} If $D$ is the Kullback--Leibler (KL) divergence, then we can rewrite the objective in \eqref{eq:robust-distribution-matching} as: \[ \begin{aligned} & {\bE}_{\widehat{Q}_X}\left[\textstyle\log\widehat{q}_X\{X\} - \log\Big\{\sum_{k=1}^K\widehat{p}\{X,Y=k\}\textrm{exp}(\theta_k^\top T(X) + \alpha_k)\Big\}\right] \\ &\quad=\textstyle{\bE}_{\widehat{Q}_X} \Big[\log\Big\{\frac{\widehat{q}_X\{X\}}{\widehat{p}_X\{X\}}\Big\}\Big] - {\bE}_{\widehat{Q}_X}\left[\textstyle\log\Big\{\sum_{k=1}^K\widehat{\eta}_{P,k}(X)\textrm{exp}(\theta_k^\top T(X) + \alpha_k)\Big\}\right], \end{aligned} \] where the term ${\bE}_{\widehat{Q}_X} \big[\log\big\{\frac{\widehat{q}_X\{X\}}{\widehat{p}_X\{X\}}\big\}\big]$ does not involve any tilt parameters, so minimizing the objective is equivalent to maximizing the second term, and we drop the first term from our objective. To simplify the notation we define \[ \begin{aligned} O(\theta, \alpha) &\triangleq {\bE}_{\widehat{Q}_X}\left[\textstyle\log\Big\{\sum_{k=1}^K\widehat{\eta}_{P,k}(X)\textrm{exp}(\theta_k^\top T(X) + \alpha_k)\Big\}\right]\\ N(\theta, \alpha) &\triangleq {\bE}_{\widehat{P}} \big[ \textrm{exp}(\theta_Y^\top T(X) + \alpha_Y) \big]\,. \end{aligned} \] where $(\theta, \alpha) \in {\bR}^q$ for $q = K(p+1)$.
In terms of $O$ and $N$, \eqref{eq:distribution-matching} is \begin{equation} \label{eq:objs1} (\hat \theta , \hat \alpha) = {\arg\max}_{(\theta, \alpha)} \left\{ O(\theta, \alpha)\mid N(\theta, \alpha) = 1 \right\}. \end{equation} Let $F_1 \triangleq \{ (\theta, \alpha)\mid N(\theta, \alpha) = 1 \}$ be the feasible set. We introduce a change of variables: \[ c(\theta , \alpha') = (\theta, \alpha(\theta, \alpha')) ~ \text{ where } ~ \alpha(\theta, \alpha') = \alpha' - 1_K\log(N(\theta, \alpha')). \] Note that for any $(\theta , \alpha') \in {\bR}^q,~ c(\theta , \alpha') \in F_1$ because \[ \begin{aligned} N(c(\theta, \alpha')) & = N(\theta, \alpha' - \log(N(\theta, \alpha')) \times 1_K)\\ & = {\bE}_{\widehat{P}} \big[ \textrm{exp}(\theta_Y^\top T(X) + \alpha_Y' - \log(N(\theta, \alpha'))) \big]\\ & = \frac{{\bE}_{\widehat{P}} \big[ \textrm{exp}(\theta_Y^\top T(X) + \alpha_Y' ) \big]}{N(\theta, \alpha')} = 1\,. \end{aligned} \] and the objective value changes to \[ \begin{aligned} O(c(\theta, \alpha')) & = {\bE}_{\widehat{Q}_X}\left[\textstyle\log\Big\{\sum_{k=1}^K\widehat{\eta}_{P,k}(X)\textrm{exp}[\theta_k^\top T(X) + \alpha_k' - \log(N(\theta, \alpha'))]\Big\}\right] \\ & = {\bE}_{\widehat{Q}_X}\left[\textstyle\log\Big\{\sum_{k=1}^K\widehat{\eta}_{P,k}(X)\textrm{exp}[\theta_k^\top T(X) + \alpha_k' ]\Big\}\right] - \log(N(\theta, \alpha'))\\ & = O(\theta, \alpha') - \log(N(\theta, \alpha'))\,. \end{aligned} \] Defining $F_2 = \{c(\theta, \alpha')\mid (\theta, \alpha')\in {\bR}^q\}$ we see that $F_1 = F_2$, as we now argue. We first notice that $F_2 \subset F_1$ since for any $(\theta , \alpha')\in {\bR}^q $ it holds $ c(\theta , \alpha') \in F_1$. We also notice that $F_1 \subset F_2$. Indeed, if $(\theta, \alpha') \in F_1$ then $N(\theta,\alpha') = 1$, which implies $c(\theta, \alpha') = (\theta, \alpha')$. Here, we summarize the crux of the proof.
Though there are multiple $(\theta, \alpha')$ that produce the same value of $c(\theta, \alpha')$, each of these $(\theta, \alpha')$'s produces the same value for the objective \[ O(\theta, \alpha') - \log(N(\theta, \alpha')) = O(c(\theta, \alpha'))\,, \] and $c(\theta, \alpha')$ always satisfies the constraint. So, the optimal point $(\theta, \alpha)$ of \eqref{eq:objs1} corresponds to multiple $(\theta, \alpha')$'s, each of which maximizes $O(\theta, \alpha') - \log(N(\theta, \alpha'))$. Furthermore, we recover the optimal $(\theta, \alpha)$ from any such maximizer $(\theta, \alpha')$ via the transformation $c(\theta, \alpha')$. The formal description of the change of variables follows. With the change of variables we rewrite \eqref{eq:objs1} as \[ \begin{aligned} & (\hat \theta , \hat \alpha) = c(\hat \theta , \hat \alpha'), ~ (\hat \theta , \hat \alpha')= {\arg\max}_{(\theta, \alpha')} \left\{ O(c(\theta, \alpha'))\mid N(c(\theta, \alpha')) = 1, (\theta, \alpha')\in {\bR}^q \right\}\\ \text{or }& (\hat \theta , \hat \alpha) = c(\hat \theta , \hat \alpha'), ~ (\hat \theta , \hat \alpha') = {\arg\max}_{(\theta, \alpha')} \left\{ O(\theta, \alpha') - \log(N(\theta, \alpha')) \mid (\theta, \alpha') \in {\bR}^q \right\} \end{aligned} \] where the constraint disappears because $N(c(\theta, \alpha')) = 1$ for any $(\theta, \alpha') \in {\bR}^q$. This completes the proof. \subsection{Proof of Proposition \ref{prop:anchor-points}} Suppose there are two sets of tilt parameters $(\theta_k, \alpha_k)$'s and $(\eta_k, \beta_k)$'s that satisfy \eqref{eq:distribution-matching}: \[ q_X\{x\} = \sum_{k=1}^Kp\{x,Y=k\}\textrm{exp}(\theta_k^\top T(x)+ \alpha_k) = \sum_{k=1}^Kp\{x,Y=k\}\textrm{exp}(\eta_k^\top T(x) +\beta_k). \] For any $x\in\mathcal{S}_k$, the terms that include $p\{x,Y=l\}$, $l\ne k$ vanish: \[ p\{x,Y=k\}\textrm{exp}(\theta_k^\top T(x)+\alpha_k) = p\{x,Y=k\}\textrm{exp}(\eta_k^\top T(x)+\beta_k).
\] This implies \[ \theta_k^\top T(x) + \alpha_k = \eta_k^\top T(x) + \beta_k \text{ for all }x\in\mathcal{S}_k. \] We conclude $\theta_k = \eta_k$ and $\alpha_k = \beta_k$ because $T(\mathcal{S}_k)$ is $p$-dimensional, so there are $p$ points $x_1,\dots,x_p\in\mathcal{S}_k$ such that $T(x_1),\dots,T(x_p)\in{\bR}^p$ are linearly independent. \subsection{Proof of Theorem \ref{thm:tilt-concentration}} For a probabilistic classifier $\eta: \mathcal{X} \to \Delta^K$ and the parameter $\xi = (\xi_1^\top , \dots, \xi_K^\top )^\top$ we define the centered logit function $f:\mathcal{X} \to {\bR}^K$ as $f_a(x) = \log\{\eta_a(x)\} - \frac 1K \sum_{b = 1}^K \log\{\eta_b(x)\}$. We define the functions $u_k(f, \xi) = \eta_k(x) \textrm{exp}(\xi_k^\top S(x))$, $u_{\cdot} (f, \xi) = \sum_{k=1}^K u_k(f, \xi)$ and $v_k(f, \xi) = u_k(f, \xi)/u_{\cdot}(f, \xi)$, and notice that the objective is \begin{equation} \textstyle \hat L(f, \xi) = {\bE}_{\widehat{Q}_X} [\log\{ u_{\cdot}(f, \xi) \}] - \log\{ {\bE}_{\widehat{P}}[\textrm{exp}(\xi_y ^\top S(x))] \}\,, \end{equation} whereas the true objective is \begin{equation} \textstyle L^\star( f, \xi) = {\bE}_{Q_X} [\log\{ u_{\cdot}(f, \xi) \}] - \log\{ {\bE}_{P}[\textrm{exp}(\xi^\top_y S(x))] \}\,.
\end{equation} We see that the first order optimality conditions in estimating $\hat \xi$ are \begin{equation} \textstyle \label{eq:optimality-estimation} \begin{aligned} 0 & = \partial_{\xi_a} \hat L(\hat f, \hat \xi)\\ & = \partial_{\xi_a}\big[ {\bE}_{\widehat{Q}_X} [\log\{ u_{\cdot}(\hat f, \hat \xi) \}] - \log\{ {\bE}_{\widehat{P}}[\textrm{exp}( {\hat \xi_y} ^\top S(x))] \}\big]\\ & = {\bE}_{\widehat{Q}_X}\big[ \partial_{\xi_a} \{u_{\cdot}(\hat f, \hat \xi)\}/u_{\cdot}(\hat f, \hat \xi) \big] - \frac{\partial_{\xi_a}\{ {\bE}_{\widehat{P}}[\textrm{exp}({\hat \xi_y} ^\top S(x))] \}}{ {\bE}_{\widehat{P}}[\textrm{exp}({\hat \xi_y} ^\top S(x))]}\\ & = {\bE}_{\widehat{Q}_X} [S(x)v_a(\hat f, \hat \xi)] - {\bE}_{\widehat{P}}[S(x)\textrm{exp}({\hat \xi_a} ^\top S(x))\mathbf{I}\{y = a\}] \end{aligned} \end{equation} where the last equality holds because ${\bE}_{\widehat{P}}[\textrm{exp}({\hat \coef}_y ^\top S(x))] = 1$. Similarly, the first order optimality conditions at the truth (for $\xi^\star$) are \begin{equation} \textstyle \label{eq:optimality-truth} \begin{aligned} 0 &= \partial_{\xi_a} L^\star( f^\star, \xi^\star)\\ &= {\bE}_{Q_X} [S(x)v_a( {f^\star}, \coef^\star)] - {\bE}_{P}[S(x)\textrm{exp}({\xi_a^\star} ^\top S(x))\mathbf{I}\{y = a\}]\\ &= {\bE}_{Q_X} [S(x)v_a( {f^\star}, \coef^\star)] - {\bE}_{P_X}[S(x)\textrm{exp}({\xi_a^\star} ^\top S(x))\eta_{P, a}^\star(x)]\,. \end{aligned} \end{equation} We decompose \eqref{eq:optimality-estimation} using a Taylor expansion and obtain: \begin{equation} \textstyle \label{eq:optimality-theta-hat} 0 = \partial_{\xi_a} \hat L ({f^\star}, {\hat \coef}) + \langle {\hat f} - {f^\star}, \partial_{f} \partial_{\xi_a} \hat L( {\tilde f}, {\hat \coef})\rangle \end{equation} where $\tilde f$ is a function in the bracket $[f^\star, \hat f]$, {\it i.e.}\ for every $x$, $\tilde f(x)$ is a number between $\hat f(x)$ and $f^\star(x)$. \paragraph{Bound on $\langle {\hat f} - {f^\star}, \partial_{f} \partial_{\xi_a} \hat L( {\tilde f}, {\hat
\coef})\rangle $:} To bound the term we define $\zeta = {\hat f} - {f^\star}$ and notice that \[ \begin{aligned} \langle \zeta, \partial_{f} \partial_{\xi_a} \hat L( f, {\hat \coef})\rangle & =\textstyle \sum_{b}\langle \zeta_b, \partial_{f_b} \partial_{\xi_a} \hat L( f, {\hat \coef})\rangle\\ &=\textstyle \sum_{b}\langle \zeta_b, \partial_{f_b} \{{\bE}_{\widehat{Q}_X} [S(x)v_a(f, \hat \xi)]\} \rangle \\ &=\textstyle \sum_{b}{\bE}_{\widehat{Q}_X}[S(x) \zeta_b(x) \partial_{f_b}\{v_a(f, \hat \xi) \} ] \\ &=\textstyle \sum_{b}{\bE}_{\widehat{Q}_X}[S(x) \zeta_b(x) v_a(f, {\hat \coef}) (\delta_{ab} - v_b(f, {\hat \coef})) ]\,. \end{aligned} \] The derivative in the last equality of the above display is calculated in Lemma \ref{lemma:derivatives}. Here, from Assumption \ref{assump:bounded-covariate} we notice that $S(x)$ is bounded, {\it i.e.}, there exists a $c_1>0$ such that $\|S(x)\|_2 \le c_1$ for all $x \in \mathcal{X}$. This implies the following: we have \[ \begin{aligned} \textstyle \|{\bE}_{\widehat{Q}_X}[S(x) \zeta_b(x) v_a(f, \xi) (\delta_{ab} - v_b(f, \xi)) ]\|_2 \le c_1{\bE}_{\widehat{Q}_X}[ |\zeta_b(x)| ] \end{aligned} \] since $|v_a(f, \xi) (\delta_{ab} - v_b(f, \xi))|\le 1$, and we have \[ \begin{aligned} \textstyle \|\langle \zeta, \partial_{f} \partial_{\xi_a} \hat L( f, {\hat \coef})\rangle\|_2 \le \sum_b c_1{\bE}_{\widehat{Q}_X}[ |\zeta_b(x)| ] \le c_1 \sqrt{K} {\bE}_{\widehat{Q}_X}[ \|\zeta(x)\|_2 ]\,. \end{aligned} \] By Assumption \ref{assmp:source-classifier}, with probability at least $ 1- \delta$ it holds that $\sup_{i \in[n_Q]} \|{\hat f}(x_{Q, i}) - {f^\star}(x_{Q,i})\|_2 \le c_2 r_{n_P} \sqrt{\log(n_Q)\log\{1/\delta\}}$, and we conclude that \begin{equation} \|\langle {\hat f} - {f^\star}, \partial_{f} \partial_{\xi_a} \hat L( {\tilde f}, {\hat \coef})\rangle\|_2 \le \sqrt{K} c_1 c_2 r_{n_P} \sqrt{\log(n_Q)\log\{1/\delta\}} \end{equation} holds with probability at least $ 1- \delta$.
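The softmax-tilt derivative identity $\partial_{f_b} \{v_a(f, \xi)\} = v_a(f, \xi)(\delta_{a,b} - v_b(f, \xi))$ of Lemma \ref{lemma:derivatives} can also be sanity-checked by finite differences; the following standalone sketch (all names are ours) does exactly that:

```python
import numpy as np

def v(f, xi, S):
    # v_a = u_a / u_. with u_a = eta_a exp(xi_a^T S) and eta = softmax(f)
    eta = np.exp(f - f.max())
    eta = eta / eta.sum()
    u = eta * np.exp(xi @ S)
    return u / u.sum()

# Central-difference check of  d v_a / d f_b = v_a (delta_{ab} - v_b)
rng = np.random.default_rng(1)
K, p = 4, 3
f, xi, S = rng.normal(size=K), rng.normal(size=(K, p)), rng.normal(size=p)
v0, eps, max_err = v(f, xi, S), 1e-6, 0.0
for b in range(K):
    fp, fm = f.copy(), f.copy()
    fp[b] += eps
    fm[b] -= eps
    numeric = (v(fp, xi, S) - v(fm, xi, S)) / (2 * eps)
    analytic = v0 * (np.eye(K)[b] - v0[b])   # v_a (delta_{ab} - v_b), all a at once
    max_err = max(max_err, float(np.max(np.abs(numeric - analytic))))
```

The numerical and analytic derivatives agree to within the finite-difference error.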
\paragraph{The term $\partial_{\xi_a} \hat L({f^\star}, {\hat \coef})$} We use the strong convexity Assumption \ref{assmp:strong-convexity} and the uniform convergence of the loss, \[\sup_{\xi \in \mathcal{K}}|\hat L(f^\star, \xi) - L^\star(f^\star, \xi)| \stackrel{n_P, n_Q \to \infty}{\longrightarrow} 0\] for any compact set $\mathcal{K}$, together with \citet[Corollary 3.2.3]{vaart2000Weak}, to conclude that ${\hat \coef} \to \coef^\star$ in probability; hence ${\hat \coef}$ is a consistent estimator of $\coef^\star$. Following the consistency of ${\hat \coef}$, we see that for sufficiently large $n_P, n_Q$ we have $\|{\hat \coef} - \coef^\star\|_2 \le \delta_\xi$ ($\delta_\xi$ is chosen according to Lemma \ref{lemma:lb-normalizer}) with probability at least $ 1 -\delta$, and on this event it holds that $\|{\hat \coef}\|_2 \le \|\coef^\star\|_2 + \delta_\xi$. We define the empirical process \begin{equation}\label{eq:emp-1st-derivative} Z_{a, n_P, n_Q} = \sup_{\|\xi\|_2\le \|\coef^\star\|_2 + \delta_\xi} \|\partial_{\xi_a}\hat L({f^\star}, \xi) - \partial_{\xi_a}{L^\star}({f^\star}, \xi)\|_2 \end{equation} for which we shall provide a high probability upper bound.
We denote $Z_{a, n_P, n_Q}(\xi) = \partial_{\xi_a}\hat L({f^\star}, \xi) - \partial_{\xi_a}{L^\star}({f^\star}, \xi)$ and notice that \[ \begin{aligned} & \partial_{\xi_a} \hat L({f^\star}, \xi) - \partial_{\xi_a}{L^\star}({f^\star}, \xi)\\ & = \underbrace{{\bE}_{\widehat{Q}_X} [S(x)v_a({f^\star}, \xi)] - {\bE}_{Q_X} [S(x)v_a({f^\star}, \xi)]}_{\triangleq \mathbf{A}(\xi)}\\ & ~~~~ \underbrace{ - \frac{{\bE}_{\widehat{P}}[S(x)\textrm{exp}( \xi_a ^\top S(x))\mathbf{I}\{y = a\}]}{{\bE}_{\widehat{P}}[\textrm{exp}( \xi_y ^\top S(x))]} + \frac{{\bE}_{P}[S(x)\textrm{exp}( \xi_a ^\top S(x))\mathbf{I}\{y = a\}]}{{\bE}_{P}[\textrm{exp}( \xi_y ^\top S(x))]} }_{\triangleq \mathbf{B}(\xi)} = \mathbf{A}(\xi) + \mathbf{B}(\xi) \end{aligned} \] where to bound $\mathbf{A}(\xi)$ we notice that the $S(x_{Q, i})v_a({f^\star}, \xi)$ are \textrm{IID}\ and bounded by $c_1$ ($\|S(x)v_a({f^\star}, \xi)\|_2 \le \|S(x)\|_2 \le c_1$ for all $x\in \mathcal{X}$) and hence sub-gaussian. We apply Hoeffding's concentration inequality for a sample mean of \textrm{IID}\ sub-gaussian random variables and obtain a constant $c_2>0$ such that for any $\delta>0$ with probability at least $1 - \delta$ it holds \[ \textstyle \|\mathbf{A}(\xi)\|_2 = \|{\bE}_{\widehat{Q}_X} [S(x)v_a({f^\star}, \xi)] - {\bE}_{Q_X} [S(x)v_a({f^\star}, \xi)]\|_2 \le c_1 c_2 \sqrt{\frac{\log (1/\delta)}{n_Q}}\,. \] Using a chaining argument over an $\ell_2$ ball of radius $\|\coef^\star\|_2 + \delta_\xi$ we obtain a uniform bound as follows: there exists a constant $c_3>0$ such that for any $\delta>0$ with probability at least $1 - \delta$ it holds \begin{equation} \textstyle \label{eq:bound.A} \sup_{\xi: \|\xi \|_2 \le \|\coef^\star\|_2 + \delta_\xi}\|\mathbf{A}(\xi)\|_2 \le c_1 c_3 \sqrt{\frac{K(p+1)\log (1/\delta)}{n_Q}}\,.
\end{equation} To bound $\mathbf{B}$ we first define \[ \begin{aligned} \mathbf{B}.1(\xi, n_P) \triangleq & ~{\bE}_{\widehat{P}}[S(x)\textrm{exp}( \xi_a ^\top S(x))\mathbf{I}\{y = a\}] - {\bE}_{P}[S(x)\textrm{exp}( \xi_a ^\top S(x))\mathbf{I}\{y = a\}]\\ \mathbf{B}.2(\xi, n_P) \triangleq &~ {\bE}_{\widehat{P}}[\textrm{exp}( \xi_y ^\top S(x))] - {\bE}_{P}[\textrm{exp}( \xi_y ^\top S(x))] \end{aligned} \] and notice that both the random variables $\{ S(x_{P, i})\textrm{exp}( \xi_a ^\top S(x_{P, i}))\mathbf{I}\{y_{P, i} = a\} \}_{i = 1}^{n_P}$ and $\{ \textrm{exp}( \xi_{y_{P, i}} ^\top S(x_{P, i})) \}_{i = 1}^{n_P}$ are bounded for all $\|\xi \|_2 \le \|\coef^\star\|_2 + \delta_\xi$. As before, we obtain constants $c_4, c_5 > 0$ such that the following hold with probability at least $1- \delta$: \begin{equation} \label{eq:sup-con-b1b2} \begin{aligned} \sup_{\xi: \|\xi \|_2 \le \|\coef^\star\|_2 + \delta_\xi} \big\|\mathbf{B}.1(\xi, n_P)\big\|_2 & \le\textstyle c_4 \sqrt{\frac{K(p+1)\log (1/\delta)}{n_P}}\\ \sup_{\xi: \|\xi \|_2 \le \|\coef^\star\|_2 + \delta_\xi} \big|\mathbf{B}.2(\xi, n_P)\big| & \le\textstyle c_5 \sqrt{\frac{K(p+1)\log (1/\delta)}{n_P}}\,. \end{aligned} \end{equation} From Lemma \ref{lemma:lb-normalizer} we have \begin{equation}\label{eq:lb-normalizer} \inf_{\xi: \|\xi \|_2 \le \|\coef^\star\|_2 + \delta_\xi} {\bE}_{P}[\textrm{exp}( \xi_y ^\top S(x))] \ge \frac 12\,.
\end{equation} Gathering all the inequalities in $\mathbf{B}$ we obtain \[ \begin{aligned} \mathbf{B}(\xi) & = - \frac{{\bE}_{\widehat{P}}[S(x)\textrm{exp}( \xi_a ^\top S(x))\mathbf{I}\{y = a\}]}{{\bE}_{\widehat{P}}[\textrm{exp}( \xi_y ^\top S(x))]} + \frac{{\bE}_{P}[S(x)\textrm{exp}( \xi_a ^\top S(x))\mathbf{I}\{y = a\}]}{{\bE}_{P}[\textrm{exp}( \xi_y ^\top S(x))]}\\ & = - \frac{{\bE}_{P}[S(x)\textrm{exp}( \xi_a ^\top S(x))\mathbf{I}\{y = a\}] +\mathbf{B}.1(\xi, n_P) }{{\bE}_{P}[\textrm{exp}( \xi_y ^\top S(x))] + \mathbf{B}.2(\xi, n_P)} + \frac{{\bE}_{P}[S(x)\textrm{exp}( \xi_a ^\top S(x))\mathbf{I}\{y = a\}]}{{\bE}_{P}[\textrm{exp}( \xi_y ^\top S(x))]}\\ & = \frac{-\mathbf{B}.1(\xi, n_P){\bE}_{P}[\textrm{exp}( \xi_y ^\top S(x))] + \mathbf{B}.2(\xi, n_P){\bE}_{P}[S(x)\textrm{exp}( \xi_a ^\top S(x))\mathbf{I}\{y = a\}]}{\Big\{{\bE}_{P}[\textrm{exp}( \xi_y ^\top S(x))] + \mathbf{B}.2(\xi, n_P)\Big\}{\bE}_{P}[\textrm{exp}( \xi_y ^\top S(x))]} \end{aligned} \] and this implies \[ \begin{aligned} \|\mathbf{B}(\xi)\|_2 & \le \frac{\|\mathbf{B}.1(\xi, n_P)\|_2{\bE}_{P}[\textrm{exp}( \xi_y ^\top S(x))] + |\mathbf{B}.2(\xi, n_P)|\|{\bE}_{P}[S(x)\textrm{exp}( \xi_a ^\top S(x))\mathbf{I}\{y = a\}]\|_2}{\Big\{{\bE}_{P}[\textrm{exp}( \xi_y ^\top S(x))] - |\mathbf{B}.2(\xi, n_P)|\Big\}{\bE}_{P}[\textrm{exp}( \xi_y ^\top S(x))]} \end{aligned} \] where we use \eqref{eq:sup-con-b1b2} and \eqref{eq:lb-normalizer} to obtain a constant $c_6>0$ such that with probability at least $1 - \delta$ it holds \begin{equation}\textstyle\label{eq:bound.B} \sup_{\xi: \|\xi \|_2 \le \|\coef^\star\|_2 + \delta_\xi} \|\mathbf{B}(\xi)\|_2 \le c_6 \sqrt{\frac{K(p+1)\log (1/\delta)}{n_P}}\,.
\end{equation} We now combine \eqref{eq:bound.A} and \eqref{eq:bound.B} and obtain a constant $c_7>0$ such that with probability at least $1 - 2\delta$ we have \begin{equation} \textstyle\label{eq:emp-ub} Z_{a, n_P, n_Q} \le c_7 \left\{\sqrt{\frac{K(p+1)\log (1/\delta)}{n_P}} + \sqrt{\frac{K(p+1)\log (1/\delta)}{n_Q}}\right\}\,. \end{equation} Returning to the first order optimality condition \eqref{eq:optimality-theta-hat} for estimating ${\hat \coef}$ we notice that \[ \begin{aligned} 0 & = \textstyle\sum_{a} ({\hat \coef}_a - \coef^\star_a)^\top \big\{\partial_{\xi_a} \hat L ({f^\star}, {\hat \coef}) + \langle {\hat f} - {f^\star}, \partial_{f} \partial_{\xi_a} \hat L( {\tilde f}, {\hat \coef})\rangle\big\}\\ & = \textstyle\sum_{a} ({\hat \coef}_a - \coef^\star_a)^\top \partial_{\xi_a} {L^\star} ({f^\star}, {\hat \coef}) + \sum_{a} ({\hat \coef}_a - \coef^\star_a)^\top \big\{Z_{a, n_P, n_Q}({\hat \coef}) + \langle {\hat f} - {f^\star}, \partial_{f} \partial_{\xi_a} \hat L( {\tilde f}, {\hat \coef})\rangle\big\}\\ & = \textstyle({\hat \coef} - \coef^\star) ^\top \partial_{\xi} {L^\star} ({f^\star}, {\hat \coef}) + \sum_{a} ({\hat \coef}_a - \coef^\star_a)^\top \big\{Z_{a, n_P, n_Q}({\hat \coef}) + \langle {\hat f} - {f^\star}, \partial_{f} \partial_{\xi_a} \hat L( {\tilde f}, {\hat \coef})\rangle\big\}\,. \end{aligned} \] We combine it with the first order optimality condition for $\coef^\star$ \eqref{eq:optimality-truth} to obtain \[ \begin{aligned} ({\hat \coef} - \coef^\star) ^\top \big\{\partial_{\xi} {L^\star} ({f^\star}, {\hat \coef}) - \partial_{\xi} {L^\star} ({f^\star}, \coef^\star)\big\}\\ + \textstyle\sum_{a} ({\hat \coef}_a - \coef^\star_a)^\top \big\{Z_{a, n_P, n_Q}({\hat \coef}) + \langle {\hat f} - {f^\star}, \partial_{f} \partial_{\xi_a} \hat L( {\tilde f}, {\hat \coef})\rangle\big\} = 0 \end{aligned} \] which can be rewritten as \begin{equation} \label{eq:eqA.24} \textstyle ({\hat \coef} - \coef^\star) ^\top \big\{\partial_{\xi} {L^\star}
({f^\star}, {\hat \coef}) - \partial_{\xi} {L^\star} ({f^\star}, \coef^\star)\big\} = - \sum_{a} ({\hat \coef}_a - \coef^\star_a)^\top \big\{Z_{a, n_P, n_Q}({\hat \coef}) + \langle {\hat f} - {f^\star}, \partial_{f} \partial_{\xi_a} \hat L( {\tilde f}, {\hat \coef})\rangle\big\}\,. \end{equation} Using the strong convexity assumption at $\coef^\star$ we obtain that the left hand side in the above equation is lower bounded as \begin{equation} \label{eq:lhs} ({\hat \coef} - \coef^\star) ^\top \big\{\partial_{\xi} {L^\star} ({f^\star}, {\hat \coef}) - \partial_{\xi} {L^\star} ({f^\star}, \coef^\star)\big\} \ge \mu \|{\hat \coef} - \coef^\star\|_2^2 \,. \end{equation} Let $\mathcal{E}$ be the event on which the following hold: \begin{enumerate} \item $\|{\hat \coef} - \coef^\star\|_2 \le \delta_\xi$, \item $\|\langle {\hat f} - {f^\star}, \partial_{f} \partial_{\xi_a} \hat L(\tilde f, \hat \xi)\rangle\|_2 \le \sqrt{K}c_1c_2 r_{n_P} \sqrt{\log(n_Q)\log\{1/\delta\}}$ for all $a$, \item $Z_{a, n_P, n_Q} \le c_7\Big\{\sqrt{\frac{ K(p+1) \log(1/\delta)}{n_P}} + \sqrt{\frac{K(p+1) \log(1/\delta)}{n_Q}}\Big\}$ for all $a$. \end{enumerate} We notice that the event $\mathcal{E}$ has probability at least $1 - (2 K + 1)\delta$.
Under the event there exists a $c_9>0$ such that the right hand side in \eqref{eq:eqA.24} is upper bounded as \begin{equation} \label{eq:rhs} \begin{aligned} & \textstyle\left | - \sum_{a} ({\hat \coef}_a - \coef^\star_a)^\top \big\{Z_{a, n_P, n_Q}({\hat \coef}) + \langle {\hat f} - {f^\star}, \partial_{f} \partial_{\xi_a} \hat L( {\tilde f}, {\hat \coef})\rangle\big\}\right|\\ & \le \textstyle\sum_{a} \|{\hat \coef}_a - \coef^\star_a\|_2 \Big\{\|Z_{a, n_P, n_Q}({\hat \coef})\|_2 + \big\|\langle {\hat f} - {f^\star}, \partial_{f} \partial_{\xi_a} \hat L( {\tilde f}, {\hat \coef})\rangle\big\|_2 \Big\}\\ & \le\textstyle \sum_{a} \|{\hat \coef}_a - \coef^\star_a\|_2 \Big\{Z_{a, n_P, n_Q} + \big\|\langle {\hat f} - {f^\star}, \partial_{f} \partial_{\xi_a} \hat L( {\tilde f}, {\hat \coef})\rangle\big\|_2 \Big\}\\ & \le \textstyle \sum_{a} \|{\hat \coef}_a - \coef^\star_a\|_2 c_{9} \Big\{r_{n_P} \sqrt{\log(n_Q)\log\{1/\delta\}} + \sqrt{\frac{ K(p+1)\log(1/\delta)}{n_P}} + \sqrt{\frac{K(p+1) \log(1/\delta)}{n_Q}}\Big\} \\ & \le \textstyle c_{9} \Big\{r_{n_P} \sqrt{\log(n_Q)\log\{1/\delta\}} + \sqrt{\frac{ K(p+1)\log(1/\delta)}{n_P}} + \sqrt{\frac{K(p+1) \log(1/\delta)}{n_Q}}\Big\} \sqrt{K} \|{\hat \coef} - \coef^\star\|_2\,. \end{aligned} \end{equation} Combining the bounds \eqref{eq:lhs} and \eqref{eq:rhs} for the left and right hand sides, we obtain a $c_{10}>0$ such that on the event $\mathcal{E}$ it holds \[\textstyle\|{\hat \coef} - \coef^\star\|_2 \le c_{10}\Big\{r_{n_P} \sqrt{\log(n_Q)\log\{1/\delta\}} + \sqrt{\frac{K(p+1) \log(1/\delta)}{n_P}} + \sqrt{\frac{K(p+1) \log(1/\delta)}{n_Q}}\Big\}\,.
\] Having established concentration for ${\hat \coef}$, we now notice that \[ \begin{aligned} & \|\hat \omega - \omega^\star\|_{1, P} \\ & = \textstyle\int \big|\textrm{exp}({\hat \coef}_y^\top S(x)) - \textrm{exp}({\coef^\star_y}^\top S(x))\big| p_X(x)dx\\ & = \textstyle\int |({\hat \coef}_y - \coef^\star_y)^\top S(x)| \textrm{exp}(\xi_x) p_X(x)dx \end{aligned} \] where, by the mean value theorem, $\xi_x$ is a number between ${\hat \coef}_y^\top S(x)$ and ${\coef^\star_y}^\top S(x)$. On the event $\mathcal{E}$ we notice that $\|{\hat \coef}\|_2 \le \|\coef^\star\|_2 + \delta_\xi$ and hence it holds: $|{\hat \coef}_y^\top S(x)|\le \|{\hat \coef}_y\|_2 \|S(x)\|_2 \le \|{\hat \coef}\|_2 \|S(x)\|_2 \le c_1 (\|\coef^\star\|_2 + \delta_\xi)$. Furthermore, we notice that $|{\coef^\star_y}^\top S(x)|\le \|\coef^\star\|_2 \|S(x)\|_2 \le c_1\|\coef^\star\|_2$, which implies $|\xi_x| \le c_1 (\|\coef^\star\|_2 + \delta_\xi)$. Returning to the integral we obtain that on the event $\mathcal{E}$ it holds: \[ \begin{aligned} &\textstyle \int |({\hat \coef}_y - \coef^\star_y)^\top S(x)| \textrm{exp}(\xi_x) p_X(x)dx\\ & \le\textstyle \|{\hat \coef}_y - \coef^\star_y\|_2 \int \|S(x)\|_2\textrm{exp}\{c_1 (\|\coef^\star\|_2 + \delta_\xi)\} p_X(x)dx \\ & \le \|{\hat \coef} - \coef^\star\|_2 c_1 \textrm{exp}\{c_1 (\|\coef^\star\|_2 + \delta_\xi)\}\\ & \le\textstyle c_{11}\Big\{r_{n_P} \sqrt{\log(n_Q)\log\{1/\delta\}} + \sqrt{\frac{K(p+1) \log(1/\delta)}{n_P}} + \sqrt{\frac{K(p+1) \log(1/\delta)}{n_Q}}\Big\} \end{aligned} \] for some $c_{11}>0$, which holds with probability at least $ 1- (2K + 1)\delta$. This completes the proof of Theorem \ref{thm:tilt-concentration}. \begin{lemma} \label{lemma:lb-normalizer} There exists $\delta_\xi>0$ such that \[ \inf_{\xi: \|\xi \|_2 \le \|\coef^\star\|_2 + \delta_\xi} {\bE}_{P}[\textrm{exp}( \xi_y ^\top S(x))] \ge \frac 12\,.
\] \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma:lb-normalizer}] To establish a bound on ${\bE}_{P}[\textrm{exp}( \xi_y ^\top S(x))] - {\bE}_{P}[\textrm{exp}( {\coef^\star_y} ^\top S(x))]$ for any $\|\xi - \coef^\star\|_2 \le \delta_\xi$ we notice that \[ \begin{aligned} & |{\bE}_{P}[\textrm{exp}( \xi_y ^\top S(x))] - {\bE}_{P}[\textrm{exp}( {\coef^\star_y} ^\top S(x))]| \\ & = \textstyle\int \big|\textrm{exp}(\xi_y^\top S(x)) - \textrm{exp}({\coef^\star_y}^\top S(x))\big| p_X(x)dx\\ & = \textstyle\int |(\xi_y - \coef^\star_y)^\top S(x)| \textrm{exp}(\xi_x) p_X(x)dx \end{aligned} \] where, by the mean value theorem, $\xi_x$ is a number between $\xi_y^\top S(x)$ and ${\coef^\star_y}^\top S(x)$. We notice that $\|\xi\|_2 \le \|\coef^\star\|_2 + \delta_\xi$ and hence it holds: $|\xi_y^\top S(x)|\le \|\xi_y\|_2 \|S(x)\|_2 \le \|\xi\|_2 \|S(x)\|_2 \le c_1 (\|\coef^\star\|_2 + \delta_\xi)$. Furthermore, we notice that $|{\coef^\star_y}^\top S(x)|\le \|\coef^\star\|_2 \|S(x)\|_2 \le c_1\|\coef^\star\|_2$, which implies $|\xi_x| \le c_1 (\|\coef^\star\|_2 + \delta_\xi)$. Returning to the integral we obtain \[ \begin{aligned} &\textstyle \int |(\xi_y - \coef^\star_y)^\top S(x)| \textrm{exp}(\xi_x) p_X(x)dx\\ & \le\textstyle \|\xi_y - \coef^\star_y\|_2 \int \|S(x)\|_2\textrm{exp}\{c_1 (\|\coef^\star\|_2 + \delta_\xi)\} p_X(x)dx \\ & \le \delta_\xi c_1 \textrm{exp}\{c_1 (\|\coef^\star\|_2 + \delta_\xi)\}\,. \end{aligned} \] We choose $\delta_\xi>0$ small enough such that $\delta_\xi c_1 \textrm{exp}\{c_1 (\|\coef^\star\|_2 + \delta_\xi)\} \le 1/2$. Since ${\bE}_{P}[\textrm{exp}( {\coef^\star_y} ^\top S(x))] = 1$ we obtain that for any $\xi$ with $\|\xi - \coef^\star\|_2 \le \delta_\xi$ we have \[ \begin{aligned} \textstyle{\bE}_{P}[\textrm{exp}( \xi_y ^\top S(x))] \ge {\bE}_{P}[\textrm{exp}( {\coef^\star_y} ^\top S(x))] - |{\bE}_{P}[\textrm{exp}( \xi_y ^\top S(x))] - {\bE}_{P}[\textrm{exp}( {\coef^\star_y} ^\top S(x))]| \ge \frac 12\,. \end{aligned} \] This implies the lemma.
\end{proof} \begin{lemma}[Derivatives] \label{lemma:derivatives} The following holds: \begin{enumerate} \item $\partial_{\theta_b} u_a(f, \theta) = T(x)u_a(f, \theta) \delta_{a,b}$, $\partial_{\theta_b} u_{\cdot}(f, \theta) = T(x)u_b(f, \theta) $ and $\partial_{\theta_b} v_a(f, \theta) = T(x)v_a(f, \theta)\{\delta_{a, b} - v_b(f, \theta)\}$. \item $\partial_{f_b} \eta_a = \eta_a (\delta_{a, b} - \eta_b)$, $\partial_{f_b} \{u_a(f, \theta)\} = (\delta_{a, b} - \eta_b)u_a(f, \theta)$, $\partial_{f_b} \{u_{\cdot}(f, \theta)\} = u_b(f, \theta) - \eta_b u_{\cdot}(f, \theta)$ and $\partial_{f_b} \{v_a(f, \theta)\} = v_a(f, \theta) (\delta_{a, b} - v_b(f, \theta))$. \end{enumerate} \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma:derivatives}] We calculate the derivatives one by one. \paragraph{1:} We notice that \[ \begin{aligned} \partial_{\theta_b} u_a(f, \theta) &= \partial_{\theta_b} \{\eta_a(x) \textrm{exp}(\theta_a^\top T(x))\}\\ & = \eta_a(x) \textrm{exp}(\theta_a^\top T(x)) T(x) \delta_{a, b} = T(x)u_a(f, \theta) \delta_{a,b} \end{aligned} \] and that \[ \textstyle\partial_{\theta_b} u_{\cdot}(f, \theta) = \sum_{a = 1}^K T(x)u_a(f, \theta) \delta_{a,b} = T(x)u_b(f, \theta) \] which finally implies \[ \begin{aligned} \partial_{\theta_b} v_a(f, \theta) &=\textstyle \frac{\partial_{\theta_b} \{u_a(f, \theta)\} u_{\cdot }(f, \theta) - \partial_{\theta_b} \{u_{\cdot}(f, \theta)\} u_{a }(f, \theta) }{\{u_{\cdot} (f, \theta)\}^2}\\ & =\textstyle \frac{T(x)u_a(f, \theta) \delta_{a,b} u_{\cdot }(f, \theta) - T(x)u_b(f, \theta) u_{a }(f, \theta) }{\{u_{\cdot} (f, \theta)\}^2}\\ & = T(x) \{u_a / u_{\cdot}\} \{\delta_{a, b} - u_b / u_{\cdot}\} = T(x) v_a (\delta_{a, b} - v_b)\,. \end{aligned} \] \paragraph{2:} Here \[ \begin{aligned} \partial_{f_b} \eta_a = \textstyle\partial_{f_b} \left\{\frac{e^{f_a}}{\sum_{j} e^{f_j}} \right\} = \textstyle\frac{\delta_{a, b} e^{f_a}\sum_{j} e^{f_j} - e^{f_a} e^{f_b} }{\Big\{\sum_{j} e^{f_j}\Big\}^2} = \eta_a (\delta_{a, b} -\eta_b)\,,
\end{aligned} \] \[ \begin{aligned} \partial_{f_b} \{u_a(f, \theta)\} & = \partial_{f_b} \{\eta_a\} \textrm{exp}(\theta_a^\top T(x)) \\ & = \eta_a (\delta_{a, b} -\eta_b)\textrm{exp}(\theta_a^\top T(x)) = (\delta_{a, b} -\eta_b)u_a(f, \theta)\,, \end{aligned} \] \[ \textstyle \partial_{f_b} \{u_{\cdot}(f, \theta)\} = \sum_{a} (\delta_{a, b} -\eta_b)u_a(f, \theta) = u_{b}(f, \theta) - \eta_b u_{\cdot}(f, \theta) \] and finally, \[ \begin{aligned} \partial_{f_b} \{v_a(f, \theta)\} & = \frac{ \{\partial_{f_b} u_a(f, \theta)\} u_{\cdot}(f, \theta) - \{\partial_{f_b} u_{\cdot}(f, \theta)\} u_{a}(f, \theta) }{\{u_{\cdot}(f, \theta)\}^2}\\ & = \frac{ (\delta_{a, b} -\eta_b)u_a(f, \theta) u_{\cdot}(f, \theta) - \{u_{b}(f, \theta) - \eta_b u_{\cdot}(f, \theta)\} u_{a}(f, \theta) }{\{u_{\cdot}(f, \theta)\}^2}\\ & = (u_a / u_{\cdot}) \{ \delta_{a, b} u_{\cdot} - \cancel{\eta_b u_{\cdot}} - u_b +\cancel{\eta_b u_{\cdot}} \}/ u_{\cdot}\\ & = (u_a / u_{\cdot}) \{ \delta_{a, b} - (u_b/u_{\cdot}) \} = v_a \{ \delta_{a, b} - v_b \}\,. \end{aligned} \] \end{proof} \subsection{Proof of Lemma \ref{lemma:generalization-bound}} We start by decomposing the loss difference in the left hand side of equation \eqref{eq:gen-bound}.
\begin{equation} \begin{aligned} \mathcal{L}_Q(\hat f_{\hat \omega}) - \mathcal{L}_Q(f^\star) = \mathcal{L}_P(\hat f, \omega^\star) - \mathcal{L}_P(f^\star, \omega^\star)\\ = \textstyle\underbrace{\mathcal{L}_P(\hat f, \omega^\star) - \mathcal{L}_{\widehat{P}}(\hat f, \omega^\star)}_{(a)} + \underbrace{\mathcal{L}_{\widehat{P}}(\hat f, \omega^\star) - \mathcal{L}_{\widehat{P}}(\hat f, \hat \omega)}_{(b)} + \underbrace{\mathcal{L}_{\widehat{P}}(\hat f, \hat \omega) - \mathcal{L}_{\widehat{P}}( f^\star, \hat \omega)}_{\le 0}\\ \textstyle + \underbrace{\mathcal{L}_{\widehat{P}}( f^\star, \hat \omega) - \mathcal{L}_{\widehat{P}}( f^\star, \omega^\star)}_{(c)} + \underbrace{\mathcal{L}_{\widehat{P}}( f^\star, \omega^\star) - \mathcal{L}_P( f^\star, \omega^\star)}_{(d)}\,, \end{aligned} \end{equation} where we write $\hat f_{\hat \omega} \equiv \hat f$. \paragraph{Uniform bound on (a)} To control the term (a) in the decomposition above we establish a concentration bound on the following generalization error \[ \begin{aligned} \sup_{f \in \mathcal{F}} \big\{ \mathcal{L}_P( f, \omega^\star) - \mathcal{L}_{\widehat{P}}( f, \omega^\star) \big\} \\ =\textstyle \sup_{f \in \mathcal{F}} \Big\{ {\bE}\big[g_f(X, Y)\big] -\frac1 {{n_P}} \sum_{i = 1}^{{n_P}} g_f(X_{P, i}, Y_{P, i}) \Big\} \triangleq F(z_{1:{n_P}})\,, \end{aligned} \] where, for $i \ge 1$ we denote $z_{1:i} = (z_1, \dots, z_i)$ and $z_i = (X_{P, i}, Y_{P, i})$. First, we use a modification of McDiarmid's concentration inequality to bound $F(z_{1:{n_P}})$ in terms of its expectation and an $O(1/\sqrt{{n_P}})$ term, as elucidated in the following lemma. \begin{lemma} \label{lemma:mcdiarmid} There exists a constant $c_1>0$ such that with probability at least $1-\delta$ the following holds \begin{equation} \label{eq:conc-mcdirmid} \textstyle F(z_{1:{n_P}}) \le {\bE}\big[F(z_{1:{n_P}})\big] + c_1\sqrt{\frac{\log(1/\delta)}{{n_P}}}\,.
\end{equation} \end{lemma} Next, we use a symmetrization argument (see \cite[Chapter 2, Lemma 2.3.1]{wellner2013weak}) to bound the expectation ${\bE}\big[F(z_{1:{n_P}})\big]$ by the Rademacher complexity of the hypothesis class $\mathcal{G}$, {\it i.e.}, \begin{equation} \label{eq:symmetrization} {\bE}\big[F(z_{1:{n_P}})\big] \le 2 \mathcal{R}_{{n_P}} (\mathcal{G})\,. \end{equation} Combining \eqref{eq:conc-mcdirmid} and \eqref{eq:symmetrization} we obtain \begin{equation} \textstyle (a) = \mathcal{L}_P(\hat f, \omega^\star) - \mathcal{L}_{\widehat{P}}(\hat f, \omega^\star) \le 2 \mathcal{R}_{{n_P}}(\mathcal{G}) + c_1\sqrt{\frac{\log(1/\delta)}{{n_P}}}\,,\label{eq:bound-a} \end{equation} with probability at least $ 1- \delta$. \paragraph{Bound on (b) and (c)} Denoting $z_i = (X_{P, i}, Y_{P, i})$ and $\ell_f(z_i) = \ell(f(X_{P,i}), Y_{P, i})$ we notice that for any $f\in \mathcal{F}$ we have \[ \begin{aligned} \textstyle ~~\big|\mathcal{L}_{\widehat{P}}( f, \omega^\star) - \mathcal{L}_{\widehat{P}}( f, \hat \omega)\big| & = \textstyle\Big|\frac1{{n_P}}\sum_{i = 1}^{{n_P}}\big\{ \hat \omega(z_i) - \omega^\star (z_i) \big\} \ell_f(z_i)\Big|\\ & \le\textstyle \frac1{{n_P}}\sum_{i = 1}^{{n_P}}\big|\big\{ \hat \omega(z_i) - \omega^\star (z_i) \big\} \ell_f(z_i)\big| \le\textstyle \frac{\|\ell\|_\infty}{{n_P}}\sum_{i = 1}^{{n_P}}\big| \hat \omega(z_i) - \omega^\star (z_i) \big|\,. \end{aligned} \] Since $\hat \omega(z) - \omega^\star (z)$ is a sub-gaussian random variable, we use sub-gaussian concentration to establish that for some constant $c_2>0$ \begin{equation} \textstyle \text{for any }f \in \mathcal{F}, ~~\big|\mathcal{L}_{\widehat{P}}( f, \omega^\star) - \mathcal{L}_{\widehat{P}}( f, \hat \omega)\big| \le \|\ell\|_\infty \Big\{ {\bE}_{z_1}\big[|\hat \omega(z_1) - \omega^\star (z_1)|\big] + c_2 \sqrt{\frac{\log(1/\delta)}{{n_P}}} \Big\} \label{eq:bound-bc} \end{equation} with probability at least $1 - \delta$.
This provides a simultaneous bound (on the same probability event) for both (b) and (c) with $f = \hat f$ and $f = f^\star$. \paragraph{Bound on (d)} We note that \[ \begin{aligned} \textstyle ~~ \mathcal{L}_{\widehat{P}}( f^\star, \omega^\star) - \mathcal{L}_P( f^\star, \omega^\star) = \frac{1}{{n_P}}\sum_{i = 1}^{{n_P}} \omega^\star (z_i) \ell_{f^\star} (z_i) - {\bE}_P\big\{\omega^\star (z_1) \ell_{f^\star} (z_1)\big\}\,, \end{aligned} \] where $\{\omega^\star (z_i) \ell_{f^\star} (z_i)\}_{i = 1}^{{n_P}}$ are \textrm{IID}\ sub-gaussian random variables. Using Hoeffding's concentration bound we conclude that there exists a constant $c_3>0$ such that for any $\delta>0$ the following holds with probability at least $1 - \delta$ \begin{equation} \label{eq:bound-d} \textstyle \frac{1}{{n_P}}\sum_{i = 1}^{{n_P}} \omega^\star (z_i) \ell_{f^\star} (z_i) - {\bE}_P\big\{\omega^\star (z_1) \ell_{f^\star} (z_1)\big\} \le c_3 \sqrt{\frac{\log(1/\delta)}{{n_P}}}\,. \end{equation} Finally, using \eqref{eq:bound-a} on (a) (which is true on an event of probability $\ge 1 - \delta$), \eqref{eq:bound-bc} on (b) and (c) (simultaneously true on an event of probability $\ge 1- \delta$), and \eqref{eq:bound-d} on (d) (which holds on an event of probability $\ge 1- \delta$) we conclude that with probability at least $1 - 3\delta$ the following holds \begin{equation} \textstyle \mathcal{L}_Q(\hat f_{\hat \omega}) - \mathcal{L}_Q(f^\star) \le 2 \mathcal{R}_{{n_P}}(\mathcal{G}) + \|\ell\|_\infty \cdot {\bE}_{z_1}\big[|\hat \omega(z_1) - \omega^\star (z_1)|\big] + c_4 \sqrt{\frac{\log(1/\delta)}{{n_P}}} \end{equation} where $c_4 = c_1 + \|\ell\|_\infty c_2 + c_3$. \begin{proof}[Proof of Lemma \ref{lemma:mcdiarmid}] For notational simplicity we drop the subscript from ${n_P}$ and denote the sample size simply by $n$.
For $i \le n$ we define ${\bE}_{i:n}$ as the expectation with respect to the random variables $z_i, \dots, z_n$, and for $i > n$ we define ${\bE}_{i:n}\big[F(z_{1:n})\big] = F(z_{1:n})$, and notice that \begin{equation} \label{eq:martingale-sum} \textstyle F(z_{1:n}) - {\bE}\big[F(z_{1:n})\big] = \sum_{i = 1}^{n} \Big\{ {\bE}_{(i+1):n}\big[F(z_{1:n})\big] - {\bE}_{{i:n}}\big[F(z_{1:n})\big] \Big\}\,. \end{equation} Here, \begin{equation} \label{eq:martingale-diff} \begin{aligned} & ~~ {\bE}_{(i+1):n}\big[F(z_{1:n})\big] - {\bE}_{{i:n}}\big[F(z_{1:n})\big]\\ & = {\bE}_{(i+1):n} \Big\{F(z_{1:n}) - {\bE}_{z_i}\big[F(z_{1:n})\big]\Big\}\\ & = {\bE}_{(i+1):n} \Big\{F(z_1, \dots, z_{i-1}, z_i, z_{i+1}, \dots, z_n) - {\bE}_{z_i'}\big[F(z_1, \dots, z_{i-1}, z_i', z_{i+1}, \dots, z_n)\big]\Big\}\\ & = {\bE}_{(i+1):n} {\bE}_{z_i'} \Big\{F(z_1, \dots, z_{i-1}, z_i, z_{i+1}, \dots, z_n) - F(z_1, \dots, z_{i-1}, z_i', z_{i+1}, \dots, z_n)\Big\} \end{aligned} \end{equation} where $z_i'$ is an \textrm{IID}\ copy of $z_i$.
We notice that \begin{equation} \label{eq:fluctuation-bound} \begin{aligned} & ~~ F(z_1, \dots, z_{i-1}, z_i, z_{i+1}, \dots, z_n) - F(z_1, \dots, z_{i-1}, z_i', z_{i+1}, \dots, z_n) \\ & = \textstyle\sup_{f \in \mathcal{F}} \Big\{ {\bE}\big[g_f(z_1)\big] -\frac1 {n} \sum_{j = 1}^{n} g_f(z_j) \Big\}\\ & ~~~~~~~~- \sup_{f \in \mathcal{F}} \Big\{ {\bE}\big[g_f(z_1)\big] -\frac1 {n} \sum_{j = 1}^{n} g_f(z_j) +\frac 1n g_f(z_i) - \frac1n g_f(z_i') \Big\}\\ & \le \textstyle\sup_{f \in \mathcal{F}} \Big\{ \frac 1n g_f(z_i') - \frac1n g_f(z_i) \Big\} \end{aligned} \end{equation} where the last inequality is obtained by setting $A_f = {\bE}\big[g_f(z_1)\big] -\frac1 {n} \sum_{j = 1}^{n} g_f(z_j)$ and $B_f = \frac 1n g_f(z_i) - \frac1n g_f(z_i')$ in the following stream of inequalities \[ \begin{aligned} \sup_{f\in \mathcal{F}} \{A_f\} - \sup_{f\in \mathcal{F}} \{A_f+ B_f\} & = \textstyle\sup_{f\in \mathcal{F}} \{A_f+B_f - B_f\} - \sup_{f\in \mathcal{F}} \{A_f+ B_f\} \\ & \le \textstyle\sup_{f\in \mathcal{F}} \{A_f+B_f\} + \sup_{f\in \mathcal{F}} \{-B_f\} - \sup_{f\in \mathcal{F}} \{A_f+ B_f\}\\ & = \sup_{f\in \mathcal{F}} \{-B_f\}\,, \end{aligned} \] and that \begin{equation} \label{eq:sup-diff-bound} \begin{aligned} \textstyle\sup_{f \in \mathcal{F}} \Big\{ \frac 1n g_f(z_i') - \frac1n g_f(z_i) \Big\} & \textstyle\le \frac 1n \Big\{ \sup_{f \in \mathcal{F}} |g_f(z'_i)| + \sup_{f \in \mathcal{F}} |g_f(z_i)|\Big\}\\ & = \textstyle\frac 1n \Big\{ \omega^\star(z_i')\sup_{f \in \mathcal{F}} |\ell_f(z_i')| + \omega^\star(z_i)\sup_{f \in \mathcal{F}} |\ell_f(z_i)|\Big\}\\ & \le \textstyle\frac{\|\ell\|_\infty}{n}\big\{ \omega^\star(z_i') + \omega^\star(z_i)\big\}\,, \end{aligned} \end{equation} where $\ell_f(z) = \ell(f(x), y)$.
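The elementary inequality $\sup_f \{A_f\} - \sup_f \{A_f + B_f\} \le \sup_f \{-B_f\}$ used above admits a quick numerical sanity check; in this toy sketch (all names ours) each array index plays the role of one $f \in \mathcal{F}$:

```python
import numpy as np

# Check sup_f A_f - sup_f (A_f + B_f) <= sup_f (-B_f) on random finite
# "function classes": each of the 100 array entries stands for one f.
rng = np.random.default_rng(2)
violations = 0
for _ in range(1000):
    A = rng.normal(size=100)
    B = rng.normal(size=100)
    if A.max() - (A + B).max() > (-B).max() + 1e-12:
        violations += 1
```

No random instance violates the inequality, as the algebraic argument guarantees.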
We use inequalities \eqref{eq:fluctuation-bound} and \eqref{eq:sup-diff-bound} in \eqref{eq:martingale-diff} and get the following \begin{equation} \label{eq:martingale-diff-ub} \begin{aligned} {\bE}_{(i+1):n}\big[F(z_{1:n})\big] - {\bE}_{{i:n}}\big[F(z_{1:n})\big] &\le \frac{\|\ell\|_\infty}{n}\big\{ {\bE}_{z_i'} [w^\star(z_i')] + w^\star(z_i)\big\}\,. \end{aligned} \end{equation} Now, we use \eqref{eq:martingale-sum} and \eqref{eq:martingale-diff-ub} to bound the moment generating function of $F(z_{1:n}) - {\bE}[F(z_{1:n})]$ as seen in the following inequalities. For $\lambda > 0$ \begin{equation} \label{eq:mgf-bound} \begin{aligned} & ~~ {\bE}\Big\{ \textrm{exp}\Big(\lambda\big\{F(z_{1:n}) - {\bE}[F(z_{1:n})]\big\}\Big)\Big\}\\ & = {\bE}\Big\{ \textrm{exp}\Big(\lambda\sum_{i = 1}^n\big\{{\bE}_{(i+1):n}\big[F(z_{1:n})\big] - {\bE}_{{i:n}}\big[F(z_{1:n})\big]\big\}\Big)\Big\}\\ & \le {\bE}\Big\{ \textrm{exp}\Big(\lambda\sum_{i = 1}^n\frac{\|\ell\|_\infty}{n}\big\{ {\bE} [w^\star(z_i')] + w^\star(z_i)\big\}\Big)\Big\}\\ & \le {\bE}\Big\{ \textrm{exp}\Big(\lambda\sum_{i = 1}^n\frac{\|\ell\|_\infty}{n}\big\{ w^\star(z_i')+ w^\star(z_i)\big\}\Big)\Big\}, ~~ \text{since} ~~ e^{{\bE}\{X\}} \le {\bE}\big\{e^X\big\}\\ & = \prod_{i = 1}^n{\bE}\Big\{ \textrm{exp}\Big(\frac{\lambda \|\ell\|_\infty}{n} w^\star(z_i)\Big)\Big\} {\bE}\Big\{ \textrm{exp}\Big(\frac{\lambda \|\ell\|_\infty}{n} w^\star(z_i')\Big)\Big\}\\ & \le \prod_{i = 1}^n \textrm{exp}\Big(\frac{2c\lambda^2 \|\ell\|_\infty^2}{n^2} \Big) = \textrm{exp}\Big( \frac{2c\lambda^2 \|\ell\|_\infty^2}{n} \Big)\,. 
\end{aligned} \end{equation} Using the bound on the moment generating function in \eqref{eq:mgf-bound} we get \[ \begin{aligned} & ~~{\bP}\big\{F(z_{1:n}) - {\bE}[F(z_{1:n})] > t\big\}\\ &\le e^{-\lambda t} {\bE}\Big\{ \textrm{exp}\Big(\lambda\big\{F(z_{1:n}) - {\bE}[F(z_{1:n})]\big\}\Big)\Big\}\\ & = \textrm{exp}\Big(-\lambda t + \frac{2c\lambda^2 \|\ell\|_\infty^2}{n} \Big)\,, \end{aligned} \] where letting $\lambda = nt /(4c \|\ell\|_\infty^2)$ we obtain \[ {\bP}\big\{F(z_{1:n}) - {\bE}[F(z_{1:n})] > t\big\} \le \textrm{exp} \Big( -\frac{nt^2}{8c \|\ell\|_\infty^2} \Big)\,, \] and letting $t = \|\ell\|_\infty\sqrt{8c\log(1/\delta)/n}$ we establish the lemma with $C = \|\ell\|_\infty \sqrt{8c}$. \end{proof} \section{Experiment details} \subsection{Data details} \label{sup:exp:data} \subsubsection{\textsc{Waterbirds}} \paragraph{Data} The training data has 4795 sample points with group-wise sample sizes $\{0: 3498, 1: 184, 2: 56, 3: 1057\}$. We combine the test and the validation data to create test data which has 8192 sample points, and the group-wise sample sizes are $\{0: 3189, 1:3187, 2:908, 3:908\}$. The images are embedded into 512-dimensional feature vectors using ResNet18 \citep{he2016Deep} pre-trained on ImageNet \citep{deng2009imagenet}, which we use as covariates. \paragraph{Source and target domains} For the source domain we use the original training set of images. We consider five different target domains: (1) the target domain with all the groups $g \in \{0, 1, 2, 3\}$ from the test data, (2) with groups $g \in \{0, 3\}$, {\it i.e.}, landbirds on land backgrounds and waterbirds on water backgrounds, (3) with groups $g \in\{ 0, 2\}$, {\it i.e.}, landbirds on land backgrounds and waterbirds on land backgrounds, (4) with groups $g \in \{1, 3\}$, and (5) with groups $g \in \{1, 2\}$. Note that all of the target domains contain both landbirds and waterbirds.
\subsubsection{\textsc{Breeds}} \paragraph{Data} \textsc{Breeds}\ \citep{santurkar2020breeds} is a subpopulation shift benchmark derived from ImageNet \citep{deng2009imagenet}. It uses the class hierarchy to define groups within classes. For example, in the Entity-30 task considered in this experiment, the class fruit is represented by strawberry, pineapple, jackfruit, and Granny Smith in the source and by buckeye, corn, ear, and acorn in the target. The source and target datasets are each split into training and test sets. In the source domain the training (resp. test) data has 159037 (resp. 6200) sample points, whereas in the target domain the sample sizes are 148791 (resp. 5800) for the training (resp. test) data. There are 30 different classes in both the source and target domains. The highest (resp. lowest) class proportion in the source training data is 4.9\% (resp. 1.58\%) and in the target training data is 4.9\% (resp. 1.53\%). Here, the images are embedded using SwAV \citep{caron2020unsupervised}. The embedding is of dimension 2048, which we consider as covariates for our analysis. \paragraph{Source and target domains} In our \textsc{Breeds}\ case study we mix a small proportion $\pi$ of labeled target samples into the source domain and evaluate the performance of our method for several mixing proportions $\pi$. Below we describe the step-by-step procedure for creating the mixed source and target datasets: \begin{enumerate} \item In both the source and target domains we combine the training and test datasets. \item Let $m$ be the sample size of the combined source data. We add $\lfloor m\pi\rfloor$ many labeled target samples into the source data. \item The resulting source and target datasets are then split to create training ($80\%$) and test ($20\%$) data.
\end{enumerate} \subsection{Model details} \label{sup:exp:model} \subsubsection{Implementation for ExTRA} We describe the implementation details for the ExTRA weights in Algorithm \ref{alg:extra}. \paragraph{The normalization regularizer $\lambda$} is required to control the value of the normalizer $\hat N_t$. It makes sure that the value of $\hat N_t$ remains close to 1, as the function $x + x^{-1}, x>0$ is minimized at $x = 1$. The regularizer is particularly important when the feature distributions of the source and target data have very little overlap (as happens in the \textsc{Breeds}\ case study). \begin{algorithm} \caption{Exponential Tilt Reweighting Alignment (ExTRA)\label{alg:extra}} \begin{algorithmic}[1] \Require \begin{itemize} \item \textbf{Dataset:} labeled source data $\{(X_{P, i}, Y_{P, i})\}_{i = 1}^{{n_P}}$ and unlabeled target data $\{X_{Q, i}\}_{i = 1}^{n_Q}$. \item \textbf{Hyperparameters:} learning rate $\eta>0$, batch size $B\in \mathbb{N}$, normalization regularizer $\lambda > 0$. \item \textbf{Probabilistic source classifier:} $\widehat{\eta}_P : \mathcal{X} \to \Delta^K$. \item \textbf{Initial values:} $\{(\hat \theta_{k, 0}, \hat \beta_{k, 0})\}_{k = 1}^K$. \end{itemize} \State Initialize the parameters at $\{(\hat \theta_{k, 0}, \hat \beta_{k, 0})\}_{k = 1}^K$.
\Repeat{ $t \ge 0$} \State Sample minibatches $(X_{P, 1}, Y_{P, 1}), \dots, (X_{P, B}, Y_{P, B}) \sim \widehat{P}$, and $X_{Q, 1}, \dots, X_{Q, B}\sim \widehat{Q}_X$ \State Compute the loss $ \hat L_t = \frac 1 B \sum_{i =1}^B \log\Big\{\sum_{k=1}^K\widehat{\eta}_{P,k}(X_{Q, i})\textrm{exp}\big(\widehat{{\boldsymbol\theta}}_{k, t}^\top T(X_{Q, i}) + \hat \beta_{k, t}\big)\Big\} $ and the normalizer $\hat N_t = \frac 1B\sum_{i = 1}^B \textrm{exp}\big\{ \widehat{{\boldsymbol\theta}}_{Y_{P, i}, t}^\top T(X_{P, i}) + \hat \beta_{Y_{P, i}, t} \big\}$ \State Objective $\hat O_t = - \hat L_t + \log(\hat N_t) + \lambda \hat N_t + \lambda \hat N_t^{-1} $ \State Update $\hat \theta_{k, t+1} \gets \hat \theta_{k, t} - \eta \partial_{\theta_k}\hat O_t\{(\hat \theta_{k, t}, \hat \beta_{k, t}), k = 1, \dots , K\} $ and $\hat \beta_{k, t+1} \gets \hat \beta_{k, t} - \eta \partial_{\beta_k}\hat O_t\{(\hat \theta_{k, t}, \hat \beta_{k, t}), k = 1, \dots , K\} $ \Until{converges} \State Estimated values $\{(\hat \theta_{k}, \hat \beta_{k})\}_{k = 1}^K$ \State $\hat \alpha _k \gets \hat \beta_k - \log \hat N(\{(\hat \theta_{k}, \hat \beta_{k})\}_{k = 1}^K)$\\ \Return parameters $\{(\hat \theta_{k}, \hat \alpha_{k})\}_{k = 1}^K$ and the weight function $\omega(x, y) = \textrm{exp}(\hat \theta_y^\top T(x) + \hat \alpha_y)$ \end{algorithmic} \end{algorithm} \subsubsection{\textsc{Waterbirds}} \paragraph{Source classifier $\widehat{\eta}_P$} is a logistic regression model fitted on the source data using the \texttt{sklearn.linear\_model.LogisticRegression} module with the parameters \{solver = `lbfgs', C = 0.1, tol = 1e-6, max\_iter=500\} and the rest set at their default values. We also use several calibration techniques for the source classifier \cite{shrikumar2019calibration}: (1) no model calibration (none), (2) temperature scaling (TS), (3) bias corrected temperature scaling (BCTS), and (4) vector scaling (VS). For TS, BCTS and VS we use the implementation in \citet{shrikumar2019calibration}.
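The minibatch objective $\hat O_t$ in Algorithm \ref{alg:extra} can be evaluated in a few lines of NumPy. The sketch below is a minimal illustration with hypothetical array names, not our implementation; in practice the parameter updates are computed with an automatic-differentiation framework rather than by hand.

```python
import numpy as np

def extra_objective(eta_q, T_q, T_p, y_p, theta, beta, lam):
    """Evaluate the ExTRA minibatch objective O_t = -L_t + log(N_t) + lam*(N_t + 1/N_t).

    eta_q : (B, K) source-classifier probabilities on the target minibatch
    T_q   : (B, p) sufficient statistics T(X_Q) on the target minibatch
    T_p   : (B, p) sufficient statistics T(X_P) on the source minibatch
    y_p   : (B,)   source labels in {0, ..., K-1}
    theta : (K, p) tilt parameters, beta : (K,) offsets
    """
    # L_t: average log of the tilted mixture over classes on target inputs
    logits = T_q @ theta.T + beta                       # (B, K): theta_k^T T(x) + beta_k
    L = np.mean(np.log(np.sum(eta_q * np.exp(logits), axis=1)))
    # N_t: empirical normalizer on the source minibatch
    N = np.mean(np.exp(np.sum(T_p * theta[y_p], axis=1) + beta[y_p]))
    return -L + np.log(N) + lam * (N + 1.0 / N)
```

A quick sanity check: at $\theta_k = 0$, $\beta_k = 0$ the class probabilities sum to one, so $\hat L_t = 0$, $\hat N_t = 1$, and the objective reduces to $2\lambda$.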
\paragraph{ExTRA importance weights} In each run of the ExTRA importance weight calculation (Algorithm \ref{alg:extra}) for the \textsc{Waterbirds}\ data we fix an initialization of the parameters and compute the parameters for several values of the hyperparameters: learning rate $\eta\in \{5\times 10 ^{-4}, 4 \times 10^{-5}\}$, batch size $B = 500 $, epochs $E \in \{100, 200, 400\}$, and source model calibrations \{none, TS, BCTS, VS\}. The details can be found in the supplementary code. Since there is significant overlap between the source and target feature distributions we set $\lambda = 0$. We select the hyperparameter setup that produces the lowest value of the objective $-\hat L + \log (\hat N) $ over the full data $\{(X_{P, i}, Y_{P, i})\}_{i = 1}^{{n_P}}$ and $\{X_{Q, i}\}_{i = 1}^{n_Q}$. \paragraph{Runtime} The ExTRA algorithm requires solving a simple stochastic optimization problem. In the \textsc{Waterbirds}\ case study, 100 epochs of the Adam \citep{kingma2017Adam} optimizer took $11.92 \pm 0.48$ seconds. \paragraph{Model selection} We consider 120 different logistic regression models of three categories: \begin{enumerate} \item A vanilla model. \item A model fitted on weighted data to balance the class proportions in the source data. \item A model fitted on weighted data to balance the group proportions in the source data. \end{enumerate} Each of these models is fitted with the scikit-learn logistic regression module, where we use 2 different regularizers, $\ell_1$ and $\ell_2$, and 20 different regularization strengths (\texttt{numpy.logspace(-4, -1, 20)}), and we use the \texttt{liblinear} solver to fit the models. The rest of the parameters are set at their default values. For the model selection experiments the source and target data are each split into equal parts to create source and target training and test datasets. The models are fitted using the test data on the source domain.
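The 120-model grid is simply the product of the three weighting schemes, the two penalties, and the twenty regularization strengths. A minimal sketch of how such a grid can be enumerated is below; the dictionary keys are illustrative labels, not the exact arguments of our fitting code.

```python
import itertools
import numpy as np

# 3 weighting schemes x 2 penalties x 20 regularization strengths = 120 configurations
weightings = ["uniform", "class_balanced", "group_balanced"]
penalties = ["l1", "l2"]
strengths = np.logspace(-4, -1, 20)

configs = [
    {"weighting": w, "penalty": pen, "C": C, "solver": "liblinear"}
    for w, pen, C in itertools.product(weightings, penalties, strengths)
]
# Each config would be passed to sklearn.linear_model.LogisticRegression(
#     penalty=pen, C=C, solver="liblinear"), with sample weights chosen by `weighting`.
```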
For each model we then calculate its (1) SrcVal accuracy using the source training data, (2) ATC-NE accuracy using labeled training data on the source and unlabeled training data on the target, (3) ExTRA accuracy using the training data on the source, and finally (4) oracle target accuracy using the test data on the target. We then summarize the accuracies in (1) oracle \emph{target accuracies} for the models chosen according to the best SrcVal, ATC-NE and ExTRA accuracies on the source domain, and (2) the \emph{rank correlations} between the SrcVal, ATC-NE, and ExTRA accuracies and the corresponding oracle target accuracies. \subsubsection{\textsc{Breeds}} \paragraph{Source classifier $\widehat{\eta}_P$} The source classifier $\widehat{\eta}_P$ used in the \textsc{Breeds}\ case study is similar to the one in \textsc{Waterbirds}. It uses the same model and parameters to obtain a probabilistic classifier $\widehat{\eta}_P$. We use bias corrected temperature scaling (BCTS) \citep{shrikumar2019calibration} for calibrating $\widehat{\eta}_P$. \paragraph{ExTRA importance weights} We set the hyperparameters at fixed values and obtain our weights for several random initializations of the tilt parameters. The hyperparameter values are: (1) learning rate $\eta = 10^{-4}$, (2) batch size $B = 1500$, (3) number of epochs $E = 500$, and (4) regularization strength for the normalizer $\lambda = 10^{-6}$. The rest of the setup is the same as in \textsc{Waterbirds}. \subsection{Additional results} \label{sup:exp:results} \subsubsection{\textsc{Waterbirds}} \paragraph{Precision and recall} We report the precision and recall for the weights.
They are defined as follows: within the $x$ proportion of samples with the highest weights (call this set $A$) \begin{enumerate} \item precision is the proportion of sample points in $A$ from the groups comprising the target domain, {\it i.e.}\ \[\text{precision} = \frac{\#\{\text{sample points in }A \text{ with } g \in \{\text{target groups}\}\}}{|A|}\,, \] and, \item recall is the ratio between the number of sample points in $A$ that are from the target groups and the total number of points in the source data that are from the target groups, {\it i.e.} \[\text{recall} = \frac{\#\{\text{sample points in }A \text{ with } g \in \{\text{target groups}\}\}}{\#\{\text{sample points in source data}\text{ with } g \in \{\text{target groups}\}\}}\,. \] \end{enumerate} Target domains consisting of a majority and a minority group, {\it e.g.}\ $\{0,2\}$, are noticeably imbalanced in the source data, thus we also report precision and recall conditioned on the class label ({\it i.e.}\ treating source and target as consisting of either only landbirds ($y=0$) or only waterbirds ($y=1$) in the aforementioned precision and recall definitions). We report results for four target domains in Figure \ref{fig:waterbirds-precision-recall-all}. Overall, the ExTRA weights are informative (the precision curves have downward trends and the recall curves are above the non-informative baseline, {\it i.e.}\ the solid black lines). We note that for class $y = 0$ with target domain $\{0, 2\}$ and for class $y = 1$ with target domain $\{1, 3\}$ the ExTRA weights are almost non-informative (the precision curve is almost flat and the recall curve is almost aligned with the baseline). This is due to the group imbalance within a class. In the example of class $y = 0$ with target domain $\{0, 2\}$, the groups with $y = 0$ are $g = 0$ and $g = 1$.
Since the sample size of $g = 1$ is very small compared to $g = 0$, most of the samples in the $y = 0$ class are from the correct group $g = 0$ when we consider $\{0, 2\}$ as our target domain, and any weights would have precision-recall curves that are close to the non-informative baseline. Similar behavior is observed in the other example. \begin{figure} \centering \includegraphics[scale = 0.35]{plots/precision-recall-5.pdf} ~ \includegraphics[scale = 0.35]{plots/precision-recall-6.pdf}\\ \includegraphics[scale = 0.35]{plots/precision-recall-9.pdf} ~ \includegraphics[scale = 0.35]{plots/precision-recall-10.pdf} \caption{ExTRA precision and recall on \textsc{Waterbirds}\ for different targets. The black solid line refers to a baseline for the recall curve when the weights are completely non-informative of the target domain.} \label{fig:waterbirds-precision-recall-all} \end{figure} \paragraph{Upweighted images} We visualize the images from the \textsc{Waterbirds}\ dataset corresponding to the 16 largest ExTRA weights for the $\{1,2\}$ target domain consisting of the two minority groups. Among these 16 images, 12 correspond to the correct groups, {\it i.e.}\ either landbirds on water or waterbirds on land. We emphasize that only 5\% of the images in the source domain correspond to groups $\{1,2\}$, and ExTRA upweighs them as desired. The 4 images from the other groups (highlighted with a red border) are: (i) 2nd row, 3rd column (waterbird on water); (ii) 3rd row, 2nd column (waterbird on water); (iii) 4th row, 3rd column (waterbird on water); (iv) 4th row, 4th column (landbird on land). Arguably, the background in (i) is easy to confuse with a land background and the blue sky in (iv) is easy to confuse with a water background, suggesting that these images might be representative of the target domain of interest despite belonging to different groups.
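The precision and recall curves above can be computed directly from the weights and the group indicators. A minimal NumPy sketch follows; the function and array names are hypothetical, not from our implementation.

```python
import numpy as np

def precision_recall_at_top(weights, in_target_group, frac):
    """Precision and recall of target-group samples among the top-`frac`
    fraction of source samples ranked by importance weight.

    weights         : (n,) importance weights on the source samples
    in_target_group : (n,) boolean indicator of membership in the target groups
    frac            : fraction x of samples with the highest weights to keep
    """
    n = len(weights)
    k = max(1, int(np.ceil(frac * n)))
    top = np.argsort(weights)[::-1][:k]     # indices of the k largest weights (set A)
    hits = np.sum(in_target_group[top])
    precision = hits / k
    recall = hits / np.sum(in_target_group)
    return precision, recall
```

Sweeping `frac` over a grid of values traces out the curves shown in the figures.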
\begin{figure} \centering \includegraphics[scale=0.3]{plots/birds.pdf} \caption{\textsc{Waterbirds}\ images with the 16 largest ExTRA weights for $\{1,2\}$ as the target domain. The four images highlighted with a red border are from other groups.} \label{fig:waterbirds-img} \end{figure} \subsubsection{\textsc{Breeds}} We present precision and recall curves for the target samples identified within the source samples with larger ExTRA weights (analogous to the corresponding \textsc{Waterbirds}\ experiment) in Figure \ref{fig:breeds-precision-recall} for varying mixing proportions $\pi$. In comparison to \textsc{Waterbirds}, we note that both precision and recall are lower, which we attribute to a larger number of the original source samples being representative of the target domain distribution, as can be seen from the improved performance of the ExTRA fine-tuned model in Figure \ref{fig:breeds} even when $\pi=0$. \begin{figure} \centering \includegraphics[scale = 0.5]{plots/precision-recall-breeds.pdf} \caption{ExTRA precision and recall on \textsc{Breeds}} \label{fig:breeds-precision-recall} \end{figure} \section{Learning importance weights} \paragraph{Fitting the exponential tilt model} We fit the exponential tilt model by distribution matching. This step is based on the observation that under the exponential tilt model \eqref{eq:exponential-tilt} \begin{equation} \textstyle q_X\{x\} = \sum_{k=1}^Kp\{x,Y=k\}\textrm{exp}(\theta_k^\top T(x) + \alpha_k), \label{eq:distribution-matching} \end{equation} where $q_X$ is the (marginal) density of the inputs in the target domain. It is possible to obtain an estimate $\widehat{q}_X$ of $q_X$ from the unlabeled samples $\{X_{i, Q}\}_{i=1}^{n}$ and estimates $\widehat{p}\{x,Y=k\}$ of the $p\{x,Y=k\}$'s from the labeled samples $\{(X_{i, P},Y_{i, P})\}_{i=1}^{m}$. This suggests that we find $\theta_k$'s and $\alpha_k$'s such that \[ \textstyle \sum_{k=1}^K\widehat{p}\{x,Y=k\}\textrm{exp}(\theta_k^\top T(x) + \alpha_k) \approx \widehat{q}_X\{x\}.
\] Note that the $\theta_k$'s and $\alpha_k$'s are dependent because the tilted density must integrate to one. We enforce this restriction as a constraint in the distribution matching problem: \begin{equation} \{(\widehat{{\boldsymbol\theta}}_k, \hat \alpha_k)\}_{k=1}^K \in \left\{ \begin{aligned} & {\arg\min}_{\{(\theta_k, \alpha_k)\}_{k=1}^K}D\left(\textstyle\widehat{q}_X\{x\}\|\sum_{k=1}^K\widehat{p}\{x,Y=k\}\textrm{exp}(\theta_k^\top T(x) + \alpha_k)\right)\\ & \text{subject to }\textstyle \int_{\mathcal{X}} \sum_{k=1}^K\widehat{p}\{x,Y=k\}\textrm{exp}(\theta_k^\top T(x) + \alpha_k)dx = 1 \end{aligned} \right. \label{eq:robust-distribution-matching} \end{equation} where $D$ is a discrepancy between probability distributions on $\mathcal{X}$. Although there are many possible choices of $D$, we pick the Kullback-Leibler (KL) divergence in the rest of this paper because it leads to some computational benefits. We reformulate the optimization for the KL divergence to relax the constraint, as stated in the following lemma. \begin{lemma} \label{lemma:KL-distribution-matching} If $D$ is the Kullback-Leibler (KL) divergence then the optimum in \eqref{eq:robust-distribution-matching} is achieved at $\{(\widehat{{\boldsymbol\theta}}_k, \hat\alpha_k)\}_{k = 1}^K$ where \[ \begin{aligned} \textstyle\{(\widehat{{\boldsymbol\theta}}_k, \hat \alpha_k')\}_{k=1}^K \in {\arg\max}_{\{(\theta_k, \alpha_k')\}_{k=1}^K}{\bE}_{\widehat{Q}_X} \left[\textstyle\log\Big\{\sum_{k=1}^K\widehat{\eta}_{P,k}(X)\textrm{exp}\big(\theta_k^\top T(X) + \alpha_k'\big)\Big\}\right]\\ - \textstyle\log \big\{{\bE}_{\widehat{P}} \big[ \textrm{exp}(\theta_Y^\top T(X) + \alpha_Y') \big]\big\} \end{aligned}\] and $\hat \alpha_k = \hat \alpha_k' - \log \big\{{\bE}_{\widehat{P}} \big[ \textrm{exp}(\widehat{{\boldsymbol\theta}}_Y^\top T(X) +\hat \alpha_Y') \big]\big\}$.
\end{lemma} One benefit of minimizing the KL divergence is that the learner does not need to estimate the $p\{x,Y=k\}$'s; they merely need to train a discriminative model to estimate $\eta_P$ from the (labeled) samples from the source domain. In other words, the learner does not need to train a generative model. We plug the fitted $\widehat{{\boldsymbol\theta}}_k$'s and $\hat \alpha_k$'s into \eqref{eq:weights-estimate} to obtain the Exponential Tilt Reweighting Alignment (ExTRA) importance weights: \begin{equation} \widehat{\omega}(x,y) = \textrm{exp}(\widehat{{\boldsymbol\theta}}_y^\top T(x) + \hat \alpha_y). \label{eq:weights-estimate} \end{equation} We summarize the ExTRA procedure in Algorithm \ref{alg:extra} in the Appendix. Next we describe two downstream tasks where the ExTRA weights can be used: \begin{enumerate} \item \textbf{ExTRA model evaluation in the target domain.} Practitioners may estimate the performance of a model in the target domain by reweighing the empirical risk in the source domain: \begin{equation} \textstyle \label{eq:target-performance} {\bE}\left[\ell(f(X_Q),Y_Q)\right] \approx \frac{1}{n_P}\sum_{i=1}^{n_P}\ell(f(X_{P,i}),Y_{P,i})\widehat{\omega}(X_{P,i}, Y_{P,i}), \end{equation} where $\ell$ is a loss function. This allows us to evaluate model accuracy in the target domain without labeled target samples \emph{even in the presence of concept drift between the source and target domains}. \item \textbf{ExTRA fine-tuning for target domain performance.} Since the reweighted empirical risk (in the source domain) is a good estimate of the risk in the target domain, practitioners may fine-tune models for the target domain by minimizing the reweighted empirical risk: \begin{equation} \widehat{f}_Q\in{\arg\min}_{f\in\mathcal{F}}{\bE}_{\widehat{P}}\left[\ell(f(X),Y)\widehat{\omega}(X, Y)\right].
\label{eq:weighted-ERM} \end{equation} \end{enumerate} We note that the correctness of \eqref{eq:robust-distribution-matching} depends on the identifiability of the $\theta_k$'s and $\alpha_k$'s from \eqref{eq:distribution-matching}; {\it i.e.}\ the uniqueness of the parameters that satisfy \eqref{eq:distribution-matching}. As long as the tilt parameters are identifiable, \eqref{eq:robust-distribution-matching} provides consistent estimates of them. Unfortunately, without additional assumptions on the $p\{x,Y=k\}$'s and $T$, the tilt parameters are generally unidentifiable from \eqref{eq:distribution-matching}. Next we elaborate on the identifiability of the exponential tilt model. \section{The exponential tilt model} \label{sec:exponential-tilt} \paragraph{Notation} We consider a $K$-class classification problem. Let $\mathcal{X} \subseteq {\bR}^p$ and $\mathcal{Y}\triangleq[K]$ be the space of inputs and the set of possible labels, and let $P$ and $Q$ be probability distributions on $\mathcal{X}\times\mathcal{Y}$ for the source and target domains respectively. A (probabilistic) classifier is a map $f:\mathcal{X}\to\Delta^{K-1}$. We define $p\{x, Y = k\}$ as the weighted source class conditional density, {\it i.e.}\ $p\{x, Y = k\} = p\{x\mid Y = k\} \times P\{Y = k\}$, where $p\{x \mid Y = k\}$ is the density of the source feature distribution in class $k$ and $P\{Y = k\}$ is the class probability in the source. We similarly define $q\{x , Y = k\}$ for the target. We consider the problem of learning importance weights on samples from a source domain so that the weighted source samples mimic the target distribution. We assume that the learner has access to labeled samples $\{(X_{P,i},Y_{P,i})\}_{i=1}^{n_P}$ from the source domain and unlabeled samples $\{X_{Q,i}\}_{i=1}^{n_Q}$ from the target domain.
The learner's goal is to estimate a weight function $\omega:\mathcal{X}\times\mathcal{Y}\to{\bR}$ such that \begin{equation} {\bE}\left[\omega(X_P,Y_P)g(X_P,Y_P)\right] \approx {\bE}\big[g(X_Q,Y_Q)\big]\text{ for all (reasonable) }g: \mathcal{X} \times \mathcal{Y} \to {\bR}. \label{eq:true-weighted-expectation} \end{equation} Ideally, $\omega = \frac{dQ}{dP}$ is the likelihood ratio between the source and target domains (this leads to equality in \eqref{eq:true-weighted-expectation}), but learning this weight function is generally impossible without labeled samples from the target domain \cite{david2010Impossibility}. Thus we must impose additional restrictions on the source and target domains. \paragraph{The exponential tilt model} We assume that there is a vector of sufficient statistics $T:\mathcal{X} \to {\bR}^p$ and parameters $\{(\theta_k, \alpha_k) \in {\bR}^p \times {\bR}\}_{k=1}^K$ such that \begin{equation} \log\frac{q\{x,Y=k\}}{p\{x,Y=k\}} = \theta_k^\top T(x) + \alpha_k\text{ for all }k\in[K]; \label{eq:exponential-tilt} \end{equation} {\it i.e.}\ $q\{x,Y=k\}$ is a member of the exponential family with base measure $p\{x,Y=k\}$ and sufficient statistics $T$. We call \eqref{eq:exponential-tilt} the \textbf{exponential tilt} model. It implies that the importance weights between the source and target samples are \[ \omega(x,y) = \textrm{exp}(\theta_y^\top T(x) + \alpha_y). \] The exponential tilt model is motivated by the rich theory of exponential families in statistics. It is also closely related to several common models in transfer learning and domain adaptation. In particular, it implies that there is a linear concept drift between the source and target domains. It also extends the widely used \textbf{covariate shift} \cite{sugiyama2012Machine} and \textbf{label shift} models \cite{alexandari2020EM,lipton2018Detecting,azizzadenesheli2019Regularized,maity2020Minimax,garg2020Unified} of distribution shifts.
It extends the covariate shift model because the exponential tilt model permits (linear in $T(X)$) \textbf{concept drifts} between the source and target domains; it extends the label shift model because it allows the class conditionals to differ between the source and target domains. It does, however, come with a limitation: implicit in the model is the assumption that there is some amount of overlap between the source and target domains, thus it is more suitable for the subpopulation shift setting. \paragraph{Choosing $T$} We comment on the choice of the sufficient statistics $T$ under the subpopulation shift setting. The goal of $T$ is to identify the common subpopulations in the source and target domains, {\it i.e.}\ \[(X_P,Y_P)\mid\{T(X_P) = t, Y_P = k\}\overset{d}{\approx}(X_Q,Y_Q)\mid\{T(X_Q) = t, Y_Q = k\}.\] If $T$ segments the source domain into its subpopulations ({\it i.e.}\ the subpopulations are $\{(x, y)\in\mathcal{X}\times \mathcal{Y}\mid T(x) = t, y = k\}$ for different values of $t$'s and $k$'s), then it is possible to achieve perfect reweighing of the source domain with the exponential tilt model: the weight of the $\{T(X) = t, Y = k\}$ subpopulation is $\textrm{exp}(\theta_k^\top t+\alpha_k)$. However, in practice, such a $T$ that perfectly segments the subpopulations may not exist ({\it e.g.}\ the subpopulations may overlap) or is very hard to learn ({\it e.g.}\ we do not have prior knowledge of the subpopulations to guide $T$). If no prior knowledge of the domains is available, we can use a neural network to parameterize $T$ and learn its weights along with the tilt parameters, or simply set $T(x)=x$, which we demonstrate to be sufficiently effective in our empirical studies.
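With $T(x) = x$, the tilt weights and the importance-weighted risk estimate \eqref{eq:target-performance} each take a single line. The sketch below is a minimal NumPy illustration with hypothetical array names, not our implementation.

```python
import numpy as np

def extra_weights(X, y, theta, alpha):
    """Exponential tilt weights omega(x, y) = exp(theta_y^T T(x) + alpha_y) with T(x) = x.

    X : (n, p) inputs, y : (n,) labels in {0, ..., K-1}
    theta : (K, p) tilt parameters, alpha : (K,) offsets
    """
    return np.exp(np.sum(X * theta[y], axis=1) + alpha[y])

def reweighted_risk(losses, X, y, theta, alpha):
    """Importance-weighted source risk, an estimate of the target risk."""
    return np.mean(losses * extra_weights(X, y, theta, alpha))
```

At $\theta_k = 0$, $\alpha_k = 0$ all weights equal one and the estimate reduces to the ordinary empirical risk on the source.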
\section{\textsc{Waterbirds}\ case study} To demonstrate the efficacy of the ExTRA algorithm for reweighing the source data we (i) verify the ability of ExTRA to upweigh the samples most relevant to the target task and evaluate the utility of the weights in downstream tasks such as (ii) fine-tuning and (iii) model selection. \textbf{\textsc{Waterbirds}\ dataset} combines bird photographs from the Caltech-UCSD Birds-200-2011 (CUB) dataset \cite{wah2011caltech} and image backgrounds from the Places dataset \cite{zhou2017places}. The birds are labeled as one of $\mathcal{Y} =$ \{waterbird, landbird\} and placed on one of $\mathcal{A} =$ \{water background, land background\}. The images are divided into four groups: landbirds on land (0); landbirds on water (1); waterbirds on land (2); waterbirds on water (3). The source dataset is highly imbalanced, {\it e.g.}\ the smallest group (2) has only 56 samples. We embed all images with a pre-trained ResNet18 \citep{he2016Deep}. See Appendix \ref{sup:exp:data} for details. We consider five subpopulation shift target domains: all four pairs of groups containing both bird types and the original test set \citep{sagawa2019Distributionally} where all 4 groups are present with proportions vastly different from the source. For all domains we fit the ExTRA weights from 10 different initializations and report means and standard deviations for the corresponding metrics. See Appendix \ref{sup:exp:model} for the implementation details. \begin{wrapfigure}[11]{r}{0.5\linewidth} \vspace{-0.77cm} \centering \includegraphics[width=\linewidth]{plots/precision-recall.pdf} \vspace{-.75cm} \caption{ExTRA precision and recall} \label{fig:precision-recall} \end{wrapfigure} \textbf{ExTRA weights quality} For a given target domain it is most valuable to upweigh the samples in the source data corresponding to the groups comprising that domain. The most challenging is the target $\{1,2\}$ consisting only of birds appearing on their atypical backgrounds.
Groups $\{1,2\}$ correspond to 5\% of the source data, making them the most difficult to ``find''. To quantify the ability of ExTRA to upweigh these samples we report precision (the proportion of samples from groups $\{1,2\}$ within the top $x\%$ of the weights) and recall (the proportion of all $\{1,2\}$ samples within the top $x\%$ of the weights) in Figure \ref{fig:precision-recall}. We notice that the samples corresponding to the $10\%$ largest ExTRA weights contain slightly over $80\%$ of the groups $\{1,2\}$ samples in the source data (recall). This demonstrates the ability of ExTRA to upweigh relevant samples. We present examples of upweighted images and results for the other target domains in Appendix \ref{sup:exp:results}. \textbf{Model fine-tuning} We demonstrate the utility of the ExTRA weights in the fine-tuning downstream task \eqref{eq:weighted-ERM}. The basic goal of such importance weighing is to improve the performance in the target domain in comparison to training with uniform source weights (S -> T), {\it i.e.}\ ERM. Another baseline is the DRO model \citep{hashimoto2018Fairness} that aims to maximize worst-group performance without access to the group labels. We consider two additional baselines that utilize group annotations to improve worst-group performance: re-weighing the source to equalize group proportions (RW$_\text{gr}$) and group DRO (gDRO) \citep{sagawa2019Distributionally}. The aforementioned baselines do not try to adjust to the target domain. Finally, we compare to an ``oracle'' baseline T -> T that fine-tunes the model only using the subset of the source samples corresponding to the target domain groups. For the model class we use logistic regression in all cases. \begin{wrapfigure}[15]{r}{0.5\linewidth} \vspace{-0.3in} \centering \includegraphics[width=\linewidth]{plots/waterbirds.pdf} \vspace{-0.23in} \caption{Performance on \textsc{Waterbirds}} \label{fig:waterbirds} \end{wrapfigure} We compare target accuracy across domains in Figure \ref{fig:waterbirds}.
The model trained with ExTRA weights outperforms all ``fair'' baselines and matches the performance of the three baselines that had access to additional information. In all target domains ExTRA fine-tuning is comparable with the oracle T -> T baseline, supporting its ability to upweigh relevant samples. Notably, on the \{1,2\} domain of both minority groups \emph{and} on the \{0,3\} domain of both majority groups, ExTRA outperforms RW$_\text{gr}$ and gDRO that utilize group annotations. This emphasizes the advantage of adapting to the target domain instead of pursuing the more conservative goal of worst-group performance maximization. Finally, we note that ExTRA fine-tuning did not perform as well on the domain \{1,3\}; however, in this case the oracle T -> T baseline also did not do well. \begin{table}[] \caption{Model selection results on \textsc{Waterbirds}} \centering \begin{tabular}{l@{\hskip 0.3in}ccc@{\hskip 0.3in}ccc} \toprule {} & \multicolumn{3}{c}{target accuracy} & \multicolumn{3}{c}{rank correlation} \\ \cmidrule[1pt](lr){2-7} target groups & ExTRA & SrcVal & ATC-NE & ExTRA & SrcVal & ATC-NE \\ \midrule \{0, 2\} & 0.819$\pm$0.012 & 0.854 & \textbf{0.871} & 0.419$\pm$0.01 & \textbf{0.807} & 0.760 \\ \{1, 2\} & \textbf{0.741}$\pm$0.047 & 0.616 & 0.646 & \textbf{0.747}$\pm$0.106 & -0.519 & -0.590 \\ \{0, 3\} & \textbf{0.978}$\pm$0.001 & \textbf{0.978} & 0.976 & \textbf{0.962}$\pm$0.004 & 0.956 & 0.906 \\ \{1, 3\} & \textbf{0.757}$\pm$0.011 & 0.737 & 0.747 & \textbf{0.361}$\pm$0.168 & -0.318 & -0.411 \\ \{0, 1, 2, 3\} & \textbf{0.856}$\pm$0.034 & 0.803 & 0.818 & \textbf{0.658}$\pm$0.295 & 0.263 & 0.178 \\ \midrule average & \textbf{0.83} & 0.798 & 0.812 & \textbf{0.753} & 0.166 & 0.110 \\ \bottomrule \end{tabular} \label{tab:waterbirds-model-validation} \end{table} \textbf{Model selection} out-of-distribution is an important task that is difficult to perform without target data labels and group annotations \citep{gulrajani2020Search,zhai2021DORO}.
We evaluate the ability to choose a model for the target domain based on accuracy on the ExTRA-reweighted source validation data. We compare to the standard source validation model selection (SrcVal) and to the recently proposed ATC-NE \citep{garg2022leveraging} that uses negative entropy of the predicted probabilities on the target domain to score models. We fit a total of 120 logistic regression models with different weighting (uniform, label balancing, and group balancing) and varying regularizers. See Appendix \ref{sup:exp:model} for details. In Table \ref{tab:waterbirds-model-validation} we compare the target performance of models selected using each of the model evaluation scores and the rank correlation between the corresponding model scores and the true target accuracies. Model selection with ExTRA results in the best target performance and rank correlation on 4 out of 5 domains and on average. Importantly, the rank correlation between the true performance and ExTRA model scores is always positive, unlike for the baselines, suggesting its reliability in providing meaningful information about the target domain performance. \section{\textsc{Breeds}\ case study} \textsc{Breeds}\ \citep{santurkar2020breeds} is a subpopulation shift benchmark derived from ImageNet \citep{deng2009imagenet}. It uses the class hierarchy to define groups within classes. For example, in the Entity-30 task considered in this experiment, the class fruit is represented by strawberry, pineapple, jackfruit, and Granny Smith in the source and by buckeye, corn, ear, and acorn in the target. This is an extreme case of subpopulation shift where source and target groups have zero overlap. We modify the dataset by adding a small fraction $\pi$ of random samples from the target to the source for two reasons: (i) our exponential tilt model requires some amount of overlap between source and target; (ii) arguably, in practice, it is more likely that the source dataset has at least a small representation of all groups.
\begin{wrapfigure}[13]{r}{0.5\linewidth} \vspace{-0.55cm} \centering \includegraphics[width=\linewidth]{plots/breeds.pdf} \vspace{-0.7cm} \caption{Performance on \textsc{Breeds}} \label{fig:breeds} \end{wrapfigure} Our goal is to show that ExTRA can identify the target samples mixed into the source for efficient fine-tuning. We obtain feature representations from a pre-trained self-supervised SwAV model \citep{caron2020unsupervised}. We then train logistic regression models on (i) the source dataset re-weighted with ExTRA, (ii) the uniformly weighted source (S$\to$T), (iii) only the target samples mixed into the source (an oracle baseline T$\to$T). See Appendix \ref{sup:exp:data}, \ref{sup:exp:model} for details. We report performance for varying mixing proportion $\pi$ in Figure \ref{fig:breeds}. First, we note that even when $\pi=0$, i.e.\ source and target have completely disjoint groups, ExTRA improves over the vanilla S$\to$T. Next, we see that S$\to$T improves very slowly in comparison to ExTRA as we increase the mixing proportion; the oracle T$\to$T improves faster as we increase the number of target samples it has access to, but never surpasses ExTRA and matches its improvement slope for the larger $\pi$ values. We conclude that ExTRA can effectively identify target samples mixed into the source that are crucial for the success of fine-tuning \emph{and} find source samples most relevant to the target task, allowing it to outperform the oracle T$\to$T baseline. We report precision and recall analogous to the corresponding \textsc{Waterbirds}\ experiment in Appendix \ref{sup:exp:results}.
\section{Introduction} While there are relatively efficient approximate similarity search algorithms, it is widely supposed that exact search suffers from the curse of dimensionality \cite{Pestov2012}. Thus, solving the problem in the most general case for an arbitrary dataset is impossible. We investigate exact indexing for a vector space $V$ and a distance function $d$. Exact indexing is based on exact similarity search, and no data points are lost during range queries. For a range query vector $\textbf{y}$ and a collection of $s$ vectors, \[ \textbf{x}_1, \textbf{x}_2, \textbf{x}_3, \cdots, \textbf{x}_s \] \textit{all} vectors $\textbf{x}_i$ that are $\epsilon$-similar according to the distance function $d$ are retrieved, \begin{equation} d( \textbf{x}_i,\textbf{y} ) < \epsilon. \end{equation} In approximate indexing, data points may be lost, as some distances are distorted. Approximate indexing \cite{Indyk98}, \cite{Indyk04} seems to be in some sense free from the curse of dimensionality \cite{Pestov2012}. Distance-based exact indexing is based on the 1-Lipschitz property \cite{Pestov2012}. A mapping function $F()$ maps two vectors $\textbf{x}$ and $\textbf{y}$ into a lower-dimensional space, where $d$ is a metric in the original space and $d_{feature}$ is a metric in the feature space that satisfies the 1-Lipschitz property \begin{equation} d_{feature}(F( \textbf{x} ),F( \textbf{y} )) \leq d( \textbf{x},\textbf{y} ). \end{equation} This inequality is also known as the lower bounding postulate \cite{Faloutsos94b}, \cite{Faloutsos99}. Using the 1-Lipschitz property, a bound that is valid in both spaces can be determined. The distance from a similar vector to a query vector $\textbf{y}$ is smaller than or equal to $\epsilon$ in the original space and, consequently, is smaller than or equal to $\epsilon$ in the lower-dimensional space as well. During the computation, all the points above the bound are discarded. In the second step, the wrong candidates are filtered out by comparisons in the original space.
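As an illustration, the two-step filtering can be sketched as follows, with a simple 1-Lipschitz mapping (the norm of the projection onto the normalized all-ones direction) standing in for $F$; the code is a pure-Python sketch with illustrative names, not an implementation from the paper:

```python
import math

def euclid(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def F(x):
    # 1-Lipschitz map into one dimension: sqrt(m) times the mean of x
    # (|sqrt(m)*(mean(x) - mean(y))| <= ||x - y|| by Cauchy-Schwarz).
    m = len(x)
    return (math.sqrt(m) * sum(x) / m,)

def range_query(points, y, eps):
    """Exact range query: discard points whose feature-space distance
    already exceeds the bound, then verify the surviving candidates
    in the original space."""
    fy = F(y)
    candidates = [x for x in points if euclid(F(x), fy) < eps]
    return [x for x in candidates if euclid(x, y) < eps]
```

Because $F$ only discards points that cannot be within $\epsilon$, the result agrees with a brute-force scan.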
The application of the 1-Lipschitz property as used in metric trees and pivot tables does not resolve the curse of dimensionality, as shown in \cite{Pestov2011}. For high-dimensional spaces, the functions that obey the 1-Lipschitz property discard fewer points as the number of dimensions grows \cite{Pestov2011}. As stated in \cite{Pestov2012}, every 1-Lipschitz function concentrates sharply near its mean (or median) value, which results from the fact that the volume of a sphere with a constant radius increases exponentially with growing dimension. A linear radial increase leads to an exponential increase of points inside the sphere \cite{Bohm01}, \cite{Pestov2012}, which leads to a degradation of the method's performance. This situation leads to the ``curse of dimensionality'', which states that for an exact nearest neighbor search, any algorithm for high dimension $d$ and $n$ objects must either use an $n^d$-dimensional space or have a query time of $n \times d$ \cite{Bohm01}, \cite{Pestov2012}. However, \cite{Wichert08}, \cite{Wichert10}, and \cite{Wichert12} show how the recursive application of the 1-Lipschitz property can be used to overcome the curse of dimensionality for certain cases of equally distributed points by means of subspace trees. A high-dimensional space is divided into low-dimensional sub-spaces \cite{Wichert08}, \cite{Wichert10}. In the low-dimensional sub-spaces, 1-Lipschitz functions can be successfully applied. The main contributions of this paper are as follows: \begin{itemize} \item Introduction of a new adaptive projection. The optimal projection is not fixed by an orthogonal projection but learned. \item Extension of the technique beyond the Euclidean norm ($l_2$). Many applications rely on the $l_1$ norm. It is shown that the $l_1$ norm gives better results than the $l_2$ norm. \item Simplification of the mathematical framework.
\end{itemize} The paper is organized as follows: \begin{itemize} \item We review the projection operators. \item We introduce the adaptive projection and the $l_p$ norm dependency. \item The adaptive projection and the $l_p$ norm dependency are integrated into the subspace tree. \item We empirically compare the adaptive projection with the orthogonal mapping. We empirically compare the $l_1$, $l_2$, $l_4$ and $l_{\infty}$ norms. \end{itemize} \section{Projection Operators} Ideally, the mapping function $F()$ should preserve the exact distances \cite{Faloutsos94b}, \cite{Faloutsos99}. An example of such a function for real vectors is a norm-preserving linear operator $Q$. Such an operator can be represented by an orthogonal matrix with $Q^T=Q^{-1}$ performing a rotation or a reflection. An example of such an operator is the Karhunen-Lo\`{e}ve transform, which rotates the coordinate system in such a way that the new covariance matrix will be diagonal, resulting in each dimension being uncorrelated. A mapping that reduces the dimensionality of a vector space can be represented by a projection operator in a Hilbert space, which extends the two- or three-dimensional Euclidean space to spaces with any finite or infinite number of dimensions. In such a space, the Euclidean norm is induced by the inner product \begin{equation} \|\textbf{x}\|=\sqrt{ \langle \textbf{x}|\textbf{x} \rangle}. \end{equation} If $W$ is a subspace of $V,$ then the orthogonal complement of $W$ is also a subspace of $V.$ The orthogonal complement $W^\bot$ is the set of vectors \begin{equation} W^\bot=\{ \textbf{y} \in V \,|\, \langle \textbf{y}|\textbf{x} \rangle=0~~\forall \textbf{x} \in W \} \end{equation} and \begin{equation} V=W \oplus W^\bot. \end{equation} Each vector $\textbf{x} \in V$ can be represented as $\textbf{x}= \textbf{x}_W + \textbf{x}_{W^\bot} $ with $\textbf{x}_W \in W$ and $ \textbf{x}_{W^\bot} \in W^\bot$. The mapping $P \cdot \textbf{x}=\textbf{x}_W$ is an orthogonal projection.
Such a projection is always a linear transformation and can be represented by a projection matrix $P$. The matrix is self-adjoint with $P=P^2$. An orthogonal projection can never increase a norm \begin{equation} \|P \cdot \textbf{x}\|^2 = \|\textbf{x}_W\|^2 \leq \| \textbf{x}_W \|^2+ \| \textbf{x}_{W^\bot} \|^2 =\| \textbf{x}_W + \textbf{x}_{W^\bot} \|^2 = \|\textbf{x}\|^2. \end{equation} Using the triangle inequality \begin{equation} \| \textbf{x} + \textbf{y} \| \leq \| \textbf{x} \| + \|\textbf{y}\| \end{equation} and setting \begin{equation} \| \textbf{x} \| = \| \textbf{y} + (\textbf{x} - \textbf{y}) \| \leq \|\textbf{y}\|+\|\textbf{x}-\textbf{y}\| \end{equation} the tightened triangle inequality \begin{equation} \| \textbf{x} \| - \| \textbf{y} \| \leq \|\textbf{x}-\textbf{y} \| \end{equation} follows. From the fact that the orthogonal projection can never increase the norm and the tightened triangle inequality, any orthogonal projection operator has the 1-Lipschitz property \begin{equation} \left| \| P \cdot \textbf{x} \| - \| P \cdot \textbf{y} \| \right| \leq \| P \cdot\textbf{x}- P \cdot \textbf{y}\| = \| P \cdot( \textbf{x}- \textbf{y}) \| \leq \|\textbf{x}-\textbf{y} \|. \end{equation} It follows that any orthogonal projection satisfies the 1-Lipschitz property and hence the lower bounding postulate \cite{Faloutsos94b}, \cite{Faloutsos99}. For example, the ``Quadratic Distance Bounding'' theorem is satisfied \cite{Faloutsos94b}. There is no need for a more complicated proof based upon the unconstrained minimization problem using Lagrange multipliers \cite{Faloutsos94b}. \subsection{Projection onto one-dimensional subspace} For $\| \textbf{p} \|=1$, $\textbf{p} \cdot \textbf{p}^\top$ is an orthogonal projection onto a one-dimensional space generated by $ \textbf{p} $.
For example, for the vector \begin{equation} \textbf{p}= \left( \frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}, \cdots, \frac{1}{\sqrt{n}} \right) \end{equation} the orthogonal projection from $R^n$ onto the one-dimensional space $R$ is \begin{equation} P=\textbf{p} \cdot \textbf{p}^\top= \left( \begin{array}{cccc} \frac{1}{n} & \frac{1}{n} & \cdots & \frac{1}{n} \\ \frac{1}{n} & \frac{1}{n} & \cdots & \frac{1}{n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{1}{n} & \frac{1}{n} & \cdots & \frac{1}{n} \\ \end{array} \right). \end{equation} For $f \leq n$ orthogonal subspaces \begin{equation} R^n=E_1 \oplus E_2 \oplus \ldots \oplus E_f \label{decomp} \end{equation} of the vector space $R^n$ we can define a projection $P: R^n \mapsto R^f$ as a sum of $f$ projections onto one-dimensional spaces \begin{equation} P= \textbf{p}_1\cdot \textbf{p}_1^\top +\textbf{p}_2 \cdot \textbf{p}_2^\top + \ldots +\textbf{p}_f \cdot \textbf{p}_f^\top \label{Proj_eq} \end{equation} with $ \textbf{p}_i \cdot \textbf{p}_i^\top : E_i \mapsto R$ and $\| \textbf{p}_i \|=1$. The 1-Lipschitz property of the projection from the subspace $E_i$ to the one-dimensional space $R$ is \begin{equation} \left| \| \textbf{p}_i \cdot \textbf{p}_i^\top \cdot \textbf{x} \| - \| \textbf{p}_i \cdot \textbf{p}_i ^\top\cdot \textbf{y} \| \right| \leq \| \textbf{p}_i \cdot \textbf{p}_i^\top \cdot (\textbf{x} - \textbf{y}) \| \leq \|\textbf{x}-\textbf{y}\|. \label{1Lip_eq} \end{equation} The projection $P$, represented by Equation \ref{Proj_eq}, should distort the distances between the vector space $R^n$ and $R^f$ as little as possible. As a consequence, the distortion for each subspace $E_i$ should be minimized.
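The 1-Lipschitz property of the one-dimensional projection can also be checked numerically; a small illustrative sketch (pure Python, not part of the indexing structure), using the $m$-secting direction as $\textbf{p}_i$:

```python
import math, random

def proj_norm(x):
    # || p p^T x || for p = (1/sqrt(m), ..., 1/sqrt(m)):
    # the projected vector has all entries equal to the mean of x,
    # so its Euclidean norm is sqrt(m) * |mean(x)|.
    m = len(x)
    return math.sqrt(m) * abs(sum(x)) / m

def check_lipschitz(trials=1000, m=8):
    rng = random.Random(0)
    for _ in range(trials):
        x = [rng.uniform(-1, 1) for _ in range(m)]
        y = [rng.uniform(-1, 1) for _ in range(m)]
        dist = math.sqrt(sum((u - v) ** 2 for u, v in zip(x, y)))
        # | ||Px|| - ||Py|| | must never exceed ||x - y||.
        if abs(proj_norm(x) - proj_norm(y)) > dist + 1e-12:
            return False
    return True
```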
Because of the 1-Lipschitz property for the one-dimensional space, according to Equation \ref{1Lip_eq}, we need to minimize the distance in the one-dimensional space between the length of the vector and the length of its projected counterpart \begin{equation} \left| \| \textbf{p}_i \cdot \textbf{p}_i^\top \cdot \textbf{x} \| -\|\textbf{x} \| \right|. \end{equation} Suppose the dimensionality of the subspace $E_i$ is $m$. We define the vector $\textbf{a}$ as \begin{equation} \textbf{a} = \textbf{p}_i\cdot \textbf{p}_i^\top \cdot \textbf{x}. \end{equation} It follows that \begin{equation} a= \sqrt{m} \cdot \alpha = \| \textbf{a} \| =\| \textbf{p}_i\cdot \textbf{p}_i^\top \cdot \textbf{x} \| \end{equation} and with \[ a_1=a_2=\ldots=a_k=\ldots=a_m=\alpha \] \begin{equation} \textbf{a}=(a_1,a_2,\ldots,a_k,\ldots,a_m). \end{equation} With $a$ being the length of the projected vector, we perform the following minimization \begin{equation} \min \{ |a -\|\textbf{x} \|| \}. \end{equation} From the tightened triangle inequality, it follows that \begin{equation} \min \{ |a -\|\textbf{x} \|| \} \leq \min \{ \| \textbf{a} - \textbf{x} \| \} \end{equation} according to the Euclidean distance function. To minimize the Euclidean metric $ \| \textbf{a} - \textbf{x} \|$, how do we choose the value of $\alpha$ \cite{Wichert12}? It follows that \begin{equation} \min_\alpha \left( \sqrt{( x_1- \alpha)^2 +(x_2- \alpha)^2 +\ldots+(x_m- \alpha)^2 } \right) \end{equation} \begin{equation} 0=\frac { \partial d (\textbf{x},\textbf{a} ) } { \partial \alpha } = \frac { m \cdot \alpha - \left( \sum_{i=1}^{m} x_i \right) } { \sqrt { m \cdot \alpha^2 + \sum_{i=1}^{m} x_i^2-2 \cdot \alpha \cdot \left( \sum_{i=1}^{m} x_i \right)}} \end{equation} with the solution \begin{equation} \alpha = \frac{\sum_{i=1}^{m} x_i}{m} \end{equation} which is the mean value of the vector $\textbf{x}$.
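The optimality of the mean can be verified numerically; a minimal illustrative check that the distance to the constant vector $(\alpha, \ldots, \alpha)$ is smallest at $\alpha = \text{mean}(\textbf{x})$:

```python
def dist_to_constant(x, alpha):
    # Euclidean distance between x and the constant vector (alpha, ..., alpha).
    return sum((xi - alpha) ** 2 for xi in x) ** 0.5

x = [1.0, 2.0, 6.0]
mean = sum(x) / len(x)
```

Since the squared distance is a convex quadratic in $\alpha$, any perturbation of the mean increases it.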
It follows that \begin{equation} a= \sqrt{m} \cdot \alpha = \sqrt{m} \cdot \frac{\sum_{i=1}^{m} x_i}{m} =\| \textbf{p}_i \cdot \textbf{p}_i^\top \cdot \textbf{x} \| \end{equation} with the corresponding projection matrix $P_i$ \begin{equation} P_i=\textbf{p}_i \cdot \textbf{p}_i^\top = \left( \begin{array}{cccc} \frac{1}{m} & \frac{1}{m} & \cdots & \frac{1}{m} \\ \frac{1}{m} & \frac{1}{m} & \cdots & \frac{1}{m} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{1}{m} & \frac{1}{m} & \cdots & \frac{1}{m} \\ \end{array} \right). \end{equation} $P_i$ is generated by the normalized vector \begin{equation} \textbf{p}_i = \left( \frac{1}{\sqrt{m}}, \frac{1}{\sqrt{m}}, \cdots, \frac{1}{\sqrt{m}} \right) \end{equation} which indicates the direction of the $m$-secting line, a continuous map from a one-dimensional space to an $m$-dimensional space given by \begin{equation} \begin{array}{c} x_1=x_1\\ x_2=x_1\\ x_3=x_1\\ \vdots\\ x_m=x_1\\ \end{array} . \end{equation} For $m=2$, this equation is the bisecting line with $x_1=x_2$ or, represented as a curve, \begin{equation} \begin{array}{c} x_1=x_1\\ x_2=x_1\\ \end{array} \end{equation} which, for uncorrelated data, is the best projection onto one dimension, as indicated in the next section.
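A sketch of this computation, comparing the explicit matrix form of $P_i$ with the mean-value shortcut (pure Python; illustrative names, with an absolute value so the identity also holds for vectors with a negative mean):

```python
import math

def proj_norm_matrix(x):
    # || P x || with P the m x m matrix whose entries are all 1/m:
    # every entry of P x equals the mean of x.
    m = len(x)
    mean = sum(x) / m
    return math.sqrt(sum(mean * mean for _ in range(m)))

def proj_norm_fast(x):
    # Equivalent shortcut: sqrt(m) times the (absolute) mean of x.
    m = len(x)
    return math.sqrt(m) * abs(sum(x)) / m
```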
The projection can be computed efficiently, without needing matrix operations, as the mean value of the vector multiplied by the square root of its dimensionality: \begin{equation} \sqrt{m} \cdot \ \frac{\sum_{i=1}^{m} x_i}{m} = \left| \left| \left( \begin{array}{c} \frac{\sum_{i=1}^{m} x_i}{m} \\ \frac{\sum_{i=1}^{m} x_i}{m} \\ \vdots\\ \frac{\sum_{i=1}^{m} x_i}{m} \\ \end{array} \right) \right| \right| = \left| \left| \left( \begin{array}{cccc} \frac{1}{m} & \frac{1}{m} & \cdots & \frac{1}{m} \\ \frac{1}{m} & \frac{1}{m} & \cdots & \frac{1}{m} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{1}{m} & \frac{1}{m} & \cdots & \frac{1}{m} \\ \end{array} \right) \cdot \left( \begin{array}{c} x_1\\ x_2\\ \vdots\\ x_m\\ \end{array} \right) \right| \right|. \end{equation} A projection $P: R^n \mapsto R^f$, given the decomposition into $f$ orthogonal spaces according to Equation \ref{decomp}, is composed of a sum of $f$ projections onto a one-dimensional space. Each projection is a projection onto an $m$-secting line with $P_i:R ^{m} \mapsto R$. The method works with the space split in any way. For simplicity, we assume that the $n$-dimensional space is split into $f$ equal-dimensional subspaces. In this case, the projections are efficiently computed as the mean value of each sub-vector. The corresponding mean values are multiplied by the constant $c=\sqrt{m}=\sqrt{\frac{n}{f}}$. The selection of the division can be determined by empirical experiments in which we relate $m$ to $n$ with the constraint that $n$ is divisible by $m$. \subsection{The First Principal Component} The covariance matrix represents the tendency of two dimensions to vary in the same direction, as indicated by the data points. The Karhunen-Lo\`{e}ve transform rotates the coordinate system in such a way that the new covariance matrix will be diagonal. Therefore, each dimension will be uncorrelated.
The transformation is described by an orthonormal matrix, which is composed of the normalized eigenvectors of the covariance matrix. The eigenvalues represent the variances along the eigenvectors. The first principal component corresponds to the normalized eigenvector $\textbf{z}$ with the highest variance. $\| \textbf{z} \|=1$ with $Z=\textbf{z} \cdot \textbf{z}^\top$ is the best projection onto a one-dimensional space because, in a Hilbert space, the first principal component passes through the mean and minimizes the sum of squares of the distances of the points from the line. It follows that \begin{equation} \| \textbf{x} \| \geq \| P \cdot \textbf{x} \| \geq \| Z \cdot \textbf{x} \|. \end{equation} For uncorrelated data, $Z=P$ represents the projection onto the $m$-secting line. For correlated data, contrary to the projection onto the $m$-secting line, the components of the vector $\textbf{z}$ need not all be equal, and the projection cannot be computed as efficiently. For a vector $\textbf{o}$ of length $\sqrt{m}$ in the direction of the $m$-secting line, where $P$ is the projection onto the $m$-secting line, \begin{equation} \textbf{o}=\underbrace{(1,1,1,\cdots,1)}_{m} \end{equation} it follows that \begin{equation} \sqrt{m}= \| \textbf{o} \|= \| P \cdot \textbf{o} \| \geq \| Z \cdot \textbf{o} \| \geq 1. \end{equation} The value \begin{equation} \sqrt{m} - \| Z \cdot \textbf{o} \| \label{eq:abstand} \end{equation} indicates the deviation from the $m$-secting line, with the value $0$ corresponding to uncorrelated data and $\sqrt{m}-1$ to one-dimensional data, \begin{equation} \sqrt{m}-1 \geq \sqrt{m} - \| Z \cdot \textbf{o} \| \geq 0. \end{equation} For a given decomposition into $f$ orthogonal spaces according to Equation \ref{decomp}, the data points are mapped into the corresponding subspaces $E_i$. For each subspace $E_i$ the covariance matrix $C_i$ is computed.
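The first principal component of such a small covariance matrix can be obtained, for instance, by power iteration; a minimal pure-Python sketch (illustrative, and assuming a strictly dominant leading eigenvalue):

```python
def first_principal_component(cov, iters=200):
    """Normalized eigenvector of the largest eigenvalue via power iteration."""
    m = len(cov)
    z = [1.0] + [0.0] * (m - 1)  # arbitrary nonzero start vector
    for _ in range(iters):
        # Multiply by the covariance matrix and renormalize.
        z = [sum(cov[i][j] * z[j] for j in range(m)) for i in range(m)]
        norm = sum(v * v for v in z) ** 0.5
        z = [v / norm for v in z]
    return z
```

For perfectly correlated two-dimensional data the result is the bisecting direction $(1,1)/\sqrt{2}$, matching the $m$-secting line of the uncorrelated case.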
In the next step, for each covariance matrix $C_i$, the first principal component with the highest variance is determined. It is represented by the normalized eigenvector $\textbf{z}_i$. Each projection \begin{equation} Z_i=\textbf{z}_i \cdot \textbf{z}_i^\top \end{equation} is a projection onto the first principal component with $Z_i:R ^{m} \mapsto R$. An adaptive projection $A: R^n \mapsto R^f$, given the decomposition into $f$ orthogonal spaces according to Equation \ref{decomp}, is composed of a sum of $f$ projections $Z_i$ onto a one-dimensional space, \begin{equation} A= \textbf{z}_1 \cdot \textbf{z}_1^\top +\textbf{z}_2 \cdot \textbf{z}_2^\top + \ldots +\textbf{z}_f \cdot \textbf{z}_f^\top. \end{equation} The method works under any splitting of the space, like the projection $P: R^n \mapsto R^f$. \subsection{$l_p$ norm dependency} Some applications require distance functions that differ from the Euclidean distance function. In addition to the Euclidean distance function, the Manhattan distance and the Chebyshev distance function are commonly used. In the following, we generalize the Euclidean norm to the $l_p$ norm that induces a corresponding metric. The $l_p$ norm is defined as follows (for $p=2$ it is the Euclidean norm): \begin{equation} \|\textbf{x}\|_p=\left( |x_1|^p+|x_2|^p+\cdots+|x_m|^p \right)^{\frac{1}{p}}. \end{equation} The $l_p$ norms are equivalent, and the following relations hold for $0 < q < p$ \begin{equation} \|x\|_p \leq \|x\|_q \leq m^{ \frac{1}{q}-\frac{1}{p} } \cdot \|x\|_p \end{equation} and \begin{equation} m^{ \frac{1}{p}-\frac{1}{q} } \cdot \|x\|_q \leq \|x\|_p \leq \|x\|_q. \end{equation} The tightened triangle inequality is valid in any $l_p$ norm due to the definition of a norm.
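The equivalence relations above are easy to check numerically; a small illustrative sketch:

```python
def lp_norm(x, p):
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def check_equivalence(x, q, p):
    """For 0 < q < p: ||x||_p <= ||x||_q <= m^(1/q - 1/p) * ||x||_p."""
    m = len(x)
    norm_p, norm_q = lp_norm(x, p), lp_norm(x, q)
    factor = m ** (1.0 / q - 1.0 / p)
    return norm_p <= norm_q + 1e-12 and norm_q <= factor * norm_p + 1e-12
```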
Because the $l_p$ norms are equivalent, the following relation is valid as well for any $l_p$ norm \begin{equation} \left| \| P \cdot \textbf{x} \|_p - \| P \cdot \textbf{y} \|_p \right| \leq \| P \cdot\textbf{x}- P \cdot \textbf{y}\|_p = \| P \cdot( \textbf{x}- \textbf{y}) \|_p \leq \|\textbf{x}-\textbf{y}\|_p \end{equation} and \begin{equation} \| Z \cdot \textbf{x} \|_p \leq \| P \cdot \textbf{x} \|_p \leq \| \textbf{x} \|_p. \end{equation} The linear projection operator $P$ has the 1-Lipschitz property in any $l_p$ norm and \begin{equation} m^\frac{1}{p} \cdot \ \frac{\sum_{i=1}^{m} x_i}{m} = \left| \left| \left( \begin{array}{c} \frac{\sum_{i=1}^{m} x_i}{m} \\ \frac{\sum_{i=1}^{m} x_i}{m} \\ \vdots\\ \frac{\sum_{i=1}^{m} x_i}{m} \\ \end{array} \right) \right| \right|_p = \left| \left| \left( \begin{array}{cccc} \frac{1}{m} & \frac{1}{m} & \cdots & \frac{1}{m} \\ \frac{1}{m} & \frac{1}{m} & \cdots & \frac{1}{m} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{1}{m} & \frac{1}{m} & \cdots & \frac{1}{m} \\ \end{array} \right) \cdot \left( \begin{array}{c} x_1\\ x_2\\ \vdots\\ x_m\\ \end{array} \right) \right| \right|_p. \end{equation} The projection $P$ can be computed efficiently, without needing a matrix operation, as the mean value of the vector multiplied by the constant $c= m^\frac{1}{p}$. For the dimension $m$: for the $l_1$ norm, $c=m$; for the $l_2$ norm, $c= \sqrt{m}$; and for the $l_\infty$ norm, $c= 1$. A lower $l_p$ norm corresponds to a higher constant $m \geq c \geq 1$ and less information loss. We cannot gain any advantage from the 1-Lipschitz property using the different $l_p$ norms. The behavior of the constant $c$ is related to the equivalence of the norms. For example, the $l_1$ and $l_2$ relation is \begin{equation} \|x\|_2 \leq \|x\|_1 \leq \sqrt{m} \cdot \|x\|_2. \end{equation} For $\| \textbf{q} \|_p=1$, $Q=\textbf{q} \cdot \textbf{q}^\top$ is a mapping onto the one-dimensional space generated by $ \textbf{q} $.
For $p \neq 2$ it is not a projection, because the matrix is not idempotent, $Q \neq Q^2$. The operator can be understood as a mapping onto the $m$-secting line, \begin{equation} \ Q=\textbf{q} \cdot \textbf{q}^\top = \left( \begin{array}{cccc} \frac{1}{m^\frac{2}{p}} & \frac{1}{m^\frac{2}{p}} & \cdots & \frac{1}{m^\frac{2}{p}}\\ \frac{1}{m^\frac{2}{p}} & \frac{1}{m^\frac{2}{p}} & \cdots & \frac{1}{m^\frac{2}{p}}\\ \vdots & \vdots & \ddots & \vdots \\ \frac{1}{m^\frac{2}{p}} & \frac{1}{m^\frac{2}{p}} & \cdots & \frac{1}{m^\frac{2}{p}}\\ \end{array} \right). \end{equation} $Q$ is generated by the $l_p$-normalized vector $\textbf{q}$ indicating the direction of the $m$-secting line, \begin{equation} \textbf{q}^\top = \left( \frac{1}{m^\frac{1}{p}}, \frac{1}{m^\frac{1}{p}}, \cdots, \frac{1}{m^\frac{1}{p}} \right). \end{equation} The mapping can be computed efficiently, without requiring matrix operations, as the mean value of the vector multiplied by the constant $d=m^\frac{p-1}{p}$: \begin{equation} m^\frac{p-1}{p} \cdot \ \frac{\sum_{i=1}^{m} x_i}{m} = \left| \left| \left( \begin{array}{c} \frac{\sum_{i=1}^{m} x_i}{m^\frac{2}{p}} \\ \frac{\sum_{i=1}^{m} x_i}{m^\frac{2}{p}} \\ \vdots\\ \frac{\sum_{i=1}^{m} x_i}{m^\frac{2}{p}} \\ \end{array} \right) \right| \right|_p = \left| \left| \left( \begin{array}{cccc} \frac{1}{m^\frac{2}{p}} & \frac{1}{m^\frac{2}{p}} & \cdots & \frac{1}{m^\frac{2}{p}}\\ \frac{1}{m^\frac{2}{p}} & \frac{1}{m^\frac{2}{p}} & \cdots & \frac{1}{m^\frac{2}{p}}\\ \vdots & \vdots & \ddots & \vdots \\ \frac{1}{m^\frac{2}{p}} & \frac{1}{m^\frac{2}{p}} & \cdots & \frac{1}{m^\frac{2}{p}}\\ \end{array} \right) \cdot \left( \begin{array}{c} x_1\\ x_2\\ \vdots\\ x_m\\ \end{array} \right) \right| \right|_p. \end{equation} However, this mapping can increase a norm. For the norm $l_p$ the induced matrix norm is \begin{equation} \| Q \|_p= \max_{\|\textbf{x}\|_p=1} \|Q \cdot \textbf{x}\|_p \end{equation} and for $\textbf{x}=\textbf{q}$ \begin{equation} \|Q\|_p= m^\frac{p-2}{p}.
\end{equation} It follows that for $p >2$ \begin{equation} \|Q \cdot \textbf{q}\|_p > \| \textbf{q}\|_p, \end{equation} the norm is increased. Only for $p \leq 2$ is the norm not increased: for $l_2$ the mapping is the projection $P$, and for $l_1$ it is the simple mean value. \section{Subspace tree revisited} An adaptive projection, $A: R^n \mapsto R^f$, maps two vectors, $\textbf{x}$ and $\textbf{y}$, into a lower-dimensional space and satisfies the 1-Lipschitz property: \begin{equation} \| A \cdot \textbf{x} - A \cdot \textbf{y} \|_p \leq \ \|\textbf{x}-\textbf{y}\|_p. \end{equation} Using the 1-Lipschitz property, a bound that is valid in both spaces can be determined. The distance of a similar vector to a query vector $\textbf{y}$ is smaller than or equal to the bound in the original space of dimensionality $n$ and, consequently, it is also smaller than or equal to the bound in the lower-dimensional space of dimensionality $f$. During the computation, all the points above the bound are discarded. In the second step, the wrong candidates are filtered out by comparisons in the original space. The number of points discarded drops as fast as the relation between the dimensionalities $\frac{n}{f}$ grows. Depending on the correlation between the dimensions, the 1-Lipschitz property is only useful if the relation is sufficiently small with \begin{equation} \frac{n}{f} \leq d \end{equation} where $d$ varies between $2 \leq d \leq 16$ in relation to the data set. However, high-dimensional indexing requires that the mapping $F: R^n \mapsto R^d$ with $ n\gg d$ satisfies the 1-Lipschitz property. For such a function, only a tiny fraction of the points of a given set are above the bound. Thus, the majority of the points have to be filtered by comparisons in the original space. Therefore, no speed up compared to the use of a simple list matching can be achieved, as proclaimed by the conjecture ``the curse of dimensionality''.
If at least some points of a given set are discarded by the bound, there is a way to build a recursive function that achieves a considerable speed up over simple list matching. Motivated by the divide-and-conquer principle and the tree structure, one can build such a function recursively, indicating that the ``curse of dimensionality'' conjecture is \textit{wrong} for some data sets. It is well known that, for a dimensionality $d$ ($2 \leq d \leq 16$), metric index trees operate efficiently. Thus, in the next step we define an efficient indexing structure that builds on the mapping $F: R^n \mapsto R^d$ that satisfies the 1-Lipschitz property, with $F$ being a projection or an adaptive projection. Suppose there exists a sequence of subspaces $U_0, U_1, U_2, \ldots, U_t$ with $R^n=U_0$ and $R^d=U_t$ in which each subspace is a subspace of another space \begin{equation} U_0 \supset U_1 \supset U_2 \supset \ldots \supset U_t \end{equation} and with $dim(U_i)$ indicating the dimension of the subspace $U_i$ \[dim(U_0) > dim(U_1) > dim(U_2) \ldots > dim(U_t) \] and the relation between neighbouring subspaces is sufficiently small with \begin{equation} \frac{dim(U_0)}{dim(U_1)} \leq d, \frac{dim(U_1)}{dim(U_2)} \leq d, \ldots \frac{dim(U_{t-1})}{dim(U_t)} \leq d. \end{equation} We define a family of projections (either adaptive or not) for the sequence of subspaces \begin{equation} A_1: U_0 \mapsto U_1; A_2: U_1 \mapsto U_2; \ldots ; A_t: U_{t-1} \mapsto U_t. \end{equation} The family of projections defines the sequence of subspaces. Given a bound $\epsilon$ and a query vector $\textbf{y}$, for each subspace, including the original space $U_0$, certain points are below the bound. For each subspace $U_i$, the number of points below the bound $\epsilon$ is indicated by the value $\sigma_i$. It follows that \begin{equation} \sigma_0 < \sigma_1 < \ldots < \sigma_t < s \end{equation} where $s$ is the size of the data set.
The resulting computing cost, given a bound $\epsilon$ and a query vector $\textbf{y}$, is \begin{equation} cost_s=\sum_{i=1}^{t} \sigma_i \cdot dim(U_{i-1}) + s \cdot dim(U_t). \label{eq:lsubcost} \end{equation} The cost of list matching is \begin{equation} cost_l=s \cdot dim(U_0). \end{equation} The saving $cost_s < cost_l $ is related to the bound $\epsilon$. Empirical experiments suggest that $cost_s \ll cost_l$ for a bound with $\sigma_0 < d$. The described projection-based method cannot be applied to sparse representations, as present in the vector space model \cite{Yates99}. \subsection{Tree isomorphy} The isomorphy to a tree results from the assumption that the value of $\sigma_i$ is reciprocal to the preceding dimensionality \cite{Wichert10}. Therefore, a bigger dimensionality $dim(U_{i+1})$ results in a smaller value $\sigma_i$, and vice versa. We can express this relation by \begin{equation} const \cdot \sigma_i =\frac{1}{dim(U_{i+1})}. \label{eq:rep0} \end{equation} The value of $const$ is dependent on the data set and its norm. Since the value of $\sigma_i$ is reciprocal to the preceding dimensionality $dim(U_{i+1})$ (Equation \ref{eq:rep0}), the computing costs are expressed by \begin{equation} cost_s \approx 1/const \cdot \left( \frac{dim(U_0)}{dim(U_1)}+\frac{dim(U_1)}{dim(U_2)}+\ldots+\frac{dim(U_{t-1})}{dim(U_t)} \right)+ dim(U_t) \cdot s. \label{eq:hiecost} \end{equation} Supposing $d= dim(U_t)$ and $n= dim(U_0)$, \begin{equation} cost_s \approx 1/const \cdot d \cdot \log_d(n-d) + d \cdot s. \label{eq:hiecost2} \end{equation} For a dimension $d$, the metric index trees operate efficiently with a nearly logarithmic search time. For the bound with $\sigma_0 < d$ and the value $1/const \ll s$, \begin{equation} cost_s \approx 1/const \cdot d \cdot \left( \log_d(n)-1\right) + d \cdot \log_d(s). \label{eq:hiecost3} \end{equation} It follows that the lower bound of the computational cost is \begin{equation} \Omega(\log(n) + \log(s)).
\end{equation} \section{Examples of $\epsilon$ similarity} The $\epsilon$ range queries depend on an adequate value of $\epsilon$. A method for the estimation of such a value is described in \cite{Wichert08}. Let $DB$ be a database of $s$ multimedia objects $\textbf{x}^{(i)}$ represented by vectors of dimensionality $n$, in which the index $i$ is an explicit key identifying each object \begin{equation} \{\textbf{x}^{(i)} \in DB | i \in \{1.. s\}\}. \end{equation} For a query object $\textbf{y}$, $all$ objects $\textbf{x}^{(i)}$ are searched that are $\epsilon$-similar \begin{equation} \| \textbf{x}^{(i)} - \textbf{y} \|_p < \epsilon. \end{equation} For a high-dimensional vector space, such a set can be determined by list matching over the whole data set. \subsection{Computational procedure} The database $DB$ is projected into the subspace $U_k$, \begin{equation} \{U_k(\textbf{x})^{(i)} \in U_k(DB) | i \in \{1.. s\}\}. \end{equation} The algorithm to determine all $\epsilon$-similar objects is composed of two loops. The first loop iterates over the elements of the database $DB$, and the second iterates over their representations\footnote{An implementation can be obtained upon request from the author}. We can easily parallelize the algorithm over the first loop; different parts of the database can be processed by different processors, kernels, or computers. \paragraph{Algorithm to determine NN} \begin{tabbing} \textbf{forall} \= $\{\textbf{x}^{(i)} \in DB | i \in \{1.. s\}\}$\\ \> \{ \= \\ \>\>$for$\= $(k=t; k \geq 0; k--)$ \\ \>\>\>\{ \= \\ \>\> \>\> $load(U_k(\textbf{x})^{(i)})$;\\ \>\>\>\>/* 1-Lipschitz property */\\ \>\>\>\>$if$ \= $( \| U_k(\textbf{x})^{(i)} - U_k(\textbf{y}) \|_p \geq \epsilon )$ \\ \>\>\>\>\>$break$;\\ \>\>\>\>\>$if$ \= ($k==0$) $print$ $\textbf{x}^{(i)}$ $is$ $NN$ of $\textbf{y}$\\ \>\> \>\} \\ \>\} \\ \end{tabbing} Each call of the 1-Lipschitz property costs $dim(U_k)$.
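A runnable counterpart of the pseudocode (pure Python; the representation lists and the mean-based mapping are illustrative stand-ins for the stored subspace representations):

```python
import math

def lp_dist(a, b, p=2):
    return sum(abs(u - v) ** p for u, v in zip(a, b)) ** (1.0 / p)

def proj(x):
    # Illustrative one-dimensional representation: sqrt(m) times the mean.
    m = len(x)
    return (math.sqrt(m) * sum(x) / m,)

def epsilon_search(db_levels, y_levels, eps, p=2):
    """db_levels[i] lists the representations of object i ordered from the
    coarsest subspace U_t down to the original space U_0; y_levels is the
    same sequence for the query. An object is reported only if it stays
    below the bound at every level (the 1-Lipschitz filtering loop)."""
    result = []
    for i, levels in enumerate(db_levels):
        for k, xk in enumerate(levels):
            if lp_dist(xk, y_levels[k], p) >= eps:
                break                       # bound violated: discard object
            if k == len(levels) - 1:
                result.append(i)            # survived down to U_0
    return result
```

Objects pruned at a coarse level are never compared in the original space, which is the source of the cost saving.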
The cost according to Equation \ref{eq:lsubcost} corresponds to the number of 1-Lipschitz property calls, given by the value $\sigma_k$. \subsection{960-dimensional vector space} We apply the computational procedure to a high-dimensional data set of $100\,000$ vectors of dimensionality $960$. The vectors represent the GIST global descriptor of an image and are composed of concatenated orientation image histograms \cite{Jegou11}. The vector $\textbf{ x}$ of dimensionality $960$ is split into $480$ distinct sub-vectors of dimensionality $2$. The data points are described by $480$ covariance matrices $C_i$, one for each subspace. For all points, the covariance matrices are computed iteratively. \begin{equation} \textbf{ x} =\underbrace{x_1,x_2}_{ C_1},\underbrace{x_3,x_4}_{ C_2}, \cdots \cdots,\underbrace{x_{959},x_{960}}_{C_{480}}. \end{equation} The resulting $480$ projections, $\textbf{z}_i \cdot \textbf{z}_i^\top $, define the adaptive projection $A: R^{960} \mapsto R^{480}$. We apply the adaptive projection and the determination of the adaptive projection recursively. The resulting family of projections, \begin{equation} A_1: U_0 \mapsto U_1; A_2: U_1 \mapsto U_2; \ldots; A_7: U_{6} \mapsto U_7 \end{equation} defines the dimensionalities of the subspaces. \[ dim(U_0)=960 > dim(U_1)=480 > dim(U_2)=240 > dim(U_3)=120 \] \[ > dim(U_4)=60 > dim(U_5)=30 > dim(U_6)=10 > dim(U_7)=5. \] In Table \ref{tab_res1}, we indicate the mean costs according to Equation \ref{eq:lsubcost} using the $l_2$ norm. \begin{table} [h] \begin{center} \begin{tabular} {|c|c|c|c|} \hline projection & $\epsilon$ for $\approx 52$ NN & mean cost & ratio\\ \hline \hline orthogonal & $6300$ & $4584277$ & $21.38$ \\ \hline adaptive & $6300$ & $4393127$ & $22.31$ \\ \hline \end{tabular} \caption{Mean ratio of list matching to the mean computation costs according to Equation \ref{eq:lsubcost}. The values were determined over a disjoint sample $S \subseteq DB$ with size $|S|=400$.
The adaptive projection gives only a slight improvement. The deviation from the $m$-secting line according to Equation \ref{eq:abstand} is always $\ll 0.0001$. } \label{tab_res1} \end{center} \end{table} \subsection{12288-dimensional vector space} The 12288-dimensional vector space corresponds to an image database that consists of $9876$ 3-band RGB (Red, Green, Blue) images of size $128 \times 96$. Each color is represented by 8 bits \cite{Wichert08}. Each of the three bands of size $128 \times 96$ is tiled with rectangular windows $W$ of size $4 \times 4$. The data points are described by $32 \times 24$ covariance matrices $C_i$ for each subspace, for each band. The resulting $768=32 \times 24$ projections $\textbf{z}_i \cdot \textbf{z}_i^\top $ define the adaptive projection $A: R^{12288} \mapsto R^{768}$ for each band (Red, Green, Blue). We apply the adaptive projection and the determination of the adaptive projection recursively. The resulting family of projections, \begin{equation} A_1: U_0 \mapsto U_1; A_2: U_1 \mapsto U_2; A_3: U_{2} \mapsto U_3 \end{equation} defines the dimensionalities of the subspaces for each band \[ dim(U_0)=12288 > dim(U_1)=768 > dim(U_2)=48=8 \times 6 > dim(U_3)=12=4 \times 3. \] For an orthogonal projection, the sequence of subspaces $U_0 \supset U_1 \supset U_2 \supset U_3$ corresponds to the ``image pyramid'' \cite{Burt83}, \cite{Gonzales01}, whose base contains a high-resolution image and whose apex is the low-resolution approximation of the image. In Table \ref{tab_res2}, we indicate the mean costs according to Equation \ref{eq:lsubcost}. The $l_1$ norm gives the best results.
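The adaptive projection described above (one first-principal-component direction $\textbf{z}_i$ fitted per low-dimensional sub-vector, each pair then projected onto its direction) can be sketched in Python as follows; this is our own minimal reading of the construction, with hypothetical function names, using 2-dimensional sub-vectors as in the 960-dimensional example:

```python
import numpy as np

def fit_adaptive_projection(data):
    """Fit one first-principal-component direction per 2-d sub-vector.

    data: array of shape (n_samples, d) with d even. Returns an array of
    unit vectors z_i, one per coordinate pair, defining the adaptive
    projection A: R^d -> R^{d/2} of the paper.
    """
    n, d = data.shape
    pairs = data.reshape(n, d // 2, 2)
    dirs = np.empty((d // 2, 2))
    for i in range(d // 2):
        cov = np.cov(pairs[:, i, :], rowvar=False)   # 2x2 covariance C_i
        w, v = np.linalg.eigh(cov)                   # eigenvalues ascending
        dirs[i] = v[:, -1]                           # first principal component
    return dirs

def apply_adaptive_projection(dirs, x):
    """Project x (length d) to length d/2: each pair onto its PC direction."""
    pairs = x.reshape(-1, 2)
    return np.einsum('ij,ij->i', pairs, dirs)
```

Since each pair is projected onto a unit vector, the map contracts every pairwise distance coordinate-block by coordinate-block, so the 1-Lipschitz property holds for the $l_2$ norm; applying the construction again to the projected data yields the recursive family $A_1, A_2, \ldots$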
\begin{table} [h] \begin{center} \begin{tabular} {|c|c|c|c|c|} \hline projection & $l_p$ & $\epsilon$ for $\approx 52$ NN & cost & ratio\\ \hline \hline orthogonal & $l_1$ & $1240000$ & $8571752$ & $42.47$ \\ \hline adaptive & $l_2$ & $8500$ & $10343766$ & $35.20$ \\ \hline orthogonal & $l_2$ & $8500$ & $10386043$ & $35.05$\\ \hline orthogonal & $l_4$ & $825$ & $12464281$ & $29.32$ \\ \hline orthogonal & $l_{\infty}$ & $161$ & $39639239$ & $9.19$ \\ \hline \end{tabular} \caption{Mean ratio of list matching to the mean computation costs according to Equation \ref{eq:lsubcost}. The values were determined over a disjoint sample $S \subseteq DB$ with size $|S|=400$. The deviation from the $m$-secting line, according to Equation \ref{eq:abstand}, is always $\ll 0.0001$. } \label{tab_res2} \end{center} \end{table} \section{Conclusion} An adaptive projection satisfying the 1-Lipschitz property, defined by the first principal component, was introduced. We indicated the behavior of the projections for the $l_p$ norms. The Manhattan distance $l_1$ loses the least information, followed by the Euclidean distance function. Most information is lost when using the Chebyshev distance function. Motivated by the tree structure, we indicated a family of projections that defines a mapping that satisfies the 1-Lipschitz property. It is composed of orthogonal or adaptive projections in the $l_p$ space. Each projection is applied recursively in a low-dimensional space, where ``the curse of dimensionality'' conjecture does not apply. \section*{\uppercase{Acknowledgements}} The author would like to thank \^Angelo Cardoso for the valuable suggestions. This work was supported by Funda\c{c}\~{a}o para a Ci\^{e}ncia e Tecnologia (FCT): PTDC/EIA-CCO/119722/2010 and by Funda\c{c}\~{a}o para a Ci\^{e}ncia e Tecnologia FCT (INESC-ID multiannual funding) through the PIDDAC Program funds. \bibliographystyle{plain}
\section{Introduction} The aim of this paper is to develop efficient importance sampling algorithms for computing the expectations of functionals of solutions to McKean-Vlasov stochastic differential equations (MV-SDE). MV-SDEs are stochastic differential equations whose coefficients depend on the law of the solution, typically written in the following form: $$ \dd X_{t} = b(t,X_{t}, \mu_t)\dd t + \sigma(t,X_{t}, \mu_t)\dd W_{t}, \quad X_0 = x, $$ where $\mu_{t}$ denotes the law of the process $X$ at time $t$, and $W$ is a standard Brownian motion. MV-SDEs, also known as mean-field equations, were originally introduced in physics to describe the movement of an individual particle amongst a large number of indistinguishable particles interacting through their mean field. They are now used in a variety of other domains, such as finance, economics, biology, population dynamics, etc. The development of algorithms for the simulation of MV-SDEs is a very active area of research. One of the earliest works to consider the error and computational complexity involved in simulating an MV-SDE was \cite{BossyTalay1997}. More recently, \cite{GobetPagliarani2018}, \cite{SzpruchTanTse2017} and \cite{CrisanMcMurray2017}, among others (see references therein), developed more efficient methods for simulating MV-SDEs under Lipschitz coefficients or stronger settings. A common technique for the simulation of MV-SDEs is to use the interacting particle representation. Namely, we consider $i=1, \dots, N$ particles, where each $X^{i,N}$ satisfies the SDE with ${X}_{0}^{i,N}=x_0$ \begin{align} \label{Eq:MV-SDE Propagation Intro} \dd {X}_{t}^{i,N} = b\Big(t,{X}_{t}^{i,N}, \mu^{X,N}_{t} \Big) \dd t + \sigma\Big(t,{X}_{t}^{i,N} , \mu^{X,N}_{t} \Big) \dd W_{t}^{i}, \quad \mu^{X,N}_{t}(\dd x) := \frac{1}{N} \sum_{j=1}^N \delta_{X_{t}^{j,N}}(\dd x) \end{align} where $\delta_{{X}_{t}^{j,N}}$ is the Dirac measure at point ${X}_{t}^{j,N}$, and the Brownian motions $W^i, i=1,\dots,N$ are independent.
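As an illustration of the particle representation, the following minimal Python sketch runs an Euler scheme for a one-dimensional interacting particle system; the mean-reverting drift below is a hypothetical example of ours (the empirical measure is represented simply by the vector of all particle positions):

```python
import numpy as np

def euler_particle_system(b, sigma, x0, T, N, n_steps, rng):
    """Euler scheme for an interacting particle system of 1-d particles.

    b(t, x, mu) and sigma(t, x, mu) receive the vector x of all N particle
    positions (the empirical measure mu is just x itself here) and return
    per-particle drift / diffusion. Returns paths of shape (n_steps+1, N).
    """
    dt = T / n_steps
    x = np.full(N, x0, dtype=float)
    path = [x.copy()]
    for k in range(n_steps):
        t = k * dt
        dw = rng.normal(scale=np.sqrt(dt), size=N)   # independent W^i increments
        x = x + b(t, x, x) * dt + sigma(t, x, x) * dw
        path.append(x.copy())
    return np.array(path)

# Hypothetical mean-field drift: attraction towards the empirical mean.
drift = lambda t, x, mu: -(x - mu.mean())
diff = lambda t, x, mu: 0.3 * np.ones_like(x)
```

A Monte Carlo estimate of $\mathbb E[G(X_T)]$ is then the average of $G$ over the terminal particle positions, i.e.\ the estimator $\hat\theta^{N,n}$ discussed below.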
The so-called propagation of chaos result (see, e.g., \cite{Carmona2016Lectures}) states that under sufficient conditions, as $N\to \infty$, for every $i$, the process $X^{i,N}$ converges to $X^i$, the solution of the MV-SDE driven by the Brownian motion $W^i$. The system \eqref{Eq:MV-SDE Propagation Intro} is a system of ordinary SDEs and can be discretized with one of the many available methods, such as the Euler scheme. Let $X^{i,N,n}_t$ be the $i$-th component of the solution of \eqref{Eq:MV-SDE Propagation Intro}, discretized on $[0,T]$ over $n$ steps. The quantity of interest, which, in our case, is $\theta = \mathbb E[G(X)]$, will then be approximated by the Monte Carlo estimator $$ \hat \theta^{N,n} = \frac{1}{N} \sum_{i=1}^N G(X^{i,N,n}). $$ The precision of this approximation is affected by three sources of error. \begin{itemize} \item The statistical error, that is, the difference between $\hat \theta^{N,n} $ and $\mathbb E[G(X^{i,N,n})]$. \item The discretization error, that is, the difference between $\mathbb E[G(X^{i,N,n})]$ and $\mathbb E[G(X^{i,N})]$. \item The propagation of chaos error of approximating the MV-SDE with the interacting particle system, that is, the difference between $\mathbb E[G(X^{i,N})]$ and $\mathbb E[G(X)]$. \end{itemize} The discretization error of ordinary SDEs has been analyzed by many authors, and it is well known that, e.g., under the Lipschitz assumptions, the Euler scheme has a weak convergence error of order $\frac{1}{n}$. It is, of course, well known that the standard deviation of the statistical error is of order $\frac{1}{\sqrt{N}}$. There has also been some work detailing the error from the propagation of chaos as a function of $N$; essentially, for $G$ and $X$ nice enough, the weak error is also of order $\frac{1}{\sqrt{N}}$; see, for example, \cite{KohatsuOgawa1997} and \cite{Bossy2004} for further details.
In spite of this relatively slow convergence, many MV-SDEs have a reasonably ``nice'' dependence on the law, which makes the particle approximation a good technique. On the other hand, one often wants to consider \emph{rare events} in the context of the MV-SDE, and in this realm the statistical error will dominate the propagation of chaos error. The focus of this paper is therefore on the statistical error of the Monte Carlo method. In view of the poor convergence of the standard Monte Carlo, it is typical to enhance the standard approach with a so-called \emph{variance reduction} technique. Importance sampling, which is the focus of this paper, is one such technique. We will discuss the point of statistical against propagation of chaos error in more detail in Section \ref{sec:Numerics}. Importance sampling is based on the following identity, valid for any probability measure $\mathbb Q$ with respect to which $\mathbb P$ is absolutely continuous, $$ \mathbb E[G(X)] = \mathbb E_{\mathbb Q}\left[\frac{d\mathbb P}{d\mathbb Q}G(X)\right]. $$ The variance of the Monte Carlo estimator obtained by simulating $X$ under the measure $\mathbb Q$ and correcting by the corresponding Radon-Nikodym density is different from that of the standard estimator, and can be made much smaller by a judicious choice of the sampling measure $\mathbb Q$. Importance sampling is most effective in the context of \emph{rare event simulation}, e.g., when the probability $\mathbb P[G(X)> 0]$ is small. Since the theory of large deviations is concerned with the study of probabilities of rare events, it is natural to use measure changes appearing in or inspired by the large deviations theory for importance sampling. We refer, e.g., to \cite{DupuisWang2004} and references therein for a review of this approach and to \cites{GlassermanEtAl1999,GuasoniRobertson2008,Robertson2010} for specific applications to financial models.
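A toy illustration of the identity above, outside the MV-SDE setting: to estimate the rare-event probability $\mathbb P[Z>c]$ for $Z\sim N(0,1)$, one can sample under a mean-shifted Gaussian and reweight by the likelihood ratio (a standard exponential-tilting example; the numbers and function name are ours):

```python
import numpy as np

def rare_event_is(c=4.0, n=100_000, seed=0):
    """Estimate P[Z > c], Z ~ N(0,1), plainly and with a mean shift to c.

    Under Q, Z ~ N(c, 1), and the likelihood ratio is
    dP/dQ(z) = exp(-c z + c^2 / 2).
    Returns (plain_estimate, is_estimate, is_std_error).
    """
    rng = np.random.default_rng(seed)
    z = rng.normal(size=n)
    plain = np.mean(z > c)                           # plain Monte Carlo
    zq = rng.normal(loc=c, size=n)                   # sample under Q
    w = np.exp(-c * zq + c * c / 2.0) * (zq > c)     # weight times indicator
    return plain, w.mean(), w.std(ddof=1) / np.sqrt(n)
```

With $c=4$ the plain estimator sees only a handful of hits out of $10^5$ samples, while under $\mathbb Q$ roughly half the samples land in the rare set, giving a small relative standard error.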
The large deviations theory, on the one hand, simplifies the computation of the candidate importance sampling measure, and on the other hand, allows one to define its optimality in a rigorous asymptotic framework. The main contribution of this paper is two-fold. Firstly, we show how one can apply a change of measure to MV-SDEs, and propose two algorithms that can carry this out: the \emph{complete measure change} algorithm and the \emph{decoupling} algorithm. In the complete measure change approach, the IS measure change is applied simultaneously in the coefficients and in the expectation to be evaluated. In the decoupling approach, we first estimate the law of the solution in a first set of simulations without measure change, and then perform a second set of simulations under the importance sampling measure using the approximate solution law computed in the first step. Secondly, for both approaches, we use large deviations techniques to obtain an optimisation problem for the candidate measure change. We focus on the class of Cameron-Martin transforms, under which the measure change is given by \begin{align} \label{Eq:Radon Nikodym Derivative} \frac{d\mathbb Q}{d\mathbb P}\Big|_{\mathcal F_T} = \cE\Big( \int_{0}^{T} f_{t} \dd W_{t}\Big) := \exp\left( \int_{0}^{T} f_{t} \dd W_{t} - \frac{1}{2} \int_{0}^{T} f_{t}^{2} \dd t \right), \end{align} where $f_t$ is a deterministic function. Following earlier works on the subject, we use the large deviations theory to construct a tractable proxy for the variance of $G(X)$ under the new measure. Of course, the presence of the interacting particle approximation introduces additional complexity at this point. Moreover, unlike the work of \cite{GuasoniRobertson2008}, which considered a very restrictive class of SDEs (the geometric Brownian motion), here we deal with a general class of MV-SDEs where the drifts are of super-linear growth and satisfy a monotonicity type condition.
This is very important in practice, since many MV-SDEs fall into this category. We then minimise the large deviations proxy to obtain a candidate optimal measure change for the two approaches that we consider. We find that the decoupling approach yields an easier optimisation problem than the complete measure change, which results in a high-dimensional problem. However, by using exchangeability arguments the latter problem can be transformed into a far simpler two-dimensional one. We implement both algorithms for two examples coming from the Kuramoto model from statistical physics and show that the variance of the importance sampling schemes is up to 3 orders of magnitude smaller than that of the standard Monte Carlo. Moreover, the computational cost only increases by a factor of 2--3 for the decoupling approach and is approximately the same as standard Monte Carlo for the complete measure change. We also estimate the propagation of chaos error and find that it is dominated by the statistical error by one order of magnitude. That being said, although the complete measure change appears to operate well in certain situations, it does rely on a change of measure that is not too ``large''. We come back to this point throughout. \medskip Concerning the measure change paradigm, in this work we focus on deterministic (open loop) measure changes over stochastic (feedback) measure changes. This is a decision one faces when using importance sampling, and there are advantages and disadvantages to both. As pointed out in \cite{GlassermanWang1997}, deterministic measure changes may lead to detrimental results in terms of variance reduction; however, the increase in computational time of the IS is overall negligible. Stochastic measure changes, as discussed in \cite{DupuisWang2004}, give improved variance reduction in far more generality; however, calculating the measure change is computationally burdensome, so the overall computational gain is less clear.
As this is the first paper to marry importance sampling with MV-SDEs, we feel it is beneficial to use deterministic measure changes and leave stochastic measure changes as interesting future work. We provide precise conditions under which our deterministic measure change leads to an asymptotically optimal importance sampling estimator in the class of all possible measure changes. Further, one of our algorithms requires a measure changed propagation of chaos result to hold (Proposition \ref{chaoscomplete}), and it is not clear how to prove such a result if one uses stochastic measure changes. \medskip The manuscript is organized as follows. In Section \ref{Sec:Representation} we gather the preliminary results. In Section \ref{sec:HowToIS} we discuss how importance sampling and measure changes can be carried out for MV-SDEs, and in Section \ref{Sec:Optimal Importance Sampling} we introduce our concept of optimality and identify the candidate optimal measure changes using the theory of large deviations. Section \ref{sec:Numerics} illustrates our results numerically, while the proofs from Section \ref{Sec:Optimal Importance Sampling} are carried out in Section \ref{Sec:Proofs}. \paragraph*{Acknowledgements} The authors would like to thank Daniel Lacker (Columbia University) for the helpful discussion. \section{Preliminaries} \label{Sec:Representation} Throughout the paper we work on a filtered probability space $(\Omega, \cF, (\cF_{t})_{t \ge 0}, \bP)$ satisfying the usual conditions, where $\cF_{t}$ is the augmented filtration of a standard multidimensional Brownian motion $W$. We consider some finite terminal time $T < \infty$ and use the following notation for spaces, which is standard in the McKean-Vlasov literature (see \cite{Carmona2016Lectures}). We define $\bS^{p}$, for $p \ge 1$, as the space of $\bR^{d}$-valued, $\cF_{\cdot}$-adapted processes $Z$ that satisfy $\bE [ \sup_{0 \le t \le T} |Z(t)|^{p}]^{1/p} < \infty$.
Similarly, $L_{t}^{p}(\bR^d)$ defines the space of $\bR^{d}$-valued, $\cF_{t}$-measurable random variables $X$ that satisfy $\bE [|X|^{p}]^{1/p} < \infty$. We will work with $\bR^d$, the $d$-dimensional Euclidean space of real numbers, and for $a=(a_1,\cdots,a_d)\in\bR^d$ and $b=(b_1,\cdots,b_d)\in\bR^d$ we denote by $|a|^2=\sum_{i=1}^d a_{i}^{2}$ the square of the usual Euclidean norm on $\bR^d$ and by $\langle a,b\rangle=\sum_{i=1}^d a_i b_i$ the usual scalar product. Given the measurable space $(\bR^{d}, \cB (\bR^{d}))$, we denote by $\cP(\bR^{d})$ the set of probability measures on this space, and write $\mu \in \cP_{2}(\bR^{d})$ if $\mu \in \cP(\bR^{d})$ and, for some $x \in \bR^{d}$, $\int_{\bR^{d}} |x-y|^{2} \mu(\dd y) < \infty$. We then have the following metric (the Wasserstein metric) on the space $\cP_2(\mathbb R^d)$, for $\mu, ~ \nu \in \cP_{2}(\bR^{d})$ (see \cite{dosReisSalkeldTugaut2017}), \begin{align*} W^{(2)}(\mu, \nu) = \inf_{\pi} \left\{ \left( \int_{\bR^{d} \times \bR^{d}} |x-y|^2 \pi( \dd x, \dd y) \right)^{\frac12} ~ : ~ \pi \in \cP(\bR^{d} \times \bR^{d}) ~ \text{with marginals $\mu$ and $\nu$} \right\} \, . \end{align*} \subsection{McKean-Vlasov stochastic differential equations} Let $W$ be an $l$-dimensional Brownian motion and take the progressively measurable maps $b:[0,T] \times \bR^d \times\cP_2(\bR^d) \to \bR^d$ and $\sigma:[0,T] \times \bR^d \times \cP_2(\bR^d) \to \bR^{d\times l}$. MV-SDEs are typically written in the form \begin{equation} \label{Eq:General MVSDE} \dd X_{t} = b(t,X_{t}, \mu_t)\dd t + \sigma(t,X_{t}, \mu_t)\dd W_{t}, \quad X_{0} =x_{0}, \end{equation} where $\mu_{t}$ denotes the law of the process $X$ at time $t$, i.e.~$\mu_t=\bP\circ X_t^{-1}$. Consider the following assumption on the coefficients.
\begin{assumption} \label{Ass:Monotone Assumption} Assume that $\sigma$ is Lipschitz in the sense that there exists $ L>0$ such that for all $t \in[0,T]$, all $x, x'\in \bR^d$ and all $\mu, \mu'\in \cP_2(\bR^d)$ we have that $$ |\sigma(t, x, \mu)-\sigma(t, x', \mu')|\leq L(|x-x'| + W^{(2)}(\mu, \mu') ), $$ and let $b$ satisfy \begin{enumerate} \item One-sided Lipschitz condition in $x$ and Lipschitz condition in the law: there exists $ L>0$ such that for all $t \in[0,T]$, all $ x, x'\in \bR^d$ and all $\mu, \mu'\in \cP_2(\bR^d)$ we have that $$ \langle x-x', b(t, x, \mu)-b(t, x',\mu) \rangle \leq L|x-x'|^{2} \quad \text{and} \quad |b(t, x, \mu)-b(t, x,\mu')| \le L\, W^{(2)}(\mu, \mu') . $$ \item Locally Lipschitz with polynomial growth in $x$: there exists $q \in \bN$ with $q>1$ such that for all $t \in [0,T]$, all $\mu \in \cP_{2}(\bR^{d})$ and all $x, ~ x' \in \bR^{d}$ the following holds: $$ |b(t, x, \mu)-b(t, x',\mu)| \leq L(1+ |x|^{q} + |x'|^{q}) |x-x'| . $$ \end{enumerate} \end{assumption} Under these assumptions, an existence and uniqueness result for the solution of the MV-SDE is given in \cite{dosReisSalkeldTugaut2017}. Note that this can be generalised to include random initial conditions. \begin{theorem}[\cite{dosReisSalkeldTugaut2017}*{Theorem 3.3}] \label{Thm:MV Monotone Existence} Suppose that $b$ and $\sigma$ satisfy Assumption \ref{Ass:Monotone Assumption} and are continuous in time. Further, assume that for some $m \ge 2$, $X_{0} \in L_{0}^{m}(\bR^{d})$. Then there exists a unique solution $X\in \bS^{m}([0,T])$ of the MV-SDE \eqref{Eq:General MVSDE}. For some positive constant $C$ we have \begin{align*} \mathbb E \big[ \sup_{t\in[0,T]} |X_{t}|^{m} \big] \leq C \Big(\bE[|X_0|^m] + \Big(\int_{0}^{T}|b(t,0, \delta_{0})| \dd t \Big)^{m} + \Big(\int_{0}^{T}|\sigma(t,0, \delta_{0})|^{2} \dd t \Big)^{m/2} \Big) e^{C T}.
\end{align*} \end{theorem} \subsection{Large Deviation Principles} In this section, we state the main results from the large deviations theory that we use throughout; for a full exposition the reader can consult texts such as \cite{DemboZeitouni2010} or \cite{DupuisEllis2011}. The large deviation principle (LDP) characterizes the limiting behaviour, as $\epsilon \rightarrow 0$, of a family of probability measures $\{ \mu_{\epsilon}\}$ in exponential scale on the space $(\cX , \cB_{\cX})$, with $\cX$ a topological space, so that open and closed subsets of $\cX$ are well-defined, and $\cB_{\cX}$ the Borel $\sigma$-algebra on $\cX$. The limiting behaviour is defined via a so-called rate function. We assume the probability spaces have been completed; consequently, $\cB_{\cX}$ is the complete Borel $\sigma$-algebra on $\cX$. We have the following definition \cite{DemboZeitouni2010}*{pg.4}. \begin{definition} [Rate function] A rate function $I$ is a lower semicontinuous mapping $I : \cX \rightarrow [0, \infty]$ (such that for all $\alpha \in [0,\infty)$, the level set $\Psi_{I}(\alpha) := \{x : I(x) \le \alpha \}$ is a closed subset of $\cX$). A good rate function is a rate function for which all the level sets $\Psi_{I}(\alpha)$ are compact subsets of $\cX$. The effective domain of $I$, denoted $D_{I}$, is the set of points in $\cX$ of finite rate, namely, $D_{I} := \{ x : I(x) < \infty \}$. \end{definition} We use the standard notation: for any set $\Gamma$, $\overline{\Gamma}$ denotes the closure and $\Gamma^{o}$ denotes the interior of $\Gamma$. As is standard practice in LDP theory, the infimum of a function over an empty set is interpreted as $\infty$. We then define what it means for this sequence of measures to have an LDP \cite{DemboZeitouni2010}*{pg.5}.
\begin{definition} A family of probability measures $\{ \mu_{\epsilon}\}$ with $\epsilon >0$ satisfies the large deviation principle with a rate function $I$ if, for all $\Gamma \in \cB_{\cX}$, \begin{align} -\inf_{x \in \Gamma^{o}} I(x) \le \liminf_{\epsilon \rightarrow 0} \epsilon \log \mu_{\epsilon}(\Gamma) \le \limsup_{\epsilon \rightarrow 0} \epsilon \log \mu_{\epsilon}(\Gamma) \le - \inf_{x \in \overline{\Gamma}} I(x) \, . \end{align} \end{definition} It is also typical to have the LDP defined in terms of a sequence of random variables $Z_{\epsilon}$, in which case one replaces $\mu_{\epsilon}(\Gamma)$ by $\bP[ Z_{\epsilon}\in \Gamma]$. The following result can be viewed as a generalisation of Laplace's approximation of integrals to the infinite dimensional setting and transfers the LDP from probabilities to expectations (see \cite{DemboZeitouni2010}). \begin{lemma} [Varadhan's Lemma] \label{Lem:Varadhan} Let $\{\mu_{\epsilon}\}$ be a family of measures that satisfies a large deviation principle with good rate function $I$. Furthermore, let $Z_{\epsilon}$ be a family of random variables in $\cX$ such that $Z_{\epsilon}$ has law $\mu_{\epsilon}$, and let $\varphi : \cX \rightarrow \bR$ be any continuous function that satisfies the following integrability (moments) condition for some $\gamma >1$, \begin{align*} \limsup_{\epsilon \rightarrow 0} \epsilon \log \bE \left[ \exp \left( \frac{\gamma}{\epsilon} \varphi (Z_{\epsilon}) \right) \right] < \infty \, . \end{align*} Then, \begin{align*} \lim_{\epsilon \rightarrow 0} \epsilon \log \bE \left[ \exp \left( \frac{1}{\epsilon} \varphi (Z_{\epsilon}) \right) \right] = \sup_{x \in \cX} \left\{ \varphi(x)-I(x) \right \} \, . \end{align*} \end{lemma} As is discussed in \cite{GuasoniRobertson2008}, one needs a slight extension of Varadhan's lemma to allow the function $\varphi$ to take the value $-\infty$. The extension is proved in \cite{GuasoniRobertson2008}.
\begin{lemma} Let $\varphi : \cX \rightarrow [-\infty, \infty)$ and assume the conditions in Lemma \ref{Lem:Varadhan} are satisfied. Then the following bounds hold for any $\Gamma \in \cB_{\cX}$ \begin{align*} \sup_{x \in \Gamma^{o}} \{ \varphi(x)-I(x) \} & \le \liminf_{\epsilon \rightarrow 0} \epsilon \log \left( \int_{\Gamma^{o}} \exp\left( \frac{1}{\epsilon} \varphi(Z_{\epsilon}) \right) \dd \mu_{\epsilon} \right) \\ & \le \limsup_{\epsilon \rightarrow 0} \epsilon \log \left( \int_{\overline{\Gamma}} \exp\left( \frac{1}{\epsilon} \varphi(Z_{\epsilon}) \right) \dd \mu_{\epsilon} \right) \le \sup_{x \in \overline{\Gamma}} \{ \varphi(x)-I(x) \} \, . \end{align*} \end{lemma} The previous lemma allows us to control the $\liminf$ and $\limsup$ even when they are not equal (in Varadhan's lemma they coincide). \subsection{Importance Sampling and large deviations} \label{sec:MinimiseVariance} To motivate our approach we recall ideas from the pioneering works \cite{GlassermanEtAl1999}, \cite{GuasoniRobertson2008} and \cite{Robertson2010}, which establish a connection between large deviations and importance sampling. Importance sampling uses the following idea. Consider the problem of estimating $\bE_{\bP}[G(X)]$, where $X$ is some random variable/process governed by the probability measure $\bP$. Through the Radon-Nikodym theorem we can rewrite this expectation under a new measure $\bQ$, weighted by the Radon-Nikodym derivative; thus $\bE_{\bP}[G(X)]=\bE_{\bQ}[G(X) \frac{\dd \bP}{\dd \bQ}]$. Although the expectations (first moments) are the same, the variance under $\bQ$ is \begin{align} \label{Eq:Variance under Q} \text{Var}_{\bQ} \Big[G(X) \frac{\dd \bP}{\dd \bQ} \Big]= \bE_{\bP}\Big[ G(X)^{2} \frac{\dd \bP}{\dd \bQ} \Big] -\bE_{\bP}\Big[G(X)\Big]^2 \, . \end{align} As it turns out, if one can choose $\frac{\dd \bQ}{\dd \bP} = \frac{G}{\bE_{\bP}[G]}$, then the variance under $\bQ$ is zero, i.e.\ we have no error in our Monte Carlo simulation.
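The zero-variance statement is easiest to see on a finite sample space; the following sketch (a toy example of ours, with arbitrary probabilities and a positive payoff) verifies it numerically:

```python
import numpy as np

# Finite sample space: outcomes with probabilities p and payoffs g > 0.
p = np.array([0.2, 0.5, 0.3])
g = np.array([1.0, 4.0, 10.0])
theta = float(p @ g)                      # E_P[G]

q = g * p / theta                         # optimal dQ/dP = G / E_P[G]
assert np.isclose(q.sum(), 1.0)           # q is a probability vector

# Under Q, the weighted payoff G * dP/dQ is constant in the outcome:
weighted = g * (p / q)
# every outcome contributes exactly theta, hence Var_Q = 0.
```

Each entry of `weighted` equals `theta` exactly, so a single sample under $\bQ$ already returns the true value; the catch, as noted next, is that building $\bQ$ requires knowing $\theta$ in the first place.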
Unfortunately, though, in order to choose such a change of measure one would need to know a priori the value of $\bE_{\bP}[G(X)]$, i.e.\ the value we wish to estimate in the first place. Instead, one typically chooses $\bQ$ to minimise \eqref{Eq:Variance under Q} over a set of equivalent probability measures, chosen so as to add only a small amount of extra computation and such that the process $X$ is easy to simulate under the new measure. Specializing to the Brownian filtration, a common choice of $\mathbb Q$ is given by the Girsanov transform \eqref{Eq:Radon Nikodym Derivative}, where $f$ is often taken to be a deterministic function. For example, in \cite{TengEtAl2016} the authors develop an importance sampling procedure in the context of Gaussian random vectors through a so-called ``tilting'' parameter, which corresponds to shifting the mean of the Gaussian random vector via a Girsanov transform. Although this method is intuitive, it still requires estimation of the Jacobian of $G$ w.r.t.\ the tilting parameter and applying Newton's method to select the optimal parameter value. These steps can be computationally expensive, and it is difficult to obtain rigorous optimality results. Even after one has reduced the set of measures $\bQ$ to optimise over, in general the problem of minimizing \eqref{Eq:Variance under Q} will not have a closed form solution. Thus we instead minimize a proxy for the variance obtained in the so-called small noise asymptotic regime, as discussed in \cite{GlassermanEtAl1999} and \cite{GuasoniRobertson2008}. Assuming that a Girsanov change of measure is used, we want to minimise \begin{align} \label{girsvar} \bE_{\bP}\left[ G(W)^{2} \frac{\dd \bP}{\dd \bQ} \right] = \bE_{\bP} \left[ \exp\left( 2 F(W) - \int_{0}^{T} f_{t} \dd W_{t} + \frac{1}{2} \int_{0}^{T} f_{t}^{2} \dd t \right) \right],\quad \textrm{with } F=\log (G).
\end{align} Typically $G$ is defined as a functional of the SDE, but here, with a slight abuse of notation, we have redefined it as a functional of the driving Brownian motion. It is important for this type of argument that we are able to write the solution of the SDE in terms of the Brownian motion as well, i.e.\ we can write $X_{t}=H(t, W_{\cdot})$. Finding the optimal $f$ by minimizing \eqref{girsvar} is in general intractable, hence an asymptotic approximation of the variance should be constructed. Let us consider \begin{align*} \epsilon \log \bE_{\bP} \left[ \exp\left( \frac{1}{\epsilon}\left(2 F(\sqrt{\epsilon} W) - \int_{0}^{T} \sqrt{\epsilon} f_{t} \dd W_{t} + \frac{1}{2} \int_{0}^{T} f_{t}^{2} \dd t \right) \right) \right] \, , \end{align*} which equals the $\log$ of \eqref{girsvar} when $\epsilon=1$. The \emph{small noise asymptotic approximation} is then \begin{align*} L(f) := \limsup_{\epsilon \rightarrow 0} \epsilon \log \bE_{\bP} \left[ \exp\left( \frac{1}{\epsilon}\left(2 F(\sqrt{\epsilon} W) - \int_{0}^{T} \sqrt{\epsilon} f_{t} \dd W_{t} + \frac{1}{2} \int_{0}^{T} f_{t}^{2} \dd t \right) \right) \right] \,. \end{align*} One then computes a candidate variance reduction parameter $f^*$ by minimizing $L(f)$, which can be thought of as approximating $\bE_{\bP}\left[ G(W)^{2} \frac{\dd \bP}{\dd \bQ} \right]$ by $\exp(L(f))$. Crucially, $L$ is in a form that can be evaluated using Varadhan's lemma, i.e., we can change $L$ into a supremum depending on the rate function. The parameter $f^*$, which minimises $L$ over some predefined space, is known as \emph{asymptotically optimal}, see \cite{GuasoniRobertson2008}. We will give a precise definition of this concept later. It is important to note that these approximations are not approximations of the original problem (to calculate $\bE_{\bP}[G(X)]$); they only serve to choose the change of measure we want to apply.
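A brute-force numerical analogue of this selection step, in the simplest possible case ($T=1$, constant shift $f$, payoff depending only on $W_T$), is to estimate the second moment $\bE_{\bP}[G(W)^2\, \dd\bP/\dd\bQ]$ on a grid of shifts and take the minimizer; this is not the paper's large-deviations proxy, just a Monte Carlo illustration of the same objective, with a hypothetical rare-event payoff:

```python
import numpy as np

def second_moment(f, w, G, T=1.0):
    """Monte Carlo estimate of E_P[G(W_T)^2 * dP/dQ] for a constant shift f."""
    return np.mean(G(w) ** 2 * np.exp(-f * w + 0.5 * f * f * T))

rng = np.random.default_rng(3)
w = rng.normal(size=200_000)                  # W_T under P, T = 1
G = lambda x: (x > 3.0).astype(float)         # hypothetical rare-event payoff

grid = np.linspace(0.0, 6.0, 61)              # candidate constant shifts f
m2 = [second_moment(f, w, G) for f in grid]
f_star = grid[int(np.argmin(m2))]             # empirical minimizer
```

For this payoff the minimizing shift sits slightly above the threshold $3$, and the minimized second moment is orders of magnitude below the unshifted one, mirroring the role of $f^*$ above; the large-deviations proxy $L(f)$ replaces this sampling-based search with an analytic optimisation.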
\section{Importance sampling for MV-SDEs} \label{sec:HowToIS} Leaving LDPs and the optimality of the IS (importance sampling) on the side, let us discuss how IS can be achieved for MV-SDEs with a given measure change. Recall that MV-SDEs take the form \eqref{Eq:General MVSDE}. Because we change the measure, we make explicit the dependence on the law of the solution process $\mu^X_{t,\bP} = \bP\circ X_t^{-1}$. If one knows the law $\mu^X$ beforehand, then one can treat the MV-SDE as a ``standard'' SDE and use IS as usual. However, typically one does not have access to the law, and the MV-SDE must be approximated by a so-called particle system approximation. \textbf{The interacting particle system approximation.} We approximate \eqref{Eq:General MVSDE} (driven by the $\bP$-Brownian motion $W^\bP$), using an $N$-dimensional system of interacting particles. Let $i=1, \dots, N$ and consider $N$ particles $X^{i,N}$ satisfying the SDE with ${X}_{0}^{i,N}=x_0$ \begin{align} \label{Eq:MV-SDE Propagation} \dd {X}_{t}^{i,N} = b\Big(t,{X}_{t}^{i,N}, \mu^{X,N}_{t} \Big) \dd t + \sigma\Big(t,{X}_{t}^{i,N} , \mu^{X,N}_{t} \Big) \dd W_{t}^{i, \bP}, \quad \mu^{X,N}_{t}(\dd x) := \frac{1}{N} \sum_{j=1}^N \delta_{X_{t}^{j,N}}(\dd x) \end{align} where $\delta_{{X}_{t}^{j,N}}$ is the Dirac measure at point ${X}_{t}^{j,N}$, and the $\bP$-Brownian motions $W^{i,\bP}$, $i=1,\dots,N$, are independent (and also independent of the BM $W^\bP$ appearing in \eqref{Eq:General MVSDE}). Due to the several changes of measure throughout this section, we keep track of which $W$ we refer to. \begin{remark}[On the empirical measure $\mu^{X,N}_{t}$] Unlike standard measures, empirical measures do not depend on the underlying measure $\bP$; namely, empirical measures are maps that depend on a sequence of $\omega^{i} \in \Omega$, thus one should write $\mu^{X,N}_{t}$ instead of $\mu^{X,N}_{t,\bP}$.
Of course, this is a pathwise statement; since the $\omega^{i}$ are generated under $\bP$, the \emph{distribution} of the empirical measure does depend on $\bP$. \end{remark} \textbf{Propagation of chaos.} In order to show that the particle approximation is of use, one shows a so-called propagation of chaos result. Although different types exist, a common one is the pathwise convergence result, where we consider the system of non-interacting particles \begin{align} \label{Eq:Non interacting particles} \dd X_{t}^{i} = b(t, X_{t}^{i}, \mu^{X^{i}}_{t,\bP}) \dd t + \sigma(t,X_{t}^{i}, \mu^{X^{i}}_{t,\bP}) \dd W_{t}^{i, \bP}, \quad X_{0}^{i}=x_0 \, ,\quad t\in [0,T] \, , \end{align} which are of course just MV-SDEs; since the $X^{i}$ are independent, $\mu^{X^{i}}_{t,\bP}=\mu^{X}_{t,\bP}$ for all $i$. Under sufficiently nice conditions, one can then prove the following convergence result (see \cite{Carmona2016Lectures}*{Theorem 1.10} for example) \begin{align*} \lim_{N \rightarrow \infty} \sup_{1 \le i \le N} \bE_{\bP} \left[ \sup_{0 \le t \le T} |X_{t}^{i,N} - X_{t}^{i}|^{2} \right] = 0 \, . \end{align*} Note that all SDEs appearing below have initial condition $x_{0}$ and are considered on the interval $[0,T]$. \textbf{Setup to change measures.} When changing the measure under which we simulate, we are also changing our approximation of the law. Since MV-SDEs depend explicitly on the law, this makes importance sampling more difficult; this will be one of the main points throughout this section. Fix a deterministic square-integrable function $\dot h\in L_{0}^2(\bR)$. Then one can define the probability measure $\bQ$ via the Girsanov transform $\frac{d \bQ}{d\bP}|_{\cF_{T}}:=\cE(\int_{0}^{T} \dot{h}_{t} \dd W_{t}^{\bP})$, see \eqref{Eq:Radon Nikodym Derivative}, so that $\dd W^{\bQ}_t= \dd W_{t}^{\bP}-\dot h_t \dd t$ defines a $\mathbb Q$-Brownian motion.
We note that the Radon-Nikodym density $\frac{\dd \bQ}{\dd \bP}|_{\cF_{t}}=\cE(\int_0^\cdot \dot h_s \dd W_s^{\bP})_t=:\cE_t$ is itself the solution of the SDE \begin{align*} \dd \cE_{t} = \dot h_t \cE_{t} \dd W_t^{\bP}, \quad \cE_{0}=1 \quad\Rightarrow\quad \cE_{t}=\exp\Big\{ \int_0^t \dot h_s \dd W_s^{\bP} - \frac12 \int_0^t |\dot h_s|^2 \dd s \Big\}. \end{align*} Since $\bP$ and $\bQ$ are equivalent, one can also define $Z_{t}:= \cE_{t}^{-1} =\frac{\dd \bP}{\dd \bQ}|_{\cF_{t}}$. With our conditions on $\dot{h}$, it is also a straightforward task to show that $\cE_{t}$ and $Z_{t}$ are in $\bS^{p}$ for all $p \ge 1$. Recall our goal: \emph{estimate $\bE_{\bP}[G(X_{T})]=\bE_{\bQ}[G(X_{T}) \frac{\dd \bP}{\dd \bQ}]$ for some function $G$ by simulating $X$ under $\bQ$}. In the following paragraphs we present two alternative ways to achieve this goal. \textbf{A running example.} We present our algorithm in a general setting with \eqref{Eq:General MVSDE}. For the sake of clarity and ease of presentation, we often have recourse to a particular class of MV-SDEs (under $\bP$), \begin{align} \label{Eq:Simple Example MV-SDE} \dd X_{t} = \hat{b}\big(t, X_{t}, \bE_{\bP}[f(X_{t})]\big) \dd t + \sigma\dd W_{t}^{\bP}, \quad X_{0}=x_0 \, ,\quad t\in [0,T] \, , \end{align} with $\sigma\in\bR_{+}$ and $f,\hat{b}$ nice\footnote{We use $\hat{b}$ here since it takes the expectation rather than a measure input.}. We believe many of the arguments that are used at this level can be extended to cover more general MV-SDEs (such as higher order interactions). However, obtaining analogous results to those of standard MV-SDEs, such as propagation of chaos, is made more challenging by the inclusion of the measure change. Therefore, these have to be considered on a case by case basis.
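As an illustration of the particle approximation \eqref{Eq:MV-SDE Propagation} applied to the running example \eqref{Eq:Simple Example MV-SDE}, the following Euler-scheme sketch simulates the particles under $\bP$. The concrete coefficients $\hat b(t,x,m)=-(x-m)$ and $f(x)=x$ (a mean-field Ornstein-Uhlenbeck model) are our own illustrative choices, not taken from the paper.

```python
import numpy as np

# Euler scheme (under P) for the particle system of the running example,
#   dX^i = bhat(t, X^i, (1/N) sum_j f(X^j)) dt + sigma dW^i,
# with the illustrative choices bhat(t, x, m) = -(x - m) and f(x) = x.
def simulate_particles(N=2000, M=100, T=1.0, sigma=0.5, x0=1.0, seed=1):
    rng = np.random.default_rng(seed)
    dt = T / M
    x = np.full(N, x0)
    for _ in range(M):
        m = x.mean()                               # empirical proxy for E_P[f(X_t)]
        dw = np.sqrt(dt) * rng.standard_normal(N)  # independent increments dW^{i,P}
        x = x + (m - x) * dt + sigma * dw          # one Euler step per particle
    return x

x_T = simulate_particles()
# for this drift the mean is preserved: (d/dt) E[X_t] = 0, hence E[X_T] = x0 = 1
print(x_T.mean())
```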
\subsection{Fixing the Empirical Law - a decoupling argument} \label{sec:DecouplingIS} An obvious way to solve the problem of IS is to approximate the law of the MV-SDE under $\mathbb P$ and use that as a fixed input to a new equation, which is then simulated under $\mathbb Q$. In this setup the McKean-Vlasov SDE turns into an SDE with random coefficients. The algorithm is as follows. \begin{enumerate} \item Use \eqref{Eq:MV-SDE Propagation} with $N$ particles to approximate \eqref{Eq:General MVSDE}. Use some numerical scheme (under $\bP$, say Euler) to simulate the particles in time, calculating an empirical law over $[0,T]$. This gives an approximation of the empirical law $\mu^{N}_{t}$, which is then held fixed. Define a new SDE, approximating the original MV-SDE \eqref{Eq:General MVSDE}, which is now a \emph{standard SDE with random coefficients} \begin{align} \label{eq:Decoupled SDE} \dd \bar X_{t} = b(t, \bar X_{t}, \mu^{N}_{t}) \dd t + \sigma(t,\bar X_{t}, \mu^{N}_{t}) \dd W_{t}^{\bP}, \quad \bar X_{0}=x_0, \end{align} where $W^{\bP}$ is a $\bP$-Brownian motion independent of the $\{W^{i,\bP}\}_{i=1,\cdots,N}$ appearing in \eqref{Eq:MV-SDE Propagation}. SDEs with random coefficients appear typically in optimal control, hence the reader can consult texts such as \cite{YongZhou1999}*{Chapter 1} for further details on the existence and uniqueness of solutions to such SDEs. \item Change the probability measure to $\bQ$, which is our importance sampling measure change, and simulate \eqref{eq:Decoupled SDE} under this new measure, i.e. \begin{align*} \dd \bar{X}_{t} = \left( b(t, \bar{X}_{t}, \mu^{N}_{t}) + \dot{h}_{t} \sigma(t, \bar{X}_{t}, \mu^{N}_{t}) \right) \dd t + \sigma(t, \bar{X}_{t}, \mu^{N}_{t}) \dd W_{t}^{\bQ}, \quad \bar{X}_{0}=x_0 \, . \end{align*} \item This second run is therefore standard importance sampling, but the SDE has random coefficients, i.e. the empirical law is random. \end{enumerate} We will refer to algorithms of this form as \emph{Decoupling Algorithms}.
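The two runs of the Decoupling Algorithm can be sketched compactly for the running example \eqref{Eq:Simple Example MV-SDE}. All concrete choices below (the coefficients $\hat b(t,x,m)=-(x-m)$, $f(x)=x$, the payoff and the constant drift $\dot h$) are our own illustrative assumptions.

```python
import numpy as np

# Decoupling Algorithm sketch for dX = -(X - E_P[X]) dt + sigma dW (illustrative).
# Run 1: particle system under P, freezing the empirical law (here: its mean).
# Run 2: decoupled SDE under Q with drift shift sigma*hdot and likelihood
#        ratio dP/dQ = exp(-int hdot dW^Q - 0.5 int hdot^2 dt).
def decoupling_is(G, hdot, N=2000, M=100, T=1.0, sigma=0.5, x0=0.0, seed=2):
    rng = np.random.default_rng(seed)
    dt = T / M
    x = np.full(N, x0)                       # run 1 (under P)
    means = np.empty(M)
    for k in range(M):
        means[k] = x.mean()
        x = x + (means[k] - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
    xb = np.full(N, x0)                      # run 2 (under Q), frozen law `means`
    logz = np.zeros(N)                       # log of dP/dQ along each path
    for k in range(M):
        dwq = np.sqrt(dt) * rng.standard_normal(N)
        xb = xb + ((means[k] - xb) + sigma * hdot) * dt + sigma * dwq
        logz += -hdot * dwq - 0.5 * hdot**2 * dt
    return np.mean(G(xb) * np.exp(logz))

# rare-ish payoff: without IS almost no particle ends above the strike
est = decoupling_is(G=lambda x: np.maximum(x - 1.0, 0.0), hdot=2.0)
print(est)
```

The drift shift pushes paths towards the region where the payoff is nonzero, while the likelihood ratio corrects the bias.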
This scheme has the disadvantage that it requires twice the amount of simulation, and one will require a handle on the error coming from the original approximation of the law. It is not a requirement to use interacting particles to approximate the law of the SDE; any approximation will work. The goal here is to make the SDEs independent. \subsection{Complete Measure Change} \label{Sec:Complete Measure Change} An alternative is to change the measure under which we are simulating in the coefficients \emph{and} the Brownian motion. This is not a simple problem and, as far as we are aware, changing the measure of a MV-SDE and its particle approximation is not discussed elsewhere in the literature (for this purpose\footnote{Measure changes for MV-SDEs appear in methods requiring to remove the drift altogether, for instance in establishing weak solutions to MV-SDEs, see e.g. \cite{DawsonGaertner1987-DG1987}.}); we therefore provide a discussion along with the pitfalls here. This is more complex than the decoupled case and for clarity we use \eqref{Eq:Simple Example MV-SDE} throughout. The measure changed version of \eqref{Eq:Simple Example MV-SDE} takes the form \begin{align*} \dd X_{t} &= \Big( \hat{b}\big(t, X_{t}, \bE_{\bP}[f(X_{t})]\big) + \sigma \dot{h}_{t} \Big) \dd t + \sigma\dd W_{t}^{\bQ} \\ &=\Big( \hat{b}\big(t, X_{t}, \bE_{\bQ} \big[ f(X_{t}) Z_{t}\big]\big) + \sigma \dot{h}_{t} \Big) \dd t + \sigma\dd W_{t}^{\bQ} \, , \end{align*} where again $Z := \cE^{-1}$. In view of simulation, we re-write the measure changed MV-SDE as a system \begin{align*} \dd X_{t} &= \Big( \hat{b} \Big(t, X_{t}, \bE_{\bQ} \big[ F(X_t,Z_t)\big]\Big) + \sigma \dot{h}_{t} \Big) \dd t + \sigma\dd W_{t}^{\bQ}, \quad \text{and} \quad \dd Z_{t} = -\dot h_t Z_{t} \dd W_{t}^{\bQ}, \quad Z_0=1 \, , \end{align*} where $F(x,y)=f(x) y$.
We now write the interacting particle system for the pair $(X,Z)$ under $\bQ$: \begin{align} \label{Eq:interacting particles-v01} \dd X_{t}^{i,N} &= \Big( \hat{b} \big(t, X_{t}^{i,N}, \frac1N \sum_{j=1}^N F(X_{t}^{j,N},Z^{j,N}_t) \big) + \sigma \dot{h}_{t} \Big) \dd t + \sigma\dd W_{t}^{i,\bQ}, \\ \dd Z^{i,N}_t &=-\dot h_t Z^{i,N}_t \dd W^{i,\bQ}_t, \quad Z_{0}^{i,N} =1 \, . \notag \end{align} The importance sampling estimator of $\theta = \mathbb E^{\mathbb P}[G(X_T)]$ then takes the form \begin{align} \hat\theta_h = \frac{1}{N}\sum_{i=1}^N Z^{i,N}_T G(X^{i,N}_T). \label{estimatorcomplete} \end{align} \begin{remark} One may be tempted to use a different approach, namely to first apply an interacting particle approximation under $\mathbb P$, which yields \begin{align*} \dd X_{t}^{i,N} &= \hat{b} \big(t, X_{t}^{i,N}, \frac1N \sum_{j=1}^N f(X_{t}^{j,N}) \big) \dd t + \sigma\dd W_{t}^{i,\bP}, \end{align*} and then change the measure for the particle system, writing \begin{align*} \dd X_{t}^{i,N} &= \Big(\hat{b} \big(t, X_{t}^{i,N}, \frac1N \sum_{j=1}^N f(X_{t}^{j,N}) \big) + \sigma\dot h_t\Big)\dd t + \sigma\dd W_{t}^{i,\bQ}, \end{align*} where we have taken the same $\dot h$ for every Brownian motion in order for all particles to have the same law. However, it is easy to see by the standard propagation of chaos result that as $N\to \infty$, this particle system converges to the solution of the MV-SDE $$ \dd X_{t} = \Big(\hat{b} \big(t, X_{t}, \mathbb E^{\mathbb Q}[f(X_t)] \big) + \sigma\dot h_t\Big)\dd t + \sigma\dd W_{t}^{\bQ}=\hat{b} \big(t, X_{t}, \mathbb E^{\mathbb Q}[f(X_t)] \big) \dd t + \sigma\dd W_{t}^{\bP}, $$ whose interaction involves the law under $\bQ$ rather than under $\bP$, which is not what one is looking for.
\end{remark} To state a propagation of chaos result for the particle system \eqref{Eq:interacting particles-v01}, we introduce the auxiliary system of non-interacting particles \begin{align} \label{eq:non-interacting particles-v01} \dd X_{t}^{i} & = \Big( \hat{b} \Big(t, X_{t}^{i}, \bE_{\bQ} \big[ F(X_{t}^{i},Z_{t}^{i})\big] \Big) + \sigma \dot{h}_{t} \Big) \dd t + \sigma\dd W_{t}^{i,\bQ}\, , \\ \nonumber \dd Z^{i}_t &= -\dot h_t Z^{i}_t \dd W^{i,\bQ}_t, \quad Z^{i}_0=1 \, . \end{align} \begin{proposition}\label{chaoscomplete} Consider the following measure changed MV-SDE (see \eqref{eq:non-interacting particles-v01}), \begin{align} \label{Eq:Measure changed example} \dd X_{t}^{i} & = \Big( \hat{b}\Big(t, X_{t}^{i}, \bE_{\bQ} \Big[ f(X_{t}^{i}) Z_{t}^{i} \Big]\Big) + \sigma \dot{h}_{t} \Big) \dd t + \sigma\dd W_{t}^{i,\bQ}\, , \quad \dd Z_{t}^{i} = -\dot h_t Z_{t}^{i} \dd W^{i,\bQ}_t, \quad Z_{0}^{i}=1 \, , \end{align} where $\hat{b}$ is continuous in time, $\hat{b}$ and $f$ are Lipschitz in space, and $\hat{b}$ is a bounded Lipschitz function in its third variable. Let $X_{t}^{i,N}$ denote the corresponding particle approximation (see \eqref{Eq:interacting particles-v01}). Then the following pathwise propagation of chaos result holds, \begin{align*} \lim_{N \rightarrow \infty} \sup_{1 \le i \le N} \bE_{\bQ} \left[ \sup_{0 \le t \le T} |X_{t}^{i,N} - X_{t}^{i}|^{2} \right] =0 \, . \end{align*} \end{proposition} This proposition may be used to analyze the convergence of the Monte Carlo estimator \eqref{estimatorcomplete}. Indeed, since there is no coupling (or law dependency) in $Z_{t}^{i,N}$, we have $Z^{i,N}= Z^{i}$ and $\hat \theta_h$ can be represented as follows: $$ \hat \theta_h = \frac{1}{N}\sum_{i=1}^N Z^{i}_T G(X^{i}_T) + \frac{1}{N}\sum_{i=1}^N Z^{i}_T (G(X^{i,N}_T) -G(X^{i}_T) ).
$$ The first term above converges to $\theta$ as $N\to \infty$ by the law of large numbers, and the second term can be shown, e.g., to converge to zero in probability using Proposition \ref{chaoscomplete} if $G$ is sufficiently regular. \begin{proof}[Proof of Proposition \ref{chaoscomplete}] The idea of the proof is to appeal to a Gronwall type inequality, but this is made difficult by the presence of the $Z$ term in \eqref{Eq:Measure changed example}. Note that, due to the assumptions on the coefficients of the SDE, all $p$-moments exist. Using our prescribed form of the MV-SDE we obtain, \begin{align*} |X_{t}^{i,N} - X_{t}^{i}|^{2} \le C \int_{0}^{t} \big| \hat{b} \Big(s, X_{s}^{i,N}, \frac{1}{N} \sum_{j=1}^{N} f(X_{s}^{j,N}) Z_{s}^{j} \Big) - \hat{b} \left(s, X_{s}^{i}, \bE_{\bQ}[f(X_{s}^{i}) Z_{s}^{i}] \right) \big|^{2} \dd s \, . \end{align*} Let $s \in [0,T]$, then introduce the terms $\hat{b} \Big(s, X_{s}^{i}, \frac{1}{N} \sum_{j=1}^{N} f(X_{s}^{j,N})Z_{s}^{j} \Big)$ and $\hat{b} \Big(s, X_{s}^{i}, \frac{1}{N} \sum_{j=1}^{N} f(X_{s}^{j})Z_{s}^{j} \Big)$, where the empirical measure in the second term is the one constructed from the i.i.d. SDEs in \eqref{Eq:Measure changed example}; hence each $X^{j}$ corresponds to an independent realisation of the MV-SDE, namely it has the correct distribution. Splitting the original difference into three, we use the Lipschitz property in space for the first one to obtain, \begin{align*} \big| \hat{b} \Big(s, X_{s}^{i,N}, \frac{1}{N} \sum_{j=1}^{N} f(X_{s}^{j,N}) Z_{s}^{j} \Big) - \hat{b} \Big(s, X_{s}^{i}, \frac{1}{N} \sum_{j=1}^{N} f(X_{s}^{j,N})Z_{s}^{j} \Big) \big|^{2} \le C|X_{s}^{i,N}-X_{s}^{i}|^{2} \, .
\end{align*} For the second difference we use the fact that $\hat{b}$ is bounded along with the Lipschitz property in the third variable, which yields \begin{align*} &\big| \hat{b} \Big(s, X_{s}^{i}, \frac{1}{N} \sum_{j=1}^{N} f(X_{s}^{j,N})Z_{s}^{j} \Big) - \hat{b} \Big(s, X_{s}^{i}, \frac{1}{N} \sum_{j=1}^{N} f(X_{s}^{j})Z_{s}^{j} \Big) \big|^{2} \\ & \le C\big| \hat{b} \Big(s, X_{s}^{i}, \frac{1}{N} \sum_{j=1}^{N} f(X_{s}^{j,N})Z_{s}^{j} \Big) - \hat{b} \Big(s, X_{s}^{i}, \frac{1}{N} \sum_{j=1}^{N} f(X_{s}^{j})Z_{s}^{j} \Big) \big| \le C \frac{1}{N} \sum_{j=1}^{N} Z_{s}^{j}|X_{s}^{j,N} - X_{s}^{j}| \, . \end{align*} Finally, again from the Lipschitz property we obtain, \begin{align*} \big| \hat{b} \Big(s, X_{s}^{i}, \frac{1}{N} \sum_{j=1}^{N} f(X_{s}^{j})Z_{s}^{j} \Big) - \hat{b} \left(s, X_{s}^{i}, \bE_{\bQ}[f(X_{s}^{i}) Z_{s}^{i}] \right) \big|^{2} \le C \big|\frac{1}{N} \sum_{j=1}^{N} f(X_{s}^{j}) Z_{s}^{j} - \bE_{\bQ}[f(X_{s}^{i}) Z_{s}^{i}] \big|^{2} \, . \end{align*} Hence the following bound holds, \begin{align*} & \bE_{\bQ} \Big[ \sup_{0 \le t \le T} |X_{t}^{i,N} - X_{t}^{i}|^{2} \Big] \\ & \le C \int_{0}^{T} \Big( \bE_{\bQ} [ |X_{s}^{i,N}-X_{s}^{i}|^{2}] + \frac{1}{N} \sum_{j=1}^{N} \bE_{\bQ}\Big[ Z_{s}^{j}|X_{s}^{j,N} - X_{s}^{j}| \Big] + \bE_{\bQ} \Big[ \big|\frac{1}{N} \sum_{j=1}^{N} f(X_{s}^{j}) Z_{s}^{j} - \bE_{\bQ}[f(X_{s}^{i}) Z_{s}^{i}] \big|^{2} \Big] \Big) \dd s \, . \end{align*} One can use Cauchy-Schwarz along with the properties of $Z$ to obtain, \begin{align*} \bE_{\bQ}\Big[ Z_{s}^{j}|X_{s}^{j,N} - X_{s}^{j}| \Big] \le C \bE_{\bQ}\Big[ |X_{s}^{j,N} - X_{s}^{j}|^{2} \Big]^{\frac12} \le C \bE_{\bQ}\Big[ \sup_{0 \le u \le s} |X_{u}^{j,N} - X_{u}^{j}|^{2} \Big]^{\frac12} \, . \end{align*} Although at first it appears one cannot use Gronwall here, there is a nonlinear generalisation due to Perov (see \cite{MitrinovicEtAl2012}*{Theorem 1, p360}) which we can use, since the nonlinear term on the RHS is the square root of the term on the left.
Finally, taking the supremum over $i$ and using the fact that the variables $f(X_{s}^{j}) Z_{s}^{j}$ are i.i.d. and square integrable, we obtain, \begin{align*} \sup_{1 \le i \le N}\bE_{\bQ} \Big[ \sup_{0 \le t \le T} |X_{t}^{i,N} - X_{t}^{i}|^{2} \Big] &\le C e^{C}\int_{0}^{T} \bE_{\bQ} \Big[ \big|\frac{1}{N} \sum_{j=1}^{N} f(X_{s}^{j}) Z_{s}^{j} - \bE_{\bQ}[f(X_{s}^{i}) Z_{s}^{i}] \big|^{2} \Big] \dd s \, \\ &\leq \frac{ C e^{C}}{N}\int_{0}^{T}\bE_{\bQ} \Big[ \big| f(X_{s}^{1}) Z_{s}^{1} - \bE_{\bQ}[f(X_{s}^{1}) Z_{s}^{1}] \big|^{2} \Big] \dd s \,\to 0 \end{align*} as $N\to \infty$, which concludes the proof. \end{proof} \subsubsection*{The Complete Measure Change Algorithm} We now describe the algorithm for simulating a general MV-SDE under a complete measure change. \begin{enumerate} \item Simulate the following particle system for the MV-SDE after the measure change: \begin{align*} \dd X_{t}^{i,N} & = \left( b \left( t, X_{t}^{i,N}, \frac{1}{N} \sum_{j =1}^{N} Z_{t}^{j} \delta_{X_{t}^{j,N}} \right) + \dot{h}_{t} \sigma \left( t,X_{t}^{i,N}, \frac{1}{N} \sum_{j =1}^{N} Z_{t}^{j} \delta_{X_{t}^{j,N}} \right) \right) \dd t \\ & \qquad \qquad + \sigma \left( t,X_{t}^{i,N}, \frac{1}{N} \sum_{j =1}^{N} Z_{t}^{j} \delta_{X_{t}^{j,N}} \right) \dd W_{t}^{i,\bQ}, \\ \dd Z_{t}^{i} &= -\dot h_t Z_{t}^{i} \dd W^{i,\bQ}_t, \quad Z_{0}^{i}=1\, . \end{align*} \item Compute the importance sampling estimator using the following formula: $$ \hat\theta_h = \frac{1}{N}\sum_{i=1}^N Z^{i}_T G(X^{i,N}_T). $$ \end{enumerate} We will refer to algorithms of this form as \emph{Complete Measure Change Algorithms}. An advantage one can immediately see is that one simulates the particles only once. A key disadvantage is that the importance sampling used to estimate the object of interest $\bE[G(X_{T})]$ may yield a poorer estimation of the original law $\mu$ and of the term $\bE_\bQ[f(X_t)Z_t]$ in \eqref{eq:non-interacting particles-v01}. We will discuss this in Section \ref{sec:Numerics}.
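For the running example \eqref{Eq:Simple Example MV-SDE}, with the same illustrative choices as before ($\hat b(t,x,m)=-(x-m)$, $f(x)=x$, a constant $\dot h$; all our own assumptions rather than the paper's), the Complete Measure Change Algorithm can be sketched as follows. Note how the interaction uses the $Z$-weighted empirical mean, which approximates $\bE_{\bQ}[f(X_t)Z_t]=\bE_{\bP}[f(X_t)]$.

```python
import numpy as np

# Complete Measure Change Algorithm sketch (illustrative coefficients):
# simulate the pair (X^{i,N}, Z^i) under Q in a single run, with the
# Z-weighted empirical mean (1/N) sum_j Z^j f(X^{j,N}) ~ E_P[f(X_t)].
def complete_measure_change_is(G, hdot, N=2000, M=100, T=1.0,
                               sigma=0.5, x0=0.0, seed=3):
    rng = np.random.default_rng(seed)
    dt = T / M
    x = np.full(N, x0)
    z = np.ones(N)                       # Z^i = dP/dQ along particle i
    for _ in range(M):
        m = np.mean(z * x)               # weighted empirical mean (f(x) = x)
        dwq = np.sqrt(dt) * rng.standard_normal(N)
        x = x + ((m - x) + sigma * hdot) * dt + sigma * dwq
        # exact step for dZ = -hdot Z dW^Q (constant hdot), avoids sign flips
        z = z * np.exp(-hdot * dwq - 0.5 * hdot**2 * dt)
    return np.mean(z * G(x))             # weighted estimator hat(theta)_h

est = complete_measure_change_is(G=lambda x: np.maximum(x - 1.0, 0.0), hdot=2.0)
print(est)
```

In contrast to the Decoupling Algorithm, the particles are simulated only once, but the $Z$-weights enter the law approximation itself, which can degrade the estimate of the interaction term.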
\section{Optimal Importance Sampling for McKean-Vlasov SDEs} \label{Sec:Optimal Importance Sampling} The previous section detailed algorithms for simulating MV-SDEs under an arbitrary change of measure. We now use the theory of large deviations to determine, in a certain optimal way, a measure change which will reduce the variance of the estimate. An important point here is that we will be using the LDP for Brownian motion, rather than that for MV-SDEs. There are several works dealing with large deviations for MV-SDEs and their associated interacting particle systems, see \cite{BudhirajaDupuisFischer2012}, \cite{Fischer2014}, \cite{dosReisSalkeldTugaut2017}, but such results are not of use here, since we must be able to cheaply simulate the MV-SDE after the change of measure. We restrict ourselves to Girsanov measure changes, since we know how the SDE changes under such a measure change. In this section we first show how the LDP framework can be applied to both algorithms to yield a simplified optimisation problem for finding the asymptotically optimal measure change (Theorems \ref{Thm:Complete Measure Change} and \ref{Thm:Decoupled}), and then demonstrate how these simplified optimization problems can be solved in practice. \subsection{Preliminaries} \label{Sec:GR08 Results} We recall some of the main concepts for importance sampling with LDPs; see \cite{GuasoniRobertson2008} and \cite{GlassermanEtAl1999} for further discussion. We denote by $\bW^d_{T}$ the standard $d$-dimensional Wiener space of continuous functions over the time interval $[0,T]$ which are zero at time zero; in the one-dimensional case we simply write $\bW_T$ instead of $\bW^1_T$. This space is endowed with the topology of uniform convergence and with the usual Wiener measure $\bP$, defined on the completed filtration $\cF_{T}$, which makes the process $\mathbf{W}_{t}(x)=x_{t}$ with $x \in \bW_{T}^{d}$ a standard $d$-dimensional Brownian motion.
The goal is to estimate the expected value of some functional $\tilde{G}:\bW_{T}^{d} \rightarrow \bR_{+}$, continuous in the uniform topology ($\tilde{G}$ is explained later). For the change of measure, one considers a Girsanov transform where the allowed functions are from the Cameron-Martin space of absolutely continuous functions with square integrable derivative, i.e. (if $d=1$ we just write $\bH_T=\bH^1_T$) \begin{align*} \bH^d_{T} = \left \{ h:[0,T]\mapsto \bR^d: ~ h_{0}=0 \, , ~ h_{\cdot}=\int_0^\cdot \dot{h}_{t}dt\, , ~ ~ \int_{0}^{T} |\dot{h}_{t}|^{2} \, \dd t < \infty\ \textrm{\ i.e. \ } \dot h_{t} \in L_{t}^2(\bR^d) \right \}. \end{align*} For any deterministic drift $h \in \bH^d_{T}$, the stochastic exponential defines the Radon-Nikodym derivative of an equivalent measure $\bQ$, namely ($W^\bP$ is a standard $\bP$-Brownian motion) \begin{align} \label{Eq:Deterministic Measure Change} \frac{\dd \bQ}{\dd \bP} = \exp \Big( \int_{0}^{T} \dot{h}_{t} \dd W_{t}^{\bP} - \frac{1}{2} \int_{0}^{T} |\dot{h}_{t}|^{2} \, \dd t \Big). \end{align} Under this new measure $\bQ$, the process $W_{\cdot}^{\bQ}= W_{\cdot}^{\bP}-h_{\cdot}$ is a standard $d$-dimensional $\bQ$-Brownian motion. \subsubsection*{Standing assumptions} We consider MV-SDEs with nonlinear interaction between the SDE and its law. In this section we concentrate on one-dimensional SDEs of the form \begin{align} \label{Eq:More General MV-SDE} \dd X_{t}= b(t,X_{t}, \mu_{t}) \dd t + \sigma \dd W_{t}, \qquad X_0=x_0. \end{align} Throughout this section the following assumptions are assumed to hold (similar to those in Section \ref{Sec:Representation}), for functions $b:[0,T] \times \bR \times\cP_2(\bR) \to \bR$ and $\sigma>0$ constant.
\begin{assumption} \label{Ass:Drift Lipschitz Assumption} Assume that $b$ is Lipschitz in the sense that $\exists L>0$ such that $\forall t \in[0,T]$, $\forall x, x'\in \bR$ and $\forall \mu, \mu'\in \cP_2(\bR)$ we have that $$ |b(t, x, \mu)-b(t, x',\mu')| \leq L(|x-x'| + W^{(2)}(\mu, \mu') ). $$ Moreover, $\forall$ $x\in \bR$ and $\mu \in \cP_{2}(\bR)$, $b$ is continuous over the interval $[0,T]$. \end{assumption} \begin{assumption} \label{Ass:Drift Monotone Assumption} Assume $b$ satisfies the one-sided Lipschitz growth and local Lipschitz conditions in Assumption \ref{Ass:Monotone Assumption}. Further, $\forall$ $x\in \bR$ and $\mu \in \cP_{2}(\bR)$, let $b$ be continuous in time over the interval $[0,T]$. \end{assumption} In view of Section \ref{Sec:Representation}, either of these assumptions yields the existence of a unique strong solution to \eqref{Eq:More General MV-SDE}. We further use the following assumption for the terminal function $G$. Note that this assumption is on $G$ as a function of the SDE, rather than the driving Brownian motion as is the case in \cite{GuasoniRobertson2008}. \begin{assumption} \label{Ass:General Payoff Growth} The functional $G$ is non-negative, continuous and satisfies the following growth condition \begin{align*} \log(G(x)) \le C_{1} + C_{2} \sup_{t \in [0,T] } |x_{t}|^{\alpha} \, , \end{align*} for $x:[0,T]\mapsto \bR$ a continuous function starting at $x_0$, where $C_{1},~C_{2}$ are positive constants and $\alpha \in [1,2)$. \end{assumption} The notion of ``optimality'' for the measure change that we use is so-called \emph{asymptotic optimality}, as defined in\footnote{A related but slightly weaker definition of optimality is used in \cite{GuasoniRobertson2008}.} \cite{GlassermanEtAl1999}. Following the approach of \cite{GlassermanEtAl1999}, we want to estimate $\bE[ \exp(\log(G(X))) ]$.
Here we perform a measure change for the Brownian motion, so for ease of writing let us define $F(W) := \log(G(X(W)))$ and consider the more general problem of estimating \begin{align*} \alpha( \epsilon) := \bE[ \exp( F(\sqrt{\epsilon}W)/\epsilon)], \qquad \text{for } \epsilon>0 . \end{align*} This is our original problem when $\epsilon=1$, and we can use Varadhan's lemma to understand this quantity as $\epsilon \rightarrow 0$; this is referred to as \emph{small noise asymptotics}. We now consider a general estimator $\hat{\alpha}(\epsilon)$ of this quantity (there is no requirement for $\hat{\alpha}$ to be based on a deterministic measure change). At this point we impose no conditions on these estimators, and we follow \cite{GlassermanEtAl1999}*{Definition 2.1}. \begin{definition} \label{Defn:Asymptotically Unbiased} A family of estimators $\{ \hat{\alpha}(\epsilon)\}$ is said to be \emph{asymptotically relatively unbiased} if the following holds, \begin{align*} \frac{\bE[\hat{\alpha}(\epsilon)] - \alpha(\epsilon)} {\alpha(\epsilon)} \rightarrow 0 \quad \text{as } \epsilon \rightarrow 0 \, . \end{align*} \end{definition} The above definition yields estimators that in some sense converge, but we are interested in comparing such estimators, and for this we look at their second moment. \begin{definition} \label{Defn:General Asymptotic Optimality} A family of asymptotically relatively unbiased estimators $\{\hat{\alpha}_{0}(\epsilon)\}$ is said to be \emph{asymptotically optimal} if, \begin{align*} \limsup_{\epsilon \rightarrow 0} \epsilon \log \bE [\hat{\alpha}_{0}(\epsilon)^{2}] = \inf_{\{\hat{\alpha}(\epsilon)\}} \limsup_{\epsilon \rightarrow 0} \epsilon \log \bE [\hat{\alpha}(\epsilon)^{2}], \end{align*} where the infimum is over all asymptotically relatively unbiased estimators. \end{definition} One of the goals of this section will be to obtain conditions under which measure changes of type \eqref{Eq:Deterministic Measure Change} are asymptotically optimal.
As it turns out, using this definition it is not difficult to obtain a necessary and sufficient condition for asymptotic optimality; a similar argument is given in \cite{GlassermanEtAl1999}*{pg 126}. Let us consider some asymptotically relatively unbiased estimator $\hat{\alpha}$ and define the difference $\Delta (\epsilon) := \bE[ \hat{\alpha}(\epsilon)] - \alpha(\epsilon)$. It is then a straightforward consequence of Jensen's inequality and some rearranging that \begin{align*} \log( \bE[\hat{\alpha}(\epsilon)^{2}]) \ge 2 \log( \bE[\hat{\alpha}(\epsilon)]) = 2 \log( \alpha (\epsilon)) + O(\Delta(\epsilon)/ \alpha(\epsilon)) \, . \end{align*} Since $\Delta(\epsilon)/\alpha(\epsilon) \rightarrow 0$ as $\epsilon \rightarrow 0$, this yields an asymptotic lower bound for any estimator; moreover, note that this implies the degenerate estimator $\hat{\alpha}(\epsilon) = \alpha(\epsilon)$ is asymptotically optimal, since $\alpha$ is not random. One can use Varadhan's lemma and Schilder's theorem (see Section \ref{Sec:Proofs Complete Measure Change}), since we are dealing with Brownian motion, to obtain \begin{align} \label{Eq:General Asymptotically Optimal Condition} \limsup_{\epsilon \rightarrow 0} 2 \epsilon \log(\alpha(\epsilon)) = \sup_{u \in \bH_{T}^{d}} \left\{ 2\log(G(X(u))) - \int_{0}^{T} |\dot{u}_{t}|^{2} \dd t \right\}. \end{align} Therefore any estimator whose second moment attains the bound given by the RHS of this expression is asymptotically optimal. Depending on which algorithm we use this will be a slightly different expression, but the argument to obtain the bound is the same. \subsection{The decoupling algorithm} \label{Sec:Measure Change for Decoupling} We first consider the decoupling algorithm presented in Section \ref{sec:DecouplingIS}.
We build $\mu_{t}^{N}$ from an independent $N$-particle system, simulated with a numerical scheme, and then consider the following approximation\footnote{The measure $\mu^{N}$ is a random measure, but it is independent of the process $\overline{X}$; thus we have decoupled the SDE.} of the SDE \eqref{Eq:More General MV-SDE}, \begin{align} \label{Eq:Particle Approx General MV-SDE} \dd \overline{X}_{t}= b(t,\overline{X}_{t}, \mu_{t}^{N}) \dd t + \sigma\dd W_{t}, \qquad \overline{X}_0=x_0 \, . \end{align} In order to distinguish the current SDE from the previous particle approximation we introduce a so-called copy space (see for example \cite{BuckdahnEtAl2017}) $(\tilde{\Omega}, \tilde{\cF} , (\tilde{\cF}_{t})_{t \ge 0}, \tilde{\bP})$ (satisfying the usual conditions, where $\tilde{\cF}_{t}$ is the augmented filtration of the $N$-dimensional Brownian motion). The $N$-particle system used to approximate this measure is then defined on this space, hence \eqref{Eq:Particle Approx General MV-SDE} is defined on the product space $(\Omega, \cF, \bP) \otimes (\tilde{\Omega}, \tilde{\cF} , \tilde{\bP})$. Our aim is now to minimize over $h\in \mathbb H_T$ the variance conditional on the trajectory of $\mu^N$: $$ \bE_{\bP \otimes \tilde{\bP}}\big[\, G(\overline X_{T})^{2} \mathcal E_T^{-1}\big|\tilde {\mathcal F}_{T}\big], \quad \dd \mathcal E_t = \dot h_t \mathcal E_t \dd W_{t}^{\bP},\quad \mathcal E_0 = 1, $$ and we make use of small noise asymptotics in order to write this variance in an ``LDP''-tractable form; hence we define, for $h\in\bH_T$, \begin{align} \label{Eq:Small Noise for General MV-SDE} L(h; \mu^{N}) := \limsup_{\epsilon \rightarrow 0} \epsilon \log \bE_{\bP\otimes \tilde{\bP}} \left[ \exp\left( \frac{1}{\epsilon}\Big(2 \log(\overline{G}(\sqrt{\epsilon} W)) - \int_{0}^{T} \sqrt{\epsilon} \dot{h}_{t} \dd W_{t} + \frac{1}{2} \int_{0}^{T} \dot{h}_{t}^{2} \dd t \Big) \right) \Big | \tilde{\cF}_{T} \right] \, , \end{align} where $\overline{G}(W) := G(\overline{X}(W))$.
One should keep in mind that $\overline{G}$ also depends on $\mu^{N}$; however, we suppress this dependence for ease of presentation. \begin{remark} In \eqref{Eq:Small Noise for General MV-SDE} we have a conditional expectation, thus $L(h; \mu^{N})$ is technically a random variable on $\tilde{\Omega}$. This is not typically the case when using Varadhan's lemma; however, because this random variable is independent of the Brownian motion and $\overline{G}$ is still $\tilde{\bP}$-a.s. continuous w.r.t. the Brownian motion (Section \ref{Sec:Proofs Decoupled}), upon checking the moment condition we are still able to use Varadhan's lemma, $\tilde{\bP}$-a.s. \end{remark} \begin{theorem} \label{Thm:Decoupled} Let Assumptions \ref{Ass:General Payoff Growth} and \ref{Ass:Drift Monotone Assumption} hold and fix $\tilde{\omega} \in \tilde{\Omega}$ (and thus $\mu^{N}$). Furthermore assume that there exists $u \in \bH_{T}$ such that $\overline{G}(u) >0$. Then the following statements hold: \begin{itemize} \item[i.] Let $h\in\bH_T$ such that $\dot{h}$ is of finite variation. Then Varadhan's lemma holds for the small noise asymptotics, namely we can rewrite \eqref{Eq:Small Noise for General MV-SDE} as, \begin{align} \label{Eq:Decoupled L in sup form} L(h; \mu^{N})= \sup_{u \in \bH_{T}} \left\{ 2 \log (\overline{G}(u)) - \int_{0}^{T} \dot h_{t} \dot u_{t} \dd t + \frac{1}{2} \int_{0}^{T} \dot h_{t}^{2} \dd t - \frac{1}{2} \int_{0}^{T} \dot u_{t}^{2} \dd t \right\} ~ ~ ~ \tilde{\bP}\text{-a.s.} \, . \end{align} \item[ii.] There exists an $h^* \in \bH_{T}$ which minimizes \eqref{Eq:Decoupled L in sup form}. \item[iii.] Consider a simplified optimization problem \begin{align} \label{Eq:General Calculus of Variation problem} \sup_{u \in \bH_{T}} \left\{ 2 \log(\overline{G}(u)) - \int_{0}^{T} \dot u_t^2 \dd t \right\} \, . \end{align} There exists a maximizer $h^{**}$ for this problem.
If \begin{align} \label{Eq:General Asymptotic Optimality check} L({h}^{**} ; \mu^{N})=2 \log(\overline{G}({h}^{**})) - \int_{0}^{T} (\dot {h}^{**}_{t} )^{2} \dd t \, , \end{align} then $h^{**}$ defines an asymptotically optimal measure change and is the unique maximizer of \eqref{Eq:General Calculus of Variation problem}. \end{itemize} \end{theorem} All of these results hold $\tilde{\bP}$-a.s., since the particle system yields a random measure from $\tilde{\Omega}$. The proof of this theorem requires several auxiliary results, which we defer to Section \ref{Sec:Proofs Decoupled}. One should also note that the requirement that $\overline{G}(u)>0$ for some $u$ is not restrictive; it is purely there for technical reasons, since one cannot have a maximiser if $\log(\overline{G}(u))= - \infty$ for all $u \in \bH_{T}$. The assumption that $\dot h$ has finite variation is necessary to establish the continuity of the functional in Varadhan's lemma. \begin{remark} [Concavity of $\log(\overline{G})$ and asymptotic optimality] Consider the problem of minimizing \eqref{Eq:Decoupled L in sup form} and assume that one can interchange the inf and the sup. Then, \begin{align*} \inf_{h \in \bH_{T}} L(h; \mu^{N})& = \sup_{u \in \bH_{T}}\inf_{h \in \bH_{T}} \left\{ 2 \log (\overline{G}(u)) - \int_{0}^{T} \dot h_{t} \dot u_{t} \dd t + \frac{1}{2} \int_{0}^{T} \dot h_{t}^{2} \dd t - \frac{1}{2} \int_{0}^{T} \dot u_{t}^{2} \dd t \right\} \\ &= \sup_{u \in \bH_{T}} \left\{ 2 \log (\overline{G}(u)) - \int_{0}^{T} \dot u_{t}^{2} \dd t \right\} \end{align*} because the inner problem is solved by $h = u$. Therefore, a sufficient condition for an asymptotically optimal measure change of type \eqref{Eq:Deterministic Measure Change} is the exchangeability of the inf and sup above. Since $L$ is a convex function of $h$, and the integral terms in \eqref{Eq:Decoupled L in sup form} are concave in $u$, a sufficient condition for such exchangeability is that $\log(\overline{G})$ is concave.
Indeed, in the case of convex-concave functions we can invoke the minimax principle to swap the infimum and supremum, see \cite{EkelandTemam1999}*{pg. 175} for example. In \cite{GuasoniRobertson2008}, the process $X$ was a geometric Brownian motion and the authors were able to explicitly link the concavity of $\log(\overline{G})$ with the properties of the function $G$. Here the dependence of $\overline{G}$ on the Brownian motion is more complex, and it appears to be difficult to check concavity. Hence, in general one has to check numerically whether \eqref{Eq:General Asymptotic Optimality check} holds. However, even if \eqref{Eq:General Asymptotic Optimality check} fails, one can still use $h^{**}$ to construct a candidate importance sampling measure if this is justified by superior numerical performance. \end{remark} \subsection{The complete measure change algorithm} Here we focus on the algorithm discussed in Section \ref{Sec:Complete Measure Change}. Recall that we are interested in evaluating $\bE_{\bP}[G(X)]$. We now change the measure to $\bQ$ and calculate the variance, \begin{align*} \text{Var}_{\bQ} \Big[ G(X) \frac{\dd \bP}{\dd \bQ} \Big] = \bE_{\bP}\Big[ G(X)^{2} \frac{\dd \bP}{\dd \bQ} \Big] - \bE_{\bP}\Big[ G(X) \Big]^{2} \, . \end{align*} Minimising the variance is equivalent to minimising the first term on the RHS. As a first step towards constructing a tractable proxy for this variance we consider a particle approximation of $X$: \begin{align} \label{Eq:Form of MV-SDE} \dd X_{t}^{i,N} &= b\left( t, X_{t}^{i,N}, \frac{1}{N} \sum_{j=1}^{N} \delta_{X_{t}^{j,N}} \right) \dd t + \sigma \dd W_{t}^{i, \bP} \, , \quad X_{0}^{i,N} = x_0 \, , \\ \dd \mathcal E^{i}_t & = \dot h_t \mathcal E^{i}_t \dd W^{i, \bP}_t,\quad \mathcal E^{i}_0 = 1, \end{align} where $W^{i,\bP}$ denotes the driving $\bP$-Brownian motion of particle $i$, and all the $W^{i,\bP}$ are independent of each other.
We approximate $\bE_{\bP}[G^{2}(X) (\cE_T)^{-1}]$ with $\mathbb E_\bP[ G^2(X^{i,N})(\mathcal E_T^{i,N})^{-1}]$. Since $\mathcal E^{i}=\mathcal E^{i,N}$ (due to the absence of cross dependency), one can equivalently minimize \begin{align} \ \mathbb E_\bP\big[\ G^2(X^{i,N})(\mathcal E_T^{i})^{-1}\ \big] \, ,\label{prelimit} \ \textrm{ over all $h\in \mathbb H_T$.} \end{align} In order to use the LDP theory to minimize \eqref{prelimit}, we define $\tilde{G}$ as the functional dependent on the underlying $\bP$-Brownian motions, i.e., for all $i \in \{1, \dots, N\}$, $\tilde{G}_{i} :\bW^N_T\mapsto \bR$, where, $\tilde{G}_{i}(W^{1}, \dots, W^{N}):= G(X^{i,N}(W^{1}, \dots, W^{N}))$. The corresponding small noise asymptotics takes the following form: \begin{align} \label{Eq:Small Noise for Particles} \notag \bar{L}(h):= \limsup_{\epsilon \rightarrow 0} \epsilon \log \Bigg( \bE_{\bP} \Bigg[ \exp \Bigg( \frac{1}{\epsilon} \Big( 2 \log & \big(\tilde{G}_i\big(\sqrt{\epsilon}W^{1}, \dots, \sqrt{\epsilon}W^{N}\big)\big) \\ &- \int_{0}^{T} \sqrt{\epsilon} \dot h_{t} \dd W^i_{t} + \frac{1}{2} \int_{0}^{T} (\dot h_{t})^{2} \dd t \Big) \Bigg) \Bigg] \Bigg) \, , \quad h \in \bH_{T} \end{align} where we remark that the value of this expression does not depend on the choice of $i$. We then obtain the following result for $\bar{L}$ (compare with Theorem \ref{Thm:Decoupled}). \begin{theorem} \label{Thm:Complete Measure Change} Fix $N\in \bN$ and let Assumptions \ref{Ass:General Payoff Growth} and \ref{Ass:Drift Lipschitz Assumption} hold. Assume that there exists $(u^{1}, \hat{u}) \in \bH_{T}^{2}$ such that $\tilde{G}_{1}(u^{1}, \hat{u}, \dots, \hat{u})>0$. Then the following statements hold \begin{itemize} \item[i.] Let $h\in \bH_T$ such that $\dot h$ is of finite variation. 
Then Varadhan's lemma holds for the small noise asymptotics and we can rewrite \eqref{Eq:Small Noise for Particles} as \begin{align} \label{Eq:Complete L in sup form} \bar{L}(h)= \sup_{u \in \bH^N_{T}} \left\{ 2 \log (\tilde{G}_{1}(u^{1}, \dots, u^{N})) - \int_{0}^{T} \dot h_{t} \dot u_{t}^{1} \dd t + \frac{1}{2} \int_{0}^{T} (\dot h_{t})^{2} \dd t - \frac{1}{2} \int_{0}^{T} |\dot u_{t}|^{2} \dd t \right\} \, , \end{align} \item[ii.] There exists an $h^* \in \bH_{T}$ which minimizes \eqref{Eq:Complete L in sup form}. \item[iii.] Consider the simplified optimization problem \begin{align} \label{Eq:Complete L minimiser} \sup_{u^{1} \in \bH_{T}, \hat{u} \in \bH_{T}} \left\{ 2 \log(\tilde{G}_{1}(u^{1}, \hat{u}, \dots, \hat{u}) ) - \int_{0}^{T} (\dot u_{t}^{1})^{2} \dd t - \frac{N-1}{2} \int_{0}^{T} \dot{\hat{u}}_{t}^{2} \dd t \right\} \, . \end{align} There exists a maximizer $(h^{**},u^{**})$ for this problem. If \begin{align} \label{Eq:Complete Asymptotic Optimality check} \bar{L}({h}^{**}) = 2 \log \big(\tilde{G}_{1}({h}^{**}, {u}^{**}, \dots, {u}^{**}) \big) - \int_{0}^{T} (\dot {h}^{**}_{t})^{2} \dd t - \frac{N-1}{2} \int_{0}^{T} (\dot{u}^{**}_{t})^{2} \dd t \, , \end{align} then $h^{**}$ is asymptotically optimal and is the unique maximizer of \eqref{Eq:Complete L minimiser}, where we have taken $i=1$ without loss of generality. \end{itemize} \end{theorem} The proof of this theorem is deferred to Section \ref{Sec:Proofs Complete Measure Change}. Similarly to the previous discussion, if $\log(\tilde{G}_{1})$ is a concave function in $u \in \bH_{T}^{N}$, then we know that \eqref{Eq:Complete Asymptotic Optimality check} holds (this is discussed at the end of Section \ref{Sec:Proofs Complete Measure Change}). However, in general \eqref{Eq:Complete Asymptotic Optimality check} is difficult to check: even with $h^{*}$ fixed, evaluating $\bar{L}$ is still an $N$-dimensional optimisation problem, since \eqref{Eq:Complete L in sup form} is a supremum over $u \in \bH_{T}^{N}$.
There is also a difficulty in quantifying how the measure change affects the propagation of chaos error i.e. a measure change that is good for the statistical error may be damaging to the propagation of chaos error. We discuss this point further in Section \ref{sec:Numerics}. \subsection{Computing the optimal measure change} The exponential form of the SDEs (the log-normal class) considered in \cite{GuasoniRobertson2008} and \cite{Robertson2010} allows the maximisation to be written in the form of an Euler-Lagrange equation (calculus of variations approach). Due to the more general coefficients here, we obtain a more complex interaction between the Brownian motion and the value of the SDE. Consequently we need to look towards the more general theory of optimal control to calculate the change of measure\footnote{Even though we are initially dealing with SDEs, in the large deviations asymptotics, the trajectory of the Brownian motion becomes a deterministic control.}. Deterministic optimal control is a large subject area and one can consult \cite{FlemingRishel1975} or \cite{YongZhou1999} for example. We recall that we are working under the $\bP$-measure. One of the most important results from optimal control is Pontryagin's maximum principle. Roughly speaking, Pontryagin's maximum principle gives a set of differential equations that the optimal control must satisfy. Let us recall the main ideas following \cite{YongZhou1999}*{p.102}. 
We start with the controlled dynamical system $x(t)$, which takes the following form: \begin{align} \label{Eq:General Control System} \begin{cases} \dot{x}(t)=b(t,x(t),u(t)), \quad \text{a.e.} ~ t \in [0,T] \\ x(0)=x_{0} \, , \end{cases} \end{align} where $u$ is our ``control'', taking values in a metric space $(U,d)$. Associated to this we have a \emph{cost functional} \begin{align} \label{Eq:General Cost Functional} J(u(\cdot)) = \int_{0}^{T} f(t, x(t), u(t)) \dd t + h(x(T)) \, , \end{align} where $f$ is typically referred to as the \emph{running cost} and $h$ as the \emph{terminal cost}. We then have the following assumption. \begin{assumption} \label{Ass:Optimal Control Assumption} For ease of writing we denote by $\varphi(t,x,u)$ any of the functions $b(t,x,u), ~ f(t,x,u)$ or $h(x)$. We then assume the following. \begin{itemize} \item $(U,d)$ is a separable metric space and $T>0$. \item The maps $b:[0,T] \times \bR^{n} \times U \rightarrow \bR^{n}$, $f:[0,T] \times \bR^{n} \times U \rightarrow \bR$ and $h: \bR^{n} \rightarrow \bR$ are measurable, and there exist a constant $L>0$ and a modulus of continuity $\eta : [0, \infty) \rightarrow [0, \infty)$ such that, \begin{align*} \begin{cases} | \varphi(t,x,u)-\varphi(t, \hat{x}, \hat{u})| \le L|x- \hat{x}| + \eta(d(u, \hat{u})) \quad &\forall t \in [0,T], ~ x,\hat{x} \in \bR^{n}, ~ u,\hat{u} \in U \, , \\ |\varphi(t,0,u)| \le L & \forall (t,u) \in [0,T] \times U \, . \end{cases} \end{align*} \item The maps $b, ~ f$ and $h$ are $C^{1}$ in $x$ and there exists a modulus of continuity $\eta : [0, \infty) \rightarrow [0, \infty)$ such that, \begin{align*} | \partial_{x} \varphi(t,x,u) - \partial_{x} \varphi(t, \hat{x}, \hat{u})| \le \eta \Big(|x - \hat{x}| + d(u, \hat{u}) \Big) \quad \forall t \in [0,T], ~ x,\hat{x} \in \bR^{n}, ~ u,\hat{u} \in U \, .
\end{align*} \end{itemize} \end{assumption} As discussed in \cite{YongZhou1999}*{p.102}, Assumption \ref{Ass:Optimal Control Assumption} implies that \eqref{Eq:General Control System} admits a unique solution and that \eqref{Eq:General Cost Functional} is well defined. Let us denote by $\cU [0,T] := \{ u(\cdot): [0,T] \rightarrow U ~|~ u \text{ is measurable}\}$; the optimal control problem is then to find $u^{*} \in \cU[0,T]$ that satisfies, \begin{align} \label{Eq:General Optimal Control Problem} J(u^{*}) = \inf_{u \in \cU[0,T]} J(u) \, . \end{align} Such a $u^{*}$ is referred to as an \emph{optimal control}, and the corresponding $x^{*}(\cdot):=x(\cdot ; u^{*})$ as the \emph{optimal state trajectory}. We can then state the deterministic version of Pontryagin's maximum principle, following \cite{YongZhou1999}*{p.103}. \begin{theorem} \label{Thm:Pontryagin} [Pontryagin's Maximum Principle] Let Assumption \ref{Ass:Optimal Control Assumption} hold and let $(x^{*}, u^{*})$ be the optimal pair for \eqref{Eq:General Optimal Control Problem}. Then, there exists a function $p: [0,T] \rightarrow \bR^{n}$ satisfying the following, \begin{align} \label{Eq:General Adjoint Equation} \begin{cases} \dot{p}(t) = - \partial_{x} b(t, x^{*}(t), u^{*}(t))^{\intercal} p(t) + \partial_{x} f(t, x^{*}(t), u^{*}(t)), \quad \text{a.e.} ~ t \in [0,T] \\ p(T)= - \partial_{x} h(x^{*}(T)) \, , \end{cases} \end{align} and \begin{align*} H(t, x^{*}(t), u^{*}(t), p(t)) = \max_{u \in U} \{ H(t, x^{*}(t), u, p(t)) \} \quad \text{a.e.} ~ t \in [0,T] \, , \end{align*} where $H(t,x,u,p):= \langle p, b(t,x,u)\rangle - f(t,x,u)$ for any $(t,x,u,p) \in [0,T] \times \bR^{n} \times U \times \bR^{n}$. \end{theorem} Typically $p$ is referred to as the \emph{adjoint function}, \eqref{Eq:General Adjoint Equation} as the \emph{adjoint equation}, and the function $H$ as the \emph{Hamiltonian}. \begin{remark} [An alternative approach] The maximum principle is not the only way to solve this problem.
An alternative is to solve the so-called Hamilton-Jacobi-Bellman (HJB) equation. This approach is typically more difficult since the HJB equation is a \emph{partial differential equation}. \end{remark} \textbf{Maximum principle for Theorems \ref{Thm:Decoupled} and \ref{Thm:Complete Measure Change}.} The maximum principle allows us to translate the simplified optimization problems of Theorems \ref{Thm:Decoupled} and \ref{Thm:Complete Measure Change} into boundary value problems for ODEs. One can observe that we are actually interested in $\dot{u}$ rather than $u$; that is, in the decoupled case we can write the controlled dynamics as \begin{align*} X_{t}(\dot{u}) = x_{0} + \int_{0}^{t} b(s, X_{s}(\dot{u}), \mu_{s}^{N}) \dd s + \int_{0}^{t} \sigma \dot{u}_{s} \dd s \, . \end{align*} The theory above is stated for an infimum while we are interested in a supremum; we therefore use the fact that $\sup \{f\} = - \inf \{-f\}$. $\triangleright$ \emph{For the decoupling algorithm} Theorem \ref{Thm:Pontryagin} yields the following equations for the adjoint function and the trajectory under the optimal control $\dot{u}^{*}$ (for a given $\mu^{N}$), \begin{align} \label{Eq:Decoupled Maximum Principle} \text{(Decoupled)} \quad \begin{cases} \dot{p}_{t} = - \partial_{x} b(t, X_{t}(\dot{u}^{*}), \mu_{t}^{N}) p_{t} \, , \qquad & p_{T}= \dfrac{2 G'(X(\dot{u}^{*}))}{G(X(\dot{u}^{*}))} \, , \\ \dot{X}_{t} = b(t, X_{t}, \mu_{t}^{N}) + \frac{1}{2} \sigma^{2} p_{t} \, , & X_{0}=x_{0} \, , \end{cases} \end{align} that is, the optimal control is related to $p$ through $\dot{u}^{*}_{t}= \frac{1}{2} \sigma p_{t}$. $\triangleright$ \emph{For the complete measure change algorithm} the argument is similar to the one above, but here we also need to deal with the measure term. Noting that we have two controls to optimise over (recall Theorem \ref{Thm:Complete Measure Change}), we obtain more complex expressions.
Theorem \ref{Thm:Pontryagin} yields the following system of ODEs, \begin{align} \label{Eq:Complete Maximum Principle} \text{(Complete)} ~ \begin{cases} \dot{p}_{t}^{1} = - \partial_{X^{1}} b(t, X_{t}^{1}, \frac{1}{N}\delta_{X_{t}^{1}} + \frac{N-1}{N}\delta_{\hat{X}_{t}} ) p_{t}^{1} - \partial_{X^{1}} b(t, \hat{X}_{t}, \frac{1}{N}\delta_{X_{t}^{1}} + \frac{N-1}{N}\delta_{\hat{X}_{t}} ) p_{t}^{2} \, , ~ & p_{T}^{1}= \frac{2 G'(X^{1})}{G(X^{1})} \, , \\ \dot{p}_{t}^{2} = - \partial_{\hat{X}} b(t, X_{t}^{1}, \frac{1}{N}\delta_{X_{t}^{1}} + \frac{N-1}{N}\delta_{\hat{X}_{t}} ) p_{t}^{1} - \partial_{\hat{X}} b(t, \hat{X}_{t}, \frac{1}{N}\delta_{X_{t}^{1}} + \frac{N-1}{N}\delta_{\hat{X}_{t}} ) p_{t}^{2} \, , ~ & p_{T}^{2}= 0 \, , \\ \dot{X}_{t}^{1} = b(t, X_{t}^{1}, \frac{1}{N}\delta_{X_{t}^{1}} + \frac{N-1}{N}\delta_{\hat{X}_{t}}) + \frac{1}{2} \sigma^{2} p_{t}^{1} \, , & X_{0}^{1}=x_{0} \, , \\ \dot{\hat{X}}_{t} = b(t, \hat{X}_{t}, \frac{1}{N}\delta_{X_{t}^{1}} + \frac{N-1}{N}\delta_{\hat{X}_{t}}) + \frac{1}{2(N-1)} \sigma^{2} p_{t}^{2} \, , & \hat{X}_{0}=x_{0} \, , \end{cases} \end{align} similarly, we obtain $\dot{u}^{*}_{t}= \frac{1}{2} \sigma p_{t}^{1}$ and $\dot{\hat{u}}^{*}_{t}= \frac{1}{2(N-1)} \sigma p_{t}^{2}$ as the optimal controls. From Theorem \ref{Thm:Complete Measure Change} we obtain the measure change as $\dot{h} = \dot{u}^{1}$. The difference between \eqref{Eq:Decoupled Maximum Principle} and \eqref{Eq:Complete Maximum Principle} comes from the fact that for the complete measure change we have a higher dimensional problem. That is, we have two controls and two ``SDEs'', and thus more terms to optimise. Recall that, when one wishes to assess asymptotic optimality, \eqref{Eq:Complete L in sup form} is still an $N$-dimensional problem. \begin{remark} [Accuracy of Change of Measure] In \cite{GuasoniRobertson2008} the authors were able to obtain explicit solutions in certain situations, but here, due to the increase in complexity, we expect this to rarely be the case.
We therefore need to set reasonable tolerances in checking whether asymptotic optimality holds. \end{remark} \section{Example: Kuramoto model} \label{sec:Numerics} The Kuramoto model is a special case of a so-called system of coupled oscillators. Such models are of particular interest in physics and are used to study many different phenomena, such as active rotator systems, charge density waves and complex biological systems; see \cite{KosturEtAl2002} for further details. The corresponding SDE for the Kuramoto model is \begin{align*} \dd X_{t} = \left( K \int_{\bR}\sin (y - X_{t}) \mu_{t, \bP}^{X}(\dd y) -\sin(X_{t}) \right) \dd t + \sigma \dd W_{t}^\bP \, , ~ ~ ~ t \in [0,T] , ~ ~ X_{0}=x_{0} \, , \end{align*} where $K$ is the coupling strength and $\sigma$ has the physical interpretation of the temperature in the system. We consider a terminal condition $G(x)= a \exp (bx)$ (satisfying Assumption \ref{Ass:General Payoff Growth}). Our goal is to obtain the asymptotically optimal change of measure that improves the estimation of $\bE_\bP[ G(\bar{X}_{T})]$. One can see that such a model easily satisfies the assumptions required in the paper. We should point out that we do not have the concavity required for asymptotic optimality to hold automatically, therefore we need to check this condition. By our previous discussion, to apply the decoupling algorithm here we would generate a set of $N$ weakly interacting SDEs, which we denote by $Y^{i,N}$, and approximate the original SDE by, \begin{align*} \dd \bar{X}_{t} = \left( \frac{K}{N} \sum_{i=1}^{N} \sin (Y_{t}^{i,N} - \bar{X}_{t}) -\sin(\bar{X}_{t}) \right) \dd t + \sigma \dd W_{t}^\bP \, , ~ ~ ~ t \in [0,T] , ~ ~ \bar{X}_{0}=x_{0} \, . \end{align*} Let us now apply the theory from the previous section to calculate the optimal change of measure.
Our optimal control argument implies solving, $\tilde \bP$-a.s., \begin{align*} \text{(Decoupled)} \quad \begin{cases} \dot{p}_{t} = \left( \frac{K}{N} \sum_{i=1}^{N} \cos(Y_{t}^{i,N} - X_{t}) + \cos(X_{t}) \right) p_{t} \, , \qquad & p_{T}= 2b \, , \\ \dot{X}_{t} = \left( \frac{K}{N} \sum_{i=1}^{N} \sin (Y_{t}^{i,N} - X_{t}) -\sin(X_{t}) \right) + \frac{1}{2} \sigma^{2} p_{t} \, , & X_{0}=x_{0} \, . \end{cases} \end{align*} The complete measure change algorithm yields the following system, \begin{align*} \text{(Complete)} ~ \begin{cases} \dot{p}_{t}^{1} = \big( K\frac{N-1}{N}\cos(\hat{X}_{t} - X_{t}^{1}) + \cos(X_{t}^{1}) \big) p_{t}^{1} - \frac{K}{N}\cos(X_{t}^{1}- \hat{X}_{t}) p_{t}^{2} \, , ~ & p_{T}^{1}= 2b \, , \\ \dot{p}_{t}^{2} = - K \frac{N-1}{N} \cos(\hat{X}_{t} - X_{t}^{1}) p_{t}^{1} + \big( \frac{K}{N} \cos(X_{t}^{1}- \hat{X}_{t}) + \cos(\hat{X}_{t}) \big) p_{t}^{2} \, , ~ & p_{T}^{2}= 0 \, , \\ \dot{X}_{t}^{1} = K \frac{N-1}{N} \sin (\hat{X}_{t} - X_{t}^{1}) - \sin(X_{t}^{1}) + \frac{1}{2} \sigma^{2} p_{t}^{1} \, , & X_{0}^{1}=x_{0} \, , \\ \dot{\hat{X}}_{t} = \frac{K}{N} \sin(X_{t}^{1} - \hat{X}_{t}) - \sin( \hat{X}_{t}) + \frac{1}{2(N-1)} \sigma^{2} p_{t}^{2} \, , & \hat{X}_{0}=x_{0} \, . \end{cases} \end{align*} To show the numerical advantages one can achieve by using importance sampling, we consider how the time taken and the estimate given by the algorithms change with the number of particles $N$. For this example we use $T=1$, $\bar{X}_{0}=0$, $K=1$, $\sigma=0.3$, $a=0.5$ and $b=10$. For the numerics we use an Euler scheme with step size $\Delta t=0.02$. The systems of equations are solved using MATLAB's \verb|bvp4c| function. For the importance sampling, we use the particle positions from the first Monte Carlo simulation as the empirical law.
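The decoupled boundary value problem above can equally be solved with standard open-source tools. The following Python sketch is our illustration (not the implementation behind the reported results, which used \verb|bvp4c|): it exploits the fact that the empirical law only enters through the statistics $S(t)=\frac1N\sum_i \sin(Y^{i,N}_t)$ and $C(t)=\frac1N\sum_i \cos(Y^{i,N}_t)$, freezes these from a first Euler run, and then calls SciPy's \texttt{solve\_bvp}.

```python
import numpy as np
from scipy.integrate import solve_bvp

T, x0, K, sigma, b_coef = 1.0, 0.0, 1.0, 0.3, 10.0   # parameters of the example
N, M = 1000, 50
rng = np.random.default_rng(0)

# First run: Euler scheme for the interacting particle system under P.
# Record S(t) = mean sin(Y_t) and C(t) = mean cos(Y_t) on the time grid.
dt, tgrid = T / M, np.linspace(0.0, T, M + 1)
Y = np.full(N, x0)
S, C = np.empty(M + 1), np.empty(M + 1)
S[0], C[0] = np.sin(Y).mean(), np.cos(Y).mean()
for m in range(M):
    drift = K * (S[m] * np.cos(Y) - C[m] * np.sin(Y)) - np.sin(Y)
    Y = Y + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
    S[m + 1], C[m + 1] = np.sin(Y).mean(), np.cos(Y).mean()

# Second step: the (Decoupled) Pontryagin system as a two-point BVP,
# using (1/N) sum_i sin(Y^i - X) = S cos(X) - C sin(X), and similarly for cos.
def ode(t, y):
    X, p = y
    St, Ct = np.interp(t, tgrid, S), np.interp(t, tgrid, C)
    mean_sin = St * np.cos(X) - Ct * np.sin(X)
    mean_cos = Ct * np.cos(X) + St * np.sin(X)
    dX = K * mean_sin - np.sin(X) + 0.5 * sigma ** 2 * p
    dp = (K * mean_cos + np.cos(X)) * p
    return np.vstack([dX, dp])

def bc(ya, yb):
    return np.array([ya[0] - x0, yb[1] - 2.0 * b_coef])  # X_0 = x0, p_T = 2b

tmesh = np.linspace(0.0, T, 21)
guess = np.vstack([np.zeros(tmesh.size), np.full(tmesh.size, 2.0 * b_coef)])
sol = solve_bvp(ode, bc, tmesh, guess)
hdot_opt = 0.5 * sigma * sol.sol(tgrid)[1]   # optimal shift  h'_t = sigma p_t / 2
```

The resulting array \texttt{hdot\_opt} can then be used as the deterministic shift in the importance-sampled second run.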
\begin{table}[!ht] \centering \small \begin{tabular}{ c | c | c | c | c | c | c | c | c | c} ~ & \multicolumn{3}{c}{Monte Carlo} \vline & \multicolumn{3}{c}{Decoupled} \vline & \multicolumn{3}{c}{Complete} \\ \hline N & Payoff & Error & Time & Payoff & Error & Time & Payoff & Error & Time \\ \hline $1 \times 10^3$ & 1.5066 & 0.1490 & 3 & 1.5729 & 0.0028 & 9 & 1.5419 & 0.0024 & 3 \\ $5\times 10^3$ & 1.5895 & 0.0626 & 27 & 1.5840 & 0.0013 & 54 & 1.5710 & 0.0013 & 28 \\ $1 \times 10^4$ & 1.6813 & 0.0693 & 76 & 1.5728 & 0.0009 & 153 & 1.5860 & 0.0009 & 75 \\ $5\times 10^4$ & 1.5899 & 0.0200 & 1 025 & 1.5820 & 0.0004 & 2 052 & 1.5738 & 0.0004 & 1 062 \\ $1 \times 10^5$ & 1.5807 & 0.0176 & 3 433 & 1.5731 & 0.0003 & 6 935 & 1.5882 & 0.0003 & 3 644 \\ \hline \end{tabular} \caption{Results from standard Monte Carlo and the importance sampling algorithms. Time is measured in seconds and the error refers to the square root of the variance.} \label{Table:Results from Algorithms} \end{table} We recall that the decoupling importance sampling requires two runs; here we use the same $N$ for both of these. The first observation one can make is how the time scales when increasing the number of particles, namely one can clearly observe the $N^{2}$ complexity. As expected, the decoupling algorithm takes approximately twice as long as standard Monte Carlo (computing the change of measure is not time consuming). Following this point, we also observe that the complete measure change has roughly the same computational complexity as standard Monte Carlo. The other key point is the reduction in variance (standard error) one obtains with importance sampling. For this example we see that both importance sampling schemes reduce the variance by several orders of magnitude. Further, if one is interested in the decoupling algorithm, it may be more efficient to use fewer simulations in the second, importance sampled run.
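To make the two-run structure of the decoupling algorithm concrete, the importance-sampled second run for the Kuramoto example can be sketched as follows. The frozen law statistics \texttt{S}, \texttt{C} and the shift \texttt{hdot} are assumed to come from a first particle run and from the associated boundary value problem; here they are replaced by hypothetical placeholders purely so that the snippet is self-contained.

```python
import numpy as np

# Second run of the decoupling algorithm: simulate N2 independent copies of
# the decoupled SDE under the shifted measure Q and reweight by dP/dQ.
T, x0, K, sigma, a_coef, b_coef = 1.0, 0.0, 1.0, 0.3, 0.5, 10.0
M, N2 = 50, 2000                       # N2 may be smaller than the first run
dt = T / M
S, C = np.zeros(M), np.ones(M)         # placeholder law: mass concentrated near 0
hdot = np.full(M, 1.0)                 # placeholder deterministic shift h'
rng = np.random.default_rng(2)

X = np.full(N2, x0)
logw = np.zeros(N2)                    # log of dP/dQ along each path
for m in range(M):
    dWQ = np.sqrt(dt) * rng.standard_normal(N2)
    # under Q the decoupled SDE gains the extra drift  sigma * hdot
    drift = K * (S[m] * np.cos(X) - C[m] * np.sin(X)) - np.sin(X)
    X = X + (drift + sigma * hdot[m]) * dt + sigma * dWQ
    logw += -hdot[m] * dWQ - 0.5 * hdot[m] ** 2 * dt

vals = a_coef * np.exp(b_coef * X) * np.exp(logw)   # G(X_T) * dP/dQ
est = vals.mean()
std_err = vals.std(ddof=1) / np.sqrt(N2)
```

By Girsanov's theorem the reweighted payoffs are (conditionally on the frozen law) unbiased for the decoupled expectation, whatever deterministic \texttt{hdot} is supplied; only the variance depends on the choice of shift.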
Finally, we checked asymptotic optimality (for the decoupling) numerically and there is only a small difference between the two sides in \eqref{Eq:General Asymptotic Optimality check}; we therefore believe we are close to optimal. Table \ref{Table:Results from Algorithms} does show that the use of importance sampling in MV-SDEs is both viable and worthwhile. $\triangleright$ \emph{Estimating the propagation of chaos error.} As was mentioned in the introduction, theoretically the statistical error and the propagation of chaos error converge to zero at the same rate. We now use this example to show that the statistical error dominates. Since the Euler scheme is the same in all examples, we can neglect the bias it causes. We can then decompose the error as \begin{align*} \frac{1}{N} \sum_{i=1}^{N} G(\bar{X}^{i,N}) - \bE_{\bP}[G(\bar{X}^{1})] = \frac{1}{N} \sum_{i=1}^{N} G(\bar{X}^{i,N}) - \bE_{\bP}[G(\bar{X}^{1,N})] + \bE_{\bP}[G(\bar{X}^{1,N})] - \bE_{\bP}[G(\bar{X}^{1})] \, . \end{align*} The first difference on the RHS is the statistical error, and the second one is the propagation of chaos error. It is then clear that if one considers $M$ realisations of $\frac{1}{N} \sum_{i=1}^{N} G(\bar{X}^{i,N})$ and takes the average, this approximates $\bE_{\bP}[G(\bar{X}^{1,N})]$ but does not change the propagation of chaos error. Hence for large $M$ the error reduces to the propagation of chaos error. To show that the propagation of chaos error is negligible compared to the statistical error here, we repeat the simulation for $N=5 \times 10^{3}$ particles, $M=10^{3}$ times, and we obtain an average terminal value of $1.5772$ (with an average standard error of $0.06533$, which agrees with the result in Table \ref{Table:Results from Algorithms}).
Comparing this to the $10^{5}$ decoupled entry (which has almost no statistical error) in Table \ref{Table:Results from Algorithms}, we can conclude that the propagation of chaos error is at least an order of magnitude smaller than the statistical error. \subsubsection*{Another example: a terminal condition function with steep slope} Let us consider the terminal condition $G(x)= \big(\tanh(a(x-b))+1 \big)/2$, for $a$ large ($G$ can be understood as a mollified indicator function). Then $\bE_\bP[G(X_{T})] \approx \bP(X_{T} \ge b)$. We take the same set-up as before but with $a=15$ and $b=1$, and note that the terminal condition for the adjoint takes the form, \begin{align*} p_{T} = 2a \Big(1- \tanh \big(a \big( X_{T}(\dot{u}^{*}) -b \big) \big) \Big) \, . \end{align*} We obtain the following table (we omit the times here since they are similar). \begin{table}[!ht] \centering \small \begin{tabular}{ c | c | c | c | c | c | c} ~ & \multicolumn{2}{c}{Monte Carlo} \vline & \multicolumn{2}{c}{Decoupled} \vline & \multicolumn{2}{c}{Complete} \\ \hline N & Payoff ($10^{-9}$) & Error ($10^{-9}$) & Payoff ($10^{-9}$) & Error ($10^{-9}$) & Payoff ($10^{-9}$) & Error ($10^{-9}$) \\ \hline $1 \times 10^3$ & 1.015 & 0.671 & 3.864 & 0.0250 & 8.456 & 0.101 \\ $5\times 10^3$ & 1.093 & 0.752 & 3.952 & 0.0112 & 5.564 & 0.0185 \\ $1 \times 10^4$ & 8.829 & 7.071 & 3.910 & 0.0077 & 32.956 & 0.1520 \\ $5\times 10^4$ & 1.106 & 0.271 & 3.970 & 0.0035 & 2.101 & 0.0024 \\ $1 \times 10^5$ & 5.158 & 1.990 & 3.901 & 0.0024 & 16.781 & 0.019 \\ \hline \end{tabular} \caption{Results from standard Monte Carlo and the importance sampling algorithms. Note that for ease of presentation the payoff and error are all scaled to be $10^{-9}$ of the values presented.} \label{Table:Results for Tanh} \end{table} The results in Table \ref{Table:Results for Tanh} highlight the key differences between the algorithms. Clearly this is a difficult problem for standard Monte Carlo to solve.
The reason, of course, is that although $G$ is mollified it still changes value quickly over a small interval. For example, $G$ at $0.25$ is approximately $10^{-10}$, but $G(0.5) \approx 10^{-7}$ and $G(0.75)\approx 10^{-4}$, hence a reasonably small change in the value of the SDE can influence the outcome significantly. However, for the standard Monte Carlo run, only $60$ of the $100,000$ paths were above $1/2$ at the terminal time and none were above $3/4$. Hence standard Monte Carlo is not giving much information about the most important region of the function. The importance sampling schemes again give reduced errors; however, this example highlights the differences between them. Although the complete measure change does have a smaller error than standard Monte Carlo, its payoff estimate oscillates, and hence the decoupled algorithm appears to be superior since its payoffs are consistent and the error decreases in the expected manner. $\triangleright$ \emph{Robustness of complete measure change.} The above table shows why one has to consider the effect of the measure change on the propagation of chaos error. The reason this is more prominent here than in the previous example is that the magnitude of the optimal measure change is far larger. Hence, even when we use a large number of particles they may provide a poor approximation of the law; this is where the algorithm lacks robustness. \begin{remark} [Requirement for improved simulation] It is clear from these examples that combining importance sampling with MV-SDEs can provide a major reduction in the required computational cost, namely we can achieve a smaller variance with far fewer simulations (and hence less time). When using decoupling, unfortunately, one has to approximate the law first, which is computationally expensive to do using a particle approximation.
Hence, one may look towards more sophisticated simulation techniques to speed up the first run, for example \cite{GobetPagliarani2018}, or towards multilevel Monte Carlo such as \cite{SzpruchTanTse2017}. However, with the ability to almost eliminate the variance, one should always keep in mind the benefits of importance sampling. \end{remark} \section{Proof of Main Results} \label{Sec:Proofs} We now provide the proofs of our two main theorems. Throughout we work under the $\bP$-measure and we omit it as a superscript in our Brownian motions. Some arguments align with those of \cite{GuasoniRobertson2008} and we quote them where appropriate. \subsection{Proofs for Theorem \ref{Thm:Complete Measure Change}} \label{Sec:Proofs Complete Measure Change} Continuity of the SDE w.r.t. the Brownian motion is key, as it allows one to directly apply the contraction principle, transferring Schilder's LDP for the Brownian motion to an LDP for the solution of the SDE; otherwise difficulties would arise when using Varadhan's lemma. Unlike in the decoupled case, we will stick to Lipschitz coefficients here; the reason is that Lemma \ref{Lem:Bounding Xbar} does not generalise well for SDEs of the type \eqref{Eq:Form of MV-SDE}. \begin{lemma} \label{Lem:Continuity of Interacting SDE} Fix $N \in \bN$, let Assumption \ref{Ass:Drift Lipschitz Assumption} hold and let $X \in \bS^{p}$ for $p \ge 2$ denote the $N$-dimensional strong solution to the SDE system defined in \eqref{Eq:Form of MV-SDE}. Then $X$ is continuous w.r.t. the set of $N$ Brownian motions in the uniform topology. \end{lemma} \begin{proof} To show continuity in the uniform topology we consider two sets of iid Brownian motions, $W_{t}=(W_{t}^{1}, \dots, W_{t}^{N})$ and $\tilde{W}_{t}= (\tilde{W}_{t}^{1}, \dots, \tilde{W}_{t}^{N})$, and show continuity by analyzing the difference between $\tilde{X}_{t}^{i} := X_{t}^{i}(\tilde{W}_{t}^{1}, \dots, \tilde{W}_{t}^{N})$ and $X_{t}^{i}$ with $i \in\{1,\cdots,N\}$.
We have, \begin{align*} | \tilde{X}_{t}^{i,N} - X_{t}^{i,N}| \le \int_{0}^{t} | b(s, \tilde{X}_{s}^{i,N}, \frac{1}{N} \sum_{j=1}^{N} \delta_{\tilde{X}_{s}^{j,N}}) - b(s, X_{s}^{i,N}, \frac{1}{N} \sum_{j=1}^{N} \delta_{X_{s}^{j,N}}) | \dd s + | \int_{0}^{t} \sigma \dd \tilde{W}_{s}^{i} - \int_{0}^{t} \sigma \dd W_{s}^{i} | \, . \end{align*} Considering the time integral first, we can bound it as follows, \begin{align*} &\Big| b(s, \tilde{X}_{s}^{i,N}, \frac{1}{N} \sum_{j=1}^{N} \delta_{\tilde{X}_{s}^{j,N}}) - b(s, X_{s}^{i,N}, \frac{1}{N} \sum_{j=1}^{N} \delta_{X_{s}^{j,N}}) \Big| \le C \Big( | \tilde{X}_{s}^{i,N} - X_{s}^{i,N}| + \Big( \frac{1}{N} \sum_{j=1}^{N} ( \tilde{X}_{s}^{j,N} - X_{s}^{j,N})^{2} \Big)^{\frac12} \Big) \, , \end{align*} where we used the Lipschitz property and the definition of the Wasserstein-$2$ metric for empirical distributions (see \cite{BerntonEtAl2017}, for example). For the second term we note that, \begin{align*} \Big( \frac{1}{N} \sum_{j=1}^{N} ( \tilde{X}_{s}^{j,N} - X_{s}^{j,N})^{2} \Big)^{\frac12} \le \max_{j \in \{1, \dots, N\}} | \tilde{X}_{s}^{j,N} - X_{s}^{j,N}| \le \sum_{j=1}^{N} | \tilde{X}_{s}^{j,N} - X_{s}^{j,N}| \, . \end{align*} Hence we can bound the drift by terms of the form $| \tilde{X}_{s}^{j,N} - X_{s}^{j,N}|$. This yields the following, \begin{align*} | \tilde{X}_{t}^{i,N} - X_{t}^{i,N}| \le & \int_{0}^{t} C \Big( | \tilde{X}_{s}^{i,N} - X_{s}^{i,N}| + \sum_{j=1}^{N} | \tilde{X}_{s}^{j,N} - X_{s}^{j,N}| \Big) \dd s + C \sup_{0 \le s \le t} | \tilde{W}_{s}^{i} - W_{s}^{i}| \, .
\end{align*} Taking supremums and summing over $i$ on both sides yields, \begin{align*} \sum_{i=1}^{N} \sup_{0 \le t \le T} | \tilde{X}_{t}^{i,N} - X_{t}^{i,N}| \le & \int_{0}^{T} C \sum_{i=1}^{N} \sup_{0 \le t \le s} | \tilde{X}_{t}^{i,N} - X_{t}^{i,N}| \dd s + C \sum_{i=1}^{N} \sup_{0 \le s \le T} | \tilde{W}_{s}^{i} - W_{s}^{i}| \\ \le & C e^{CT} \sum_{i=1}^{N} \sup_{0 \le s \le T} | \tilde{W}_{s}^{i} - W_{s}^{i}| \, , \end{align*} where the final step follows from Gronwall's inequality. It is then clear that $\sum_{i=1}^{N} \sup_{0 \le s \le T} | \tilde{W}_{s}^{i} - W_{s}^{i}| \rightarrow 0$ implies $\sum_{i=1}^{N} \sup_{0 \le t \le T} | \tilde{X}_{t}^{i,N} - X_{t}^{i,N}| \rightarrow 0$, hence we obtain the required continuity. \end{proof} We next show that one can use Varadhan's lemma in this case. \begin{lemma} \label{Lem:Particle Varadhan UI} Fix $N\in \bN$, let $h \in \bH_T$ and let Assumptions \ref{Ass:General Payoff Growth} and \ref{Ass:Drift Lipschitz Assumption} hold. Then the integrability condition in Varadhan's lemma holds for \eqref{Eq:Small Noise for Particles}. Namely, for $\gamma>1$, \begin{align*} & \limsup_{\epsilon \rightarrow 0} \epsilon \log \left( \bE_{\bP} \left[ \exp \left( \frac{\gamma}{\epsilon} \left( 2 \log \Big(\tilde{G}_{1}(\sqrt{\epsilon}W^{1}, \dots, \sqrt{\epsilon}W^{N})\Big) - \int_{0}^{T} \sqrt{\epsilon} \dot{h}_{t}\dd W_{t}^{1} + \frac{1}{2} \int_{0}^{T} (\dot{h}_{t})^{2} \dd t \right) \right) \right] \right) < \infty.
\end{align*} \end{lemma} \begin{proof} Using that $h\in \bH_T$ is deterministic, $\dot h \in L^{2}([0,T],\bR)$ and Cauchy-Schwarz, we obtain, \begin{align*} &\epsilon \log \left( \bE_{\bP} \left[ \exp \left( \frac{\gamma}{\epsilon} \Big( 2 \log (\tilde{G}_{1}(\sqrt{\epsilon}W^{1}, \dots, \sqrt{\epsilon}W^{N})) - \int_{0}^{T} \sqrt{\epsilon} \dot{h}_{t}\dd W_{t}^{1} + \frac{1}{2} \int_{0}^{T} (\dot{h}_{t})^{2} \dd t \Big) \right) \right] \right) \\ & \qquad \le \frac{\gamma}{2} \int_{0}^{T} (\dot{h}_{t})^{2} \dd t + \frac{\epsilon}{2} \log \left( \bE_{\bP} \left[ \exp \left( \frac{4\gamma}{\epsilon} \log (\tilde{G}_{1}(\sqrt{\epsilon}W^{1}, \dots, \sqrt{\epsilon}W^{N})) \right) \right] \right) \\ & \hspace{7cm} + \frac{\epsilon}{2} \log \left( \bE_{\bP} \left[ \exp \left(- \frac{2\gamma}{\epsilon} \int_{0}^{T} \sqrt{\epsilon} \dot{h}_{t} \dd W_{t}^{1} \right) \right] \right) \, . \end{align*} It is then sufficient to show that the three terms are finite when we take $\limsup_{\epsilon \rightarrow 0} $. The first term is clearly finite by the conditions on $h$. Finiteness of the third term follows from \cite{GuasoniRobertson2008}*{pg.16}; namely, for all $i \in \{1, \dots, N\}$ the stochastic integral has the distribution $\int_{0}^{T} \dot h_{t} \dd W_{t}^{i} \sim \cN\big(0,\int_{0}^{T} (\dot h_{t})^{2} \dd t\big)$. Thus we obtain, \begin{align*} \limsup_{\epsilon \rightarrow 0} \frac{\epsilon}{2} \log \left( \bE_{\bP} \left[ \exp \left(- \frac{2\gamma}{\epsilon} \int_{0}^{T} \sqrt{\epsilon} \dot{h}_{t} \dd W_{t}^{1} \right) \right] \right) = \gamma^{2} \int_{0}^{T} (\dot h_{t})^{2} \dd t < \infty \, . \end{align*} The final term to consider is the terminal term, $\log (\tilde{G}_{1})$.
By the definition of $\tilde{G}_{1}$ and Assumption \ref{Ass:General Payoff Growth} we have, \begin{align*} \log \left( \tilde{G}_{1}( \sqrt{\epsilon} W^{1}, \dots, \sqrt{\epsilon} W^{N}) \right) \le C_{1} + C_{2} \sup_{0 \le t \le T} | X^{1,N}( \sqrt{\epsilon} W^{1}, \dots, \sqrt{\epsilon} W^{N})|^{\alpha} \, . \end{align*} Applying arguments similar to those in Lemma \ref{Lem:Continuity of Interacting SDE} we obtain \begin{align*} |X_{t}^{1,N}&( \sqrt{\epsilon} W^{1}, \dots, \sqrt{\epsilon} W^{N})| \\ & \le C +\int_{0}^{t} C \Big( |X_{s}^{1,N}( \sqrt{\epsilon} W^{1}, \dots, \sqrt{\epsilon} W^{N})| + \sum_{j=1}^{N} | X_{s}^{j,N}( \sqrt{\epsilon} W^{1}, \dots, \sqrt{\epsilon} W^{N})| \Big) \dd s +C \sqrt{\epsilon} \sup_{0 \le s \le t} | W_{s}^{1}| \, . \end{align*} Noting that, for $\alpha \ge 1$ and nonnegative $a_{i}$, $(\sum_{i=1}^{N} a_{i})^{\alpha} \le C^{\alpha} \sum_{i=1}^{N} a_{i}^{\alpha}$, and that the above estimate holds for any $X^{i,N}$, taking supremums and summing over $i$ yields, \begin{align*} \sum_{i=1}^{N} \sup_{0 \le t \le T} & |X_{t}^{i,N}( \sqrt{\epsilon} W^{1}, \dots, \sqrt{\epsilon} W^{N})|^{\alpha} \\ & \le C^{\alpha} + \int_{0}^{T} C^{\alpha} \sum_{i=1}^{N} \sup_{0 \le t \le s} | X_{t}^{i,N}( \sqrt{\epsilon} W^{1}, \dots, \sqrt{\epsilon} W^{N})|^{\alpha} \dd s +C^{\alpha} \sqrt{\epsilon}^{\alpha} \sum_{i=1}^{N} \sup_{0 \le s \le T} |W_{s}^{i}|^{\alpha} \\ & \leq C^{\alpha} e^{C^{\alpha}T} \Big( 1 + \sqrt{\epsilon}^{\alpha} \sum_{i=1}^{N} \sup_{0 \le s \le T} |W_{s}^{i}|^{\alpha} \Big) \, , \end{align*} where the final line comes from Gronwall's inequality. It is useful for us to note that this yields the bound \begin{align} \label{Eq:Tilde G bound} \log \left( \tilde{G}_{1}( W^{1}, \dots, W^{N}) \right) \le C_{1} + C_{2} \sum_{i=1}^{N} \sup_{0 \le s \le T} |W_{s}^{i}|^{\alpha} \, .
\end{align} Using the previous results we have the following bound, \begin{align*} &\frac{\epsilon}{2} \log \left( \bE_{\bP} \left[ \exp \left( \frac{4\gamma}{\epsilon} \log (\tilde{G}_{1}(\sqrt{\epsilon}W^{1}, \dots, \sqrt{\epsilon}W^{N})) \right) \right] \right) \\ & \qquad \qquad \le C + \sum_{i=1}^{N} \frac{\epsilon C}{2} \log \left( \bE_{\bP} \left[ \exp \left( \frac{4\gamma C^{\alpha}}{\epsilon^{1- \alpha/2}} \sup_{0 \le s \le T} |W_{s}^{i}|^{\alpha} \right) \right] \right) \, , \end{align*} where we have used the independence of the Brownian motions to obtain the sum over $i$. Finiteness of this term then follows by arguments similar to those in Lemmas 7.6 and 7.7 in \cite{GuasoniRobertson2008}. To conclude, we have shown that all terms are finite and the result follows. \end{proof} Before finishing the proof of Theorem \ref{Thm:Complete Measure Change}, we note that the LDP for Brownian motion in pathspace is given by Schilder's theorem, which states that for a $d$-dimensional Brownian motion $W$, the family $\sqrt{\epsilon}W$ satisfies an LDP with good rate function (see \cite{DemboZeitouni2010}), \begin{align*} I(y)= \begin{cases} \frac{1}{2} \int_{0}^{T} |\dot{y}_{t}|^{2} \dd t \, , & \text{ if} ~ y \in \bH_{T}^{d} \, , \\ \infty, & \text{ if} ~ y \in \bW_{T}^{d} \backslash \bH_{T}^{d} \, . \end{cases} \end{align*} \begin{proof}[Proof of Theorem \ref{Thm:Complete Measure Change}] The continuity of the SDE from Lemma \ref{Lem:Continuity of Interacting SDE}, along with the existence of a unique strong solution under Assumption \ref{Ass:Drift Lipschitz Assumption}, ensures $\tilde{G}_{1}$ is a continuous function under Assumption \ref{Ass:General Payoff Growth}. By assumption, there exists a point $(u^{1}, \hat{u}) \in \bH_{T}^{2}$ such that $\tilde{G}(u^{1}, \hat{u}, \dots, \hat{u})>0$; this, along with \eqref{Eq:Tilde G bound} and the fact that $\alpha < 2$, yields the existence of maximisers by Lemma 7.1 of \cite{GuasoniRobertson2008}.
Similarly, the $+ \dot{h}^{2}$ term yields the existence of a minimising $h$ for $\bar{L}$. Moreover, continuity of $\tilde{G}$ w.r.t. the Brownian motion and finite variation of $\dot{h}$ implies the exponential term in \eqref{Eq:Small Noise for Particles} is continuous. Thus to use Varadhan's lemma we only need to check the integrability condition, which is given in Lemma \ref{Lem:Particle Varadhan UI}, hence relation \eqref{Eq:Complete L in sup form} follows. The remaining part to be proved is that \eqref{Eq:Complete Asymptotic Optimality check} implies asymptotic optimality. This essentially relies on showing that \eqref{Eq:Complete L minimiser} is a lower bound for the RHS of \eqref{Eq:General Asymptotically Optimal Condition}. Using the same arguments as those used to derive \eqref{Eq:General Asymptotically Optimal Condition}, one obtains the following expression for an asymptotically optimal estimator \begin{align*} \sup_{u \in \bH^N_{T}} \left\{ 2 \log (\tilde{G}_{1}(u^{1}, \dots, u^{N})) - \frac{1}{2} \int_{0}^{T} |\dot u_{t}|^{2} \dd t \right\} \, . \end{align*} It is then clear that the supremum is bounded below by the case $u^{2}= \dots = u^{N}$, which yields the expression \eqref{Eq:Complete L minimiser}. Strict convexity along with arguments on page 18 in \cite{GuasoniRobertson2008} yields the uniqueness, which completes the proof. \end{proof} \subsection{Proofs for Theorem \ref{Thm:Decoupled}} \label{Sec:Proofs Decoupled} We recall that, due to the independence of the original particle system from the SDE in question, we work on the product of two probability spaces; consequently (since $\mu^{N}$ will be a ``realisation'' coming from the space $\tilde{\Omega}$) our results are all $\tilde{\bP}$-a.s.. As before we need to prove that the SDE is a continuous map of the Brownian motions. We were unable to find any results for the one-sided Lipschitz and locally Lipschitz case; we therefore provide a proof of this result here (Lemma \ref{Lem:Continuity of SDE}).
The proof of this relies on the following lemma. \begin{lemma} \label{Lem:Bounding Xbar} Let Assumption \ref{Ass:Drift Monotone Assumption} hold and let $\bar{X}$ be the solution to \eqref{Eq:Particle Approx General MV-SDE}. Then consider the following stochastic processes \begin{align*} X_{t}^{+} & := x_{0} \1_{\{x_{0} \ge 0 \}} + \int_{0}^{t} C (|X_{s}^{+}| +1) \dd s + \sigma \left( \sup_{0 \le s \le t} W_{s} - \inf_{0 \le s \le t} W_{s} \right), \\ X_{t}^{-} & := x_{0} \1_{\{x_{0} \le 0 \}} - \int_{0}^{t} C (|X_{s}^{-}| +1) \dd s + \sigma \left( \inf_{0 \le s \le t} W_{s} - \sup_{0 \le s \le t} W_{s} \right) \, , \end{align*} where $C$ is the constant in the monotone condition of $b$. Then, $\forall ~ t \ge 0$, $X_{t}^{-} \le \bar{X}_{t} \le X_{t}^{+}$, $\bP\otimes \tilde{\bP}$-a.s.. \end{lemma} \begin{proof} Firstly, one can easily show through a standard Picard iteration argument that both $X^{\pm}$ have unique, progressively measurable solutions in $\bS^{2}$. We argue by contradiction and show the upper bound $\bar{X} \le X^{+}$, the lower bound follows by the same argument in the opposite direction. Since $b$ is monotone (Assumption \ref{Ass:Drift Monotone Assumption}), we can derive the following bounds $\forall ~ s \in [0,T]$ and $\mu \in \cP_{2}(\bR)$, \begin{align*} b(s,x,\mu) \le C(|x|+1) \quad \text{for } x \ge 0 \qquad \text{and} \qquad b(s,x,\mu) \ge -C(|x|+1) \quad \text{for } x \le 0 \, . \end{align*} Assume that there exists a time $t_{2}$ such that $\bar{X}_{t_{2}} > X_{t_{2}}^{+}$. If $\bar{X}_{t} \ge 0$ for all $t \in [0,t_2]$, then, \begin{align*} X_{t_2}^{+}-\bar{X}_{t_2} = x_{0} \1_{\{x_{0} \ge 0 \}} -x_{0}+ \int_{0}^{t_2} C (|X_{s}^{+}| +1) -b(s,\bar{X}_{s}, \mu_{s}^{N}) \dd s + \sigma \left( \sup_{0 \le s \le t_2} W_{s} - \inf_{0 \le s \le t_2} W_{s} \right) - \sigma W_{t_2} \geq0, \end{align*} which yields a contradiction. Alternatively, let $t_1:=\max\{t\leq t_2: \bar X_t = 0\}$. 
By continuity, $\bar{X}_{t_{1}}=0$ and so \begin{align*} X_{t_{2}}^{+}-\bar{X}_{t_{2}} = x_{0} \1_{\{x_{0} \ge 0 \}}+ & \int_{0}^{t_{2}} C (|X_{s}^{+}| +1) \dd s - \int_{t_{1}}^{t_{2}} b(s,\bar{X}_{s}, \mu_{s}^{N}) \dd s \\ & + \sigma \left( \sup_{0 \le s \le t_{2}} W_{s} - \inf_{0 \le s \le t_{2}} W_{s} \right) - \sigma \left( W_{t_{2}}- W_{t_{1}} \right)\geq0, \end{align*} which contradicts $\bar{X}_{t_{2}} > X_{t_{2}}^{+}$ and thus proves the result. \end{proof} One can now use this lemma to prove the following result. \begin{lemma} \label{Lem:Continuity of SDE} Let $\bar{X}$ be defined as in \eqref{Eq:Particle Approx General MV-SDE}, with coefficients satisfying Assumption \ref{Ass:Drift Monotone Assumption}, then $\bar{X}$ is a $\bP\otimes \tilde{\bP}$-a.s. continuous map of Brownian motion in the uniform norm. \end{lemma} \begin{proof} To prove this result we require that, if $\sup_{0 \le s \le t} |\tilde{W}_{s}-W_{s}| \rightarrow 0$, then $\sup_{0 \le s \le t} |\bar{X}_{s}(\tilde{W})-\bar{X}_{s}(W)| \rightarrow 0$. We note that we work with the uniform topology and hence we may assume that all (finitely many) Brownian motions are uniformly bounded on $[0,T]$. Lemma \ref{Lem:Bounding Xbar} implies that we can bound the value $\bar{X}$ takes by the processes $X_{\cdot}^{\pm}$. It is a straightforward application of Gronwall's Lemma to deduce, \begin{align*} X_{t}^{+} & \le \Big(x_{0} \1_{\{x_{0} \ge 0 \}} + Ct +\sigma \Big( \sup_{0 \le s \le t} W_{s} - \inf_{0 \le s \le t} W_{s} \Big) \Big) e^{Ct} \, , \\ X_{t}^{-} & \ge -\Big(|x_{0} \1_{\{x_{0} \le 0 \}}| + Ct +\sigma \Big | \inf_{0 \le s \le t} W_{s} -\sup_{0 \le s \le t} W_{s} \Big | \Big) e^{Ct} \, . \end{align*} Hence we can bound the value $\bar{X}$ can take as a function of its Brownian motion (which itself is bounded by the uniform topology).
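For completeness, we sketch the Gronwall step behind the first of these bounds (the estimate for $X^{-}$ is symmetric). Since $X^{+} \ge 0$, writing $a_{t} := x_{0} \1_{\{x_{0} \ge 0 \}} + Ct + \sigma \big( \sup_{0 \le s \le t} W_{s} - \inf_{0 \le s \le t} W_{s} \big)$, which is nondecreasing in $t$, the defining equation for $X^{+}$ reads

```latex
\begin{align*}
X_{t}^{+} = a_{t} + C \int_{0}^{t} X_{s}^{+} \dd s \, ,
\end{align*}
```

and the integral form of Gronwall's inequality with nondecreasing $a_{t}$ yields $X_{t}^{+} \le a_{t} e^{Ct}$, which is the stated bound.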
Let us now consider the difference in the SDEs driven by the different Brownian motions, \begin{align*} | \bar{X}_{t}(\tilde{W})- \bar{X}_{t}(W) | \le \int_{0}^{t} | b(s, \bar{X}_{s}(\tilde{W}), \mu_{s}^{N}) - b(s, \bar{X}_{s}(W), \mu_{s}^{N})| \dd s + \sigma | \tilde{W}_{t} - W_{t}| \, . \end{align*} By Assumption \ref{Ass:Drift Monotone Assumption}, $b$ is locally Lipschitz, hence, \begin{align*} | b(s, \bar{X}_{s}(\tilde{W}), \mu_{s}^{N}) - b(s, \bar{X}_{s}(W), \mu_{s}^{N})| \le C(\tilde{W},W) |\bar{X}_{s}(\tilde{W})- \bar{X}_{s}(W) | \, . \end{align*} Noting further that $\sigma | \tilde{W}_{t} - W_{t}| \le \sigma \sup_{0 \le s \le t} | \tilde{W}_{s} - W_{s}| $, then by Gronwall's inequality we obtain, \begin{align*} | \bar{X}_{t}(\tilde{W})- \bar{X}_{t}(W) | \le \sigma \Big( \sup_{0 \le s \le t} | \tilde{W}_{s} - W_{s}| \Big) e^{C(\tilde{W}, W)t}\, . \end{align*} Again, by the uniform topology, we must have $\tilde{W}$ and $W$ bounded, thus $C(\tilde{W},W) < \infty$ and hence, $\sup_{0 \le s \le t} |\bar{X}_{s}(\tilde{W})-\bar{X}_{s}(W)| \rightarrow 0$ when $\sup_{0 \le s \le t} |\tilde{W}_{s}-W_{s}| \rightarrow 0$. \end{proof} We now prove that the uniform integrability condition still holds, namely that we can still apply Varadhan's Lemma, in both settings. \begin{lemma} \label{Lem:UI condition decoupled} Let $h\in \bH_T$, then under Assumption \ref{Ass:General Payoff Growth} and \ref{Ass:Drift Monotone Assumption} the integrability condition in Varadhan's lemma holds for \eqref{Eq:Small Noise for General MV-SDE}. Namely, for some $\gamma >1$ \begin{align*} \limsup_{\epsilon \rightarrow 0} \epsilon \log \bE_{\bP \otimes \tilde{\bP}} \left[ \exp\left( \frac{\gamma}{\epsilon}\left(2 \log(\overline{G}(\sqrt{\epsilon} W)) - \int_{0}^{T} \sqrt{\epsilon} \dot{h}_{t} \dd W_{t} + \frac{1}{2} \int_{0}^{T} \dot{h}_{t}^{2} \dd t \right) \right) \Big | \tilde{\cF} \right] < \infty ~ ~ ~ \tilde{\bP} \text{-a.s.}. 
\end{align*} \end{lemma} \begin{proof} The $h$ terms can be dealt with using the same arguments as before. The term we are interested in is the $G$ term. Using arguments as in the proof of Lemma \ref{Lem:Particle Varadhan UI}, we only need to prove the following holds, \begin{align*} \limsup_{\epsilon \rightarrow 0} \frac{\epsilon}{2} \log \left( \bE_{\bP\otimes \tilde{\bP}} \left[ \exp \left( \frac{4\gamma}{\epsilon} \log \Big(G(\bar{X}(\sqrt{\epsilon}W)) \Big) \right) \Big | \tilde{\cF} \right] \right) < \infty \, . \end{align*} Recall that Lemma \ref{Lem:Bounding Xbar} yields the bound, $X_{t}^{-} \le \bar{X}_{t} \le X_{t}^{+}$, $\bP \otimes \tilde{\bP}$-a.s.. Hence, for $\alpha \in [1, 2)$ we have the following bound $\bP \otimes \tilde{\bP}$-a.s., \begin{align*} \sup_{0 \le t \le T} | \bar{X}_{t}|^{\alpha} \le \sup_{0 \le t \le T} | X_{t}^{+}|^{\alpha} + \sup_{0 \le t \le T} | X_{t}^{-}|^{\alpha} = | X_{T}^{+}|^{\alpha} + | X_{T}^{-}|^{\alpha} \, , \end{align*} where the final equality comes from the fact that $|X^{\pm}|$ are nondecreasing processes. Due to the dependence on the external measure $\mu^{N}$, all of these results are $\tilde{\bP}$-a.s., but for ease of presentation we will omit it here. Further recall that by Gronwall's lemma (or see the proof of Lemma \ref{Lem:Continuity of SDE}), we can bound the processes $|X^{\pm}|$, thus, \begin{align*} |X_{T}^{+}|^{\alpha} & \le C^{\alpha}\Big(x_{0}^{\alpha} \1_{\{x_{0} \ge 0 \}} + C^{\alpha} +\sigma^{\alpha} \Big( \sup_{0 \le s \le T} W_{s} - \inf_{0 \le s \le T} W_{s} \Big)^{\alpha} \Big) e^{C \alpha} \, , \\ |X_{T}^{-}|^{\alpha} & \le C^{\alpha}\Big(|x_{0} \1_{\{x_{0} \le 0 \}}|^{\alpha} + C^{\alpha} +\sigma^{\alpha} \Big | \inf_{0 \le s \le T} W_{s} -\sup_{0 \le s \le T} W_{s} \Big |^{\alpha} \Big) e^{C \alpha } \, .
\end{align*} Due to the fact that $\alpha \ge 1$, and $- \inf_{0 \le s \le T} W_{s}= \sup_{0 \le s \le T} -W_{s} \ge 0$, we have, \begin{align*} \Big | \inf_{0 \le s \le T} W_{s} -\sup_{0 \le s \le T} W_{s} \Big |^{\alpha} = \Big( \sup_{0 \le s \le T} W_{s} - \inf_{0 \le s \le T} W_{s} \Big)^{\alpha} \le C^{\alpha} \Big( \Big(\sup_{0 \le s \le T} W_{s}\Big)^{\alpha} + \Big(\sup_{0 \le s \le T} -W_{s} \Big)^{\alpha} \Big) \, . \end{align*} We express the bound w.r.t. the driving Brownian motion $\sqrt{\epsilon}W$ and obtain, \begin{align*} \sup_{0 \le t \le T} | \bar{X}_{t}(\sqrt{\epsilon}W)|^{\alpha} \le C^{\alpha}\Big( |x_{0}|^{\alpha} + C^{\alpha} +C^{\alpha}\sigma^{\alpha} \sqrt{\epsilon}^{\alpha} \Big( \Big(\sup_{0 \le s \le T} W_{s}\Big)^{\alpha} + \Big(\sup_{0 \le s \le T} -W_{s} \Big)^{\alpha} \Big) \Big) e^{C \alpha} \, . \end{align*} We can simplify this further by noting, \begin{align*} \Big(\sup_{0 \le s \le T} W_{s}\Big)^{\alpha} + \Big(\sup_{0 \le s \le T} -W_{s} \Big)^{\alpha} \le C^{\alpha}\sup_{0 \le s \le T} |W_{s}|^{\alpha} \, . \end{align*} Using these inequalities we obtain, \begin{align*} &\frac{\epsilon}{2} \log \left( \bE_{\bP \otimes \tilde{\bP}} \left[ \exp \left( \frac{4\gamma}{\epsilon} \log (G(\bar{X}(\sqrt{\epsilon}W))) \right) \Big | \tilde{\cF} \right] \right) \\ & \qquad \le \frac{\epsilon}{2} \log \left( \bE_{\bP \otimes \tilde{\bP}} \left[ \exp \left( \frac{4\gamma}{\epsilon}C_{1} + \frac{4\gamma}{\epsilon}C_{2} \Big( C^{\alpha}\Big(|x_{0}|^{\alpha} + C^{\alpha} +C^{\alpha}\sigma^{\alpha} \sqrt{\epsilon}^{\alpha} \sup_{0 \le s \le T} |W_{s}|^{\alpha} \Big) e^{C \alpha} \Big) \right) \Big | \tilde{\cF} \right] \right) \, . 
\end{align*} By splitting up the terms in the exponential this then reduces to the problem of considering, \begin{align*} \frac{\epsilon}{2} \log \left( \bE_{\bP \otimes \tilde{\bP}} \Big[ \exp \Big( \frac{4\gamma}{\epsilon^{1- \alpha/2}}C_{2} C^{\alpha}\sigma^{\alpha} \sup_{0 \le s \le T} |W_{s}|^{\alpha} \Big) \Big | \tilde{\cF} \Big] \right) \, . \end{align*} One can show that this quantity is finite by following the same arguments as \cite{GuasoniRobertson2008}*{pg.16}. \end{proof} We can now prove the second main theorem; the arguments follow similar lines to those we used to conclude the proof of Theorem \ref{Thm:Complete Measure Change}. \begin{proof}[Proof of Theorem \ref{Thm:Decoupled}] The continuity of the SDE from Lemma \ref{Lem:Continuity of SDE}, along with the existence of a unique strong solution under Assumption \ref{Ass:Drift Monotone Assumption}, ensures $\overline{G}$ is a $\tilde{\bP}$-a.s. continuous function under Assumption \ref{Ass:General Payoff Growth}. We then obtain the existence of the maximiser by Lemma 7.1 of \cite{GuasoniRobertson2008}. Moreover, the $\tilde{\bP}$-a.s. continuity of $\overline{G}$ w.r.t. the Brownian motion and finite variation of $\dot{h}$ implies that to use Varadhan's lemma we only need to check the integrability condition, which is given in Lemma \ref{Lem:UI condition decoupled}. This with Lemma 7.6 in \cite{GuasoniRobertson2008} is enough to complete the proof by arguments on page 18 in \cite{GuasoniRobertson2008}. \end{proof} \begin{bibdiv} \begin{biblist} \bib{BudhirajaDupuisFischer2012}{article}{ author={Budhiraja, Amarjit}, author={Dupuis, Paul}, author={Fischer, Markus}, title={Large deviation properties of weakly interacting processes via weak convergence methods}, date={2012}, ISSN={0091-1798}, journal={Ann.
Probab.}, volume={40}, number={1}, pages={74\ndash 102}, url={https://doi.org/10.1214/10-AOP616}, } \bib{BerntonEtAl2017}{article}{ author={Bernton, Espen}, author={Jacob, Pierre~E.}, author={Gerber, Mathieu}, author={Robert, Christian~P.}, title={Inference in generative models using the {W}asserstein distance}, date={2017}, journal={arXiv:1701.05146}, } \bib{BuckdahnEtAl2017}{article}{ author={Buckdahn, Rainer}, author={Li, Juan}, author={Peng, Shige}, author={Rainer, Catherine}, title={Mean-field stochastic differential equations and associated {PDE}s}, date={2017}, journal={The Annals of Probability}, volume={45}, number={2}, pages={824\ndash 878}, } \bib{Bossy2004}{article}{ author={Bossy, Mireille}, title={Optimal rate of convergence of a stochastic particle method to solutions of 1d viscous scalar conservation laws}, date={2004}, journal={Mathematics of computation}, volume={73}, number={246}, pages={777\ndash 812}, } \bib{BossyTalay1997}{article}{ author={Bossy, Mireille}, author={Talay, Denis}, title={A stochastic particle method for the {M}c{K}ean-{V}lasov and the {B}urgers equation}, date={1997}, journal={Mathematics of Computation of the American Mathematical Society}, volume={66}, number={217}, pages={157\ndash 192}, } \bib{Carmona2016Lectures}{book}{ author={Carmona, Ren\'{e}}, title={Lectures on {BSDE}s, stochastic control, and stochastic differential games with financial applications}, publisher={SIAM}, date={2016}, } \bib{CrisanMcMurray2017}{unpublished}{ author={Crisan, Dan}, author={McMurray, Eamon}, title={Cubature on {W}iener space for {M}c{K}ean--{V}lasov {SDE}s with smooth scalar interaction}, date={2017}, note={arXiv:1703.04177}, } \bib{DupuisEllis2011}{book}{ author={Dupuis, Paul}, author={Ellis, Richard~S.}, title={A weak convergence approach to the theory of large deviations}, publisher={John Wiley \& Sons}, date={2011}, volume={902}, } \bib{DawsonGaertner1987-DG1987}{article}{ author={Dawson, Donald~A.}, author={G\"artner, J\"urgen},
title={Large deviations from the {M}c{K}ean-{V}lasov limit for weakly interacting diffusions}, date={1987}, ISSN={0090-9491}, journal={Stochastics}, volume={20}, number={4}, pages={247\ndash 308}, url={http://dx.doi.org/10.1080/17442508708833446}, } \bib{dosReisSalkeldTugaut2017}{unpublished}{ author={dos Reis, G.}, author={Salkeld, William}, author={Tugaut, Julian}, title={Freidlin-{W}entzell {LDP}s in path space for {McK}ean-{V}lasov equations and the functional iterated logarithm law}, date={2017}, note={arXiv:1708.04961}, } \bib{DupuisWang2004}{article}{ author={Dupuis, Paul}, author={Wang, Hui}, title={Importance sampling, large deviations, and differential games}, date={2004}, journal={Stochastics: An International Journal of Probability and Stochastic Processes}, volume={76}, number={6}, pages={481\ndash 508}, } \bib{DemboZeitouni2010}{book}{ author={Dembo, Amir}, author={Zeitouni, Ofer}, title={Large deviations techniques and applications, volume 38 of stochastic modelling and applied probability}, publisher={Springer-Verlag, Berlin}, date={2010}, } \bib{EkelandTemam1999}{book}{ author={Ekeland, Ivar}, author={Temam, Roger}, title={Convex analysis and variational problems}, publisher={SIAM}, date={1999}, } \bib{Fischer2014}{article}{ author={Fischer, Markus}, title={On the form of the large deviation rate function for the empirical measures of weakly interacting systems}, date={2014}, journal={Bernoulli}, volume={20}, number={4}, pages={1765\ndash 1801}, } \bib{FlemingRishel1975}{book}{ author={Fleming, Wendell~H.}, author={Rishel, Raymond~W.}, title={Deterministic and stochastic optimal control}, publisher={Springer-Verlag, Berlin-New York}, date={1975}, note={Applications of Mathematics, No. 
1}, } \bib{GlassermanEtAl1999}{article}{ author={Glasserman, Paul}, author={Heidelberger, Philip}, author={Shahabuddin, Perwez}, title={Asymptotically optimal importance sampling and stratification for pricing path-dependent options}, date={1999}, journal={Mathematical finance}, volume={9}, number={2}, pages={117\ndash 152}, } \bib{GobetPagliarani2018}{article}{ author={Gobet, Emmanuel}, author={Pagliarani, Stefano}, title={Analytical approximations of non-linear {SDE}s of {M}c{K}ean-{V}lasov type}, date={2018}, journal={Journal of Mathematical Analysis and Applications}, } \bib{GuasoniRobertson2008}{article}{ author={Guasoni, Paolo}, author={Robertson, Scott}, title={Optimal importance sampling with explicit formulas in continuous time}, date={2008}, journal={Finance and Stochastics}, volume={12}, number={1}, pages={1\ndash 19}, } \bib{GlassermanWang1997}{article}{ author={Glasserman, Paul}, author={Wang, Yashan}, title={Counterexamples in importance sampling for large deviations probabilities}, date={1997}, journal={The Annals of Applied Probability}, volume={7}, number={3}, pages={731\ndash 746}, } \bib{KohatsuOgawa1997}{article}{ author={Kohatsu-Higa, Arturo}, author={Ogawa, Shigeyoshi}, title={Weak rate of convergence for an {E}uler scheme of nonlinear {SDE}'s}, date={1997}, journal={Monte Carlo Methods and Applications}, volume={3}, pages={327\ndash 345}, } \bib{KosturEtAl2002}{article}{ author={Kostur, Marcin}, author={{\L}uczka, J.}, author={Schimansky-Geier, L.}, title={Nonequilibrium coupled {B}rownian phase oscillators}, date={2002}, journal={Physical Review E}, volume={65}, number={5}, pages={051115}, } \bib{MitrinovicEtAl2012}{book}{ author={Mitrinovic, Dragoslav~S.}, author={Pecaric, Josip}, author={Fink, Arlington~M.}, title={Inequalities involving functions and their integrals and derivatives}, publisher={Springer Science \& Business Media}, date={2012}, volume={53}, } \bib{Robertson2010}{article}{ author={Robertson, Scott}, title={Sample path large 
deviations and optimal importance sampling for stochastic volatility models}, date={2010}, journal={Stochastic Processes and their applications}, volume={120}, number={1}, pages={66\ndash 83}, } \bib{SzpruchTanTse2017}{unpublished}{ author={Szpruch, Lukasz}, author={Tan, Shuren}, author={Tse, Alvin}, title={Iterative particle approximation for {M}c{K}ean-{V}lasov {SDE}s with application to multilevel {M}onte {C}arlo estimation}, date={2017}, note={arXiv:1706.00907}, } \bib{TengEtAl2016}{article}{ author={Teng, Huei-Wen}, author={Fuh, Cheng-Der}, author={Chen, Chun-Chieh}, title={On an automatic and optimal importance sampling approach with applications in finance}, date={2016}, journal={Quantitative Finance}, volume={16}, number={8}, pages={1259\ndash 1271}, } \bib{YongZhou1999}{book}{ author={Yong, Jiongmin}, author={Zhou, Xun~Yu}, title={Stochastic controls: Hamiltonian systems and {HJB} equations}, publisher={Springer Science \& Business Media}, date={1999}, volume={43}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} For a given positive integer $p$ and a surface $\Sigma$, we are concerned with obtaining optimal drawings (that is, drawings with the fewest possible crossings) for the $K_{p,q}$ family in $\Sigma$. For that, we want to use the duplication operation (see below) akin to its use in Zarankiewicz's drawings \cite{zarankiewicz}, where we can obtain drawings for every $q$ starting with a planar $K_{p,2}$ and alternately duplicating the two vertices of the part of size $2$. Let $u$ and $v$ be two vertices of a graph $G$ with the same neighborhood of size $p$. Let $D$ be a drawing of $G-v$ in a surface and let $\Delta$ be a neighborhood of $u$ in $D$ homeomorphic to a disk such that $\Delta$ only intersects the edges of $G-v$ incident with $u$. We may \textit{duplicate} $u$ by drawing $v$ in the interior of $\Delta$ and drawing the edges $vw$ incident with $v$ near $uw$ in such a way that only edges that cross $uw$ also cross $vw$. This may be done so that the edges incident with $u$ and $v$ cross at most $Z(p)=\ff{p}{2}\ff{p-1}{2}$ times in the interior of $\Delta$~\cite{crk7n}. The vertex $v$ is called a \textit{duplicate} of $u$ in $D$. Any drawing $D'$ obtained from $D$ by a sequence of duplications of vertices in $D$ is called an \textit{extension} of $D$. We note that $v$ will end with the same local rotation as $u$. A drawing of a graph in a surface is \textit{good} if: (i) no pair of edges cross more than once; (ii) edges with a common incident vertex do not cross; and (iii) no three edges have a common crossing point. A drawing of a graph $G$ in a surface $\Sigma$ is \textit{optimal} if it has the least number of crossings over all possible drawings of $G$ in $\Sigma$. It is folklore that every graph has an optimal drawing that is good.
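For concreteness (a side remark, not needed in the proofs), the first few values of the duplication bound $Z(p)=\ff{p}{2}\ff{p-1}{2}$ are

```latex
\begin{align*}
Z(3)=1 \, , \quad Z(4)=2 \, , \quad Z(5)=4 \, , \quad Z(6)=6 \, , \quad Z(7)=9 \, ,
\end{align*}
```

so, for instance, duplicating a vertex whose neighborhood has size $5$ introduces at most $4$ crossings between the old and the new stars inside $\Delta$.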
We prove that, for each integer $p \geq 1$ and each surface $\Sigma$, there exists a finite set $\mathcal{D}(p,\Sigma) = \{D_1,\ldots,D_k\}$, where: for each $i \in \{1,\ldots,k\}$, there is an integer $r_i$ such that $D_i$ is a drawing of $K_{p,r_i}$ in $\Sigma$; and for each positive integer $q$, there is an $i \in \{1,2,\ldots,k\}$ such that either $D_i$ is an optimal drawing of $K_{p,q}$, or there exists an optimal drawing $D$ of $K_{p,q}$ that is an extension of $D_i$. As an example, if Zarankiewicz's conjecture were true, for any $p$, a set composed of embeddings of $K_{p,1}$ and $K_{p,2}$ would suffice for the sphere. The proof of existence of this set is one of the main contributions of this article. \begin{theorem}\label{thm:fbasis} Let $p$ be a positive integer and let $\Sigma$ be a surface. Then there exists a finite set $\mathcal{D}(p,\Sigma)$ of drawings of bipartite complete graphs in $\Sigma$ such that, for every positive integer $q$, there is an optimal drawing of $K_{p,q}$ that is either in $\mathcal{D}(p,\Sigma)$ or an extension of a drawing in $\mathcal{D}(p,\Sigma)$. \end{theorem} We only prove finiteness of the set $\mathcal{D}(p,\Sigma)$. Our upper bound on $|\mathcal{D}(p,\Sigma)|$ is surely far from tight. This theorem is an extension to higher genus (orientable and non-orientable) surfaces of a result of Christian, Richter, and Salazar \cite{zcffm} for the plane/sphere. We call the parts of $K_{p,q}$ of sizes $q$ and $p$ the \textit{$q$-side} and the \textit{$p$-side} of $K_{p,q}$, respectively. As an intermediate step for the proof of Theorem \ref{thm:fbasis}, we bound $q$ as a function of $\Sigma$ and $p$, as expressed in the next theorem. For a pair of vertices $u$ and $v$ of $G$, let $\crn_D(u,v)$ denote the number of crossings between the edges incident with $u$ and $v$ in a drawing $D$.
\begin{theorem}\label{thm:qbsp} Let $D$ be a good drawing of $K_{p,q}$ in a surface $\Sigma$ such that, for any two vertices $v$ and $w$ of the $q$-side, $\crn_D(v,w) < Z(p)$. Then, $q$ is bounded by a function of $\Sigma$ and $p$. \end{theorem} In Section 2, we provide a summary of notation and prior results that we use in this work. Sections 3 and 4 provide the proofs of Theorems \ref{thm:qbsp} and \ref{thm:fbasis}, respectively. \section{Preliminaries} By surface we mean a connected, compact $2$-manifold (that is, a connected, compact Hausdorff space in which every point has a neighborhood homeomorphic to $\mathbb{R}^2$). For a surface $\Sigma$ we denote its Euler characteristic as $\chi(\Sigma)$. For a connected graph $G$ the \textit{genus} $g(G)$ of $G$ is the minimum genus of an orientable surface such that $G$ is embeddable in it. We similarly define the \textit{demigenus} $\tilde{g}(G)$ for non-orientable surfaces. There exist closed formulas for the genus and demigenus of bipartite complete graphs \cite{ringel65a,ringel65b}: \begin{theorem}\label{thm:kmngenus} If $m,n \geq 2$, then: \[ g(K_{m,n}) = \cf{(m-2)(n-2)}{4}. \] If $m \geq 3$ and $n \geq 3$, then: \[ \tilde{g}(K_{m,n}) = \cf{(m-2)(n-2)}{2}. \] \end{theorem} Let $D$ be a drawing of a connected loopless graph $G$ in a surface $\Sigma$. We make no distinction between the elements of the graph and their representations in $D$. Let $e$ be an edge of $G$ whose ends are $v$ and $w$. A \textit{local rotation} $\pi_v$ around $v$ is a cyclic permutation of the edges incident with $v$. An \textit{embedding scheme} of $G$ is the pair $(\{\pi_v\}_{v \in V(G)},\lambda)$, where $\lambda$ is a signature on the edges of $G$. We may recover an embedding scheme of $G$ from an embedding of $G$. The following theorem relates the embedding schemes of a graph with cellular embeddings on surfaces.
\begin{theorem}\label{thm:rsce} Every cellular embedding of a connected graph $G$ is uniquely determined, up to homeomorphism, by its embedding scheme. \end{theorem} The version of this theorem restricted to orientable surfaces is known as the \textit{Heffter-Edmonds-Ringel rotation principle} \cite{herrp,ehrrp,rherp}. This version was made explicit by Ringel~\cite{cmct} in the 50s and the first formal proof of it was published by Stahl~\cite{ges}. It turns out that $Z(p)$ is also a lower bound for the number of crossings between duplicates in the plane, as shown by the next lemma. \begin{lemma}\label{lem:crdup} \cite{zcffm} Let $D$ be a drawing in the plane of the graph consisting of two vertices $u$ and $v$ joined by $m$ edges so that $u$ and $v$ have the same clockwise rotation in $D$. Then $D$ has at least $Z(m)$ crossings. \end{lemma} Let $D$ and $D'$ be two good drawings of a connected graph $G$ in a surface $\Sigma$. We say that $D$ and $D'$ are \textit{isomorphic} if there is a homeomorphism of $\Sigma$ to itself taking $D$ into $D'$. For the purpose of finding optimal drawings, it suffices to consider only the isomorphism classes of good drawings of $G$, as the drawings in the same class have the same crossing number. Let $D$ be a good drawing of a loopless connected graph $G$. The \textit{flattening} of $D$ is the graph $P$ obtained from $G$ by inserting a vertex of degree 4 at each crossing in $D$. Thus, if an edge $e$ is crossed $k$ times, $e$ is subdivided into $k+1$ edges. In the remainder of this article, we show that there exist only finitely many (up to isomorphism) good drawings of a connected graph $G$ in any surface $\Sigma$ (Theorem \ref{thm:fdui}). This proof works in two steps. We first show that there exist only finitely many flattenings arising from good drawings of $G$ in $\Sigma$. Afterwards, for a particular flattening $P$ of $G$, we show that there exist only finitely many (up to homeomorphism) embeddings of $P$ in a surface $\Sigma$.
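As a simple illustration (not used in the sequel), consider a good drawing of $K_{3,3}$ in the sphere with exactly one crossing, which is optimal. Each crossing contributes one new vertex and subdivides two edges, so the flattening $P$ has $6+1=7$ vertices and $9+2=11$ edges, and the corresponding embedding of $P$ in the sphere has $6$ faces, consistent with the sphere having Euler characteristic $2$:

```latex
\begin{align*}
|V(P)| - |E(P)| + |F(P)| = 7 - 11 + 6 = 2 \, .
\end{align*}
```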
Let $\ft(G,\Sigma)$ be the set of all flattenings (up to graph isomorphism) of $G$ arising from good drawings of $G$ in $\Sigma$. \begin{lemma} For any graph $G$ and surface $\Sigma$, $\ft(G,\Sigma)$ is finite. \end{lemma} \begin{proof} Let $D$ and $D'$ be good drawings of $G$ in $\Sigma$. Let $P$ and $P'$ be their flattenings. Let $X$ and $X'$ be the sets of pairs of edges that cross in $D$ and $D'$, respectively. Thus, for every element of $X$ we have an associated vertex of $P$. Similarly for $X'$ and $P'$. Let $e$ be an edge of $G$ whose ends are $u$ and $v$. Suppose $f$ and $h$ are two distinct edges that cross $e$ in $D$ and that $e$ is oriented from $u$ to $v$ in $D$. Let $x$ be the intersection point of $e$ and $f$ in $D$. Likewise, let $y$ be the one for $e$ and $h$. If $x$ precedes $y$ in $D[e]$, then we say that $f \prec_e h$. Likewise, let $\prec'_e$ be the order of the edges crossing $e$ obtained from $D'$. Note that, for a particular crossing $x$ between edges $e$ and $f$ in $D$, the neighborhood of the vertex arising from $x$ in $P$ depends only on the order of the crossing in both $e$ and $f$. Now, suppose that $X=X'$ and that, for every edge $e$ of $G$, the orders $\prec_e$ and $\prec'_e$ are the same. Thus, there exists a natural isomorphism between $P$ and $P'$ such that every vertex $x$ of $P$, arising from a crossing, is mapped to the vertex $x'$ in $P'$ arising from the crossing of the same pair of edges in $D'$. Therefore a flattening is characterized by the pairs of edges that cross and the ordering of the crossings on the edges of $G$ in a good drawing. As good drawings of $G$ have only finitely many possible crossings (at most one for each pair of edges), we conclude that there exist only finitely many (up to graph isomorphism) flattenings arising from good drawings of $G$. \end{proof} In short, the isomorphism classes of good drawings of $G$ are homeomorphism classes of embeddings of elements of $\ft(G,\Sigma)$.
For our purposes, it suffices to show that, for a $P \in \ft(G,\Sigma)$, there exist only finitely many homeomorphism classes of embeddings of $P$. \begin{lemma} For a loopless connected graph $P$ embeddable in a surface $\Sigma$, there exist only finitely many (up to homeomorphism) embeddings of $P$ in $\Sigma$. \end{lemma} \begin{proof} Let $R$ be an embedding scheme of a (possibly non-cellular) embedding $\Pi$ of $P$ in $\Sigma$. We note that there can be many distinct non-cellular embeddings of $P$ with $R$ as their embedding scheme. The embedding scheme uniquely determines (up to homeomorphism) a cellular embedding $\Pi'$ of $P$ in a surface $\Gamma$ with $R$ as its embedding scheme (Theorem \ref{thm:rsce}). One may see $\Gamma$ as the surface obtained from $\Sigma$ by capping off all the faces of $\Pi$ with disks and thus removing the handles and crosscaps in these faces. Thus $\chi(\Sigma) \leq \chi(\Gamma)$ with equality if and only if $\Pi$ is cellular. Attaching an appropriate number of handles/crosscaps to faces of $\Pi'$ in $\Gamma$ will result in an embedding of $P$ in a surface $\Sigma'$ with the same embedding scheme $R$. If $\Sigma$ and $\Sigma'$ have the same number of handles/crosscaps, then $\Sigma$ is homeomorphic to $\Sigma'$. We show that there are only finitely many ways to attach handles and crosscaps to the faces of $\Pi'$ in $\Gamma$ to obtain $\Sigma$. Let $Q$ be the set of facial walks of $\Gamma$ and $Q^*$ a partition of $Q$. For a given part $T$ in $Q^*$ of size $k$, we can cut a disk from the face of each element of $T$ in $\Gamma$ and attach any surface $\Omega$ with $k$ boundary components along the resulting holes. This operation results in a surface of Euler characteristic $\chi(\Gamma) + \chi(\Omega) - k$. Thus $\chi(\Omega)$ is bounded as a function of $\chi(\Sigma)$, $\chi(\Gamma)$ and $k$, which implies that there are finitely many possible surfaces we can attach. This, combined with the finiteness of $Q$, and thus $Q^*$, shows that there are only finitely many possible embeddings of $P$ in $\Sigma$.
It is clear, by the construction above, that any embedding of $P$ in $\Sigma$ with embedding scheme $R$ can be obtained from $\Pi'$ by adding the appropriate numbers of handles and crosscaps to the faces of $\Pi'$ in $\Gamma$. Moreover, we showed that there are only finitely many ways to do that. With this, we conclude that there are only finitely many (up to homeomorphism) embeddings of $P$ with embedding scheme $R$ in $\Sigma$. We note that there are only finitely many (up to homeomorphism) possible embedding schemes for $P$. Thus, overall, there exist only finitely many embeddings of $P$ in $\Sigma$. \end{proof} \begin{theorem}\label{thm:fdui} For any connected graph $G$ and any surface $\Sigma$, there are only finitely many (up to drawing isomorphism) good drawings of $G$ in $\Sigma$. \end{theorem} \section{Bounding the $q$-side} \begin{proof} {\em (of Theorem \ref{thm:qbsp})} We may assume that $p \geq 3$. We first note that, for vertices $i$ and $j$ of the $p$-side, if the edges $iv$ and $jw$ of $K_{p,q}$ cross in $D$, then there exists a $4$-cycle that self-crosses in $D$ at least once. Indeed, as the graph is $K_{p,q}$, the edges $iw$ and $jv$ are also in $K_{p,q}$ and together with $iv$ and $jw$ induce a 4-cycle. For each pair of vertices $u$ and $v$ of the $q$-side, we define a function $f_{uv}$ on pairs $i$ and $j$ of vertices of the $p$-side such that $f_{uv}(i,j)=1$ if the $4$-cycle of $K_{p,q}$ induced by $\{i,j,u,v\}$ crosses itself in $D$, and $f_{uv}(i,j)=0$ otherwise. We note that the set of all possible such functions has size $k=2^{p \choose 2}$; therefore it is finite. Let $r$ be an integer such that $K_{3,r}$ is not embeddable in $\Sigma$ (see Theorem \ref{thm:kmngenus}). Let $K_q$ be a complete graph such that its vertex set is the $q$-side of $K_{p,q}$. We color each edge $uv$ of $K_q$ with ``color'' $f_{uv}$.
By Ramsey's Theorem, there exists a function $R:=R_k(r)$ such that if $q \geq R$, then every $k$-edge-coloring of $K_q$ with colors $1,2,\ldots,k$ contains a monochromatic copy of $K_r$. Let $f$ be the color of this $K_r$. Note that $R$ is a function of $r$ and $k$, which depend only on $\Sigma$ and $p$, respectively. Now let us define a graph $G$ whose vertex set is the $p$-side. We join $i$ and $j$ in $G$ if $f(i,j)=0$. This means that $ij \in E(G)$ if for any $u,v \in V(K_r)$ the 4-cycle induced by $\{u,v,i,j\}$ in $K_{p,q}$ does not self-cross in $D$. If there exists a triangle in $G$, then there exists a drawing of $K_{3,r}$ as a subdrawing of $D$ without crossings, which cannot happen by the choice of $r$. Thus $G$ is triangle-free. Turán's Theorem implies that $G$ has at most $p^2/4$ edges. Thus, there are at least ${p \choose 2} - p^2/4 = Z(p)$ pairs of vertices of the $p$-side that each contribute at least one crossing in $D$. Therefore, for any pair of vertices $u$ and $v$ of $K_r$, we have that $\crn_D(u,v) \geq Z(p)$. \end{proof} \section{Finite number of drawings} \begin{proof} {\em (of Theorem \ref{thm:fbasis})} Theorem \ref{thm:qbsp} implies that there is a number $F(p,\Sigma)$ such that if $q > F(p,\Sigma)$, then there exist distinct vertices $u,v$ such that $\crn_D(u,v) \geq Z(p)$. Let $\mathcal{D}(p,\Sigma)$ consist of all the good drawings in $\Sigma$ of $K_{p,q}$ with $q \leq F(p,\Sigma)$. By Theorem \ref{thm:fdui}, $\mathcal{D}(p,\Sigma)$ is finite. For any drawing $D$ of $K_{p,q}$ with $q > F(p,\Sigma)$, we can successively delete $u_1,u_2,\ldots,u_{q-F(p,\Sigma)}$ such that, for each $i=1,2,\ldots,q-F(p,\Sigma)$, there is a vertex $v_i$ in $K_{p,q} - \{u_1,\ldots,u_{i-1}\}$ such that $\crn_{D-\{u_1,\ldots,u_{i-1}\}}(u_i,v_i) \geq Z(p)$. The drawing $D - \{u_1,\ldots,u_{q-F(p,\Sigma)}\}$ is in $\mathcal{D}(p,\Sigma)$.
Now reinserting each $u_i$ as a duplicate of $v_i$ (in the order $u_{q-F(p,\Sigma)},\ldots,u_2,u_1$) produces a drawing $D'$ of $K_{p,q}$ such that $\crn(D') \leq \crn(D)$, as required. \end{proof} \bibliographystyle{alpha}
\section{Introduction} Multi-class segmentation problems are common in analysis of biomedical images. A typical solution is to train a neural network pixel classifier. Commonly, these networks predict a probability distribution over all classes in each pixel, which can be thresholded to obtain a final segmentation. These predictions often contain holes, partial misclassifications, shrinkage of small classes and rough borders between classes, resulting in errors in the final segmentation. To improve the segmentation, post-processing is often used to close holes, reclassify uncertain pixel labels based on proximity, grow objects, and smooth rough boundaries. Mathematical morphology is a powerful framework for post-processing binary and grayscale images. Binary and grayscale morphology are special cases of morphology on complete lattices \cite{serra1994morphological}. A complete lattice is a partially ordered set (poset), where each non-empty subset has an infimum and a supremum. For complete lattices the core operators, dilation and erosion, can be defined using supremum and infimum: for binary morphology using set union and intersection; for grayscale morphology using maximum and minimum under the standard total ordering of the reals; see \cite{serra1994morphological} for an in-depth treatment of the theoretical foundations of mathematical morphology. For general multi-class images, there is no natural ordering of the classes, and hence, they do not form a complete lattice. For example, for a segmentation of microscope images of cells into cell membrane, mitochondria and background, any ordering of the classes is task-dependent and not given by the images themselves. A natural representation of this kind of data is the categorical distribution, which can represent both crisp segmentation masks and uncertainty as encountered in prediction images. In the remainder of this work we will use the term ``categorical'' instead of ``multi-class''.
In this work, we provide a thorough review of previously proposed approaches to morphology on categorical images. We then propose two approaches for morphology on categorical distributions: an indirect approach where we operate on Dirichlet distributions that are then transformed to categorical distributions, and a direct approach where we operate on the categorical distributions themselves. We then define protected variants of the direct operations that allow finer control over the processing. Finally, we illustrate the utility of the proposed approach on two tasks: fixing misclassified mitochondria and modeling annotator bias. \section{Background and related work} In this section we briefly restate morphology on complete lattices and on binary and grayscale images, before we review the most relevant literature \cite{busch1995morphological,koppen2000pareto,hanbury2001morphological,ronse2005morphology,chevallier2016nary,vandegronde2017nonscalar,grossiord2019shape}. What we refer to as categorical images have various names in the literature: color-coded images, label images and n-ary images. In the sections below we will use the original names in the section titles, but otherwise we will refer to categorical images and categorical morphology. In the literature, there are three main approaches for extending morphology to images with values that do not have a natural ordering: impose an order on the values, which is the common approach for color images; operate on all categories simultaneously \cite{busch1995morphological,ronse2005morphology}; and operate on a single category at a time \cite{chevallier2016nary,vandegronde2017nonscalar}. Morphology on color images has received a lot of attention, with the primary focus on ordering colors by exploiting the relationship between dimensions of color spaces. See, for example, \cite{aptoula2007acomparative} for an overview of approaches for defining an ordering of colors.
Our focus is on categorical images, where such approaches are less relevant. \subsection{Morphology on complete lattices} \label{sec:complete-lattice} Let $\Gamma$ be a set with the partial order $\le$. The poset $(\Gamma, \le)$ is a complete lattice if every subset of $\Gamma$ has an infimum $\wedge$ and a supremum $\vee$. We define an image as a function $f$ from pixel-coordinates $\mathbb{D} = \mathbb{Z}^d$ to $\Gamma$, and a structuring element $B$ as a subset of $\mathbb{D}$ \begin{align} \label{eq:image} f &\in \mathcal{F} = \left\{ g \mid g : \mathbb{D} \mapsto \Gamma \right\}, \\ \label{eq:structuring-element} B &\subseteq \mathbb{D}. \end{align} The dilation $(\delta)$ and erosion $(\epsilon)$ of $f$ by $B$ are then defined as the supremum and infimum over the local neighborhoods in $f$ given by $B$ \begin{align} \label{eq:dilation-lattice} \delta(f;B)(x) &= \bigvee\limits_{\{y\mid (y-x) \in B\}} f(y),\\ \label{eq:erosion-lattice} \epsilon(f;B)(x) &= \bigwedge\limits_{\{y\mid (y-x) \in B\}} f(y). \end{align} Opening ($\gamma$) and closing ($\phi$) are the compositions of dilation and erosion \begin{align} \label{eq:opening} \gamma(f;B) &= \delta(\epsilon(f;B);B),\\ \label{eq:closing} \phi(f;B) &= \epsilon(\delta(f;B);B). \end{align} \subsection{Binary and grayscale morphology} \label{sec:binary-grayscale} We define a grayscale image as in \refeq{image} with $\Gamma = [0,1]$. Let $\le$ be the usual ordering of the reals; then the poset $([0,1], \le)$ is a complete lattice, where the $\min$ function gives the infimum and the $\max$ function the supremum. Let $B$ be defined as in \refeq{structuring-element}. Dilation and erosion can then be obtained from \refeq{dilation-lattice} and \refeq{erosion-lattice} as \begin{align} \label{eq:grayscale-dilation} \delta(f;B)(x) &= \max\limits_{\{y\mid (y-x) \in B\}} f(y),\\ \label{eq:grayscale-erosion} \epsilon(f;B)(x) &= \min\limits_{\{y\mid (y-x) \in B\}} f(y).
\end{align} If we restrict $\Gamma$ to $\{0,1\}$ we obtain binary morphology. \subsection{Morphology on color-coded images} \label{sec:color-coded} In~\cite{busch1995morphological} the authors propose a framework for categorical morphology where pixels have a set of categories. Let $C = \{c_1, c_2, \dots, c_n\}$ be a set of $n$ categories. The powerset of $C$, $\ps{C}$, is the set of all subsets of $C$, including the empty set. An image $f$ is then defined as in \refeq{image} with $\Gamma = \ps{C}$. In this framework the value of a pixel can be any element of $\ps{C}$, e.g.\ $\{c_1\}$, $\{c_1, c_n\}$ or $\{\}$. Let $\subseteq$ be the usual subset relation; then the poset $(\ps{C}, \subseteq)$ is a complete lattice where set intersection is the infimum and set union is the supremum. In~\cite{busch1995morphological} the authors propose to use structuring elements of the same form as $f$, that is $B \in \mathcal{F}$. For the sake of comparison, we first consider the simpler case where $B$ is defined as in \refeq{structuring-element}. Dilation and erosion can then be obtained from \refeq{dilation-lattice} and \refeq{erosion-lattice} as \begin{align} \label{eq:set-based-dilation} \delta(f;B)(x) &= \bigcup\limits_{\{y\mid (y-x) \in B\}} f(y),\\ \label{eq:set-based-erosion} \epsilon(f;B)(x) &= \bigcap\limits_{\{y\mid (y-x) \in B\}} f(y). \end{align} An example of these operations is shown in \reffig{color-coded-set-op}. Let $B \in \mathcal{F}$. Under this scheme, an operation is only performed when one or more categories in the structuring element match a category in the image, and the result depends on the categories in both image and structuring element. Several variations of dilation and erosion are proposed in \cite{busch1995morphological}; here we only consider the ``transparent'' operations. Let $\mathbb{D}_f$ be the domain of $f$ and $\mathbb{D}_B$ the domain of $B$.
A specified reference point, $y_0 \in \mathbb{D}_B$, is used to determine if $B$ matches $f$ and could, for example, be the center of a ball-shaped $B$. Dilation and erosion are then defined as \begin{align} \label{eq:color-coded-dilation} \delta(f;B)(x) &= f(x) \cup \bigcup\limits_{\{y \in \mathbb{D}_B \mid f(x+y) \cap B(y_0) \ne \emptyset\}} B(y)\\ \label{eq:color-coded-erosion} \epsilon(f;B)(x) &= \begin{multipartdef} f(x), & \mycase{f(x) \cap B(y_0) = \emptyset}\\ f(x)\setminus B(y_0), & \mycase{[\exists y \in \mathbb{D}_B](f(x+y) \cap B(y_0) = \emptyset)},\\ f(x) \cup B(y_0), & \text{otherwise} \end{multipartdef} \end{align} An example of these operations is shown in \reffig{color-coded} using a cross-shaped structuring element with $y_0$ in the center. \subsection{Morphology on label images} \label{sec:label-images} In~\cite{ronse2005morphology} the authors propose a framework for categorical morphology where pixels have no category ($\bot$), a unique category or conflicting categories ($\top$). Let $C = \{c_1, c_2, \dots, c_n\}$ be a set of $n$ categories and let $C_* = C \cup \{\bot, \top\}$. An image $f$ is then defined as in \refeq{image} with $\Gamma = C_*$. The poset $(C_*, \le)$, where $\le$ satisfies $[\forall c \in C](\bot \le c \le \top)$, is a complete lattice. Let $B$ be defined as in \refeq{structuring-element} and let $V(x) = \{f(x-y) \mid y \in B\}$.
Dilation and erosion are then defined as \begin{align} \label{eq:label-dilation} \delta(f;B)(x) &= \begin{multipartdef} \top, & \mycase{\top \in V(x)}\\ \top, & \mycase{\vert V(x) \cap C\vert > 1}\\ V(x) \cap C, & \mycase{\vert V(x) \cap C \vert = 1}\\ \bot, & \text{otherwise} \end{multipartdef}\\ \label{eq:label-erosion} \epsilon(f;B)(x) &= \begin{multipartdef} \bot, & \mycase{\bot \in V(x)}\\ \bot, & \mycase{\vert V(x) \cap C\vert > 1}\\ V(x) \cap C, & \mycase{\vert V(x) \cap C\vert = 1}\\ \top, & \text{otherwise} \end{multipartdef} \end{align} An example of these operations is shown in \reffig{label-images}. In the context of categorical distributions, where we have detailed information about label uncertainty, this approach is unsuitable due to the loss of information. \subsection{N-ary morphology} \label{sec:n-ary} In~\cite{chevallier2016nary} the authors propose a framework for categorical morphology where pixels have a unique category. Let $C = \{c_1,c_2,\dots,c_n\}$ be a set of $n$ categories. An image $f$ is then defined as in \refeq{image} with $\Gamma = C$. Instead of operating on all categories simultaneously, the authors propose to operate on a single category at a time. Let $B$ be defined as in \refeq{structuring-element} and let $i$ be the category we operate on. We use subscripts to distinguish single-category operations from standard operations. Dilation and erosion are then defined as \begin{align} \label{eq:n-ary-dilation} \delta_i(f;B)(x) &= \begin{multipartdef} f(x), & \mycase{[\forall y \in B](f(x+y) \ne i)}\\ i, & \text{otherwise} \end{multipartdef}\\ \label{eq:n-ary-erosion} \epsilon_i(f;B)(x) &= \begin{multipartdef} f(x), & \mycase{f(x) \ne i}\\ i, & \mycase{[\forall y \in B](f(x+y) = i)}\\ \theta(x,f), & \text{otherwise} \end{multipartdef} \end{align} where $\theta$ is a function that assigns a value in the case where there are different categories in the neighborhood of $x$.
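As an illustration, the single-category operations can be sketched in Python (a sketch with hypothetical names; images are dictionaries from pixel coordinates to categories, out-of-image neighbours default to the centre pixel's value, which is our own border convention, and the tie-breaking function $\theta$ is passed in as a parameter):

```python
def _shift(x, y):
    return (x[0] + y[0], x[1] + y[1])

def nary_dilate(img, B, i):
    """N-ary dilation of category i: a pixel becomes i if any pixel in
    its B-neighbourhood has category i; otherwise it is unchanged."""
    return {x: i if any(img.get(_shift(x, y), c) == i for y in B) else c
            for x, c in img.items()}

def nary_erode(img, B, i, theta):
    """N-ary erosion of category i: a pixel of category i stays i only if
    its whole B-neighbourhood is i; otherwise theta(x, img) decides."""
    out = {}
    for x, c in img.items():
        if c != i:
            out[x] = c
        elif all(img.get(_shift(x, y), c) == i for y in B):
            out[x] = i
        else:
            out[x] = theta(x, img)
    return out

# A 1x3 image "a a b", dilating and eroding the category "b":
img = {(0, 0): "a", (0, 1): "a", (0, 2): "b"}
B = [(0, -1), (0, 0), (0, 1)]
dilated = nary_dilate(img, B, "b")
eroded = nary_erode(img, B, "b", theta=lambda x, f: "a")  # trivial theta
```

With a trivial $\theta$ that always returns the same category the operations are well defined, but choosing $\theta$ in a principled way is exactly the difficulty.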
A natural choice for $\theta$, which is also suggested in \cite{chevallier2016nary}, is to pick the value of the closest pixels. However, this does not help when the closest pixels have different values, which is a fundamental problem when pixel values cannot represent uncertainty. This is solved by ranking the categories and using the ranking to break ties. In general, there is no obvious way of ranking categories based on the image alone, and as the number of multi-category interfaces increases it becomes more difficult to understand how one particular ranking influences the outcome. Without ranking categories a priori, the above definition implies an ordering $\le_i$, which is not a partial order and thus $(C, \le_i)$ is not a complete lattice. In~\cite{vandegronde2017nonscalar} the authors show that $\le_i$ is a preorder, and formalize constraints for choosing $\theta$ such that dilation and erosion form an adjunction and their compositions are an opening and a closing. However, this does not help decide which category to choose when multiple categories are closest, as the constraints on $\theta$ do not yield a unique rule for breaking ties. An example of these operations is shown in \reffig{n-ary}, where the question marks highlight two pixels that cannot be assigned a value without a method for breaking ties. \begin{figure}[p] \begin{subfigure}{\textwidth} \includegraphics[width=0.29\textwidth]{related-work/color-coded-set-op-A.png} \includegraphics[width=0.10\textwidth]{related-work/color-coded-set-op-B.png} \includegraphics[width=0.29\textwidth]{related-work/color-coded-set-op-A+B.png} \includegraphics[width=0.29\textwidth]{related-work/color-coded-set-op-A+B-B.png} \caption{Morphology on color-coded images using \refeq{structuring-element} as structuring element. The bold boundary highlights the pixels with one or more categories and is only for illustration.
} \label{fig:color-coded-set-op} \end{subfigure} \begin{subfigure}{\textwidth} \includegraphics[width=0.29\textwidth]{related-work/busch-figure-2A.png} \includegraphics[width=0.10\textwidth]{related-work/busch-figure-2B.png} \includegraphics[width=0.29\textwidth]{related-work/busch-figure-2A+B.png} \includegraphics[width=0.29\textwidth]{related-work/busch-figure-2A+B-B.png} \caption{Morphology on color-coded images using the structuring element from \cite{busch1995morphological}. This figure extends Figure 2 in \cite{busch1995morphological} with the closing operation. Notice that $B(y_0) = \{plus, square\}$, meaning that $B$ will match any pixel with either $plus$ or $square$. } \label{fig:color-coded} \end{subfigure} \begin{subfigure}{\textwidth} \includegraphics[width=0.29\textwidth]{related-work/label-images-A.png} \includegraphics[width=0.10\textwidth]{related-work/label-images-B.png} \includegraphics[width=0.29\textwidth]{related-work/label-images-A+B.png} \includegraphics[width=0.29\textwidth]{related-work/label-images-A+B-B.png} \caption{Morphology on label images.} \label{fig:label-images} \end{subfigure} \begin{subfigure}{\textwidth} \includegraphics[width=0.29\textwidth]{related-work/n-ary-A.png} \includegraphics[width=0.10\textwidth]{related-work/n-ary-B.png} \includegraphics[width=0.29\textwidth]{related-work/n-ary-A+B.png} \includegraphics[width=0.29\textwidth]{related-work/n-ary-A+B-B.png} \caption{N-ary morphology. The coloring of the structuring element with $plus$ indicates that operations are on the $plus$ category. The question marks indicate pixels that cannot be assigned a value without an additional ordering of the categories. Notice that it is not possible to assign pixels ``no-category'' as in (a), (b) or (c).} \label{fig:n-ary} \end{subfigure} \caption{Comparison of categorical morphologies from the literature.
From left to right: $f$, $B$, $\delta(f;B)$, $\gamma(f;B)$.} \label{fig:related} \end{figure} \subsection{Fuzzy n-ary morphology} \label{sec:fuzzy-n-ary} In~\cite{chevallier2016nary} the authors also propose an extension of n-ary morphology to images of categorical distributions. Let $C = \{c_1,c_2,\dots,c_{n+1}\}$ be a set of $n+1$ categories. The categorical distribution of $n+1$ categories is completely determined by a point in the $n$-simplex $\Delta^n = \{ \pi \in \mathbb{R}^{n+1} \mid \pi_k \ge 0, \sum \pi_k = 1\}$, where $\pi_k$ is the probability of $c_k$. An image $f$ is then defined as in \refeq{image} with $\Gamma = \Delta^n$. Operations are again defined on a single category at a time. Let $B_r$ be a closed ball of radius $r$ centered at the origin and let $i$ be the category we operate on. Let $f_k(x) = f(x)_k$ be the probability of observing category $c_k$ in pixel $x$ and let $\omega_k(x) = 1 - f_k(x)$. Dilation is then defined as \begin{align} \label{eq:fuzzy-n-ary-dilation} \delta_i(f;B_r)(x)_k = \begin{multipartdef} \delta(f_k;B_r)(x), & \mycase{k = i},\\ [1 - \delta(f_i;B_r)(x)]\frac{f_k(x)}{\omega_i(x)}, & \mycase{k \ne i}. \end{multipartdef} \end{align} where $\delta(f_i;B_r)(x) = 1 \implies [1 - \delta(f_i;B_r)(x)]\frac{f_k(x)}{\omega_i(x)} = 0$. Two variations on erosion are proposed in \cite{chevallier2016nary}, neither of which we find satisfactory. 
The first requires that we pick a ranking of all categories and does not yield an idempotent opening and closing \begin{align} \label{eq:fuzzy-n-ary-erosion-1} \epsilon_i(f;B_r)(x)_k &= \begin{multipartdef} \epsilon(f_k;B_r)(x) & \mycase{k = i}\\ f_k(x) + f_i(x) - \epsilon(f_i;B)(x) & \mycase{k = \min( \arg\min\limits_{j\ne i}(\delta(f_j;B)) )}\\ f_k(x) & \text{otherwise} \end{multipartdef} \end{align} The second assumes that the image is restricted to the edges of the simplex (at most two categories are non-zero in any pixel) and opening and closing are again not idempotent \begin{align} \label{eq:fuzzy-n-ary-erosion-2} \epsilon_i(f;B_r)(x)_k &= \begin{multipartdef} \epsilon(f_k;B_r)(x) & \mycase{k = i}\\ \frac{1 - \epsilon(f_i;B)(x)}{1 - f_i(x)}f_k(x) & \mycase{f_i(x) \le 0.5 \vee \max\limits_{j\ne i}(\delta(f_j;B)(x)) < 0.5}\\ 1 - \epsilon(f_i;B)(x) & \mycase{k = \min(\arg\max\limits_{j\ne i}\delta(f_j;B)(x))}\\ 0 & \text{otherwise} \end{multipartdef} \end{align} We refer the reader to \cite{chevallier2016nary} for the motivation for these formulations and their properties. \subsection{Fuzzy Pareto morphology} \label{sec:fuzzy-pareto} In~\cite{koppen2000pareto} the authors propose fuzzy Pareto morphology for color images. An RGB color image can be seen as a 3-dimensional fuzzy set, where the membership function for each set corresponds to the value of each color channel. This can equivalently be seen as a point in the half-open unit cube. An image $f$ is then defined as in \refeq{image} with $\Gamma = (0,1]^d$. For each $a \in \Gamma$ we can associate a hyperrectangle defined by the vector from the origin to $a$. Fuzzy Pareto morphology is based on the idea of dominance. For $a,b \in \Gamma$ let $a \cap b = \{\min(a_i,b_i)\}_{i=1\dots d}$ be the intersection of $a$ and $b$. Let $A(a) = \prod_i a_i$ be the area function, yielding the area of the hyperrectangle of $a$.
The degree to which $a$ dominates $b$ is then \begin{equation} \label{eq:fuzzy-pareto-dominance} \mu_D(a,b) = \frac{A(a \cap b)}{A(b)}, \end{equation} which measures how much of the hyperrectangle of $b$ is contained in the hyperrectangle of $a$. \paragraph{}Let $B(x) = \{x+y \mid y \in B\}$, dilation and erosion are then defined as \begin{align} \label{eq:fuzzy-pareto-dilation} \delta(f;B)(x) &= f\left(\arg\min\limits_{y \in B(x)}\left\{\max\limits_{z \in B(x) \wedge z \ne y}\mu_D(f(z), f(y))\right\}\right),\\ \label{eq:fuzzy-pareto-erosion} \epsilon(f;B)(x) &= f\left(\arg\max\limits_{y \in B(x)}\left\{\min\limits_{z \in B(x) \wedge z \ne y}\mu_D(f(z),f(y))\right\}\right). \end{align} Although not directly applicable to categorical distributions, it could easily be extended by either restricting $\Gamma$ to $\{v \in (0,1]^d \mid \sum_i v_i = 1\}$ or by considering it in the context of the Dirichlet distribution. However, \refeq{fuzzy-pareto-dilation} and \refeq{fuzzy-pareto-erosion} are not guaranteed to yield a unique solution, requiring us to come up with an arbitration rule. \subsection{Morphology on the unit circle} In~\cite{peters1997mathematical} the authors propose morphology on the unit circle for processing the hue space of color images. The idea is to use structuring elements from the hue space and define an ordering based on the shortest distance along the unit circle between values in the image and values in the structuring element. Although not directly applicable to categorical images, it could be relevant to consider structuring elements that are themselves categorical distributions and base morphology on distance between distributions. Morphology on the unit circle is also considered in~\cite{hanbury2001morphological} where the authors propose three approaches: using difference operators (e.g. gradient), using grouped data, and using ``labeled openings''. It is the labeled openings that are most relevant in our context. 
Let $f$ be an image as defined in \refeq{image} with $\Gamma = [0, 2\pi)$. In a labeled opening the unit circle is partitioned into segments $S(\omega) = \{[0,\omega)$, $[\omega, 2\omega)$,$\dots$, $[2\pi-\omega, 2\pi)\}$ and each segment $s \in S(\omega)$ gives rise to a binary image $f(x;s) = f(x) \in s$. A labeled opening is then the union of the binary openings of all segments, $\gamma_\omega(f) = \cup_{s \in S(\omega)}\gamma(f(x;s);B)$, indicating for each pixel if it was opened. Categorical images have a natural partitioning based on the categories, leading to the set-based morphology in \refsec{color-coded}, where a labeled opening consists of the pixels that do not change when opened. \subsection{Morphology on component graphs} In~\cite{grossiord2019shape} the authors propose a framework for morphology on multi-valued images based on component graphs. Let an image be defined as in \refeq{image} with $\Gamma = \mathbb{R}^d$. The component graph is constructed from the connected components of the level sets of an image. For example, for $d=2$ and $f(x) \in \{0,1\}^2$ the level sets are $\{(0,0), (0,1), (1,0), (1,1)\}$. For each level set we obtain a set of connected components. Each connected component is a node in the component graph and the children of this node are the connected components that are contained in it. In order to construct the component graph it is required that $\Gamma$ has a minimum, e.g. $\{0\}^d$, such that the graph will be connected. For categorical images this would require that we have a special background category as in \refsec{color-coded} and \refsec{label-images}. Further, it requires that each pixel can have multiple categories, otherwise no component will be nested inside another and the graph will be the root with all connected components as children. Because the component graph directly exposes the spatial relationship between differently valued regions, it is possible to apply morphological filters, e.g.
noise reduction, by pruning some nodes and reconstructing the image from the pruned component graph. Directly pruning the component graph can lead to ambiguity in the reconstruction when a node with two non-comparable parents is removed. The authors propose to solve this by building a component tree of the component graph, pruning the tree, reconstructing the graph from the tree, and finally reconstructing the image from the graph. In order to construct the component tree it is necessary to impose a total order on the nodes of the component graph, for example by using a shape measure on the connected components in the component graph. Because the component graph only captures spatial relationships when connected components overlap for different level sets, some common post-processing operations, such as closing holes in segmentations, are challenging to perform. \section{Morphology on categorical distributions} In this section we propose two approaches for morphology on categorical distributions. In \refsec{dirichlet} we show how to operate on all categories simultaneously by operating on Dirichlet distributions. The limitations of this approach will then motivate single-category operations that work directly on categorical distributions, which we will define in \refsec{single-category}. \subsection{Morphology on Dirichlet distributions} \label{sec:dirichlet} Let $\mathbb{R}_+$ be the positive real line. We consider the Dirichlet distribution of order $n \ge 2$ with parameters $\alpha \in \mathbb{R}_+^n$, written as $\mathrm{Dir}(\alpha)$, as a distribution over the $(n-1)$-simplex $\Delta^{n-1} = \{ \pi \in \mathbb{R}^{n} \mid \pi_k \ge 0, \sum \pi_k = 1\}$ with density function \begin{equation} \label{eq:dirichlet-density} \mathrm{pdf}(\pi) = \frac{1}{\mathrm{B}(\alpha)}\prod\limits_{k=1}^n \pi_k^{\alpha_k-1} \end{equation} where $\mathrm{B}(\cdot)$ is the Beta function.
A sample from the Dirichlet distribution of order $n$ can be seen as the parameters of a categorical distribution with $n$ categories. Here, we only consider the expectation \begin{equation} \label{eq:dirichlet-projection} \mathbb{E}[\mathrm{Dir}(\alpha)_k] = \frac{\alpha_k}{\sum \alpha}, \end{equation} which maps each Dirichlet distribution to a specific categorical distribution. Note that $0 < \alpha_k < \infty$ implies that we can only represent categorical distributions in the open simplex. In practice this is not a problem as we can get arbitrarily close to the boundary of the simplex. An image $f$ is defined as in \refeq{image} with $\Gamma = \mathbb{R}_+^n$, and we let $f_k$ denote the $k$'th category in $f$. If we equip $f$ with the ordering $f \le g \iff [\forall x, k](f_k(x) \le g_k(x))$ we obtain a complete lattice. Dilation and erosion are then defined as their grayscale counterparts applied to each category independently \begin{align} \delta(f;B)(x)_k &= \delta(f_k;B)(x),\\ \epsilon(f;B)(x)_k &= \epsilon(f_k;B)(x). \end{align} An example of these operations is provided in \reffig{dirichlet}. Opening and closing are possibly the most interesting operations as they, respectively, decrease and increase uncertainty at the boundaries between overlapping categories.
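These channel-wise operations are straightforward to implement. A minimal NumPy sketch follows; the $3\times 3$ square structuring element, edge padding at the image border, and all function names are our own choices, not part of the formulation above:

```python
import numpy as np

def _filter3(a, reduce_fn):
    """Apply reduce_fn over 3x3 square neighbourhoods of a 2D array,
    with borders padded by the edge value."""
    p = np.pad(a, 1, mode="edge")
    H, W = a.shape
    return reduce_fn([p[1+dy:1+dy+H, 1+dx:1+dx+W]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)], axis=0)

def dirichlet_dilate(alpha):
    """Channel-wise grayscale dilation of Dirichlet parameters (H, W, n)."""
    return np.stack([_filter3(alpha[..., k], np.max)
                     for k in range(alpha.shape[-1])], axis=-1)

def dirichlet_erode(alpha):
    """Channel-wise grayscale erosion of Dirichlet parameters (H, W, n)."""
    return np.stack([_filter3(alpha[..., k], np.min)
                     for k in range(alpha.shape[-1])], axis=-1)

def expectation(alpha):
    """Eq. (dirichlet-projection): categorical probabilities E[Dir(alpha)]."""
    return alpha / alpha.sum(axis=-1, keepdims=True)
```

Applying \texttt{expectation} after the channel-wise operations yields the processed categorical distributions, as in the second row of \reffig{dirichlet}.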
\begin{figure}[t] \centering \begin{minipage}[c]{0.09\linewidth} $\mathrm{Dir}(\alpha)$ \end{minipage} \begin{minipage}[c]{0.9\linewidth} \includegraphics[width=\textwidth,trim=0 20 0 0,clip=true]{dirichlet/dirichlet-morphology.png} \end{minipage}\\ \begin{minipage}[c]{0.09\linewidth} $\E{\mathrm{Dir}}$ \end{minipage} \begin{minipage}[c]{0.9\linewidth} \includegraphics[width=\textwidth,trim=0 20 0 20,clip=true]{dirichlet/dirichlet-morphology-cat-projection.png} \end{minipage}\\ \begin{minipage}[c]{0.09\linewidth} $H(\mathbb{E})$ \end{minipage} \begin{minipage}[c]{0.9\linewidth} \includegraphics[width=\textwidth,trim=0 20 0 20,clip=true]{dirichlet/dirichlet-morphology-entropy.png} \end{minipage}\\ \begin{minipage}[c]{0.09\linewidth} $\|\alpha\|$ \end{minipage} \begin{minipage}[c]{0.9\linewidth} \includegraphics[width=\textwidth,trim=0 20 0 20,clip=true]{dirichlet/dirichlet-morphology-magnitude.png} \end{minipage} \caption[Morphology on Dirichlet distributions]{Morphology on Dirichlet distributions. The top-left image is an RGB representation of an image $f$ with three categories, where the colors red, green, and blue correspond to points very close to the vertices of $\Delta^2$ and the remaining colors are mixtures of these three colors. The first row is the Dirichlet distribution. The second row is the probability vectors obtained from \refeq{dirichlet-projection}. The third row is entropy of the probability vectors, and the fourth row is magnitude of the parameter vectors. 
We can see that dilation increases both entropy and magnitude, whereas erosion decreases magnitude and increases or decreases entropy depending on the local distribution.} \label{fig:dirichlet} \end{figure} We can easily extend these operators to operate on a subset of categories $S$ by only updating those categories \begin{align} \delta(f;B\vert S)(x)_k &= \begin{multipartdef} \delta(f_k;B)(x), & \mycase{k \in S}\\ f_k(x), & \text{otherwise} \end{multipartdef}\\ \epsilon(f;B\vert S)(x)_k &= \begin{multipartdef} \epsilon(f_k;B)(x), & \mycase{k \in S}\\ f_k(x), & \text{otherwise} \end{multipartdef} \end{align} An example of these operations is provided in \reffig{dirichlet-subset}, where we operate on the green category. Consider the gray/blue region surrounded by green that is indicated with a white ellipse in the left image of the second row. When we dilate the green category we would expect this region to become green in the probability image, but in the Dirichlet space these pixels already have the same green value as the green region, so they are unaffected by the dilation. We could partly solve this by carefully setting the $\alpha$ values, e.g. setting the pixels with only green to have very large green values. However, if our goal is to work on categorical distributions, this becomes too large a burden to be practical, and we now turn our attention to morphological operators that work directly on categorical distributions.
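For contrast with the Dirichlet-space operators, the single-category dilation on categorical distributions, \refeq{fuzzy-n-ary-dilation}, can be sketched directly in NumPy. The $3\times 3$ structuring element, edge padding, and function names are our own choices; pixels with $\omega_i(x) = 0$ are resolved by the zero convention stated with the equation:

```python
import numpy as np

def grey_dilate3(a):
    """Grayscale dilation with a 3x3 square structuring element,
    borders padded with the edge value."""
    p = np.pad(a, 1, mode="edge")
    H, W = a.shape
    return np.max([p[1+dy:1+dy+H, 1+dx:1+dx+W]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)], axis=0)

def categorical_dilate(f, i):
    """Dilate category i of a categorical image f with shape (H, W, n).

    Category i is dilated as a grayscale image; the remaining categories
    are rescaled so every pixel still sums to one, preserving the
    conditional probabilities among the categories k != i."""
    d = grey_dilate3(f[..., i])
    omega = 1.0 - f[..., i]                 # total mass of the other categories
    safe = np.where(omega > 0, omega, 1.0)  # avoid division by zero
    scale = np.where(omega > 0, (1.0 - d) / safe, 0.0)
    out = f * scale[..., None]              # rescale all categories ...
    out[..., i] = d                         # ... then overwrite category i
    return out
```

Here dilating one category updates all other categories in the same step, without the careful tuning of $\alpha$ values discussed above.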
\begin{figure}[t] \centering \begin{minipage}[c]{0.09\linewidth} $\mathrm{Dir}(\alpha)$ \end{minipage} \begin{minipage}[c]{0.9\linewidth} \includegraphics[width=\textwidth,trim=0 20 0 0,clip=true]{dirichlet/dirichlet-morphology-subset.png} \end{minipage}\\ \begin{minipage}[c]{0.09\linewidth} $\E{\mathrm{Dir}}$ \end{minipage} \begin{minipage}[c]{0.9\linewidth} \includegraphics[width=\textwidth,trim=0 20 0 20,clip=true]{dirichlet/dirichlet-morphology-subset-probability.png} \end{minipage}\\ \begin{minipage}[c]{0.09\linewidth} $H(\mathbb{E})$ \end{minipage} \begin{minipage}[c]{0.9\linewidth} \includegraphics[width=\textwidth,trim=0 20 0 20,clip=true]{dirichlet/dirichlet-morphology-subset-entropy.png} \end{minipage}\\ \begin{minipage}[c]{0.09\linewidth} $\|\alpha\|$ \end{minipage} \begin{minipage}[c]{0.9\linewidth} \includegraphics[width=\textwidth,trim=0 20 0 20,clip=true]{dirichlet/dirichlet-morphology-subset-magnitude.png} \end{minipage} \caption[Morphology on subset of Dirichlet distribution]{Morphology on Dirichlet distributions using a subset of categories, in this case the green category $\{g\}$. See also \reffig{dirichlet}.} \label{fig:dirichlet-subset} \end{figure} \subsection{Morphology on categorical distributions} \label{sec:single-category} Recall from \refsec{fuzzy-n-ary} that for a set of $n+1$ categories, $C = \{c_1,c_2,\dots,c_{n+1}\}$, the categorical distribution over these categories is completely determined by a point in the $n$-simplex $\Delta^n = \{ \pi \in \mathbb{R}^{n+1} \mid \pi_k \ge 0, \sum \pi_k = 1\}$, where $\pi_k$ is the probability of $c_k$. An image $f$ is then defined as in \refeq{image} with $\Gamma = \Delta^n$. Operations are again defined on a single category at a time. Let $B_r$ be a closed ball of radius $r$ centered at the origin and let $i$ be the category we operate on. Let $f_k(x) = f(x)_k$ be the probability of observing category $c_k$ in pixel $x$ and let $\omega_k(x) = 1 - f_k(x)$. 
\subsubsection{Dilation} \label{sec:dilation} For the dilated category $i$ the operation is the same as standard grayscale dilation. For the remaining set of categories the operation is a rescaling to ensure that the probabilities sum to one, while the conditional probabilities $\mathrm{Pr}(k \vert x, k \ne i)$ are unchanged \begin{equation} \label{eq:dilation} \delta_i(f;B_r)(x)_k = \begin{multipartdef} \delta(f_k;B_r)(x), & \mycase{k = i},\\ [1 - \delta(f_i;B_r)(x)]\frac{f_k(x)}{\omega_i(x)}, & \mycase{k \ne i}. \end{multipartdef} \end{equation} If $\delta(f_i;B_r) = 1$ then the conditional probabilities are not defined, and we simply set the remaining probabilities to $1 - \delta(f_i;B_r) = 0$. This definition is the same as \refeq{fuzzy-n-ary-dilation} and equivalent to the definition from \cite{chevallier2016nary}. \subsubsection{Erosion} \label{sec:erosion} Erosion is defined similarly to dilation, with the exception of the case when $f_i(x) = 1$, where we cannot rescale the remaining categories because $\omega_i(x) = 0$ \begin{equation} \label{eq:erosion} \epsilon_i(f;B_r)(x)_k = \begin{multipartdef} \epsilon(f_k;B_r)(x) & \mycase{k = i}\\ [1 - \epsilon(f_i;B_r)(x)] \frac{f_k(x)}{\omega_i(x)} & \mycase{k \ne i \wedge f_i(x) < 1}\\ [1 - \epsilon(f_i;B_r)(x)] \frac{\theta(f_k;B_r)(x)}{\sum_{j \ne i}\theta(f_j;B_r)(x)} & \mycase{k \ne i \wedge f_i(x) = 1} \end{multipartdef} \end{equation} The function $\theta$ must depend only on the neighborhood defined by $B_r$ and be defined such that $\epsilon(f_i;B_r)(x) < 1 \implies [\exists k \ne i]\left(\theta(f_k;B_r)(x) > 0\right)$. In addition we require that, when disregarding discretization issues, eroding with $B_{r+\rho}$ is equivalent to first eroding with $B_{r}$ and then eroding with $B_\rho$ \begin{equation} \epsilon_i(\epsilon_i(f;B_r);B_{\rho})(x) = \epsilon_i(f;B_{r+\rho})(x) \label{eq:erosion-req}.
\end{equation} Since $\theta$ is only used in the case where $f_i(x) = 1$, we must have that \begin{equation} \epsilon(f_i;B_r)(x) < 1 \implies \frac{\theta(f_k;B_{r+\rho})(x)}{\sum_{j \ne i} \theta(f_j;B_{r+\rho})(x)} = \frac{\theta(f_k;B_r)(x)}{\sum_{j \ne i} \theta(f_j;B_r)(x)} \end{equation} So $\theta$ must only depend on the smallest possible neighborhood $B_{r^*}$ where \\$\epsilon(f_i; B_{r^*}) < 1$, leading to \begin{align} \label{eq:theta} \theta(f_k;B_r)(x) &= \delta(f_k;B_{r^*})(x)\\ r^* &= \arg\min\limits_{r' > 0} r',\; \mathrm{s.t.} \; \epsilon(f_i; B_{r'})(x) < 1. \notag \end{align} This amounts to picking the closest category as suggested for crisp categorical images in \cite{chevallier2016nary,vandegronde2017nonscalar}, although without the need for breaking ties, since multiple closest categories are now handled by rescaling. In \refapp{proofs} we show that these definitions have the same properties as the definitions in \cite{vandegronde2017nonscalar} for operating on n-ary images. An example of the proposed operations is provided in \reffig{categorical}, where we operate on the green category. Compared to morphology on Dirichlet distributions using subsets in \reffig{dirichlet-subset}, the operations now work directly on the probabilities, making them much easier to understand and control. \begin{figure}[t] \centering \includegraphics[width=\textwidth,trim=0 20 0 0,clip=true]{categorical/categorical-morphology.png} \caption{Morphology on categorical distributions. Here we operate on the green category $g$.} \label{fig:categorical} \end{figure} \section{Protected morphological operations} \label{sec:protected} In~\cite{busch1995morphological} the authors introduce the concept of protected morphological operations, where a subset of categories is protected from being updated. Here we adapt the idea of protected morphological operations to categorical distributions and define protected dilation and erosion.
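Before turning to the protected variants, the non-protected operators \refeq{dilation} and \refeq{erosion} can be sketched in NumPy/SciPy. This is an illustrative implementation, not the authors' code: a square window stands in for the ball $B_r$, and $\theta$ is realized by growing the window until the erosion of $f_i$ drops below one, the discrete analogue of $r^*$ in \refeq{theta}:

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def categorical_dilation(f, i, size=3, eps=1e-12):
    """Dilate category i of a categorical image f with shape (H, W, K):
    grayscale dilation of channel i, with the other channels rescaled so
    each pixel still sums to one and Pr(k | x, k != i) is unchanged."""
    f = np.asarray(f, dtype=float)
    d_i = grey_dilation(f[..., i], size=size)
    omega_i = 1.0 - f[..., i]
    # Where omega_i = 0 the conditionals are undefined and the remaining
    # probabilities become 1 - d_i = 0, handled by the zero scale factor.
    scale = np.where(omega_i > eps, (1.0 - d_i) / np.maximum(omega_i, eps), 0.0)
    out = f * scale[..., None]
    out[..., i] = d_i
    return out

def categorical_erosion(f, i, size=3, eps=1e-12):
    """Erode category i; pixels with f_i = 1 redistribute the freed mass
    via theta, approximated by growing the window until erosion < 1."""
    f = np.asarray(f, dtype=float)
    K = f.shape[-1]
    e_i = grey_erosion(f[..., i], size=size)
    omega_i = 1.0 - f[..., i]
    scale = np.where(omega_i > eps, (1.0 - e_i) / np.maximum(omega_i, eps), 0.0)
    out = f * scale[..., None]
    out[..., i] = e_i
    remaining = (omega_i <= eps) & (e_i < 1.0 - eps)
    s = 3  # smallest window; grown until the erosion of f_i drops below one
    while remaining.any():
        hit = remaining & (grey_erosion(f[..., i], size=s) < 1.0 - eps)
        if hit.any():
            theta = np.stack(
                [grey_dilation(f[..., k], size=s) for k in range(K)], axis=-1)
            theta[..., i] = 0.0
            norm = theta[hit].sum(axis=-1)
            out[hit] = (1.0 - e_i[hit][:, None]) * theta[hit] / norm[:, None]
            out[hit, i] = e_i[hit]
        remaining &= ~hit
        s += 2
    return out
```

The rescaling property guarantees that `norm` is strictly positive wherever it is used, since a window in which the erosion of $f_i$ is below one must contain a pixel with mass on some other category.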
Let $L$ be a set of categories; we then write $\epsilon_i(f;B_r\vert L)$ for an erosion of $i$ that protects $L$. Let $J = C\setminus (\{i\} \cup L)$ be the set of categories that are neither protected nor operated on. Let $f_K(x) = \sum_{k \in K} f_k(x)$ be the sum over a set of categories $K \subset C$. If $L$ is empty, or $[\forall x](f_L(x) = 0)$, protected operations reduce to their non-protected counterparts. Because $L$ can change the topology of the domain, we cannot just define operations based on Euclidean distance. Instead we introduce a distance function $d_\Omega(x,y)$, which computes the distance from $x$ to $y$ on the domain $\Omega$. If $\Omega = \mathbb{Z}^d$, then $d_\Omega(x,y)$ is the Euclidean distance. Computing the exact Euclidean distance on a Euclidean domain with holes is non-trivial. Here we use the simplified fast marching method (FMM) from \cite{jones20063d} with the update rule defined in \cite{rickett1999second}, which results in a small approximation error. For brevity, when possible we leave out function application and write $f$ instead of $f(x)$ in the following. \begin{figure}[t] \centering \includegraphics[width=\textwidth,trim=0 20 0 0,clip=true]{categorical/categorical-morphology-protected.png} \caption{Protected morphology on categorical distributions. The red category $\{r\}$ is protected while we operate on the green category $g$.} \label{fig:categorical-protected} \end{figure} \subsubsection{Protected dilation} Let $\Omega_p = \{x \in \mathbb{D} \mid f_L(x) \le 1-p\}$; this is the part of $f$ where it is possible to set $f_i = p$.
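Computing $d_{\Omega_p}$ exactly is what the fast marching method above is for; in the crisp special case, protected dilation by a ball of radius $r$ reduces to $r$-fold geodesic dilation, i.e., unit-ball dilation masked by the protected set after every step. A minimal sketch of this simplification (our own, not the FMM used in the paper):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def protected_binary_dilation(mask_i, mask_L, r):
    """Geodesic dilation: grow category i by r unit steps without ever
    entering the protected set L, so growth follows the domain distance
    d_Omega rather than the Euclidean distance."""
    allowed = ~np.asarray(mask_L, dtype=bool)
    out = np.asarray(mask_i, dtype=bool) & allowed
    for _ in range(r):
        out = binary_dilation(out) & allowed
    return out
```

With the default 4-connected structuring element this approximates the city-block domain distance; growth never crosses a protected wall, no matter how large $r$ is.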
Protected dilation is then defined as \begin{align} \label{eq:protected-dilation} &\delta_i(f;B_r\vert L)(x)_k = \notag\\ &\begin{multipartdef} f_k & \mycase{k \in L}\\ \min\left(1-f_L, \max\limits_{p \in (0,1]}\max\{f_i(y) \mid d_{\Omega_p}(x,y) \le r\}\right) & \mycase{k = i}\\ \left[1 - f_L - \delta_i(f;B_r\vert L)_i\right]\frac{f_k}{f_J} & \text{otherwise} \end{multipartdef} \end{align} \subsubsection{Protected erosion} Protected erosion is defined similarly to protected dilation, with the added complication of normalization \begin{align} &\epsilon_i(f;B_r\vert L)(x)_k = \notag\\ \label{eq:protected-erosion} &\begin{multipartdef} f_k & \mycase{k \in L}\\ f_k & \mycase{\max\limits_{p \in (0,1]}\max\{f_J(y) \mid d_{\Omega_p}(x,y) \le r\} = 0}\\ \min\limits_{p \in (0,1]}\min\{f_i(y) \mid d_{\Omega_p}(x,y) \le r\} & \mycase{k = i} \\ [1 - f_L -\epsilon_i(f;B_r\vert L)_i]\frac{f_k}{f_J} & \mycase{k \in J \wedge f_J > 0}\\ [1 - f_L -\epsilon_i(f;B_r\vert L)_i]\frac{\theta(f_k)}{\sum_{j\in J}\theta(f_j)} & \mycase{k \in J \wedge f_J = 0} \end{multipartdef} \end{align} The first case ensures that all protected categories are unchanged. The second case ensures that a pixel $x$ is not updated unless there is a path, not blocked by $f_L$, to a pixel $y$ with $f_J(y) > 0$. The importance of this is easily seen by considering the case where $f_i$ varies in a region, but $f_i + f_L = 1$ throughout the region. The third case states that if there is such a path, then it can be eroded. The fourth and fifth cases handle normalization. The $\theta$ function is defined in a similar manner as for non-protected erosion in \refeq{theta}, \begin{align} \label{eq:protected-theta} \theta(f_k)(x) &= \max_{p \in (0,1]}\max\{f_k(y) \mid d_{\Omega_p}(x,y) \le r^*\}\\ r^* &= \arg\min\limits_{r' > 0} r' \;,\; \mathrm{s.t.} \; [1 - f_L - \epsilon_i(f;B_{r'}\vert L)_i(x)] > 0.
\notag \end{align} An example of these operations is provided in \reffig{categorical-protected}, where the red category is protected while we operate on the green category. Compared to the non-protected operations in \reffig{categorical} we can see that changes are restricted to the green and blue categories. \section{Examples} The first example illustrates how morphology on categorical distributions (\refsec{single-category}) can be used to remove noisy predictions. The second example illustrates how protected morphology on categorical distributions (\refsec{protected}) can be used to model annotator bias. \subsection{Removing noisy predictions} Despite the impressive performance of neural networks for segmentation, the results are rarely perfect. \reffig{noisy-predictions} shows part of an electron microscopy image of the hippocampus, along with multi-class predictions and segmentations obtained from \cite{stephensen2020measuring}. Notice the noisy mitochondria predictions resulting in the misclassifications highlighted in \reffig{noisy-predictions-segmentation}. We can remove these misclassifications by opening the mitochondria class before the final classification. \reffig{noisy-predictions-fixed} shows the opened predictions along with the final classifications. Notice in particular how the errors in circle 2 in \reffig{noisy-predictions-fixed-segmentation} are fixed, such that the vesicle (teal) and the endoplasmic reticulum (yellow) are separated by cytosol. This would have been very difficult to achieve by working directly on the final segmentations. That the vesicle and endoplasmic reticulum are probably misclassified just illustrates that not everything should be fixed in post-processing.
\begin{figure}[t] \centering \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{remove-noisy-predictions/246_0-1536_0-2048_cutout_em.png} \caption{EM image} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{remove-noisy-predictions/246_0-1536_0-2048_cutout_prediction.png} \caption{Prediction} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{remove-noisy-predictions/246_0-1536_0-2048_cutout_segmentation_errors-highlighted.png} \caption{Segmentation} \label{fig:noisy-predictions-segmentation} \end{subfigure} \caption[Noisy mitochondria predictions]{Electron microscopy image of the hippocampus with predictions of five classes: cytosol (white), membrane (blue), mitochondria (purple), endoplasmic reticulum (yellow), and vesicle (teal). By examining neighboring slices, the areas 1-3 have been confirmed to wrongly contain mitochondria predictions.} \label{fig:noisy-predictions} \end{figure} \begin{figure}[t] \centering \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{remove-noisy-predictions/246_0-1536_0-2048_cutout_prediction_opened-r12.png} \caption{Mitochondria opened} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{remove-noisy-predictions/246_0-1536_0-2048_cutout_segmentation_opened-r12.png} \caption{Segmentation of (a)} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{remove-noisy-predictions/246_0-1536_0-2048_cutout_segmentation_errors-highlighted.png} \caption{Original segmentation} \label{fig:noisy-predictions-fixed-segmentation} \end{subfigure} \caption[Fixed mitochondria predictions]{Fixing mitochondria misclassifications by opening the mitochondria predictions with $B_{12}$. 
} \label{fig:noisy-predictions-fixed} \end{figure} \subsection{Modeling annotator bias} Expert annotation is the gold standard in most clinical practice as well as for evaluating computer methods. However, annotation tasks are inherently subjective and prone to substantial inter-rater variation \cite{joskowicz2019inter,becker2019variability}. When investigating the influence of this variation on statistics and decisions it can be interesting to consider specific hypotheses regarding the variation. Consider the brain tumor annotation in \reffig{braintumor}. The annotation is derived from the QUBIQ\footnote{\url{https://qubiq.grand-challenge.org/}} challenge brain tumor dataset, where three annotators each annotated whole tumor, tumor core and active tumor. From this we obtain an image with four categories: background, edema, active core, inactive core. Although the annotators have a high level of agreement, there is still substantial variation in the extent of edema and in how much of the tumor core is active. \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{graphics/braintumor/braintumor_T1Gd.png} \includegraphics[width=0.49\textwidth]{graphics/braintumor/braintumor_annotation_highlight.png} \caption[Brain tumor annotation]{Inter-rater variation in annotation of brain tumors. White is background, blue edema, yellow inactive core and purple active core. Variation is indicated by color mixing. The black circles highlight two regions with large variation. } \label{fig:braintumor} \end{figure} Using protected dilation we can, for example, hypothesize how the merged annotation would appear under the assumption that the tumor core is oversegmented but the active part is undersegmented. \reffig{active-core} shows the results, where we first dilate the active core while protecting edema and background, then dilate edema while protecting background.
This would allow us to easily investigate if statistical differences in a case-control study could be explained by biased annotations. \begin{figure}[t] \centering \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{graphics/braintumor/braintumor_annotation.png} \caption{Original} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{graphics/braintumor/braintumor_more_active_less_core_r-1.png} \caption{$B_1$} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{graphics/braintumor/braintumor_more_active_less_core_r-2.png} \caption{$B_2$} \end{subfigure} \begin{subfigure}[t]{0.24\textwidth} \includegraphics[width=\textwidth]{graphics/braintumor/braintumor_more_active_less_core_r-3.png} \caption{$B_3$} \end{subfigure} \caption{{\it What could the annotation look like if the core was oversegmented, but the active part undersegmented?} Dilation of active core while protecting edema and background, followed by dilation of edema while protecting background using $B_1, B_2, B_3$.} \label{fig:active-core} \end{figure} \section{Discussion \& Conclusion} We have provided a thorough review of morphology on categorically valued images. Based on this we have defined morphology on Dirichlet distributions and morphology on categorical distributions. Inspired by \cite{busch1995morphological} we have further defined protected morphology on categorical distributions. We have demonstrated the behavior of the proposed operations and shown how they can be used in real-world applications such as noise removal in multi-class predictions and modeling annotator bias. The definition of dilation is straightforward and no obvious alternatives present themselves. This is not so for erosion. In our definition, erosion corresponds to conditioning on a change in probability of the eroded category. An equally valid approach would be to also condition on where this change came from. 
Instead of simply rescaling the categories with non-zero mass, we could include information from the neighborhood. For example, when eroding $i$ we would fill the difference $f_i(x) - \epsilon(f_i;B_r)(x)$ based on the pixels that contribute to the difference, that is, those with minimum mass for $i$. This would result in smoother boundaries, which could be a better representation of uncertainty. A downside is that categories can leak into each other, leading to undesirable results. In this work we have focused on the basic morphological operations, dilation and erosion, and their compositions, closing and opening. A logical next step is to investigate more complex morphological operations, such as the morphological gradient, which may be used to investigate the spatial relationship between categories by measuring the change in one category as a function of the change in another category. We have defined protected versions of dilation and erosion. From these we could define opening and closing in the standard way. Alternatively, by changing which categories are protected for dilation and erosion we get more control over how a category is opened or closed. In~\cite{busch1995morphological} the authors explore similar ideas for so-called ``tunneling'' and ``bridging'' operations on their set-based morphology, which would be interesting to consider in the context of categorical distributions. Our aim in this work was to bring morphological operations to probabilistic representations of categorical images. These representations can be considered as generative processes that can be sampled. Naive sampling will result in noisy and unrealistic samples. Combining the sampling process with the proposed morphological operations could be an easy approach to obtain smoother and more realistic samples. In summary, morphology is an indispensable tool for post-processing segmentations.
Extending morphology to categorical images and their probabilistic counterparts presents a particular problem since there is no inherent ordering of categories. In this paper, we have proposed to view categorical images as images of categorical distributions and defined morphological operations that are consistent with this view.
\section{Introduction} Transition to turbulence is a widely studied phenomenon in hydrodynamical flows, where usually a control parameter such as the Reynolds number ($R_e$) is progressively varied and a sequence of local and/or global bifurcations leads the system into an erratic state where disorder in space and time takes place. In certain situations a subcritical transition is observed, where the laminar state can be shown to be linearly stable for all values of the control parameter and finite amplitude perturbations are required to drive the system away from the laminar flow and into the turbulent state. This turbulence can be self-sustained or decaying, the former being due to a chaotic attractor in the phase space and the latter due to a {\em chaotic saddle}, i.e., a nonattracting chaotic set \cite{nusse89,rempel07}. Several works have studied subcritical transition in hydrodynamical shear flows like the plane-Couette and pipe flows, investigating whether the turbulence is sustained or just a long-lived transient. For low values of $R_e$, the flow quickly becomes laminar, whereas for large values it is turbulent. However, for intermediate values of $R_e$ the description of the asymptotic state and the mechanisms for its formation remain an open question \cite{cerbus18}. Recently, the onset of sustained turbulence in shear flow experiments was described in terms of the spatial coupling of transiently chaotic domains, resulting in a second-order phase transition that falls into the directed percolation universality class \cite{avila11,barkley15,lemoult16}. In astronomy, a typical example of subcritical transition to turbulence is provided by accretion disk models in the absence of magnetic fields, where shear is the only stratification. In such cases, the laminar Keplerian flow is linearly stable \cite{biskamp03}.
When a mean magnetic field is present, the magnetorotational instability is a linear instability that can lead the system towards the turbulent state, thus providing a mechanism for outward angular momentum transport and the corresponding inward accretion of matter \cite{balbus91}. Finite magnetic resistive effects can lead to the decay of the magnetic field, so in the absence of an imposed background field a nonlinear dynamo mechanism is required to sustain the magnetic energy and, consequently, the turbulence. Schekochihin et al. \cite{scheko04} conjectured that at low magnetic Prandtl number ($P_m$) a dynamo might only be possible in the presence of a mean field. In local simulations using the shearing box formalism with zero net magnetic flux, Fromang et al. \cite{fromang07} reported that the turbulence disappears for $P_m$ below a critical value that seems to be a decreasing function of the kinetic Reynolds number. Iskakov et al. \cite{iskakov07} found that dynamo action is possible for $P_m<1$ at high Reynolds numbers and/or numerical resolution, but with low growth rates, while Boldyrev \& Cattaneo \cite{boldyrev04} proposed through different assumptions that dynamo action is possible at any low value of $P_m$ at sufficiently high $R_m$. When $P_m$ is greater than one, it was shown by Rempel et al. \cite{rempel10} that the turbulent state found at intermediate Reynolds number decays with time and the average lifetime of the turbulence grows as an exponential function of the magnetic Reynolds number ($R_m$), a behavior named supertransient in Refs.\ \cite{kaneko88,tel08}. Recently, Nauman and Pessah \cite{nauman16} showed that turbulence can be sustained for several thousand shear time units in zero net flux shearing box simulations even when $P_m<1$, provided that extended vertical domains are adopted.
Other numerical simulations confirm that the use of taller boxes facilitates sustained dynamo/turbulence in the shearing box model \cite{shi16,walker17}. Accretion disks around stars (protoplanetary disks) are weakly ionized: the X-ray radiation of the protostar only ionizes the surface of the disk, while the disk is too cold for thermal ionization in its interior. As a consequence, the large regions adjacent to the disk mid-plane are not susceptible to the MRI \cite{gammie96}, while the MRI is active close to the surface of the disk. These weakly ionized disks are mainly composed of a neutral fluid, with a small fraction of ionized particles of different species. Due to the interactions of these different species, three non-ideal processes are introduced, namely, the Hall effect, ambipolar diffusion and Ohmic dissipation \cite{wardle12}. Ohmic dissipation dominates in regions with high density and low ionization fraction (inner disk and disk mid-plane), ambipolar diffusion dominates in the outer region, while the Hall effect is important in between (1--10\,au) \cite{bai11,simon15}. Ambipolar diffusion is due to the imperfect coupling of the neutrals with the ionized fluid. Blaes \& Balbus \cite{blaes94} found that when the ion-neutral collision frequency drops below the orbital frequency the MRI is suppressed, a result confirmed by Hawley \& Stone \cite{hawley98}. Ohmic resistivity is used to model layered accretion disks, whose surfaces are sufficiently ionized to couple to the magnetic field and provide conditions for the onset of MRI turbulence, while the mid-plane regions are poorly ionized. The Hall effect can then destabilize plasma that would otherwise be stabilized by Ohmic losses, depending on the orientation of the magnetic field \cite{wardle12}. In this letter we extend the analyses of Rempel et al.
\cite{rempel10} through shearing box simulations where the Hall effect has been taken into account, to fully characterize whether the zero net flux Hall-MHD turbulence is sustained or transient. By computing the mean decay time for different initial conditions as a function of the magnetic Reynolds number and the strength of the Hall effect, we conduct a series of statistical studies at the transition to weak turbulence. Our results confirm that the exponential dependence of the transient is observed in low-$R_e$ simulations, but the Hall effect reinforces the production of magnetic energy; therefore, a sustained dynamo might be expected even for $R_e$ below the critical value found in the Hall-free regime. In order to facilitate the dynamo onset, we adopt the tall box aspect ratio employed by references \cite{nauman16,shi16,walker17}, although this is probably not relevant for astrophysical disks. From now on, we loosely employ the term {\em turbulence} to describe irregular regimes that depart from the laminar flow. \section{The Hall MHD equations} We solve the MHD equations in the shearing box formalism \cite{goldreich65} using the Snoopy pseudo-spectral code \cite{lesur05,lesur07} including the Hall effect in the induction equation \cite{kunz13}. The equations are solved inside a box rotating with angular velocity $\Omega(r_0)$, where $r_0$ is a fiducial radius that determines the location of the center of the box. Shearing sheet boundary conditions are adopted \cite{hawley95}.
The shearing-box equations in Cartesian coordinates, with $\phi\rightarrow y$ and $r\rightarrow r_0+x$, are given by: \begin{eqnarray} \partial_t\textbf{v}+\textbf{v}\cdot\nabla\textbf{v}=-\frac{1}{\rho}\nabla P+\frac{(\nabla\times\textbf{B})\times\textbf{B}}{\mu_0\rho}-2\mathbf{\Omega}\times\textbf{v}+ \nonumber \\ 2\Omega S x\,\hat{\textbf{x}}+\nu\nabla^2\textbf{v}, \label{eq1} \end{eqnarray} \begin{equation} \partial_t\textbf{B}=\nabla\times\left(\textbf{v}\times\textbf{B}-\frac{(\nabla\times \textbf{B})\times\textbf{B}}{X_{Hall}}\right)+\eta\nabla^2\textbf{B}, \label{eq2} \end{equation} \begin{eqnarray} \nabla\cdot\textbf{B}=0 \nonumber \\ \nabla\cdot\textbf{v}=0, \label{eq3} \end{eqnarray} \noindent where for a Keplerian disk $\Omega=r^{-3/2}$, $S=-r\partial_r\Omega=(3/2)\Omega$, $\nu$ is the constant kinematic viscosity coefficient, $\eta$ is the constant magnetic diffusivity, $\mu_0$ the magnetic permeability, $\rho$ is the gas density, $P$ is the pressure, $\textbf{B}$ is the magnetic field and $X_{Hall}=\sqrt{\rho}/\ell_H$, where $\ell_H$ is the Hall lengthscale, which is given by \cite{kunz13} \begin{equation} \ell_H=\left(\frac{m_ic^2}{4\pi Z^2e^2n_i}\right)^{1/2}\left(\frac{\rho}{\rho_i}\right)^{1/2}, \label{eq5} \end{equation} \noindent where $m_i$ is the ion mass, $c$ is the speed of light, $Z$ the atomic number, $n_i$ the ion density and $\rho_i$ is the ion mass density. The fluid velocity is decomposed as $\textbf{v}=\textbf{v}_0+\textbf{u}$, where the steady-state solution is given by the shear flow $\textbf{v}_0=-Sx\,\hat{\textbf{y}}$ and $\textbf{u}$ is the perturbation field. The kinetic and magnetic Reynolds numbers mentioned below are given by $R_e=1/\nu$ and $R_m=1/\eta$, respectively. From now on, we set $\Omega=1$, $R_e=70$ and use $R_m$ and $X_{Hall}$ as control parameters. \section{Numerical simulations} For the simulation domain we chose a box as in Riols et al.
\cite{riols13}, with sides $(L_x,L_y,L_z)=(0.7,20.0,2.0)$, which was shown to favor the onset of dynamo. Estimating the mean decay time of the turbulence as a function of $R_m$ and $X_{Hall}$ requires a statistical study based on long time series for thousands of different initial conditions; therefore, the numerical resolution employed in this letter is low. Riols et al. \cite{riols13} argued that when the Hall effect is absent, for $R_e=70$, simulations with a numerical resolution $(N_x,N_y,N_z)=(24,12,36)$ are well resolved up to $R_m=500$; however, we have verified that this is insufficient for simulations with the Hall effect. In order to find the minimum resolution required for our study we compute the averages of the Reynolds and Maxwell stresses, as well as the kinetic and magnetic power spectra for $R_e=70$. We test the resolutions $(N_x,N_y,N_z)=(24,12,36)$ (low resolution), $(N_x,N_y,N_z)=(48,24,72)$ (medium resolution) and $(N_x,N_y,N_z)=(72,36,108)$ (high resolution). In tables \ref{tab.1} and \ref{tab.2} we present the average values of the Reynolds ($<\alpha_{Rey}>=<v_xv_y>$) and Maxwell ($<\alpha_{Max}>=<b_xb_y>$) stresses, averaged over space and time, for different magnetic Reynolds numbers, and in Figure \ref{fig.1} we present the power spectra of the kinetic and magnetic energies for two magnetic Reynolds numbers. Based on our analyses of the average values and the slopes of the power spectra, we conclude that if the medium resolution is adopted, for $X_{Hall}=50$ the values converge up to $R_m=250$. The spectra show some disagreement at small scales; however, we have checked that this has low impact on our studies. Similar analyses suggest that for $X_{Hall}=30$ and 40 the values with the medium resolution converge up to $R_m=200$. For this reason we choose the medium resolution for this study and compute the decay times up to $R_m=250$ for $X_{Hall}=50$ and $R_m=200$ for $X_{Hall}=30$ and 40.
\begin{table} \caption{Average values of the Reynolds and Maxwell stresses for $R_e=70$, $R_m=200$ and $X_{Hall}=50$.} \label{tab.1} \begin{center} \begin{tabular}{|l|c|r|} Resolution & $<\alpha_{Rey}>$ & $<\alpha_{Max}>$\\ \hline Low & 0.237 & -2.610 \\ Medium & 0.069 & -0.595 \\ High & 0.056 & -0.548 \end{tabular} \end{center} \end{table} \begin{table} \caption{Average values of the Reynolds and Maxwell stresses for $R_e=70$, $R_m=300$ and $X_{Hall}=50$.} \label{tab.2} \begin{center} \begin{tabular}{|l|c|r|} Resolution & $<\alpha_{Rey}>$ & $<\alpha_{Max}>$\\ \hline Low & 0.182 & -1.287 \\ Medium & 0.103 & -0.798 \\ High & 0.072 & -0.673 \end{tabular} \end{center} \end{table} \begin{figure}[!h] \includegraphics[scale=0.45]{fig1.pdf} \caption{Power spectra of the kinetic (left) and magnetic (right) energies as a function of the wavenumber ($k$) for $R_m=200$ (upper panels) and $R_m=300$ (lower panels). In all cases $R_e=70$ and $X_{Hall}=50$. The black lines represent the high resolution simulations, red represents the medium resolution and blue the low resolution.} \label{fig.1} \end{figure} \section{Results} \begin{figure}[h] \includegraphics[width=\columnwidth]{fig2.pdf} \caption{Volume visualization of the y-component of the magnetic field. Left: Turbulent flow at $t=1000$ shear times. Right: Laminar flow at $t=14000$ shear times.} \label{fig.2} \end{figure} The statistical determination of the mean decay time ($\tau$) requires the integration of Eqs. (\ref{eq1})--(\ref{eq3}) with a large set of initial conditions in turbulent states until they decay to the laminar state. We obtain these initial conditions from long turbulent time series that were generated using $R_e=70$, $R_m=500$ and different values of $X_{Hall}$ between 10 and 1000. The state variables are saved from those time series every 500 shear time units (before decay) to obtain uncorrelated turbulent initial conditions.
Figure \ref{fig.2} presents snapshots of the azimuthal component of the magnetic field for a simulation with $R_e=70$, $R_m=500$ and $X_{Hall}=50$ at two different times, showing a turbulent state in the left panel that decays to the laminar state in the right panel. It is important to emphasize that this simulation is under-resolved and has only been used to generate a long turbulent time series from which we extract different states to use as initial conditions. In figure \ref{fig.3} we show the time series of the kinetic and magnetic energies for two simulations with medium resolution and $R_e=70$, $R_m=500$ and $X_{Hall}=50$. The black line shows the transition from the turbulent state to the laminar state at $t\approx 13\,000$ shear time units and corresponds to a simulation with the Hall effect included in the induction equation; for comparison we also show in blue the results from a simulation without the Hall effect, which transitioned to the laminar state much faster, at $t\approx 900$. These results reveal the presence of long turbulent transients whose decay times are increased by the Hall effect. \begin{figure}[!h] \includegraphics[scale=0.45]{fig3.pdf} \caption{Time series of the kinetic and magnetic energies for our medium resolution simulation with $R_e=70$, $R_m=500$ and $X_{Hall}=50$. Top: Kinetic energy as a function of time. Bottom: Magnetic energy as a function of time. The black line corresponds to simulations with the Hall effect and the blue line corresponds to simulations without the Hall effect.} \label{fig.3} \end{figure} To obtain the average lifetime ($\tau(R_m)$) of the turbulence we calculate the probability ($P(t)$) of finding the system in a turbulent state at a corresponding time $t$. For each value of $R_m$ the probability is computed using a set of up to 170 simulations, where each simulation corresponds to a different initial condition as described above. The simulations are interrupted when the kinetic energy reaches a level below $10^{-5}$.
Figure \ref{fig.4} shows the probabilities ($P(t)$) in log scale as a function of time $t$ for different magnetic Reynolds numbers. From these data we perform linear fits to obtain the inverse of the decay time, assuming a dependence on $R_m$ of the form $P(t,R_m)=\exp[-t/\tau(R_m)]$, as expected for transients due to chaotic saddles \cite{hof06}. \begin{figure}[!h] \includegraphics[scale=0.45]{fig4.pdf} \caption{Probability ($P(t)$), in log scale, of remaining in the turbulent state as a function of time ($t$) for five different values of $R_m$ and $X_{Hall}=50$.} \label{fig.4} \end{figure} Using the data obtained from the linear fits we have plotted the inverse of the decay time ($1/\tau$) as a function of $R_m$; the results are presented in Figure \ref{fig.5}. The inverse of the decay time has a clear exponential dependence on $R_m$. The curves obtained from these linear fits are given by \begin{equation} 1/\tau=\exp(AR_m+B), \label{eq6} \end{equation} \noindent and the values of the constants $A$ and $B$ for the four $X_{Hall}$ values used in this work are provided in Table \ref{tab.3}. Due to the exponential dependence on $R_m$, the corresponding lifetimes of the turbulence follow a supertransient law \cite{kaneko88,tel08}. We conclude that the Hall effect allows for supertransients to be present even at very low $R_m$, when the system would rapidly decay to the laminar state if the Hall effect were absent. Whether or not the Hall effect might support a self-sustained dynamo at low $R_m$ is the topic of future work.
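As a quick numerical illustration of Eq. (\ref{eq6}) (our sketch, using the $X_{Hall}=50$ fit constants $A=-0.0192$ and $B=0.760$ from Table \ref{tab.3}), the negative sign of $A$ makes $\tau$ grow exponentially with $R_m$:

```python
import math

# Illustration of Eq. (6): 1/tau = exp(A*R_m + B), so tau(R_m) = exp(-A*R_m - B).
# The constants are the X_Hall = 50 fit from Table 3; since A < 0,
# the mean lifetime grows exponentially with R_m.
A, B = -0.0192, 0.760

def tau(R_m):
    """Mean turbulence lifetime (in shear time units) predicted by the fit."""
    return math.exp(-A * R_m - B)

# Multiplicative growth of tau per increment of 100 in R_m.
growth_per_100 = math.exp(-A * 100.0)
```

Every increase of 100 in $R_m$ multiplies the predicted lifetime by $\exp(-100A)\approx 6.8$ for these constants.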
\begin{figure}[!h] \includegraphics[scale=0.45]{fig5.pdf} \caption{Inverse of the decay time for four values of $X_{Hall}$ in log scale as a function of $R_m$ with their corresponding linear fits.} \label{fig.5} \end{figure} \begin{table} \caption{Values of the constants $A$ and $B$ in Eq. (\ref{eq6}) for four different values of $X_{Hall}$.} \label{tab.3} \begin{center} \begin{tabular}{|l|c|c|} $X_{Hall}$ & $A$ & $B$\\ \hline 30.0 & -0.0169 & 0.248 \\ 40.0 & -0.0166 & 0.258 \\ 50.0 & -0.0192 & 0.760 \\ 60.0 & -0.0189 & 0.890 \end{tabular} \end{center} \end{table} The same data from Figure \ref{fig.5} are shown in Figure \ref{fig.6}, but arranged as a plot of the inverse decay time as a function of $X_{Hall}$ for different values of $R_m$. Once again, an exponential law is found from the linear fit in the log--linear plot. \begin{figure}[!h] \includegraphics[scale=0.45]{fig6.pdf} \caption{Inverse of the decay time for four different values of $R_m$ in log scale as a function of $X_{Hall}$ with their corresponding linear fits.} \label{fig.6} \end{figure} There are two types of supertransient laws described by T\'el and Lai \cite{tel08}: type-I, given by a power law, and type-II, described by an exponential law. The type-I law is observed from a linear fit in a log--log plot, whereas the type-II comes from a linear fit in a log--linear plot. With few points in a linear regression it may be difficult to tell which type of supertransient is being observed, since both log--linear and log--log plots seem to allow for a linear fit. To ensure that our data support a type-II supertransient, we show in Figure \ref{fig.7} both the log--linear (upper panel) and log--log (lower panel) plots for $R_m=70$, where it is clear that the log--linear plot provides the better linear fit.
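The diagnostic behind Figure \ref{fig.7} can be checked numerically. The sketch below (ours; it uses the $R_m=70$ constants $C=0.90$, $D=-4.02$ from Table \ref{tab.4}) generates data obeying the type-II law and compares straight-line fit residuals in the two coordinate systems:

```python
import numpy as np

# Data following a type-II (exponential) law 1/tau = exp(C*X + D) are exactly
# linear in log-linear coordinates, while a type-I (power) law would be linear
# in log-log coordinates. Constants: the R_m = 70 fit from Table 4.
C, D = 0.90, -4.02
X = np.array([30.0, 40.0, 50.0, 60.0])
inv_tau = np.exp(C * X + D)

def fit_residual(x, y):
    """Sum of squared residuals of a degree-1 least-squares fit."""
    _, res, *_ = np.polyfit(x, y, 1, full=True)
    return float(res[0]) if len(res) else 0.0

res_loglinear = fit_residual(X, np.log(inv_tau))       # type-II diagnostic
res_loglog = fit_residual(np.log(X), np.log(inv_tau))  # type-I diagnostic
```

The log--linear residual vanishes up to round-off, while the log--log residual does not, matching the conclusion drawn from Figure \ref{fig.7}.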
Thus, assuming that the inverse of the decay time has a dependence on $X_{Hall}$ of the form \begin{equation} 1/\tau=\exp(CX_{Hall}+D), \label{eq7} \end{equation} \noindent the computed values of the constants $C$ and $D$ are presented in Table \ref{tab.4}. The results for $R_m=200$ may have been affected by insufficient resolution, hence the slightly negative value of $C$. However, even if this case is disregarded, it can be argued that there is a universal type-II supertransient law for the decay time of the MHD turbulence in the parameter space used in this work. \begin{figure}[!h] \includegraphics[scale=0.45]{fig7.pdf} \caption{Inverse of the decay time for $R_m=70$ as a function of $X_{Hall}$. The upper panel presents the values in log--linear scale and the bottom panel the values in log--log scale.} \label{fig.7} \end{figure} \begin{table} \caption{Values of the constants $C$ and $D$ in Eq. (\ref{eq7}) for four different values of $R_m$.} \label{tab.4} \begin{center} \begin{tabular}{|l|c|c|} $R_m$ & $C$ & $D$\\ \hline 70.0 & 0.90 & -4.02\\ 100.0 & 0.55 & -3.24 \\ 150.0 & 0.30 & -3.58 \\ 200.0 & -0.07 & -2.84 \end{tabular} \end{center} \end{table} \section{Conclusion} Through thousands of long numerical simulations we have characterized the mean decay time of the turbulence in shearing-box Hall-MHD simulations of Keplerian shear flows as a type-II supertransient in both the magnetic Reynolds number and the Hall parameter. The Hall effect is crucial in developing the observed transient turbulence, since in its absence the system quickly decays to the laminar state; this conclusion is supported by the results of Kunz \& Lesur \cite{kunz13}. Nonetheless, we have not found evidence of a self-sustained dynamo for the parameters used in our study. Fromang et al. \cite{fromang07} found that the shearing box turbulence does not persist for $P_m<2$ in the Hall-free regime.
Our Hall-MHD simulations have explored the range $1\le P_m<4$, and although the Hall effect is responsible for the long transient turbulence, so far we have not found evidence of dynamo action even at $P_m>2$. This may be due to the low $R_e$ adopted and to the Hall effect itself, but a more detailed study is required to determine the critical value of $P_m$ for a transition to sustained turbulence. The geometry of the simulation box may also be crucial in allowing for the development of sustained turbulence, as previously mentioned \cite{shi16,nauman16}. A future study with boxes of different aspect ratios is required to determine whether this holds for our simulations. Another possibility is that there is no transition to a sustained dynamo and the Hall-MHD turbulence is always transient, with very long decay times due to their exponential growth with the control parameters. It is worth noting that despite the importance of MRI for accretion processes in astrophysics, certain regions in protoplanetary disks are so poorly ionized that they may not support MRI turbulence \cite{kunz13,riols18}, so that accretion would take place in an essentially laminar state. Nonetheless, recent observations and simulations have revealed the presence of a wealth of complex structures in protoplanetary disks, such as rings \cite{brogan15}, spiral arms \cite{benisty} and vortices \cite{bethune16}. Although the origin of such structures is still a highly debated topic, two non-ideal MHD effects have often been invoked in a tentative explanation of these observations: the Hall effect and ambipolar diffusion.
For example, it has been shown in local numerical simulations that in the presence of a vertical background magnetic field the Hall effect can lead to the formation of large-scale self-organized axisymmetric structures called zonal flows \cite{kunz13,riols18}, which correspond to concentric rings in a global disk and are thought to be ideal locations for planet formation \cite{bethune16,riols18}. In Riols et al. \cite{riols18}, zonal flows become more pronounced for stronger background fields; in the absence of a background field, we have so far not observed zonal flows in our simulations. \acknowledgments We thank Geoffroy Lesur for his help with the SNOOPY code setup and for helpful discussions. DMT thanks the Brazilian agency CAPES (88887.130860/2016-00) and DMT and ELR thank the Brazilian agency FAPESP (2013/26258-4) for financial support.
\section{Introduction} Let $ \mathcal{X}$ be a Banach space and $I_\mathcal{X}$ be the identity operator on $\mathcal{X}$. Carl Neumann's classical result says that if $ T : \mathcal{X}\rightarrow \mathcal{X}$ is a bounded linear operator such that $\|T-I_\mathcal{X}\|<1$, then $T$ is invertible \cite{CARLNEUMANN}. The following two results are consequences of this result; they are known as \textbf{Paley-Wiener theorems}. \begin{enumerate} \item Sequences close to orthonormal bases in Hilbert spaces are Riesz bases \cite{YOUNG, PALEYWIENER}. \item Sequences close to Schauder bases in Banach spaces are Schauder bases \cite{BOAS, SHAFKE}. \end{enumerate} The history of Paley-Wiener theorems is nicely presented in \cite{ARSOVE, RETHERFORD}. It was in the setting of Hilbert spaces that the Paley-Wiener theorem was first generalized, by Pollard \cite{POLLARD}, then by Sz.-Nagy \cite{NAGY}, and later by Hilding \cite{HILDING}. Hilding proved the following theorem. \begin{theorem}\cite{HILDING}\label{HILDINGTHEOREM} (\textbf{Hilding perturbation}) Let $\mathcal{H}$ be a Hilbert space. If a linear operator $ T : \mathcal{H}\rightarrow \mathcal{H}$ is such that there exists $ \lambda \in \left [0, 1 \right )$ with \begin{align*} \|Th-h\|\leq\lambda\|Th\|+\lambda\|h\|,\quad \forall h \in \mathcal{H}, \end{align*} then $ T $ is bounded, invertible and \begin{align*} &\frac{1-\lambda}{1+\lambda}\|h\|\leq\|Th\|\leq\frac{1+\lambda}{1-\lambda} \|h\|, \quad\forall h \in \mathcal{H};\\ & \frac{1-\lambda}{1+\lambda}\|h\|\leq\|T^{-1}h\|\leq\frac{1+\lambda}{1-\lambda} \|h\|, \quad\forall h \in \mathcal{H}. \end{align*} \end{theorem} It took around 50 years to extend Theorem \ref{HILDINGTHEOREM} to full generality in Banach spaces.
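It may help to see Theorem \ref{HILDINGTHEOREM} in action in the simplest possible setting. The following sketch (ours; the matrix is an arbitrary illustration) checks the bounds on $\mathbb{R}^2$, where $\|Th\|/\|h\|$ ranges over the singular values of $T$:

```python
import numpy as np

# Finite-dimensional sanity check of Hilding's theorem on R^2.
# If ||T - I|| = delta < 1, then ||Th - h|| <= delta*||h|| <= delta*(||Th|| + ||h||),
# so the hypothesis holds with lam = delta, and the theorem predicts
# (1-lam)/(1+lam) <= ||Th||/||h|| <= (1+lam)/(1-lam) for all h != 0.
T = np.array([[1.0, 0.2],
              [-0.1, 0.9]])
lam = np.linalg.norm(T - np.eye(2), 2)   # spectral norm of T - I

singular_values = np.linalg.svd(T, compute_uv=False)   # descending order
smax, smin = singular_values[0], singular_values[-1]
lower = (1.0 - lam) / (1.0 + lam)
upper = (1.0 + lam) / (1.0 - lam)
```

Since $\lambda<1$ here, the predicted interval $[\frac{1-\lambda}{1+\lambda},\frac{1+\lambda}{1-\lambda}]$ indeed contains $[\sigma_{\min}(T),\sigma_{\max}(T)]$, and in particular $T$ is invertible.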
\begin{theorem}\cite{CASAZZAKALTON, VANEIJNDHOVEN, CASAZZACHRISTENSEN}\label{CASAZZAKALTONTHEOREM} (\textbf{Casazza-Kalton-Christensen-van Eijndhoven perturbation}) Let $ \mathcal{X}, \mathcal{Y}$ be Banach spaces and $ S : \mathcal{X}\rightarrow \mathcal{Y}$ be a bounded invertible operator. If a linear operator $ T : \mathcal{X}\rightarrow \mathcal{Y}$ is such that there exist $ \lambda_1,\lambda_2 \in \left [0, 1 \right )$ with \begin{align*} \|Tx-Sx\|\leq\lambda_1\|Sx\|+\lambda_2\|Tx\|,\quad \forall x \in \mathcal{X}, \end{align*} then $ T $ is bounded, invertible and \begin{align*} &\frac{1-\lambda_1}{1+\lambda_2}\|Sx\|\leq\|Tx\|\leq\frac{1+\lambda_1}{1-\lambda_2} \|Sx\|, \quad\forall x \in \mathcal{X};\\ & \frac{1-\lambda_2}{1+\lambda_1}\frac{1}{\|S\|}\|y\|\leq\|T^{-1}y\|\leq\frac{1+\lambda_2}{1-\lambda_1} \|S^{-1}\|\|y\|, \quad\forall y \in \mathcal{Y}. \end{align*} \end{theorem} There is an improvement of Theorem \ref{CASAZZAKALTONTHEOREM}, due to Guo, under the extra assumption that $T$ is bounded. \begin{theorem}\cite{GUO}\label{GUOTHEOREM} Let $ \mathcal{X}, \mathcal{Y}$ be Banach spaces and $ S : \mathcal{X}\rightarrow \mathcal{Y}$ be a bounded invertible operator. If a bounded linear operator $ T : \mathcal{X}\rightarrow \mathcal{Y}$ is such that there exist $ \lambda_1 \in \left [0, 1 \right )$ and $ \lambda_2 \in \left [0, 1 \right ]$ with \begin{align*} \|Tx-Sx\|\leq\lambda_1\|Sx\|+\lambda_2\|Tx\|,\quad \forall x \in \mathcal{X}, \end{align*} then $ T $ is invertible.
Further, for every $\varepsilon>0$ satisfying $1>\lambda_2-\varepsilon>0$ and $\lambda_1+\varepsilon \|TS^{-1}\|<1$, we have \begin{align*} &\frac{1-\lambda_1-\varepsilon \|TS^{-1}\|}{1+\lambda_2-\varepsilon}\|Sx\|\leq\|Tx\|\leq\frac{1+\lambda_1+\varepsilon \|TS^{-1}\|}{1-\lambda_2+\varepsilon} \|Sx\|, \quad\forall x \in \mathcal{X};\\ & \frac{1-\lambda_2+\varepsilon}{1+\lambda_1+\varepsilon \|TS^{-1}\|}\frac{1}{\|S\|}\|y\|\leq\|T^{-1}y\|\leq\frac{1+\lambda_2-\varepsilon}{1-\lambda_1-\varepsilon \|TS^{-1}\|} \|S^{-1}\|\|y\|, \quad\forall y \in \mathcal{Y}. \end{align*} \end{theorem} Theorem \ref{CASAZZAKALTONTHEOREM} and its variants are useful in various studies such as the stability of frames for Hilbert spaces \cite{CASAZZACHRISTENSEN}, stability of frames and atomic decompositions for Banach spaces \cite{STOEVA}, stability of frames for Hilbert C*-modules \cite{HANJING}, stability of G-frames \cite{SUNSTABILITY}, multipliers for Hilbert spaces \cite{STOEVABALAZS}, the quantum detection problem \cite{BOTELHOANDRADE}, continuous frames \cite{GABARDOHAN}, fusion frames \cite{CASAZZAKUTYNIOK}, operator representations of frames (dynamics of frames) \cite{CHRISTENSENHASANNASAB}, pseudo-inverses of operators \cite{DING}, outer inverses of operators \cite{YANGWANG}, shift-invariant spaces \cite{KOOLIM}, frame sequences \cite{CHRISTENSENLENNARD}, and sampling \cite{ZHAOCASAZZA}.\\ The main objective of this paper is to generalize Theorem \ref{CASAZZAKALTONTHEOREM} to Lipschitz functions between Banach spaces. We do this in Theorem \ref{IMPORTANTTHEOREM}. We show that our result generalizes the Soderlind-Campanato perturbation (Theorem \ref{SODERLINDCAMPANATOAPP}) and the Barbagallo-Ernst-Thera perturbation (Theorem \ref{BARBAGALLOAPP}). We then give an application to the theory of frames for metric spaces. Further, the notion of Lipschitz atomic decomposition for Banach spaces is introduced and a perturbation result is derived using Theorem \ref{IMPORTANTTHEOREM}.
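To preview the kind of statement we are after, here is a simple one-dimensional, genuinely non-linear instance (our illustration, not taken from the cited works): on $\mathcal{X}=\mathcal{Y}=\mathbb{R}$ with $S$ the identity, the map $Tx = x + 0.3\sin x$ satisfies the perturbation inequality with $\lambda_1=0.3$, $\lambda_2=0$, and the predicted bi-Lipschitz bounds can be verified by sampling:

```python
import numpy as np

# One-dimensional non-linear example: T(x) = x + 0.3*sin(x), S = identity.
# |T(x) - T(y) - (x - y)| = 0.3*|sin x - sin y| <= 0.3*|x - y|,
# so lambda1 = 0.3 and lambda2 = 0, and the Lipschitz perturbation bound
# predicts 0.7*|x - y| <= |T(x) - T(y)| <= 1.3*|x - y| (hence T is invertible).
lam1, lam2 = 0.3, 0.0

def T(x):
    return x + 0.3 * np.sin(x)

rng = np.random.default_rng(1)
x, y = rng.uniform(-10.0, 10.0, size=(2, 10_000))
ratio = np.abs(T(x) - T(y)) / np.abs(x - y)   # difference quotients of T

lower = (1.0 - lam1) / (1.0 + lam2)   # predicted lower bound, 0.7
upper = (1.0 + lam1) / (1.0 - lam2)   # predicted upper bound, 1.3
```

By the mean value theorem every difference quotient equals $1+0.3\cos\xi$ for some $\xi$, so the sampled ratios indeed stay inside $[0.7, 1.3]$.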
\section{Improving Casazza-Kalton-Christensen-van Eijndhoven perturbation} Let $\mathcal{M}$ be a metric space and $\mathcal{X}$ be a Banach space. Recall that a function $f:\mathcal{M} \rightarrow \mathcal{X}$ is said to be Lipschitz if there exists $b> 0$ such that \begin{align*} \|f(x)- f(y)\| \leq b\, d(x,y), \quad \forall x, y \in \mathcal{M}. \end{align*} A Lipschitz function $f:\mathcal{M} \rightarrow \mathcal{X}$ is said to be bi-Lipschitz if there exists $a> 0$ such that \begin{align*} a\, d(x,y) \leq \|f(x)- f(y)\| , \quad \forall x, y \in \mathcal{M}. \end{align*} \begin{definition}\cite{WEAVER} Let $\mathcal{X}$ be a Banach space. \begin{enumerate}[\upshape(i)] \item Let $\mathcal{M}$ be a metric space. The collection $\operatorname{Lip}(\mathcal{M}, \mathcal{X})$ is defined as $\operatorname{Lip}(\mathcal{M}, \mathcal{X})\coloneqq \{f:\mathcal{M} \rightarrow \mathcal{X} : f \text{ is Lipschitz} \}.$ For $f \in \operatorname{Lip}(\mathcal{M}, \mathcal{X})$, the Lipschitz number is defined as \begin{align*} \operatorname{Lip}(f)\coloneqq \sup_{x, y \in \mathcal{M}, x\neq y} \frac{\|f(x)-f(y)\|}{d(x,y)}. \end{align*} \item Let $(\mathcal{M}, 0)$ be a pointed metric space. The collection $\operatorname{Lip}_0(\mathcal{M}, \mathcal{X})$ is defined as $\operatorname{Lip}_0(\mathcal{M}, \mathcal{X})\coloneqq \{f:\mathcal{M} \rightarrow \mathcal{X} : f \text{ is Lipschitz and } f(0)=0\}.$ For $f \in \operatorname{Lip}_0(\mathcal{M}, \mathcal{X})$, the Lipschitz norm is defined as \begin{align*} \|f\|_{\operatorname{Lip}_0}\coloneqq \sup_{x, y \in \mathcal{M}, x\neq y} \frac{\|f(x)-f(y)\|}{d(x,y)}. \end{align*} \end{enumerate} \end{definition} \begin{theorem}\cite{WEAVER}\label{BANACHALGEBRA} Let $\mathcal{X}$ be a Banach space. \begin{enumerate}[\upshape(i)] \item If $\mathcal{M}$ is a metric space, then $\operatorname{Lip}(\mathcal{M}, \mathcal{X})$ is a semi-normed vector space w.r.t. the semi-norm $\operatorname{Lip}(\cdot)$.
\item If $(\mathcal{M}, 0)$ is a pointed metric space, then $\operatorname{Lip}_0(\mathcal{M}, \mathcal{X})$ is a Banach space w.r.t. the norm $\|\cdot\|_{\operatorname{Lip}_0}$. Further, $\operatorname{Lip}_0(\mathcal{X})\coloneqq\operatorname{Lip}_0(\mathcal{X}, \mathcal{X})$ is a unital Banach algebra. In particular, if $T \in \operatorname{Lip}_0(\mathcal{X})$ satisfies $ \|T-I_\mathcal{X}\|_{\operatorname{Lip}_0}<1,$ then $T $ is invertible and $T^{-1} \in \operatorname{Lip}_0(\mathcal{X})$. \end{enumerate} \end{theorem} We now develop a perturbation result for Lipschitz functions. Our developments are motivated by van Eijndhoven's improvement of the linear Paley-Wiener theorem \cite{VANEIJNDHOVEN}. Since $ \text{Lip}_0(\mathcal{X})$ is a unital Banach algebra, we can speak of spectra and resolvents. In the remaining part of the paper, the spectrum of an element $ T\in \text{Lip}_0(\mathcal{X})$ is denoted by $\sigma(T)$ and the resolvent set by $\rho(T).$ \begin{lemma}\label{FIRSTLEMMA} Let $ \mathcal{X}$ be a Banach space and $ T\in \text{Lip}_0(\mathcal{X})$. Suppose there are $\alpha_0\in \mathbb{R}$ and $ \beta_0>0$ such that \begin{align}\label{FIRSTLEMMAINEQUALITY} \|Tx-Ty-\alpha(x-y)\|\geq \beta_0\|x-y\|, \quad \forall x,y \in \mathcal{X}, \forall \alpha \leq \alpha_0. \end{align} Then $(-\infty, \alpha_0]\subseteq \rho(T).$ \end{lemma} \begin{proof} If $\sigma(T)\cap\mathbb{R}=\emptyset$, then there is nothing to argue. So let $\sigma(T)\cap\mathbb{R}\neq \emptyset$. Since $\sigma(T)\cap\mathbb{R}$ is then a nonempty compact set, we may define \begin{align*} \lambda_0\coloneqq \min\{\lambda\in \mathbb{R}: \lambda \in \sigma(T)\cap \mathbb{R}\}. \end{align*} Claim: $\lambda_0> \alpha_0$. If this is false, we have $\lambda_0\leq \alpha_0$.
From the assumption we then have \begin{align*} \|Tx-Ty-\lambda_0(x-y)\|\geq \beta_0\|x-y\|, \quad \forall x,y \in \mathcal{X} \end{align*} which implies \begin{align*} \|(T-\lambda_0I_\mathcal{X})x-(T-\lambda_0I_\mathcal{X})y\|\geq \beta_0\|x-y\|, \quad \forall x,y \in \mathcal{X} \end{align*} which shows that $T-\lambda_0I_\mathcal{X}$ is injective. We now show that $T-\lambda_0I_\mathcal{X}$ is surjective. Let $y \in \mathcal{X}$. Define \begin{align*} \alpha_n\coloneqq \lambda_0-\frac{1}{n}, \quad \forall n \in \mathbb{N}. \end{align*} Since $\alpha_n<\lambda_0$, the definition of $\lambda_0$ gives that $T-\alpha_nI_\mathcal{X}$ is invertible for all $n \in \mathbb{N}$. Using Inequality (\ref{FIRSTLEMMAINEQUALITY}) we have \begin{align*} \|(T-\alpha_nI_\mathcal{X})x-(T-\alpha_nI_\mathcal{X})y\|\geq \beta_0\|x-y\|, \quad \forall x,y \in \mathcal{X}, \forall n \in \mathbb{N} \end{align*} which gives \begin{align*} \|u-v\|\geq \beta_0\|(T-\alpha_nI_\mathcal{X})^{-1}u-(T-\alpha_nI_\mathcal{X})^{-1}v\|, \quad \forall u,v \in \mathcal{X}, \forall n \in \mathbb{N} \end{align*} which implies \begin{align*} \|(T-\alpha_nI_\mathcal{X})^{-1}\|_{\text{Lip}_0}\leq \frac{1}{\beta_0}, \quad \forall n \in \mathbb{N}. \end{align*} Now define \begin{align*} x_n\coloneqq (T-\alpha_nI_\mathcal{X})^{-1}y, \quad \forall n \in \mathbb{N}. \end{align*} Using the resolvent identity in a Banach algebra \cite{DALES} we get \begin{align*} \|x_n-x_m\|&=\left|\frac{1}{n}-\frac{1}{m}\right|\|(T-\alpha_nI_\mathcal{X})^{-1}(T-\alpha_mI_\mathcal{X})^{-1}y\|\\ &\leq \left|\frac{1}{n}-\frac{1}{m}\right|\|(T-\alpha_nI_\mathcal{X})^{-1}\|_{\text{Lip}_0}\|(T-\alpha_mI_\mathcal{X})^{-1}\|_{\text{Lip}_0}\|y\|\\ &\leq \left|\frac{1}{n}-\frac{1}{m}\right|\frac{1}{\beta_0^2}\|y\|, \quad \forall n,m \in \mathbb{N}. \end{align*} Thus $\{x_n\}_n$ is Cauchy and converges to an element, say $x \in \mathcal{X}$.
Finally \begin{align*} \|(T-\lambda_0I_\mathcal{X})x-y\|&=\|(T-\lambda_0I_\mathcal{X})x-(T-\alpha_nI_\mathcal{X})x_n\|\\ &\leq \|(T-\lambda_0I_\mathcal{X})x-(T-\lambda_0I_\mathcal{X})x_n\|+\|(T-\lambda_0I_\mathcal{X})x_n-(T-\alpha_nI_\mathcal{X})x_n\|\\ &\leq \|T-\lambda_0I_\mathcal{X}\|_{\text{Lip}_0}\|x_n-x\|+|\alpha_n-\lambda_0|\|x_n\|\\ &\leq \|T-\lambda_0I_\mathcal{X}\|_{\text{Lip}_0}\|x_n-x\|+\frac{1}{n}\|x_n\|, \quad \forall n \in \mathbb{N}. \end{align*} Letting $n\to \infty$, we get $(T-\lambda_0I_\mathcal{X})x=y$. Thus $T-\lambda_0I_\mathcal{X}$ is bijective, and the lower bound $\beta_0\|x-y\|\leq\|(T-\lambda_0I_\mathcal{X})x-(T-\lambda_0I_\mathcal{X})y\|$ shows that its inverse is Lipschitz, so $T-\lambda_0I_\mathcal{X}$ is invertible in $\text{Lip}_0(\mathcal{X})$. Hence $\lambda_0\in \sigma(T)\cap\rho(T)=\emptyset$, which is a contradiction; this proves the claim as well as the lemma. \end{proof} \begin{remark} Note that Lemma \ref{FIRSTLEMMA} also holds for real Banach spaces. In fact, if the spectrum $\sigma(T)$ is empty then the lemma holds trivially, and if it is nonempty then it is a compact set and the proof of Lemma \ref{FIRSTLEMMA} carries over. Due to this, all subsequent results in this paper are also valid for real Banach spaces. \end{remark} \begin{theorem}\label{UNIONTHEOREM} Let $ \mathcal{X}$ be a Banach space and $ T\in \text{Lip}_0(\mathcal{X})$. Suppose there are $\alpha_1\in \mathbb{R}$ and $ \beta_1>0$ such that \begin{align}\label{FIRSTTHMINEQUALITY} \|Tx-Ty-\alpha(x-y)\|\geq \beta_1\|x-y\|, \quad \forall x,y \in \mathcal{X}, \forall \alpha \leq \alpha_1. \end{align} Then $(-\infty, \alpha_1+\beta_1)\subseteq \rho(T).$ \end{theorem} \begin{proof} Let $0<\eta<\beta_1$. Define $\alpha_0\coloneqq \alpha_1+\eta$ and $\beta_0\coloneqq \beta_1-\eta$. Let $\alpha\leq \alpha_0$. Case (i): $\alpha\leq \alpha_1$. Then from Inequality (\ref{FIRSTTHMINEQUALITY}), $\|Tx-Ty-\alpha(x-y)\|\geq \beta_1\|x-y\|\geq \beta_0\|x-y\|, \forall x,y \in \mathcal{X}$. Case (ii): Let $\alpha_1< \alpha\leq \alpha_0$.
Then \begin{align*} \|Tx-Ty-\alpha(x-y)\|&\geq \|Tx-Ty-\alpha_1(x-y)\|-(\alpha-\alpha_1)\|x-y\|\\ &\geq(\beta_1-\eta)\|x-y\|=\beta_0\|x-y\|, \quad \forall x,y \in \mathcal{X}. \end{align*} Lemma \ref{FIRSTLEMMA} now gives $(-\infty, \alpha_0]\subseteq \rho(T).$ Result follows by observing that \begin{align*} (-\infty, \alpha_1+\beta_1)=\bigcup\limits_{0<\eta<\beta_1}(-\infty, \alpha_1+\eta]. \end{align*} \end{proof} \begin{theorem}\label{OIDENTITYTHEOREM} Let $ \mathcal{X}$ be a Banach space, $ T : \mathcal{X}\rightarrow \mathcal{X}$ be a map, $T0=0$ and there exist $ \lambda_1,\lambda_2 \in \left [0, 1 \right )$ such that \begin{align} \label{PER} \|Tx-Ty-(x-y)\|\leq\lambda_1\|x-y\|+\lambda_2\|Tx-Ty\|,\quad \forall x,y \in \mathcal{X}. \end{align} Then \begin{enumerate}[\upshape(i)] \item $T$ is Lipschitz and \begin{align}\label{PERE} \frac{1-\lambda_1}{1+\lambda_2}\|x-y\|\leq\|Tx-Ty\|\leq\frac{1+\lambda_1}{1-\lambda_2} \|x-y\|, \quad \forall x,y \in \mathcal{X}. \end{align} \item We have \begin{align*} \left(-\infty, \frac{1-\lambda_1}{1+\lambda_2}\right)\subseteq \rho(T). \end{align*} \item $T$ is invertible and \begin{align*} \frac{1-\lambda_2}{1+\lambda_1}\|x-y\|\leq\|T^{-1}x-T^{-1}y\|\leq\frac{1+\lambda_2}{1-\lambda_1} \|x-y\|, \quad\forall x,y \in \mathcal{X}. \end{align*} \item We have \begin{align*} \frac{1-\lambda_1}{1+\lambda_2}\leq\|T\|_{\text{Lip}_0}\leq\frac{1+\lambda_1}{1-\lambda_2} \quad \text{ and } \quad \frac{1-\lambda_2}{1+\lambda_1}\leq\|T^{-1}\|_{\text{Lip}_0}\leq\frac{1+\lambda_2}{1-\lambda_1}. 
\end{align*} \end{enumerate} \end{theorem} \begin{proof} Let $x,y \in \mathcal{X}.$ Then using Inequality (\ref{PER}) \begin{align*} \|Tx-Ty\|&\leq \|Tx-Ty-(x-y)\|+\|x-y\|\leq \lambda_1\|x-y\|+\lambda_2\|Tx-Ty\|+\|x-y\|\\ &=(1+\lambda_1)\|x-y\|+\lambda_2\|Tx-Ty\|\\ &\implies \|Tx-Ty\|\leq\frac{1+\lambda_1}{1-\lambda_2} \|x-y\| \end{align*} and \begin{align*} \|x-y\|&\leq \|Tx-Ty-(x-y)\|+\|Tx-Ty\|\leq \lambda_1\|x-y\|+\lambda_2\|Tx-Ty\|+\|Tx-Ty\|\\ &=\lambda_1\|x-y\|+(1+\lambda_2)\|Tx-Ty\|\\ &\implies \frac{1-\lambda_1}{1+\lambda_2}\|x-y\|\leq\|Tx-Ty\|. \end{align*} Hence $T$ is Lipschitz and (i) holds. Let $\alpha\leq0$. Then \begin{align*} \|Tx-Ty-\alpha(x-y)\|&=\|(1-\alpha)(x-y)-(x-y-(Tx-Ty))\|\\ &\geq (1-\alpha)\|x-y\|-\|Tx-Ty-(x-y)\|\\ &\geq (1-\alpha)\|x-y\|-\lambda_1\|x-y\|-\lambda_2\|Tx-Ty\|\\ &= (1-\alpha-\lambda_1)\|x-y\|-\lambda_2\|Tx-Ty\|\\ &\geq (1-\alpha-\lambda_1)\|x-y\|-\lambda_2\|Tx-Ty-\alpha(x-y)\|+\lambda_2\alpha \|x-y\|\\ &=(1-\alpha-\lambda_1+\lambda_2\alpha)\|x-y\|-\lambda_2\|Tx-Ty-\alpha(x-y)\| \end{align*} which implies \begin{align*} \|Tx-Ty-\alpha(x-y)\|&\geq \frac{1-\alpha-\lambda_1+\lambda_2\alpha}{1+\lambda_2}\|x-y\|\\ &=\frac{1-\lambda_1-(1-\lambda_2)\alpha}{1+\lambda_2}\|x-y\|\geq \frac{1-\lambda_1}{1+\lambda_2}\|x-y\|. \end{align*} By applying Theorem \ref{UNIONTHEOREM} for $\alpha_1=0$ and $\beta_1=\frac{1-\lambda_1}{1+\lambda_2}$ we get $ (-\infty, \frac{1-\lambda_1}{1+\lambda_2})\subseteq \rho(T).$ Since $0\in (-\infty, \frac{1-\lambda_1}{1+\lambda_2})$, $T $ is invertible. Using Inequality (\ref{PERE}) we then get \begin{align*} \frac{1-\lambda_1}{1+\lambda_2}\|T^{-1}u-T^{-1}v\|\leq\|u-v\|\leq\frac{1+\lambda_1}{1-\lambda_2} \|T^{-1}u-T^{-1}v\|, \quad \forall u,v \in \mathcal{X} \end{align*} which gives (iii). Finally (iv) follows from (i) and (iii). \end{proof} \begin{theorem}\label{OSECONDTHEOREM} Let $ \mathcal{X}$, $ \mathcal{Y}$ be Banach spaces and $S\in \text{Lip}_0(\mathcal{X}, \mathcal{Y}) $ be invertible. 
Let $ T : \mathcal{X}\rightarrow \mathcal{Y}$ be a map, $T0=0$ and there exist $ \lambda_1,\lambda_2 \in \left [0, 1 \right )$ such that \begin{align}\label{123} \|Tx-Ty-(Sx-Sy)\|\leq\lambda_1\|Sx-Sy\|+\lambda_2\|Tx-Ty\|,\quad \forall x,y \in \mathcal{X}. \end{align} Then \begin{enumerate}[\upshape(i)] \item $T$ is Lipschitz and \begin{align*} \frac{1-\lambda_1}{1+\lambda_2}\|Sx-Sy\|\leq\|Tx-Ty\|\leq\frac{1+\lambda_1}{1-\lambda_2} \|Sx-Sy\|, \quad \forall x,y \in \mathcal{X}. \end{align*} \item $\alpha S-T$ is invertible for all $\alpha \in \left(-\infty, \frac{1-\lambda_1}{1+\lambda_2}\right)$. \item $T$ is invertible and \begin{align*} \frac{1-\lambda_2}{1+\lambda_1}\frac{1}{\|S\|_{\text{Lip}_0}}\|u-v\|\leq\|T^{-1}u-T^{-1}v\|\leq\frac{1+\lambda_2}{1-\lambda_1} \|S^{-1}\|_{\text{Lip}_0}\|u-v\|, \quad\forall u,v \in \mathcal{Y}. \end{align*} \item We have \begin{align*} \frac{1-\lambda_1}{1+\lambda_2} \|S\|_{\text{Lip}_0}\leq\|T\|_{\text{Lip}_0}\leq \frac{1+\lambda_1}{1-\lambda_2}\|S\|_{\text{Lip}_0} \quad \text{ and } \quad \frac{1-\lambda_2}{1+\lambda_1}\frac{1}{\|S\|_{\text{Lip}_0}}\leq\|T^{-1}\|_{\text{Lip}_0}\leq\frac{1+\lambda_2}{1-\lambda_1}\|S^{-1}\|_{\text{Lip}_0} . \end{align*} \end{enumerate} \end{theorem} \begin{proof} Define $R\coloneqq TS^{-1}$. Then Inequality (\ref{123}) gives \begin{align*} \|TS^{-1}u-TS^{-1}v-(SS^{-1}u-SS^{-1}v)\|\leq \lambda_1\|SS^{-1}u-SS^{-1}v\|+\lambda_2\|TS^{-1}u-TS^{-1}v\|, \quad \forall u,v \in \mathcal{Y}, \end{align*} i.e., \begin{align*} \|Ru-Rv-(u-v)\|\leq \lambda_1\|u-v\|+\lambda_2\|Ru-Rv\|, \quad \forall u,v \in \mathcal{Y}. \end{align*} By applying Theorem \ref{OIDENTITYTHEOREM} to $R$ we get the following. \begin{enumerate}[\upshape(i)] \item $R$ is Lipschitz hence $T$ is Lipschitz. Further, \begin{align*} \frac{1-\lambda_1}{1+\lambda_2}\|Sx-Sy\|\leq\|R(Sx)-R(Sy)\|\leq\frac{1+\lambda_1}{1-\lambda_2} \|Sx-Sy\|, \quad \forall x,y \in \mathcal{X}. 
\end{align*} But $\|R(Sx)-R(Sy)\|=\|Tx-Ty\|$, $\forall x,y \in \mathcal{X}.$ \item $\alpha I_\mathcal{Y}-R$ is invertible for all $ \alpha \in \left(-\infty, \frac{1-\lambda_1}{1+\lambda_2}\right)$. Since $S$ is invertible and $\alpha S-T=(\alpha I_\mathcal{Y}-R)S$, we then have that $\alpha S-T$ is invertible for all $\alpha \in \left(-\infty, \frac{1-\lambda_1}{1+\lambda_2}\right)$. \item $R$ is invertible, hence $T$ is invertible. Further, \begin{align*} \frac{1-\lambda_2}{1+\lambda_1}\frac{1}{\|S\|_{\text{Lip}_0}}\|u-v\|&\leq \frac{1}{\|S\|_{\text{Lip}_0}}\|R^{-1}u-R^{-1}v\| \leq \|S^{-1}(R^{-1}u)-S^{-1}(R^{-1}v)\|\\ &= \|T^{-1}u-T^{-1}v\|\leq \|S^{-1}\|_{\text{Lip}_0}\|R^{-1}u-R^{-1}v\|\\ &\leq \frac{1+\lambda_2}{1-\lambda_1}\|S^{-1}\|_{\text{Lip}_0}\|u-v\|, \quad\forall u,v \in \mathcal{Y}. \end{align*} \item This follows easily from (i) and (iii). \end{enumerate} \end{proof} Our next task is to derive the results without the condition $T0=0$. \begin{theorem}\label{NOBASEPOINTFIRST} Let $ \mathcal{X}$ be a Banach space, $ T : \mathcal{X}\rightarrow \mathcal{X}$ be a map and there exist $ \lambda_1,\lambda_2 \in \left [0, 1 \right )$ such that \begin{align*} \|Tx-Ty-(x-y)\|\leq\lambda_1\|x-y\|+\lambda_2\|Tx-Ty\|,\quad \forall x,y \in \mathcal{X}. \end{align*} Then \begin{enumerate}[\upshape(i)] \item $T$ is Lipschitz and \begin{align*} \frac{1-\lambda_1}{1+\lambda_2}\|x-y\|\leq\|Tx-Ty\|\leq\frac{1+\lambda_1}{1-\lambda_2} \|x-y\|, \quad \forall x,y \in \mathcal{X}. \end{align*} \item $\alpha I_\mathcal{X}-T$ is invertible for all $\alpha \in \left(-\infty, \frac{1-\lambda_1}{1+\lambda_2}\right)$. \item $T$ is invertible and \begin{align*} \frac{1-\lambda_2}{1+\lambda_1}\|x-y\|\leq\|T^{-1}x-T^{-1}y\|\leq\frac{1+\lambda_2}{1-\lambda_1} \|x-y\|, \quad\forall x,y \in \mathcal{X}.
\end{align*} \item We have \begin{align*} \frac{1-\lambda_1}{1+\lambda_2}\leq \text{Lip}(T)\leq\frac{1+\lambda_1}{1-\lambda_2} \quad \text{ and } \quad \frac{1-\lambda_2}{1+\lambda_1}\leq\text{Lip}(T^{-1})\leq\frac{1+\lambda_2}{1-\lambda_1}. \end{align*} \end{enumerate} \end{theorem} \begin{proof} Define \begin{align*} \tilde{T} x \coloneqq Tx-T0, \quad \forall x \in \mathcal{X}. \end{align*} Then $\tilde{T}0=0$ and \begin{align*} \|\tilde{T}x-\tilde{T}y-(x-y)\|&=\|Tx-Ty-(x-y)\|\leq\lambda_1\|x-y\|+\lambda_2\|Tx-Ty\|\\ &=\lambda_1\|x-y\|+\lambda_2\|\tilde{T}x-\tilde{T}y\|,\quad \forall x,y \in \mathcal{X}. \end{align*} Applying Theorem \ref{OIDENTITYTHEOREM} to $\tilde{T}$ and using the fact that `a map is bijective if and only if its translate is bijective', the proof is complete. \end{proof} \begin{theorem}\label{IMPORTANTTHEOREM} Let $ \mathcal{X}$, $ \mathcal{Y}$ be Banach spaces and $S\in \text{Lip}(\mathcal{X}, \mathcal{Y}) $ be invertible. Let $ T : \mathcal{X}\rightarrow \mathcal{Y}$ be a map and there exist $ \lambda_1,\lambda_2 \in \left [0, 1 \right )$ such that \begin{align*} \|Tx-Ty-(Sx-Sy)\|\leq\lambda_1\|Sx-Sy\|+\lambda_2\|Tx-Ty\|,\quad \forall x,y \in \mathcal{X}. \end{align*} Then \begin{enumerate}[\upshape(i)] \item $T$ is Lipschitz and \begin{align*} \frac{1-\lambda_1}{1+\lambda_2}\|Sx-Sy\|\leq\|Tx-Ty\|\leq\frac{1+\lambda_1}{1-\lambda_2} \|Sx-Sy\|, \quad \forall x,y \in \mathcal{X}. \end{align*} \item $\alpha S-T$ is invertible for all $\alpha \in \left(-\infty, \frac{1-\lambda_1}{1+\lambda_2}\right)$. \item $T$ is invertible and \begin{align*} \frac{1-\lambda_2}{1+\lambda_1}\frac{1}{\text{Lip}(S)}\|u-v\|\leq\|T^{-1}u-T^{-1}v\|\leq\frac{1+\lambda_2}{1-\lambda_1} \text{Lip}(S^{-1})\|u-v\|, \quad\forall u,v \in \mathcal{Y}.
\end{align*} \item We have \begin{align*} \frac{1-\lambda_1}{1+\lambda_2} \text{Lip}(S)\leq\text{Lip}(T)\leq \frac{1+\lambda_1}{1-\lambda_2}\text{Lip}(S) \quad \text{ and } \quad \frac{1-\lambda_2}{1+\lambda_1}\frac{1}{\text{Lip}(S)}\leq\text{Lip}(T^{-1})\leq\frac{1+\lambda_2}{1-\lambda_1}\text{Lip}(S^{-1}) . \end{align*} \end{enumerate} \end{theorem} \begin{proof} Define $R\coloneqq TS^{-1}$; the proof is similar to that of Theorem \ref{OSECONDTHEOREM}. \end{proof} The following two corollaries are motivated by \cite{HILDING}. \begin{corollary} Let $p\geq1$. Let $ \mathcal{X}$, $ \mathcal{Y}$ be Banach spaces and $S\in \text{Lip}(\mathcal{X}, \mathcal{Y}) $ be invertible. Let $ T : \mathcal{X}\rightarrow \mathcal{Y}$ be a map and there exist $ \lambda_1,\lambda_2 \in \left [0, 1 \right )$ such that \begin{align}\label{PGREATER1INEQUALITY} \|Tx-Ty-(Sx-Sy)\|\leq((\lambda_1\|Sx-Sy\|)^p+(\lambda_2\|Tx-Ty\|)^p)^\frac{1}{p},\quad \forall x,y \in \mathcal{X}. \end{align} Then \begin{enumerate}[\upshape(i)] \item $T$ is Lipschitz and \begin{align*} \frac{1-\lambda_1}{1+\lambda_2}\|Sx-Sy\|\leq\|Tx-Ty\|\leq\frac{1+\lambda_1}{1-\lambda_2} \|Sx-Sy\|, \quad \forall x,y \in \mathcal{X}. \end{align*} \item $\alpha S-T$ is invertible for all $\alpha \in \left(-\infty, \frac{1-\lambda_1}{1+\lambda_2}\right)$. \item $T$ is invertible and \begin{align*} \frac{1-\lambda_2}{1+\lambda_1}\frac{1}{\text{Lip}(S)}\|u-v\|\leq\|T^{-1}u-T^{-1}v\|\leq\frac{1+\lambda_2}{1-\lambda_1} \text{Lip}(S^{-1})\|u-v\|, \quad\forall u,v \in \mathcal{Y}. \end{align*} \item We have \begin{align*} \frac{1-\lambda_1}{1+\lambda_2} \text{Lip}(S)\leq\text{Lip}(T)\leq \frac{1+\lambda_1}{1-\lambda_2}\text{Lip}(S) \quad \text{ and } \quad \frac{1-\lambda_2}{1+\lambda_1}\frac{1}{\text{Lip}(S)}\leq\text{Lip}(T^{-1})\leq\frac{1+\lambda_2}{1-\lambda_1}\text{Lip}(S^{-1}) .
\end{align*} \end{enumerate} \end{corollary} \begin{proof} Note that if $r,s\geq0 $, then \begin{align*} (r^p+s^p)^\frac{1}{p}\leq r+s \quad \text{if} \quad p\geq 1. \end{align*} Hence Inequality (\ref{PGREATER1INEQUALITY}) gives \begin{align*} \|Tx-Ty-(Sx-Sy)\|&\leq((\lambda_1\|Sx-Sy\|)^p+(\lambda_2\|Tx-Ty\|)^p)^\frac{1}{p}\\ &\leq \lambda_1\|Sx-Sy\|+\lambda_2\|Tx-Ty\|,\quad \forall x,y \in \mathcal{X}. \end{align*} Result follows by applying Theorem \ref{IMPORTANTTHEOREM}. \end{proof} \begin{corollary} Let $0<p<1$. Let $ \mathcal{X}$, $ \mathcal{Y}$ be Banach spaces and $S\in \text{Lip}(\mathcal{X}, \mathcal{Y}) $ be invertible. Let $ T : \mathcal{X}\rightarrow \mathcal{Y}$ be a map and there exist $ \lambda_1,\lambda_2 \in \left [0 ,2^{1-\frac{1}{p}} \right )$ such that \begin{align*} \|Tx-Ty-(Sx-Sy)\|\leq((\lambda_1\|Sx-Sy\|)^p+(\lambda_2\|Tx-Ty\|)^p)^\frac{1}{p},\quad \forall x,y \in \mathcal{X}. \end{align*} Then \begin{enumerate}[\upshape(i)] \item $T$ is Lipschitz and \begin{align*} \frac{1-2^{\frac{1}{p}-1}\lambda_1}{1+2^{\frac{1}{p}-1}\lambda_2}\|Sx-Sy\|\leq\|Tx-Ty\|\leq\frac{1+2^{\frac{1}{p}-1}\lambda_1}{1-2^{\frac{1}{p}-1}\lambda_2} \|Sx-Sy\|, \quad \forall x,y \in \mathcal{X}. \end{align*} \item $\alpha S-T$ is invertible for all $\alpha \in \left(-\infty, \frac{1-2^{\frac{1}{p}-1}\lambda_1}{1+2^{\frac{1}{p}-1}\lambda_2}\right)$. \item $T$ is invertible and \begin{align*} \frac{1-2^{\frac{1}{p}-1}\lambda_2}{1+2^{\frac{1}{p}-1}\lambda_1}\frac{1}{\text{Lip}(S)}\|u-v\|\leq\|T^{-1}u-T^{-1}v\|\leq\frac{1+2^{\frac{1}{p}-1}\lambda_2}{1-2^{\frac{1}{p}-1}\lambda_1} \text{Lip}(S^{-1})\|u-v\|, \quad\forall u,v \in \mathcal{Y}. 
\end{align*} \item We have \begin{align*} \frac{1-2^{\frac{1}{p}-1}\lambda_1}{1+2^{\frac{1}{p}-1}\lambda_2} \text{Lip}(S)\leq\text{Lip}(T)\leq \frac{1+2^{\frac{1}{p}-1}\lambda_1}{1-2^{\frac{1}{p}-1}\lambda_2}\text{Lip}(S) \quad \text{ and } \\ \frac{1-2^{\frac{1}{p}-1}\lambda_2}{1+2^{\frac{1}{p}-1}\lambda_1}\frac{1}{\text{Lip}(S)}\leq\text{Lip}(T^{-1})\leq\frac{1+2^{\frac{1}{p}-1}\lambda_2}{1-2^{\frac{1}{p}-1}\lambda_1}\text{Lip}(S^{-1}) . \end{align*} \end{enumerate} \end{corollary} \begin{proof} Note that if $r,s\geq0 $, then \begin{align*} (r^p+s^p)^\frac{1}{p}\leq 2^{\frac{1}{p}-1}(r+s) \quad \text{if} \quad p< 1. \end{align*} Hence the assumed inequality gives \begin{align*} \|Tx-Ty-(Sx-Sy)\|&\leq((\lambda_1\|Sx-Sy\|)^p+(\lambda_2\|Tx-Ty\|)^p)^\frac{1}{p}\\ &\leq 2^{\frac{1}{p}-1}\lambda_1\|Sx-Sy\|+ 2^{\frac{1}{p}-1}\lambda_2\|Tx-Ty\|,\quad \forall x,y \in \mathcal{X}. \end{align*} The result follows by applying Theorem \ref{IMPORTANTTHEOREM}. \end{proof} Next we generalize Corollary 1 in \cite{CASAZZACHRISTENSEN}. \begin{corollary} Let $ \mathcal{X}$, $ \mathcal{Y}$ be Banach spaces and $S\in \text{Lip}(\mathcal{X}, \mathcal{Y}) $ be invertible. Let $ T : \mathcal{X}\rightarrow \mathcal{Y}$ be a map and there exists $ \lambda \in \left [0, 1 \right )$ such that \begin{align*} \|Tx-Ty-(Sx-Sy)\|\leq\lambda\|Sx-Sy\|+\|Tx-Ty\|,\quad \forall x,y \in \mathcal{X}. \end{align*} Then $T$ is Lipschitz, invertible and \begin{align*} \text{Lip}(T^{-1})\leq\frac{2}{1-\lambda}\text{Lip}(S^{-1}) . \end{align*} \end{corollary} \begin{proof} Define $R\coloneqq TS^{-1}$. Then \begin{align*} \|Ru-Rv-(u-v)\|\leq \lambda \|u-v\|+\|Ru-Rv\|, \quad \forall u,v \in \mathcal{Y}. \end{align*} Note that $\text{Lip}(R)\neq 0$. Let \begin{align*} 0<\varepsilon < \min \left\{1, \frac{1-\lambda}{\text{Lip}(R)}\right\}. \end{align*} Define $\lambda_1 \coloneqq \lambda +\varepsilon \text{Lip}(R)$ and $\lambda_2 \coloneqq 1-\varepsilon$.
Then $ \lambda_1,\lambda_2 \in \left [0, 1 \right )$ and \begin{align*} \|Ru-Rv-(u-v)\|&\leq \lambda \|u-v\|+\|Ru-Rv\|\\ &\leq \lambda \|u-v\|+\|Ru-Rv\|+\varepsilon (\text{Lip}(R)\|u-v\|- \|Ru-Rv\|)\\ &=(\lambda+\varepsilon \text{Lip}(R))\|u-v\|+(1-\varepsilon)\|Ru-Rv\| , \quad \forall u,v \in \mathcal{Y}. \end{align*} By applying Theorem \ref{NOBASEPOINTFIRST} we get that $R$ is Lipschitz, invertible and \begin{align*} \text{Lip}(R^{-1})\leq\frac{2-\varepsilon}{1-(\lambda+\varepsilon \text{Lip}(R))}. \end{align*} Since $\varepsilon$ can be made arbitrarily small, we must have \begin{align*} \text{Lip}(R^{-1})\leq\frac{2}{1-\lambda}. \end{align*} Substituting the expression of $R$ gives \begin{align*} \frac{1}{\text{Lip}(S^{-1})}\text{Lip}(T^{-1})\leq \text{Lip}(ST^{-1})=\text{Lip}(R^{-1})\leq\frac{2}{1-\lambda}. \end{align*} \end{proof} We finally derive the following non-linear version of Theorem \ref{GUOTHEOREM}. \begin{theorem}\label{LASTTHEOREM} Let $ \mathcal{X}$, $ \mathcal{Y}$ be Banach spaces and $S\in \text{Lip}(\mathcal{X}, \mathcal{Y}) $ be invertible. Let $ T : \mathcal{X}\rightarrow \mathcal{Y}$ be a Lipschitz map and there exist $ \lambda_1 \in \left [0, 1 \right )$ and $ \lambda_2 \in \left [0, 1 \right ]$ such that \begin{align*} \|Tx-Ty-(Sx-Sy)\|\leq\lambda_1\|Sx-Sy\|+\lambda_2\|Tx-Ty\|,\quad \forall x,y \in \mathcal{X}. \end{align*} Then $T$ is Lipschitz invertible. Further, for every $\varepsilon>0$ satisfying $1>\lambda_2-\varepsilon>0$ and $\lambda_1+\varepsilon \text{Lip}(TS^{-1})<1$, we have \begin{enumerate}[\upshape(i)] \item \begin{align*} \frac{1-\lambda_1-\varepsilon \text{Lip}(TS^{-1})}{1+\lambda_2-\varepsilon}\|Sx-Sy\|\leq\|Tx-Ty\|\leq\frac{1+\lambda_1+\varepsilon \text{Lip}(TS^{-1})}{1-\lambda_2+\varepsilon} \|Sx-Sy\|, \quad \forall x,y \in \mathcal{X}. \end{align*} \item $\alpha S-T$ is invertible for all $\alpha \in \left(-\infty, \frac{1-\lambda_1-\varepsilon \text{Lip}(TS^{-1})}{1+\lambda_2-\varepsilon}\right)$. 
\item $T$ is invertible and \begin{align*} \frac{1-\lambda_2+\varepsilon}{1+\lambda_1+\varepsilon \text{Lip}(TS^{-1})}\frac{1}{\text{Lip}(S)}\|u-v\|\leq\|T^{-1}u-T^{-1}v\|\leq\frac{1+\lambda_2-\varepsilon}{1-\lambda_1-\varepsilon \text{Lip}(TS^{-1})} \text{Lip}(S^{-1})\|u-v\|, \quad\forall u,v \in \mathcal{Y}. \end{align*} \item We have \begin{align*} &\frac{1-\lambda_1-\varepsilon \text{Lip}(TS^{-1})}{1+\lambda_2-\varepsilon} \text{Lip}(S)\leq\text{Lip}(T)\leq \frac{1+\lambda_1+\varepsilon \text{Lip}(TS^{-1})}{1-\lambda_2+\varepsilon}\text{Lip}(S) \quad \text{ and } \\ & \frac{1-\lambda_2+\varepsilon}{1+\lambda_1+\varepsilon \text{Lip}(TS^{-1})}\frac{1}{\text{Lip}(S)}\leq\text{Lip}(T^{-1})\leq\frac{1+\lambda_2-\varepsilon}{1-\lambda_1-\varepsilon \text{Lip}(TS^{-1})}\text{Lip}(S^{-1}) . \end{align*} \end{enumerate} \end{theorem} \begin{proof} Define $R\coloneqq TS^{-1}$. Then for every $\varepsilon>0$ satisfying $1>\lambda_2-\varepsilon>0$ and $\lambda_1+\varepsilon \text{Lip}(TS^{-1})<1$, \begin{align*} \|Ru-Rv-(u-v)\|&\leq \lambda_1\|u-v\|+\lambda_2\|Ru-Rv\|\\ &=\lambda_1\|u-v\|+(\lambda_2-\varepsilon)\|Ru-Rv\|+\varepsilon\|Ru-Rv\|\\ &\leq \lambda_1\|u-v\|+(\lambda_2-\varepsilon)\|Ru-Rv\|+\varepsilon \text{Lip}(R) \|u-v\|\\ &= (\lambda_1+\varepsilon \text{Lip}(R))\|u-v\|+(\lambda_2-\varepsilon)\|Ru-Rv\|, \quad \forall u,v \in \mathcal{Y}. \end{align*} The remaining part of the proof is similar to the proof of Theorem \ref{OSECONDTHEOREM}. \end{proof} It is an easy observation that the constant $\lambda_1$ in Theorem \ref{LASTTHEOREM} cannot be improved. We are therefore left with the following open problem. \begin{question} Can the constant $\lambda_2$ be improved further in Theorem \ref{LASTTHEOREM}? \end{question} \section{Applications} Our first two applications of Theorem \ref{IMPORTANTTHEOREM} are easy proofs of the Soderlind-Campanato perturbation and the Barbagallo-Ernst-Thera perturbation.
\begin{theorem}\cite{CAMPANATO, SODERLIND}\label{SODERLINDCAMPANATOAPP} (\textbf{Soderlind-Campanato perturbation}) Let $ \mathcal{X}$ be a real Banach space, $ A : \mathcal{X}\rightarrow \mathcal{X}$ be a map and there exist $\alpha>0, 0\leq \beta <1$ such that \begin{align*} \|\alpha Ax-\alpha Ay-(x-y)\|\leq\beta\|x-y\|,\quad \forall x,y \in \mathcal{X}. \end{align*} Then $A$ is Lipschitz, invertible and $\text{Lip}(A^{-1})\leq \frac{\alpha}{1-\beta}$. \end{theorem} \begin{proof} Set $T= \alpha A$ and $\lambda_1=\beta$ in Theorem \ref{IMPORTANTTHEOREM}. Then $\frac{1}{\alpha}\text{Lip}(A^{-1})=\text{Lip}(T^{-1})\leq \frac{1}{1-\lambda_1}= \frac{1}{1-\beta}$. \end{proof} \begin{theorem}\cite{BARBAGALLO}\label{BARBAGALLOAPP} (\textbf{Barbagallo-Ernst-Thera perturbation}) Let $ \mathcal{X}$ be a real Banach space, $ A : \mathcal{X}\rightarrow \mathcal{X}$ be a map and there exist $\alpha>0, 0\leq \beta <1$ such that \begin{align} \label{BET} \| Ax- Ay-(\alpha x-\alpha y)\|\leq\beta \|Ax-Ay\|,\quad \forall x,y \in \mathcal{X}. \end{align} Then \begin{enumerate}[\upshape(i)] \item If $\beta<1/2, $ then $A$ is Lipschitz, invertible and $\text{Lip}(A^{-1})\leq \frac{1-\beta}{\alpha(1-2\beta)}$. \item If $ \mathcal{X}$ is a Hilbert space, then $A$ is Lipschitz, invertible and $\text{Lip}(A^{-1})\leq \frac{1+\beta}{\alpha}$. \end{enumerate} \end{theorem} \begin{proof} Set $T= \frac{1}{\alpha} A$ and $\lambda_2=\beta$ in Theorem \ref{IMPORTANTTHEOREM}. Then \begin{enumerate}[\upshape(i)] \item $\alpha\text{Lip}(A^{-1})=\text{Lip}(T^{-1})\leq1+\lambda_2=1+\beta\leq \frac{1-\beta}{1-2\beta}$. \item $\alpha\text{Lip}(A^{-1})=\text{Lip}(T^{-1})\leq1+\lambda_2=1+\beta$. \end{enumerate} \end{proof} \begin{remark} In \cite{BARBAGALLO}, it is stated that ``the number $1/2$ in Theorem \ref{BARBAGALLOAPP} can not be improved'' (see line -7, page 20, line -14, page 20, and line 14, page 18 in \cite{BARBAGALLO}). In an attempt to give an example, an operator $A$ is constructed which satisfies $\|A-I_\mathcal{X}\|=1/2\|A\|$.
Unfortunately this example does not satisfy Inequality (\ref{BET}). The reason is the following. The operator $A$ is constructed as follows. Let $\mathcal{X}$ be a Banach space and assume that $\mathcal{X}=\mathcal{Y} \oplus \mathcal{Z}$ for some closed subspaces $\mathcal{Y}$ and $\mathcal{Z}$ of $\mathcal{X}$. Then the projections $P:\mathcal{Y} \oplus \mathcal{Z}\ni y\oplus z \to y \in \mathcal{X}$, $Q:\mathcal{Y} \oplus \mathcal{Z}\ni y\oplus z \to z \in \mathcal{X}$ are bounded linear operators. Define $A\coloneqq 2P$, which is linear. Then \begin{align*} A-I_\mathcal{X}=2P-(P+Q)=P-Q. \end{align*} Now for $x=y\oplus z\in \mathcal{Y} \oplus \mathcal{Z}$, we have \begin{align*} \|Ax-x\|=\|Px-Qx\|=\|y\oplus (-z)\| \quad \text{ and } \quad \|Ax\|=\|2Px\|=2\|y\|. \end{align*} Thus there is no $\beta\geq 0$ such that \begin{align*} \|Ax-x\|=\|y\oplus (-z)\|\leq 2\beta\|y\|=\beta \|Ax\|,\quad \forall x=y\oplus z\in \mathcal{Y} \oplus \mathcal{Z}. \end{align*} In view of Theorem \ref{IMPORTANTTHEOREM}, we see that the statement given in \cite{BARBAGALLO} ``the number $1/2$ in Theorem \ref{BARBAGALLOAPP} can not be improved'' is false. However, the statement ``$1/2$ is optimal for symmetry modulus'' is true. \end{remark} We now give applications to the theory of frames. The Paley-Wiener theorem for orthonormal bases in Hilbert spaces inspired the study of perturbations of frames for Hilbert spaces. This was first derived by Christensen in his two papers \cite{CHRISTENSENPALEY1, CHRISTENSENPALEY2}. This motivated the study of perturbations of frames and atomic decompositions for Banach spaces \cite{CHRISTENSENHEIL}. The crucial result used in all these perturbation results is the Neumann series. Later, using Theorem \ref{CASAZZAKALTONTHEOREM}, Casazza and Christensen \cite{CASAZZACHRISTENSEN} improved the results obtained in the paper \cite{CHRISTENSENPALEY2}. Using Theorem \ref{CASAZZAKALTONTHEOREM}, Stoeva made a systematic study of perturbations of frames for Banach spaces \cite{STOEVA}.
For the sake of completeness, we note that Theorem \ref{CASAZZAKALTONTHEOREM} was used in the study of perturbations of frames for Hilbert C*-modules \cite{HANJING}.\\ A large body of work on frames for Hilbert spaces (see \cite{DUFFINSCHAEFFER, CHRISTENSENBOOK, HANLARSONMEMOIRS, HANKORNELSON}) led to the well-developed theory of frames (known as Banach frames and $\mathcal{X}_d$-frames) for Banach spaces (see \cite{GROCHENIG, CASAZZAHANLARSON, CASAZZACHRISTENSENSTOEVA, CHRISTENSENSTOEVAP}), which in turn led to the beginning of frames for metric spaces (known as metric frames) \cite{KRISHNAJOHNSON}. To state these definitions we need the definition of a BK-space (Banach scalar valued sequence space or Banach co-ordinate space). \begin{definition} \cite{BANASMURSALEEN} A sequence space $\mathcal{M}_d$ is said to be a BK-space if all the coordinate functionals are continuous, i.e., whenever $\{x_n\}_n$ is a sequence in $\mathcal{M}_d$ converging to $x \in \mathcal{M}_d$, each coordinate of $x_n$ converges to the corresponding coordinate of $x$. \end{definition} \begin{definition}\cite{KRISHNAJOHNSON}\label{METRICBANACHFRAME} Let $\mathcal{M}$ be a metric space and $\mathcal{M}_d$ be an associated BK-space. Let $\{f_n\}_{n}$ be a collection in $\operatorname{Lip}(\mathcal{M}, \mathbb{K})$ (where $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$) and $S: \mathcal{M}_d \rightarrow \mathcal{M}$. If: \begin{enumerate}[\upshape(i)] \item $\{f_n(x)\}_{n} \in \mathcal{M}_d$, for each $x \in \mathcal{M}$, \item There exist positive $a, b$ such that $ a\, d(x,y) \leq \|\{f_n(x)-f_n(y)\}_n\|_{\mathcal{M}_d} \leq b\, d(x,y), \forall x , y\in \mathcal{M}, $ \item $S$ is Lipschitz and $S(\{f_n(x)\}_{n})=x$, for each $x \in \mathcal{M}$, \end{enumerate} then we say that $(\{f_n\}_{n}, S)$ is a \textbf{metric frame} for $\mathcal{M}$ with respect to $\mathcal{M}_d$. We call the constant $a$ a lower frame bound and the constant $b$ an upper frame bound.
\end{definition} As noted in \cite{KRISHNAJOHNSON}, we observe that if $(\{f_n\}_{n}, S)$ is a metric frame for $\mathcal{M}$ w.r.t. $\mathcal{M}_d$, then Definition \ref{METRICBANACHFRAME} shows that the analysis map \begin{align*} \theta_f:\mathcal{M} \ni x \mapsto \theta_f x\coloneqq \{f_n(x)\}_{n} \in \mathcal{M}_d \end{align*} is well-defined and bi-Lipschitz. Further, $S\theta_f =I_\mathcal{M}$. We now generalize Theorem 4 in \cite{CASAZZACHRISTENSEN} to metric frames for Banach spaces. \begin{theorem}\label{STABILITYMA} Let $(\{f_n\}_{n}, S)$ be a metric frame with lower frame bound $a$ and upper frame bound $b$ for a Banach space $\mathcal{X}$ w.r.t. $\mathcal{M}_d$. Let $T: \mathcal{M}_d \rightarrow \mathcal{X}$ be a Lipschitz map and suppose that there exist $\lambda_1, \lambda_2, \mu\geq 0$ such that $\max\{\lambda_2, \lambda_1+\mu b\}<1$ and \begin{align}\label{PERINEQ} \|S\{c_n\}_n-S\{d_n\}_n-(T\{c_n\}_n-T\{d_n\}_n)\|&\leq \lambda_1\|S\{c_n\}_n-S\{d_n\}_n\|+\lambda_2\|T\{c_n\}_n-T\{d_n\}_n\|\nonumber\\ &\quad +\mu \|\{c_n-d_n\}_n\|, \quad \forall \{c_n\}_n,\{d_n\}_n \in \mathcal{M}_d. \end{align} Then there exists a collection $\{g_n\}_{n}$ in $\operatorname{Lip}(\mathcal{X}, \mathbb{K})$ such that $(\{g_n\}_{n}, T)$ is a metric frame for $\mathcal{X}$ with lower and upper frame bounds \begin{align*} \frac{a(1-\lambda_2)}{1+\lambda_1+\mu b}, \quad \frac{b(1+\lambda_2)}{1-(\lambda_1+\mu b)} \end{align*} respectively. \end{theorem} \begin{proof} Given $x, y \in \mathcal{X}$, by taking $\{c_n\}_n$ as $\theta_fx$ and $\{d_n\}_n$ as $\theta_fy$ in Inequality (\ref{PERINEQ}) we get \begin{align*} \|S\theta_fx-S\theta_fy-(T\theta_fx-T\theta_fy)\|\leq \lambda_1 \|S\theta_fx-S\theta_fy\|+\lambda_2 \|T\theta_fx-T\theta_fy\|+\mu\|\theta_fx-\theta_fy\|, \quad \forall x,y \in \mathcal{X}.
\end{align*} But $S\theta_fx=x, \forall x \in \mathcal{X}$ and hence \begin{align*} \|x-y-(T\theta_fx-T\theta_fy)\|&\leq \lambda_1 \|x-y\|+\lambda_2 \|T\theta_fx-T\theta_fy\|+\mu\|\theta_fx-\theta_fy\|\\ &\leq (\lambda_1+ \mu \operatorname{Lip}(\theta_f))\|x-y\|+\lambda_2 \|T\theta_fx-T\theta_fy\|\\ &\leq (\lambda_1+ \mu b)\|x-y\|+\lambda_2 \|T\theta_fx-T\theta_fy\|, \quad \forall x,y \in \mathcal{X}. \end{align*} Theorem \ref{IMPORTANTTHEOREM} now says that the map $T\theta_f$ is Lipschitz invertible and \begin{align*} \frac{1-\lambda_2}{1+\lambda_1+ \mu b}\leq\text{Lip}((T\theta_f)^{-1})\leq\frac{1+\lambda_2}{1-(\lambda_1+ \mu b)}. \end{align*} Define $g_n\coloneqq f_n(T\theta_f)^{-1}$ for all $n \in \mathbb{N}$. Then $g_n$ is Lipschitz for all $n$, $\{g_n(x)\}_{n} \in \mathcal{M}_d$, for each $x \in \mathcal{X}$ and \begin{align*} \|\{g_n(x)-g_n(y)\}_n\|&=\|\{f_n((T\theta_f)^{-1}(x))-f_n((T\theta_f)^{-1}(y))\}_n\|\leq b \|(T\theta_f)^{-1}(x)-(T\theta_f)^{-1}(y)\|\\ &\leq b \frac{1+\lambda_2}{1-(\lambda_1+ \mu b)}\|x-y\|, \quad \forall x,y \in \mathcal{X}, \end{align*} \begin{align*} a\frac{1-\lambda_2}{1+\lambda_1+ \mu b}\|x-y\|&\leq a \|(T\theta_f)^{-1}(x)-(T\theta_f)^{-1}(y)\|\leq \|\{f_n((T\theta_f)^{-1}(x))-f_n((T\theta_f)^{-1}(y))\}_n\|\\ &= \|\{g_n(x)-g_n(y)\}_n\|, \quad \forall x,y \in \mathcal{X} \end{align*} which establishes the lower and upper frame bounds. Further, \begin{align*} T\{g_n(x)\}_{n}=T\{f_n(T\theta_f)^{-1}(x)\}_{n}=T\theta_f((T\theta_f)^{-1}(x))=x, \quad \forall x \in \mathcal{X}. \end{align*} Therefore $(\{g_n\}_{n}, T)$ is a metric frame for $\mathcal{X}$. \end{proof} We now give another application of Theorem \ref{IMPORTANTTHEOREM}. For this purpose, we introduce the notion of non-linear atomic decompositions. \begin{definition}\label{ATOMICMETRIC} Let $\mathcal{X}$ be a Banach space and $\mathcal{M}_d$ be a BK-space.
Let $\{f_n\}_{n}$ be a sequence in $\operatorname{Lip}(\mathcal{X}, \mathbb{K})$ and $\{\tau_n\}_{n}$ be a sequence in $\mathcal{X}$. If: \begin{enumerate}[\upshape(i)] \item $\{f_n(x)\}_{n} \in \mathcal{M}_d$, for each $x \in \mathcal{X}$, \item There exist positive $a, b$ such that \begin{align*} a\, \|x-y\| \leq \|\{f_n(x)-f_n(y)\}_n\|_{\mathcal{M}_d} \leq b\, \|x-y\|, \quad \forall x , y\in \mathcal{X}, \end{align*} \item $x=\sum_{n=1}^{\infty}f_n(x)\tau_n$, for each $x \in \mathcal{X}$, \end{enumerate} then we say that $(\{f_n\}_{n}, \{\tau_n\}_{n})$ is a \textbf{Lipschitz atomic decomposition} for $\mathcal{X}$ with respect to $\mathcal{M}_d$. We call the constant $a$ a lower Lipschitz atomic bound and the constant $b$ an upper Lipschitz atomic bound. \end{definition} In \cite{CASAZZAHANLARSON} it is proved that not every Banach space admits an atomic decomposition. Motivated by this, we ask the following open problem. \begin{question} Classify Banach spaces which admit Lipschitz atomic decompositions. \end{question} The following proposition gives various examples of Lipschitz atomic decompositions. \begin{proposition}\label{LIPATOMICPRO} Let $(\{g_n\}_n, \{\omega_n\}_n)$ be an atomic decomposition for a Banach space $\mathcal{Y}$ w.r.t. BK-space $\mathcal{M}_d$. Let $\mathcal{X}$ be a Banach space and let $A:\mathcal{X}\to \mathcal{Y}$ be a bi-Lipschitz map such that there exists a linear map $B:\mathcal{Y}\to \mathcal{X}$ satisfying $BA=I_\mathcal{X}$. Then $(\{f_n\coloneqq g_nA\}_n, \{\tau_n\coloneqq B\omega_n\}_n)$ is a Lipschitz atomic decomposition for $\mathcal{X}$ w.r.t. $\mathcal{M}_d$. In particular, if a Banach space admits a Schauder basis, then it admits a Lipschitz atomic decomposition. \end{proposition} A particular case of Proposition \ref{LIPATOMICPRO} gives the following example. \begin{example} Let $(\{g_n\}_n, \{\omega_n\}_n)$ be an atomic decomposition for a Banach space $\mathcal{X}$ w.r.t. a BK-space $\mathcal{M}_d$.
Let $T:\mathcal{X}\to \mathcal{X}$ be any bi-Lipschitz map. Define $A:\mathcal{X}\ni x \mapsto (x, Tx) \in \mathcal{X}\oplus \mathcal{X}$ and $B:\mathcal{X}\oplus \mathcal{X} \ni (x,y)\mapsto x \in \mathcal{X}$. Then $A$ is bi-Lipschitz, $B$ is linear and satisfies $BA=I_\mathcal{X}$. Hence $(\{f_n\coloneqq g_nA\}_n, \{\tau_n\coloneqq B\omega_n\}_n)$ is a Lipschitz atomic decomposition for $\mathcal{X}$ w.r.t. $\mathcal{M}_d$. \end{example} At this point it seems best to develop some theory of Lipschitz atomic decompositions before giving an application of Theorem \ref{IMPORTANTTHEOREM}. Proposition 2.3 in \cite{CASAZZAHANLARSON} shows that under certain conditions, there is a close relationship between Banach frames and atomic decompositions. The following is the non-linear version of that result. \begin{proposition} Let $\mathcal{X}$ be a Banach space and $\mathcal{M}_d$ be a BK-space. Let $\{f_n\}_n$ be a sequence in $\operatorname{Lip}(\mathcal{X}, \mathbb{K})$ and $S:\mathcal{M}_d\rightarrow \mathcal{X}$ be a bounded linear operator. If the standard unit vectors $\{e_n\}_n$ form a Schauder basis for $\mathcal{M}_d$, then the following are equivalent. \begin{enumerate}[\upshape(i)] \item $(\{f_n\}_n, S)$ is a metric frame for $\mathcal{X}$. \item $(\{f_n\}_n, \{Se_n\}_n)$ is a Lipschitz atomic decomposition for $\mathcal{X}$ w.r.t. $\mathcal{M}_d$. \end{enumerate} \end{proposition} \begin{proof} We set $\tau_n=Se_n,\forall n \in \mathbb{N}$ and see that \begin{align*} \sum_{n=1}^\infty f_n(x)\tau_n=\sum_{n=1}^\infty f_n(x)Se_n=S\left(\sum_{n=1}^\infty f_n(x)e_n\right)=S\left(\{f_n(x)\}_n\right), \quad \forall x \in \mathcal{X}. \end{align*} \end{proof} The well-established dilation theory of frames for Hilbert spaces says that frames are images of Riesz bases under projections \cite{CZAJA, KASHINKUKILOVA, HANLARSONMEMOIRS}.
This result has been extended to frames and atomic decompositions for Banach spaces \cite{HANLARSONLIUJOURNALFUN, HANLARSONLIUCON, LARSONSZAFRANIEC, HANLARSONLIULIU, CASAZZAHANLARSON}. In the next theorem we derive a dilation result for Lipschitz atomic decompositions. We need the following proposition in the proof of the theorem. \begin{proposition}\cite{LINDENSTRAUSSBOOK}\label{SBCHAR} A sequence $\{\tau_n\}_{n}$ in a Banach space $\mathcal{X}$ is a Schauder basis for $\mathcal{X}$ if and only if the following three conditions hold. \begin{enumerate}[\upshape(i)] \item $ \tau_n\neq 0$ for all $n$. \item There exists $b>0$ such that for every sequence $\{a_k\}_{k}$ of scalars and every pair of natural numbers $n<m$, we have \begin{align*} \left\|\sum_{k=1}^na_k\tau _k\right\|\leq b \left\|\sum_{k=1}^ma_k\tau _k\right\|. \end{align*} \item $\overline{\operatorname{span}}\{\tau_n\}_{n}=\mathcal{X}$. \end{enumerate} \end{proposition} \begin{theorem}\label{LIPPEL} Let $(\{f_n\}_{n}, \{\tau_n\}_{n})$ be a Lipschitz atomic decomposition for a Banach space $\mathcal{X}$ w.r.t. $\mathcal{M}_d$. Then there is a Banach space $\mathcal{Z}$ with a Schauder basis $\{\omega_n\}_{n}$, an injective map $\theta: \mathcal{X} \to \mathcal{Z}$ and a map $P:\mathcal{Z}\rightarrow \mathcal{Z}$ satisfying $P(\mathcal{Z})=\mathcal{X}$, $P^2=P$ and $P\omega_n=\theta\tau_n, \forall n \in \mathbb{N}$. \end{theorem} \begin{proof} We generalize the idea of the proof of Theorem 2.6 in \cite{CASAZZAHANLARSON} (which is motivated by the arguments in \cite{PELCZYNSKI}) to the non-linear setting. Let $c_{00}$ be the vector space of scalar sequences with only finitely many non-zero terms. Let $\{e_n\}_{n}$ be the standard unit vectors in $c_{00}$. \\ Case (i): $\tau_n\neq 0$, for all $n$. We define a norm on $c_{00}$ as follows. Let $\{a_n\}_{n} \in c_{00}$. Define \begin{align}\label{NORMDEF} \left\|\sum_{n=1}^\infty a_ne_n\right\|\coloneqq \max _{n}\left\|\sum_{k=1}^na_k\tau_k\right\|.
\end{align} Proposition \ref{SBCHAR} then shows that $\{e_n\}_{n}$ is a Schauder basis for the completion of $c_{00}$ w.r.t. the norm just defined, which we call $\mathcal{Z}$. Define \begin{align*} \theta: \mathcal{X} \ni x \mapsto \theta x\coloneqq \sum_{n=1}^\infty f_n(x)e_n \in \mathcal{Z}. \end{align*} From the first condition of the definition of Lipschitz atomic decomposition, from Equation (\ref{NORMDEF}) and from the construction of $\mathcal{Z}$, it follows that $\theta$ is well-defined. From the third condition of the definition of Lipschitz atomic decomposition, $\theta$ is injective. We next define \begin{align*} \Gamma: \mathcal{Z} \ni \sum_{n=1}^\infty a_ne_n \mapsto \Gamma \left(\sum_{n=1}^\infty a_ne_n\right)\coloneqq \sum_{n=1}^\infty a_n\tau_n \in \mathcal{X}. \end{align*} By verifying that $\Gamma$ is bounded and linear on the dense subset $c_{00}$ of $\mathcal{Z}$, we see that $\Gamma$ is bounded on $\mathcal{Z}$. Then \begin{align}\label{PISONTO} \Gamma \theta x=\Gamma\left(\sum_{n=1}^\infty f_n(x)e_n\right)=\sum_{n=1}^\infty f_n(x)\tau_n=x, \quad \forall x \in \mathcal{X}. \end{align} So if we define $P\coloneqq \theta \Gamma$, then $P^2=\theta \Gamma\theta \Gamma=\theta \Gamma=P$. Equation (\ref{PISONTO}) shows that $P(\mathcal{Z})=\mathcal{X}$. We next see that $Pe_n=\theta \Gamma e_n=\theta \tau_n$, $\forall n $. Thus we can take $\omega_n=e_n$, for all $n$ to get the result. \\ Case (ii): $\tau_n= 0$, for some $n$. Let $J=\{n:\tau_n\neq 0\}$. We now apply case (i) to the collection $\{\tau_n\}_{n\in J}$. Let $\theta$, $\mathcal{Z}$, $\Gamma$ and $P$ be as in case (i). Without affecting the definition of atomic decomposition, we can take $f_n= 0$ for all $n \in J^c$. Now consider the space $\mathcal{Z}\oplus \ell^2(J^c)$ and let $\{\rho_n\}_{n\in J^c}$ be an orthonormal basis for $\ell^2(J^c)$. Define $Q:\mathcal{Z}\oplus \ell^2(J^c) \ni z\oplus y\mapsto Q(z\oplus y)\coloneqq P z\oplus 0 \in \mathcal{Z}\oplus \ell^2(J^c)$.
Now the space $\mathcal{Z}\oplus \ell^2(J^c)$ has Schauder basis $\{e_n \oplus 0, 0\oplus \rho_m\}_{n \in J, m\in J^c}$ and $Q$ satisfies the conclusions. Thus we can take $\omega_n=e_n$, for all $n\in J$ and $\omega_n=\rho_n$, for all $n\in J^c$ to get the result. \end{proof} We leave the further study of Lipschitz atomic decompositions to future work and end the paper with an application of Theorem \ref{IMPORTANTTHEOREM}. \begin{theorem}\label{APPLICATIONTHEOREMLAST} Let $(\{f_n\}_{n}, \{\tau_n\}_{n})$ be a Lipschitz atomic decomposition with lower Lipschitz atomic bound $a$ and upper Lipschitz atomic bound $b$ for $\mathcal{X}$ w.r.t. $\mathcal{M}_d$. Let $\{\omega_n\}_{n}$ be a collection in $ \mathcal{X}$ and suppose that there exist $\lambda_1, \lambda_2, \mu\geq 0$ such that the following conditions hold. \begin{enumerate}[\upshape(i)] \item For each $x\in \mathcal{X}$, the series $\sum_{n=1}^{\infty}f_n(x)\omega_n$ converges in $\mathcal{X}$. \item $\max\{\lambda_2, \lambda_1+\mu b\}<1$ and \item \end{enumerate} \begin{align}\label{PERINEQLIP} \left\|\sum_{n=1}^{\infty}(c_n-d_n)(\tau_n-\omega_n)\right\|&\leq \lambda_1\left \|\sum_{n=1}^{\infty}(c_n-d_n)\tau_n\right\|+\lambda_2\left \|\sum_{n=1}^{\infty}(c_n-d_n)\omega_n\right\|\nonumber\\ &\quad +\mu \|\{c_n-d_n\}_n\|, \quad \forall \{c_n\}_n,\{d_n\}_n \in \mathcal{M}_d. \end{align} Then there exists a collection $\{g_n\}_{n}$ in $\operatorname{Lip}(\mathcal{X}, \mathbb{K})$ such that $(\{g_n\}_{n}, \{\omega_n\}_{n})$ is a Lipschitz atomic decomposition for $\mathcal{X}$ with lower and upper Lipschitz atomic bounds \begin{align*} \frac{a(1-\lambda_2)}{1+\lambda_1+\mu b}, \quad \frac{b(1+\lambda_2)}{1-(\lambda_1+\mu b)} \end{align*} respectively. \end{theorem} \begin{proof} From the first condition we get that the map $T: \mathcal{X} \ni x \mapsto \sum_{n=1}^{\infty}f_n(x)\omega_n \in \mathcal{X}$ is well-defined.
Now using Inequality (\ref{PERINEQLIP}), \begin{align*} \|x-y-(Tx-Ty)\|&=\left\| \sum_{n=1}^{\infty}(f_n(x)-f_n(y))\tau_n- \sum_{n=1}^{\infty}(f_n(x)-f_n(y))\omega_n\right\|\\ &=\left\|\sum_{n=1}^{\infty}(f_n(x)-f_n(y))(\tau_n-\omega_n)\right\|\\ &\leq \lambda_1 \left\|\sum_{n=1}^{\infty}(f_n(x)-f_n(y))\tau_n\right\|+\lambda_2\left\|\sum_{n=1}^{\infty}(f_n(x)-f_n(y))\omega_n\right\| +\mu \|\{f_n(x)-f_n(y)\}_n\|\\ &=\lambda_1 \left\|x-y\right\|+\lambda_2\left\|Tx-Ty\right\| +\mu \|\{f_n(x)-f_n(y)\}_n\|\\ &\leq (\lambda_1+\mu b)\left\|x-y\right\|+\lambda_2\left\|Tx-Ty\right\|, \quad \forall x, y \in \mathcal{X}. \end{align*} Theorem \ref{IMPORTANTTHEOREM} then says that $T$ is Lipschitz, invertible and \begin{align*} \frac{1-\lambda_2}{1+\lambda_1+ \mu b}\leq\text{Lip}(T^{-1})\leq\frac{1+\lambda_2}{1-(\lambda_1+ \mu b)}. \end{align*} Define $g_n\coloneqq f_nT^{-1}$, $\forall n \in \mathbb{N}$. Then $\{g_n(x)\}_{n} \in \mathcal{M}_d$, for each $x \in \mathcal{X}$ and \begin{align*} \|\{g_n(x)-g_n(y)\}_n\|&= \|\{f_n(T^{-1}x)-f_n(T^{-1}y)\}_n\|\leq b\|T^{-1}x-T^{-1}y\|\\ &\leq b \frac{1+\lambda_2}{1-(\lambda_1+ \mu b)}\|x-y\|, \quad \forall x,y \in \mathcal{X}, \end{align*} \begin{align*} a\frac{1-\lambda_2}{1+\lambda_1+ \mu b}\|x-y\|&\leq a \|T^{-1}x-T^{-1}y\|\leq \|\{f_n(T^{-1}x)-f_n(T^{-1}y)\}_n\|\\ &= \|\{g_n(x)-g_n(y)\}_n\|, \quad \forall x,y \in \mathcal{X}. \end{align*} Finally, \begin{align*} \sum_{n=1}^{\infty}g_n(x)\omega_n=\sum_{n=1}^{\infty}f_n(T^{-1}x)\omega_n=T(T^{-1}x)=x, \quad \forall x \in \mathcal{X}. \end{align*} \end{proof} We conclude the paper with the following remarks. \begin{remark} We can improve Theorems \ref{SODERLINDCAMPANATOAPP}, \ref{BARBAGALLOAPP}, \ref{STABILITYMA} and \ref{APPLICATIONTHEOREMLAST} using Theorem \ref{LASTTHEOREM}.
\end{remark} \begin{remark} So far in the literature, there are three ways to prove Theorem \ref{CASAZZAKALTONTHEOREM}: one given in \cite{CASAZZAKALTON}, another in \cite{VANEIJNDHOVEN}, and yet another in \cite{CASAZZACHRISTENSEN}. As we mentioned earlier, we have carried out the non-linear version of the arguments used in \cite{VANEIJNDHOVEN}. We hope that the arguments used in \cite{CASAZZAKALTON} and \cite{CASAZZACHRISTENSEN} can be generalized to give different proofs of Theorem \ref{IMPORTANTTHEOREM}. \end{remark} \section{Acknowledgements} I thank Dr. P. Sam Johnson, Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka (NITK), Surathkal for several discussions. \bibliographystyle{plain}
\section{Introduction} Since the pioneering paper published by Moore in 1970 \cite{Moore1970} and the contributions by Fulling and Davies \cite{FullingDavies1976} and by Ford and Vilenkin \cite{FordVilenkin1982} that appeared some years later, the radiation reaction force on moving boundaries has attracted the attention of many physicists. Due to the movement of the boundary, this topic is also referred to as the dynamical Casimir effect (DCE), a name coined by J. Schwinger in his attempt to explain sonoluminescence in the early 90s \cite{Schwinger1993}. For a review on this subject see the book by K. A. Milton \cite{MiltonLivro2001} and on DCE see Refs. \cite{DodonovRevisao2001,SpecialIssue2005}. Though the Casimir force on a single static plate in vacuum is zero, the fluctuations of this force are non-vanishing \cite{Barton1991}. Hence, if this plate starts moving with a non-zero general acceleration, we expect that a dissipative force proportional to these fluctuations appears \cite{BraginskyKhalili1991,JaekelReynaud1992,MaiaNetoReynaud1993}, and arguments based on energy conservation lead directly to real particle creation. Though the DCE already occurs for a single moving boundary, oscillating cavities in parametric resonance with a particular field mode of the corresponding static cavity may significantly enhance the particle creation rate \cite{DodonovKlimov1996,LambrechtEtAl-PRL1996,JungSoh1998}. This effect was studied by several authors considering the case of the $1+1$ ideal cavity \cite{DodonovKlimov1996,JungSoh1998}. The $3+1$ case was also investigated, and different geometries were taken into account, among them parallel plane plates \cite{DodonovKlimov1996,LambrechtEtAl-PRL1996,MundurainPAMN1998}, cylindrical \cite{CrocceEtAl2005}, and spherical \cite{Eberlein1996,SetareSaharian2001,MazzitelliMilln2006,PascoalEtAl2008} cavities. The nonideal case was also considered in Refs. \cite{SchallerEtAl2002,PascoalEtAlArxiv08}. Concerning the static scenario, T. H.
Boyer \cite{Boyer1974} was the first to consider the case of mixed boundary conditions (BCs). He demonstrated that the electromagnetic Casimir force between a perfectly conducting plate and an infinitely permeable one is repulsive rather than attractive. An analogous result was also obtained in the case of a scalar field confined within two parallel plates \cite{Hushwater1997,Cougo-PintoEtAl1999,AguiarEtAl2003} and submitted to a Dirichlet BC at one plate and to a Neumann BC at the other. The measurement of a repulsive Casimir effect has been pursued for many years and has finally been achieved very recently by Munday, Capasso and Parsegian \cite{MundayEtAl-Nature2009} in a remarkable experiment involving three distinct media, with appropriate values for their permittivity. Although the setup used by these authors in their experiment on the repulsive Casimir effect is quite different from the two-plate setup made of a perfectly conducting plate and an infinitely permeable one, we may learn many things by studying the DCE with such mixed BCs. Further, though mixed BCs are relatively common in the study of the static Casimir effect \cite{Hushwater1997,Cougo-PintoEtAl1999,AguiarEtAl2003,ZhaiEtAl2007,FullingEtAl2007,Teo2009}, and also in correlated topics of Cavity QED \cite{Cougo-PintoEtAlPLB1999,Cougo-PintoEtAlJPA1999,AlvesEtAl2000,AlvesEtAl2003}, the same does not occur for the DCE. In fact, as far as we know, the DCE in a $1+1$ dimensional resonant cavity with mixed BCs was considered only very recently, in Refs. \cite{AlvesEtAl2006,AlvesGranhen2008}. However, mixed BCs have never been considered in the study of the DCE for different geometries as, for instance, in concentric (and oscillating) spherical shells. In a recent paper \cite{PascoalEtAl2008}, the DCE was examined for a massless scalar field submitted to Dirichlet BCs at two concentric spherical shells, each of them possessing a time-dependent radius.
A general expression for the average number of created particles was derived for arbitrary laws of radial motions of the spherical shells. Such an expression was then applied to breathing modes of the concentric shells: when only one of the shells oscillates and when both shells oscillate in or out of phase. The purpose of this paper is to complement the previous one \cite{PascoalEtAl2008} by considering mixed BCs. We observe that the field modes associated with mixed BCs differ from those following from Dirichlet-Dirichlet BCs. Considering again an oscillatory motion of the shells, we identify all the resonances within mixed BCs and derive the expression for the associated particle creation rate. Then, performing a numerical analysis, we compare our results with those presented in Ref. \cite{PascoalEtAl2008}. For convenience, we shall assume that the spherical shell which imposes a Neumann BC to the field is at rest, while the other one, which imposes a Dirichlet BC to the field, is in arbitrary motion. However, we shall consider two situations: in one of them, the inner shell is at rest while the outer one is in arbitrary motion and in the other one, the reverse occurs, namely, the outer shell is at rest while the inner one is in arbitrary motion. Comparisons of our results with those involving only Dirichlet-Dirichlet BCs will be presented graphically. This paper is organized as follows: in Section 2 we briefly summarize the main steps of the method employed in the case where only Dirichlet BCs were considered; in Section 3 we apply this method to the case of mixed BCs and obtain our general formulas; in Section 4, with the purpose of obtaining explicit results for the average number of created particles, we choose a particular motion for the oscillating shell, and Section 5 is left for the concluding remarks.
\section{Dirichlet-Dirichlet BCs} In Ref. \cite{PascoalEtAl2008} the DCE for a massless scalar field confined between two concentric moving shells was considered. This quantum scalar field obeys the Klein-Gordon equation $\square\phi(\mathbf{r};t)=0$. Besides, this field and its canonical momentum $\pi(\mathbf{r};t)=\dot{\phi}(\mathbf{r};t)$ satisfy the equal-time commutation relations \begin{eqnarray} \left[ \phi(\mathbf{r};t),\pi(\mathbf{r}';t)\right] & =& i\delta(\mathbf{r}-\mathbf{r}'),\cr\cr \left[ \phi(\mathbf{r};t),\phi(\mathbf{r}';t)\right] & =& \left[ \pi(\mathbf{r};t),\pi(\mathbf{r}';t)\right] =0. \label{1}% \end{eqnarray} The spherical symmetry of the problem leads us to the following solution \begin{eqnarray} \phi(\mathbf{r};t) & =&\sum_{l=0}^{\infty}\sum_{m=-l}^{l}\sum_{s=1}^{\infty }\sqrt{\frac{1}{2\omega_{ls}(t)}}F_{ls}(r;t)\left[ a_{lms}(t)~{Y}% _{lm}(\theta,\varphi)+\text{h.c.}\right] ,\cr\cr \pi(\mathbf{r};t) & =&-i\sum_{l=0}^{\infty}\sum_{m=-l}^{l}\sum_{s=1}^{\infty }\sqrt{\frac{\omega_{ls}(t)}{2}}F_{ls}(r;t)\left[ a_{lms}(t)~{Y}% _{lm}(\theta,\varphi)-\text{h.c.}\right] , \label{2} \end{eqnarray} where $\{Y_{lm}(\theta,\varphi)\}$ are the spherical harmonics and the orthonormal radial functions satisfy the following differential equation \begin{equation} \frac{1}{r^{2}}\frac{\text{d}}{\text{d}r}\left( r^{2}\frac{\text{d}% F_{ls}(r;t)}{\text{d}r}\right) +\left( \frac{\omega_{ls}^{2}(t)}{c^{2}% }-\frac{l(l+1)}{r^{2}}\right) F_{ls}(r;t)=0. \label{3}% \end{equation} Moreover, the operators $a_{lms}(t)$ and $a_{lms}^{\dag}(t)$ obey the standard commutation relations% \begin{eqnarray} \left[ a_{lms}(t),a_{l^{\prime}m^{\prime}s^{\prime}}^{\dagger}(t)\right] & =&\delta_{ll^{\prime}}\delta_{mm^{\prime}}\delta_{ss^{\prime}},\cr\cr \left[ a_{lms}(t),a_{l^{\prime}m^{\prime}s^{\prime}}(t)\right] & =&\left[ a_{lms}^{\dagger}(t),a_{l^{\prime}m^{\prime}s^{\prime}}^{\dagger}(t)\right] =0. \label{4}% \end{eqnarray} Through the time derivative of Eqs.
(\ref{2}), together with the Klein-Gordon equation and the canonical momentum formula, we obtain the time evolution for the operators \begin{eqnarray} \dot{a}_{lms}(t) & =& -i\omega_{ls}(t)a_{lms}(t)+\sum_{s^{\prime}}% \mu_{l[ss^{\prime}]}(t)a_{lms^{\prime}}(t)\cr\cr & +& \sum_{s^{\prime}}\mu_{l(ss^{\prime})}(t)a_{l(-m)s^{\prime}}^{\dag}(t), \label{5}% \end{eqnarray} where the functions $\mu_{l( ss^{\prime}) }( t)$ $=[ \mu_{lss^{\prime}}(t)+\mu_{ls^{\prime}s}(t)] /2$ and $\mu_{l[ ss^{\prime}] }(t)=[ \mu _{lss^{\prime}}(t)-\mu_{ls^{\prime}s}(t)] /2$ are the symmetric and antisymmetric parts, respectively, of the time-dependent coefficient \begin{eqnarray} \mu_{lss^{\prime}}(t) & =&\frac{\dot{\omega}_{ls}(t)}{2\omega_{ls}(t)}% \delta_{ss^{\prime}}\cr\cr & +& \left( 1-\delta_{ss^{\prime}}\right) \sqrt{\frac{\omega_{ls}(t)}% {\omega_{ls^{\prime}}(t)}}\int_{r_{i}(t)}^{r_{o}(t)}r^{2}F_{ls^{\prime}% }(r;t)\dot{F}_{ls}(r;t)\operatorname*{d}r. \label{6}% \end{eqnarray} As demonstrated in Ref. \cite{PascoalEtAl2008}, by comparing Eq. (\ref{5}) with the Heisenberg equation of motion $\dot{a}_{lms}(t)=i\left[ H_{eff}% (t),a_{lms}(t)\right] $ and assuming the most general quadratic form of an effective Hamiltonian, we derive \begin{eqnarray} H_{eff}(t) & =& \sum_{l,m,s}\omega_{ls}(t)\left( a_{lms}^{\dag}a_{lms}% +\frac{1}{2}\right) \cr\cr & +&\frac{i}{2}\sum_{l,m,s,s^{\prime}}\mu_{lss^{\prime}}(t)\left[ \left( a_{lms^{\prime}}+a_{l\left( -m\right) s^{\prime}}^{\dag}\right) a_{lms}^{\dag}\right. \cr\cr & -& \left. a_{lms}\left( a_{l\left( -m\right) s^{\prime}}+a_{lms^{\prime}% }^{\dag}\right) \right] .\label{7}% \end{eqnarray} The evolution of the density operator is computed through the relation $\dot{\rho}(t)=i\left[ H_{eff}(t),\rho(t)\right] $, with the aid of an iterative procedure up to second order in the velocities of the cavity boundaries, \emph{i.e.}, $\dot{r}_{i}(t),\dot{r}_{o}(t)\ll c$.
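Explicitly, the iterative procedure mentioned above corresponds, schematically, to the expansion (written here for the reader's convenience; terms of third and higher order in the boundary velocities are dropped, the coefficients $\mu_{lss^{\prime}}(t)$ entering $H_{eff}(t)$ being of first order in those velocities)
\[
\rho(t)\simeq\rho(0)+i\int_{0}^{t}\text{d}t_{1}\left[ H_{eff}(t_{1}),\rho(0)\right] -\int_{0}^{t}\text{d}t_{1}\int_{0}^{t_{1}}\text{d}t_{2}\left[ H_{eff}(t_{1}),\left[ H_{eff}(t_{2}),\rho(0)\right] \right] .
\]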
The average number of particles created in a particular mode --- labeled by the quantum numbers ($l,m,s$) --- is given by $\mathcal{N}_{lms}(t)=Tr\left[ \rho(t)a_{lms}^{\dag}(0)a_{lms}(0)\right] $, and for an initial vacuum state $\rho(0)=\left\vert \left\{ 0\right\} \right\rangle \left\langle \left\{ 0\right\} \right\vert $ it follows that \begin{equation} \mathcal{N}_{lms}(t)=\sum_{s^{\prime}}\left\vert \int_{0}^{t}\text{d}t_{1}% \mu_{l\left( s^{\prime}s\right) }(t_{1})\exp\left\{ i\left[ \Omega _{ls^{\prime}}(t_{1})+\Omega_{ls}(t_{1})\right] \right\} \right\vert ^{2},\label{8}% \end{equation} with $\Omega_{ls}(t)=\int_{0}^{t}dt_{1}\ \omega_{ls}(t_{1})$. The number of created particles $\mathcal{N}_{lms}(t)$ depends on the radial function $F_{ls}(r;t)$ through $\mu_{l\left( s^{\prime}s\right) }(t)$. The solution of Eq. (\ref{3}) is given by a linear combination of spherical Bessel functions of the first ($j_{l}$) and second ($n_{l}$) kind, such that the Dirichlet BC applied to the inner shell leads to the relation \begin{equation} F_{ls}(r;t)=N_{ls}\bigl[ j_{l}\bigl( \omega_{ls}(t)r\bigr) n_{l}\bigl( \omega_{ls}(t)r_{i}(t)\bigr) -j_{l}\bigl( \omega_{ls}(t)r_{i}(t)\bigr) n_{l}\bigl( \omega_{ls}(t)r\bigr) \bigr] ,\label{9}% \end{equation} whereas that on the outer shell results in the transcendental equation% \begin{equation} j_{l}\bigl( \omega_{ls}(t)r_{o}(t)\bigr) n_{l}\bigl( \omega_{ls}% (t)r_{i}(t)\bigr) -j_{l}\bigl( \omega_{ls}(t)r_{i}(t)\bigr) n_{l}\bigl( \omega_{ls}(t)r_{o}(t)\bigr) =0\text{.}\label{10}% \end{equation} In Fig. 1 we present a map of the solutions of Eq. (\ref{10}) for some values of the numbers $l$ and $s$. As noted in \cite{PascoalEtAl2008}, for the case $l=0$ the frequencies are equally spaced; this is no longer true for $l\neq0$.
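As an illustrative numerical check (our sketch, not part of the original analysis), the roots of Eq. (\ref{10}) can be located by simple bisection, using the closed forms of $j_{l}$ and $n_{l}$ for $l=0,1$; for $l=0$ the cross product reduces to $-\sin\left[ \omega(r_{o}-r_{i})\right] /(\omega^{2}r_{i}r_{o})$, which makes the equal spacing $\omega_{0s}=s\pi/(r_{o}-r_{i})$ explicit, whereas the $l=1$ roots are displaced:

```python
from math import sin, cos, pi

# Spherical Bessel functions of the first (j_l) and second (n_l) kind;
# closed forms for l = 0 and l = 1 are enough for this illustration.
def j(l, x):
    return sin(x)/x if l == 0 else sin(x)/x**2 - cos(x)/x

def n(l, x):
    return -cos(x)/x if l == 0 else -cos(x)/x**2 - sin(x)/x

def cross(l, w, ri, ro):
    # Left-hand side of the transcendental equation (10).
    return j(l, w*ro)*n(l, w*ri) - j(l, w*ri)*n(l, w*ro)

def bisect(f, a, b, tol=1e-12):
    # Plain bisection; (a, b) must bracket a sign change of f.
    fa = f(a)
    while b - a > tol:
        m = 0.5*(a + b)
        if fa*f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5*(a + b)

ri, ro = 1.0, 2.0
# l = 0: the roots sit exactly at s*pi/(ro - ri), i.e. equally spaced.
w01 = bisect(lambda w: cross(0, w, ri, ro), 2.0, 4.0)
w02 = bisect(lambda w: cross(0, w, ri, ro), 5.0, 7.0)
# l = 1: the first root is displaced upwards from pi/(ro - ri).
w11 = bisect(lambda w: cross(1, w, ri, ro), 2.0, 4.0)
```

For $r_{i}=1$ and $r_{o}=2$ the $l=0$ roots come out at $\omega=\pi,2\pi,\dots$, while the lowest $l=1$ root is shifted above $\pi$, in agreement with the behavior displayed in Fig. 1.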
However, when both radii of the shells are much larger than the separation between them, \textit{i.e.}, $r_{i}(t)\gg r_{o}(t)-r_{i}(t)$, the solutions for all values of $l$ approach those of the one-dimensional case, so that $\omega_{ls}\rightarrow s\pi/\left( r_{o}\left( t\right) -r_{i}\left( t\right) \right) $. \begin{figure}[!h] \begin{center} \newpsobject{showgrid}{psgrid}{subgriddiv=0.5,griddots=10,gridlabels=6pt} \begin{pspicture}(-1,0.2)(3,4.5) \psset{unit=1.15} \psline[linecolor=c0](0,1)(3,4)% \pscurve[linecolor=c1](0,1.4303)(0.1,1.43356) (0.2,1.45278) (0.4,1.54946) (0.6,1.69621) (1,2.04904) (1.5,2.52667) (2,3.01678) (2.991,4)% \pscurve[linecolor=c2](0,1.83457)(0.1,1.83459) (0.2,1.83526) (0.4,1.85073) (0.6,1.91003) (1, 2.15883) (1.5, 2.58441) (2, 3.05225) (2.9738,4)% \pscurve[linecolor=c3](0,2.22433)(0.1,2.22433) (0.2,2.22434) (0.4,2.22521) (0.6,2.23499) (1, 2.35059) (1.5, 2.68295) (1.8132801537026406, 2.9449) (2.9449,4)% \pscurve[linecolor=c4](0,2.60459)(0.1,2.60459) (0.2,2.60459) (0.4,2.60461) (0.6,2.60533) (1, 2.63536) (1.5, 2.83813) (2, 3.19951) (2.9012,4)% \pscurve[linecolor=c5](0,2.97805)(0.1,2.97805) (0.2,2.97805) (0.4,2.97805) (0.6,2.97808) (1, 2.98234) (1.5, 3.06693) (2, 3.33035) (2.83535,4)% \pscurve[linecolor=c6](0,3.34634)(0.1,3.34634) (0.2,3.34634) (0.4,3.34634) (0.6,3.34634) (1, 3.34671) (1.5, 3.36892) (2, 3.51818) (2.5,3.82504) (2.73035,4)% \pscurve[linecolor=c7](0,3.71055)(0.1,3.71055) (0.2,3.71055) (0.4,3.71055) (0.6,3.71055) (1,3.71057) (1.5, 3.71425) (2, 3.77358) (2.52949,4)% \psline[linecolor=c0,linestyle=dashed](0,2)(2,4)% \pscurve[linecolor=c1,linestyle=dashed](0,2.45902)(0.1,2.46218) (0.2,2.48081) (0.4,2.5748) (0.6,2.71815) (1,3.06516) (1.5,3.538) (1.97,4)% \pscurve[linecolor=c2,linestyle=dashed](0,2.89503) (0.1,2.89505) (0.2,2.89568) (0.4,2.91031) (0.6,2.96653) (1, 3.20468) (1.5, 3.61797) (1.91768, 4)% \pscurve[linecolor=c3,linestyle=dashed](0,3.31587)(0.1,3.31587) (0.2,3.31588)
(0.4,3.31669) (0.6,3.32575) (1,3.43349) (1.5,3.74823) (1.8132801537026406,4.)% \pscurve[linecolor=c4,linestyle=dashed](0,3.72579)(0.1,3.72579) (0.2,3.72579) (0.4,3.72581) (0.6,3.72646) (1, 3.75389)(1.59989,4)% \psline[linecolor=c0,linewidth=.06,linestyle=dotted,dotsep=1pt](0,3)(1,4)% \pscurve[linecolor=c1,linewidth=.06,linestyle=dotted,dotsep=1pt](0,3.47089)(0.1,3.47402) (0.2,3.4925)(0.4,3.58577) (0.6,3.72815) (0.915,4)% \pscurve[linecolor=c2,linewidth=.06,linestyle=dotted,dotsep=1pt](0,3.92251) (0.1,3.92254) (0.2,3.92315) (0.4,3.93754) (0.6,3.99285) (0.617497,4)% \pspolygon[linewidth=0.04](0,1)(3,1)(3,4)(0,4)% \psline[linewidth=0.04] (0,1)(-0.2,1) \rput(-0.5,1){$1$}% \psline[linewidth=0.04] (0,1.5)(-0.15,1.5)% \psline[linewidth=0.04] (0,2)(-0.2,2)\rput(-0.5,2){$2$}% \psline[linewidth=0.04] (0,2.5)(-0.15,2.5)% \psline[linewidth=0.04] (0,3)(-0.2,3)\rput(-0.5,3){$3$}% \psline[linewidth=0.04] (0,3.5)(-0.15,3.5)% \psline[linewidth=0.04] (0,4)(-0.2,4)\rput(-0.5,4){$4$}% \rput{90}(-1,2.5){$\omega_{ls} r_o/\pi$}% \psline[linewidth=0.04] (0,1)(0,0.8) \rput(0,0.5){$0$}% \psline[linewidth=0.04] (0.5,1)(0.5,0.85)% \psline[linewidth=0.04] (1,1)(1,0.8) \rput(1,0.5){$1$}% \psline[linewidth=0.04] (1.5,1)(1.5,0.85)% \psline[linewidth=0.04] (2,1)(2,0.8) \rput(2,0.5){$2$}% \psline[linewidth=0.04] (2.5,1)(2.5,0.85)% \psline[linewidth=0.04] (3,1)(3,0.8) \rput(3,0.5){$3$}% \rput(1.50,0.25){$\omega_{ls} r_i/\pi$}% \end{pspicture} \end{center} \vskip -0.6 cm \caption{Map of the solutions of the transcendental equation (\ref{10}). The colors correspond to different values of the number $l$. The solid, dashed, and dotted lines correspond to $s=1$, $s=2$, and $s=3$, respectively.} \label{Fa1} \end{figure} \section{Mixed boundary conditions} It is important to emphasize that the expression for the average number of created particles, derived in Eq (\ref{8}), does not depend on the character of the BCs. 
Thus it can be applied even for mixed BCs as in the present work, where we assume that the massless scalar field satisfies a Neumann BC at a fixed spherical shell and a Dirichlet BC at a second concentric spherical shell whose radius has an arbitrary time dependence, \begin{equation} \partial_{r}\phi\left( \mathbf{r},t\right) \vert _{r=r_{\alpha}}= 0\;\;\;\;\; \mbox{and}\;\;\;\;\; \left. \phi\left( \mathbf{r},t\right) \right\vert _{r=r_{\beta}(t)}=0, \label{11}% \end{equation} where the index $\alpha$ ($\beta$) is related to the static (moving) shell. However, different expressions come up for $F_{ls}(r;t)$ and $\omega_{ls}(t)$, as compared to those in Ref. \cite{PascoalEtAl2008}. As already noted in the previous section, the general solutions to Eq. (\ref{3}) are linear combinations of spherical Bessel functions, but the mixed BCs lead to a different solution. The assumption of a Neumann BC for the field at the static shell leads to the following expression for the radial functions \begin{equation} F_{ls}(r;t)=N_{ls}\left( j_{l}\left( \omega_{ls}(t)r\right) \ \frac {\partial}{\partial r_{\alpha}}n_{l}\left( \omega_{ls}(t)r_{\alpha}\right) -n_{l}\left( \omega_{ls}(t)r\right) \frac{\partial}{\partial r_{\alpha}% }j_{l}\left( \omega_{ls}(t)r_{\alpha}\right) \right) , \label{12}% \end{equation} and the subsequent assumption of a Dirichlet BC on the field at the moving shell leads to a frequency discretization \begin{equation} \frac{\partial}{\partial r_{\alpha}}j_{l}\bigl( \omega_{ls}(t)r_{\alpha }\bigr) \ n_{l}\bigl( \omega_{ls}(t)r_{\beta}(t)\bigr) =j_{l}\bigl( \omega_{ls}(t)r_{\beta}(t)\bigr) \ \frac{\partial}{\partial r_{\alpha}}% n_{l}\bigl( \omega_{ls}(t)r_{\alpha}\bigr) . \label{13}% \end{equation} In Figs. 2 and 3 we show the maps of the numerical solutions of the transcendental equation (\ref{13}) for some values of $l$ and $s$. As we can see, the map of $\omega_{ls}(t)$ is very sensitive to the BCs. For Dirichlet-Dirichlet BCs (Fig.
1) the frequencies $\omega_{0s}(t)$ are equally spaced, a situation that does not occur for mixed BCs. We also note that the map of $\omega_{ls}(t)$ turns out to be entirely different when considering the Dirichlet BC in the outer shell and Neumann BC in the inner one (Fig. 2) or oppositely, with Neumann BC in the outer shell and Dirichlet BC in the inner one (Fig. 3). \begin{figure}[!h] \begin{center} \newpsobject{showgrid}{psgrid}{subgriddiv=0.5,griddots=10,gridlabels=6pt} \begin{pspicture}(-1,0)(3,4.5) \psset{unit=1.10} \pscurve[linecolor=c0](0,1)(0.1,1.00311)(0.2,1.02143)(0.3,1.05942)(0.4,1.11395)(0.5,1.18045)(0.75,1.37776)(1,1.59809) (1.5,2.06656) (2,2.55024) (2.5,3.04031) (3,3.53365)% \pscurve[linecolor=c1](0,1.4303)(0.1,1.42862) (0.2,1.41831) (0.4,1.38069) (0.5,1.38087) (0.6,1.40468) (0.7,1.44886) (0.8, 1.50792) (1, 1.65395) (1.37, 1.97193) (2, 2.56366) (2.44,2.99022) (3,3.5395)% \pscurve[linecolor=c2](0,1.83457)(0.1,1.83455) (0.2,1.8341) (0.4,1.82268) (0.5,1.80496) (0.6,1.78) (0.7,1.75805) (0.8, 1.75192) (1, 1.80326) (1.2, 1.91766) (2, 2.59321) (2.4,2.97098) (3,3.55174)% \pscurve[linecolor=c3](0,2.22433)(0.1,2.22433) (0.2,2.22432) (0.4,2.22364) (0.5,2.22144) (0.6,2.21548) (0.7,2.20286) (0.8,2.18173) (1, 2.13053) (1.2,2.13018) (2,2.64584) (2.5,3.09727) (3,3.57161)% \pscurve[linecolor=c4](0,2.60459)(0.1,2.60459) (0.2,2.60459) (0.4,2.60457) (0.5,2.60446) (0.6,2.60398) (0.7,2.60246) (0.8,2.59859) (1, 2.57524) (1.2,2.52396) (2,2.73707) (2.5,3.14515) (3,3.60134)% \pscurve[linecolor=c5](0,2.97805)(0.1,2.97805) (0.2,2.97805) (0.4,2.97805) (0.5,2.97804) (0.6,2.97802) (0.7,2.97792) (0.8,2.97758) (0.9,2.97659) (1, 2.97415) (1.1, 2.96891) (1.2,2.95896) (1.5,2.88915) (2,2.9021) (2.5,3.22103) (3,3.64495)% \pscurve[linecolor=c6](0,3.34634) (0.2,3.34634) (0.4,3.34634) (0.6,3.34634)(0.8,3.34632)(1, 3.34601) (1.2,3.34387) (1.4,3.33427) (1.6,3.30532) (1.8,3.25103) (1.9,3.22344) (2,3.20572) (2.1,3.20356) (2.2,3.21851) (2.3,3.24913) (2.5,3.34688) (2.7,3.47773) (3,3.70982)% 
\pscurve[linecolor=c7](0,3.71055) (0.3,3.71055) (0.6,3.71055) (0.9,3.71054) (1.2,3.71033) (1.5,3.707) (1.8,3.68373) (1.9,3.66521) (2,3.63992) (2.1,3.6102) (2.2,3.58183) (2.3,3.56225) (2.4,3.55708) (2.5,3.5683) (2.7,3.63487) (2.8,3.68545) (3,3.81038)% \pscurve[linecolor=c0,linestyle=dashed](0,2)(0.1,2.00311)(0.2,2.02143)(0.3,2.05942)(0.4,2.11395)(0.5,2.18045)(0.75,2.37776)(1,2.59809) (1.5,3.06656) (2,3.55024) (2.459,4)% \pscurve[linecolor=c1,linestyle=dashed](0, 2.45902)(0.1, 2.4574) (0.2,2.44741) (0.4,2.41103) (0.5, 2.41121) (0.6,2.43422) (0.7, 2.47701) (0.8, 2.53437) (1, 2.67679) (1.37, 2.9891) (2, 3.57471) (2.44,3.99869)% \pscurve[linecolor=c2,linestyle=dashed](0,2.89503)(0.1, 2.89502) (0.2,2.89459) (0.4,2.88381) (0.5, 2.8671) (0.6,2.84361) (0.7, 2.823) (0.8, 2.81725) (1, 2.8655) (1.2, 2.97378) (2, 3.62658) (2.4,3.99723)% \pscurve[linecolor=c3,linestyle=dashed] (0,3.31587)(0.1, 3.31587) (0.2,3.31587) (0.4,3.31524) (0.5,3.3132) (0.6,3.30768) (0.7,3.29601) (0.8,3.2765) (1,3.22941) (1.2, 3.22909) (2, 3.71278) (2.33525,4)% \pscurve[linecolor=c4,linestyle=dashed] (0,3.72579)(0.1,3.72579) (0.2,3.72579) (0.4,3.72577) (0.5,3.72567) (0.6,3.72523) (0.7,3.72384) (0.8,3.72032) (1, 3.69907) (1.2,3.65259) (2,3.84739) (2.21292 ,4)% \pscurve[linecolor=c0,linewidth=.06,linestyle=dotted,dotsep=1pt](0,3)(0.1,3.00311)(0.2,3.02143)(0.3,3.05942)(0.4,3.11395)(0.5,3.18045)(0.75,3.37776)(1,3.59809) (1.43,4.)% \pscurve[linecolor=c1,linewidth=.06,linestyle=dotted,dotsep=1pt](0,3.47089)(0.1,3.46928) (0.2,3.45937) (0.4,3.4233) (0.5,3.42348) (0.6,3.4463) (0.7,3.48873) (0.8, 3.54564) (1, 3.68706) (1.37, 3.99758)% \pscurve[linecolor=c2,linewidth=.06,linestyle=dotted,dotsep=1pt](0,3.92251)(0.1, 3.9225) (0.2,3.92208) (0.4,3.91148) (0.5,3.89506) (0.6,3.87198) (0.7,3.85173) (0.8, 3.84608) (1, 3.89348) (1.2, 3.99998)% \pspolygon[linewidth=0.04](0,1)(3,1)(3,4)(0,4)% \psline[linewidth=0.04] (0,1)(-0.2,1) \rput(-0.5,1){$1$}% \psline[linewidth=0.04] (0,1.5)(-0.15,1.5)% \psline[linewidth=0.04] 
(0,2)(-0.2,2)\rput(-0.5,2){$2$}% \psline[linewidth=0.04] (0,2.5)(-0.15,2.5)% \psline[linewidth=0.04] (0,3)(-0.2,3)\rput(-0.5,3){$3$}% \psline[linewidth=0.04] (0,3.5)(-0.15,3.5)% \psline[linewidth=0.04] (0,4)(-0.2,4)\rput(-0.5,4){$4$}% \rput{90}(-1,2.5){$\omega_{ls} r_o/\pi$}% \psline[linewidth=0.04] (0,1)(0,0.8) \rput(0,0.5){$0$}% \psline[linewidth=0.04] (0.5,1)(0.5,0.85)% \psline[linewidth=0.04] (1,1)(1,0.8) \rput(1,0.5){$1$}% \psline[linewidth=0.04] (1.5,1)(1.5,0.85)% \psline[linewidth=0.04] (2,1)(2,0.8) \rput(2,0.5){$2$}% \psline[linewidth=0.04] (2.5,1)(2.5,0.85)% \psline[linewidth=0.04] (3,1)(3,0.8) \rput(3,0.5){$3$}% \rput(1.5,0.2){$\omega_{ls} r_i/\pi$}% \end{pspicture} \end{center} \vskip -0.7cm \caption{Map of the solutions of the transcendental equation (\ref{13}) with $r_{\alpha}<r_{\beta}$. The colors correspond to different values of the number $l$. The solid, dashed, and dotted lines correspond to $s=1$, $s=2$, and $s=3$, respectively.. } \end{figure} \begin{figure}[!h] \begin{center} \newpsobject{showgrid}{psgrid}{subgriddiv=0.5,griddots=10,gridlabels=6pt} \psset{unit=1.1} \begin{pspicture}(-1,-1)(3,2.5) \pscurve[linecolor=c0](0,0.0678391)(0.1,0.376665) (0.2,0.52706) (0.3,0.656247)(0.4,0.77611) (0.5,0.890754) (0.75,1.16511) (1,1.4303) (1.5,1.94845) (2,2.45902) (2.5,3)% \pscurve[linecolor=c1](0,0.662586)(0.1,0.669501) (0.2,0.707391) (0.4,0.861276) (0.6,1.05239) (1,1.45461) (1.5,1.9615) (2,2.46721) (2.525,3)% \pscurve[linecolor=c2](0,1.06382)(0.1,1.06386) (0.2,1.06511) (0.4,1.09307) (0.6,1.18863) (1,1.51316) (1.5,1.99051) (2,2.48475) (2.51658,3)% \pscurve[linecolor=c3](0,1.43688)(0.1,1.43688) (0.2,1.4369) (0.4,1.43846) (0.6,1.45558) (1,1.63182) (1.5,2.04285) (2,2.51437) (2.49728,3)% \pscurve[linecolor=c4](0,1.7974)(0.1,1.7974) (0.2,1.7974) (0.4,1.79744) (0.6,1.79869) (1,1.84903) (1.5,2.13393) (2,2.5614) (2.46703,3)% \pscurve[linecolor=c5](0,2.15065)(0.1,2.15065) (0.2,2.15065) (0.4,2.15065) (0.6,2.1507) (1,2.15801) (1.5,2.29079) (2,2.63587) (2.41879,3)% 
\pscurve[linecolor=c6](0,2.49908)(0.1,2.49908) (0.2,2.49908) (0.4,2.49908) (0.6,2.49908) (1,2.4997) (1.5,2.53677) (2,2.75568) (2.33379,3)% \pscurve[linecolor=c7](0,2.84405)(0.1,2.84405) (0.2,2.84405) (0.4,2.84405) (0.6,2.84405) (1,2.84408) (1.5,2.85033) (2,2.94523) (2.12463,3)% \pscurve[linecolor=c0,linestyle=dashed](0,1.43135) (0.1,1.53491) (0.2,1.63894) (0.3,1.74249) (0.4,1.84564) (0.5,1.94845) (0.75,2.20435) (1,2.45902) (1.25,2.71282) (1.5,2.96597)% \pscurve[linecolor=c1,linestyle=dashed](0,1.89088)(0.1,1.89418) (0.2,1.91366) (0.4,2.01159) (0.6,2.16013) (1,2.51673) (1.5,3)% \pscurve[linecolor=c2,linestyle=dashed](0,2.32046)(0.1,2.32048) (0.2, 2.32114) (0.4, 2.33649) (0.6,2.39534) (1,2.64289) (1.42484,3)% \pscurve[linecolor=c3,linestyle=dashed] (0, 2.73229)(0.1, 2.73229) (0.2, 2.7323) (0.4, 2.73316) (0.6, 2.7427) (1, 2.85585)(1.248764177214013,3)% \pscurve[linecolor=c0,linewidth=.06,linestyle=dotted,dotsep=1pt](0,2.46004) (0.1,2.56063) (0.2,2.66212) (0.3,2.7635) (0.4,2.86478) (0.5,2.96597)% \pscurve[linecolor=c1,linewidth=.06,linestyle=dotted,dotsep=1pt](0,2.93031)(0.1,2.93349) (0.2,3)% \pspolygon[linewidth=0.04](0,0)(3,0)(3,3)(0,3)% \psline[linewidth=0.04] (0,0)(-0.2,0)\rput(-0.5,0){$0$}% \psline[linewidth=0.04] (0,.5)(-0.15,0.5)% \psline[linewidth=0.04] (0,1)(-0.2,1) \rput(-0.5,1){$1$}% \psline[linewidth=0.04] (0,1.5)(-0.15,1.5)% \psline[linewidth=0.04] (0,2)(-0.2,2)\rput(-0.5,2){$2$}% \psline[linewidth=0.04] (0,2.5)(-0.15,2.5)% \psline[linewidth=0.04] (0,3)(-0.2,3)\rput(-0.5,3){$3$}% \rput{90}(-1,1.5){$\omega_{ls} r_o/\pi$}% \psline[linewidth=0.04] (0,0)(0,-0.2) \rput(0,-0.5){$0$}% \psline[linewidth=0.04] (0.5,0)(0.5,-0.15)% \psline[linewidth=0.04] (1,0)(1,-0.2) \rput(1,-0.5){$1$}% \psline[linewidth=0.04] (1.5,0)(1.5,-0.15)% \psline[linewidth=0.04] (2,0)(2,-0.2) \rput(2,-0.5){$2$}% \psline[linewidth=0.04] (2.5,0)(2.5,-0.15)% \psline[linewidth=0.04] (3,0)(3,-0.2) \rput(3,-0.5){$3$}% \rput(1.5,-0.8){$\omega_{ls} r_i/\pi$}% \end{pspicture} \end{center} \vskip 
-0.6cm \caption{Map of the solutions of the transcendental equation (\ref{13}) with $r_{\alpha}>r_{\beta}$. The colors correspond to different values of the number $l$. The solid, dashed, and dotted lines correspond to $s=1$, $s=2$, and $s=3$, respectively.} \end{figure} However, there are also some similarities between the results derived from mixed BCs and those for Dirichlet-Dirichlet BCs, in Fig. 1; it can be directly verified that the solutions for $\omega_{ls}(t)$, coming from Eq. (\ref{13}), approach those for the one-dimensional case ($\omega_{ls}\rightarrow\left( s-1/2\right) \pi/\left\vert r_{\alpha }-r_{\beta}\left( t\right) \right\vert $) when both radii of the shells are much larger than the separation between them. A comment is in order here: we note that the BCs must be imposed in the instantaneously co-moving Lorentz frame, where the boundaries are momentarily at rest. If the Neumann BC were to be imposed on the moving boundary, we would have to use the appropriate Lorentz transformation to write the fields in the inertial frame of the laboratory as follows \begin{equation} \left. \partial_{r^{\prime}}\phi\left( \mathbf{r}^{\prime},t\right) \right\vert _{r^{\prime}=r_{\beta}^{\prime}(t)}=0 \;\;\;\Longrightarrow\;\;\; \left.\left\{ \partial_{r}+\dot{r}_{\beta}(t)\partial_{t}\right\} \phi\left( \mathbf{r},t\right) \right\vert _{r=r_{\beta}(t)}=0.\label{13b}% \end{equation} In that case, the time derivative in Eq. (\ref{13b}) invalidates the expansion used in Eq. (\ref{2}) and, as a consequence, also Eq. (\ref{8}). This fact demands a different formal development for the computation of the particle creation rate. For that reason, in the present work we treat only the case where the Neumann BC is imposed on a spherical shell at rest, leaving aside the breathing modes analyzed in Ref. \cite{PascoalEtAl2008}, in which both shells oscillate in or out of phase.
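The one-dimensional limit quoted above is easy to verify numerically. The sketch below (ours, with illustrative radii) solves Eq. (\ref{13}) for $l=0$, with the Neumann BC at the inner static shell of radius $r_{\alpha}$ and the Dirichlet BC at the outer shell, taken at its equilibrium radius $r_{\beta}$; in this case the condition reduces to $\tan\left[ \omega(r_{\beta}-r_{\alpha})\right] =-\omega r_{\alpha}$, whose lowest root approaches $(\pi/2)/(r_{\beta}-r_{\alpha})$ as the radii grow at fixed separation:

```python
from math import sin, cos, pi

# l = 0 spherical Bessel functions and their derivatives (closed forms).
def j0(x):  return sin(x)/x
def y0(x):  return -cos(x)/x
def dj0(x): return cos(x)/x - sin(x)/x**2
def dy0(x): return sin(x)/x + cos(x)/x**2

def mixed(w, ra, rb):
    # Equation (13) for l = 0: Neumann at the static shell ra, Dirichlet
    # at rb.  The overall factor w from d/d(ra) cancels on both sides.
    return dj0(w*ra)*y0(w*rb) - j0(w*rb)*dy0(w*ra)

def bisect(f, a, b, tol=1e-12):
    # Plain bisection; (a, b) must bracket a sign change of f.
    fa = f(a)
    while b - a > tol:
        m = 0.5*(a + b)
        if fa*f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5*(a + b)

# Shells much larger than their separation: the lowest root should
# approach the one-dimensional value (s - 1/2)*pi/|ra - rb| with s = 1.
ra, rb = 50.0, 51.0
w1 = bisect(lambda w: mixed(w, ra, rb), 1.4, 1.7)
```

With $r_{\alpha}=50$ and $r_{\beta}=51$ the lowest root is $\omega_{1}\approx1.583$, already within $1\%$ of $\pi/2$, the residual deviation shrinking as the radii grow.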
\section{Numerical estimates} In this section, in order to obtain explicit results, we consider a specific motion for the spherical shell that imposes on the field the Dirichlet BC. A typical situation consists of an oscillation that starts at some instant, has a sinusoidal behavior with an angular frequency $\varpi$ and a small amplitude, and then stops at some later instant. We then assume that the radius of the moving shell has the following law of motion \begin{equation} r_{\beta}(t)=r_{\beta}\left( 1+\epsilon\sin\left( \varpi t\right) \right) ,\label{14}% \end{equation} with $\epsilon \ll 1$. In the following we also assume that the moving shell oscillates only during a finite time interval $T$ and then suddenly stops. Substituting Eqs. (\ref{6}) and (\ref{14}) into Eq. (\ref{8}) and making a power series expansion with respect to the small parameter $\epsilon$, we obtain \begin{equation} \mathcal{N}_{lms}=\left( \frac{\epsilon\varpi T}{2}\right) ^{2}% \sum_{s^{\prime}}\left\vert C_{l\left( ss^{\prime}\right)} \,f_{lss^{\prime}}\left( \varpi;T\right) \right\vert ^{2}, \label{15} \end{equation} where, after defining $\omega_{lss^{\prime}}\equiv\omega_{ls}(0)+\omega _{ls^{\prime}}(0)$, the function $f_{lss^{\prime}}\left( \varpi;T\right) $ and the coefficient $C_{lss^{\prime}}$ are given by \begin{equation} f_{lss^{\prime}}\left( \varpi;T\right) =\frac{\exp\left[ i\left( \varpi-\omega_{lss^{\prime}}\right) T\right] -1}{i\left( \varpi -\omega_{lss^{\prime}}\right) T}-\frac{\exp\left[ -i\left( \varpi +\omega_{lss^{\prime}}\right) T\right] -1}{i\left( \varpi+\omega _{lss^{\prime}}\right) T}, \label{16}% \end{equation} and \begin{eqnarray} C_{lss^{\prime}} & =&r_{\beta}\delta_{ss^{\prime}}\frac{1}{2\omega_{ls}% (0)}\frac{\partial\omega_{ls}(0)}{\partial r_{\beta}}\cr\cr & -& r_{\beta}\left( 1-\delta_{ss^{\prime}}\right) \sqrt{\frac{\omega _{ls}(0)}{\omega_{ls^{\prime}}(0)}}\int_{r_{i}}^{r_{o}}dr~r^{2}F_{ls}%
(r;0)\frac{dF_{ls^{\prime}}(r;0)}{dr_{\beta}}. \label{17}% \end{eqnarray} Note that $f_{lss^{\prime}}\left( \varpi;T\right) $ is an oscillating function of $T$, except when the oscillation frequency of the moving shell satisfies the resonance condition, namely, $\varpi=\omega_{lss^{\prime}}$. In this case $f_{lss^{\prime}}\left( \omega_{lss^{\prime}};T\right) =1$, and the number of created particles turns out to be the following quadratic function of $T$, \begin{equation} \lim_{\varpi\rightarrow\omega_{lss^{\prime}}}\mathcal{N}_{lms}=\left( \frac{\epsilon\omega_{lss^{\prime}}T}{2}\right) ^{2}\left\vert C_{l\left( ss^{\prime}\right) }\right\vert ^{2}. \label{18} \end{equation} This result is valid only under the short-time approximation $\epsilon \omega_{lss^{\prime}}T\ll1$, since we have disregarded terms proportional to $\left( \epsilon\omega_{lss^{\prime}}T\right) ^{n}$ with $n\geq3$. Moreover, Eqs. (\ref{15}) to (\ref{18}) are valid both for Dirichlet-Dirichlet and for mixed BCs, since they were derived using the law of motion (\ref{14}) and Eqs. (\ref{1})--(\ref{8}), which are independent of the BCs. To study the behavior of our result in Eq. (\ref{18}), we plot in Fig. 4 the expression $\mathcal{N}_{lms}/(\epsilon\varpi T)^{2}$ as a function of $\varpi\pi/(r_{o}-r_{i})$ for some values of $l$ and $s$, under the resonance conditions $\varpi=\omega_{lss^{\prime}}$, setting $r_{o}=2r_{i}$. The two letters in the legend indicate the BCs on the inner and the outer shells, respectively: for example, $D$ ($\tilde{D}$) means Dirichlet BC on a static (moving) shell, whereas $N$ means Neumann BC on a static shell. As we can see, both the intensity and the position of the resonances change in a non-trivial way with the BCs. The case of a moving outer shell with the field satisfying Dirichlet-Dirichlet BCs exhibits higher resonance intensities, while the case of a moving inner shell with the field submitted to mixed BCs leads to lower resonance frequencies.
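The resonance behavior of $f_{lss^{\prime}}$ in Eq. (\ref{16}) can also be checked directly; in the sketch below (ours, with illustrative values $\omega_{lss^{\prime}}=10$ and $T=100$, so that $\omega_{lss^{\prime}}T\gg1$), $\left\vert f\right\vert \simeq1$ at resonance, up to a correction of order $1/(\omega_{lss^{\prime}}T)$ coming from the second term, and is strongly suppressed away from it:

```python
from cmath import exp

def f(varpi, omega_ss, T):
    # Spectral function of Eq. (16); omega_ss stands for
    # omega_ls(0) + omega_ls'(0).
    dm = (varpi - omega_ss)*T
    dp = (varpi + omega_ss)*T
    # The first term tends to 1 as dm -> 0, so the limit is taken explicitly.
    term1 = (exp(1j*dm) - 1)/(1j*dm) if dm != 0 else 1.0
    term2 = (exp(-1j*dp) - 1)/(1j*dp)
    return term1 - term2

omega_ss, T = 10.0, 100.0             # illustrative values only
f_res = f(omega_ss, omega_ss, T)      # on resonance: |f| close to 1
f_off = f(1.5*omega_ss, omega_ss, T)  # off resonance: |f| ~ 1/(detuning*T)
```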
\begin{figure}[!h] \begin{center} \newpsobject{showgrid}{psgrid}{subgriddiv=0.5,griddots=10,gridlabels=6pt} \psset{xunit=1} \psset{yunit=3} \begin{pspicture}(-1,-0.3)(10,1.2) \psline(-0.2,0)(10,0) \rput(-0.5,0){$0$}% \psline(-0.15,0.2)(0,0.2)% \psline(-0.2,0.4)(0,0.4) \rput(-0.5,0.4){$0.1$}% \psline(-0.15,0.6)(0,0.6)% \psline(-0.2,0.8)(0,0.8) \rput(-0.5,0.8){$0.2$}% \psline(-0.15,1)(0,1)% \psline(-0.2,1.2)(0,1.2) \rput(-0.5,1.2){$0.3$}% \rput{90}(-1.4,1.0){${\cal N}_{0m1}/(\epsilon \varpi T)^2$} \psline(0,-0.066666666)(0,1.2)% \psline(1,-0.05)(1,0)% \psline(2,-0.05)(2,0)% \psline(3,-0.05)(3,0)% \psline(4,-0.05)(4,0)% \psline(5,-0.066666666)(5,0)% \psline(6,-0.05)(6,0)% \psline(7,-0.05)(7,0)% \psline(8,-0.05)(8,0)% \psline(9,-0.05)(9,0)% \psline(10,-0.06666666)(10,0)% \psdots (2., 0.25) (3., 0.222222) (4., 0.1875) (5., 0.16) (6., 0.138889) (7., 0.122449) (8., 0.109375) (9., 0.0987654) (10., 0.09)% \psdots[linecolor=c2,dotstyle=diamond*] (2., 1.) (3., 0.888889) (4., 0.75) (5., 0.64) (6., 0.555556) (7., 0.489796) (8., 0.4375) (9., 0.395062) (10., 0.36)% \psdots[linecolor=c7,dotstyle=square*](0.742019, 0.526384) (1.83658, 0.239496) (2.85061, 0.165635) (3.85648, 0.126694) (4.85973, 0.102579) (5.86179, 0.0861728) (6.86321, 0.0742893) (7.86425, 0.065285) (8.86505, 0.0582269) (9.86567, 0.0525456)% \psdots[linecolor=c4,dotstyle=triangle*] (1.29155, 0.699717) (2.20969, 0.665598) (3.18546, 0.532564) (4.17441, 0.434037) (5.16814, 0.364052) (6.16411, 0.312774) (7.16131, 0.273863) (8.15925, 0.243425) (9.15767, 0.219004)% \pspolygon(10,1.2)(10,0.8)(7.1,0.8)(7.1,1.2)% \psdot (7.4,1.1)% \psdot[linecolor=c2,dotstyle=diamond*] (7.4,0.9)% \psdot[linecolor=c7,dotstyle=square*] (8.9,1.1)% \psdot[linecolor=c4,dotstyle=triangle*] (8.9,0.9)% \rput (8.0,1.1) {\footnotesize $\widetilde{D} D$}% \rput (8.0,0.9) {\footnotesize $D \widetilde{D}$}% \rput (9.5,1.1) {\footnotesize $\widetilde{D} N$}% \rput (9.5,0.9) {\footnotesize $N \widetilde{D}$}% \end{pspicture} 
\begin{pspicture}(-1,-0.3)(10,1.2) \psline(-0.2,0)(10,0) \rput(-0.5,0){$0$}% \psline(-0.15,0.2)(0,0.2)% \psline(-0.2,0.4)(0,0.4) \rput(-0.5,0.4){$0.1$}% \psline(-0.15,0.6)(0,0.6)% \psline(-0.2,0.8)(0,0.8) \rput(-0.5,0.8){$0.2$}% \psline(-0.15,1)(0,1)% \psline(-0.2,1.2)(0,1.2) \rput(-0.5,1.2){$0.3$}% \rput{90}(-1.4,1.0){${\cal N}_{0m2}/(\epsilon \varpi T)^2$} \psline(0,-0.066666666)(0,1.2)% \psline(1,-0.05)(1,0)% \psline(2,-0.05)(2,0)% \psline(3,-0.05)(3,0)% \psline(4,-0.05)(4,0)% \psline(5,-0.066666666)(5,0)% \psline(6,-0.05)(6,0)% \psline(7,-0.05)(7,0)% \psline(8,-0.05)(8,0)% \psline(9,-0.05)(9,0)% \psline(10,-0.06666666)(10,0)% \psdots (3., 0.222222) (4., 0.25) (5., 0.24) (6., 0.222222) (7., 0.204082) (8., 0.1875) (9., 0.172839) (10,0.16)% \psdots[linecolor=c2,dotstyle=diamond*] (3., 0.888889) (4., 1.) (5., 0.96) (6., 0.888889) (7., 0.816327) (8., 0.75) (9., 0.691358) (10., 0.64)% \psdots[linecolor=c7,dotstyle=square*](1.83658, 0.239496) (2.93114, 0.262076) (3.94516, 0.241034) (4.95104, 0.214254) (5.95429, 0.19046) (6.95634, 0.17055) (7.95777, 0.154021) (8.95881, 0.14022) (9.9596, 0.128583)% \psdots[linecolor=c4,dotstyle=triangle*](2.20969, 0.665598) (3.12783, 0.924951) (4.1036, 0.893546) (5.09255, 0.812039) (6.08628, 0.7309) (7.08225, 0.65972) (8.07945, 0.599081) (9.07739, 0.547613)% \end{pspicture} \begin{pspicture}(-1,-0.3)(10,1.2) \psline(-0.2,0)(10,0) \rput(-0.5,0){$0$}% \psline(-0.15,0.2)(0,0.2)% \psline(-0.2,0.4)(0,0.4) \rput(-0.5,0.4){$0.1$}% \psline(-0.15,0.6)(0,0.6)% \psline(-0.2,0.8)(0,0.8) \rput(-0.5,0.8){$0.2$}% \psline(-0.15,1)(0,1)% \psline(-0.2,1.2)(0,1.2) \rput(-0.5,1.2){$0.3$}% \rput{90}(-1.4,1.0){${\cal N}_{1m1}/(\epsilon \varpi T)^2$} \psline(0,-0.066666666)(0,1.2) \rput(0,-0.166666666666){$0$}% \psline(1,-0.05)(1,0)% \psline(2,-0.05)(2,0)% \psline(3,-0.05)(3,0)% \psline(4,-0.05)(4,0)% \psline(5,-0.066666666)(5,0) \rput(5,-0.166666666666){$5$}% \psline(6,-0.05)(6,0)% \psline(7,-0.05)(7,0)% \psline(8,-0.05)(8,0)% \psline(9,-0.05)(9,0)% 
\psline(10,-0.066666666)(10,0) \rput(10,-0.16666666666){$10$}% \rput(8,-0.333333333333){${\varpi} \pi/(r_o-r_i)$}% \psdots (2.09194, 0.193801) (3.07064, 0.190726) (4.06265, 0.165563) (5.05855, 0.143062) (6.05606, 0.125047) (7.05439, 0.110725) (8.05319, 0.0991963) (9.05229, 0.0897666)% \psdots[linecolor=c2,dotstyle=diamond*] (2.09194, 0.88403) (3.07064, 0.829703) (4.06265, 0.7131) (5.05855, 0.613977) (6.05606, 0.53576) (7.05439, 0.473962) (8.05319, 0.424377) (9.05229, 0.383897)% \psdots[linecolor=c7,dotstyle=square*] (0.913954, 0.197624) (1.95549, 0.152794) (2.95665, 0.11437) (3.95686, 0.0900872) (4.95692, 0.0740416) (5.95695, 0.0627642) (6.95696, 0.0544353) (7.95696, 0.048043) (8.95697, 0.0429866) (9.95697, 0.038889)% \psdots[linecolor=c4,dotstyle=triangle*] (1.46733, 0.490273) (2.33204, 0.561268) (3.29386, 0.471634) (4.27688, 0.39238) (5.26734, 0.332865) (6.26125, 0.288043) (7.25702, 0.253464) (8.25391, 0.226113) (9.25153, 0.203994)% \end{pspicture} \end{center} \caption{Plot of $\mathcal{N}_{lms}/(\epsilon\varpi T)^{2}$ as a function of $\varpi \pi/(r_{o}-r_{i})$, in the resonance condition for a few values of $l$ and $s$. We have considered Dirichlet-Dirichlet and Mixed BCs. On the legend (top-right of the figure), the letter on the left indicates the BC imposed on the field at the inner shell and the letter to the right, indicates the BC imposed on the field at the outer shell. $D$ means Dirichlet BC and static shell, $\tilde{D}$ means Dirichlet BC and moving shell, whereas $N$ means Neumann BC and static shell. We have set $r_{o}=2r_{i}$.} \end{figure} In the limit $r_{i}\gg r_{o}-r_{i}$, we can use the Bessel asymptotic forms for large arguments to derive an analytical expression for the average number of created particles in a particular mode. 
For the case where the field is submitted to Dirichlet-Dirichlet BCs, we obtain \begin{equation} \omega_{lss^{\prime}} \;\;\;\longrightarrow\;\;\; \frac{(s+s^{\prime})\pi}{\left\vert r_{\alpha }-r_{\beta}\right\vert }\label{19}% \end{equation} and \begin{equation} \lim_{\varpi\rightarrow\omega_{lss^{\prime}}}\mathcal{N}_{lms} \;\;\;\longrightarrow\;\;\; \frac{\epsilon^{2}\pi^{2}T^{2}}{4}\frac{r_{\beta}^{2}}{\left( r_{\alpha }-r_{\beta}\right) ^{4}}s^{\prime}s.\label{20}% \end{equation} Analogously, for the field submitted to mixed BCs (\ref{11}), we have \begin{equation} \omega_{lss^{\prime}}\rightarrow\frac{(s+s^{\prime}-1)\pi}{\left\vert r_{\alpha}-r_{\beta}\right\vert }\label{21}% \end{equation} and \begin{equation} \lim_{\varpi\rightarrow\omega_{lss^{\prime}}}\mathcal{N}_{lms}\rightarrow \frac{\epsilon^{2}\pi^{2}T^{2}}{16}\frac{r_{\beta}^{2}}{\left( r_{\alpha }-r_{\beta}\right) ^{4}}(2s^{\prime}-1)(2s-1).\label{22}% \end{equation} Expressions (\ref{19}) and (\ref{20}) correspond to the results for the $1+1$ DCE under Dirichlet-Dirichlet BCs derived in Refs. \cite{DodonovKlimov1996,JungSoh1998,DodonovRevisao2001}, whereas Eqs. (\ref{21}) and (\ref{22}) correspond to the results under mixed BCs presented in \cite{Hushwater1997,Cougo-PintoEtAl1999,AguiarEtAl2003}. These similarities can be related to the fact that the limit $r_{i}\gg r_{o}-r_{i}$ effectively reproduces the plane geometry. \section{Concluding remarks} In this paper we have investigated the dynamical Casimir effect for a massless scalar field within two concentric spherical shells, considering mixed boundary conditions. We have thus complemented previous results presented in Ref. \cite{PascoalEtAl2008}, where the massless scalar field was assumed to satisfy Dirichlet BCs at both shells. We have analyzed the real particle creation phenomenon for the case where only one of the shells is allowed to move, with an arbitrary law of motion for its radius.
In addition, the Dirichlet BC was imposed on the moving shell, while the Neumann BC was assumed on the static one. In our discussion, however, the moving shell could be either the inner one or the outer one. In order to obtain some numerical estimates, and to compare our results with those obtained in Ref. \cite{PascoalEtAl2008}, we chose a particular, but very typical, oscillating motion for the moving shell, in which it starts moving at a certain instant, oscillates with a given frequency, and then suddenly stops. Considering this particular situation, we have identified the resonance conditions for which the number of created particles is most appreciable. A direct inspection of our graphs (see Fig. 4) allows us to draw some conclusions: for both Dirichlet-Dirichlet and mixed BCs, whenever the moving shell is the outer one the average number of created particles is greater (by a factor of order $4$) than in the corresponding case where the inner shell is in motion. This can be understood simply by recalling that the dynamical Casimir effect increases with the area of the moving surface. In other words, the dissipative force that acts on the moving boundary, responsible for converting mechanical energy into field energy (real field {\it quanta}), increases with the area of the moving boundary. Another interesting result that can be extracted from our calculations is that the case with mixed BCs presents lower resonance frequencies than the case with Dirichlet-Dirichlet BCs. This feature can be useful for further experimental investigations of particle creation within the context of the dynamical Casimir effect, since it makes it easier to access the parametric amplification regime.
\section{Introduction} \label{sec:intro} The bending of light by gravitational fields can both magnify and create multiple images of an object \citep{1986ApJ...310..568B, 1996astro.ph..6001N}. The gravitational field of a galaxy creates `macro-images' of a distant source such as a quasar. Coherent trends in the magnifications of all the macro-images are due to intrinsic source variability. But, these macro-images are each broken into unresolvable `micro-images' by the individual stars in a galaxy \citep{1979Natur.282..561C, 1986ApJ...301..503P}. Uncorrelated fluctuations among the lightcurves of the macro-images are due to changes in the magnifications of individual micro-images. Of particular interest in the study of gravitationally lensed quasars are caustic crossings, events where the number of micro-images changes by two and the source becomes highly magnified. Such events can, e.g., reveal information about the size and profile of the light emitting region \citep{1991AJ....102..864W, 2018MNRAS.475.1925T}. A source crossing a fold caustic is accompanied by the creation or annihilation of a pair of images somewhere along a critical curve. One of these images is a saddlepoint of the light travel time, while the other is a minimum. Together, the newly created images dominate the total magnification (which also contains contributions due to other micro-images) near the caustic. A Taylor expansion of the lens equation in the vicinity of a critical curve allows one to find approximations for the magnifications of these two new, bright, images. To leading order, the magnification $\mu$ of a point source near a fold caustic can be approximated as proportional to some flux factor $K$ (also sometimes called the caustic strength), and as inversely proportional to the square root of the source distance normal to the caustic $d$ \citep{1984A&A...130..157C,1989A&A...221....1K,2002ApJ...574..970G}, i.e. \begin{equation} \mu=\frac{K}{\sqrt{d}}. 
\label{eq:std-approx} \end{equation} This approximation (that of the `straight fold caustic') is commonly used throughout the literature. Studies of caustic crossing events in the lightcurves of micro-lensed AGN often rely on the convolution of some source luminosity profile with this approximation for the magnification of a point source \citep{1999MNRAS.302...68F,2012MNRAS.423..676A,2015ApJ...814L..26M}. To be more precise, it is the sum of the magnifications of the two newly created images which is approximated -- this leading order expression works for each image individually as well, with an appropriate factor of $1/2$. However, while saddlepoint magnifications can take on any value, minima are required to be of unit or higher magnification \citep{1986ApJ...310..568B}. One therefore expects the approximation $\mu=K/\sqrt{d}$ to break down at sufficiently large distances ($d_{break}\approx K^2$).\footnote{If each of the two new images is expected to contribute roughly half of the magnification for a source near a caustic, then there should actually be a breakdown approximately when $\mu_{minimum}=1$, i.e. when $\mu=\mu_{minimum}+\mu_{saddle}=2\rightarrow d=\frac{K^2}{4}$.} Few authors have considered alternative approximations for the image magnifications. \citet{1999MNRAS.302...68F} derive a similar leading order approximation that takes into account the curvature of the caustic, giving the so-called `parabolic fold caustic' approximation. \citet{2005ApJ...635...35K} and \citet{2011MNRAS.417..541A} have derived higher order approximations for the magnifications. Such higher order approximations necessarily introduce more parameters than $K$ in order to describe caustic crossing events. These other parameters have their own uncertainties as well when measured, making it more difficult to determine the properties of interest (e.g. source size). 
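The breakdown scale quoted above follows from a few lines of arithmetic. The sketch below (with an illustrative caustic strength $K$, not a value measured here) checks that the straight fold approximation reaches the combined magnification $\mu=2$, where the unit-magnification bound on the minimum image starts to be violated, at $d=K^2/4$:

```python
import numpy as np

# Straight fold approximation for the combined magnification of the two
# newly created images (eq. std-approx): mu = K / sqrt(d).
K = 0.5  # illustrative caustic strength, not taken from the simulations

def mu_fold(d):
    """Total magnification of the image pair at normal distance d."""
    return K / np.sqrt(d)

# If each image carries roughly half the flux, the minimum image reaches
# the unit-magnification bound when mu = 2, i.e. at d_break = K**2 / 4.
d_break = K**2 / 4
print(mu_fold(d_break))  # -> 2.0
```
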
It is understandable then why many authors work only to leading order, but clearly there are inadequacies in doing so (be it that the approximation is only valid for a small portion of the regime where it is commonly applied, or that it fails to keep the micro-minima above unit magnification). In what follows we examine the behavior of individual micro-image magnifications near critical curves for a point source near fold caustics, for the parameters of QSO 2237+0305 (Huchra's lens, \citealt{1985AJ.....90..691H}). Our results show clear deviations from the leading order approximation by the time $d=K^2$ is reached. We examine two higher order approximations from \citet{2005ApJ...635...35K} and \citet{2011MNRAS.417..541A} and compare these approximations to the actual image magnifications. We provide some statistics on the behavior of the micro-images (namely, the micro-minima) in our study. Additionally, we present statistics on the parameters present in the higher order approximations with some discussion. Finally, we briefly show how the `shape profile' of a uniform disk crossing a fold caustic is altered under one such higher order approximation. \section{Simulation setup} \label{sec:sim_setup} We consider a micro-lensing star field such that the lens equation in the vicinity of a macro-image relating the source position $\mathbfit{y}$ and image position $\mathbfit{x}$ takes the form \begin{equation} \mathbfit{y}=\begin{pmatrix}1-\kappa_{s}+\gamma&0\\0&1-\kappa_{s}-\gamma\end{pmatrix}\mathbfit{x}-\theta_{E}^{2}\sum_{i=1}^{n}m_{i}\frac{(\mathbfit{x}-\mathbfit{x}_i)}{|\mathbfit{x}-\mathbfit{x}_i|^2}, \label{eq:lenseq} \end{equation} where the $\mathbfit{x}_i$ are the positions of the $n$ stars with masses $m_i$ (measured in units of some mass $M$ that determines the size of the Einstein ring $\theta_E$). The shear $\gamma$ is due to the mass distribution of the rest of the galaxy far away from the macro-image. 
The total surface mass density $\kappa$ is comprised of a smooth component, $\kappa_{s}$, and a portion due to compact matter $\kappa_{\star}$. We take all of our surface mass density $\kappa=\kappa_{s}+\kappa_{\star}$ to be distributed in compact objects, so that $\kappa=\kappa_{\star}$ and $\kappa_{s}=0$. Additionally, we let $\theta_E$ be our unit distance in the image plane, and we let all of our objects be of the unit mass that determines this distance. Equation (\ref{eq:lenseq}) is then more cleanly written as \begin{equation} \mathbfit{y}=\begin{pmatrix}1+\gamma&0\\0&1-\gamma\end{pmatrix}\mathbfit{x}-\sum_{i=1}^n\frac{(\mathbfit{x}-\mathbfit{x}_i)}{|\mathbfit{x}-\mathbfit{x}_i|^2}. \label{eq:lenseq2} \end{equation} We use the surface mass density and shear parameters of QSO 2237+0305 from \citet{2010ApJ...712..658P} for our simulations. These values are given in Table \ref{tab:image_parameters}. \begin{table} \centering \caption{Convergence and shear for the macro-images of QSO 2237+0305.} \begin{tabular}{|c|c|c|c|c|} \hline image & A & B & C & D\\ \hline $\kappa$ & 0.40 & 0.38 & 0.73 & 0.62\\ \hline $\gamma$ & 0.40 & 0.39 & 0.72 & 0.62\\ \hline \end{tabular} \label{tab:image_parameters} \end{table} We spread $\approx1000$ stars within a circular region for each case, and use the parametric representation of the critical curves from \citet{1990A&A...236..311W} to precisely locate the critical curves. We consider only those critical curves that lie within a smaller region (to minimize edge effects due to asymmetries in the shear from stars), and their corresponding caustics found by mapping through the lens equation. The critical curves and caustics for the parameters of image C can be seen in Figs. \ref{fig:cc} and \ref{fig:caustics} respectively. 
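For concreteness, a minimal numerical sketch of the lens mapping of equation (\ref{eq:lenseq2}) and of the inverse magnification $\det\mathbfss{A}$ (here obtained by finite differences rather than analytic derivatives; the star configuration and shear below are a toy example, not one of our realizations):

```python
import numpy as np

def lens_map(x, stars, gamma):
    """Source position y for image position x, eq. (lenseq2):
    unit-mass point lenses plus external shear, distances in theta_E."""
    dx = x - stars                        # (n, 2) offsets to the stars
    r2 = np.sum(dx**2, axis=1)            # squared distances
    deflection = np.sum(dx / r2[:, None], axis=0)
    shear = np.array([[1.0 + gamma, 0.0], [0.0, 1.0 - gamma]])
    return shear @ x - deflection

def inverse_magnification(x, stars, gamma, h=1e-6):
    """det A with A = dy/dx, by central finite differences.
    Vanishes on the critical curves."""
    A = np.empty((2, 2))
    for j in range(2):
        e = np.zeros(2)
        e[j] = h
        A[:, j] = (lens_map(x + e, stars, gamma)
                   - lens_map(x - e, stars, gamma)) / (2.0 * h)
    return np.linalg.det(A)

# Far from all stars the mapping reduces to the shear matrix alone,
# so det A -> (1 + gamma)(1 - gamma) = 1 - gamma**2.
stars = np.array([[100.0, 100.0]])        # toy configuration
print(inverse_magnification(np.zeros(2), stars, gamma=0.4))  # ~ 0.84
```
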
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figures/image_c_critical_curves.pdf} \caption{The critical curves for image C ($\kappa=0.73$, $\gamma=0.72$) that were used in our simulations are shown as solid black lines, while the thinner blue lines are those of the surrounding network that were not used. The solid red circle is the region within which the critical curves we considered lie.} \label{fig:cc} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/image_c_caustics.pdf} \caption{The caustics for image C ($\kappa=0.73$, $\gamma=0.72$) from which we drew our samples are shown as solid black lines, while the thinner blue lines are those of the surrounding network that were not used. The solid red ellipse is the smoothed-out matter mapping of the circle in Fig. \ref{fig:cc}. The caustics considered lie for the most part within this ellipse.} \label{fig:caustics} \end{figure*} \section{Examination of micro-image magnifications} \label{sec:micro_image_mags} We randomly create lines through the source plane to represent different paths that the source might move along. We determine the intersection of these lines with the caustics, and use the intersection points as seeds for our study (using enough lines to generate $\approx50$ seed points). We additionally find the corresponding critical curve locations. The direction of increasing inverse magnification at each point along the critical curves, $\nabla\det\mathbfss{A}$ (where the Jacobian of the lens mapping $\mathbfss{A}=\frac{\partial\mathbfit{y}}{\partial\mathbfit{x}}$ is the inverse magnification matrix, and $\det\mathbfss{A}=0$ defines the locus of critical curves), is both normal to the critical curve and tells us which side of the critical curve the newly created micro-minimum will lie on. 
Rotating $\nabla\det\mathbfss{A}$ counterclockwise by $\frac{\pi}{2}$ gives a tangent to the critical curve, and mapping this vector to the source plane with $\mathbfss{A}$ gives a tangent to the caustic. Further applying a $-\frac{\pi}{2}$ rotation to the resulting vector then gives the direction which is both inside and normal to the caustic. With this information, and some further knowledge and manipulation of the lens equation near critical points \citep{1992grle.book.....S}, for each pair of critical curve/caustic seed points we know: \begin{enumerate} \item the source position that induces the creation of a new micro-image pair, \item the direction, normal to and inside the caustic, along which we want to follow the source, \item which sides of the critical curve the newly created micro-minimum and micro-saddle lie on, \item the distance from the critical curve to each newly created image for a source offset normal to the caustic. \end{enumerate} We follow a source that moves along the normal direction from one of the caustic seed points, tracking the positions of the newly created micro-images as they emerge from the corresponding critical curve seed point. The magnifications of the micro-images are calculated at each step, and stored with the source-caustic distance. We note here that we could have tracked the source as it moved along the random lines which we created to determine our seed positions. However, since we are interested in the behavior of the magnifications compared to certain approximations, we choose to follow the source along the normal direction from the caustic. For the macro-saddles (images C and D), the micro-minima must have a finite `lifetime', and the tracking process is terminated when they annihilate. This is not necessarily true for the macro-minima (images A and B), and in the course of our simulations there were cases where micro-minima appeared abnormally long-lived.
We chose to ignore these cases, thus restricting ourselves to always examining micro-minima which eventually annihilated.\footnote{There were a small number of cases where the micro-minima in images A and B survived for lifetimes of order (in the notation of Section \ref{sec:micro_image_stats}) $L=k^2\cdot 10^4-k^2\cdot 10^5$. We chose to toss out such samples.} In all cases, the process for a micro-saddle terminates when the micro-saddle either annihilates or has a magnification less than $10^{-3}$. Kayser and Witt provide a simplified form of \citeauthor{1984A&A...130..157C}'s 1984 formula for the caustic strength $K$, giving \begin{equation} K=\sqrt{\frac{2}{|\mathbfit{T}_{\zeta}|}} \label{eq:str_1} \end{equation} where \begin{equation} \mathbfit{T}_{\zeta}=\mathbfss{A}\cdot \begin{pmatrix}0&-1\\1&0\end{pmatrix}\cdot\nabla_{\mathbfit{x}}\det\mathbfss{A}=\mathbfss{A}\begin{pmatrix}-\frac{\partial\det\mathbfss{A}}{\partial\mathbfit{x}_2}\\\frac{\partial\det\mathbfss{A}}{\partial\mathbfit{x}_1}\end{pmatrix} \label{eq:str_2} \end{equation} is the tangential vector of the caustic \citep{1989A&A...221....1K,1990A&A...236..311W}. However, $K$ is the strength for the combined flux of both the created micro-minimum and micro-saddle. We are interested in the behavior of individual images, for which the strength is simply $K/2$ \citep{1992grle.book.....S} that we shall designate here as lowercase $k$. We calculate the single image caustic strength $k$ at each of our seed critical curve positions, and scale our distances from the caustic by $k^2$. We compare magnification versus scaled distance to the straight fold caustic approximation $\mu=k/\sqrt{d}$ of equation (\ref{eq:std-approx}). \begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/image_c_minima_magnification_vs_distance.pdf} \includegraphics[width=\textwidth]{Figures/image_c_saddles_magnification_vs_distance.pdf} \caption{Magnification vs. 
distance normal from the caustic (scaled by caustic strength). The solid red line shows the standard straight fold approximation of equation (\ref{eq:std-approx}), $\mu=k/\sqrt{d}$. The black lines represent the (top) micro-minima and (bottom) micro-saddles of image C whose magnifications we tracked as they emerged from a critical curve. The blue line is the magnification of the (top) micro-minimum and (bottom) micro-saddle that emerges when a source moves up the symmetry axis of the `deltoid' caustic for a single star perturbing a macro-saddle of the same magnification as image C. Note that for this single-star perturber, the minimum eventually annihilates at the cusp of the deltoid, and the saddle asymptotically becomes the macro-image.} \label{fig:mag_vs_distance} \end{figure*} Log-log plots of the results for image C can be seen in Fig. \ref{fig:mag_vs_distance}. The results for images A, B, and D display similar qualitative features. Our results show deviations of the magnifications of micro-images from the approximation $\mu=k/\sqrt{d}$ (shown in the figure as a solid red line) at a distance of $d=k^2$, with noticeable deviation appearing as early as $\log(d/k^2)=-1$. We additionally show (as a blue line) the magnification of the micro-images that appear when a source moves up the symmetry axis of the `deltoid' caustic for a single star perturbing a macro-saddle \citep{1979Natur.282..561C, 1984A&A...132..168C}.\footnote{We chose a macro-saddle with the same macro-magnification as image C. For $\kappa_{\star}=0$, this requires $\gamma>1$.} It is worth pointing out that for our single star perturber, the micro-minimum typically decayed \textit{faster} than in the high stellar density case, and the micro-saddle decayed \textit{slower}.
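The caustic strength computation of equations (\ref{eq:str_1}) and (\ref{eq:str_2}) reduces to a short piece of linear algebra; a sketch with made-up input values (in practice $\mathbfss{A}$ and $\nabla\det\mathbfss{A}$ come from the star field at a critical point):

```python
import numpy as np

def single_image_strength(A, grad_detA):
    """Single-image caustic strength k = K/2, where K = sqrt(2/|T_zeta|)
    and T_zeta = A . R(pi/2) . grad(det A) is the tangential vector of
    the caustic (eqs. str_1 and str_2)."""
    rot90 = np.array([[0.0, -1.0], [1.0, 0.0]])   # counterclockwise pi/2
    T_zeta = A @ rot90 @ grad_detA
    K = np.sqrt(2.0 / np.linalg.norm(T_zeta))
    return K / 2.0

# Made-up values at a hypothetical critical point (where det A = 0):
A = np.array([[0.0, 0.0], [0.0, 2.0]])
k = single_image_strength(A, np.array([1.0, 0.0]))
print(k)  # -> 0.5
```
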
\section{Statistics on the `lifetimes' of micro-minima} \label{sec:micro_image_stats} Our examination of the magnifications of micro-minima gives us information as well about the typical `lifetime', which we designate by $L$, of a minimum (in units of $k^2$ for its particular point of creation). We additionally examine the lowest magnifications $\mu_{low}$ that the minima reach, and the distance to lowest magnification $d_{low}$ (again rendered dimensionless by $k^2$). Again as noted in Section \ref{sec:micro_image_mags}, for images C and D, the micro-minima must have a finite lifetime, while this is not strictly true for images A and B. For images A and B, we only consider those minima which displayed a finite lifetime. Fig. \ref{fig:mag_vs_distance} shows that the micro-saddles may decay down to very low magnifications, or meet another micro-minimum and annihilate. We do not present statistics on the lifetimes of those micro-saddles which annihilated, but note that for image C half of the micro-saddles decayed to very small magnifications, and half were later annihilated. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figures/image_c_log_lifetime_histogram.pdf} \caption{Histogram of logarithms of micro-minima lifetimes $L$ for image C.} \label{fig:log_lifetime_histogram} \end{figure} Fig. \ref{fig:log_lifetime_histogram} gives a histogram of $\log L$ for the micro-minima of image C. We present values for the median and mean of $L$ in Table \ref{tab:min_life_stats}. Fig. \ref{fig:log_lowest_min} shows a histogram of the logarithm of $\mu_{low}$ for image C. We provide values for the median and mean of $\mu_{low}$ and $d_{low}$ in Table \ref{tab:min_mag_stats}. 
\begin{table} \centering \caption{Statistics for the lifetime $L$ (rendered dimensionless by $k^2$) of micro-minima.} \begin{tabular}{|c|c|c|c|c|} \hline image & A & B & C & D\\ \hline Median $L$ & 16.561 & 5.517 & 1.562 & 3.535\\ \hline $\langle L \rangle$ & 26.393 & 13.159 & 5.042 & 8.813\\ \hline \end{tabular} \label{tab:min_life_stats} \end{table} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figures/image_c_log_lowest_min.pdf} \caption{Histogram of logarithms of lowest micro-minima magnifications $\mu_{low}$ for image C.} \label{fig:log_lowest_min} \end{figure} \begin{table} \centering \caption{Statistics for the lowest magnifications $\mu_{low}$ of micro-minima.} \begin{tabular}{|c|c|c|c|c|} \hline image&\multicolumn{2}{|c|}{A}&\multicolumn{2}{|c|}{B}\\ \hline $var$ & $\mu_{low}$ & $d_{low}$ & $\mu_{low}$ & $d_{low}$\\ \hline Median $var$ & 1.031 & 6.361 & 1.128 & 2.264\\ \hline $\langle var \rangle$ & 2.173 & 11.522 & 1.644 & 2.970\\ \hline \hline image&\multicolumn{2}{|c|}{C}&\multicolumn{2}{|c|}{D}\\ \hline $var$ & $\mu_{low}$ & $d_{low}$ & $\mu_{low}$ & $d_{low}$\\ \hline Median $var$ & 2.130 & 0.896 & 1.269 & 1.734\\ \hline $\langle var \rangle$ & 6.919 & 1.964 & 3.960 & 5.115\\ \hline \end{tabular} \label{tab:min_mag_stats} \end{table} \section{Distributions of caustic strength} \label{sec:caustic_strengths} Witt's parametric representation of the critical curves discretizes the critical curves (and hence the caustics) into sets of points that make polygons which (for appropriately small step sizes of some parameter) appear smooth, as in Figs. \ref{fig:cc} and \ref{fig:caustics}. For every point we found along the caustics, we calculate the single image caustic strength $k$ at the corresponding critical curve location. Under the assumption that $k$ can be considered constant over the small caustic length interval between neighboring points, we can calculate the probability density $p(k)$ for the caustic strength in the source plane. 
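The assumption of locally constant $k$ turns the density estimate into a length-weighted histogram over the caustic polygon. A sketch (the helper name, toy caustic, and binning choice are ours):

```python
import numpy as np

def strength_density(points, k_values, bins=30):
    """Length-weighted density p(log k) along a polygonal caustic.

    points   : (n, 2) ordered caustic points (polygon vertices)
    k_values : (n,) caustic strength at each point, taken constant over
               the short segment to the next point
    """
    seg = np.diff(points, axis=0)
    lengths = np.hypot(seg[:, 0], seg[:, 1])          # segment lengths
    logk = np.log10(k_values[:-1])                    # strength per segment
    density, edges = np.histogram(logk, bins=bins,
                                  weights=lengths, density=True)
    return density, edges

# Toy caustic: a circle with a smoothly varying strength along it.
t = np.linspace(0.0, 2.0 * np.pi, 200)
points = np.column_stack([np.cos(t), np.sin(t)])
k_values = 0.2 + 0.1 * np.cos(t)**2
density, edges = strength_density(points, k_values)
# density=True normalizes the histogram so that p(log k) integrates to 1.
```
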
We chose to calculate $p(\log k)$ as well, shown in Fig. \ref{fig:probability_density}. \begin{table} \centering \caption{Statistics of the caustic strength $k$ for QSO 2237+0305.} \begin{tabular}{|c|c|c|c|c|} \hline image & A & B & C & D\\ \hline $\langle k \rangle$ & 0.364 & 0.382 & 0.306 & 0.320\\ \hline $\langle k^2\rangle$ & 0.188 & 0.258 & 0.138 & 0.142\\ \hline $\sigma_k$ & 0.235 & 0.334 & 0.210 & 0.200\\ \hline $\sigma_{k^2}$ & 13.955 & 5.805 & 1.269 & 2.406\\ \hline $\langle \log k\rangle$ & -0.493 & -0.482 & -0.575 & -0.549\\ \hline $\sigma_{\log k}$ & 0.212 & 0.230 & 0.224 & 0.214\\ \hline \end{tabular} \label{tab:str_stats} \end{table} For the parameters of QSO 2237+0305, we calculate a mean value $\langle k\rangle$, along with $\langle k^2\rangle$ and $\sigma_k=\sqrt{\langle k^2\rangle-\langle k\rangle^2}$. We also calculated $\langle k^4\rangle$, but only for the purpose of finding $\sigma_{k^2}=\sqrt{\langle k^4\rangle-\langle k^2\rangle^2}$. Our results are presented in Table \ref{tab:str_stats}, along with the mean and standard deviation for $\log k$. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{Figures/image_a_p_vs_log_k.pdf} \includegraphics[width=0.4\textwidth]{Figures/image_b_p_vs_log_k.pdf} \includegraphics[width=0.4\textwidth]{Figures/image_c_p_vs_log_k.pdf} \includegraphics[width=0.4\textwidth]{Figures/image_d_p_vs_log_k.pdf} \caption{Caustic strength probability density $p(\log k)$ for (from top to bottom) image A ($\kappa=0.40=\gamma$), image B ($\kappa=0.38$, $\gamma=0.39$), image C ($\kappa=0.73$, $\gamma=0.72$), and image D ($\kappa=0.62=\gamma$). We show $\langle\log k\rangle\pm 3\sigma_{\log k}$.} \label{fig:probability_density} \end{figure} While Fig. \ref{fig:probability_density} provides one look at the distribution of caustic strengths for image C, we show as well in Fig.
\ref{fig:caustic_strength_color_plot} the caustic network of image C where each point has been color-coded by the value of the caustic strength $k$. \begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/image_c_caustic_strength_color_plot_total.pdf} \caption{The portion of the caustic network of image C that was used to calculate $p(k)$ (i.e. the solid black caustics in Fig. \ref{fig:caustics}) has been color coded according to the caustic strength at each point. Subsequent zooms of regions are shown in the bottom row.} \label{fig:caustic_strength_color_plot} \end{figure*} \citet{1990A&A...236..311W} has calculated distributions of the caustic strength for the case of zero external shear and a range of surface mass density values corresponding to macro-minima and maxima. \citet{1998JKAS...31...27L} have calculated distributions of the caustic strength for a binary lens with external shear. \citet{1993A&A...268..501W} provide values of $\langle K\rangle$ for (slightly different parameters of) QSO 2237+0305 as well, though they do not show the underlying distributions. The authors are unaware of anywhere in the literature where distributions of the caustic strength for values of surface mass density and shear corresponding to saddlepoints are given. \section{Higher order magnification approximations} \label{sec:higher_order_approx} \citet{2005ApJ...635...35K} and \citet{2011MNRAS.417..541A} provide higher order approximations for the magnifications of images near critical curves. 
Through a Taylor expansion of the lens equation in the vicinity of a critical curve that produces a fold caustic, \citet{2005ApJ...635...35K} derived \begin{equation} \label{eq:keeton_et_al_approx} \mu_{\pm}^{-1}=\pm a\cdot\sqrt{d}+b\cdot d \end{equation}where the plus and minus symbols denote the signed magnifications of minima and saddles respectively, $d$ is normal to the caustic, \begin{equation} a=\sqrt{2\tau_{22}^2\tau_{111}}=\frac{1}{k}, \end{equation}and \begin{equation} b=\frac{2}{\tau_{111}}\Big(\frac{1}{3}\tau_{22}\tau_{1111}-\tau_{112}^2+\tau_{111}\tau_{122}\Big). \end{equation} The variable $\tau=\frac{1}{2}(\mathbfit{x}-\mathbfit{y})^2-\psi(\mathbfit{x})$ is the gravitational time delay \citep{1986ApJ...310..568B}. Subscripts denote derivatives with respect to the first or second coordinate of the image plane, evaluated at the origin (around which we take the Taylor expansion of the lens equation to occur). We have written $a$ and $b$ such that at the origin, our caustic normal points along the abscissa axis of some coordinate system.\footnote{This is a rotation of the coordinate system by $-\pi/2$ from that which appears in \citet{2005ApJ...635...35K} and \citet{2011MNRAS.417..541A}. This was chosen to be consistent with the normal and tangent vectors from \citet{1990A&A...236..311W}.} In order to evaluate $a$ and $b$, we can first calculate all the derivatives up to order 4 at each critical curve seed in the global coordinate system of eq. (\ref{eq:lenseq2}) that aligns with the external shear.\footnote{Many simplifications arise due to commutativity of derivatives and the fact that $\tau_{11}+\tau_{22}=2$ for our point mass lens model.} We then rotate our coordinate basis to a local system at each seed caustic point where the normal points along the abscissa axis, and determine the necessary derivatives present in $a$ and $b$. 
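To illustrate the role of the $b\cdot d$ term, the sketch below evaluates eq. (\ref{eq:keeton_et_al_approx}) for illustrative coefficients of roughly the magnitudes tabulated later: with $a$ and $b$ of opposite sign, the micro-minimum's inverse magnification returns to zero at the finite distance $d=(a/b)^2$, whereas the leading term alone never annihilates it.

```python
import numpy as np

def mu_pair(d, a, b):
    """Magnifications of the newly created minimum and saddle from
    mu_{+/-}^{-1} = +/- a sqrt(d) + b d (eq. keeton_et_al_approx)."""
    root = np.sqrt(d)
    mu_min = 1.0 / (a * root + b * d)
    mu_sad = -1.0 / (-a * root + b * d)   # report the saddle's |mu|
    return mu_min, mu_sad

a, b = 4.0, -12.0   # illustrative coefficients, opposite in sign

# Close to the caustic the b term is negligible and both images follow
# the straight fold form 1/(a sqrt(d)):
mu_min, mu_sad = mu_pair(1e-8, a, b)

# The minimum formally annihilates (mu_+^{-1} = 0) at d = (a/b)**2:
d_ann = (a / b)**2
print(a * np.sqrt(d_ann) + b * d_ann)   # ~ 0 (up to roundoff)
```
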
Results from \citet{2011MNRAS.417..541A} are in agreement with \citet{2005ApJ...635...35K}, and additionally contain a next order term. We do not present the resulting lengthy equations here, but note that the next order terms contain not only the distance normal to the caustic, but also the tangential distance.\footnote{However, as we only examine a source moving normal to the caustic, this tangential distance does not come into play for our analysis. This is also why we do not dwell on the `parabolic fold caustic' approximation of \citet{1999MNRAS.302...68F}, as along the normal direction it is equal to that of the straight fold caustic.} We perform a process similar to that described above for $a$ and $b$ in order to find the coefficients present in their approximation. We can compare the actual magnifications $\mu$ of the micro-images in our simulations with the predictions $\mu_{approx.}$ of these two higher order approximations. We present the results for micro-minima in Fig. \ref{fig:error_plot}, and note that errors for the micro-saddles display similar behavior. \begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/image_c_error_plot.pdf} \caption{Differences between actual magnification of micro-minima in image C and predictions for three different approximations.} \label{fig:error_plot} \end{figure*} The middle and right plots of Fig. \ref{fig:error_plot} show considerable scatter in the error for small values of $d/k^2$. This is likely due to the numerical precision of our simulations very close to the critical curves -- however, the error in magnification is small (of order $10^{-4}-10^{-5}$) when the actual magnification itself is very large (of order $10^3$)! There are large downward spikes visible where the error goes to $0$. As in the top of Fig.
\ref{fig:mag_vs_distance} where the blue curve of the micro-minimum intersects the straight red line around $\log(d/k^2)=-0.1$, the approximation formally gives the exact magnification, though this is only due to the difference changing sign from positive to negative. Positions where the error goes to infinity are locations where either the micro-minimum annihilates, or the approximation gives infinite magnification. The approximation may become infinite before or after the minimum annihilates, depending on the values and signs of the coefficients in the approximation. If after, there is only one infinite spike in the error (when the minimum annihilates). If before, there are multiple spikes (one each time the approximation diverges, and a final one when the minimum annihilates). In general, Fig. \ref{fig:error_plot} makes it clear that including the next higher order terms for the magnification reduces the error significantly for small values of $d/k^2$, as is expected. However, none of the approximations is consistently well suited to the regime where $d/k^2\approx 1$ for the micro-minima. \section{Statistics for a higher order approximation} \label{sec:higher_order_approx_stats} Just as one can calculate the distribution of $k$ along the caustics, one can do so for $a$ and $b$ in the approximation \begin{equation} \mu_\pm^{-1}=\pm a\cdot\sqrt{d}+b\cdot d. \end{equation} We provide the results for these calculations in Table \ref{tab:higher_order_coeff_stats}, including $a^2$ as well. We can then consider how an `average' micro-image might behave with these parameters. \begin{table} \centering \caption{Statistics for coefficients of eq.
\ref{eq:keeton_et_al_approx} for QSO 2237+0305.} \begin{tabular}{|c|c|c|c|c|} \hline image & A & B & C & D\\ \hline $\langle a \rangle$ & 3.516 & 3.511 & 4.301 & 4.029\\ \hline $\langle a^2\rangle$ & 16.665 & 17.855 & 26.036 & 23.343\\ \hline $\sigma_a$ & 2.075 & 2.351 & 2.745 & 2.666\\ \hline $\sigma_{a^2}$ & 55.969 & 123.591 & 245.028 & 243.325\\ \hline $\langle b\rangle$ & -8.134 & -6.963 & -13.031 & -12.041\\ \hline $\sigma_b$ & 1383.166 & 1027.542 & 521.262 & 3677.847\\ \hline \end{tabular} \label{tab:higher_order_coeff_stats} \end{table} We include $\langle a^2\rangle$ because of possible choices one might make to non-dimensionalize distances: one can take $d\cdot\langle a\rangle^2$, or $d\cdot\langle a^2\rangle$.\footnote{One could also take $d\cdot\langle b\rangle$, though this seems less useful to the authors.} Numerically, there might be slight differences based upon this choice. We provide results for the various combinations resulting from each choice in the following discussions. 
The fact that we found $\langle a\rangle$ and $\langle b\rangle$ to be of opposite sign suggests that a micro-minimum with \begin{equation} \label{eq:avg_keeton_et_al_approx} \mu_+^{-1}=\langle a\rangle\sqrt{d}+\langle b\rangle d \end{equation} has an effective mean lifetime $\langle d\rangle$ of \begin{equation} \langle d\rangle=\Big(-\frac{\langle a\rangle}{\langle b\rangle}\Big)^2, \end{equation} as $\mu^{-1}=0$ at this distance.\footnote{We note that there is not \textit{always} such an effective lifetime, as the actual value of $a$ and $b$ along the critical curve may be of the same sign.} This distance is in units of $\theta_E$, but can be rendered into a dimensionless lifetime \begin{equation} \label{eq:approx_lifetime} \langle L\rangle=a^2\langle d\rangle \end{equation} (where one might choose $\langle a\rangle^2$ or $\langle a^2\rangle$ as the actual multiplier for $\langle d\rangle$) to compare with the results of Section \ref{sec:micro_image_stats}, Table \ref{tab:min_life_stats}. There are 3 unique combinations for the variables in Table \ref{tab:higher_order_coeff_stats} that provide such a dimensionless lifetime, and their results are presented in Table \ref{tab:approx_mean_lifetimes}. \begin{table} \centering \caption{Mean micro-minimum lifetime $\langle L\rangle$ (rendered dimensionless by $a$) for QSO 2237+0305.} \begin{tabular}{|c|c|c|c|c|} \hline image & A & B & C & D\\ \hline $\frac{\langle a\rangle^2}{\langle b\rangle^2}\langle a\rangle^2$ & 2.310 & 3.134 & 2.015 & 1.817\\ \hline $\frac{\langle a\rangle^2}{\langle b\rangle^2}\langle a^2\rangle$ & 3.114 & 4.540 & 2.836 & 2.614 \\ \hline $\frac{\langle a^2\rangle}{\langle b\rangle^2}\langle a^2\rangle$ & 4.198 & 6.575 & 3.992 & 3.758\\ \hline \end{tabular} \label{tab:approx_mean_lifetimes} \end{table} Similarly, one can find that the minimum magnification of eq. 
(\ref{eq:keeton_et_al_approx}) (if $a$ and $b$ are of opposite sign) is \begin{equation} \label{eq:approx_mu_low} \mu_{low} = -\frac{4b}{a^2} \end{equation} and occurs at a dimensionless \begin{equation} \label{eq:approx_d_low} d_{low} = \frac{a^4}{4b^2}. \end{equation} Values for $\langle\mu_{low}\rangle$ are given in Table \ref{tab:approx_mean_lowest_min} for the choices of $\langle a^2\rangle$ and $\langle a\rangle^2$. Values for $\langle d_{low}\rangle$ are easily found from Table \ref{tab:approx_mean_lifetimes} since $\langle d_{low}\rangle=\frac{1}{4}\langle L\rangle$. \begin{table} \centering \caption{Mean micro-minima lowest magnification $\langle \mu_{low}\rangle$ for QSO 2237+0305.} \begin{tabular}{|c|c|c|c|c|} \hline image & A & B & C & D\\ \hline $-\frac{4\langle b\rangle}{\langle a\rangle^2}$ & 2.632 & 2.257 & 2.818 & 2.967\\ \hline $-\frac{4\langle b\rangle}{\langle a^2\rangle}$ & 1.952 & 1.560 & 2.263 & 2.063\\ \hline \end{tabular} \label{tab:approx_mean_lowest_min} \end{table} The approximations for $\langle L\rangle$ and $\langle d_{low}\rangle$ from eqs. (\ref{eq:approx_lifetime}) and (\ref{eq:approx_d_low}) respectively are in general lower than the results of Section \ref{sec:micro_image_stats}. This gives values of $\langle\mu_{low}\rangle$ slightly higher than in Section \ref{sec:micro_image_stats}. Altogether, this is a reminder that once a minimum begins to move away from its point of creation, the presence of other caustics plays an increasingly important role in its behavior. This makes its behavior highly unpredictable based solely on the few parameters one might ascertain when the minimum comes into being. The qualitative result from our simulations is that micro-minima tend on average to drop in magnification more slowly than one expects from the leading order approximation $\mu=k/\sqrt{d}$, but slightly faster than one might anticipate from any predictions based on higher order approximations.
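For concreteness, the tabulated entries for image A can be reproduced directly from the moments in Table \ref{tab:higher_order_coeff_stats}. A short sketch (ours, not the paper's pipeline) of the arithmetic:

```python
# Sketch (ours): reproduce the image-A entries of the lifetime and
# lowest-magnification tables from <a>, <a^2>, and <b>, using
#   <d>    = (<a>/<b>)^2        effective lifetime, in units of theta_E
#   <L>    = <a>^2 * <d>        dimensionless lifetime (one of the choices)
#   mu_low = -4<b>/<a>^2        lowest magnification of the micro-minimum
#   d_low  = <L>/4              dimensionless distance of that minimum
import math

mean_a, mean_a2, mean_b = 3.516, 16.665, -8.134   # image A

d_eff = (mean_a / mean_b) ** 2
L = mean_a ** 2 * d_eff                # choice <a>^2 * <d>
L_alt = mean_a2 * d_eff                # choice <a^2> * <d>
mu_low = -4 * mean_b / mean_a ** 2
d_low = L / 4

assert math.isclose(L, 2.310, abs_tol=0.005)       # first row of the table
assert math.isclose(L_alt, 3.114, abs_tol=0.005)   # second row
assert math.isclose(mu_low, 2.632, abs_tol=0.005)  # lowest magnification
```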
\begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/image_c_minima_magnification_vs_distance_with_approx.pdf} \includegraphics[width=\textwidth]{Figures/image_c_saddles_magnification_vs_distance_with_approx.pdf} \caption{We show again the magnifications of the micro-minima and micro-saddles of image C, with the addition of the next order approximation of eq. (\ref{eq:keeton_et_al_approx}) for coefficients with the average values indicated from Table \ref{tab:higher_order_coeff_stats}.} \label{fig:mag_vs_distance_with_approx} \end{figure*} Finally, we also show in Fig. \ref{fig:mag_vs_distance_with_approx} the two higher approximations $\mu_\pm^{-1}=\pm\langle a\rangle\sqrt{d}+\langle b\rangle d$ and $\mu_\pm^{-1}=\pm\sqrt{\langle a^2\rangle d}+\langle b\rangle d$ for image C, along with the actual micro-image magnifications. Much as we scale the distances for the micro-images by using the local value of $k$, we scale the distances for these two approximations by the appropriate local value of $k=1/\langle a\rangle$ or $k^2=1/\langle a^2\rangle$ respectively. \section{Shape profiles of caustic crossing events with a higher order approximation} \label{sec:alt_approx} Many analyses of caustic crossing events designed to examine the size of the light emitting region of an AGN rely on the convolution of a source luminosity profile with the magnification approximation near fold caustics of equation (\ref{eq:std-approx}). \citet{1987A&A...171...49S} present details of the shapes such caustic events take in the lightcurves for two example luminosity profiles. We briefly examine here how higher order approximations to the magnification might affect these shapes. We take as our source a uniform circular disc with luminosity profile $L(x,y)=H(R^2-(x-x_s)^2-(y-y_s)^2)$, where $R$ is the radius of the source, $(x_s,y_s)$ is the center of the source, $H$ is the Heaviside step function, and $x$ and $y$ are axes in the source plane. 
We take our caustic to be $x=0$. We use the approximation for the magnification \begin{equation} \mu(x,y)=\mu_{minimum}+\mu_{saddle}=\frac{1}{a\sqrt{x}+bx}+\frac{1}{a\sqrt{x}-bx}. \end{equation} We assume the source crosses the caustic in the normal direction, and so we can further take $y_s=0$ for our source without loss of generality.\footnote{We have also ignored the additional flux from other existing micro-images. Such a term is generally assumed to be slowly-varying over the length scales of interest, contributing an additive constant that can be pulled out of the integrals.} Fig. \ref{fig:uniform_disk_setup} provides a visualization of this description. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figures/disk_setup.pdf} \caption{Visualization for the setup of a uniform disc crossing a fold caustic in the normal direction from outside (lower magnification, $m$ micro-images) to inside (higher magnification, $m+2$ micro-images).} \label{fig:uniform_disk_setup} \end{figure} The magnification of our source is then \begin{equation} \begin{aligned} &\frac{\iint L(x,y)\mu(x,y)dxdy}{\iint L(x,y)dxdy}= \\&\frac{1}{\pi R^2}\iint H(R^2-(x-x_s)^2-y^2)\cdot\Big(\frac{1}{a\sqrt{x}+bx}+\frac{1}{a\sqrt{x}-bx}\Big)dxdy \end{aligned} \end{equation}where the integrals are taken over the entire source plane. Integrating over $y$, we arrive at \begin{equation} \frac{2}{\pi R^2}\int_{Max(0,x_s-R)}^{Max(0,x_s+R)} \sqrt{R^2-(x-x_s)^2}\Big(\frac{1}{a\sqrt{x}+bx}+\frac{1}{a\sqrt{x}-bx}\Big)dx. \end{equation} With only the leading order approximation $\mu=2k/\sqrt{d}$ of eq. (\ref{eq:std-approx}), the caustic strength is simply an overall scale factor that can be pulled out of the integral, thus determining the maximum magnification and having nothing to do with the shape. That is no longer the case now however -- some assumption about the values of $R$, $a$, and $b$ must be made to proceed. 
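The remaining one-dimensional integral is straightforward to evaluate numerically once the integrable $1/\sqrt{x}$ singularity at the caustic is removed by the substitution $u=\sqrt{x}$. The following is a minimal sketch (ours; the function name is our own, and the parameter values are merely illustrative, in the spirit of image C):

```python
# Numerical sketch (ours) of the final one-dimensional integral.
# Substituting u = sqrt(x) removes the integrable 1/sqrt(x) singularity:
#   mu(x_s) = (2/(pi R^2)) * Int sqrt(R^2 - (u^2 - x_s)^2)
#                                * (2/(a + b*u) + 2/(a - b*u)) du,
# with u from sqrt(max(0, x_s - R)) to sqrt(max(0, x_s + R)).
import math

def disc_magnification(x_s, R, a, b, steps=2000):
    lo = math.sqrt(max(0.0, x_s - R))
    hi = math.sqrt(max(0.0, x_s + R))
    if hi <= lo:
        return 0.0                      # source entirely outside the caustic
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):              # midpoint rule suffices for a sketch
        u = lo + (i + 0.5) * h
        chord = math.sqrt(max(0.0, R ** 2 - (u ** 2 - x_s) ** 2))
        total += chord * (2.0 / (a + b * u) + 2.0 / (a - b * u)) * h
    return 2.0 * total / (math.pi * R ** 2)

# Illustrative values in the spirit of image C: a = sqrt(<a^2>), b = <b>.
a, b, R = math.sqrt(26.036), -13.031, 0.003
print(disc_magnification(0.5 * R, R, a, b))
```

The chosen distances keep $u$ well below the zero of $a+bu$, so both micro-image terms remain finite over the integration range.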
For the sake of our example, we take $a=\sqrt{\langle a^2\rangle}$ and $b=\langle b\rangle$ for the parameters of image C. We then choose three values of the source size, $R\in\{0.03\theta_E, 0.01\theta_E, 0.003\theta_E\}$. We can then plot magnification $\mu$ vs. source-caustic distance $x_s$, where we choose to scale the distance by the value of the source radius. The resulting magnification curves as a function of $\Tilde{x_s}=x_s/R$ for our various values of $R$ can be seen in Fig. \ref{fig:uniform_disk_magnification}. The higher value of $R=0.03\theta_E$ means that once inside the caustic, the source covers a significant portion of the region within the `effective' lifetime of the minimum, $d=\frac{\langle a^2\rangle}{\langle b\rangle^2}=0.153\theta_E$ for image C. It quickly annihilates, and the prohibitively large increase in magnification towards the end is a reminder of the failings of the approximation at larger distances. For the smaller values of $R$, there would be similar behavior at larger values of $\Tilde{x_s}$ outside our plotted range. However, the source has time to settle down in magnification before reaching these locations. We note that for our selected values of $a$ and $b$, the source never goes below a magnification of $1$, as is required since a micro-minimum is present. For the standard approximation however, given enough distance the approximation would provide a lower magnification than is allowed. In general, the higher order approximation provides a higher peak magnification than the old, though the difference becomes less noticeable as $R$ decreases. Additionally, with the approximation from eq. (\ref{eq:std-approx}) the magnification profile always peaks at the same value of $\Tilde{x_s}=2/3$ \citep{1987A&A...171...49S}. This is no longer the case for the higher order, as the peak occurs at a value of $\Tilde{x_s}>2/3$ and appears to approach $\Tilde{x_s}=2/3$ as $R$ decreases. 
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figures/uniform_disk_magnification.pdf} \caption{Magnification vs. source center distance from caustic for various values of $R$. Solid lines indicate the standard approximation of eq. (\ref{eq:std-approx}), while dashed lines include the next order term, eq. (\ref{eq:keeton_et_al_approx}). We have used mean values from image C, $a=\sqrt{\langle a^2\rangle}$ and $b=\langle b\rangle$. Compare with Fig. \ref{fig:mag_vs_distance_with_approx} where the solid red line signifies the standard approximation of equation (\ref{eq:std-approx}) and the solid green line includes the higher order correction eq. (\ref{eq:keeton_et_al_approx}).} \label{fig:uniform_disk_magnification} \end{figure} \section{Conclusions} \label{sec:conclusions} We have examined the magnifications of micro-images near fold caustics, and found that they differ from the inverse square root approximation typically used as the standard. We find that significant differences occur at distances $d$ equal to the square of the caustic strength $k$, with noticeable deviations appearing as early as $\log(d/k^2)=-1$. We have also presented statistics on the behavior of the lifetimes and lowest magnifications of the micro-minima in our simulations. Additionally, we provide probability distributions of the caustic strengths for the macro-images of QSO 2237+0305 (Huchra's lens). We have compared the actual magnifications of the micro-images in our simulations to the higher order approximations of \citet{2005ApJ...635...35K} and \citet{2011MNRAS.417..541A}. We find that including the next higher order terms can greatly reduce the error for small values of $d$, but provides little help in the regime where $\mu\approx 1$. We include statistics on values of parameters appearing in one such higher order approximation.
Additionally, we examine the effect that a higher order approximation has on the `shape profiles' for a source crossing a fold caustic. In general, the peak of the curve occurs at a larger source-caustic distance, and with a higher peak magnification. \section*{Acknowledgements} This work was supported by the MIT Undergraduate Research Opportunities Program and the Deutsch-Amerikanische Fulbright-Kommission. We thank the anonymous referee for their comments, which led to significant improvements in this paper. \bibliographystyle{mnras}
\section{Introduction} We provide a short overview of the systems used for describing fractals that are most similar to the one we are proposing. In the 1960s, Lindenmayer \cite{AL} designed a system to formally describe the development of simple organisms such as bacteria, later known as L-systems. Mathematicians have used L-systems to describe all kinds of fractals as sequences of tokens, mainly letters and mathematical signs. L-systems became prominent because the sequences could be fed into computers to generate fractal images. In the 1980s, Dekking \cite{Dekk0} gave a thorough mathematical treatment of the subject, for which he used sequences of letters, eventually with indices. Although Dekking handled a variety of fractals, and endomorphisms played a significant role in his theory, the frequent use of indexed letters complicated matters more than necessary or desirable. For example, in (4.9) he used two indices, each to indicate a special homomorphism. Not only was that hard to follow, it also led to a mistake in his start sequence. Arndt \cite{Arndt1} further extended Dekking's results from a later study (see \cite{Dekk1}). He used L-systems to investigate all fractal-generating sequences that obeyed specific criteria. He briefly surveyed different notations, such as numbers for directions or turns. Ventrella \cite{Ventrella}, who was not a mathematician but an artist drawn to the beauty of fractals, developed a method to describe what he refers to in \cite{VentrellaTree} as ``the taxonomy of fractals.'' He showed the substitution graphically and then transformed it into a matrix of $+1$ and $-1$. He also investigated fractals on the Gaussian and Eisenstein integers and discovered various new ones.\\ It is noteworthy that these three authors restricted their approaches almost entirely to two dimensions.
We now compare the descriptions mentioned above for the famous fractal, i.e., the Hilbert curve, as depicted in the original drawing in \Cref{fig:hilbert2}, published in 1891 \cite{Hilbert}. \begin{figure}[H] \centering \includegraphics[scale=0.65]{figuurHilbert.pdf} \caption{Hilbert's original drawings, the first three approximants of his curve.} \label{fig:hilbert2}\index{Hilbert curve!original drawings} \end{figure} \begin{itemize} \item Dekking writes in \cite[p.~91]{Dekk0} \emph{``Let $S=\{a,b,c,d\}$, let $\s$ be the automorphism of $S^*$ defined by $\s(a)=b, \s(b)=c, \s(c)=d, \s(d)=a$, and let $\tau$ be the \emph{reversal map} of $S^*$ defined by $\tau(s)=s, \tau(VW)=\tau(W)\tau(V)$ for $V,W\subset S^*$. Let the endomorphism $\Theta$ of $S^*$ be defined by $a\to baad$ and $\s\tau\Theta=\Theta \s\tau$.''} Then, he defines $f:S^* \to \ensuremath{\mathbb{R}}^2$ by $f(a):=(1,0):=-f(c), f(b):=(0,1)=:-f(d)$. It can be observed that he makes extensive use of homomorphisms. This leads to $abbcbaadbaadcdda$ for the second approximant and for the third to $baadabbcabbcdccb\;abbcbaadbaadcdda\;abbcbaadbaadcdda\;dccbcddacddabaad$ {(we inserted spaces between each of its quadrants)}. \item Arndt writes in \cite[p.~9]{Arndt1} \emph{``for the Hilbert curve, a possible L-system with axiom $L$ and (non-constant) maps \mbox{$L \mapsto +Rt-LtL-tR+$, $R \mapsto -Lt+RtR+tL-$} can be used (only $t$ corresponds to an edge).''} Therefore, this leads to $+(-Lt+RtR+tL-)t-(+Rt-LtL-tR+)t(+Rt-LtL-tR+)-t(-Lt+RtR+tL-)+$ and a much longer expression ($211$ tokens) for the third Hilbert approximant. 
\item Ventrella \cite[chap.~5]{VentrellaTree} constructed the Hilbert curve practically the same way as Dekking, thereby conjecturing that \emph{``every edge-replacement curve that permits monohedral tiling (i.e., all norms are identical) has an associated self-avoiding node-replacement curve, and the nodes correspond to the centers of the curves' tiles.''} \end{itemize} Therefore, this leaves us with two constructions, one with an L-system and another with Dekking's system. Notice that both sequences, as in Hilbert's original drawings, are such that the first approximant is not the beginning of the second, and the second is not the beginning of the third. However, in all three cases, each approximant starts with the one two levels before it: the third with the first, and the fourth with the second. We describe our construction of the Hilbert curve extensively in \Cref{sub:hlbrt2d} and its higher-dimensional analogs in \Cref{sec:highHlbrt}. We translate Dekking's parameters $a,b,c,d$ to $1,2,-1,-2$, indicating the directions $(1,0),(0,1),(-1,0),(0,-1)$. Therefore, Dekking's automorphism $\s$ becomes our rotation $\mu$ over $\pi/2$, denoted by \emph{signed permutation} (\Cref{sec:perms}) $\mu=[2,-1]$, with $\mu^3(1)=\mu^2(2)=\mu(-1)=-2$. Furthermore, we define $\iota=\s^0$ to be the identity. The signed permutation $\tau=[2,1]$ is the reflection across the line $y=x$; we do not use Dekking's reversal map $\tau$ in this example. Our construction implies that the substitution $T:H_k\mapsto H_{k+1}$ between two succeeding approximants is given by $H_{k+1}=T(H_k)=\Big(\tau(H_k),1,H_k,2,H_k,-1,-\tau(H_k)\Big)$ with $T\tau=\tau T$, and for $k\in \{1,-1,2,-2\}$, we get $T(k)=k$, following Hilbert's original drawing. We write $T(\iota)=(\tau,1,\iota,2,\iota,-1,-\tau)$. The accompanying sequences for the odd and even approximants are $\lr{2,1,-2,1,1,2,-1,2,1,2,-1,-1,-2,-1,2,\ldots}$ and $\lr{1,2,-1,2,2,1,-2,1,2,1,-2,-2,-1,-2,1,\ldots}$, respectively.
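The substitution is easy to verify mechanically. A short sketch (ours, in Python) that, starting from the first approximant $\lr{1,2,-1}$, reproduces the two quoted sequences as prefixes of successive images under $T$:

```python
# Sketch of the substitution described above.  tau = [2,1] swaps the two
# axes (1 <-> 2, -1 <-> -2), and
#   T(H) = (tau(H), 1, H, 2, H, -1, -tau(H)).
def tau(seq):
    swap = {1: 2, 2: 1, -1: -2, -2: -1}
    return [swap[k] for k in seq]

def T(H):
    t = tau(H)
    return t + [1] + H + [2] + H + [-1] + [-k for k in t]

H1 = [1, 2, -1]        # first approximant: right, up, left... rotated per drawing
H2 = T(H1)
H3 = T(H2)

# H2 reproduces the first quoted sequence, H3 begins with the second.
assert H2 == [2, 1, -2, 1, 1, 2, -1, 2, 1, 2, -1, -1, -2, -1, 2]
assert H3[:15] == [1, 2, -1, 2, 2, 1, -2, 1, 2, 1, -2, -2, -1, -2, 1]
```

Each application of $T$ maps a sequence of length $n$ to one of length $4n+3$ (four copies plus three connecting edges), as expected for a curve filling a $2^k\times 2^k$ block of cells.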
These sequences are not in the Online Encyclopedia of Integer Sequences (OEIS); henceforth, we will not point out such absences, and will only mention those fractal sequences that do occur in the OEIS. Thus, we specify the main objective of this study: \emph{to describe a system in which fractal curves in all dimensions are represented as signed number sequences.} \begin{remark} In this article, we use one numbering for figures, definitions, theorems, remarks, lemmas, observations, and examples, and another one for (sub)sections. But there is no rule without exception: once in a while, we will use a framed box of text that we think is important enough to call this block an \textbf{Intermezzo}. \end{remark} \section{Axiomatic description} \subsection{Digiset, sequences, substitution, and normalized} This section describes our application of sequence theory. Instead of the usual alphabetical sequences, we use \emph{signed integer sequences}. See \cite{allshal} for a thorough treatment of this subject. \begin{definition} A \textbf{digiset}\footnote{~In sequence theory, a digiset is called an \emph{alphabet}, and the sequences are called \emph{words}.} of \emph{size} $n$, for some $n\in \ensuremath{\mathbb{N}}, n>0$, is $\Delta_n=\{k|\, k\in \Z; 0<|k|\le n \}$, a subset of the integers, abbreviated to $\Delta_n=\big\{\pm 1,\pm 2,\ldots,\pm n\big\}$. The infinite digiset is $\Delta_{\infty}=\Z\setminus\!\{0\}$. For completeness, if we restrict ourselves to positive integers, the digiset is denoted by ${}^+\!\Delta_n$. \end{definition} \begin{definition} A \textbf{signed integer sequence}, also known as \emph{a sequence}, is a countable, ordered multiset with elements taken from a digiset $\Delta_n$. Let $\Delta_n^*$ constitute the set of all finite sequences, where each sequence is denoted by $S=\lr{s_1,s_2,\ldots}$ with $s_k\in \Delta_n$, i.e., with commas and angle brackets. In this notation, we represent the empty sequence by $\lr{}=\epsilon$.
We denote the \textbf{length} of a sequence by $\|S\|$ \footnote{~Contrary to $|S|$, which denotes the \emph{absolute} sequence $|S|=\Lr{|s_1|,|s_2|,\ldots}$.}, indicating its number of elements. For $k\ge 0$, let $\Delta_n^k=\big\{S\in \Delta_n^*;\, \|S\|=k\big\}$ be the set of sequences with length $k$, then $\Delta_n^*=\bigcup\limits_{k\ge 0}\Delta_n^k$. \end{definition} For clarity, we use commas to separate items within a sequence because the integers may contain a minus sign and more than one digit. \begin{observation} The set $\Delta_n^*$ is a \emph{monoid} with concatenation (of sequences) as multiplication, denoted by a \emph{comma}, and $\lr{\;}=\epsilon$ as the identity element. \end{observation} \begin{definition} A mapping $\phi:\Delta_n^*\to \Delta_n^*$ such that $\phi(S,T)=\big(\phi(S),\phi(T)\big)$, where $S,T\in \Delta_n^*$ is called a \emph{homomorphism}, or \textbf{morphism} for short. \end{definition} \begin{definition}\label{def:reverse} The \textbf{reverse}, denoted by $\ensuremath{\mathcal{R}}$, is a peculiar mapping $\ensuremath{\mathcal{R}}:\Delta_n^*\to \Delta_n^*$ because this mapping is an \emph{anti-homomorphism}, as we define $\ensuremath{\mathcal{R}}(S,T)=\big(\ensuremath{\mathcal{R}}(T),\ensuremath{\mathcal{R}}(S)\big)$ for $S,T\in \Delta_n^*$, and $\ensuremath{\mathcal{R}}\lr{x}=\lr{x}$ for $x\in \Delta_n$. \end{definition} There is a natural embedding of $\Delta_n$ into $\Delta_n^*$ by the injection $x\mapsto \lr{x}$; therefore, we identify $\Delta_n$ with $\Delta_n^1$. If $\alpha:\Delta_n\to \Delta_n^*$ is a mapping, then there is a natural extension to $\alpha^*:\Delta_n^*\to \Delta_n^*$ by $\alpha^*\lr{s_1,s_2,\ldots,s_k}=\lr{\alpha(s_1),\alpha(s_2),\ldots,\alpha(s_k)}$. We will, however, use $\alpha$ instead of $\alpha^*$. A bijection on $\Delta_n$ extends to a bijection on $\Delta_n^*$. 
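As a toy illustration (ours, with sequences modeled as Python lists), elementwise list reversal realizes the anti-homomorphism property of $\ensuremath{\mathcal{R}}$:

```python
# Minimal check (ours) that list reversal is an anti-homomorphism:
# R(S,T) = (R(T), R(S)), and R is an involution.
def reverse(seq):
    return seq[::-1]

S, T = [1, 2, -1], [2, -2]
assert reverse(S + T) == reverse(T) + reverse(S)   # anti-homomorphism
assert reverse(reverse(S)) == S                    # involution
```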
For a mapping $\alpha:\Delta_n\to \Delta_n$, its extension $\alpha:\Delta_n^*\to \Delta_n^*$ is \emph{length-preserving}, i.e., $\|\alpha(S)\|=\|S\|$. The identity on $\Delta_n$ we denote by $\iota$. The \emph{negation}, a rather important mapping, is indicated by $-\iota$ or in expressions only by its sign $-$. We use the \emph{inverse} for concatenation to transform $\Delta_n^*$ from a monoid to a \emph{free group}. \begin{definition} The \textbf{inverse} on $\Delta_n^*$ for concatenation, denoted by $S^{-1}$ for $S\in \Delta_n^*$, is defined as $(S,T)^{-1}=(T^{-1},S^{-1})$ for $S,T\in \Delta_n^*$, and $\lr{x}^{-1}=\lr{-x}$ for $x\in \Delta_n$. \end{definition} \begin{observation} We observe $S^{-1}=-\ensuremath{\mathcal{R}}(S)$ for $S\in \Delta_n^*$, where $\ensuremath{\mathcal{R}}$ is the reverse from \Cref{def:reverse}. Hence, we use $-\ensuremath{\mathcal{R}}$ as its inverse. The inverse is an anti-morphism because it is a combination of two mappings of which the reverse is an anti-homomorphism, and of which the negation is not. \end{observation} \begin{definition} A \textbf{substitution} is a morphism $T : \Delta_n^* \to \Delta_n^*$ that is \emph{expansive}, i.e., for all $x\in \Delta_n$, we have $\|T\lr{x}\| \ge 2$. Also, for every length-preserving morphism $\s$ and substitution $T$, we have $T\s=\s T$. \end{definition} We call a sequence \emph{normalized} if it shows a positive number from the digiset before its negation, and the positive numbers in the sequence occur in ascending order. More precisely: \begin{definition}\label{def:normalized} For a sequence $S=\lr{s_1,s_2,\ldots,}$ and for $0<k\in \Delta_n$, let $n_k$ be the lowest index $i$ such that $s_i = k$, and $n_k=\infty$ if $k\notin S$. This sequence is \textbf{normalized} if $|s_j|<k$ for all $1\le j<n_k$ and for all $0<k$. Therefore, for a normalized sequence, we have $n_1 = 1$, and $n_{k-1}<n_k$ for all $k > 1, k\in \Delta_n$. 
\end{definition} A \emph{finite} sequence can be normalized in two ways because its reverse can also be normalized, for instance, $\lr{1,2,1,1}$. In this case, we prefer the smaller of the two. Therefore, $\lr{1,1,2,1}$ will be the \emph{minimal normalized} version. See \Cref{df:seqsort} on page \pageref{df:seqsort} for how we order sequences. \subsection{Signed permutations}\label{sec:perms} \begin{definition} A \textbf{signed permutation} is a bijection $\s$ on a digiset $\Delta_n$ with the property that $\s(-k)=-\s(k)$ for $k\in \Delta_n$. \end{definition} Following Knuth in \cite{Knuth}, we use perm to denote a signed permutation. Note that a signed permutation is a length-preserving morphism on $\Delta_n^*$ and commutes with a substitution. In Cauchy's two-line notation\footnote{~\cite[p.~94]{Wussing}, ``Cauchy used his permutation notation -- in which the arrangements are written one below the other and both are enclosed in parentheses -- for the first time in 1815.''}, a permutation looks like $\begin{bmatrix} x & y & z & \cdots \\ a & b & c & \cdots \\\end{bmatrix}$, where the first row contains elements from the domain, and the second row contains their respective images. As a signed permutation satisfies the property $\sigma(-x)=-\sigma(x)$, we use the \emph{one-line notation} $\left[ \sigma(1), \sigma(2), \sigma(3),\ldots ,\sigma(n) \right]$, by which $\sigma$ is completely determined.\footnote{~Bj\"{o}rner et al. \cite[p.~246]{Bjorner2005} called this a \emph{window notation}. Section (8.1) of that work is devoted to the properties of signed permutations and their group.} Examples are the identity $\iota=[1,2,3,\ldots,n]$ and its negation $-\iota=[-1,-2,\ldots,-n]$. In mathematics, a function operates from right to left. For $[-2,4,-1,3]\lr{-3}$, the one-edge sequence $\lr{-3}$ is transformed into $\lr{1}$, the negative of the third value within the signed permutation. 
Therefore, if we must determine the product of two signed permutations, say, $\s=[\s(1),\s(2),\ldots]$ and $\tau=[\tau(1),\tau(2),\ldots]$, then \[\s\tau =\big[\sgn\big(\tau(1)\big)*\s\big(|\tau(1)|\big),\; \sgn\big(\tau(2)\big)*\s\big(|\tau(2)|\big),\;\ldots\big].\] For instance, $[-2,4,-1,3] [3,-1,4,-2]=[-1,-(-2),3,-(4)]=[-1,2,3,-4]$. We refer to the signed permutation $\mu=[2,3,4,\ldots,n,-1]$ as \emph{the minimal rotation}. If the digiset only has positive integers, the minimal rotation is $\mu=[2,3,4,\ldots,n,1]$. Generally, a signed permutation is defined as a signed binary matrix, i.e., with elements $0,1,\text{ and }-1$, where each row and column has only one element distinct from $0$. As $[-2,4,-1,3]$ indicates the images of the four unit vectors, the matrix becomes, \[[-2,4,-1,3]= \begin{pmatrix} 0&0&-1&0\\ -1&0&0&0\\ 0&0&0&1\\ 0&1&0&0 \end{pmatrix}.\] Therefore, in general, for the signed permutation $\s=[\s(1),\s(2),\ldots]$, its corresponding matrix $\big(\omega(i,j)\big)$ has $\omega(|\s(k)|,k)=\sgn\big(\s(k)\big)$ and $\omega(m,k)=0$ for $m\not=|\s(k)|$. From the matrix representation of signed permutations, we know the determinant equals $\pm 1$. The permutations with positive determinants are \emph{rotations}; those with negative determinants are \emph{rotation-reflections}, i.e., the combination of rotation about an axis and reflection in a plane perpendicular to that axis (\cite[p.84]{Salomon}). We now investigate whether a signed permutation contains a reflection by observing its one-line notation. \begin{definition} The \textbf{parity} of a signed permutation is equal to the determinant of the corresponding matrix, i.e., $-1$ if the mapping is a rotation-reflection and $+1$ if the mapping is a rotation. \end{definition} Let $\s=[\s(1), \s(2),\ldots,\s(n)]$ be a signed permutation. Then, we define $\ng1(\s)$ and $\inv(\s)$ by $\ng1(\s)=\big|\{1\le i\le n: \s(i) < 0\}\big|$ and $\inv(\s)=\big|\{(i, j): 1 \le i<j \le n, |\s(i)| > |\s(j)|\}\big|$. 
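The product formula and the quantities $\ng1$ and $\inv$ just defined can be checked mechanically. A minimal sketch (ours), with one-line notation modeled as a Python list so that position $i$ holds $\s(i)$:

```python
# Sketch (ours) of signed permutations in one-line notation:
# perm[i-1] = sigma(i), with sigma(-k) = -sigma(k) implied.
def apply(sigma, k):
    return sigma[k - 1] if k > 0 else -sigma[-k - 1]

def compose(sigma, tau):
    # (sigma tau)(k) = sigma(tau(k)), matching the product formula above
    return [apply(sigma, t) for t in tau]

def parity(sigma):
    # (-1) to the power (number of minus signs + number of inversions)
    neg = sum(1 for v in sigma if v < 0)
    inv = sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
              if abs(sigma[i]) > abs(sigma[j]))
    return (-1) ** (neg + inv)

# The worked example from the text:
assert compose([-2, 4, -1, 3], [3, -1, 4, -2]) == [-1, 2, 3, -4]
# [-2,4,-1,3] has two minus signs and three inversions: a rotation-reflection.
assert parity([-2, 4, -1, 3]) == -1
# The quarter-turn mu = [2,-1] from the Hilbert example is a rotation.
assert parity([2, -1]) == 1
```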
\begin{theorem} $\pr(\s)=(-1)^{\ng1(\s)+\inv(\s)}$. \end{theorem} \begin{proof} We know that the number of inversions, $\inv(\s)$, is equal, modulo $2$, to the number of transpositions of two values in the one-line notation. Such a transposition, say of $\s(i)$ and $\s(j)$, corresponds to swapping the two columns $i\text{ and }j$ in the corresponding matrix, which leads to an additional factor of $-1$ in the determinant. Multiplying a value in the one-line notation by $-1$ equals multiplying the corresponding column in the matrix by $-1$. Therefore, if we add the inversions and the minus signs in the one-line notation, we get the exponent of $-1$ in the determinant. \end{proof} A sequence is not generally normalized (\Cref{def:normalized}), but we can easily construct a signed permutation that transforms such a sequence into an isomorphic, normalized one. For this, we need the first occurrence of $|s_i|=k$ in the sequence for each $0<k\in\Delta_n$, along with the $\sgn(s_i)$ of that first occurrence. These first occurrences for all $0<k\in \Delta_n$, in that order, constitute the inverse of the characteristic permutation of the original sequence. \begin{definition}\label{def:char_perm} Given a sequence $S=\lr{s_1,s_2,\ldots}$ over $\Delta_n$, define $i_k$ for $1\le k\le n$ such that $i_1=1$ and for all $k$ one has $\big\{|s_j|;1\le j\le i_k\big\}=\big\{|s_{i_1}|,|s_{i_2}|,\ldots,|s_{i_k}|\big\}$. Then, the \textbf{characteristic permutation} $\s$ is determined by $\s=\begin{bmatrix} s_{i_1} & s_{i_2} &\cdots& s_{i_n} \\1 & 2 &\cdots& n \\\end{bmatrix}$, or $\s^{-1}=[s_{i_1},s_{i_2},\ldots,s_{i_n}]$. \end{definition} \begin{definition} A \textbf{constant substitution} $\gamma^*:\Delta_n^*\to \Delta^1_n$ is a mapping such that for all $S\in\Delta_n^*$, we have $\gamma^*(S)=\lr{c}$ for a fixed $c\in\Delta_n$.
A \textbf{constant morphism} $\gamma:\Delta_n\to \Delta_n$ is such that for all $x\in\Delta_n$, we have $\gamma(x)=c$. \end{definition} Henceforth, we use $\Delta$ instead of $\Delta_n$ if $n$ is evident from the context. \subsection{Fractals and approximants} Fractals are difficult to define. Mandelbrot, who coined the term in \cite{Mbrt}, described them as ``\emph{(...)~a rough or fragmented geometric shape that can be split into parts, each of which is - at least approximately - a reduced-size copy of the whole.}'' Falconer \cite{Falc} stated ``\emph{My personal feeling is that the definition of a `fractal' should be regarded in the same way as a biologist regards the definition of `life'.}'' He refers to a fractal as an object with five different, explicit properties, not all of which he describes with sufficient precision. Finally, Kimberling \cite{Kimb} stated ``\emph{A search of \cite{OEIS} for `fractal sequence' reveals that in recent years, different kinds of sequences have been called `fractal' and what many of them have in common is that they are SCSs.}'' (= Self-Contained Sequences). We define a fractal as the limit of finite sequences of increasing length, analogous to an infinite sequence.
Notice that $\Delta^{\ensuremath{\mathbb{N}}}$ is the set of (right-)\textbf{infinite} sequences. We generally identify a fractal using one of its approximants. A fractal is called \label{def:slfsmlr} \textbf{self-similar} if $f(k,j)=j$ and $g(k,j)=k$ for all $k>0$ and all $j=1,2,\ldots,r$. Henceforth, we assume all our curves to be self-similar. Furthermore, without loss of generality, we assume the identity $\alpha_1=\iota$ such that $S=\big(S_k,\ldots\big)$ for $k>0$. \end{definition} A substitution $T:\Delta^*\to\Delta^*$ implies a dual substitution ${}^m T:\Omega\to\Omega$, by ${}^m T(\s)=T\; \s$ for $\s\in \Omega$, which is the group of morphisms on $\Delta^*$. For a self-similar fractal, we write $T\big(S_k\big)=\big(S_k, \alpha_2(S_k), \ldots, \alpha_r(S_k)\big)$ and denote this by ${}^m T=[\iota, \alpha_2,\ldots,\alpha_r]$. \subsection{Grid, direction, and isometry} \begin{definition} Let $\big\{u(1),u(2),\ldots,u(n)\big\}\subset \ensuremath{\mathbb{R}}^d$ be a set of vectors\footnote{~We identify points, vertices, and vectors in $\ensuremath{\mathbb{R}}^d$.}, where $d\le n$ is a dimension such that the vectors span $\ensuremath{\mathbb{R}}^d$. Furthermore, we have $u(j)\not= \alpha\ast u(i)$ for all $\alpha \in \ensuremath{\mathbb{R}}$ and all $1\le i\not= j\le n$, i.e., every pair of vectors is independent. The set $\Gamma_n=\left\{\sum\limits_{i=1}^n k_i\ast u(i) \right\}$ with $k_i\in \ensuremath{\mathbb{R}}$ and $|\{k_i\notin \Z\}|\le 1$, is called a \textbf{grid}. A grid has $2n$ \textbf{directions}, i.e., its generators and their negations $\{\pm u(k)\}$. Each direction with its opposite forms a \textbf{dimension}. The generators of a grid $\Gamma_n$ relate to a digiset $\Delta_n$ by the mapping $u(k)\mapsto \lr{k}$. \end{definition} Using the relation between the generators of the grid and the digiset, we have an association between \emph{number sequences and fractal images}, i.e., subsets of the grid, as the title of this study suggests. 
\begin{example} The most crucial grid we encounter is the cubic grid $\Z^d$ for $1\le d$, with the $2d$ directions $\lr{\pm 1},\lr{\pm 2},\ldots,\lr{\pm d}$. \begin{figure}[H] \centering \includegraphics[scale=1.5]{triangular_squarediag_grid.pdf} \caption{Triangular and square-diagonal grids, with directions indicated by integers.} \label{fig:triangulargrid1} \end{figure} The triangular grid shown in \Cref{fig:triangulargrid1} is also essential. Notice that a grid can have more generators than its dimension, as observed in the triangular and square-diagonal grids. We determine a grid using a matrix in which the columns represent the generators in the appropriate order. The two grids in \Cref{fig:triangulargrid1} are given by $\begin{pmatrix} 1 & \frac{1}{2} & -\frac{1}{2} \\ 0& \frac{1}{2}\sqrt{3} & \frac{1}{2}\sqrt{3}\\ \end{pmatrix}$ and $\begin{pmatrix} 1 & 1 & 0 & -1\\ 0 & 1 & 1 & 1 \\ \end{pmatrix}$. Contrary to a \emph{point lattice}, which consists of only vertices, a grid is a \emph{graph} with vertices and edges. To the best of our knowledge, in higher dimensions, only the cubic grid based on the cubic lattice is significant. \end{example} \begin{example} The following grids are associated with the triangular and square-diagonal grids: the \emph{tri-hexagonal grid}, \emph{hexagonal} (honeycomb) grid, and \emph{truncated square} grid, see \Cref{fig:hc_trsq_grd}. \begin{figure}[H] \centering \includegraphics[scale=.7]{honeycomb_truncsquare_grid.pdf} \caption{Tri-hexagonal, Honeycomb, and Truncated square grids.} \label{fig:hc_trsq_grd} \end{figure} In these grids, we use directions from the triangular and square-diagonal grids, but with restrictions, as displayed in \Cref{fig:hc_trsq_grd_2} \begin{figure}[H] \centering \includegraphics[scale=0.60]{hexagonal___octogonal_grid_1.pdf} \caption{Successor directions in the tri-hexagonal, honeycomb, and truncated square grids. 
The vertices indicate the incoming directions.} \label{fig:hc_trsq_grd_2} \end{figure} In \Cref{fig:hc_trsq_grd_2}, the edges that can follow a specific direction in the central polygon are shown. In the tri-hexagonal grid, three directions correspond to one direction. By contrast, there are two possible directions at a vertex of the honeycomb or the truncated square grid. \end{example} Generally, a \emph{fractal} is represented as a geometrical figure, where the approximants undergo shrinking, such as $F_{n+1}=\varphi\ast T(F_n)$ with $0<\varphi<1$ and an (expanding) substitution $T$. Therefore, the fractal is defined as $F=\lim\limits_{n\to\infty}F_n$. Our infinite sequence, with the integers interpreted as directions in a grid, becomes a geometrical object of infinite size, where the size or length of each approximant is less than that of the next one. In our approach, the approximants of a fractal are \textbf{paths}, i.e., directed graphs with all vertices of degree two, except for the first and the last vertex, which have degree one. Therefore, an \textbf{entry (point)} is the first vertex on the path, denoted by $\circ$; an \textbf{exit (point)} is the last vertex on the path, denoted by $\bullet$. A fractal, being an infinite limit of finite paths, has a single entry. Therefore, we refer to a fractal as a \textbf{curve}, and replace the term ``approximant'' with \textbf{$k$-curve} or \emph{curve of level $k$}. We define a vertex as the $0$-curve, or a curve without edges \cite{allshal}, denoted by $\epsilon=\lr{}$. A majority of the curves are generated from a substitution that uses the $1$-curve, i.e., the first approximant, as the start. \begin{definition}\label{def:orientation} The \textbf{orientation} of a $k$-curve is the vector from its entry to its exit. \end{definition} \begin{remark} A sequence represents subsequent edges, indicating graph-wise a difference between a single vertex and two successive edges in opposite directions.
Hence, we do \emph{not} use annihilation, i.e., \emph{not} identify $\lr{}$ with $\big(S,-\ensuremath{\mathcal{R}}(S)\big)$ or $\lr{a,-a}$. \end{remark} \begin{definition} An \textbf{isometry} is a distance-preserving transformation on the grid. If necessary, we denote two isometric sets by $A\cong B$. \end{definition} Notice that an isometry also \emph{preserves the angles} between vectors. Whether a signed permutation is an isometry depends on the lengths of the generators; consider, for instance, the square-diagonal grid in \Cref{fig:triangulargrid1}. The inverse $-\ensuremath{\mathcal{R}}$ of a (finite) path is the path with entry and exit swapped, the ordered multiset of edges reversed, and the direction of each edge reversed. The reverse $\ensuremath{\mathcal{R}}$ only reverses the multiset of edges; hence, the entry and exit remain fixed. Therefore, if $S=\lr{e_1,e_2,\ldots,e_n}$ are the edges of a path, then $-\ensuremath{\mathcal{R}}(S)=\lr{-e_n,\ldots,-e_2,-e_1}$ and $\ensuremath{\mathcal{R}}(S)=\lr{e_n,\ldots,e_2,e_1}$. See Intermezzo 1 on page \pageref{int:intermezzo1}. \subsection{Representing fractals uniquely} One of our objectives is to set up an encyclopedia of normalized fractals as an independent set. However, such an encyclopedia could partially be considered as a subset of \href{https://oeis.org/}{OEIS}. We order this list of normalized sequences according to \Cref{df:seqsort} on page \pageref{df:seqsort}. We give the sequence, its index from \href{https://oeis.org}{OEIS}, and describe the digiset, start sequence, substitution, and grid with its generators, and finally the figure of the geometric fractal, similar to the example below of the first sequence (\Cref{ss:Dkkng Flwsnk}).
\begin{description} \setlength{\itemsep}{0ex} \item[B.3 Dekking's Flowsnake] \item[sequence:] $\lr{1,1,2,-1,2,1,2,-1,-1,2,1,1,1,2,1,-2,-2,-1,-2,-2,1,2,1,-2,-2,1,\ldots}$ \item[in \href{https://oeis.org}{OEIS}:] Not present (18-01-2022) \item[digiset:] $\Delta=\{1,2\}$ \item[start sequence:] $\lr{1}$ \item[substitution:] $T(\iota)=(\iota,\iota,\mu\tau,-\tau,\mu,\iota,\mu\tau,-\tau,-\iota,\mu\tau,\iota,\iota,\tau,\mu,\tau,-\mu,-\mu,-\tau,-\mu,-\mu\tau,\tau,\mu,\\ \iota,-\mu\tau,-\mu\tau)$, where $\tau=[1,-2]$ and, as usual, $\mu=[2,-1]$ \item[grid:] The planar square grid. \item[generators:] $(1,0); (0,1)$ \item[figure:] The $1$-curve (first row left), two $2$-curves (anti-diagonal) and the $3$-curve (last row right). \end{description} \begin{figure}[H] \begin{center} \includegraphics[scale=0.31]{Dekking_flowsnake_2.pdf} \caption{\small{The first row shows the $1^{\text{st}}$ and $2^{\text{nd}}$ approximants. The second row shows the $2^{\text{nd}}$ and $3^{\text{rd}}$ approximants, separating the space into two parts, black and white, both with tree-like structures.}} \end{center} \end{figure} \section{Examples} The following examples highlight different aspects of representing fractals using signed sequences. We also investigate the relationship between sequences and their geometric pictures. \subsection{Ventrella's Box 4}\label{sub:box4} Suppose we have a fractal whose $1$-curve is identical to the first image in \Cref{fig:ventrllflags}, under ``identity.'' If this is the image under the substitution of a horizontal unit line segment, then we investigate whether the images of the other line segments are horizontal or vertical. We can choose the transformations of the first curve, with similar entry and exit, as in the rest of \Cref{fig:ventrllflags}.
\begin{figure}[H] \centering \includegraphics[scale=1.2]{Ventrella_s_flags.pdf} \caption{Different directions of a $1$-curve.} \label{fig:ventrllflags} \end{figure} Ventrella suggested a flag-like arrow to indicate differently oriented edges, which led to various images of those edges. We placed the flag at the center, and created the drawings in \Cref{fig:ventrllflags2}, which are consistent with those in \Cref{fig:ventrllflags}. \begin{figure}[H] \centering \includegraphics[scale=1.15]{Ventrella_s_flags2.pdf} \caption{Ventrella's flags shifted to the center, and their isometries.} \label{fig:ventrllflags2} \end{figure} In \Cref{fig:otherflags}, we observe the same figures as in \Cref{fig:ventrllflags} and \Cref{fig:ventrllflags2}, with their corresponding transformations, but with the directions swapped, as well as the entry and exit. This completes the list of all the isometries of the original figure, except the (infinite number of) rotations. \begin{figure}[H] \centering \includegraphics[scale=1.2]{other_flags.pdf} \includegraphics[scale=1.15]{Ventrella_s_flags3.pdf} \caption{Drawings similar to \Cref{fig:ventrllflags} and \Cref{fig:ventrllflags2}, with other isometries swapping entry and exit.} \label{fig:otherflags} \end{figure} \noindent\fbox{\parbox{0.98\textwidth} {\textbf{Intermezzo 1}\label{int:intermezzo1} \begin{minipage}[b]{7cm} There is an issue in terms of the difference between reverse, negation, and rotation over $\pi$, since we can swap the order of the edges, swap the direction of each edge, in which case the entry and exit are swapped as well, or both. In \Cref{fig:ventrllflags3}, we illustrate different ways to revert a directed graph with entry and exit. Both $\ensuremath{\mathcal{R}}\!=\text{\sl reverse}$ and $-\iota=\text{\sl negate}$ produce a rotation over $\pi$. The former preserves, whereas the latter swaps directions, including those of entry and exit.
\end{minipage} \begin{minipage}[b]{8cm} \begin{figure}[H] \centering \includegraphics[scale=1.1]{rotate___variants.pdf} \caption{Rotate by reverse or negate.} \label{fig:ventrllflags3} \end{figure} \end{minipage} $-\ensuremath{\mathcal{R}}$ is the {\sl in}verse, which annihilates the original by swapping everything: the order of the edges, the directions of the edges, and the entry and exit. When a rotated $k$-curve is used in the build-up of the next version, the order of the edges can be reversed without swapping the entry and exit, but one can also switch the edges themselves. We refrain from using the phrase ``rotate over $\pi$,'' and use one of the different formulas $-\iota$ or $\ensuremath{\mathcal{R}}$ for the sake of clarity. }} We can decorate the original $1$-curve by choosing, for each edge, one of the flags from \Cref{fig:ventrllflags2} or \Cref{fig:otherflags}, to determine in which image of the $1$-curve this edge can be substituted. The (open) question remains: how many different, i.e., non-isometric, curves can be constructed using only normalized curves? Ventrella \cite{VentrellaTree} studied a fractal called ``Box 4,'' clearly indicated by his flags, conforming to the center drawing of the first row of \Cref{fig:ventrll1crvs}, where its $2$-curve is at the right-hand side. We added the first two columns, ``integers'' and ``transformations,'' using his notation, and then a column using our notation, with $\ensuremath{\mathcal{R}},\tau_y,\text{ and }\mu$, as in \Cref{fig:ventrllflags}, \Cref{fig:ventrllflags2}, and \Cref{fig:otherflags}. In the second row, using our more informative flags, we transformed his curves by normalizing and \emph{extending}\label{df:extending}, such that $T(S_k)=S_{k+1}=\big(S_k,S'_k\big)$ for some sequence $S'_k$.
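The distinction drawn in Intermezzo 1 between the reverse $\ensuremath{\mathcal{R}}$, the negation $-\iota$, and the inverse $-\ensuremath{\mathcal{R}}$ can be sketched as follows; this is an illustration with function names of our own choosing, representing paths as lists of signed integers.

```python
# Sketch of the three ways to "rotate" a signed path over pi
# (cf. Intermezzo 1): the reverse R keeps each edge direction but
# reverses their order; the negation -iota keeps the order but flips
# every edge; the inverse -R does both and annihilates the original.

def reverse(seq):            # R: <e1,...,en> -> <en,...,e1>
    return seq[::-1]

def negate(seq):             # -iota: <e1,...,en> -> <-e1,...,-en>
    return [-e for e in seq]

def inverse(seq):            # -R: <e1,...,en> -> <-en,...,-e1>
    return negate(reverse(seq))

S = [1, 2, 1, -2]            # a small path on the square grid
print(reverse(S))            # [-2, 1, 2, 1]
print(negate(S))             # [-1, -2, -1, 2]
print(inverse(S))            # [2, -1, -2, -1]
```

Note that `reverse` and `negate` commute, and that applying `inverse` twice returns the original path.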
\begin{figure}[H] \centering \includegraphics[scale=1.0]{Ventrella_s_1-curves_b.pdf} \caption{Top row: Ventrella's notation and his flagged $1$-curve; bottom row: our adaptation.} \label{fig:ventrll1crvs} \end{figure} \begin{list}{}{ \setlength{\leftmargin}{0mm} \parsep\parskip \setlength{\itemsep}{1mm} } \item We can perform this transformation in various ways. First, we can take Ventrella's substitution $V(\iota)=\big(\tau_d\ensuremath{\mathcal{R}},\iota,-\mu\ensuremath{\mathcal{R}},\tau_y\big)$\footnote{~In \Cref{sc:Cayley}, we described the group of isometries of the square grid with signed permutations like $\tau_d=[2,1]$, the diagonal reflection, and $\tau_{-d}=[-2,-1]$, the anti-diagonal reflection.}, and apply the transformations $\ensuremath{\mathcal{R}}$ and $\tau_{-d}$, which produces our substitution $T(S_k)=\big(S_k, \tau_{-d}\ensuremath{\mathcal{R}}(S_k), \tau_y(S_k), \mu\ensuremath{\mathcal{R}}(S_k)\big)$, abbreviated to $T(\iota)=\big( \iota, \tau_{-d}\ensuremath{\mathcal{R}},\tau_y,\mu\ensuremath{\mathcal{R}}\big)$. \item Second, we can use the $2$-curve (right-hand side of the first row in \Cref{fig:ventrll1crvs}) to determine the four transformations of the $1$-curve from which we construct the $2$-curve. First, we normalize Ventrella's $2$-curve by applying $\ensuremath{\mathcal{R}}$, and setting $\lr{1,2,1,-2}$ for the $1$-curve and $\lr{1,2,1,-2,1,-2,-1,-2,1,-2,1,2,1,2,-1,2}$ for the $2$-curve (right-hand side of the second row in \Cref{fig:ventrll1crvs}). We partition the $2$-curve into disjoint subcurves of length four and determine which isometric images of the $1$-curve they are, the first one being $\iota$.
The other images are $\lr{1,-2,-1,-2}=-\mu\lr{2,1,-2,1}=-\mu\tau_y\lr{-2,1,2,1}=\tau_{-d}\ensuremath{\mathcal{R}}\lr{1,2,1,-2}$, then $\lr{1,-2,1,-2}=\tau_y\lr{1,2,1,-2}$, and finally $\lr{1,2,-1,2}=\mu\ensuremath{\mathcal{R}}\lr{1,2,1,-2}$, which yields the substitution of isometries $T(\iota)=\big(\iota,\tau_{-d}\ensuremath{\mathcal{R}},\tau_y,\mu\ensuremath{\mathcal{R}}\big)$, the same as we obtained before. \item Finally, we construct higher-level approximants for Box 4 of Ventrella to find an easy substitution. This construction is done using our normalized version, starting with $\lr{1, 2, 1,-2}$; then, the final sequence becomes\\ $\lr{1,2,1,-2,1,-2,-1,-2,1,-2,1,2,1,2,-1,2,1,-2,1,2,1,2,-1,2,-1,-2,-1,2,-1,2\ldots}$.\\ If we group this sequence into disjoint groups of four elements and match them with the corresponding first items of the sequence, we get the following substitution: \begin{equation*}T'=\begin{cases} 1 &\to \lr{ 1 , 2 , 1',-2'}\\ 1'&\to \lr{ 1',-2', 1 , 2 } \\ 2 &\to \lr{ 1',-2',-1 ,-2 } \\ 2'&\to \lr{-1 ,-2 , 1',-2'}\\ \end{cases}\end{equation*} For this, we need $\{\pm 1, \pm 2\}$ and $\{\pm 1', \pm 2'\}$, where $\lr{x}$ and $\lr{x'}$ indicate the same direction. As usual, $T'(-x)=-T'(x)$ for $x\in\{\pm 1, \pm 2, \pm 1', \pm 2'\}$. To make this substitution work, we need a permutation $\s$ such that if $\s(1)=2$, we have $\s\lr{1,2,1',-2'}=\lr{1',-2',-1,-2}$. Therefore, we define $\tau_{-d}'=\begin{bmatrix} 1&2&1'&2'\\-2&-1&-2'&-1'\end{bmatrix}$, then $\tau_{-d}'\ensuremath{\mathcal{R}}\lr{1,2,1',-2'}=\tau_{-d}'\lr{-2',1',2,1}=\lr{1',-2',-1,-2}$.
We further define the vertical reflection $\tau_y'=\begin{bmatrix} 1&2&1'&2'\\1'&-2'&1&-2\end{bmatrix}$, the minimal rotation $\mu'=\begin{bmatrix} 1&2&1'&2'\\2'&-1'&2&-1\end{bmatrix}$, and obtain $T'=(\iota,\tau'_{-d}\ensuremath{\mathcal{R}}, \tau'_y,\mu'\ensuremath{\mathcal{R}})$ or \[T'(1)=\lr{1,2,1',-2'}=\big(\iota(1),\tau'_{-d}\ensuremath{\mathcal{R}}(1), \tau'_y(1),\mu'\ensuremath{\mathcal{R}}(1)\big).\] \item Finally, we introduce an obstacle that did not occur in Ventrella's approach. We not only want our sequences to be normalized but also to be ``extending'' (p.~\pageref{df:extending}). If we apply our substitution $T(\iota)=\big(\iota,\tau_{-d}\ensuremath{\mathcal{R}},\tau_y,\mu\ensuremath{\mathcal{R}}\big)$ to the $0$-curve, which is edge $\lr{1}$, we obtain the $1$-curve $\lr{1,-2,1,2}$, which is not the start of the $2$-curve. Fortunately, we can resolve this by applying an additional $\tau_y$ after the substitution to get a $k$-curve, where $k$ is odd. Therefore, we get $S_{k}=\tau_y^k T(S_{k-1})$ because $\tau_y^2=\iota$. \Cref{fig:box4crvs} shows a few approximations of the normalized and extending Box 4 fractal. \end{list} \begin{figure}[H] \centering \includegraphics[scale=1.05]{V_s_2curves.pdf} \caption{$k$-curves for the Box 4-fractal with $k=3,4,5$, all with rounded corners.} \label{fig:box4crvs} \end{figure} \begin{remark} The advantage of a \emph{transformation} substitution over a \emph{number} substitution is that we can observe the different transformations involved from one approximant to the next. A second observation is that the group of signed permutations in $n$ dimensions is the hyper-octahedral group of order $(2n)!!=2^n n!$. In two dimensions, this group is generated by, among others, the minimal rotation $\mu=[2,-1]$ and the vertical reflection $\tau_y=[1,-2]$, which we use for the square grid throughout this study. See \Cref{sc:Cayley} for the dihedral group D4 of transformations of the square grid.
\end{remark} \subsection{Ventrella's V1 Dragon} This example uses the square-diagonal grid, which is peculiar because not all directions have the same lengths; refer to the grid on the left-hand side in \Cref{fig:sqrdiag8roots}. On comparing the two grids, the $8^\text{th}$ roots of unity span the right one, where all directions have equal lengths. \begin{figure}[H] \centering \includegraphics[scale=1.2]{squarediag_8th_roots_grid.pdf} \caption{Square-diagonal and $8^\text{th}$-roots grids.} \label{fig:sqrdiag8roots} \end{figure} The minimal rotation over $\pi/4$ in both grids is given by $\mu=[2,3,4,-1]$, whereas the vertical reflection is $\tau_y=[1,-4,-3,-2]$. {As the directions are mutually dependent, the two signed permutations do not generate the entire hyper-octahedral group of four dimensions but generate the symmetry group of the octagon, i.e., the dihedral group $D_8$.} We consider another example of Ventrella \cite{VentrellaTree}, called the ``V1 Dragon.'' We slightly altered his sample to make it normalized and extending. In \Cref{fig:V1drgn}, we observe its $1$- and $2$-curves. \begin{figure}[H] \centering \includegraphics[scale=0.8]{Ventrella_V1_dragon.pdf} \caption{$1$- and $2$-curves of Ventrella's V1 Dragon.} \label{fig:V1drgn} \end{figure} Similar to the previous \Cref{sub:box4}, we can ``read'' the transformations involved from the first picture by using only reverse $\ensuremath{\mathcal{R}}$ and rotation $\mu$ (and the identity $\iota$). Therefore, $T(\iota)=\big(\iota, \ensuremath{\mathcal{R}}\mu^2, \sqrt{2}\ast\mu^3 \big)$. Here, we notice an important difference from previous fractals, which only had edges of length one. Apart from the transformations $\mu$ and $\ensuremath{\mathcal{R}}$, we multiplied the length by $\sqrt{2}$.
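The claim that $\mu$ and $\tau_y$ generate only the dihedral group $D_8$ of the octagon (16 elements), rather than the full four-dimensional hyper-octahedral group of order $2^4\,4!=384$, can be verified by a closure computation. The following sketch, with names of our own choosing, is illustrative only.

```python
# Closure of the signed permutations mu = [2,3,4,-1] and
# tau_y = [1,-4,-3,-2] under composition.  Because the four directions
# of the square-diagonal grid are mutually dependent, these two
# generators yield the dihedral group D_8 (16 elements) instead of
# the full hyper-octahedral group of order 2^4 * 4! = 384.

def compose(p, q):
    """(p o q)(x) = p(q(x)) for signed permutations of {±1,...,±n}."""
    def act(p, x):
        return p[x - 1] if x > 0 else -p[-x - 1]
    return tuple(act(p, act(q, i)) for i in range(1, len(p) + 1))

mu    = (2, 3, 4, -1)    # minimal rotation over pi/4
tau_y = (1, -4, -3, -2)  # vertical reflection

group = {mu, tau_y}
while True:
    new = {compose(p, q) for p in group for q in group} - group
    if not new:
        break
    group |= new

print(len(group))        # 16
```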
\begin{figure}[H] \centering \includegraphics[scale=0.7]{Ventrella_V1_dragon_rt8.pdf} \caption{Same $1$- and $2$-curves on the $8^{\text{th}}$-root grid.} \label{fig:V1drgnrt8} \end{figure} As the edge lengths of a fractal sequence cannot be determined from the directions in its representation, we introduce a separate sequence of lengths using its \emph{length substitution}, which, completely independent of the \emph{directional substitution}, determines the geometrical fractal from both series. If we draw the same sequence in the $8^{\text{th}}$-roots grid, we have almost the same fractal, but with edges of length one, as shown in \Cref{fig:V1drgnrt8}. As this grid is dense in $\ensuremath{\mathbb{R}}^2$, the geometric picture is less pleasing than the one in the square-diagonal grid. \begin{figure}[H] \centering \includegraphics[scale=0.5]{Two_V1_Dragon_4_curves.pdf} \caption{$4$-curves on the $8^{\text{th}}$-roots and the square-diagonal grids.} \label{fig:V1drgns} \end{figure} In \Cref{fig:V1drgns}, we see the $4$-curves of Ventrella's V1 dragon, with the upper one on the $8^{\text{th}}$-roots grid and the lower one on the square-diagonal grid. Evidently, the lower one is larger because some of the edges have grown in length, and the vertices of this curve are on the lattice $\Z^2$. By contrast, the upper curve has edges that \emph{partially} overlap, which, to the best of our knowledge, is unseen in geometrical fractals. A fractal curve only shares vertices with itself, or edges, or neither of the two. As we are more interested in fractals as number sequences than as geometrical figures, we have the same sequence for directions in both grids, and a length sequence in the case of the square-diagonal grid. Therefore, we first split the substitution $T$ into two: $_dT$ for the directions and $_lT$ for the lengths, which lead to the sequences $_dS$ and $_lS$, respectively.
Thus, we get \begin{equation*} T(\iota)=\big(\iota, \ensuremath{\mathcal{R}}\mu^2, \sqrt{2}\ast\mu^3 \big) \equiv \begin{cases} _dT(\iota)=\big(\iota, \ensuremath{\mathcal{R}}\mu^2, \mu^3 \big)\\ _lT(\iota)=\big(\iota, \ensuremath{\mathcal{R}}, \sqrt{2} \big) \end{cases} \end{equation*} Starting with $_xS_0=\lr{1}$ for both $x=d,l$, this leads to the sequences \\ $_dS=\lr{1,3,4,-2,-1,3,4,-2,-3,1,-4,-2,-1,-3,-4,-2,-1,3,4,-2,-3,1,-4,-2,\ldots}$\\ and\\ $_lS=\lr{1,1,\sqrt{2},\sqrt{2},1,1,\sqrt{2},\sqrt{2},2,2,\sqrt{2},\sqrt{2},1,1,\sqrt{2},\sqrt{2},1,1,\sqrt{2},\sqrt{2},2,2,\sqrt{2},\sqrt{2},2,2,\ldots}$. The latter can be simplified by taking the logarithm to base $\sqrt{2}$, which gives\\ $\log_{\sqrt{2}}\left(_lS\right)=\lr{0,0,1,1,0,0,1,1,2,2,1,1,0,0,1,1,0,0,1,1,2,2,1,1,2,2,3,3,2,2,1,1,2,2\ldots}$, which is the double \big(i.e., $\lr{x,x}$ versus $\lr{x}$\big) of $A062756$ in \cite{OEIS}. $_dS$ is not normalized because the numbers $3$ and $4$ are used before $2$, as observed from the first directions with different axes in our definition of V1 dragon, which are $1,3,4,-2$. Refer to \Cref{fig:V1drgn} or \ref{fig:V1drgnrt8}. For this case, we have the characteristic permutation (cf.~\Cref{def:char_perm}), which happens to be $[1,-4,2,3]$ and brings the sequence back to:\\ $_dS'=\lr{1,2,3,4,-1,2,3,4,-2,1,-3,4,-1,-2,-3,4,-1,2,3,4,-2,1,-3,4,-2,1,-4,3\ldots}$. Suppose we want this normalized sequence to represent an isometric image of the V1 dragon. In this case, we must adjust the numbering of the directions as indicated by the characteristic permutation, which is the analog of a base transformation in a vector space.
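The split into $_dT$ and $_lT$ can be sketched mechanically as follows; this is an illustration with names of our own choosing, where each length is tracked by its exponent $e$, i.e., the length $\sqrt{2}^{\,e}$.

```python
# Sketch of the split V1-dragon substitution: directions via
# _dT(iota) = (iota, R mu^2, mu^3), lengths via _lT(iota) = (iota, R, sqrt2*),
# with a length represented by its sqrt(2)-exponent.

def apply_perm(perm, seq):
    return [perm[x - 1] if x > 0 else -perm[-x - 1] for x in seq]

mu = [2, 3, 4, -1]               # minimal rotation over pi/4
mu2 = apply_perm(mu, mu)         # mu^2 = [3, 4, -1, -2]
mu3 = apply_perm(mu, mu2)        # mu^3 = [4, -1, -2, -3]

dS, expS = [1], [0]              # 0-curves: direction <1>, length sqrt(2)^0
for _ in range(4):
    dS = dS + apply_perm(mu2, dS)[::-1] + apply_perm(mu3, dS)
    expS = expS + expS[::-1] + [e + 1 for e in expS]

print(dS[:9])    # [1, 3, 4, -2, -1, 3, 4, -2, -3]
print(expS[:9])  # [0, 0, 1, 1, 0, 0, 1, 1, 2]
```

The two outputs reproduce the prefixes of $_dS$ and of $\log_{\sqrt{2}}\left(_lS\right)$ given above.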
\begin{figure}[H] \centering \includegraphics[scale=1.2]{squarediag_8th_roots_grid_b.pdf} \caption{The two grids, numbered differently than in \Cref{fig:sqrdiag8roots}.} \label{fig:sqrdiag8rootsb} \end{figure} Suppose we change the numbering of the directions, such that $1,3,4,-2$ becomes $1,2,3,4$, as in \Cref{fig:sqrdiag8rootsb}, i.e., with $[1,3,4,-2]^{-1}=[1,-4,2,3]=\s$. The transformations, derived from \Cref{fig:V1drgn}, are the same; however, the minimal rotation is now represented by $\mu'=[-4,3,-1,-2]$, which is equal to $\s\mu\s^{-1}$, where $\mu=[2,3,4,-1]$ is the minimal rotation in the original directions, and $\s$ is the ``base transformation.'' Therefore, the new substitutions become $_dT'(\iota)=\big(\iota, \ensuremath{\mathcal{R}}\mu'\,^2, \mu'\,^3 \big)$ and $_lT'=\,_lT$, and we get the same geometric pictures as \Cref{fig:V1drgns}. \subsection{Hilbert's original curve}\label{sub:hlbrt2d} In this section, we study one of the oldest and most famous fractals, the curve Hilbert presented in his two-page paper \cite{Hilbert}, with the drawing he depicted (\Cref{fig:hilbert}) as the primary explanation. \begin{figure}[H] \centering \includegraphics[scale=0.6]{figuurHilbert.pdf} \caption{Hilbert's original drawings, and his first, second, and third approximants.} \label{fig:hilbert} \end{figure} We propose a new and thorough way of generating the fractal sequence that we discussed in the introduction. The Hilbert approximants lie on the square grid, as shown in \Cref{fig:sqrgrd}. \noindent\begin{minipage}[b]{11cm} From Hilbert's drawings, we observe that isometric images of the $k$-curves are present in each of the four corners of the next $(k+1)$-curve, and the only edges that connect these images are the edges of the $1$-curve. On further inspection, we also observe the converse. An isometric image of the $1$-curve replaces each vertex of a $k$-curve, and the edges of the $k$-curve properly connect these images.
Therefore, we have the formula for the first approach and for some isometries $\s_k;\,k=1,2,3,4$: $H(k+1)=\\ \left(\s_1\big(H(k)\big),2,\s_2\big(H(k)\big),1,\s_3\big(H(k)\big),-2,\s_4\big(H(k)\big)\right)$. \end{minipage} \begin{minipage}[b]{5cm} \begin{figure}[H] \centering \includegraphics[scale=1]{square__grid.pdf} \caption{Square grid.} \label{fig:sqrgrd} \end{figure} \end{minipage} For the second approach, with $H(k)=\lr{s_1,s_2,\ldots,s_{4^k-1}}$ as the edge-representation,\\ $H(k+1)=\left(\s_1\big(H(1)\big),s_1,\s_2\big(H(1)\big),s_2,\ldots,\s_{4^k-1}\big(H(1)\big),s_{4^k-1},\s_{4^k}\big(H(1)\big)\right)$. In the introduction, we described the first approach using $\tau_{d}=[2,1]$, the reflection in the line $y=x$. We also observed that the substitution $T:H(k)\mapsto H(k+1)$ between two succeeding approximants is given by $T\big(H(k)\big)=\Big(\tau_{d}(H(k)),1,H(k),2,H(k),-1,-\tau_{d}(H(k))\Big)$, which corresponds to Hilbert's original drawing. We write \begin{equation}\label{eq:hilbert1}T(\iota)=(\tau_{d},1,\iota,2,\iota,-1,-\tau_{d})\end{equation}, where $\iota$ is the identity, as usual. As we prefer normalized, extending sequences, we start by redrawing the first approximant, such that $H(k)$ is the first part of $H(k+1)$ for all $k=1,2,\ldots$, cf.~\Cref{fig:hlbrt2d123}. \begin{figure}[H] \centering \includegraphics[scale=0.8]{Hilbert2d_1,2,3_b.pdf} \caption{Hilbert's $1$-, $2$-, and $3$-curves, normalized and extending, with directions from which the isometries can be derived.} \label{fig:hlbrt2d123} \end{figure} Now, the substitution becomes \[T\big(H(k)\big)=\Big(H(k),\tau_{d}^k(1),\tau_{d}(H(k)),\tau_{d}^k(2),\tau_{d}(H(k)),\tau_{d}^k(-1),-H(k)\Big).\] However, this formulation is a \emph{hybrid}: some isometries operate on $k$-curves, whereas others act on single edges of the $k$-curve. When the edges are replaced with $H(0)=\lr{1}$ or its images, the substitution looks unnatural.
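As an aside, substitution \eqref{eq:hilbert1} is easy to iterate mechanically. The sketch below, with names of our own choosing, starts from the normalized $1$-curve $\lr{1,2,-1}$ (assuming the directions $1=(1,0)$ and $2=(0,1)$) and checks the defining vertex-covering property of the resulting $k$-curves.

```python
# Sketch of the substitution T(iota) = (tau_d, 1, iota, 2, iota, -1, -tau_d)
# applied to the normalized 1-curve <1,2,-1>.  We verify that the k-curve
# visits every vertex of the 2^k x 2^k block exactly once.

TAU_D = {1: 2, 2: 1, -1: -2, -2: -1}    # diagonal reflection tau_d = [2,1]

def hilbert(k):
    H = [1, 2, -1]                       # the 1-curve
    for _ in range(k - 1):
        TH = [TAU_D[e] for e in H]
        H = TH + [1] + H + [2] + H + [-1] + [-e for e in TH]
    return H

STEP = {1: (1, 0), -1: (-1, 0), 2: (0, 1), -2: (0, -1)}
k = 4
x, y = 0, 0
visited = {(0, 0)}
for e in hilbert(k):
    dx, dy = STEP[e]
    x, y = x + dx, y + dy
    visited.add((x, y))

print(len(hilbert(k)) == 4**k - 1)   # True: 4^k - 1 edges
print(len(visited) == 4**k)          # True: all 4^k vertices, no repeats
```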
A better solution is to replace \emph{vertex replacement} with \emph{edge replacement}. We show this implicitly in \Cref{fig:hilbert4b}, where we display the two versions of the first three approximants. The first approximant, $H_1$, is extended with one (dashed) edge \emph{before} the entry, and one \emph{after} the exit, such that both are isometric. However, these incoming and outgoing edges are redundant because we need only one edge to connect the $k$-curves to build a $k+1$-curve; therefore, we omit the (dashed) entry-edge. The two enlarged curves $H_1$ and $\varphi(H_1)$ differ because the (extra) exit-edge in one case has the same direction as the orientation (see \Cref{def:orientation}) of the $1$-curve. In the other case, the exit-edge is orthogonal to the orientation of the $1$-curve. The other approximants are extended as well and denoted by $H_k$, with the entry-edge in the same direction as the orientation, and by $\varphi(H_k)$\footnote{~$\varphi(H_k)$, defined by $\varphi\Lr{s(1),s(2),\ldots,s(2^n-1),s(2^n)}=\Lr{s(1),s(2),\ldots,s(2^n-1),\tau_d\big(s(2^n)\big)}$.}, with the entry-edge orthogonal to the orientation. As the approximants $H_k$ and $\varphi(H_k)$ are equal, except the last (exit) edge, the Hilbert $k$-curve is equal to $H_k$ or $\varphi(H_k)$, excluding that last edge. \begin{figure}[H] \centering \includegraphics[scale=0.60]{Hilbert_1_3_curves_c.pdf} \caption{Hilbert's normalized, extending $1$-curves $H_1$ and $\varphi(H_1)$ in the left, where the dashed extension is excluded. On the right, the layout of the $2$-curves $H_2$ and $\varphi(H_2)$. } \label{fig:hilbert4b} \end{figure} The substitution becomes relatively straightforward; however, we need two substitutions simultaneously. Note that with $\tau_{d}^2=\iota=\varphi^2$, \begin{equation*} T(A_k)=\big(H_k,\tau_{d}(H_k),\tau_{d}\varphi(H_k),-\varphi(A_k)\big) \text{ with } A_k\in\{H_k,\varphi(H_k)\} \end{equation*} The normalized Hilbert curve is extending. 
If we calculate the sequence with $H_0=\lr{1}$ and $\varphi(H_0)=\lr{2}$, we get\\ $\lr{1,2,-1,2,2,1,-2,1,2,1,-2,-2,-1,-2,1,1,2,1,-2,1,1,2,-1,2,1,2,-1,-1,-2,-1\ldots}.$\\ This sequence also has a simple substitution, where both $x$ and $x'$ have the same direction: \begin{equation*} T=\begin{cases} 1 &\to \lr{ 1,2,-1',2'}\\ 1'&\to \lr{ -2,-1,2',2} \\ 2 &\to \lr{ 2,1,-2',1'} \\ 2'&\to \lr{ -1,-2,1',1}\\ \end{cases} \end{equation*} An advantage of these enlarged Hilbert curves is that they can be generalized to higher dimensions to produce high-dimensional (Hilbert) curves with special properties (\cite{BH}). \subsection{The \texorpdfstring{$\beta,\Omega$}{beta, Omega} curves} In this section, we construct a pair of intertwining curves, which can be considered as the nephews of the Hilbert curve. Refer to the simple \Cref{eq:hilbert1} (page \pageref{eq:hilbert1}); except for a few constants, we only use one isometry, i.e., the diagonal reflection $\tau_{d}=[2,1]$, along with the identity $\iota$. Therefore, it is evident that there is scope for variations. We introduce the $\beta,\Omega$-curves, as they are called by their inventor \cite{Wierum}; see \Cref{fig:btOm1}.
\begin{figure}[H] \begin{minipage}[t]{\textwidth} \centering \includegraphics[scale=0.7]{betaOmega_by_strings_c.pdf} \caption{On the left is the inventor's drawing to justify the naming, in the middle is the corresponding artist's impression, and on the right are the normalized $\beta$ and $\Omega$ $2$-curves.~\protect\footnotemark} \label{fig:btOm1} \end{minipage} \end{figure} \footnotetext{Note that there can be another non-isomorphic normalization of the $\beta$-curve if one reverses the $\beta$-curve first.} As the first approximant is equal to that of the Hilbert curve (see the top row of \Cref{fig:btOm2}), the first substitutions, similar to \Cref{eq:hilbert1} (page \pageref{eq:hilbert1}), become $T_{\beta}(\iota)=(\iota,-1,-\tau_{d},2,\tau_{d},1,\tau_{d})$ and $T_{\Omega}(\iota)=(\iota,-1,-\tau_{d},2,\tau_{d},1,\iota)$, where $\pm 1,\pm 2$ are the connecting edges. Notice that the two substitutions differ only in the last item. \begin{figure}[H] \centering \includegraphics[scale=0.85]{beta_Omega_approximants.pdf} \caption{The first row shows the two general forms of the $k$-curves of type $\beta$ and the general form of the $k$-curves of type $\Omega$. We display the modifications for the next generation underneath. See \Cref{sc:Cayley} for the dihedral group D4 of transformations of the square grid.} \label{fig:btOm2} \end{figure} The first approximants of the Hilbert curve have entry and exit on the vertices of the surrounding square, whereas the $k$-curves for $k\ge 2$ of the $\beta$- and $\Omega$-curve have entry and exit on (approximately) one-third of different edges of that square. Therefore, if we draw a general picture of a $k$-curve for the $\beta$ and $\Omega$-curves, we should obtain something similar to the upper row of \Cref{fig:btOm2}. \noindent\fbox{\parbox{16cm} {\textbf{Intermezzo 2}\label{int:intermezzo2} The extra curve $\beta'$ is required because of the asymmetry of the $\beta$-curve.
$\beta'$ is approximately the inverse of the $\beta$-curve, as one can observe from the lower part of \Cref{fig:btOm2}: strip the last edge of $\beta'$, take the inverse of the rest, and glue the last edge diagonally reflected to the end of the part that was inverted. Or, if $\beta=\Lr{s(1),s(2),\ldots,s(n-1),s(n)}$, then $\beta'=\Lr{-s(n-1),\ldots,-s(2),-s(1),-\tau_d\big(s(n)\big)}$. Likewise, we could redefine the $\Omega$-curve: if $\beta=(\alpha_1,\alpha_2,\alpha_3,\alpha_4)$ consists of four sub-curves of equal sizes, then $\Omega_k=\big(\alpha_1,\alpha_2,\alpha_3,(-\iota)^{k+1}\tau_d(\alpha_4)\big)$. We neglect these more complicated isometries and prefer the alternate curves $\beta'$ and $\Omega$. }} Therefore, we start with the $1$-curves $\beta_1=\lr{1,2,-1,-1}; \beta_1'=\lr{1,-2,-1,-2}\text{ and }\Omega_1=\lr{1,2,-1,2}$, conforming to the first row of \Cref{fig:btOm2}.\footnote{~In \Cref{fig:btOm2}, a non-normalized version of $\beta'$ is given and used further.} As shown in the bottom row of \Cref{fig:btOm2}, the next generation of the $\beta,\Omega$ curves is not normalized if the previous one is. Thus, we apply the horizontal reflection $\tau_x$ alternately. To normalize the $k$-curves, we combine the construction in the lower part of \Cref{fig:btOm2}, and obtain the following substitutions for $k\ge 1$: \begin{alignat*}{3} T_{\beta}(k+1)&=\tau_x^k (\beta_{k+1} ); \quad \beta_{k+1}&=&\Big(\tau_x(\beta_k),-\mu(\beta_k),\tau_d(\beta'_k),\mu(\Omega_k)\Big) \\ T_{\beta'}(k+1)&=\tau_x^k(\beta'_{k+1}); \quad \beta'_{k+1}&=&\Big(\tau_d(\Omega_k),\tau_d(\beta_k),-\mu(\beta'_k),\tau_x(\beta'_k)\Big)\\ T_{\Omega}(k+1)&=\tau_x^k(\Omega_{k+1}); \quad \Omega_{k+1}&=&\Big(\tau_x(\beta_k), -\mu(\beta_k),\tau_d(\beta'_k), -\iota(\beta'_k)\Big) \end{alignat*} Refer to \Cref{sc:Cayley} for the dihedral group D4 of transformations of the square grid.
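The coupled recursion above can be checked numerically; the following sketch, with names of our own choosing, iterates the raw curves $\beta_k,\beta'_k,\Omega_k$ and applies the normalization $\tau_x^k$ only when reading off a $k$-curve (our reading of the formulas, consistent with the $2$-curves shown in \Cref{fig:btOm1}).

```python
# An illustrative sketch of the coupled beta / beta' / Omega substitutions.
# Signed permutations on the square digiset: tau_x = [-1,2] (horizontal
# reflection), tau_d = [2,1] (diagonal reflection), mu = [2,-1] (rotation).

PERMS = {'tau_x': [-1, 2], 'tau_d': [2, 1], 'mu': [2, -1]}

def ap(name, seq):
    """Apply the named signed permutation elementwise."""
    p = PERMS[name]
    return [p[x - 1] if x > 0 else -p[-x - 1] for x in seq]

def neg(seq):
    return [-x for x in seq]

def step(b, bp, om):
    """One level of the substitutions for beta, beta', Omega."""
    return (ap('tau_x', b) + neg(ap('mu', b)) + ap('tau_d', bp) + ap('mu', om),
            ap('tau_d', om) + ap('tau_d', b) + neg(ap('mu', bp)) + ap('tau_x', bp),
            ap('tau_x', b) + neg(ap('mu', b)) + ap('tau_d', bp) + neg(bp))

b, bp, om = [1, 2, -1, -1], [1, -2, -1, -2], [1, 2, -1, 2]   # the 1-curves
b, bp, om = step(b, bp, om)                                  # raw 2-curves

print(ap('tau_x', b))    # the normalized beta 2-curve (one factor tau_x)
```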
For $k=2$, we get the $2$-curves, as in the right-hand side part of \Cref{fig:btOm1}: \begin{align*} \beta_2&=\lr{1,2,-1,-1,-2,-1,2,2,2,1,-2,1,2,1,-2,1}\\ \beta'_2&=\lr{2,-1,-2,-1,2,-1,-2,-2,-2,1,2,1,1,-2,-1,-2} \text{ and }\\ \Omega_2&=\lr{1,2,-1,-1,-2,-1,2,2,2,1,-2,1,1,2,-1,2} \end{align*} The $\beta$ and $\Omega$ curves are also mutually dependent, similar to the Hilbert curve with its extended approximants. Their two approximants only differ in their last constituent. \begin{figure}[H] \centering \includegraphics[scale=0.5]{betaOmega-curves_3_3.pdf} \caption{$3$- and $4$-curves of the $\beta$ and $\Omega$ fractals, respectively, with entries and exits. The right-hand side figures follow the drawings in \cite{HarmHilb}.} \label{fig:betom23} \end{figure} As $\lim\limits_{k\to\infty} \beta_k=\lim\limits_{k\to\infty}\Omega_k$, the corresponding sequence for $\beta$ suffices and equals \\ $\lr{1,2,-1,-1,-2,-1,2,2,2,1,-2,1,2,1,-2,1,2,1,-2,-2,-1,-2,1,1,1,2,-1,2,1,2,\ldots}$.\\ A number substitution is possible, albeit not a very simple one. The one we found uses three variables for each direction, numbered $x1, x2, x3$ for direction $x\in\{\pm 1, \pm2\}$. Instead of choosing $x1, x2, x3$, we could have chosen $x_1, x_2, x_3$, or even $x, x', x''$. Notice that $T(-x)=-T(x)$. \begin{equation*} T=\begin{cases} 11 &\to\lr{ 11, 23,-13,-12}\\ 12 &\to\lr{ -23,-11, 22,-13} \\ 13 &\to\lr{ -21,-11, 22,-13}\\ 21 &\to\lr{ 11, 23,-13, 22} \\ 22 &\to\lr{ 12, 23,-13, 22}\\ 23 &\to\lr{ -23,-11, 22, 21} \\ \end{cases} \end{equation*} \subsection{Arndt's Peano curve} Arndt \cite{Arndt1} investigated all the fractal space-filling curves that can be constructed using a Lindenmayer system with only one variable, which corresponds to the minimal rotation of the grid, denoted by $\mu$. In this section, we discuss his case R9--1, the Peano curve on the square grid (\cite{Sagan}). This curve can also be drawn on the \emph{truncated} square grid.
We observe that the substitution is rather simple because it only uses the minimal rotation $\mu=[2,-1]$ over $\pi/2$, i.e., $T(\iota)=(\iota,\mu,\iota,-\mu,-\iota,-\mu,\iota,\mu,\iota)$. Note that the last four transformations are equal to the first four in reverse order, which displays the folding characteristic of the curve. The normalized sequence is\\ $\lr{1,2,1,-2,-1,-2,1,2,1,2,-1,2,1,-2,1,2,-1,2,1,2,1,-2,-1,-2,1,2,1,-2,1,-2,\ldots}$\\ which has the simple substitution $T$, with $T(-x)=-T(x)$, \begin{equation*} T=\begin{cases} 1 &\to\lr{ 1,2,1,-2,-1,-2,1,2,1}\\ 2 &\to\lr{ 2,-1,2,1,-2,1,2,-1,2} \\ \end{cases} \end{equation*} \begin{figure}[H] \centering \includegraphics[scale=0.8]{Arndt_R9-1,_Peano.pdf} \caption{First two approximants of Arndt's R9--1, the Peano curve, on the square grid.} \label{fig:R9-1} \end{figure} This path is peculiar because the curve exhibits a higher degree of space-filling. Usually, similar to the Hilbert curve, a space-filler visits each vertex of the grid the curve lives on only once, and some edges of that grid are never visited; these are called space-filling curves and are in fact \emph{vertex-covering curves}. But this Peano curve visits each \emph{edge} of the square grid once and consequently each vertex exactly twice, and these curves are called \emph{edge-covering curves}. The author used the R9--1 square grid curve, \Cref{fig:R9-1}, to extend it to the truncated square grid, as depicted on the right-hand side of \Cref{fig:hc_trsq_grd} (page \pageref{fig:hc_trsq_grd}). This extension is relatively simple, as can be seen from the right-hand side drawing of \Cref{fig:hc_trsq_grd_2} (page \pageref{fig:hc_trsq_grd_2}). Here, we can observe that a direction in the square grid generates \emph{a pair} of subsequent directions in the truncated square grid, such as $\lr{1}\to \lr{1,2}$, if the first edge was part of a pair $\lr{1,2}$, and $\lr{1}\to \lr{1,-4}$, if the first edge was part of a pair $\lr{1,-2}$.
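Because the substitution is so simple, it is easy to verify mechanically that the normalized sequence is its fixed point; a sketch (names are ours):

```python
# Sketch: the Peano R9-1 substitution, with T(-x) = -T(x).
T = {1: [1, 2, 1, -2, -1, -2, 1, 2, 1],
     2: [2, -1, 2, 1, -2, 1, 2, -1, 2]}

def sub(seq):
    out = []
    for x in seq:
        img = T[abs(x)]
        out += img if x > 0 else [-y for y in img]
    return out

s1 = T[1]      # first approximant, 9 edges
s2 = sub(s1)   # second approximant, 81 edges; extends s1
```

Since each approximant extends the previous one, iterating `sub` converges to the infinite normalized sequence above.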
Therefore, to get the fractal on the truncated square grid, we split the sequence for the square grid into overlapping pairs, such as $\lr{1,2},\lr{2,1},\lr{1,-2},\ldots$, and then apply the following substitution \begin{equation*} T'=\begin{cases} \lr{1,2}&\to\;\lr{1,2}\\ \lr{1,-2}&\to\;\lr{1,4}\\ \lr{2,1}&\to\;\lr{3,2} \\ \lr{2,-1}&\to\;\lr{3,-4} \\ \end{cases} \quad\text{ together with }T'\lr{-x,-y} = -T'\lr{x,y} \end{equation*} to obtain the sequence for the truncated square grid\\ $\lr{1,2,3,2,1,4,-3,-2,-1,-2,-3,4,1,2,3,2,1,2,3,-4,-1,-4,3,2,1,4,-3,4,1,2,3,\ldots}$.\\ This sequence has the drawings in \Cref{fig:TSqr_R9-1} as the first two approximants, which are similar to those in \cite[sec.~4.1.2]{Arndt1}. \begin{figure}[H] \centering \includegraphics[scale=1]{T_Square_R9-1.pdf} \caption{First two approximants of R9--1 on the truncated square grid.} \label{fig:TSqr_R9-1} \end{figure} We could not obtain a simple substitution for this sequence because there are numerous variants for each direction. \subsection{Gray curve}\label{sc:graycurve} The following example is distinctive: a fractal from which the $k$-curve exists in $k$-dimensional space and not in fewer dimensions. We call this resulting curve the \emph{Gray curve} because the coordinates of the vertices of the curve form the \emph{binary reflected Gray code}. \noindent\begin{minipage}{8cm} In \Cref{fig:brGc}, we construct the binary reflected Gray code in dimension $d$, an ordered set of the binary vertices of the unit cube. First, we take the vertices in dimension $d-1$ and suffix each vertex with a $0$ coordinate. Second, we reflect the same $(d-1)$-dimensional vertices, i.e., place them in reverse order, and suffix them with a coordinate $1$. Finally, we join these two sets of vertices with the last coordinate of $0$ and $1$ in that order, respectively. Clearly, the Gray curves, as a set of vertices, are equal to the unit $d$-cube.
Henceforth, we write simply ``Gray'' for ``binary reflected Gray.'' \end{minipage}\hfill \begin{minipage}[c][6.5cm][t]{8cm} \centering \begin{figure}[H] \centering \includegraphics[scale=1.3]{Gray_coord.pdf} \caption{bin.~refl.~Gray code} \label{fig:brGc} \end{figure} \end{minipage} Define the \emph{Gray function} as $g_d:\{1,2,\ldots,2^d-1\}\to \{\pm 1,\pm 2,\ldots,\pm d\}$ such that if $v(n-1)$ and $v(n)$ differ by $1$ in coordinate $k$, then $g_d(n)=k$, and if they differ by $-1$, then $g_d(n)=-k$.\footnote{~$g_d$ is called a ``delta'' function by Knuth in \cite[p.~293]{Knuth.4.1}.} This gives the sequence $\Lr{g_d(1),g_d(2),\ldots,g_d(2^d-1)}=\lr{1,2,-1,3,\ldots,-1}$ and the next definition as a consequence. \begin{definition}\label{def:grayseq} The (binary reflected) \textbf{Gray sequence} $G$ is an infinite-dimensional sequence in $\Delta^{\ensuremath{\mathbb{N}}}$, where $\Delta= \Z\backslash\{0\}$. Its approximants $G(d)$ are defined as $G(0)=\lr{}$ and for $d>0$,\\ $G(d)=\Lr{g_d(1),g_d(2),\ldots,g_d(2^d-1)}= \LR{G(d-1),d,-\ensuremath{\mathcal{R}}\big( G(d-1)\big)}$, where $-\ensuremath{\mathcal{R}}$ is the inverse. \end{definition} Note that \[-\ensuremath{\mathcal{R}}\big(G(d)\big)=-\ensuremath{\mathcal{R}}\LR{G(d-1),d,-\mathcal{R}\big( G(d-1)\big)}=\LR{G(d-1),-d,-\ensuremath{\mathcal{R}}\big( G(d-1)\big)};\] therefore, $-\ensuremath{\mathcal{R}}\big(G(d)\big)= \big[1,2,\ldots,d-1,-d\big]G(d)$ as shown in \Cref{fig:graysqblck}. \begin{figure}[H] \centering \includegraphics[scale=1]{GraySeqblocked.pdf} \caption{Visual of the Gray sequence. The black block represents the inverse of the gray block. Each block is equal to the corresponding part of its upper line.
Notice that for each $k>1$, each occurrence of $\lr{\pm k}$ is preceded by the sub-sequences $G(k-1)\supset-\ensuremath{\mathcal{R}}(G(k-2))\supset\cdots\supset-\ensuremath{\mathcal{R}}(G(2))\supset-\ensuremath{\mathcal{R}}(G(1))$ and followed by the sub-sequences $-\ensuremath{\mathcal{R}}(G(k-1))\supset G(k-2)\supset\cdots\supset G(2)\supset G(1)$.} \label{fig:graysqblck} \end{figure} This Gray sequence appears under \seqnum{A164677} in \cite{OEIS}\label{cite:OEIS3}. It is normalized and starts with $\lr{1,2,-1,3,1,-2,-1,4,1,2,-1,-3,1,-2,-1,5,1,2,-1,3,1,-2,-1,-4,1,2,-1,-3,1\ldots}$.\\ Sloane observed in \seqnum{A164677} that the Gray sequence is the paper-folding sequence \\ $\m{Fold}(1,2,3,4,\ldots)$, mentioned in Exercise 15 in \cite[p.~203]{allshal}\label{cite:allshall1}. This folding map is defined iteratively by \label{Fold} $\m{Fold}(x_1,\ldots,x_{n+1})=\big\ensuremath{\langle } \m{Fold}(x_1,\ldots,x_n), x_{n+1}, -\mathcal{R}\big(\m{Fold}(x_1,\ldots,x_n) \big)\big\ensuremath{\rangle}$ and \\ $\m{Fold}(x_1)=\lr{x_1}$, similar to our definition of the Gray sequence. We notice that the absolute value of the Gray sequence is the \emph{ruler function} in \seqnum{A001511} in \cite{OEIS}. There exist two substitutions that generate the Gray sequence: $T_1$ is \emph{uniform} (of length 2), cf.~\cite{allshal}, that is, $\|T_1\lr{x}\|=\|T_1\lr{y}\|$ for all $x,y\in \Delta$, and $T_2$ is non-uniform.
\begin{equation*} \begin{cases} T_1(x)=\Lr{1,x+\sgn(x)}\text{ for } |x|=1,\footnotemark\\ T_1(x)=\Lr{-1,x+\sgn(x)}\text{ for } |x|\not=1 \\ \end{cases}\footnotetext{~This is the first of a series of uniform substitutions defined for $n>1$ by $T_n(x)=\Lr{G(n),x+n\cdot\sgn(x)}$ for $|x|=1\text{ and } T_n(x)=\Lr{-\ensuremath{\mathcal{R}}(G(n)),x+n\cdot\sgn(x)}\text{ for }|x|\not=1$} \begin{cases} T_2(1)=\lr{1,2,-1}\text{; } T_2(-1)=-\ensuremath{\mathcal{R}}\big(T_2(1)\big),\\ T_2(x)=\Lr{x+\sgn(x)}\text{ for } |x|\not=1\\ \end{cases} \end{equation*} \begin{definition}\label{def:graycrv} The (binary reflected) \textbf{Gray curve} $G$ is the curve on $\Z^{\ensuremath{\mathbb{N}}}$, which has the Gray sequence as description and the Gray code (with subsequent vertices connected) as a graph; $G(d)$, the $d^{\text{th}}$ approximant, lives on $\Z^d$. \end{definition} Notice that $G(d)$ is a Hamiltonian path on the unit cube $C_d$, with the origin as entry and the last vertex of the Gray code, $(0,0,\ldots,0,1)$, as the exit. Therefore, adding the orientation to $G(d)$ as an extra edge transforms the Hamiltonian path into a Hamiltonian cycle. \Cref{fig:graycrv123} shows the first few approximants, where the association with ``paper folding'' is evident. The $3$-curve resembles a paperclip. \begin{figure}[H] \centering \includegraphics[scale=0.8]{Gray_curve_1,2,3.pdf} \caption{First three approximants of the Gray curve.} \label{fig:graycrv123} \end{figure} \begin{observation} For $k=1,2,\ldots$, any set of $2^k$ subsequent edges in a Gray curve spans a $(k+1)$-dimensional unit cube $C_{k+1}$. \end{observation} \begin{proof} For no dimension $d$ is there a vertex of the Gray code outside the unit cube $C_d$, because all the vertices are at $L_\infty$-distance at most $1$ from the origin and have non-negative coordinates. The number of vertices in the unit cube $C_d$ is $2^d$, all traced by the Gray curve $G(d)$. Hence, the number of edges in that path that trace each of these vertices only once is $2^d-1$.
As we observe from \Cref{fig:graysqblck}, there are different sub-curves $H(j)\subset G(d)$ for $0\le j<d$ that are isometric with $G(j)$ because each gray block is a $G(j)$, and a black block is a $-\ensuremath{\mathcal{R}}\big(G(j)\big)$. Let $A=\lr{a_1,a_2,\ldots,a_{2^k}}$, of length $||A||=2^k$, be a (consecutive) sub-sequence $A\subseteq G(d)$, with $a\in A$ such that $|a|=\max\{|a_i|;i=1,2,\ldots,2^k\}$. Because $||A||=2^k>2^k-1$, it follows that $A\not\subseteq H(k)$, where $H(k)\cong G(k)$ (isometric); thus, $|a|>k$. Therefore, we have $A=\lr{a_1,a_2,\ldots,a_m,a=a_{m+1},a_{m+2},\ldots,a_{m+n+1}=a_{2^k}}$ with $0\le m<2^k$ and $m+n+1=2^k$; thus, $m\not=n$, as $m+n=2^k-1$ is odd. Thus, either $m<n$, in which case $2^k=m+n+1<2n+1$ and $2^{k-1}\le n$, or $n<m$ and $2^{k-1}\le m$. In the first case, $\big(H(k-1),\lr{k}\big)\subseteq \lr{a_{m+2},\ldots,a_{m+n+1}}$, and in the second case $\big(\lr{k},H(k-1)\big)\subseteq \lr{a_1,a_2,\ldots,a_m}$. In both cases, $A\setminus\{a\}$ counts $k$ directions, and hence the number of directions in $A$ equals $k+1$. \end{proof} If we consider $k=1$, then every two subsequent edges in a Gray curve are mutually orthogonal. \begin{definition} A curve is ${n}$\textbf{-hyper-orthogonal} if for $k=1,2,\ldots,n$ any set of $2^k$ subsequent edges in the curve spans a $(k+1)$-dimensional unit cube $C_{k+1}$. \end{definition} Notice that if a curve is $n$-hyper-orthogonal, so are its isometric images. We say that a curve in $\ensuremath{\mathbb{R}}^d$ is \emph{hyper-orthogonal} if the curve is $(d-2)$-hyper-orthogonal. In three dimensions, this implies that a curve is hyper-orthogonal if and only if all subsequent edges are orthogonal to each other. Clearly, the Gray curve is $k$-hyper-orthogonal for all $k>0$, and its approximant $G(d)$ is $k$-hyper-orthogonal for all $k\le d-1$.
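The recursive definition of the Gray sequence, the Fold map, the ruler-function property, and the hyper-orthogonality observation can all be checked mechanically; a sketch (function names are ours):

```python
# Sketch (our notation).
def gray(d):
    """G(0) = <>, G(d) = < G(d-1), d, -R(G(d-1)) >."""
    if d == 0:
        return []
    g = gray(d - 1)
    return g + [d] + [-x for x in reversed(g)]

def fold(xs):
    """Paper-folding map; Fold(1, 2, ..., d) equals G(d)."""
    out = []
    for x in xs:
        out = out + [x] + [-y for y in reversed(out)]
    return out

def span_dims(window):
    """Number of axes a window of edges uses; also checks that the
    window stays inside a unit cube along each used axis."""
    axes = {abs(x) for x in window}
    pos = {a: 0 for a in axes}
    lo, hi = dict(pos), dict(pos)
    for x in window:
        a = abs(x)
        pos[a] += 1 if x > 0 else -1
        lo[a] = min(lo[a], pos[a])
        hi[a] = max(hi[a], pos[a])
    assert all(hi[a] - lo[a] == 1 for a in axes)  # unit cube per axis
    return len(axes)

g = gray(6)  # 63 edges
# every window of 2^k subsequent edges spans exactly k+1 dimensions
ok = all(span_dims(g[i:i + 2 ** k]) == k + 1
         for k in range(1, 6)
         for i in range(len(g) - 2 ** k + 1))
```

The window check exhausts all sub-sequences of $G(6)$, confirming the observation for $k\le 5$.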
We can extend $G(d)$ with additional edges preceding the entry and succeeding the exit without losing the $(d-1)$-hyper-orthogonality by adding an edge $\lr{d}$ before and after the curve. We can build a chain of $G(d)$s, mutually connected by the edge $\lr{d}$, which is still $(d-1)$-hyper-orthogonal. We \cite{BH} used hyper-orthogonality to construct Hilbert curves with excellent properties, as discussed in the following section. \subsection{High-dimensional Hilbert curves}\label{sec:highHlbrt} \subsubsection{The origin as an entry in \texorpdfstring{$3$}{3}D} In the introduction, we discussed the Hilbert curves confined only to a plane. Here, we extend them to higher dimensions, although one could debate whether these are still ``Hilbert'' curves. We consider a curve in the higher dimensions to be a Hilbert curve if the same construction applied in two dimensions results in the original curve. In \Cref{sub:hlbrt2d}, we proposed a construction in $2$ dimensions as $H(k+1)=T(H(k))=\Big(\tau(H(k)),1,H(k),2,H(k),-1,-\tau(H(k))\Big)$, where $\tau=[2,1]$ is the reflection in the line $y=x$ and $T\tau=\tau T$. This reduces to $T(\iota)=(\tau,1,\iota,2,\iota,-1,-\tau)$. The intermediate edges are the edges of the Hilbert $1$-curve $\lr{1,2,-1}$. From our observations, this Hilbert $1$-curve is $G(2)$, i.e., the $2$-dimensional Gray curve. \begin{definition} In dimension $d$, we call a curve a \textbf{Hilbert curve} if its $1$-curve is equal to the Gray curve $G(d)$. \end{definition} From our considerations of hyper-orthogonality (at the end of \Cref{sc:graycurve}), we discovered that the only curve in $d$ dimensions that is $(d-1)$-hyper-orthogonal is a chain of Gray $d$-curves $G(d)$ connected by edges $\lr{d}$. This validates our definition of \emph{hyper-orthogonal as $(d-2)$-hyper-orthogonal.} \begin{observation}\label{obs:GCd1} The extension $G'(d)$ of $G(d)$, given by the concatenation\\ $\big(\lr{d},G(d),\lr{-(d-1)}\big)=G'(d)$, is hyper-orthogonal.
\end{observation} \begin{proof} From our previous observations, the extra edge $\lr{d}$ does not perturb $(d-1)$-hyper-orthogonality. Also, the $(d-2)$-hyper-orthogonality holds: if the set of $2^{d-2}$ subsequent edges contains the first edge $\lr{d}$, then the set also contains $G(d-2)\subset G(d)$, hence, there are $d-1$ dimensions. If the last extra edge $\lr{-(d-1)}$ is a part of the set of $2^{d-2}$ subsequent edges, then this set also contains $-\ensuremath{\mathcal{R}}\big(G(d-2)\big)\subset G(d)$, hence containing $d-1$ dimensions in total. \end{proof} \begin{observation}\label{obs:GCd2} The second hyper-orthogonal extended Gray curve is\\ $\big(\lr{d-1},G(d),\lr{d}\big)=G''(d)$, and both curves are isometric. \end{observation} \begin{proof} Let $\omega=[1,2,\ldots,(d-1),-d]$ be the signed permutation that sends $d\mapsto -d$ and leaves all other directions unchanged. We observe that $-\ensuremath{\mathcal{R}}\big(G(d)\big)=\omega\big(G(d)\big)$.\footnote{~See the note directly above \Cref{fig:graysqblck}.} Therefore, $\omega\Big(-\ensuremath{\mathcal{R}}\big(\lr{d-1},G(d),\lr{d}\big)\Big)= \omega\Big(\lr{-d},\omega\big(G(d)\big),\lr{-(d-1)}\Big)= \big(\lr{d},G(d),\lr{-(d-1)}\big)=G'(d)$. \end{proof} \begin{definition} The \textbf{type of extension} of a Hilbert approximant is \textbf{one} if its \emph{entry-edge} has the same direction as its orientation, and \textbf{two} if its \emph{exit-edge} does. \end{definition} Note that the type of $G'(d)$ is one, and that of $G''(d)$ is two. As the entry- and exit-edge of $G(d)$ are always mutually orthogonal and the type indicates the edge in the same direction as the orientation, the other edge is orthogonal to the orientation. Therefore, we have two isometric extended Gray curves, which we can use as building blocks for Hilbert curves. The first one has its entry-edge in the direction of its orientation, while the second one does not (but it has its exit-edge in the direction of its orientation).
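Both observations are easy to verify mechanically for small $d$; the sketch below uses our own names for $G'(d)$, $G''(d)$, and $\omega$:

```python
# Sketch: the two extended Gray curves and omega = [1, ..., d-1, -d].
def gray(d):
    if d == 0:
        return []
    g = gray(d - 1)
    return g + [d] + [-x for x in reversed(g)]

def g_prime(d):            # G'(d) = < d, G(d), -(d-1) >
    return [d] + gray(d) + [-(d - 1)]

def g_dprime(d):           # G''(d) = < d-1, G(d), d >
    return [d - 1] + gray(d) + [d]

def omega(seq, d):
    # signed permutation sending d -> -d, fixing the other axes
    return [-x if abs(x) == d else x for x in seq]

def neg_rev(seq):          # the inverse -R
    return [-x for x in reversed(seq)]
```

Applying $\omega$ to the inverse of $G''(d)$ indeed returns $G'(d)$, as the proof states.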
For further details, we refer to \cite{BH}; here, it is sufficient to use only the results. We use isometries of both the extended Gray curves $G'(d)$ and $G''(d)$, but \emph{without their entry-edge,} similar to our construction of the Hilbert curve in a plane, at the end of \Cref{sub:hlbrt2d}. If $\s=\big[\s(1),\s(2),\ldots,\s(d)\big]$ is a perm, then the orientation of $\s(G(d))$ is always $\lr{\s(d)}$. Thus, the entry- and exit-edge of $\s(G'(d))$ equal $\lr{\s(d)}$ and $\lr{-\s(d-1)}$, respectively, and those of $\s(G''(d))$ equal $\lr{\s(d-1)}$ and $\lr{\s(d)}$, respectively. \begin{figure}[H] \centering \includegraphics[scale=1.]{type_1_2.pdf} \caption{\small{Type $1$ and type $2$ images of a Hilbert approximant with orientation between two given edges $a$ and $b$.}} \label{fig:type_1_2} \end{figure} If we have two connected, orthogonal edges $\lr{a}$ and $\lr{b}$, cf.~\Cref{fig:type_1_2}, and we want an isometric image of $G(d)$ in between, such that the entry and exit-edge are $\lr{a}$ and $\lr{b}$, then either $\s(d)=\lr{a}$, where $\s\big(G(d)\big)$ is of type one, or $\s(d)=\lr{b}$, where $\s\big(G(d)\big)$ is of type two. In higher dimensions, we cannot visually illustrate the isometries we could use. As we have already discussed this in \cite{BH}, here we only have to supply the details. For higher dimensions, say five or above, the number of isometries to be provided increases exponentially with the dimension. In this example, we restrict ourselves to lower dimensions. Let us start with $d=3$ and construct a normalized, hyper-orthogonal Hilbert curve in three dimensions, which starts at the origin and ultimately fills $\Z^3_{\ge 0}$. Recall that in $3$D, hyper-orthogonal means mutually orthogonal subsequent edges. Our construction, which is valid in dimensions other than three as well, is performed as demonstrated in \cite{BH}.
For the next $(k+1)$-curve, given the $k$-curve, we inflate each vertex until it attains the shape of a unit cube, leaving all other edges of the grid $\Z^d$ unchanged, as demonstrated in \Cref{fig:infltng}. \begin{figure}[H] \includegraphics[scale=0.8]{SOCG_2a_ppt.pdf} \caption{\small{How inflating the vertices of a $3$D Hilbert $1$-curve works.\protect\footnotemark[14] ~First, we define the connecting curve, as in the first drawing. As shown in the center illustration, this connecting curve becomes the curve from which we inflate the vertices. Then, we fill the cubes that replace the vertices with an isometric transformation of the $1$-curve ($=$ Gray-curve) so that their entry and exit glue properly\protect\footnotemark~to the edges of the connecting curve, as shown in the third drawing. We use the new curve as the next connecting curve.}} \label{fig:infltng} \end{figure} \addtocounter{footnote}{-1} \footnotetext{~The following description is derived from \cite{BH}.} \stepcounter{footnote} \footnotetext{~Properly, that is, while maintaining (hyper)orthogonality, and such that the exit of one inflated $1$-curve plus the next connecting edge is equal to the entry of the next inflated $1$-curve. Note that the entry plus the orientation of the $1$-curve equals the exit.} \begin{figure}[H] \centering \includegraphics[scale=0.45]{Hilbert_123_000_1.pdf} \includegraphics[scale=0.48]{Hilbert_123_000_2.pdf} \caption{\small{$3$-dimensional hyper-orthogonal Hilbert $2$-curves, constructed according to \Cref{fig:infltng}, of type $1$ and $2$, respectively, and with entry- and exit-edge.}} \label{fig:HC3dOrg2} \end{figure} \begin{remark} We make a few remarks. First, we observe that the $2$-curves in \Cref{fig:HC3dOrg2} are normalized but not extending (i.e., they do not start with the Hilbert $1$-curve, which is $G(3)$). Second, given a connecting curve with entry- and exit-edge in three dimensions, there exists only \emph{one possible way} to fill all the cubes.
This reasoning is rather simple. We observe from the inflating description (\Cref{fig:infltng}) that a connecting edge $\lr{k}; k>0$ runs from the hyper-plane $\{(x_1,\ldots,x_d):x_k\equiv 1\pmod 4\}$ to the hyper-plane \mbox{$\{(x_1,\ldots,x_d):x_k\equiv 2\pmod 4\}$,} and the edge $\lr{-k}$ connects the two hyper-planes in the reverse direction. We recall that the entry of a Hilbert curve plus its orientation equals its exit, and exit plus exit-edge equals the entry of the next Hilbert curve. From this, we deduce that if the exit-edge is $\lr{k};k>0$ and the entry has $x_k\equiv 0\pmod 4$, then the orientation $\s(d)$ of the $1$-curve $\s\big(G(d)\big)$ has to be $\lr{k}$ as well, and the curve is of type $2$; similarly for $k<0$, where the entry has $x_k\equiv 3\pmod 4$. In all other cases, $\s(d-1)=-k$, and the curve is of type $1$. As $\{|\s(d-1)|,|\s(d)|\}=\{|\text{entry-edge}|,|\text{exit-edge}|\}$, in $3$D, we have only $\s(1)$ left, and this is equal to the axis that is orthogonal to the entry- and exit-edge. We \cite{BH} proved that in dimensions three and four, we have only one choice for the isometry of an inflated cube. \end{remark} Before proceeding further, we first discuss the normalization of the Hilbert curve because it is evident that our construction does not automatically produce a normalized $2$-curve from a normalized $1$-curve. We begin with curves that start at the origin and are self-similar (cf.~\Cref{def:slfsmlr}), extending, and normalized. We exclude the entry-edge from the Hilbert curve and use it only in the construction. A $(k+1)$-curve is normalized if and only if its first constituting $k$-curve is normalized, i.e., if the first Hilbert $1$-curve is normalized, then that curve is the normalized Gray curve $[1,2,3]$. Owing to its hyper-orthogonality, the exit edge of the first $1$-curve has to be $\lr{d}$ or $\lr{-(d-1)}$, which is $\lr{3}$ or $\lr{-2}$ in the $3$D case.
This is then the first edge of the connecting curve of the $2$-curve; the connecting curve is itself a $1$-curve and hence also a Gray curve. Therefore, its last edge is equal to its first, both being either $\lr{3}$ or $\lr{-2}$. As this connecting curve cannot start with a negative edge because its entry is also the origin, it is one of two possible Gray curves given by the permutations $[3,2,1]$ and $[3,1,2]$ (shown in \Cref{fig:prmsgrph}), the first with exit edge $\lr{1}$ ($\lr{-2}$ does not count), and the second with exit edge $\lr{2}$ (as $\lr{-1}$ does not count). \begin{figure}[H] \centering \includegraphics[scale=1.2]{permsgraph.pdf} \caption{\small{Graph with each of the six positive permutations connected to those whose first direction equals the perm's last.}} \label{fig:prmsgrph} \end{figure} Now, let us introduce a concise way of representing a $2$-curve. This is done by representing a $1$-curve by its isometry and adding its exit-edge and type, as in the matrices of \Cref{tab:3Dprms1}. The first $1$-curves in \Cref{fig:HC3dOrg2} are denoted by $[2,3,1];\lr{1};2$ and $[3,2,1];\lr{1};2$. Therefore, the two $2$-curves are described by the following matrices, in which the two signed permutations $\s_1=[3,-2,-1]$ and $\s_2=[-1,-3, 2]$ generate the group of $24$ isometries.
\begin{table}[H] \[\begin{matrix} \s&perm&exit\text{-}edge&type\\ \hline \s_6=\s_2^{-1}\s_1^{-1}\s_2^2&[ 2, 3, 1]&\lr{ 1}&2\\ \s_3=\s_1^{-1}\s_2^{-1}&[ 3, 1, 2]&\lr{ 2}&2\\ \s_3&[ 3, 1, 2]&\lr{-1}&1\\ \s_4=\s_2\s_1\s_2&[-2,-1, 3]&\lr{ 3}&2\\ \s_4&[-2,-1, 3]&\lr{ 1}&1\\ \s_5=\s_1\s_2&[-3, 1,-2]&\lr{-2}&2\\ \s_5&[-3, 1,-2]&\lr{-1}&1\\ \s_7=\s_1^{-1}\s_2^2 &[-3, 2,-1]&\lr{ -2}&1\\ \end{matrix} \quad\vline\quad \begin{matrix} \s&perm&exit\text{-}edge&type\\ \hline \tau=\s_1\s_2^2&[ 3,2,1]&\lr{ 1}&2\\ \s_3&[ 3,1,2]&\lr{ 2}&2\\ \s_3&[ 3,1,2]&\lr{-1}&1\\ \s_4&[-2,-1, 3]&\lr{3}&2\\ \s_4&[-2,-1, 3]&\lr{ 1}&1\\ \s_5&[-3, 1,-2]&\lr{-2}&2\\ \s_5&[-3, 1,-2]&\lr{-1}&1\\ \s_8=\s_2\s_1\s_2^2&[2,-3,-1]&\lr{3}&1\\ \end{matrix}\] \caption{Perms generating extending Hilbert 2-curves in $3$D with the origin as entry.}\label{tab:3Dprms1} \end{table} Note that the central six permutations per $2$-curve are equal for both curves, being $\s_3,\s_4,\s_5$, and the first and last permutations are all different. Finally, the respective $1$-curves are of types $2,2,1,2,1,2,1,1$ in both cases. The respective exit-edges constitute the connecting $1$-curve. It is relatively straightforward that we can continue with the construction depicted in \Cref{fig:infltng} by inflating the vertices of the newly obtained $2$-curve, filling the cubes with the proper isometric images of the Hilbert $1$-curve, and repeating this process. But it is easier, knowing that the above matrices represent the extending $2$-curves $H'(2)$ and $H''(2)$ of type $1$ and $2$, respectively, to apply the following substitution with the perms from the matrices in \Cref{tab:3Dprms1}. 
\begin{multline}\label{eq:Hlbrt_k} H'(k+1)=\Big(\s_6\big( H''(k)\big); \s_3\big( H''(k)\big); \s_3\big( H'(k)\big); \s_4\big( H''(k)\big);\\ \shoveright{\s_4\big( H'(k)\big); \s_5\big( H''(k)\big);\s_5\big( H'(k)\big) ;\s_7\big( H'(k)\big)\Big)} \\ \shoveleft{H''(k+1)=\Big(\tau\big( H''(k)\big); \s_3\big( H''(k)\big); \s_3\big( H'(k)\big); \s_4\big( H''(k)\big);}\\ {\s_4\big( H'(k)\big); \s_5\big( H''(k)\big);\s_5\big( H'(k)\big) ;\s_8\big( H'(k)\big)\Big)} \end{multline} We now obtain the normalized, hyper-orthogonal Hilbert curves by first depicting the two $3$-curves. \begin{figure}[H] \centering \includegraphics[scale=0.45]{Hilbert_123_000_1_2.pdf} \includegraphics[scale=0.45]{Hilbert_123_000_2_2.pdf} \caption{\small{Two $3$-dimensional, extending, and hyper-orthogonal Hilbert $3$-curves of type $1$ and $2$, respectively. As in \Cref{fig:HC3dOrg2}, these two are also equal, except for their first and last octant. Here, they are observed from the left-hand side. Notice that only the curve on the right is normalized.}} \label{fig:HC3dOrg4} \end{figure} By applying the substitutions from \Cref{eq:Hlbrt_k} on $H'(2)$ and $H''(2)$ from \Cref{fig:HC3dOrg2}, we generate these $3$-curves. We observe that the first $1$-curve in the $k$-curves of type $2$ toggles between $\iota=[1,2,3]$, when $k$ is odd, and $\tau=[3,2,1]$, when $k$ is even, implied by the first term in the second line of \Cref{eq:Hlbrt_k}. The $k$-curves of type $1$ are never normalized because their first $1$-curve toggles between $\s_6=[2,3,1]$ and $\s_6\tau=[1,3,2]$, respectively. Therefore, a normalized approximant is produced by \[H(k)=T(k)=[3,2,1]^{(k+1)}\;H''(k)\footnotemark\text{ for }k>1\] which generates the following normalized sequence \footnotetext{~We could have used a substitution with $H'(k+1)$ instead of $H''(k+1)$, but this is simpler.} \\ No simple substitution is found, as the simplest has eight substitutions per direction. 
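The table-driven construction can also be checked numerically: concatenating the permuted Gray $1$-curves, joined by the exit-edges listed in the right-hand matrix of \Cref{tab:3Dprms1}, must yield a Hamiltonian path on the $4\times4\times4$ grid whose subsequent edges are mutually orthogonal. A sketch (helper names are ours):

```python
# Sketch: build H''(2) from the (perm, exit-edge) rows of the table.
def gray(d):
    if d == 0:
        return []
    g = gray(d - 1)
    return g + [d] + [-x for x in reversed(g)]

def act(p, seq):
    # signed permutation applied entrywise to a direction sequence
    return [(1 if x > 0 else -1) * p[abs(x) - 1] for x in seq]

# right-hand matrix of the table; the last row has no exit-edge
ROWS = [([3, 2, 1], 1), ([3, 1, 2], 2), ([3, 1, 2], -1),
        ([-2, -1, 3], 3), ([-2, -1, 3], 1), ([-3, 1, -2], -2),
        ([-3, 1, -2], -1), ([2, -3, -1], None)]

G = gray(3)
h2 = []
for perm, exit_edge in ROWS:
    h2 += act(perm, G)
    if exit_edge is not None:
        h2.append(exit_edge)

# trace the 63 edges from the origin
v = (0, 0, 0)
visited = [v]
for x in h2:
    step = [0, 0, 0]
    step[abs(x) - 1] = 1 if x > 0 else -1
    v = tuple(a + b for a, b in zip(v, step))
    visited.append(v)
```

Applying $\tau=[3,2,1]$ to `h2` normalizes it, so the result starts with the Gray curve $G(3)$, in line with the definition of a Hilbert curve.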
\subsubsection{Non-origin entry in \texorpdfstring{$3$}{3}D} As we proved in \cite{BH}, there are only two entries for hyper-orthogonal Hilbert curves in all dimensions $\ge 3$, which are $(x,x,\ldots,x,0)$ and $(x,x,\ldots,x,0,x)$ for type $1$ and $2$, respectively, with $x\in\{\frac{1}{3},\frac{2}{3},0\}$ in the limit Hilbert curve, where the unit cube is the surrounding cube.\footnote{~Here, the ones with $x=\frac{1}{3}$ are isometric with the ones with $x=\frac{2}{3}$; hence, there are only two entries for different Hilbert curves.} This limit Hilbert curve is obtained by shrinking each $k$-curve by a factor of $2^k$. As the binary representation of $\frac{1}{3}$ is $(0.01010101\cdots)_2$, the respective entries of the $k$-curves are $(e_k,e_k,\ldots,e_k,0)$ and $(e_k,e_k,\ldots,e_k,0,e_k)$, where $e_k=0,1,2,5,10,21,\ldots$ (\seqnum{A000975}) for $k=1,2,\ldots$ Using $\frac{2}{3}=(0.10101010\cdots)_2$, the respective entries of the $k$-curves are $(e_k,e_k,\ldots,e_k,0)$ and $(e_k,e_k,\ldots,e_k,0,e_k)$, where $e_k=1,2,5,10,21,\ldots$ for $k=1,2,\ldots$ \begin{figure}[H] \centering \includegraphics[scale=0.45]{Hilbert_123_110_1_1.pdf} \includegraphics[scale=0.45]{Hilbert_123_101_2_1.pdf} \caption{Two $3$-dimensional, extending, and hyper-orthogonal Hilbert $2$-curves $H'(2)$ and $H''(2)$ of type $1$ and $2$, respectively, with the entry $(1,1,0)$ and $(1,0,1)$, plus the entry- and exit-edge.} \label{fig:HC3dOrg3} \end{figure} The perms, with their exit-edges, that arise from these constructions are as follows.
\begin{table}[H] \[\begin{matrix} \s&perm&exit\text{-}edge&type\\ \hline \s_1&[-2,-1, 3]&\lr{ 1}&1\\ \s_2&[-3,-2, 1]&\lr{ 2}&1\\ \s_3&[-3, 2,-1]&\lr{-1}&2\\ \s_4&[ 2,-3,-1]&\lr{ 3}&1\\ \s_5&[ 2, 3, 1]&\lr{ 1}&2\\ \s_6&[ 3, 2, 1]&\lr{-2}&1\\ \s_7&[ 3,-2,-1]&\lr{-1}&2\\ \s_8&[ 3,-1,-2]&\lr{-2}&2\\ \end{matrix} \quad\vline\quad \begin{matrix} \s&perm&exit\text{-}edge&type\\ \hline \s_9&[-3,-1,2]&\lr{ 1}&1\\ \s_2&[-3,-2, 1]&\lr{ 2}&1\\ \s_3&[-3,2,-1]&\lr{-1}&2\\ \s_4&[2,-3,-1]&\lr{ 3}&1\\ \s_5&[2,3, 1]&\lr{ 1}&2\\ \s_6&[3, 2,1]&\lr{-2}&1\\ \s_7&[3, -2,-1]&\lr{-1}&2\\ \s_1&[ -2,-1,3]&\lr{ 3}&2\\ \end{matrix}\] \caption{Perms generating extending Hilbert 2-curves in $3$D, not with the origin as entry.}\label{tab:3Dprms} \end{table} Note that in the matrices in \Cref{tab:3Dprms} the columns of types are $1,1,2,1,2,1,2,2$ on both sides with this entry. We \cite{BH} established the relation between the entry and the \emph{type sequence}. The normalized curve is then derived as \[\label{eq:snd3DHlbrtcrv} H(k)=[-2,-1,3]^{(k+1)}H'(k) \text{ for }k>1 \] based on \begin{multline*} H'(k+1)=\Big(\s_1\big( H''(k)\big); \s_2\big( H''(k)\big); \s_3\big( H'(k)\big); \s_4\big( H''(k)\big);\\ \shoveright{\s_5\big( H'(k)\big); \s_6\big( H''(k)\big);\s_7\big( H'(k)\big) ;\s_8\big( H'(k)\big)\Big)} \\ \shoveleft{H''(k+1)=\Big(\s_9\big( H''(k)\big); \s_2\big( H''(k)\big); \s_3\big( H'(k)\big); \s_4\big( H''(k)\big);}\\ \s_5\big( H'(k)\big); \s_6\big( H''(k)\big);\s_7\big( H'(k)\big) ;\s_1\big( H'(k)\big)\Big) \end{multline*} which gives rise to the following normalized sequence in three dimensions, which is different from the former one and has no simple substitution.
$\lr{1,2,-1,3,1,-2,-1,-2,-3,1,3,-2,-3,-1,3,-1,-3,-1,3,2,-3,1,3,2,-1,-3,1\ldots}$ \subsubsection{The origin as the entry in \texorpdfstring{$4$}{4}D} In four dimensions, the two extending Hilbert $2$-curves with the origin as entry are uniquely determined (cf.~\cite{BH}), which are obtained in the same way as in three dimensions. \begin{table}[H] \[\begin{matrix} perm&exit\text{-}edge&type\\ \hline [ 3, 2, 4, 1]&\lr{ 1}&2\\ [ 3, 4, 1, 2]&\lr{ 2}&2\\ [ 4, 3, 1, 2]&\lr{-1}&1\\ [ 4,-2,-1, 3]&\lr{ 3}&2\\ [ 4,-2,-1, 3]&\lr{ 1}&1\\ [ 4,-3, 1,-2]&\lr{-2}&2\\ [-3, 4, 1,-2]&\lr{-1}&1\\ [-3, 2,-1, 4]&\lr{ 4}&2\\ [-3, 2,-1, 4]&\lr{ 1}&1\\ [-3,-4, 1, 2]&\lr{ 2}&2\\ [-4,-3, 1, 2]&\lr{-1}&1\\ [-4,-2,-1,-3]&\lr{-3}&2\\ [-4,-2,-1,-3]&\lr{ 1}&1\\ [-4, 3, 1,-2]&\lr{-2}&2\\ [-4, 3, 1,-2]&\lr{-1}&1\\ [-4, 2, 3,-1]&\lr{-3}&1\\ \end{matrix} \quad\vline\quad \begin{matrix} perm&exit\text{-}edge&type\\ \hline [ 4, 2, 3, 1]&\lr{ 1}&2\\ [ 4, 3, 1, 2]&\lr{ 2}&2\\ [ 4, 3, 1, 2]&\lr{-1}&1\\ [ 4,-2,-1, 3]&\lr{ 3}&2\\ [ 4,-2,-1, 3]&\lr{ 1}&1\\ [ 4,-3, 1,-2]&\lr{-2}&2\\ [-3, 4, 1,-2]&\lr{-1}&1\\ [-3, 2,-1, 4]&\lr{ 4}&2\\ [-3, 2,-1, 4]&\lr{ 1}&1\\ [-3,-4, 1, 2]&\lr{ 2}&2\\ [-4,-3, 1, 2]&\lr{-1}&1\\ [-4,-2,-1,-3]&\lr{-3}&2\\ [-4,-2,-1,-3]&\lr{ 1}&1\\ [-4, 3, 1,-2]&\lr{-2}&2\\ [ 3,-4, 1,-2]&\lr{-1}&1\\ [ 3, 2,-4,-1]&\lr{ 4}&1\\ \end{matrix}\] \caption{Perms generating extending Hilbert 2-curves in $4$D with the origin as entry.}\label{tab:4Dprms} \end{table} The $3^{\text{rd}}$ and $4^{\text{th}}$ perms $[4,3,1,2]\text{ and } [4,-2,-1,3]$ generate the group of $192$ four-dimensional Hilbert isometries. In both cases, the types of the constituting $1$-curves are as follows: the first two have type $2$, the last two have type $1$, and the $12$ types in between are alternating, starting with $1$. We have a generalized version of the substitution for extending Hilbert curves with the entry as origin, where, except for the first and last $2^{d-3}$ perms, the perms $\s'_k$ and $\s''_k$ are equal.
\begin{multline*} H'(k+1) = \Big(\s'_1\big( H''(k)\big), \s'_2\big( H''(k)\big), \s'_3\big( H'(k)\big), \ldots\\ \shoveright{\ldots, \s'_{2^d-2}\big( H''(k)\big),\s'_{2^d-1}\big( H'(k)\big) ,\s'_{2^d}\big( H'(k)\big)\Big)}\\ \shoveleft{H''(k+1)= \Big(\s''_1\big( H''(k)\big), \s''_2\big( H''(k)\big), \s''_3\big( H'(k)\big), \ldots,}\\ \ldots,\s''_{2^d-2}\big( H''(k)\big),\s''_{2^d-1}\big( H'(k)\big) ,\s''_{2^d}\big( H'(k)\big)\Big). \end{multline*} The normalized curve in $4$D is then easily derived as \[H(k+1)=[4,2,3,1]^{(k\bmod 2)}H''(k+1) \] which generates the next normalized sequence in $4$ dimensions.\\ $\lr{1,2,-1,3,1,-2,-1,4,1,2,-1,-3,1,-2,-1,4,1,3,-1,4,1,-3,-1,2,1,3,-1,-4,1,\ldots}$ \subsubsection{Non-origin entry in \texorpdfstring{$4$}{4}D} The normalized sequence for the Hilbert curve with the entry $(\frac{1}{3},\frac{1}{3},0,\frac{1}{3})$ equals \\ $\lr{1,2,-1,3,1,-2,-1,4,1,2,-1,-3,1,-2,-1,-3,1,-4,-1,2,1,4,-1,-3,1,-4,-1,\ldots}$\\ which is derived from the normalized approximants \[H(k+1)=[-3,-2,-1,4]^{(k\bmod 2)}H'(k+1)\] In dimensions $>4$, more than two hyper-orthogonal Hilbert curves exist per entry point. \subsection{Dekking's Gosper-type curve} Dekking \cite{Dekk0} gave an extensive treatment of the method we have applied here, with a more versatile notation. We apply the following translation table for the items he used in his Example (4.9). \[\begin{matrix} s_{00}&\mapsto&\lr{1}& s_{10}&\mapsto&\lr{1'}\\ s_{01}&\mapsto&\lr{2}& s_{11}&\mapsto&\lr{2'}\\ s_{02}&\mapsto&\lr{-1}& s_{12}&\mapsto&\lr{-1'}\\ s_{03}&\mapsto&\lr{-2}& s_{13}&\mapsto&\lr{-2'} \end{matrix}\] Here $\lr{1}\mapsto(1,0)\mapsfrom\lr{1'}$, whereas $\lr{2}\mapsto(0,1)\mapsfrom\lr{2'}$.
Hence, the isometries used are $\rho=\begin{bmatrix} 1 & 2 & 1' & 2'\\ 1' & 2' & 1 & 2\\ \end{bmatrix}$, which swaps the columns in the matrix above and geometrically is the identity, and $\s=\begin{bmatrix}1 & 2 & 1' & 2'\\ 2&-1&2'&-1'\\\end{bmatrix}$, which rotates each column in the matrix above one step downward and geometrically is the rotation over $\pi/2$. Finally, Dekking defined the substitution in \cite{Dekk0} with a process that is greatly simplified using our proposed toolkit. \begin{figure}[H] \centering \includegraphics[scale=0.9]{DekkingFlowsnake.pdf} \caption{Construction of sequence and substitution of the $1$-curve of Dekking's flowsnake. See \Cref{fig:Dekking_flowsnake_2} (p.~\pageref{fig:Dekking_flowsnake_2}) in \Cref{ss:Dkkng Flwsnk} for the next approximants.} \label{fig:DkFlsnk} \end{figure} In \Cref{fig:DkFlsnk}, we show the derivation of the sequence and the substitution for the $1$-curve of Dekking's flowsnake. In the drawing on the left, we observe that the primary edge, $\lr{1}$, covers the $5\times 5$ square on its left side. Therefore, we determine the mapping between squares and edges in the center image by starting with the ($13$) dark gray squares with a unique edge. Each square contains the direction of its unique edge, without an accent if the square is at the left of the directed edge, or with an accent otherwise. Subsequently, the ($4$) edges in a unique light gray square are mapped to the corresponding squares. Finally, we similarly treat the ($8$) remaining unattached edges and white squares, as shown in the drawing on the right of \Cref{fig:DkFlsnk}. This leads to the substitution $S\lr{1}=\lr{1,1,2',-1',2,1,2',-1',-1,2',1,1,1',2, 1',-2,-2,-1',-2,-2',1',2,1,-2',-2'}$. If we replace Dekking's $\rho$, the vertical flip, by $\tau_y=[1,-2]$, and his $\s$, the rotation over $\pi/2$, by $\mu=[2,-1]$, we see that $\lr{1'}=\tau_y\lr{1}$ and $\lr{2}=\mu\lr{1}$.
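The two signed permutations just introduced act on signed edge directions, and composing them generates the group used below. A minimal Python sketch (the helper names \texttt{apply} and \texttt{compose} are ours, not Dekking's notation; accents, which record the side of the covered square, are not modeled):

```python
# Signed permutations acting on signed directions (1-based, nonzero).
# mu = [2,-1] is the rotation over pi/2; tau_y = [1,-2] is the vertical flip.
MU = [2, -1]
TAU_Y = [1, -2]

def apply(perm, d):
    """Image of the signed direction d under the signed permutation perm."""
    sign = 1 if d > 0 else -1
    return sign * perm[abs(d) - 1]

def compose(p, q):
    """p after q, so that apply(compose(p, q), d) == apply(p, apply(q, d))."""
    return [apply(p, e) for e in q]

# mu sends <1> to <2>, as stated in the text; applying mu twice
# gives [-1,-2], i.e. minus the identity (the rotation over pi).
mu_squared = compose(MU, MU)
```

Acting elementwise with such a permutation on a signed, integer sequence is then a one-line map, which is how the isometries are used on the curves in this section.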
As for the substitution $S$, we have $S(\varphi)=\varphi S(\iota)$ for all isometries $\varphi$; hence we get $S(\iota)=(\iota,\iota,\mu\tau_y,-\tau_y,\mu,\iota,\mu\tau_y,-\tau_y,-\iota,\mu\tau_y,\iota,\iota,\tau_y,\mu,\tau_y,-\mu,-\mu,-\tau_y,-\mu,-\mu\tau_y,\tau_y,\mu,\iota,-\mu\tau_y,-\mu\tau_y)$. As Dekking's flowsnake is normalized from the beginning, we have\\ $\lr{1,1,2,-1,2,1,2,-1,-1,2,1,1,1,2,1,-2,-2,-1,-2,-2,1,2,1,-2,-2,1,1,2,-1,2,\ldots}$. There is a general way to apply this process to squares of different sizes, even if they are slightly rotated; see \Cref{fig:4x3_flowsnake}. The simplest case is $a=2$, $b=1$, as Mandelbrot described in \cite{Mbrt}. Although he named these fractals ``quintet'' and ``teragon,'' we prefer to call them \emph{Mandelbrot flowsnakes}. Dekking also described it in \cite[Example 4.2]{Dekk0}. \begin{figure}[H] \centering \includegraphics[scale=0.68]{4x3_flowsnake.pdf} \caption{\small{The gray center square in the first picture has size $a-b$. The indicated vertices are to be visited by the next generation of the blue, solid edge, as depicted in the middle. On the right-hand side, each small square with a cyan center is filled by its inflating edge, where the square is at its \textbf{R}ight or at its \textbf{L}eft.}} \label{fig:4x3_flowsnake} \end{figure} In the $2$-curve of \Cref{fig:4x3_flowsnake}, we use $a=4$, $b=3$ and have $5^2=25$ edges, which are denoted by their isometry, using $\mu=[2,-1]$ and $\tau_y=[1,-2]$. Hence, we get $S(\iota)=\big(\iota,\mu\tau_y,\iota,\mu\tau_y,\iota,\mu\tau_y,\iota,-\mu,\tau_y,-\mu,\tau_y,-\mu,-\iota,-\mu\tau_y,-\iota,\mu,\mu\tau_y,-\tau_y,-\mu,-\tau_y,-\mu\tau_y,\iota,-\mu\tau_y,\iota,-\mu\tau_y\big)$, which gives rise to the normalized sequence\\ $\lr{1,2,1,2,1,2,1,-2,1,-2,1,-2,-1,-2,-1,2,2,-1,-2,-1,-2,1,-2,1,-2,1,2,1,2,1\ldots}$. As this fractal is a flowsnake, its boundary is also similar to the boundary of Gosper's flowsnake.
For this, we read the following substitution from the drawing at the right-hand side of \Cref{fig:4x3_flowsnake}: $S(\iota)=\big(\iota,-\mu,\iota,-\mu,\iota,-\mu,\iota\big)$, which requires the $1$-curve $\lr{1,2,-1,-2}$ as a starting point. The resulting border of the Mandelbrot island is \\ $\lr{1,2,1,2,1,2,1,2,-1,2,-1,2,-1,2,1,2,1,2,1,2,1,2,-1,2,-1,2,-1,2,1,2,1,2,1,2,1\ldots}$. \section{Considerations and conclusions} \subsection{Conclusions} In this study, we used a digiset to describe the fractal sequence; this digiset is our translation of the grid on which the fractal image exists. We explicitly map the set of geometrical line-fractals to the set of signed, integer sequences. We provide a substitution, and a starter sequence from which the fractal grows, via its approximants, which are finite approximations of the limit fractal. The advantage of our approach is threefold. First, we can order the set of normalized, signed, integer sequences, which implies an ordering on the set of fractal images. Second, we can use the machinery of signed permutations as isometries of signed, integer sequences. Notably, the ``reverse'' appears to be an essential anti-morphism. Third, the ``coding'' of a fractal image as a signed, integer sequence makes it sufficiently simple to obtain the image from that sequence. We describe our findings using ten examples with fifteen sequences to illustrate the different peculiarities encountered when representing a fractal sequence as a signed, integer sequence. Finally, we set up an inventory of the fifteen fractal integer sequences, most of which are not yet listed in the On-Line Encyclopedia of Integer Sequences \cite{OEIS}. \subsection{Considerations} It is evident that the fractal sequences we discussed here to illustrate the method of mapping a fractal to a signed, integer sequence only scratch the surface.
Therefore, the primary task should be to scan all publications describing fractals, convert them into integer sequences, and add them to the list provided in our study, which would build up an ordered catalog of fractal sequences. The fractals in the publications of Mandelbrot, Dekking, Arndt, and Ventrella should be described first. Subsequent research could use the method described here to generate new fractal sequences, comparable to what Arndt did with Lindenmayer systems. In further studies, emphasis should be given to fractals in higher dimensions, i.e., above two, as the only such fractals known to us live on the cubic grid, like the Gray curves and their offspring, i.e., the Hilbert curves. Finally, the fractals occurring in this study are line-fractals. We are eager to know how fractals can be represented with higher-dimensional structures, like planes or volumes. \section*{Acknowledgments}We are grateful to Jsoftware.com for freely providing the array programming language J, a derivative of APL by the same creator. With this tool, we have been able to perform all the calculations as well as draw the more complicated figures. \bibliographystyle{plainnat}
\section{Introduction} Understanding star formation mechanisms and their physical connection to the interstellar medium (ISM) properties of galaxies is crucial for resolving astrophysical issues such as the physical nature of the Hubble sequence, the nature and triggering of starbursts, and the interpretation of observations of the high--redshift universe. These scientific objectives constitute the core science program of the {\it Spitzer} Infrared Nearby Galaxies Survey (SINGS) \citep{2003PASP..115..928K}. Since the science drivers of the project depend on many variables, the SINGS project requires a comprehensive set of data including infrared imaging and spectroscopic data, broadband imaging in the visible and near--infrared as well as UV imaging and spectrophotometry. The {\it Spitzer} data and ancillary observations of the SINGS galaxies will also provide valuable tools for understanding the physics of galaxy formation and evolution. The formation of individual stars from the collapse of dense molecular clouds is relatively well known; e.g., the intensity of star formation is strongly correlated with the column density of gas and stars \citep{1998ApJ...498..541K}. However, the large--scale processes driving star formation are still poorly understood. For instance, the effect of gas dynamics on the regulation of star formation remains largely unknown. Since young stars are often associated with spiral arms, it is thought that protostars are formed from compressed gas along large--scale shock fronts. Moreover, the star formation history varies greatly along the Hubble sequence. Elliptical galaxies, being gravitationally supported by velocity dispersion, have exhausted their gas reservoir and hence star--forming processes have ceased \citep{1998ARA&A..36..189K}. Nevertheless, some elliptical galaxies with a rotating disc are still forming stars (e.g. NGC 2974, \citealt{2007MNRAS.376.1021J}).
Spiral galaxies, on the other hand, continue to form stars and are supported by rotation. Understanding the effect of gas dynamics on star formation would certainly help improve our understanding of galaxy formation. Indeed, a Schmidt law modulated by rotation seems to better fit the data than a simple (gas column density) Schmidt law \citep{2003MNRAS.346.1215B, 2001ApJ...555..301M, 1989ApJ...344..685K}. This paper presents the second part of an \ha\ kinematics survey of the SINGS galaxies. The 37 galaxies showing \ha\ emission were observed by means of Fabry--Perot (FP) interferometry. The paper is organized as follows. Section \ref{observations} describes the composition of the SINGS sample and the hardware used for the observations. The data reduction process, using a custom IDL pipeline designed for an optimal use of the data, is introduced in section \ref{reduction}. In section \ref{gipsy}, the details of the kinematical analysis performed on the SINGS galaxies are described. Section \ref{results} presents the observational results in the form of velocity fields, monochromatic maps, position--velocity diagrams, and rotation curves. In section \ref{discussion}, the effect of the bar on the observed kinematics is discussed. Finally, the scientific applications of the kinematical results presented in this paper are reviewed in section \ref{conclusion}.
\section{Observations} \label{observations} \subsection{The sample} \label{sample} The sample, as described by \cite{2003PASP..115..928K}, is composed of 75 nearby ($\Delta< 30$ Mpc, for $\mathrm{H_0}$ = 70 \kms Mpc$^{-1}$) galaxies covering, in a three--dimensional parameter space, a wide range of physical properties: \begin{itemize} \item morphological type: associated with gas fraction, star formation rate (SFR) per unit mass, and bulge/disc structure; \item luminosity: associated with galaxy mass, internal velocity, and mean metallicity; \item FIR/optical luminosity ratio: associated with dust temperature, dust optical depth, and inclination. \end{itemize} In particular, there are roughly a dozen galaxies in each RC3 type (E--S0, Sa--Sab, Sb--Sbc, Sc--Scd, Sd--Sm, Im--I0), leading to an extensive set of combinations of luminosity and FIR/optical luminosity ratio. Overall, factors of $10^5$ in infrared luminosity and $10^3$ in $L_{FIR} / L_{opt}$ are covered by the sample. The 75 galaxies also represent a vast range of other galaxy properties such as nuclear activity, surface brightness, inclination, CO/\hi\ ratio, spiral arm structure, bar structure, and environment. The galaxies were chosen as far as possible from the Galactic plane in order to avoid Galactic extinction and a high density of foreground stars. Since gas fraction is correlated with morphological type \citep{2001AJ....121..753B}, not all of the 75 galaxies are \ha\ emitters. In fact, \ha\ was not detected for ten galaxies (E--S0 and Irr types), thus kinematical information could not be extracted from emission lines for those galaxies. \ha\ kinematics for 28 galaxies of the SINGS sample have already been published in \cite{2006MNRAS.367..469D}. This paper presents the second part of the follow--up survey, namely the \ha\ kinematics of the remaining 37 galaxies. Table \ref{basic_parameters} presents the basic galaxy parameters.
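The absolute magnitudes listed in Table \ref{basic_parameters} follow from the distance modulus, $M_B = B_T - 5\log_{10}(\Delta/10\,\mathrm{pc})$. A quick Python sketch (our own helper, not part of the reduction pipeline) reproduces the tabulated values:

```python
import math

def absolute_magnitude(b_total, distance_mpc):
    """Absolute magnitude from apparent magnitude and distance (Mpc)."""
    # distance modulus: m - M = 5 log10(d / 10 pc); 1 Mpc = 1e5 * 10 pc
    return b_total - 5.0 * math.log10(distance_mpc * 1.0e5)

# NGC 24: B_T = 12.19 at 8.2 Mpc gives M_B = -17.38, as in Table 1
m_ngc24 = absolute_magnitude(12.19, 8.2)
```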
\begin{table*} \centering \begin{minipage}{140mm} \caption{\mbox{Observational data for the SINGS \ha\ kinematics sample} \label{basic_parameters} } \begin{tabular}{ccccccccc} \hline \hline Galaxy & $\alpha$(J2000) & $\delta$(J2000) & Type & $\Delta$ \footnote{$\Delta$: distance. Taken from Kennicutt et al. (2003).} & $D_{25}^{b,i}$ \footnote{$D_{25}^{b,i}$: apparent major diameter at the 25 mag arcsec$^{-2}$ in B. Taken from the RC3.} & $B_T^{b,i}$ \footnote{$B_T^{b,i}$: total apparent magnitude in B. Taken from the RC3.} & $M_B^{b,i}$ \footnote{$M_B^{b,i}$: total absolute magnitude in B. Calculated from $\Delta$ and $B_{T}^{b,i}$.} & $V_{sys}$ \footnote{$V_{sys}$: systemic velocity. Taken from Kennicutt et al. (2003).} \\ name & (hh mm ss) & (${}^{\circ}$ \, ${}^{\prime}$ \, ${}^{\prime\prime}$) & RC3 & (Mpc) & (arcmin) & & & (\kms) \\ \hline NGC 24 & 00 09 56.7 & $-$24 57 44 & SA(s)c & 8.2 & 5.8 & 12.19 & $-$17.38 & 554 \\ NGC 337 & 00 59 50.3 & $-$07 34 44 & SB(s)d & 24.7 & 2.9 & 12.06 & $-$19.90 & 1650\\ NGC 855 & 02 14 03.6 & $+$27 52 38 & E & 9.6 & 2.6 & 13.30 & $-$16.61 & 610 \\ NGC 1097 & 02 46 19.0 & $-$30 16 30 & SB(r'1)b & 16.9 & 9.3 & 10.23 & $-$20.91 & 1275\\ NGC 1291 & 03 17 18.6 & $-$41 06 29 & SB(l)0/a & 9.7 & 9.8 & 9.39 & $-$21.75 & 839 \\ NGC 1482 & 03 54 39.3 & $-$20 30 09 & SA0 & 22.0 & 2.5 & 13.10 & $-$18.61 & 1655\\ NGC 1512 & 04 03 54.3 & $-$43 20 56 & SB(r)ab & 10.4 & 8.9 & 11.13 & $-$18.96 & 896 \\ NGC 1566 & 04 20 00.4 & $-$54 56 16 & SAB(rs) & 18.0 & 8.3 & 10.33 & $-$20.95 & 1496\\ NGC 1705 & 04 54 13.5 & $-$53 21 40 & SA0 & 5.8 & 1.9 & 12.77 & $-$16.05 & 628 \\ Ho II & 08 19 05.0 & $+$70 43 12 & Im & 3.5 & 7.9 & 11.10 & $-$16.62 & 157 \\ DDO 053 & 08 34 07.2 & $+$66 10 54 & Im & 3.5 & 1.5 & 14.70 & $-$13.02 & 19 \\ NGC 2841 & 09 22 02.6 & $+$50 58 35 & SA(r)b & 9.8 & 8.1 & 10.09 & $-$19.87 & 638 \\ Ho I & 09 40 32.3 & $+$71 10 56 & IAB(s)m & 3.5 & 3.6 & 13.00 & $-$14.72 & 143 \\ NGC 3034 & 09 55 52.2 & $+$69 40 47 & I0 & 3.5 &11.2 & 
9.30 & $-$18.42 & 203 \\ Ho IX & 09 57 32.0 & $+$69 02 45 & Im & 3.5 & 2.5 & 14.30 & $-$13.42 & 46 \\ NGC 3190 & 10 18 05.6 & $+$21 49 55 & SA(s)a & 17.4 & 4.4 & 12.12 & $-$19.08 & 1271\\ IC 2574 & 10 28 21.2 & $+$68 24 43 & SAB(s)m & 3.5 &13.2 & 10.80 & $-$16.92 & 57 \\ NGC 3265 & 10 31 06.8 & $+$28 47 47 & E & 20.0 & 1.3 & 13.00 & $-$18.50 & 1421\\ Mrk 33 & 10 32 31.9 & $+$54 24 03 & Im & 21.7 & 1.0 & 13.20 & $-$18.50 & 1461\\ NGC 3351 & 10 43 57.7 & $+$11 42 13 & SB(r)b & 9.3 & 7.4 & 10.53 & $-$19.31 & 778 \\ NGC 3627 & 11 20 15.0 & $+$12 59 30 & SAB(s)b & 8.9 & 9.1 & 9.65 & $-$20.10 & 727 \\ NGC 3773 & 11 38 13.0 & $+$12 06 43 & SA0 & 18.3 & 1.2 & 12.90 & $-$18.40 & 987 \\ NGC 4254 & 12 18 49.6 & $+$14 24 59 & SA(s)c & 20.0 & 5.4 & 10.44 & $-$21.07 & 2407\\ NGC 4450 & 12 28 29.6 & $+$17 05 06 & SA(s)ab & 20.0 & 5.2 & 10.90 & $-$20.61 & 1954\\ NGC 4559 & 12 35 57.7 & $+$27 57 35 & SAB(rs)cd & 11.6 &10.7 & 10.46 & $-$19.86 & 816 \\ NGC 4594 & 12 39 59.4 & $-$11 37 23 & SA(s)a & 13.7 & 8.7 & 8.98 & $-$21.70 & 1091\\ NGC 4631 & 12 42 08.0 & $+$32 32 26 & SB(s)d & 9.0 &15.5 & 9.75 & $-$20.02 & 606 \\ NGC 4736 & 12 50 53.0 & $+$41 07 14 & SA(r)ab & 5.3 &11.2 & 8.99 & $-$19.63 & 308 \\ DDO 154 & 12 54 05.2 & $+$27 08 59 & IB(s)m & 5.4 & 3.0 & 13.94 & $-$14.72 & 376 \\ NGC 4826 & 12 56 43.7 & $+$21 40 52 & SA(rs)ab & 5.6 &10.0 & 9.36 & $-$19.38 & 408 \\ DDO 165 & 13 06 24.8 & $+$67 42 25 & Im & 3.5 & 3.5 & 12.80 & $-$14.92 & 37 \\ NGC 5033 & 13 13 27.5 & $+$36 35 38 & SA(s)c & 13.3 &10.7 & 10.75 & $-$19.87 & 875 \\ NGC 5408 & 14 03 20.9 & $-$41 22 40 & IB(s)m & 4.5 & 1.6 & 12.20 & $-$16.07 & 509 \\ NGC 5474 & 14 05 01.6 & $+$53 39 44 & SA(s)cd & 6.9 & 4.8 & 11.28 & $-$17.91 & 273 \\ NGC 6822 & 19 44 56.6 & $-$14 47 21 & IB(s)m & 0.6 &15.5 & 9.31 & $-$14.58 &$-$57\\ NGC 7552 & 23 16 11.0 & $-$42 34 59 & SB(s)ab & 22.3 & 3.4 & 11.25 & $-$20.49 & 1585\\ NGC 7793 & 23 57 49.8 & $-$32 35 28 & SA(s)d & 3.2 & 9.3 & 9.63 & $-$17.90 & 230 \\ \hline \end{tabular} \end{minipage} 
\end{table*} \subsection{Observing runs} \label{hardware} The observations have been obtained with the same instrumental set--up consisting of a scanning Fabry--Perot (FP) interferometer, an imaging device designed for faint fluxes, and a narrow--band ($\sim$15\AA{}) interference filter. For imaging, a photon--counting camera (\FM) and an Andor commercial ({\scriptsize L3CCD}) camera were used. Each instrument was attached to a focal reducer at the Cassegrain or Nasmyth focus of the telescope. The focal reducers used were Panoramix at the Observatoire du mont M\'egantic (OmM) 1.6m telescope, Cigale at the European Southern Observatory (ESO) La Silla 3.6m telescope, MOS/FP at the Canada--France--Hawaii 3.6m telescope (CFHT), and GHaFaS at the William Herschel 4.2m telescope (WHT). Table \ref{telescopes} describes the various characteristics of the instruments. The spectral profiles for every pixel in the field of view were obtained by scanning the free spectral range (FSR) of the Fabry--Perot. The FSR is the wavelength interval between two adjacent transmission peaks: \begin{equation} \label{FSR_eq} \mathrm{FSR=\lambda_0/p} \end{equation} \noindent where $\lambda_0$ is the rest wavelength and p the interference order. The number of channels needed to scan the FSR must be at least 2.2 times the FP \textit{finesse F} for a good sampling (Nyquist criterion). The \textit{finesse} is a dimensionless parameter representing the spectral resolving power R of the scanned line and is related to the full--width half--maximum (FWHM) of the transmitted line: \begin{equation} \label{finesse_eq} \textit{F}= \mathrm{ \frac{R}{p} = \frac{FSR}{FWHM} } \end{equation} The FP etalon used for the observations has a high interference order (typically p=765 at H$\alpha$) and is capable of achieving high values of \textit{finesse} and spectral resolution.
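Equations \ref{FSR_eq} and \ref{finesse_eq} combine into a few one-liners. The sketch below (our own helper, with the FSR also expressed as a velocity via $c/\mathrm{p}$) reproduces the p = 765 entries of Table \ref{journal}:

```python
import math

C_KMS = 299792.458       # speed of light (km/s)
H_ALPHA = 6562.78        # H-alpha rest wavelength (Angstrom)

def etalon_specs(p, finesse):
    """FSR (Angstrom and km/s), FWHM, resolution, minimal channel count."""
    fsr_ang = H_ALPHA / p                 # eq. (1)
    fsr_kms = C_KMS / p                   # same interval as a velocity
    fwhm = fsr_ang / finesse              # eq. (2)
    resolution = p * finesse              # R = p * F
    n_ch_min = math.ceil(2.2 * finesse)   # Nyquist-style sampling
    return fsr_ang, fsr_kms, fwhm, resolution, n_ch_min

# p = 765, F = 17.7 (the Ho II row): FSR ~ 392 km/s, R ~ 13500,
# at least 39 channels, comfortably below the 48 channels scanned
specs = etalon_specs(765, 17.7)
```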
A typical observation would be to scan the FP etalon in 48 channels with an exposure time of 15s per channel, then repeat the process for 15 cycles. However, when the sky transparency is not excellent during an exposure, the etalon is scanned more rapidly with an exposure time of 10s per channel so that the atmospheric conditions can be averaged out more efficiently. The resulting data cube is a set of interferograms stacked together, where each one represents an image of the object modulated by the interference pattern for a given FP spacing. The filter set used has 24 narrow--band \ha\ filters covering the galaxies' systemic velocities ranging from --300 to 10\,000 \kms. Sometimes, the filter was tilted by a few degrees to adjust its central wavelength to the Doppler shifted galaxy emission. Narrow--band filters, used to select the proper order that will go through the etalon, allow the \ha\ emission to pass while at the same time blocking most of the night sky emission. The photon--counting cameras \FM\ I \&\ II \citep{2003SPIE.4841.1472H} consist of a GaAs Hamamatsu photomultiplier tube having a quantum efficiency of $\sim$23\% coupled to a Dalsa commercial {\scriptsize CCD}. The absence of read--out noise for this camera enables one to scan the FP interferometer very rapidly, whereas {\scriptsize CCDs} need long exposures to overcome their read--out noise. Consequently, \FM\ can achieve high signal--to--noise ratios (S/N) and thus is ideal for faint fluxes like the emission found in galaxies \citep{2002PASP..114.1043G}. Additionally, a camera using a low light level charge--coupled device ({\scriptsize L3CCD}) made by Andor Technology was also used as the imaging device. {\scriptsize L3CCDs} have high quantum efficiency ($\sim$80 per cent) and sub--electron readout noise ($\sigma<$0.1 e$^-$).
This kind of sensor differs from traditional {\scriptsize CCDs} in the sense that the signal is amplified before it reaches the output circuitry, which is the major source of noise. Gain is created by passing electrons through a multiplication register where an electron will create a second one by avalanche multiplication. More details about the {\scriptsize L3CCD} can be found in \cite{2004SPIE.5499..219D} and in \cite{2006SPIE.6276E..42D}. The observations of the sample were spread over nine different observing runs over a three year period. Five runs took place at the OmM 1.6m telescope, where \FM\ is a permanent instrument. A second generation instrument, called \FM\ II, has been built recently and was tested successfully on the faint dwarf galaxy DDO 154 during one of these runs. Also, a new instrument for the La Silla NTT is under development and, regarding this matter, the {\scriptsize L3CCD} camera was used to test its capabilities on very faint fluxes like galaxies. Therefore, four galaxies were observed with this camera during the same run at the OmM Observatory. Two runs took place at the ESO La Silla 3.6m telescope and one at the CFHT 3.6m telescope, where \FM\ I is a visitor instrument at both. A final observing run took place at the WHT 4.2m telescope with the new instrument GHaFaS \citep{2007arXiv0705.4093C}. The Fabry--Perot observation parameters for each galaxy can be found in Table \ref{journal}. \begin{table} \centering \caption{Telescope and instrument characteristics \label{telescopes}} \begin{tabular}{@{}cccc@{}} \hline \hline Telescope & Instrument& Pixel Size & FOV \\ \, & & (arcsec) &(arcmin) \\ \hline OmM & \FM & 1.61 & 19.43 \\ OmM & \FM\ II & 1.54 & 18.55 \\ OmM & L3CCD & 1.07 & 12.89 \\ ESO & \FM & 0.42 & 5.02 \\ CFHT & \FM & 0.48 & 5.84 \\ WHT & GHaFaS & 0.40 & 4.82
\\ \hline \end{tabular} \end{table} \begin{table*} \centering \begin{minipage}{140mm} \caption{\mbox{Journal of the Fabry--Perot observations.}\label{journal} } \begin{tabular}{ccccccccccccc} \hline \hline Galaxy & Date &$\lambda_c$ & FWHM &$\mathrm{T_{max}}$ & $\mathrm{t_{exp}}$ &$\mathrm{t_{ch}}$ &p&FSR&F&R&$\mathrm{n_{ch}}$&$\mathrm{step_{\lambda}}$\\ \,& &\footnote{$\lambda_c$: Non--tilted filter central wavelength at 20${}^{\circ}$ C (in \AA{})}&\footnote{FWHM: Non--tilted filter Full--Width Half--Maximum at 20${}^{\circ}$ C (in \AA{})}&\footnote{$\mathrm{T_{max}}$: Non--tilted filter transmission at $\lambda_c$ and at 20${}^{\circ}$ C (in \%)} &\footnote{$\mathrm{t_{exp}}$: Total exposure time (in min)}& \footnote{$\mathrm{t_{ch}}$: Total exposure time per channel (in min)}&\footnote{p: Interference order at \ha}&\footnote{FSR: Free spectral range at \ha\ (in \kms)}&\footnote{F: Finesse}&\footnote{R: Resolution according to the finesse}&\footnote{$\mathrm{n_{ch}}$: Number of FP channels} &\footnote{$\mathrm{step_{\lambda}}$: wavelength difference between channels (in \AA)} \\ \hline NGC 24 \footnote{ESO: European Southern Observatory, La Silla, Chile, 3.6m telescope.\label{ESO}} & 2005/11/04 & 6581& 19.8 &60& 150 & 2.50 & 765 & 392 & 19.7 & 15071 & 60 & 0.14 \\ NGC 337\footref{ESO} & 2005/11/02 & 6598& 18.2&73& 150 & 2.50 & 765 & 392 & 20.5 & 15657 & 60 & 0.14 \\ NGC 855\footnote{OmM: Observatoire du mont M\'egantic, Qu\'ebec, Canada, 1.6m telescope.\label{OmM}}& 2003/11/27 &6584& 15.5 &74& 190 & 4.75 & 609 & 492 & 13.2 & 8010 & 40 & 0.27 \\ NGC 1097\footref{ESO} & 2005/11/07 & 6598& 18.2&73& 470 & 7.83 & 765 & 392 & 20.0 & 15321 & 60 & 0.14 \\ NGC 1291\footref{ESO} & 2005/11/06 & 6584& 15.5&74& 385 & 6.42 & 765 & 392 & 20.5 & 15669 & 60 & 0.14 \\ NGC 1482\footref{ESO} & 2005/11/08 & 6608& 16.2&69& 115 & 1.92 & 765 & 392 & 20.3 & 15561 & 60 & 0.14 \\ NGC 1512\footref{ESO} & 2005/11/03 & 6584& 15.5&74& 740 & 2.67 & 765 & 392 & 17.5 & 13389 & 60 & 0.14 \\ NGC 
1566\footref{ESO} & 2005/11/02 & 6598& 18.2&73& 150 & 2.50 & 765 & 392 & 20.7 & 15864 & 60 & 0.14 \\ NGC 1705\footref{ESO} & 2005/11/03 & 6581& 19.8&60& 130 & 2.17 & 765 & 392 & 20.0 & 15282 & 60 & 0.14 \\ Ho II \footref{OmM} & 2005/02/05 & 6563& 30.4&80& 204 & 4.25 & 765 & 392 & 17.7 & 13527 & 48 & 0.18 \\ DDO 053\footnote{CFHT: Canada--France--Hawaii Telescope, Hawaii, USA, 3.6m telescope\label{CFHT}.} & 2006/04/07 &6563& 30.4&80 &96 & 2.00 & 765 & 392 & 18.1 & 13885 & 48 & 0.18 \\ NGC 2841 \footref{OmM} &2005/02/03 & 6584& 15.5&74& 240 & 5.00 & 765 & 392 & 12.7 & 9731 & 48 & 0.18 \\ Ho I \footref{CFHT} & 2006/04/05 & 6563& 30.4&80& 176 & 3.67 & 765 & 392 & 14.2 & 10869 & 48 & 0.18 \\ NGC 3034\footref{OmM} & 2007/03/01 & 6581& 19.8&60& 384 & 8.00 & 765 & 392 & 17.2 & 13181 & 48 & 0.18 \\ Ho IX \footref{OmM} & 2005/05/10 & 6563& 30.4&80& 228 & 4.75 & 765 & 392 & 15.9 & 12185 & 48 & 0.18 \\ NGC 3190\footref{OmM} & 2004/11/03 & 6598& 18.2&73& 144 & 3.00 & 765 & 392 & 15.6 & 11905 & 48 & 0.18 \\ IC 2574 \footref{OmM} & 2005/02/03 & 6563& 30.4&80& 180 & 3.75 & 765 & 392 & 16.6 & 12713 & 48 & 0.18 \\ NGC 3265\footref{OmM} & 2007/03/01 & 6598& 18.2&73& 100 & 2.08 & 765 & 392 & 18.8 & 14396 & 48 & 0.18 \\ Mrk 33 \footref{CFHT} & 2006/04/08 & 6598& 18.2&73& 48 & 1.00 & 765 & 392 & 16.7 & 12798 & 48 & 0.18 \\ NGC 3351\footref{OmM} & 2005/02/03 & 6584& 15.5&74& 156 & 3.25 & 765 & 392 & 18.0 & 13750 & 48 & 0.18 \\ NGC 3627\footref{OmM} & 2005/02/06 & 6584& 15.5&74& 144 & 3.00 & 765 & 392 & 16.6 & 12670 & 48 & 0.18 \\ NGC 3773\footref{CFHT} & 2006/04/08 & 6584& 15.5&74& 76 & 1.58 & 765 & 392 & 17.2 & 13127 & 48 & 0.18 \\ NGC 4254\footref{OmM} & 2005/02/14 & 6621& 18.0&68& 240 & 5.00 & 765 & 392 & 17.4 & 13825 & 48 & 0.18 \\ NGC 4450\footref{ESO} & 2002/04/07 & 6607& 12.0&69& 60 & 2.50 & 793 & 381 & 12.1 & 4604 & 24 & 0.35 \\ NGC 4559\footref{OmM} & 2005/02/06 & 6584& 15.5&74& 408 & 8.50 & 765 & 391 & 16.5 & 12631 & 48 & 0.18 \\ NGC 4594\footnote{WHT: William Herschel 
Telescope, La Palma, Spain, 4.2m telescope.\label{WHT}} & 2007/07/05 & 6585 & 15.5 & 75 & 64 & 5.00 & 765&392 & 16.5 & 12623 &48 & 0.18\\ NGC 4631\footref{OmM} & 2005/02/01 & 6584& 15.5&74& 180 & 3.75 & 765 & 392 & 18.0 & 13757 & 48 & 0.18 \\ NGC 4736\footref{OmM} & 2005/05/11 & 6563& 30.4&80& 216 & 4.50 & 765 & 392 & 16.7 & 12745 & 48 & 0.18 \\ DDO 154 \footref{OmM} & 2007/02/22 & 6581& 19.8&60& 320 & 8.00 & 765 & 392 & 17.1 & 13097 & 40 & 0.21 \\ NGC 4826\footref{CFHT} & 2006/04/07 & 6563& 30.4&80& 128 & 2.67 & 765 & 392 & 17.2 & 13121 & 48 & 0.18 \\ DDO 165\footref{CFHT} & 2006/04/06 & 6563& 30.4&80& 172 & 3.58 & 765 & 392 & 17.1 & 13091 & 48 & 0.18 \\ NGC 5033\footref{OmM} & 2005/05/10 & 6584& 15.5&74& 460 & 9.58 & 765 & 392 & 16.7 & 12782 & 48 & 0.18 \\ NGC 5408\footref{CFHT} & 2006/04/07 & 6581& 19.8&60& 108 & 2.25 & 765 & 392 & 17.1 & 13083 & 48 & 0.18 \\ NGC 5474\footref{OmM} & 2007/02/28 & 6581& 19.8&60& 108 & 2.25 & 765 & 392 & 17.2 & 13157 & 48 & 0.18 \\ NGC 6822\footref{ESO} & 2005/11/08 & 6563& 30.4&80& 60 & 1.00 & 765 & 392 & 19.8 & 15121 & 60 & 0.14 \\ NGC 7552\footref{ESO} & 2005/11/02 & 6598& 18.2&73& 120 & 2.00 & 765 & 392 & 19.8 & 15160 & 60 & 0.14 \\ NGC 7793\footref{ESO} & 2005/11/08 & 6563& 30.4&80& 100 & 1.67 & 765 & 392 & 19.7 & 15040 & 60 & 0.14 \\ \hline \end{tabular} \end{minipage} \end{table*} \section{Data reduction} \label{reduction} This section introduces the few steps towards obtaining radial velocities and monochromatic maps from raw interferograms. In particular, \begin{itemize} \item wavelength calibration; \item spectral smoothing and sky emission subtraction; \item adaptive spatial binning and map extraction; \item {\scriptsize WCS} astrometry. \end{itemize} For a more complete description of the data reduction steps, we refer to \cite{2005MNRAS.360.1201H}, \cite{2006MNRAS.367..469D}, \cite{2006MNRAS.368.1016D}, and \cite{2006MNRAS.366..812C}. The software used can be found at http://www.astro.umontreal.ca/fantomm/reduction. 
\subsection{Wavelength calibration} \label{calib} The raw data cube, obtained during an acquisition, must be wavelength corrected since the transmitted wavelength $\lambda$ is a function of the angle $\theta$ of the incoming light beam: \begin{equation} \label{FP_eq} p \lambda = 2 n e \cos\theta \end{equation} \noindent where p is the interference order at $\lambda_0$ (6562.78 \AA{}), n the index of the medium, and e the distance between the parallel plates of the etalon. The wavelength calibration is made by scanning the neon line at 6598.95 \AA{} just before and after a three hour acquisition, in the same conditions as the observation itself. This enables one to calculate the phase shift needed to assign a wavelength to a particular FP spacing, for every pixel of the field. This phase map transforms raw interferograms into a wavelength--sorted data cube. Since one can only know the transmitted wavelength value to within $\pm$ one FSR, an uncertainty remains on the zero--point of the velocity scale. Comparison with other kinematical work will remove this uncertainty. Note that the data cubes are not flux--calibrated. \subsection{Spectral smoothing and sky emission subtraction} \label{sky} A Hanning smoothing was performed on every spectrum of the wavelength--sorted data cubes in order to remove any artifacts caused by the discrete sampling. After that, the strong night sky emission lines were subtracted. The method used here is to reconstruct a sky cube using the sky dominated regions and interpolating it in the galaxy region. This sky cube was then subtracted from the data cube. This method has proven to be very successful at eliminating sky residuals compared with subtracting a median sky spectrum, where both spatial and spectral inhomogeneities in the interference filter can lead to high sky residuals.
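A toy illustration of why the phase correction of equation \ref{FP_eq} is needed: at fixed gap $e$, the transmitted wavelength falls off as $\cos\theta$, so off-axis pixels see a blueshifted passband. The sketch below is ours (the function name is hypothetical), not pipeline code:

```python
import math

LAMBDA0 = 6562.78   # on-axis transmitted wavelength (Angstrom)

def transmitted_wavelength(theta_rad):
    """p*lambda = 2*n*e*cos(theta): with p, n, e fixed by the on-axis
    wavelength, an off-axis ray is transmitted at lambda0 * cos(theta)."""
    return LAMBDA0 * math.cos(theta_rad)

# a ray only 1 degree off axis is already shifted by about -1 Angstrom,
# i.e. roughly -46 km/s at H-alpha, hence a per-pixel phase map is required
shift = transmitted_wavelength(math.radians(1.0)) - LAMBDA0
```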
\subsection{Adaptive spatial binning and map extraction} \label{binning} In order to increase the signal--to--noise ratio ({\scriptsize S/N}) in the diffuse and/or faint interstellar regions (e.g. inter--arm regions) of the observed galaxies, an adaptive spatial binning was applied to the data cubes. This technique is based on Voronoi diagrams, where pixels are accreted into bins until the desired {\scriptsize S/N} (typically chosen as $\sim$5) is reached. In high intensity emission regions like the galactic center and spiral arms, the {\scriptsize S/N} exceeds by far the targeted minimal value and hence the high spatial resolution is maintained. This is an improvement over the usual Gaussian smoothing, where the kernel convolution would dilute the signal in high {\scriptsize S/N} regions. Next, the final steps are the integration of the flux under the \ha\ line for every bin, yielding a monochromatic map, and the barycenter computation, yielding a velocity field, following the procedure described by \cite{2006MNRAS.368.1016D}. The radial velocities (RV) are given in the heliocentric rest frame. A continuum map and a velocity dispersion map are also computed by the algorithm. The determination of the continuum threshold is a critical step in the data reduction as it defines the position of the barycenter of the line. This is done by iterative procedures. \subsection{WCS astrometry} \label{astrometry} Finally, {\scriptsize WCS} coordinates are attached to the computed maps using the task koords in the {\scriptsize KARMA} package \citep{1996ASPC..101...80G}. No reference to the World Coordinate System is obtained during the acquisition; one is necessary, however, since the position angle (PA) of the galaxy major axis depends on the field orientation. Besides, {\scriptsize WCS} astrometry is needed to combine the \ha\ kinematics with the ancillary {\scriptsize SINGS} surveys.
The coordinates are added by comparing positions of stars between a reference file (a redband DSS image for instance) and the FP non--binned continuum image. \section{Kinematical parameters fitting} \label{gipsy} Rotation curves are computed using the task \textit{rotcur} available in the \textit{GIPSY} software \citep{2001ASPC..238..358V}. \textit{Rotcur} derives the kinematical parameters for a particular galaxy by fitting tilted rings to the observed velocity field. More precisely, a least--squares fitting is done to the function: \begin{equation} \label{vrot_eq} V_{obs}(x,y) = V_{sys} + V_{rot}(r) \cos \theta \sin i + V_{exp}(r) \sin \theta \sin i \end{equation} Here $V_{obs}(x,y)$ denotes the radial velocity at the pixel coordinates (x,y), $V_{sys}$ the systemic velocity, $V_{rot}(r)$ the rotational velocity for the corresponding radius r, $\theta$ the azimuthal angle from the major axis in the plane of the galaxy, $i$ the inclination angle of the galaxy, and $V_{exp}$ the expansion velocity. Since \textit{GIPSY} does not take into account the field rotation of the supplied velocity field with respect to {\scriptsize WCS} coordinates, the velocity fields were all rotated in order to compute accurate position angles (PAs) of the kinematical major axis. Fitting the kinematical parameters was done in a three--step process. First, the systemic velocity $V_{sys}$ and the kinematical center ($x_{pos},y_{pos}$) are fitted while keeping the inclination $i$ and position angle PA fixed. The starting parameters used are the photometric $i$ and PA given by the {\it HyperLeda} catalog and, for $V_{sys}$, the value used for selecting the interference filter. The starting value for the galactic center is the photometric center, corresponding to the maximum value near the galactic center in the continuum map for spiral galaxies or in the \textit{Spitzer} 3.6 $\umu$m image for distorted and irregular galaxies.
Then, a second fitting is done by letting $i$, PA, and $V_{rot}$ vary with radius while keeping the newly found values for $V_{sys}$, $x_{pos}$, and $y_{pos}$ fixed. Finally, \textit{rotcur} is run again, with only $V_{rot}$ as the varying parameter. The computed rotation curve is thus derived with fixed kinematical parameters so that the sample could be homogeneous, even though some velocity fields (e.g. NGC 3627) were better modeled with parameters varying with radius. The ring width used in the fitting procedure was set to be greater than 3 times the pixel width for a good sampling. This yields 5\arcsec\ for the OmM observations and 2\arcsec\ for the ESO, CFHT and WHT observations. Also, additional \textit{rotcur} runs were done for the approaching and receding sides separately in order to model any asymmetries arising between the two sides. Finally, the expansion velocity $V_{exp}$ was fixed to zero for all galaxies, thus assuming pure circular rotation. Letting $V_{exp}$ vary with radius did not significantly change the rotation velocities in the computed regions. Two \textit{rotcur} parameters remain to be explained, namely the free angle and the weighting function. To diminish the importance of deprojection errors, radial velocities in an opening angle (called the free angle) of typically 35${}^{\circ}$\ about the minor axis were rejected from the least--squares fitting and a $\cos(\theta)$ (where $\theta$ = angle from the major axis) weighting function was applied to give more importance to points near the major axis. Naturally, for more face--on galaxies, the projected velocities along the line--of--sight possess little information about rotational velocities, resulting in large errors for the kinematical parameters. Afterwards, the \textit{GIPSY} output was analyzed by IDL routines (mainly computing averages and standard deviations).
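The single--ring fit of Eq.\ (\ref{vrot_eq}) with the free--angle rejection and $\cos\theta$ weighting described above can be illustrated with a minimal numerical sketch. This is not the actual \textit{rotcur} implementation: the function name, the synthetic ring and all parameter values below are illustrative assumptions, and $V_{exp}$ is fixed to zero as in the text.

```python
import numpy as np

def fit_vrot(theta_deg, v_obs, v_sys, inc_deg, free_angle=35.0):
    """Weighted least-squares estimate of V_rot for a single ring,
    assuming pure circular rotation V_obs = V_sys + V_rot cos(theta) sin(i).
    Points within `free_angle` degrees of the minor axis are rejected and
    the remaining ones are weighted by |cos(theta)|."""
    theta = np.radians(theta_deg)
    # keep only points farther than free_angle from the minor axis
    keep = np.abs(np.cos(theta)) >= np.cos(np.radians(90.0 - free_angle))
    c = np.cos(theta[keep])
    w = np.abs(c)                                  # cos(theta) weighting
    y = (v_obs[keep] - v_sys) / np.sin(np.radians(inc_deg))
    # minimizing sum_j w_j (y_j - V_rot c_j)^2 gives the closed form below
    return np.sum(w * c * y) / np.sum(w * c * c)

# synthetic ring: V_rot = 150 km/s, V_sys = 800 km/s, i = 60 deg
theta = np.arange(0.0, 360.0, 10.0)
v_obs = 800.0 + 150.0 * np.cos(np.radians(theta)) * np.sin(np.radians(60.0))
v_fit = fit_vrot(theta, v_obs, 800.0, 60.0)
```

Because the model is linear in $V_{rot}$ once $V_{sys}$, $i$ and PA are held fixed, the weighted fit has a closed-form solution; the noiseless synthetic ring is recovered exactly.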
The determination of constant values for $i$ and PA throughout the galaxy is performed by eliminating radii where non--circular motions and warps can occur. For a non--barred galaxy, this results in discarding radii located in the galactic center and outer spiral arms where one usually finds a substantial scatter in $i$ and PA values. For a barred galaxy, this is complicated by the fact that important non--circular motions are present in the bar region; this region was therefore excluded from the fit. Also, for barred and non--barred galaxies, rings containing too few points or dominated by one side of the galaxy were discarded. \section{Results} \label{results} This section presents the \ha\ monochromatic and RV maps as well as the rotation curves for each galaxy in order of increasing right ascension. Appendix A briefly describes the observed kinematics for each galaxy of the sample. In Appendix B, four images per galaxy are first shown: DSS blue image when available (top--left), \textit{Spitzer} 3.6 $\umu$m image (top--right), \ha\ monochromatic image (middle--left), and the corresponding \ha\ velocity field (middle--right). The blue images show the intermediate ($\sim 10^9$ yrs) stellar population and the 3.6 $\umu$m images the old stellar population, tracer of the total mass of galaxies, while the \ha\ monochromatic maps show the gas ionized by massive (M $> 8$ \msol), young ($\sim 10^6$ yrs) OB stars \citep{1998ARA&A..36..189K, 2001AJ....121..753B}. These four maps are WCS--oriented and represent the same field of view. For each galaxy, the field of view is adjusted so that the \ha\ morphology and kinematics as well as the large scale stellar morphology are displayed with great detail. The maps and the rotation curves can be found at http://www.astro.umontreal.ca/fantomm/singsII/. Moreover, a position--velocity (PV) diagram is provided for each galaxy when the extraction of the kinematical parameters was possible.
They represent a slice in the data cube along the kinematical major axis. The black line, superposed on the diagrams, represents a cut in the model velocity field. A data cube slice along the kinematical major axis showing a rotation curve which superposes well on the velocities of the maximum intensity of the ionized gas emission implies that there are no significant non--circular motions nor any kinematical twist. Lastly, rotation curves can be found in Appendix C. The errors are defined as the largest velocity difference between the rotation curve derived using both sides and the curves derived for the approaching and receding sides separately, or as the \textit{rotcur} intrinsic error if it is greater. The kinematical parameters inclination and position angle found by the technique described in section \ref{gipsy} are presented in Table \ref{kinparam}. Figure \ref{fig:inclpa} compares the kinematical parameters with the photometric values. As expected, the agreement is better for the PAs than for the inclinations. The two points that stand out are NGC 1512 (i$_{phot}$ = 65${}^{\circ}$\ \& i$_{kin}$ = 35${}^{\circ}$) and NGC 1566 (i$_{phot}$ = 56${}^{\circ}$\ \& i$_{kin}$ = 32${}^{\circ}$). In both cases, the explanation is quite clear: the photometric parameters are mainly representative of the bar, which contributes a large part of the light, resulting in more edge--on values. Simply looking at the B or 3.6-$\umu$m images in Figures B7 \& B8, one can see that the outer isophotes are much more face--on. Finally, of the 37 galaxies exhibiting \ha\ emission, it was not possible to extract rotation curves for 21 of them due to either poor spatial coverage, absence of large--scale rotation or extremely perturbed discs.
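The per--ring error definition above can be written compactly. The helper below is our own illustrative sketch, not part of \textit{GIPSY}; each argument is an array of per--ring velocities (or intrinsic errors) in km\,s$^{-1}$.

```python
import numpy as np

def rc_error(v_both, v_app, v_rec, err_rotcur):
    """Per-ring rotation-curve error: the largest difference between the
    velocity fitted using both sides and that fitted on the approaching
    or receding side alone, unless the rotcur intrinsic error is larger."""
    side = np.maximum(np.abs(v_both - v_app), np.abs(v_both - v_rec))
    return np.maximum(side, err_rotcur)

# one ring where the side-to-side asymmetry (5 km/s) dominates the
# intrinsic fitting error (2 km/s)
err = rc_error(np.array([100.0]), np.array([95.0]), np.array([103.0]),
               np.array([2.0]))
```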
\begin{table} \centering \begin{minipage}{0.5\textwidth} \centering \caption{Photometric and kinematical parameters \label{kinparam}} \begin{tabular}{@{}ccccc@{}} \hline \hline Galaxy & \multicolumn{2}{|c|}{Phot} & \multicolumn{2}{|c|}{Kin} \\ name & PA & i & PA & i \\ \hline NGC 24 & 225 & 78 & 226 $\pm$ 1 & 75 $\pm$ 3 \\ NGC 337 & 130 & 53 & 121 $\pm$ 5 & 52 $\pm$ 6 \\ NGC 1097 & 140 & 37 & 133 $\pm$ 1 & 55 $\pm$ 1 \\ NGC 1512 & 253 & 65 & 260 $\pm$ 1 & 35 $\pm$ 14 \\ NGC 1566 & 212 & 56 & 214 $\pm$ 2 & 32 $\pm$ 14 \\ NGC 2841 & 147 & 68 & 150 $\pm$ 1 & 70 $\pm$ 1 \\ NGC 3351 & 193 & 42 & 193 $\pm$ 1 & 41 $\pm$ 2 \\ NGC 3627 & 173 & 57 & 173 $\pm$ 7 & 65 $\pm$ 7 \\ NGC 4254 & 45 & 28 & 69 $\pm$ 3 & 31 $\pm$ 6 \\ NGC 4450 & 355 & 43 & 353 $\pm$ 5 & 49 $\pm$ 17 \\ NGC 4559 & 330 & 67 & 323 $\pm$ 3 & 68 $\pm$ 5 \\ NGC 4736 & 285 & 35 & 292 $\pm$ 2 & 36 $\pm$ 7 \\ DDO 154 & 219 & 48 & 236 $\pm$ 17 & 59 $\pm$ 32\\ NGC 4826 & 295 & 60 & 291 $\pm$ 1 & 53 $\pm$ 1 \\ NGC 5033 & 351 & 66 & 353 $\pm$ 2 & 71 $\pm$ 2 \\ NGC 7793 & 264 & 53 & 277 $\pm$ 3 & 47 $\pm$ 9 \\ \hline \end{tabular} \end{minipage} \end{table} \begin{figure} \centering \includegraphics{figures/inclpaPhotvsKin} \caption{Comparison between photometric and kinematical parameters. Top: Position Angle. Bottom: Inclination. The dotted line represents agreeing parameters. \label{fig:inclpa}} \end{figure} \section{Discussion} \label{discussion} In this paper, the rotation curves given in Appendix \ref{app:rc} were obtained from the kinematical parameters derived using tilted--ring models which assume pure circular motions. Even if we tried to avoid the zones obviously affected by non--circular motions, there is still considerable work needed to extract rotation curves that are truly representative of the mass distribution, and can be used for mass modeling purposes. This is especially true for barred systems, which account for about one third of the galaxies in Table \ref{basic_parameters}. 
An accurate determination of the gravitational potential is closely tied to the modeling of the dark halo. For instance, there has been significant debate about the shape of dark matter density profiles, especially regarding their inner slope. Based on cosmological N--body simulations (Navarro et al. 1996, 1997, hereafter collectively NFW), the dark matter halo profile appears to be independent of halo mass with an inner logarithmic slope equal to $-1$. Nevertheless, recent higher resolution simulations suggest that the density profiles do not converge to a single power law at small radii. At the smallest resolved scales (0.5\% of the virial radius), profiles usually have slopes between $-1$ and $-1.5$ (Moore et al. 1999; Ghigna et al. 2000; Jing \& Suto 2000; Fukushige \& Makino 2001; Klypin et al. 2001; Power et al. 2003; Navarro et al. 2004; Diemand et al. 2004). In addition, all simulations find density profiles that are inconsistent with the isothermal profile found in observations. In the outer regions, the determination of the dark halo slope based on mapping the outer density profile of galaxies is difficult, owing mainly to a lack of mass tracers at large radii. In the inner regions, the unknown value of the stellar mass--to--light ratio further complicates the determination of the mass distribution. This has led to dedicated analyses of dwarf and low surface brightness (LSB) galaxies that are believed to be dark matter dominated at all radii (de Blok \& McGaugh 1997; Verheijen 1997; Swaters 1999). It has been suggested that rotation curves of dwarf and LSB galaxies rise less steeply than predicted by numerical simulations based on the cold dark matter (CDM) paradigm (Moore 1994; Flores \& Primack 1994; de Blok \& McGaugh 1997; McGaugh \& de Blok 1998; de Blok et al. 2001a, 2001b). However, a number of observational uncertainties cast doubt over these early claims. These include beam smearing for \hi\ rotation curves (Swaters et al.
2000; van den Bosch et al. 2000), high inclination angles and \ha\ long--slit alignment errors (Swaters et al. 2003a), and non--circular motions close to the center of galaxies (Swaters et al. 2003b). Many of these uncertainties can be quantified or eliminated by measuring high--resolution two--dimensional velocity fields (Barnes et al. 2004). At optical wavelengths, these can be obtained via Fabry--Perot interferometry (e.g., Blais--Ouellette et al. 1999) or integral field spectroscopy (e.g., Andersen \& Bershady 2003; Courteau et al. 2003). There are ways to extract the true kinematics that reflect the gravitational potential. One is to derive the potential directly from the 2D velocity field or the 3D data cube (work in preparation). The other way is to derive the bar parameters using the {\it Spitzer} images, compare with numerical simulations, and apply the necessary corrections for, e.g., the streaming motions induced by the bars (see Hernandez et al. 2005b; Perez, Fux \& Freeman 2004). This detailed work will be done in another paper (Hernandez et al. 2007, in preparation), but we can illustrate what needs to be done by using three of the barred systems in our sample. The galaxy NGC 3351, better known as Messier 95, is an SBb galaxy, member of the Leo group. Situated at a distance of 9.3 Mpc, this starburst galaxy has a large--scale stellar bar with a deprojected semi--major axis of 47\arcsec\ \citep{1995AJ....109.2428M}. Outside this bar, the \ha\ velocity field is fairly regular and the kinematical PA and inclination found by \textit{GIPSY} agree well with the photometric values. Thus, the gas outside the bar is thought to be on circular orbits, so that the rotation curve accurately represents the kinematics of this galaxy. The small deviations from circular motions are due to streaming along the inner ring.
The \ha\ rotation curve shows a more or less constant velocity beginning at the end of the stellar bar with peaks corresponding to the 70\arcsec\ inner ring. Figure \ref{fig:modelM95} displays the tilted--ring model results and one can see, outside the bar, the fairly regular values for the position angle and inclination as a function of the radius. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/incl-pa-ngc3351-bar} \caption{Tilted--ring model for NGC 3351. The dotted line shows the fitted PA (193${}^{\circ}$) and inclination (41${}^{\circ}$). The stellar bar ends at a radius of 47\arcsec. Thus, the filled circles give the values inside the bar and the empty circles the values outside the bar region.\label{fig:modelM95}} \end{figure} However, inside the large--scale bar, the \ha\ velocity field shows the perturbed kinematics expected for this barred system. The twisted isovelocity contours indicate that a pure circular rotation model will present a poor fit to the data and hence kinematical information extracted in this region will be incorrect. This is illustrated in Figure \ref{fig:modelM95} where the filled circles represent the values computed in the bar region. It is thus imperative to take into account the bar location when deriving kinematical parameters. Furthermore, bar modeling is crucial in order to properly compute the rotation curve in this region. Numerical computations were performed with \texttt{GADGET}, a tree--based N--body$+$\textsc{SPH} code developed by Springel, Yoshida \& White (2001). For the needs of the simulations, an initial stellar population is set up to reproduce a disc galaxy with an already formed bulge. The initial positions and velocities of the stellar particles are drawn from a superposition of two axisymmetric Miyamoto--Nagai discs (Miyamoto \& Nagai 1975) of masses $10^{10}$ and $10^{11}$\msol, scale lengths of $1$ and $3.5$~kpc, respectively, and a common scale height of $0.5$~kpc.
Velocity dispersions are computed by numerically solving the Jeans equations. The total number of stellar particles is 1.1~$\times 10^6$. The run includes a dark halo made of 2.2~$\times 10^6$ live particles distributed in a Plummer sphere of scalelength $50$~kpc and of mass respectively 2.42 and 6.46~$\times 10^{11}$\msol. The total mass of the gas is 0.11~$\times 10^{11}$\msol. Finally, the total mass of the simulated galaxy is 7.67~$\times 10^{11}$\msol. Numerical simulations of the kinematical effect of the bar are shown in Figure \ref{fig:OH}. The input model for the galaxy is pure rotation and the corresponding rotation velocities along the major and minor axes are shown as faint blue and green lines. Afterwards, the code simulates galaxy evolution where a bar is developing. The difference in orientation between the major axis and the bar corresponds to different evolving times, which are given in units of millions of years. For NGC 3351, the kinematical PA ($PA_{kin}=193$${}^{\circ}$) is almost perpendicular to the bar, with $PA_{bar} = 113$${}^{\circ}$\ using the value found by \cite{2007ApJ...657..790M}. The results of the simulations are shown in the form of RV maps with stellar densities superposed. For an 80${}^{\circ}$\ difference between the major axis and the bar PAs, the overall effect is an artificial increase in the velocity gradient. This can be explained by gas moving along $x_1$ orbits (parallel to the bar) where the velocity is greater at the perigalacticon (near the center) than at the apogalacticon (near the end of the bar). The true kinematics of this galaxy will therefore be obtained by correcting this artificial increase in the velocity gradient. Until then, rotation velocities inside the bar region are not given for the final rotation curve. See the galaxy description in Appendix A for additional evidence of non--circular motions.
\begin{figure*} \begin{center} \includegraphics[width=5.7cm]{figures/RV_30_479} \includegraphics[width=5.2cm]{figures/RC_30_479} \includegraphics[width=7.4cm]{figures/vrot-ngc3351-incl41-pa193-vsys785-final-oh.eps}\\ \includegraphics[width=5.7cm]{figures/RV_30_454} \includegraphics[width=5.2cm]{figures/RC_30_454} \includegraphics[width=7.4cm]{figures/vrot-ngc337-incl52-pa121-vsys1636-final-oh.eps}\\ \includegraphics[width=5.7cm]{figures/RV_30_443} \includegraphics[width=5.2cm]{figures/RC_30_443} \includegraphics[width=7.4cm]{figures/vrot-ngc4559-incl68-pa323-vsys824-final-oh.eps} \caption{Three different bar orientations with respect to the major axis. Top: NGC 3351, perpendicular position. Middle: NGC 337, intermediate position. Bottom: NGC 4559, parallel position. (left) Density contours of the bar superposed on the velocity field of the model. (middle) The thin blue and green lines represent the rotation velocities along the major and minor axis, respectively, for an input model with pure rotation. The dots (blue for the approaching side and red for the receding side) represent the observed velocities. (right) Rotation curves derived in this study. \label{fig:OH}} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{figures/incl-pa-ngc337-bar} \caption{Tilted--ring model for NGC 337. The dotted line shows the fitted PA (121${}^{\circ}$) and inclination (52${}^{\circ}$). The stellar bar ends at a radius of 35\arcsec. Thus, the filled circles give the values inside the bar and the empty circles the values outside the bar region.\label{fig:modeln337}} \end{center} \end{figure} Another galaxy displaying a perturbed velocity field is NGC 337. This asymmetric SBd galaxy features an off--center stellar bar with a PA of 162${}^{\circ}$\ and a deprojected bar semi--major axis of 35\arcsec\ \citep{2007ApJ...657..790M}.
Since the photometric major axis of this galaxy is 130${}^{\circ}$, the bar has an intermediate orientation with respect to the major axis. Numerical simulations have been done for this bar position and the results are illustrated in the middle panel of Figure \ref{fig:OH}. The simulated velocity field displays the characteristic Z--shape of the isocontours, similar to what is seen in the \ha\ velocity map. One characteristic is the velocity gradient along the minor axis (green line in Figure \ref{fig:OH}). Another feature is the relative agreement between the input rotation velocities (faint blue line) and the computed ones (triangle symbols show averaged red and blue points calculated using the method presented in section \ref{gipsy}). The rotation curve in the bar region is thus provided for this galaxy since these perturbations are confined to the minor axis, which is excluded from the fit. The tilted--ring model for this galaxy, presented in Figure \ref{fig:modeln337}, illustrates the small differences for the fitted values between the bar region (filled circles) and the spiral arms (open circles). After having completed the kinematical fitting procedure, one can look at the {\it Spitzer} IRAC 3.6$\umu$m image in order to compare the bar location with the velocity residuals. Since the eastern side is approaching and the western side is receding, the spiral arms seem to be trailing in an anti--clockwise direction. The velocity residuals from the pure circular rotation model indicate positive residuals east of the bar as well as negative residuals on the southern end of the bar, suggesting an inflow of gas towards the bar. Two other barred galaxies having a bar orientation intermediate to the major axis are NGC 1566 and NGC 3627, but their Seyfert activity and warped disc, respectively, complicate the kinematical analysis of the effects of the bar. Their rotation curves are provided at the end of this paper.
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/incl-pa-ngc4559-bar} \caption{Tilted--ring model for NGC 4559. The dotted line shows the fitted PA (323${}^{\circ}$) and inclination (68${}^{\circ}$) values. The stellar bar ends at a radius of $\sim$30\arcsec. \label{fig:modeln4559}} \end{figure} Finally, the last barred system discussed in this paper is NGC 4559, an SABcd galaxy having a small bar roughly aligned with the major axis ($PA_{bar} \sim$ 340${}^{\circ}$\ vs $PA_{kin}=323$${}^{\circ}$). Numerical simulations have been done for a bar parallel to the major axis and are presented in the bottom panel of Figure \ref{fig:OH}. The small velocity gradient seen in the PV diagram and in the rotation curve is in agreement with the predicted behavior seen in the simulation. The rotation velocities are artificially lowered by the radial motions in the bar, resulting in an underestimation of the luminous mass in the central regions. It is thus vital to model the bar in order to determine the accurate mass distribution. The radial mass distribution of the gas was analyzed between the 3 time steps to verify that modifications of the inner part of the rotation curve were not an effect of migration of the gas during the run. Further bar modeling enables the bar length to be calculated. Using the Fourier moment analysis on azimuthal profiles derived using J and K 2MASS images (see \citealt{1998AJ....116.2136A} and \citealt{2000A&A...361..841A}), the deprojected value for the bar semi--major axis is $30 \pm 5$\arcsec. Indeed, the tilted--ring model for NGC 4559 shows that it is difficult to assess which radii are affected by the bar (see Figure \ref{fig:modeln4559}). However, the \ha\ velocity field displays streaming motions outside the bar as well, therefore the velocities are perturbed out to a radius of at least $\sim$40\arcsec.
Deriving the form of the gravitational potential directly from the 2D kinematics and/or modeling the bar numerically is thus essential in order to study the mass distribution properly and to resolve the dark halo density profile inconsistencies. \section{Conclusions} \label{conclusion} We have presented in this paper the second and last part of the \ha\ kinematics follow--up survey of the Spitzer Infrared Nearby Galaxies Survey (SINGS) sample. The goal of this kinematical follow--up is to better understand the role of baryons and of the dark/luminous matter relation in star forming regions of galaxies. The shape of the velocity field in the central galactic regions, drawn through its H$\alpha$ component, is indeed directly related to the baryonic luminous disk and the star formation processes. The SINGS sample will provide a unique opportunity to link the kinematics with numerous observations and studies at other wavelengths. The data have been obtained from high resolution Fabry--Perot observations using the \FM\ camera and an L3CCD detector. The SINGS sample of galaxies has been observed at the OmM 1.6m telescope, the ESO La Silla 3.6m telescope, the CFHT 3.6m telescope, and the WHT 4.2m telescope. The velocity fields were obtained using a data reduction pipeline written in IDL and the rotation curves were computed with the \textit{rotcur} task of the {\it GIPSY} software. When fitting the kinematical parameters, care was taken to avoid the zones obviously affected by non--circular motions. However, we have demonstrated that for barred systems, different bar characteristics considerably modify the central velocity gradient of the rotation curves computed under the pure circular rotation hypothesis. Therefore, numerical modeling of barred galaxies is crucial in order to extract rotation curves that are truly representative of the gravitational potential and hence of the mass distribution in those galaxies.
In the meantime, the dark matter distribution of the SINGS galaxies, using the rotation curves derived here, will be presented in a forthcoming paper. Not only will these observations provide the high spatial resolution data needed for constraining the dark matter density profiles of galaxies, but they will be helpful for studying these profiles as a function of morphological type. Furthermore, they will help to delineate the role of gas kinematics in regulating the star formation rate. For instance, \cite{2002Ap&SS.281..101P} have argued that the probability of collapse of molecular clouds leading to star formation is greatly enhanced in slowly rotating gas discs compared to rapidly rotating ones. Moreover, \cite{2003A&A...405...89C} and \cite{2007A&A...466..905F} have suggested that the star forming inner and nuclear rings in the nearby galaxies NGC 3627 and NGC 628 (respectively) are driven by a rotating asymmetry because their locations in the host disks are in agreement with Lindblad resonances caused by a bar pattern speed. Gas dynamical processes are therefore important in regulating the star formation history of galaxies and the \ha\ kinematics presented in this paper will help in understanding the star formation processes. \section*{Acknowledgments} We would like to thank Jacques Boulesteix, Jean--Luc Gach, Philippe Balard and Olivier Boissin for helping with the instrumentation and part of the observations, and the staff of the four Observatories, where the data were obtained, for their continuing support. The William Herschel Telescope is operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. We acknowledge support from the Natural Sciences and Engineering Research Council of Canada and the Fonds Qu\'eb\'ecois de la recherche sur la nature et les technologies.
The Digitized Sky Surveys (DSS images) were produced at the Space Telescope Science Institute under U.S. Government grant NAG W--2166. The images of these surveys are based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope. The plates were processed into the present compressed digital form with the permission of these institutions. The IR images were obtained by the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA.
\section{Non-interacting limit} \label{Non_int} The single particle Hamiltonian of a spin-orbit (SO) coupled bosonic system in an optical lattice (as given by Eq.\ 1 of the main text with $\lambda = \mathcal{V} =0$) can be written as a $2 \times 2$ matrix in the momentum representation as \begin{equation} H_{SO} = \left( \begin{array}{cc} -2\cos(k+q) & \Omega \\ \Omega & -2\cos(k-q) \end{array}\right). \label{HSO} \end{equation} The energy dispersion of $H_{\rm SO}$ is given by Eq.\ 2 of the main text. From this dispersion, which provides the expression of the lower branch of the spectrum $E_k^{-}$, we find that there exists a critical value $\Omega_c$ below which the ground state is doubly degenerate and the energy minima shift to finite momenta $k_0 = \pm \cos ^{-1}[\cos q \sqrt{1 + \Omega ^2/(4\sin ^2 q)}]$ (see Fig.\ \ref{disp}). \begin{figure}[ht] \centering \includegraphics[scale=0.8]{disp.pdf} \caption{Energy dispersion for (a) $\Omega < \Omega _c$ and (b) $\Omega > \Omega _c$.} \label{disp} \end{figure} The effective mass (or the band mass) of the bosons is thus given by $m^* = \partial ^2_k E_k^{-} |_{k=k_0}$. For $\Omega > \Omega _c$, the expression of $m^{\ast}$ can be written as \begin{equation} m^{\ast}_{\Omega >\Omega_c} \equiv m^*_{>} = \left(\frac{\partial ^2 E_k^{-}}{\partial k^2}\right)_{k=0} = (1-\Omega _c/\Omega)\cos q. \end{equation} We note that the effective masses $m^*_{>}$ ($m^*_{<}$) in the regime $\Omega > \Omega_c$ ($\Omega < \Omega_c$) both vanish at $\Omega=\Omega_c$. Furthermore, in the absence of disorder the superfluid fraction (SFF) is simply given by the boson effective mass defined above. Thus $m^{\ast}$ captures the behavior of the SFF obtained numerically as a function of $\Omega$ (see Fig.\ \ref{SFF}(a)); this situation is similar to that obtained in the continuum limit \cite{stringari}. In the presence of the Aubry--Andr\'e (AA) potential we numerically diagonalize the single particle Hamiltonian to obtain the ground state and the full excitation spectrum.
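Diagonalizing Eq.\ (\ref{HSO}) gives the lower branch $E_k^{-} = -2\cos k\cos q - \sqrt{4\sin^2 k\sin^2 q+\Omega^2}$ (in units of $t$), and setting the argument of the arccosine in $k_0$ to unity gives $\Omega_c = 2\sin q\tan q$. A short numerical sketch (the value of $q$ is an arbitrary illustrative choice) confirms the location of the band minima in both regimes:

```python
import numpy as np

def lower_band(k, q, Omega):
    # lower eigenvalue of the 2x2 matrix H_SO, i.e. E_k^- (t = 1)
    return -2*np.cos(k)*np.cos(q) - np.sqrt(4*np.sin(k)**2*np.sin(q)**2
                                            + Omega**2)

q = np.pi/4                          # illustrative SO phase
Omega_c = 2*np.sin(q)*np.tan(q)      # argument of the arccos in k_0 equals 1
k = np.linspace(-np.pi, np.pi, 200001)

# Omega < Omega_c: two degenerate minima at +/- k_0
Omega = 0.5*Omega_c
k0 = np.arccos(np.cos(q)*np.sqrt(1 + Omega**2/(4*np.sin(q)**2)))
k_min = abs(k[np.argmin(lower_band(k, q, Omega))])

# Omega > Omega_c: a single minimum at k = 0
k_min_large = abs(k[np.argmin(lower_band(k, q, 2*Omega_c))])
```

The brute-force minimum of the lower band lands on the analytic $k_0$ below $\Omega_c$ and on $k=0$ above it.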
The localization transition of the ground state is characterized by the vanishing of the SFF, which can be obtained using Eq.\ 4 of the main text. Alternatively, one can also adopt a perturbative approach to calculate the SFF. To this end, we note that in the presence of a small phase twist $\theta$ across the boundary, the original Hamiltonian becomes \begin{equation} H_{\theta} = -t\sum_{l,\sigma} \left(\hat{b}_{l,\sigma}^{\dagger}e^{iq\hat{\sigma}_z}\hat{b}_{l+1,\sigma}e^{-i \theta /N_s}+h.c.\right)+\Omega \sum_{l,\sigma}\hat{b}_{l,\sigma}^{\dagger}\hat{\sigma}_x\hat{b}_{l,\sigma} + \lambda \sum_{l,\sigma}\cos(2\pi \beta l)\hat{b}_{l,\sigma}^{\dagger}\hat{b}_{l,\sigma}. \label{ptwist} \end{equation} An expansion of $H_{\theta}$ (Eq.\ \ref{ptwist}) to ${\rm O}(\theta^2)$ yields \begin{equation} H_{\theta} = H_{0} + \frac{\theta}{N_s}\hat{J} - \frac{\theta ^2}{2N_s^2}\hat{T} \end{equation} where $H_{0}$ is the unperturbed Hamiltonian, $\hat{T} = -t\sum _{l,\sigma} (\hat{b}_{l+1,\sigma}^{\dagger} e^{iq\hat{\sigma}_z} \hat{b}_{l,\sigma} + h.c.)$ is the kinetic energy operator and $\hat{J} = it\sum _{l,\sigma} (\hat{b}_{l+1,\sigma}^{\dagger}e^{iq\hat{\sigma}_z}\hat{b}_{l,\sigma} - h.c.)$ is the current operator. Thus, to ${\rm O}(\theta^2)$, the superfluid fraction is given by \begin{equation} f_s = -\frac{1}{2t}\langle \psi_0 \vert \hat{T} \vert \psi_0 \rangle - \frac{1}{t}\sum _{\nu \ne 0} \frac{|\langle \psi_{\nu} \vert \hat{J} \vert \psi_0 \rangle |^2}{E^{\nu} - E^{0}} \end{equation} where $0$ and $\nu$ stand for the lowest and the $\nu$th eigenmode, respectively. In Fig.\ \ref{SFF}(b) we have plotted $f_s$ using the above prescription; we note that the SFF vanishes at $\lambda_c < 2$, at which point the IPR starts rising, indicating the localization transition. The localization transition can also be qualitatively understood from the vanishing of the energy gap $\Delta E$ at the critical disorder strength $\lambda _c$.
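The localization scenario described above can be illustrated by exact diagonalization of the single-particle Hamiltonian. The sketch below is a simplified version with open boundary conditions and our own parameter choices; the IPR is taken as $\sum_{l,\sigma}|\psi_{l,\sigma}|^4$ of the normalized ground state.

```python
import numpy as np

def so_aa_hamiltonian(Ns, q, Omega, lam, beta):
    """Single-particle matrix of Eq. (1) of the main text (t = 1) in the
    (site, spin) basis: SO-twisted hopping exp(+/- i q), Raman coupling
    Omega, and the AA potential lam*cos(2 pi beta l); open boundaries."""
    H = np.zeros((2*Ns, 2*Ns), dtype=complex)
    for l in range(Ns - 1):
        for s, ph in ((0, np.exp(1j*q)), (1, np.exp(-1j*q))):
            H[2*l + s, 2*(l+1) + s] = -ph
            H[2*(l+1) + s, 2*l + s] = -np.conj(ph)
    for l in range(Ns):
        H[2*l, 2*l + 1] = H[2*l + 1, 2*l] = Omega
        H[2*l, 2*l] += lam*np.cos(2*np.pi*beta*l)
        H[2*l + 1, 2*l + 1] += lam*np.cos(2*np.pi*beta*l)
    return H

def ground_state_ipr(H):
    _, v = np.linalg.eigh(H)   # eigh returns eigenvalues in ascending order
    return np.sum(np.abs(v[:, 0])**4)

beta = (np.sqrt(5) - 1)/2      # incommensurate AA wavenumber
ipr_deloc = ground_state_ipr(so_aa_hamiltonian(144, np.pi/4, 0.2, 0.5, beta))
ipr_loc = ground_state_ipr(so_aa_hamiltonian(144, np.pi/4, 0.2, 3.5, beta))
# the IPR jumps by orders of magnitude across the localization transition
```

For an extended state the IPR scales as $1/N_s$, whereas deep in the localized phase it saturates at an $N_s$-independent value of order one.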
The energy gap ($\Delta E$) between the ground state and the first excited state is expected to vanish at the localization transition point. Using this fact, one can obtain a qualitative understanding of the phase diagram for both small and large $\Omega$. We note that for small $\delta \Omega = \Omega-\Omega_c >0$ at $\lambda=0$, the ground state is at $k=0$ and has an energy $E(k=0) = -(\Omega + 2 \cos q)$. Now let us turn on $\lambda$, which leads to a perturbation term that can be written in momentum space as \begin{eqnarray} H_1 = \frac{\lambda}{2} \sum_{k,\sigma} {\hat b}_{k \sigma}^{\dagger} ({\hat b}_{k + 2\pi \beta \, \sigma} + {\hat b}_{k - 2\pi \beta \, \sigma}). \end{eqnarray} Such a perturbation term leads to a hybridization of the ground state at $k=0$ with the one at $k=\beta$ which has energy $E(k=\beta) = -2 \cos \beta \cos q - |2 \sin \beta \sin q| + {\rm O}(\Omega^2)$. Thus the simplest qualitative estimate of the transition line for small $\delta \Omega$ occurs when $\lambda \simeq E(k =\beta)-E(k=0)$, leading to \begin{eqnarray} \lambda= \delta \Omega (1 - \tan q/\sqrt{\sin^2 \beta \sin^2 q}) + 2 \cos q(1-\cos \beta) + |2 \sin q \tan q| -|\sin(q)| \sqrt{\sin^2 \beta + \tan^2 q}. \end{eqnarray} We note that this reproduces the linear behavior of the phase boundary for small $\delta \Omega$. A similar analysis can also be carried out at $\Omega \gg 1$. Here the ground state is again at $k=0$ for $q < \pi/2$. An analysis exactly similar to the one charted out above shows that for this case $E[k=\beta]-E[k=0] = 2 \cos q(1-\cos \beta) + {\rm O}(1/\Omega)$, which leads to \begin{eqnarray} \lambda \simeq 2 \cos q(1-\cos \beta). \end{eqnarray} Thus the phase boundary becomes a horizontal line in the $\lambda-\Omega$ plane, as also seen in exact numerics.
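The large-$\Omega$ estimate $\lambda \simeq 2\cos q\,(1-\cos\beta)$ can be checked directly on the lower band $E_k^{-} = -2\cos k\cos q-\sqrt{4\sin^2 k\sin^2 q+\Omega^2}$. In the sketch below we write the momentum transfer of the AA potential as $k_\beta = 2\pi\beta$ (a convention we adopt here); the parameter values are illustrative:

```python
import numpy as np

def E_minus(k, q, Omega):
    # lower band of the SO-coupled dispersion (t = 1)
    return -2*np.cos(k)*np.cos(q) - np.sqrt(4*np.sin(k)**2*np.sin(q)**2
                                            + Omega**2)

q = np.pi/4
k_beta = 2*np.pi*(np.sqrt(5) - 1)/2           # AA momentum transfer
gap_asym = 2*np.cos(q)*(1 - np.cos(k_beta))   # the large-Omega estimate
gap = E_minus(k_beta, q, 50.0) - E_minus(0.0, q, 50.0)
# the residual decays as 1/Omega, so the phase boundary flattens out
```

Doubling $\Omega$ roughly halves the deviation from the asymptotic value, consistent with the quoted ${\rm O}(1/\Omega)$ correction.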
In Fig.\ \ref{SFF}(c) we have shown $\Delta E$ as a function of $\lambda$ for two different system sizes, corroborating these qualitative features and justifying the assumption of the vanishing of $\Delta E$ at the transition point. \begin{figure}[ht] \centering \includegraphics[scale=0.2]{SFF.pdf} \caption{(a) Superfluid fraction as a function of $\Omega$ for $\lambda = 0$. The solid line represents the SFF obtained analytically from the effective mass calculation. (b) SFF and IPR are plotted as a function of $\lambda$ for $\Omega = 0.2$. (c) Energy gap between the ground state and the first excited state as a function of $\lambda$ for $\Omega = 3.5$.} \label{SFF} \end{figure} \section{Localization of weakly interacting bosons} \label{weak_int} In the weakly interacting limit, {\it i.e.}, for $U/t \ll 1$ and $V=0$, we replace the quantum field operator $\hat{b}_{l,\sigma}$ by the classical field $\psi _{l,\sigma}$, assuming the existence of a 1D quasi-condensate \cite{shlyapnikov1d}. By minimizing the energy functional calculated thereby, we obtain the discrete non-linear Schr\"odinger (DNLS) equations for the condensate wave function $\psi_{l,\sigma}$: \begin{eqnarray} &&-(\psi _{l+1,\uparrow}e^{iq} + \psi _{l-1,\uparrow}e^{-iq}) + \lambda \cos (2\pi \beta l) \psi _{l,\uparrow} + \Omega \psi _{l,\downarrow} \nonumber + U(|\psi _{l,\uparrow}|^2 + |\psi _{l,\downarrow}|^2)\psi _{l,\uparrow} = \mu \psi _{l,\uparrow} \\ &&-(\psi _{l+1,\downarrow}e^{-iq} + \psi _{l-1,\downarrow}e^{iq}) + \lambda \cos (2\pi \beta l) \psi _{l,\downarrow} + \Omega \psi _{l,\uparrow} \nonumber + U(|\psi _{l,\uparrow}|^2 + |\psi _{l,\downarrow}|^2)\psi _{l,\downarrow} = \mu \psi _{l,\downarrow} \end{eqnarray} where $\mu$ is the chemical potential. We then obtain the ground state wavefunction $\psi_{l \sigma}$ numerically and use it to compute all relevant quantities such as the IPR and $f_s$. The results of this numerical study are shown in Fig.\ \ref{PD_int}.
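A minimal way to obtain the ground state of the DNLS equations above is imaginary-time (gradient-flow) propagation with renormalization after every step. The sketch below uses periodic boundaries and a normalized $\psi$, so the interaction enters as $UN_p$ times the normalized density; the step size, initial state and parameter values are our own illustrative choices and need not match the procedure actually used for the figures.

```python
import numpy as np

def dnls_energy(psi, q, Omega, site, UNp):
    """Energy functional whose variation w.r.t. psi* gives the DNLS
    equations above (t = 1, periodic boundaries)."""
    up, dn = psi[:, 0], psi[:, 1]
    dens = np.sum(np.abs(psi)**2, axis=1)
    hop = np.conj(up)*np.roll(up, -1)*np.exp(1j*q) \
        + np.conj(dn)*np.roll(dn, -1)*np.exp(-1j*q)
    return (-2.0*np.real(np.sum(hop)) + np.sum(site*dens)
            + 2.0*Omega*np.real(np.sum(np.conj(up)*dn))
            + 0.5*UNp*np.sum(dens**2))

def dnls_ground_state(Ns, q, Omega, lam, beta, UNp, steps=2000, dt=0.05):
    """Imaginary-time Euler steps psi -> psi - dt*H[psi]psi with
    renormalization, starting from a random spinor wavefunction."""
    rng = np.random.default_rng(0)
    psi = rng.normal(size=(Ns, 2)) + 1j*rng.normal(size=(Ns, 2))
    psi /= np.linalg.norm(psi)
    site = lam*np.cos(2.0*np.pi*beta*np.arange(Ns))
    for _ in range(steps):
        up, dn = psi[:, 0], psi[:, 1]
        dens = np.sum(np.abs(psi)**2, axis=1)
        Hpsi = np.empty_like(psi)
        Hpsi[:, 0] = (-(np.roll(up, -1)*np.exp(1j*q)
                        + np.roll(up, 1)*np.exp(-1j*q))
                      + site*up + Omega*dn + UNp*dens*up)
        Hpsi[:, 1] = (-(np.roll(dn, -1)*np.exp(-1j*q)
                        + np.roll(dn, 1)*np.exp(1j*q))
                      + site*dn + Omega*up + UNp*dens*dn)
        psi = psi - dt*Hpsi
        psi /= np.linalg.norm(psi)
    return psi

# Omega > Omega_c and lam = 0: the condensate should be (nearly) uniform,
# with energy close to -(Omega + 2 cos q) plus a small interaction shift
beta = (np.sqrt(5) - 1)/2
psi = dnls_ground_state(34, np.pi/4, 3.5, 0.0, beta, 1.0)
E = dnls_energy(psi, np.pi/4, 3.5, np.zeros(34), 1.0)
```

Each step lowers the energy functional until only the lowest mode survives; for the disorder-free, large-$\Omega$ test case the energy approaches the uniform-condensate value $-(\Omega+2\cos q)\approx-4.91$.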
\begin{figure}[ht] \centering \includegraphics[scale=0.2]{Fig11.pdf} \caption{The IPR and SFF are shown as a function of $\lambda$ for $\Omega = 2.5$ and for different interaction strengths $UN_p/t$ in (a) and (b) respectively. The spatial distribution of the ground state density is shown for $UN_p = 0.5$, $\Omega = 2.5$ in (c). The IPR, the order parameter $m$, and the total magnetization $M$ as a function of $\lambda$ for $UN_p = 20$ and $\Omega = 0.3$ are shown in (d). We set $N_p=200$ and $N_s=144$ for all the plots.} \label{PD_int} \end{figure} In Fig.\ \ref{PD_int}(a) we plot the ground state IPR as a function of $\lambda$ for different interaction strengths $UN_p/t$. We see that on increasing $\lambda$ beyond the localization transition, the growth of the IPR is reduced. This is due to the fact that the ground state wavefunction becomes multi-site localized due to the weak repulsive interaction (see Fig.\ \ref{PD_int}(c)). We further calculate the superfluid fraction, which vanishes in the localized phase as depicted in Fig.\ \ref{PD_int}(b). \begin{figure}[ht] \centering \includegraphics[scale=0.3]{Fig12.pdf} \caption{Momentum distribution with increasing disorder strength $\lambda$ for $\Omega = 0.5$ and $UN_p = 20$.} \label{k_dist_int} \end{figure} To gain a better understanding of the localization transition, we further study the spin-resolved momentum distribution of bosons in the regime $\Omega < \Omega _c$. In contrast to the non-interacting case, the superfluid with finite $U$ chooses one of the two symmetry-broken states with spins polarized along the $z$-axis \cite{stringari1}. As a result, the momentum distribution corresponding to the spin polarization of the ground state becomes highly peaked at the nonvanishing momentum of the ground state, as depicted in Fig.\ \ref{k_dist_int}(a).
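The spin-resolved momentum distributions, and the order parameter $m=\sum_k (n_{k,\uparrow}-n_{-k,\downarrow})^2$ and total magnetization $M=\sum_k (n_{k,\uparrow}-n_{k,\downarrow})$ used below, can be obtained from the condensate wavefunction with a discrete Fourier transform. A minimal numpy sketch follows; the FFT normalization convention is our own choice.

```python
import numpy as np

def order_parameters(psi):
    """m = sum_k (n_{k,up} - n_{-k,dn})^2 and M = sum_k (n_{k,up} - n_{k,dn})."""
    Ns = psi.shape[1]
    nk = np.abs(np.fft.fft(psi, axis=1))**2 / Ns   # momentum distributions per spin
    n_up, n_dn = nk
    n_dn_minus = np.roll(n_dn[::-1], 1)            # n_{-k,dn}: index (Ns - k) % Ns
    m = float(((n_up - n_dn_minus)**2).sum())
    M = float((n_up - n_dn).sum())
    return m, M
```

A state with $n_{k,\uparrow} = n_{-k,\downarrow}$ (the symmetric localized phase) gives $m = 0$, while a fully polarized state gives a nonzero $M$.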
With increasing disorder strength, other momentum modes get gradually occupied, and the spin-momentum distributions are peaked at equal and opposite momenta with a net spin polarization indicating symmetry breaking (see Fig.\ \ref{k_dist_int}). Finally, in the localized phase, the momentum distributions become symmetric and peaked around finite momentum with $n_{k,\uparrow} = n_{-k,\downarrow}$. To verify this we plot the order parameter $m = \sum_k (n_{k,\uparrow} - n_{-k,\downarrow})^2$ and the total magnetization $M = \sum_k(n_{k,\uparrow} - n_{k,\downarrow})$, both of which decrease with increasing $\lambda$ and finally vanish in the localized phase (see Fig.\ \ref{PD_int}(d)). Next we investigate the momentum distribution in the regime $\Omega > \Omega_c$; similar to the non-interacting case, we see that in the delocalized regime the momentum distributions for both up and down spins are peaked at zero momentum, whereas in the localized phase they are peaked at finite momentum and other momentum modes get gradually occupied (see Fig.\ \ref{phase_diff_int}(a,b)). \begin{figure}[ht] \centering \includegraphics[scale=0.18]{phase_kdist_int.pdf} \caption{(a)-(b) Momentum distribution for up (down) spin is shown as solid (dashed) lines with increasing disorder strength $\lambda$. Other parameters are $\Omega = 2.5$ and $UN_p = 10$.} \label{phase_diff_int} \end{figure} \section{Peak splitting and spin dephasing near the localization transition} This spin-split momentum distribution of the localized wavefunction in the regime $\Omega >\Omega_c$ arises due to the interplay between the SO interaction and the Raman coupling. Here we provide a simple variational calculation to understand this effect.
First we consider the variational wavefunction given by \begin{equation} \psi_l = \mathcal{N}e^{-|l|/\xi}\left(\begin{array}{c} e^{ikl}\\ -e^{-ikl} \\ \end{array}\right), \quad \mathcal{N} = \sqrt{\frac{\tanh(1/\xi)}{2}} \end{equation} where $l$ is the site index and $\xi$ represents the localization length, which is assumed to subsume the effects of the AA potential and the interaction. The spinor part is chosen in such a way that the up (down) spin momentum distribution is peaked at $+k$ ($-k$), and for $k=0$ it reduces to the usual form of the ground state for $\Omega > \Omega_c$. For this wavefunction, $k$ is treated as the variational parameter and we investigate its dependence on $\xi$ and $\Omega$. Considering the single-particle Hamiltonian of a spin-orbit (SO) coupled bosonic system in an optical lattice (Eq.\ 1 of the main text with $\lambda = \mathcal{V} =0$), the energy can be written as \begin{equation} E(k) = - \left[\frac{\cos(k-q)}{\cosh(1/\xi)} + \Omega \tanh(1/\xi)\frac{\sinh(2/\xi)}{\cosh(2/\xi) - \cos 2k}\right]. \end{equation} From the structure of $E(k)$ we note that $E(k) \to - (\cos(k-q) +\Omega \delta_{k0}) +{\rm O}(1/\xi)$ in the delocalized limit where $\xi \to \infty$. This implies that this functional reproduces the correct $k=0$ ground state for $\Omega > \Omega_c$ in the absence of the AA potential. Thus in this case the momentum distributions of both the spin-up and spin-down components are peaked at $k=0$. In the strongly localized phase, where $\xi \ll 1$, the second term dominates, and in the limit of single-site localization $k$ loses its meaning. However, in between these two limits, for finite $\xi$, the ground state minimum shifts to finite $k$ provided $q \ne 0$. This is seen by minimizing $E(k)$ to obtain $k_{min}$ and plotting its variation as a function of $\xi^{-1}$, as shown in Fig.\ \ref{split_peak}.
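The minimization of $E(k)$ can be sketched with a brute-force grid search; the grid resolution is our own choice, and $\Omega = 3$, $q = 0.3\pi$ are assumed for illustration.

```python
import numpy as np

def E_var(k, xi, Omega=3.0, q=0.3*np.pi):
    """Variational energy of the exponentially localized spinor ansatz."""
    return -(np.cos(k - q)/np.cosh(1/xi)
             + Omega*np.tanh(1/xi)*np.sinh(2/xi)/(np.cosh(2/xi) - np.cos(2*k)))

def k_min(xi, Omega=3.0, q=0.3*np.pi):
    """Minimize E(k) on a fine grid (simple stand-in for a proper optimizer)."""
    k = np.linspace(-np.pi, np.pi, 20001)
    return float(k[np.argmin(E_var(k, xi, Omega, q))])
```

Consistent with the discussion above, $k_{min}$ approaches $q$ for small $\xi$, shrinks toward zero as $\xi$ grows, and vanishes identically when the SO coupling is switched off ($q = 0$).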
As seen from Fig.\ \ref{split_peak}, it is evident that $k_{min}$ decreases with increasing $\xi$ and finally vanishes in the delocalized regime, i.e., $\xi^{-1} \rightarrow 0$. We further notice that for a fixed $\xi$ the spin splitting (characterized by $k_{min}$) decreases with decreasing strength of the SO interaction ($q$) and eventually vanishes for $q = 0$. This simple variational calculation elucidates how the combined effect of localization and SO interaction gives rise to the spin-split momentum distribution in the regime $\Omega > \Omega_c$. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.2]{peaksplit.pdf} \end{center} \caption{$k_{min}$ as a function of $\xi ^{-1}$ is plotted for $\Omega = 3$.} \label{split_peak} \end{figure} {\it Spin-phase diffusion}: The spin splitting in the momentum distribution near the localization transition is also accompanied by phase fluctuations of the wavefunction. In general, the wavefunction can be written as \begin{equation} \psi^{l} = \sqrt{n_{0}^{l}}\left(\begin{array}{c} \cos \theta^l e^{i\phi _{\uparrow}^{l}}\\ \sin \theta^l e^{i\phi _{\downarrow}^{l}} \\ \end{array}\right) \end{equation} where $\phi^{l} = \phi _{\uparrow}^{l} - \phi _{\downarrow}^{l}$ is the relative phase angle of the spinor at site $l$. For $\Omega > \Omega_c$, we find that $\cos \theta^l = \sin \theta^l \approx 1/\sqrt{2}$ and $\phi^l \approx \pi$ in the delocalized phase, whereas near the localization transition, due to increasing phase fluctuations, the phase angle deviates significantly from $\pi$ at different sites. We quantify the phase fluctuation by calculating $|\langle e^{i\phi} \rangle |$, where the average is taken over all the lattice sites. In Fig.\ \ref{spin_diff_hc} we show the behavior of $|\langle e^{i\phi} \rangle |$ as a function of the disorder strength $\lambda$: near the localization transition it decreases from $1$ with increasing $\lambda$.
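This dephasing diagnostic is straightforward to compute from any two-component wavefunction; a short numpy sketch (uniform site weighting assumed, as in the text):

```python
import numpy as np

def spin_phase_coherence(psi):
    """|<e^{i phi}>| with phi_l the relative spinor phase, averaged over sites."""
    phi = np.angle(psi[0]) - np.angle(psi[1])   # phi_l = phi_up^l - phi_dn^l
    return float(np.abs(np.mean(np.exp(1j*phi))))
```

A site-independent relative phase gives exactly $1$, while random relative phases give a value near zero.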
In the case of hard-core bosons, we consider the eigenvector corresponding to the largest eigenvalue of the density matrix defined in the main text and calculate the same quantity. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.2]{spin_dephasing.pdf} \end{center} \caption{$|\langle e^{i\phi} \rangle |$ as a function of $\lambda$ is plotted for (a) non-interacting and weakly interacting bosons and (b) hard-core bosons. Other parameters taken are $\Omega = 2.5$ and $q = 0.3\pi$.} \label{spin_diff_hc} \end{figure} \section{Non-equilibrium dynamics in the strongly interacting regime} \label{dynamics} To elucidate the localization transition of the HCBs, we now look into the non-equilibrium dynamics of the bosons. We start from the density wave state at $\lambda=0$, denoted by $|\psi(0)\rangle$. Next we quench $\lambda$ to a finite value $\lambda_f$ so that the system Hamiltonian after the quench is given by $H[\lambda_f]$. Let us denote the eigenfunctions and eigenvalues of $H[\lambda_f]$ as $|m\rangle$ and $\epsilon_m$ respectively.
The time evolved wavefunction $|\psi(t)\rangle$ at any instant of time $t$ after the quench can be obtained by solving the Schr\"odinger equation $i \hbar \partial_t |\psi(t)\rangle = H[\lambda_f] |\psi(t)\rangle$ and is given by \begin{eqnarray} |\psi(t)\rangle &=& \sum_{m} c_m e^{-i \epsilon_m t/\hbar} |m\rangle, \quad c_m = \langle m|\psi(0)\rangle \label{qdyn1} \end{eqnarray} The expectation value of any operator $O$ at time $t$ can be obtained from $|\psi(t)\rangle$ as \begin{eqnarray} \langle \psi(t)|O|\psi(t) \rangle = \sum_{m,n} c_m^{\ast} c_n e^{i(\epsilon_m -\epsilon_n)t/\hbar} \langle m |O| n\rangle \label{qdyn2} \end{eqnarray} Using Eq.\ \ref{qdyn2}, we calculate the time evolution of the imbalance factor, which is defined as \begin{equation} \mathcal{I} = \frac{N_o-N_e}{N_{tot}} \end{equation} where $N_{o[e]} = \langle \psi(t)|\sum_{ i \in {\rm odd[even]\ sites}} {\hat b}_{i}^{\dagger} {\hat b}_i|\psi(t)\rangle$ and $N_{tot}= N_o + N_e$. Note that at $t=0$ we have a density wave state with ${\mathcal I} =1$, while $\mathcal{I}$ approaches zero for a delocalized state. In Fig.\ \ref{Imbalance}(a) we show the time evolution of $\mathcal{I}(t)$ for the up-spin species (the same features can be observed for the down-spin species as well) for different values of $\lambda$. We note that for small $\lambda$, corresponding to the delocalized regime, $\mathcal{I}$ decays to zero with time, showing ergodic dynamics in that regime, whereas for larger $\lambda$, corresponding to the localized regime, $\mathcal{I}$ does not vanish but saturates to a positive value, indicating a non-ergodic regime in which the density wave ordering is retained in the course of the time evolution. In Fig.\ \ref{Imbalance}(b) the final density distribution after the time evolution is shown for different values of $\lambda$.
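The eigen-decomposition evolution of Eq.\ \ref{qdyn1} and the imbalance factor can be sketched compactly. The actual computation in the text is for many-body HCB states, so in the sketch below the Hamiltonian and initial state are single-particle placeholders, sites are 0-indexed, and $\hbar = 1$ is assumed.

```python
import numpy as np

def evolve(H, psi0, t, hbar=1.0):
    """|psi(t)> = sum_m c_m e^{-i eps_m t/hbar} |m>, via exact diagonalization."""
    eps, U = np.linalg.eigh(H)       # eigenvalues eps_m and eigenvectors |m>
    c = U.conj().T @ psi0            # c_m = <m|psi(0)>
    return U @ (np.exp(-1j*eps*t/hbar)*c)

def imbalance(psi):
    """I = (N_o - N_e)/N_tot from site densities (odd sites are 1, 3, ...)."""
    n = np.abs(psi)**2
    No, Ne = n[1::2].sum(), n[0::2].sum()
    return float((No - Ne)/(No + Ne))
```

Since the evolution is unitary, the norm of `psi` is conserved, and a density-wave initial state occupying only odd sites starts with $\mathcal{I} = 1$.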
We repeat the same numerical experiment starting from a different initial state in which the atoms are loaded on one half of the lattice, and study the imbalance factor $\mathcal{I'} = (N_l-N_r)/N_{tot}$ as a function of time, where $N_l$ and $N_r$ are the total number densities of bosons in the left and right halves of the lattice respectively. In Fig.\ \ref{Imbalance}(c,d) we plot the time evolution of the imbalance factor and the final density distribution of the up-spin species for different values of the disorder strength. \begin{figure}[ht] \centering \includegraphics[scale=0.2]{dynamics.pdf} \caption{The time evolution of the imbalance factor for the up-spin species is shown starting from two initial states: (a) a density wave and (c) bosons loaded in the left half of the lattice, for different values of the disorder strength $\lambda$. The final up-spin density distribution at the end of the time evolution for the same values of $\lambda$ is shown for the two types of initial states in (b) and (d) respectively. The other parameters are $\Omega = 2$, $q = 0.3\pi$ and $V = 0$.} \label{Imbalance} \end{figure}
\section{Background and Data Collection} \label{sec:Background and Data Collection} \subsection{Stack Overflow Q\&A Site} Q\&A sites have become very popular in recent years. There are several Q\&A sites that programmers use to ask questions, solve problems they encounter, provide answers to other people's problems, and discuss different approaches. Stack Overflow is the most popular of these sites. Since its inception in 2008, it has become a popular and reliable platform for sharing knowledge among programmers. As a result, Stack Overflow has plenty of resources for programmers on a variety of topics. From its beginning to 2020, 4,953,854 developers have asked 20,128,125 questions on 59,524 different topics on Stack Overflow. \subsection{New Programming Languages Discussions in Stack Overflow} About 37 programming languages~\citep{wiki:Timeline} have been released after the inception of Stack Overflow in 2008. Most of the new languages (released after 2008) have only a small footprint in SO, which is insufficient to formally analyze the interaction between developers and programming languages. For selecting the languages, we have used the SO survey~\citep{StackoverflowSurvey} and the newly released language list~\citep{wiki:Timeline}. In Table~\ref{table:language stats}, we show the footprints of the languages in SO. To do a comparative analysis of the evolution with the three new languages, we picked one high-footprint language (Java) and one medium-footprint language (Python). JavaScript has the highest footprint, but it is primarily used for web clients. We have selected Java as the representative of top-tier languages due to its wide range of use. We have selected Python as the representative of mid-tier languages due to its recent emergence.
\input{Tables/language_stats.tex} \subsection{Data Collection} \begin{figure}[t] \centering \includegraphics[scale=0.7]{figures/methodology.pdf} \caption{An overview of the methodology of our study} \label{fig:methodology} \end{figure} The following steps are carried out to develop the dataset for this study: \begin{enumerate}[leftmargin=15pt,itemsep=0pt] \item We download the SO dataset, \item We identify the list of tags related to the three languages in SO, \item We extract all questions and accepted answers related to the list of tags from SO, \item We extract issues reported to the GitHub repositories of the three languages. \end{enumerate} Figure \ref{fig:methodology} shows an overview of the methodology of our study. We explain the steps below. \subsubsection{Download Stack Overflow dataset} For our analysis, we have collected the January 2018 Stack Overflow data dump, which is available in the Stack Exchange data dump. In the Stack Overflow schema, both questions and answers are considered \emph{posts}. The post table of the data dump contains all the information of a post, such as title, tags, body, creation date, view count, type (question or answer), and accepted answer identifier. An answer is accepted if the questioner marks that answer as accepted. Our dataset includes 41,782,536 questions and answers posted over 9 years, from August 2008 to January 2018, by 3,940,962 users of Stack Overflow. Among these posts, 16,389,567 (39\%) are questions and 25,297,926 (61\%) are answers, of which 8,704,031 (21\%) are marked as accepted answers. \subsubsection{Develop tag set} To compare the growth of languages, we have to separate the posts by language. Posts on Stack Overflow can be about any topic, and we need a way to identify posts by language. Every Stack Overflow post is associated with at least one tag. We consider a post to be associated with one of the new languages if it contains at least one tag of the respective language.
\ra{We have created an initial set of tags $\uptau_0$ for each of the languages. One of the authors checked the initial tag set. Like Vásquez et al. [2], we scaled down the full Stack Overflow (SO) tag set by performing a wildcard query (e.g., ``SELECT * FROM Tags WHERE TagName like '\%swift\%' order by Count desc''). After that, the search space becomes feasible for manual inspection. The initial tag set is available at \href{https://git.io/JTIqL}{GitHub}.} Next, we go through the Stack Overflow dataset $\mathcal{S}$ and extract the set of questions $\uprho$ whose tags contain a tag from $\uptau_0$. Third, we extract the tags of the posts in $\uprho$ to form the set of candidate tags $\uptau$. Now we have a set of tags $\uptau$ for each language, which includes all tags of that language. However, the set $\uptau$ may include tags that are irrelevant to the new languages. Hence, following the approach of Rosen et al.~\citep{Rosen2015}, we have used two heuristics, $\alpha$ and $\beta$, to find the significantly relevant tags for each language. \begin{equation} \alpha = \dfrac{number \ of \ posts \ with \ tag \ t \ in \ \uprho}{number \ of \ posts \ with \ tag \ t \ in \ \mathcal{S}} \end{equation} \begin{equation} \beta = \dfrac{number \ of \ posts \ with \ tag \ t \ in \ \uprho}{number \ of \ posts \ in \ \uprho} \end{equation} We have experimented with a broad range of $\alpha$ and $\beta$ and found that $\alpha = 0.01$ and $\beta=0.01$ provide a significantly relevant set of tags. These values are consistent with previous research on finding big data or concurrency related tags~\cite{Bagherzadeh2019, Ahmed2018}. The tag set used to extract posts in this study is available at \href{https://git.io/JTIqL}{GitHub}. Our new-language tag set is extensive and covers a large spectrum of tags related to the new languages. The name of a language or a language version is a highly relevant tag for identifying posts. \emph{Swift 2.1, go,} and \emph{rust} are tags of this kind in our tag set.
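The $\alpha$--$\beta$ filtering described above can be sketched as follows; the data structures and threshold defaults are our own illustrative choices.

```python
from collections import Counter

def relevant_tags(lang_posts, so_tag_counts, alpha_th=0.01, beta_th=0.01):
    """Keep candidate tags whose alpha and beta heuristics clear the thresholds.

    lang_posts: list of per-post tag sets for the candidate post set (rho).
    so_tag_counts: dict mapping a tag to its total post count in all of SO (S).
    """
    counts = Counter(t for tags in lang_posts for t in tags)
    n = len(lang_posts)
    return {t for t, c in counts.items()
            if c/so_tag_counts[t] >= alpha_th and c/n >= beta_th}
```

A tag such as a generic \emph{json} that appears mostly outside the candidate set fails the $\alpha$ test and is discarded, while language-specific tags survive both filters.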
In terms of relevance, the next type of tag is the name of a library/framework, with or without a version. As these libraries/frameworks are applicable to a particular language, they can be used to identify the posts of that language. Tags of this type in our tag set are \emph{grails2.0, beego, cocoa, rust-tokio}, etc. The third type of tag is named after a specific feature of a language. \emph{Goroutine, unmarshalling,} and \emph{traits} are such tags in our tag set. In addition to highly focused tags, our tag set also includes generic tags such as \emph{concurrency} and \emph{protocol}. \subsubsection{Extract posts of new languages} Using the tag set prepared in the previous step, we have separated the posts by language. We have 437,880 Swift posts, consisting of 188,065 (43\%) questions and 249,815 (57\%) answers, of which 94,310 (21.6\%) are accepted answers. We have 72,843 Go posts, consisting of 30,286 (41.6\%) questions and 42,557 (58.4\%) answers, of which 19,178 (26.3\%) are accepted answers. We have 18,311 Rust posts, consisting of 8,083 (44.1\%) questions and 10,228 (55.9\%) answers, of which 5,964 (32.6\%) are accepted answers. \subsubsection{Preprocess new language post set} In this step, the posts of the new languages are preprocessed to reduce noise. The preprocessing steps include the removal of code segments, HTML tags, and URLs, the exclusion of stop words (e.g., a, the, is), and word stemming. We have used Porter stemming for converting words into their root forms. \subsubsection{Model and label new language topics} In this step, we use the Gensim implementation of latent Dirichlet allocation (LDA) to identify the new languages' topics. Previous studies have pointed out that LDA topics may change if the order of documents is changed. Thus we have used a differential evolution algorithm to select the LDA parameters, which makes our topics more stable. After extracting the topics, we manually labeled them with appropriate titles.
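The preprocessing step above can be sketched as follows. The stop-word list and regular expressions are illustrative only, and Porter stemming (done with a stemming library in practice) is omitted for brevity.

```python
import re

# tiny illustrative stop-word list; the real pipeline uses a full list + stemming
STOP = {"a", "an", "the", "is", "are", "at", "to", "of", "in", "and"}

def preprocess(body_html):
    """Strip code blocks, HTML tags, and URLs, then lowercase and drop stop words."""
    text = re.sub(r"<pre><code>.*?</code></pre>", " ", body_html, flags=re.S)
    text = re.sub(r"<[^>]+>", " ", text)          # remaining HTML tags
    text = re.sub(r"https?://\S+", " ", text)     # URLs
    return [w for w in re.findall(r"[a-z][a-z'-]*", text.lower())
            if w not in STOP]
```

The cleaned token lists are what would then be fed to the LDA topic model.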
\subsubsection{Calculate topic absolute impact} The absolute topic impact shows the absolute proportion of a particular topic in a particular month's posts. In this step, the absolute impact of topics is calculated using equation \ref{eq: absolute impact}. \subsubsection{Calculate topic popularity \& difficulty} In this step, for each topic, we have extracted the number of posts without accepted answers and the median time to answer. Later, we have calculated the correlation between the percentage of posts without accepted answers and the median time to answer. \subsubsection{Calculate quality score and interaction score} We have calculated the quality score and interaction score for each language using equations \ref{eq: quality score} and \ref{eq: interaction score}. After that, we used the scores to determine the stable point date, a date after which sufficient resources would be available in SO. \begin{figure*}[t] \centering \subfloat[\# of Repositories Checked from GitHub]{{\includegraphics[scale=0.25]{figures/RepoCheck.eps} }}% \subfloat[\# of Users Checked from GitHub]{{\includegraphics[scale=0.25]{figures/UserCheck.eps}}}% \enskip \caption{\# of repositories and users checked from GitHub}% \label{fig:Github Data} \end{figure*} \subsubsection{Data extraction from GitHub \& model developer activity} GitHub provides access to data of public repositories and users through its \href{https://developer.github.com/v3/}{public API}. The new languages have their \emph{official repositories} on GitHub. In this step, we have collected the creation date and closing date of all the issues from these new languages' official repositories. GitHub issues have two states, `open' and `closed.' As soon as an issue is taken care of, it changes its state from `open' to `closed.' We collected the states and frequencies of the issues. \ra{Then, for each month, we collected the number of repositories and users of each new language.
GitHub sets the dominant programming language of a repository as the language of that repository. We have used GitHub search for collecting the repository and user counts. Using the language qualifier, one can search for and count the repositories of a particular language. We have queried for each month and each of the languages. In the search process, we have excluded forked repositories. The other part of the GitHub data concerns users. To collect user data, we have collected all the repositories of a particular language, and then we collected the unique committers of those repositories. After finding those committers, we collected their joining dates and counted the number of users who joined each month. The numbers of repositories and users we have checked are presented in Figure~\ref{fig:Github Data}. Then we used regression to model developers' activity.} \section{Conclusion} \label{sec:conclusion} In this study, we have analyzed the reflection of the growth of new languages on Stack Overflow, i.e., how the activity pattern of Stack Overflow users changes along with the growth of the resources of a language, and the expected time of availability of adequate resources. In the early stages of new programming languages, documentation is not very rich, and it is likely to be enriched with time. We have found that the documentation of a language is one of the major topics developers talk about. The impact of the quality of documentation on the growth of new languages can be a new avenue for future work. We have also demonstrated a relationship between the growth of the three programming languages and developers' activity patterns using data from both Stack Overflow and GitHub. We have found that an active community can influence a language's growth, and we have pinpointed the timeline after which a language achieved enough resources for developers in QA sites.
We believe our findings can help not only developers but also language owners and Stack Overflow to support the growth of new languages. \section{Developers' Discussions about the Three New Languages} \label{sec:Dev discussions} \input{Sections/RQ1} \input{Sections/RQ2} \section{Developers' Support to the Three New Languages} \label{sec:Dev support} \input{Sections/RQ3} \input{Sections/RQ4} \input{Sections/RQ7} \section{Implication} \label{sec:Implication} Thus far, we have discussed the characteristics of the answer patterns of new languages, the relation between the advancement of new languages and their developers' activity, and the expected answer interval for new languages. In this section, we discuss the implications of our findings. As well as helping developers find resources while learning a new language, our study can also help language owners, researchers, and Stack Overflow refine their strategies to support the growth of new languages. \indent \textbf{Developers:} In this study, we have estimated the answer interval and the time when we can expect the availability of adequate resources in Stack Overflow. If the community support is still evolving in Stack Overflow, developers can decide to look into other resources. Sometimes community projects developed and curated by developers can be an alternative to traditional resources. For example, Rust was a community project of concerned developers. After strong positive feedback, it was donated and has been part of the official Rust documentation since Rust 1.25. \indent \textbf{Language owners:} Our study identifies a significant difference in answer intervals between the two phases of new languages. As support for developers in the starting stages is likely to play a significant role in the overall acceptance of a language, owners should provide extensive support during that time.
Another option for new languages that are currently in the design stage can be to use the community base of some mature language by carefully selecting a predecessor language. Moreover, new languages can fill the gap in supporting materials using developer-friendly documentation with detailed examples. We observed that issues and release versions influence developers' activity patterns (Table~\ref{table:issue_question relationship}, Table~\ref{table:github_question relationship}, Figure~\ref{fig:Release and user behavior}). Though it is not possible to release a completely bug-free version, extra care must be taken to minimize bugs in releases and to solve issues in GitHub. A good portion of the questions in Stack Overflow seek clarification of the documentation. Owners should take extra care to prepare documentation suitable for developers of all levels. We have also found that migration is a common topic among all the new languages. As there are many mature languages in the same domain before the arrival of a new language, it is assumed that a large number of new-language projects are projects migrated from some other language. To facilitate developers' efforts, language owners should provide detailed documentation of the migration steps from common sources. \indent \textbf{Stack Overflow:} A small community size can disrupt the growth of a language. Our study found that the new languages have a small number of expert or active developers in Stack Overflow. To support the growth of a language that has only a few expert developers, Stack Overflow should refine its strategy. According to the current policy, Stack Overflow focuses on expert developers. However, to support new languages, it should encourage developers of all levels to answer questions. This supports the findings of Srba et al.~\citep{Srba2016}, who suggested that Stack Overflow replace the current question-oriented policy with an answer-oriented policy.
\indent \textbf{Researchers:} We have found that migration is the hardest topic in two of the three new languages in terms of posts without an accepted answer. Furthermore, it is a common topic in all three languages. As migration problems are often user-specific, further research may be conducted on how a generalized solution can be designed to solve user-specific issues. The data and data structure category is common and is one of the top two categories in all the languages in terms of the number of posts. This points researchers toward an impactful and broad research area. Our study finds that the library/SDK category is a common discussion topic among new-language developers. This category also contains one of the top three difficult topics in all three languages. We have observed that developers often face difficulties integrating libraries or setting up communication between SDKs. A standard protocol for SDK communication may help developers overcome such difficulties. \section{Introduction} \label{sec:introduction} New programming languages are being introduced to make software development easy, maintainable, robust, and performance-guaranteed~\cite{Maloney2010,pierce2002}. For example, Swift was introduced in June 2014 as an alternative to Objective-C to achieve better performance. At the initial stage of its lifetime, a programming language is likely to have resource constraints, and consequently, developers using these languages face additional challenges~\cite{Kushida2015}. Naturally, the developers seek help from community experts on question-answering (QA) sites such as Stack Overflow (SO). Hence, it is expected that the discussions on issues related to a new language in SO represent the different characteristics of the growth of that language and also reflect the demands of the development community that uses that language.
After the release of a new programming language, it takes time for developers to get acquainted with that language. Earlier releases of new languages often contain bugs. The developers who work with the new languages are likely to face problems that are similar to the solved problems of mature languages. Developers of the new languages often feel the absence of a library or feature that is already available in other languages. Therefore, the discussions on a new language are likely to differ from those on a mature language. To the best of our knowledge, there is yet to be any software engineering research that focuses on the specific characteristics of the new languages by mining relevant discussions from SO. In this study, we fill this gap by analyzing the discussions on Swift, Go, and Rust, which are the most popular programming languages introduced after the inception of SO (2008). Our study is limited to these three languages because other new languages have very small footprints in SO. Since these languages were born after SO, their evolution, right from the beginning, is expected to be reflected in SO. From now on, by \emph{new languages} we mean the Swift, Go, and Rust languages. We also match the SO discussions with the relevant activities in GitHub where required. The primary goal of this research is to study how software developers discuss and support three new programming languages (Go, Swift, Rust) in Stack Overflow. To this end, we conduct two studies: (1) Understanding New Language Discussions: We aim to understand what topics developers discuss about the three new programming languages, whether and how the topics are similar and/or different across the languages, and how the topics evolve over time.
(2) Understanding New Language Support: We aim to understand what difficulties developers face while using the three new languages, and when and how adequate resources and expertise become prevalent to support the three new programming languages in Stack Overflow. In particular, we answer five research questions around the two studies as follows. \begin{itemize}[leftmargin=10pt] \item \textbf{Study 1. New Language Discussions}: We answer two research questions: \begin{enumerate}[label={\textbf{RQ\arabic{*}.}}, leftmargin=30pt] \item \textbf{What topics are discussed related to Swift, Go, and Rust?} This investigates the discussion topics of the developers of the new languages. Identification of the discussion topics may help the sponsors design a feature roadmap that actually addresses the requirements of developers. \item \textbf{How do the discussed topics evolve over time?} The community's discussion topics are likely to vary over time, as resources evolve continuously. This analysis enables us to investigate any possible relation between discussion topics and real-world dynamics, such as new releases. We found that a new release does not initiate any significant change in the evolution of discussion topics. \end{enumerate} \item \textbf{Study 2. New Language Support}: We answer three research questions: \begin{enumerate}[label={\textbf{RQ\arabic{*}.}},start=3, leftmargin=30pt] \item \textbf{How does the difficulty of topics vary across the languages?} Developers of new languages face problems that are rarely answered or receive \emph{delayed answers}. By a \emph{delayed answer}, we mean an answer that is accepted by the user but received after the median answer interval of that month. We want to know about these questions so that special measures can be taken to answer them. We found that questions related to migration and to data and data structures are considered difficult topics in all three languages.
\item \textbf{When were adequate resources available for the new programming languages in Stack Overflow?} In this research question, we want to know the time interval, after which we can expect the availability of these resources of new languages in Stack Overflow at a satisfactory level. The use of programming languages is significantly related to the availability of resources of those languages. This question will help developers to make design decisions related to software development. We have seen that two years after the release, sufficient resources can be expected for Swift, whereas this period is three years for Go. We have also found the evidence of having an inadequate resource of Rust language in Stack Overflow. \item \textbf{Is there any relationship between the growth of the three programming languages and developers' activity patterns?} This question investigates the relationship between developers' activity (e.g., question, answer) and the growth of a language. Language projects maintain a Github repository that supports feature requests~\citep{Bissyande2013} and bug reports through Github issues. We used those issues as an indirect measure for language growth. We found evidence of relationships between developers' activity and the growth of a language. \end{enumerate} \end{itemize} Our findings show that questions related to ``migration" are common among new languages. To facilitate developer efforts, platform owners should provide detailed documentation of steps to migrate from conventional sources. In this study, we identified the duration, after which adequate resources become available in SO. This finding can help developers to make any decisions regarding migration to a new language. In addition, language owners should provide support until adequate resources become available in the QA community. Moreover, our study identifies some of the factors that influence the evolution of new languages. 
The finding can help language owners to prioritize their goals. A preliminary version of this paper appeared previously as a short conference paper~\cite{Chakraborty2019}. The only overlap between the previous paper~\cite{Chakraborty2019} and the current paper is Research Question 4, i.e., `When were adequate resources available for the new programming languages in Stack Overflow?'. \noindent\textbf{Paper Organization.} The rest of the paper is organized as follows. Section~\ref{sec:Background and Data Collection} describes the background of our study and the data collection procedure. Section~\ref{sec:Dev discussions} reports the research questions about developers' discussion. Section~\ref{sec:Dev support} presents the research questions about the developers’ support to the three new languages. Section~\ref{sec:Implication} discusses the implications of our findings. Section~\ref{sec:validity} discusses the threats to validity. Section~\ref{sec:Related Works} presents the related work to our study, and Section~\ref{sec:conclusion} concludes the paper. \section{Methodology} \label{sec:methodology} \subsection{Selection of the languages} From Stack Overflow data, we have calculated the number of posts, number of questions and number of answers, and accepted answers along with their percentage among the total number of posts, total number of questions, total number of answers and accepted answers in SO. For selecting the language, we have used the SO survey~\citep{StackoverflowSurvey} and the newly released language list~\citep{wiki:Timeline}. Most of the new languages (released after 2008) have a little footprint in SO that are not sufficient to formally analyze the evolution of that language. Thus we have picked the top three (considering the number of posts) newly released language for our study. In Table~\ref{table:language stats}, we show the footprints of the languages in SO. 
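The per-language footprint counting described above can be sketched as follows. The \texttt{PostTypeId} and \texttt{AcceptedAnswerId} fields follow the Stack Overflow data dump schema (1 = question, 2 = answer), but the sample rows below are hypothetical:

```python
# Sketch (hypothetical rows) of counting a language's footprint from the
# Stack Overflow "posts" table. PostTypeId 1 = question, 2 = answer;
# AcceptedAnswerId points from a question to its accepted answer.
posts = [
    {"Id": 1, "PostTypeId": 1, "Tags": "<swift><ios>", "AcceptedAnswerId": 4},
    {"Id": 2, "PostTypeId": 1, "Tags": "<go>", "AcceptedAnswerId": None},
    {"Id": 3, "PostTypeId": 2, "ParentId": 2, "Tags": ""},
    {"Id": 4, "PostTypeId": 2, "ParentId": 1, "Tags": ""},
]

def footprint(posts, tag):
    """Count questions, answers, and accepted answers for one language tag."""
    questions = [p for p in posts
                 if p["PostTypeId"] == 1 and f"<{tag}>" in p["Tags"]]
    q_ids = {p["Id"] for p in questions}
    answers = [p for p in posts
               if p["PostTypeId"] == 2 and p.get("ParentId") in q_ids]
    accepted = [p for p in questions if p["AcceptedAnswerId"]]
    return {"questions": len(questions), "answers": len(answers),
            "accepted": len(accepted)}

print(footprint(posts, "swift"))  # {'questions': 1, 'answers': 1, 'accepted': 1}
```

Applied to the full dump, the same per-tag counts yield the footprints reported in Table~\ref{table:language stats}.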
To perform a comparative analysis with the three new languages, we picked one language with a heavy footprint (Java) and one with a medium footprint (Python) against which to compare the evolution of the newly released languages. Since JavaScript is used for web clients only, while Java has a wide range of uses, we preferred Java as the top-tier language. From the mid-category languages, Python was selected as an emerging language. \input{Tables/language_stats.tex} \subsection{Data Extraction} The following steps were followed to develop the dataset for this study. \begin{enumerate} \item \textbf{Download Stack Overflow dataset:} For our analysis, we collected the December 2017 Stack Overflow data dump, which is available in the Stack Exchange data dump. In the Stack Overflow schema, both questions and answers are considered \emph{posts}. The post table of the data dump contains all the information of a post, such as title, tags, body, creation date, view count, type (question or answer), and accepted answer identifier. An answer is accepted if the questioner marks it as accepted. Our dataset includes 41,782,536 questions and answers posted over a period of more than 9 years, from August 2008 to December 2017, by 3,940,962 users of Stack Overflow. Among these posts, 16,389,567 (39\%) are questions and 25,297,926 (61\%) are answers, of which 8,704,031 (21\%) are marked as accepted answers. \item \textbf{Develop tag set:} To compare the growth of the languages, we have to separate the posts by language. Since posts on Stack Overflow can be about any topic, we need a way to identify posts by language. Every Stack Overflow post is associated with at least one tag. We consider a post associated with one of the new languages if it contains at least one tag from the tag set of that language. We created an initial tag set $\uptau_0$ for each language by manually inspecting the tag table of the Stack Overflow schema.
Next, we go through the Stack Overflow dataset $\mathcal{S}$ and extract the questions $\uprho$ whose tags contain a tag from $\uptau_0$. Third, we extract the tags of the posts in $\uprho$ to form the set of candidate tags $\uptau$. Now we have a tag set $\uptau$ for each language, which includes all tags of that language. However, the set $\uptau$ may include tags that are irrelevant to the new languages. So, following the approach of Rosen et al.~\citep{Rosen2015}, we used two heuristics, $\alpha$ and $\beta$, to find the significantly relevant tags for each language. \begin{equation} \alpha = \dfrac{\text{number of posts with tag } t \text{ in } \uprho}{\text{number of posts with tag } t \text{ in } \mathcal{S}} \end{equation} \begin{equation} \beta = \dfrac{\text{number of posts with tag } t \text{ in } \uprho}{\text{number of posts in } \uprho} \end{equation} We experimented with a broad range of $\alpha$ and $\beta$ and found that $\alpha = 0.01$ and $\beta = 0.01$ provide a significantly relevant set of tags. The tag set used to extract posts in this study is available at GitHub (https://git.io/JTIqL). The relevant tags selected for this study are presented in Appendix~\ref{appendix:tagrelevance}. \item \textbf{Extract posts of new languages:} Using the tag set prepared in the previous step, we separated the posts by language. We have 437,880 Swift posts, consisting of 188,065 (43\%) questions and 249,815 (57\%) answers, of which 94,310 (21.6\%) are accepted answers. We have 72,843 Go posts, consisting of 30,286 (41.6\%) questions and 42,557 (58.4\%) answers, of which 19,178 (26.3\%) are accepted answers. We have 18,311 Rust posts, consisting of 8,083 (44.1\%) questions and 10,228 (55.9\%) answers, of which 5,964 (32.6\%) are accepted answers. \item \textbf{Data extraction from GitHub:} GitHub provides access to data of public repositories and users through an API\footnote{https://developer.github.com/v3/}.
The new languages have their official repositories in GitHub. We collected the creation and closing dates of the issues from the official repositories of these new languages. GitHub issues have two states, `open' and `closed.' As soon as an issue is taken care of, its state changes from `open' to `closed.' Along with the frequency of issues, we also collected the states of the issues. Besides, we collected the number of users and repositories of each new language. \end{enumerate} \iffalse \item \textbf{Preprocessing of posts for topic modeling:} To avoid noise, we preprocess the post before topic modeling~\citep{Barua2012}. First, all the codes enclosed in <code> tags, HTML tags, and url are removed. Second, the Porter stemming algorithm~\citep{Porter1997} is applied to convert the words into their base form. Third, all the articles and stop words are removed. Now, the posts are ready for topic modeling. We have used Latent Dirichlet Allocation (LDA)~\citep{Blei2003} to infer topics automatically. Stack Overflow publishes its data periodically. We have collected the Stack Overflow data dump in October 2018. There are total 38485046 posts(both question and answer) in that dump. Stack Overflow posts are associated with tags. Though the tags are added by the users, in the moderation process correct tags will be associated with that post. To separate the posts by languages, we rely on tags. We carefully curated a tag set for each of these languages based on keywords, frameworks etc. from the tag table of Stack Overflow. GitHub provides access to public data of repository and user through API\footnote{\url{https://developer.github.com/v3/}}. By using that API we have collected information about the official Go, Rust and Swift repository, the number of users and number of repositories of each language.
\fi \section{Introduction} \label{sec:introduction} New programming languages are introduced to make software development easier, more maintainable, more robust, and better performing. For example, Swift was introduced in June 2014 as an alternative to Objective-C to achieve better performance. At the initial stage of its lifetime, a programming language is likely to have limited resources, and consequently, developers using these languages face additional challenges. Naturally, the developers seek help from community experts on question-answering (QA) sites such as Stack Overflow (SO). Hence, it is expected that the discussions on issues related to a new language in SO represent the different characteristics of the growth of that language and also reflect the demands of the development community that uses that language. After the release of a new programming language, it takes time for developers to get acquainted with that language. Early releases of new languages often contain bugs. The developers who work with the new languages are likely to face problems that are similar to already-solved problems of mature languages. Developers of the new languages often feel the absence of a library or feature that is already available in other languages. Therefore, the discussions on a new language are likely to differ from those of a mature language. To the best of our knowledge, there is yet to be any software engineering research that focuses on the specific characteristics of the new languages by mining relevant discussions from SO. In this study, we fill this gap by analyzing the discussions on Swift, Go, and Rust, the most popular programming languages introduced after the inception of SO (2008). Our study is limited to these three languages because other new languages have very small footprints in SO. Since these languages were born after SO, their evolution, right from the beginning, is expected to be reflected in SO.
From now on, by \emph{new languages}, we mean Swift, Go, and Rust. We also match the SO discussions with the relevant activities in GitHub where required. We published partial results of the study in a recent publication~\cite{Chakraborty2019}, which explored three of the seven research questions. All the questions in the previous publication were focused on the availability of solutions. This publication, on the other hand, explores different perspectives of the evolution of new languages (e.g., topics of discussion, their evolution and difficulty, developers' activity, and its relationship with the growth of a language). The primary goal of this research is to study how software developers discuss and support three new programming languages (Go, Swift, Rust) in Stack Overflow. To this end, we conduct two studies: (1) Understanding New Language Discussions: We aim to understand what topics developers discuss about the three new programming languages, whether and how the topics are similar and/or different across the languages, and how the topics evolve over time. (2) Understanding New Language Support: We aim to understand what difficulties developers face while using the three new languages, and when and how adequate resources and expertise become prevalent to support the three new programming languages in Stack Overflow. In particular, we answer seven research questions around the two studies as follows. \begin{itemize}[leftmargin=10pt] \item \textbf{Study 1. New Language Discussions}: We answer two research questions: \begin{enumerate}[label={\textbf{RQ\arabic{*}.}}] \item \textbf{What are the topics of discussions related to Swift, Go, and Rust?} This investigates the discussion topics of the developers of new languages. Identification of the discussion topics may help the sponsors design a feature roadmap that actually facilitates the requirements of developers.
\item \textbf{How do the discussed topics evolve over time?} The community's discussion topics are likely to vary over time, as resources evolve continuously. This analysis would enable us to investigate any possible relation between discussion topics and real-world dynamics, such as new releases. \end{enumerate} \item \textbf{Study 2. New Language Support}: We study the support for the three new languages based on two themes (T). \begin{description} \item[- Theme 1. Solution Availability.] We study the difficulty of getting answers. We answer three research questions. \begin{enumerate}[label={\textbf{RQ\arabic{*}.}},start=3] \item \textbf{How does the difficulty of the topics vary across the languages?} Developers of new languages face problems that are rarely answered or get \emph{delayed answers}. By \emph{delayed answer}, we mean an accepted answer that is received after the median answer interval of that month. We want to identify these questions so that special measures can be taken to answer them. We found that questions related to migration, data, and data structures are the difficult topics in all three languages. \item \textbf{When were adequate resources available for the new programming languages in Stack Overflow?} In this research question, we want to know the time interval after which we can expect the resources of new languages to be available in Stack Overflow at a satisfactory level. The use of a programming language is significantly related to the availability of resources for that language. This question will help developers make design decisions related to software development. We have seen that sufficient resources can be expected for Swift two years after its release, whereas this period is three years for Go. We have also found evidence of inadequate Rust resources in Stack Overflow.
\item \textbf{What are the characteristics of answer patterns for the languages?} With the evolution of a new language, the number of skilled developers increases. In this question, we observe the effect of the growing number of skilled developers on answer characteristics such as the expected interval of first and accepted answers and the unanswered question ratio. One of the findings of this question is that, in Stack Overflow, the unanswered question ratio increases regardless of the age of the language. \end{enumerate} \item[- Theme 2. Developer Engagement.] We study how developers of new languages are engaged. We answer two research questions. \begin{enumerate}[label={\textbf{RQ\arabic{*}.}},start=6] \item \textbf{How likely are the questions of the three new languages answered by the developers of predecessor languages?} Stack Overflow includes developers from diverse domains. We are interested to see whether the experts in the predecessor of a new language mostly answer the questions of that language (e.g., Objective-C for Swift). Such support from the predecessor language community can help new language developers in the early stages of that language. We have seen that a new language receives a significant amount of support from its predecessor language community. \item \textbf{Is there any relationship between the growth of the three programming languages and developers' activity patterns?} This question investigates the relationship between developers' activity (e.g., questions, answers) and the growth of a language. Language projects maintain a GitHub repository that supports feature requests~\citep{Bissyande2013} and bug reports through GitHub issues. We used those issues as an indirect measure of language growth. We have found evidence of a relationship between developers' activity and the growth of a language.
\end{enumerate} \end{description} \end{itemize} The major findings of the study are: (i) migration, data, and data structures are generally the difficult topics of new languages; (ii) the time when adequate resources can be expected to be available varies from language to language; (iii) the unanswered question ratio increases regardless of the age of the language; (iv) a new language benefits from its predecessor language; and (v) there is a relationship between the developers' activity pattern and the growth of the relevant language. The motivation behind investigating the first four and the sixth research questions is to help the owners/sponsors of these languages design better features and documentation, which would eventually benefit the developers. General software developers and students can gain insight into how to prepare themselves to work with these languages. Moreover, the fifth and seventh research questions cater to researchers' academic interest by presenting interesting parameters of the evolution patterns of new languages and their reflection in SO. The rest of the paper is organized as follows. Section~\ref{sec:Background and Data Collection} describes the background of our study and the data collection procedure. Section~\ref{sec:Dev discussions} reports the research questions about developers' discussions. Section~\ref{sec:Dev support} presents the research questions about the developers' support for the three new languages. Section~\ref{sec:Implication} discusses the implications of our findings. Section~\ref{sec:validity} discusses the threats to validity. Section~\ref{sec:Related Works} presents the related work, and Section~\ref{sec:conclusion} concludes the paper. \section{Related Works} \label{sec:Related Works} There have been many works on Stack Overflow data analyzing developers' discussion topics. Barua et al.~\citep{Barua2012} investigated the question ``What are the developers asking?''
Rosen and Shihab~\cite{Rosen2015}, Bajaj et al.~\citep{Bajaj2014}, and Wan et al.~\cite{Wan2019} did similar work focusing on mobile developers, web developers, and blockchain developers, respectively. In a study on big data developers, Bagherzadeh et al.~\cite{Bagherzadeh2019} identified no statistically significant correlation between the popularity and difficulty of big data topics. To reach this conclusion, they used LDA to identify big data topics and then calculated the topics' popularity and difficulty. However, in a similar study on concurrency developers, Ahmed et al.~\cite{Ahmed2018} found a negative correlation between topic popularity and difficulty. Abdellatif et al.~\cite{Abdellatif2020} conducted a study on chatbot developers to identify challenging chatbot development issues. For this study, they extracted posts related to chatbot development from Stack Overflow. They found that the maturity level of the chatbot community is still lower than that of other SE fields. One of their suggestions for facilitating chatbot developers' efforts is to improve the documentation of chatbot platforms and of their integration with popular third parties. Hart and Sharma~\citep{Hart2014} suggested considering user reputation, the social reputation of the answerer, and post length to judge post quality. Reboucas et al.~\citep{Reboucas2016} compared data from Stack Overflow with the opinions of 12 Swift developers to answer three research questions: the common problems faced by Swift developers, the problems developers face in the usage of `optionals,' and error handling in Swift. They used Latent Dirichlet Allocation (LDA) to identify the topics from Stack Overflow questions and then cross-checked the findings by interviewing Swift developers. These are different from our research questions. Zagalsky et al.~\citep{Zagalsky2016} analyzed the R language using data from both Stack Overflow and the R-help mailing list.
They focused on the participation patterns of users in the two communities. They collected users' information on both sites and later mined their activities (questions, answers). They tried to answer how communities create, share, and curate knowledge. Vasilescu et al.~\citep{Vasilescu2014} compared popularity and user activity levels between Stack Overflow and the R-help mailing list. They followed an approach similar to Zagalsky et al.~\citep{Zagalsky2016} by identifying active users in both communities. They have some interesting findings on the decreasing popularity of the mailing list and the influence of the reputation system in Stack Overflow. Their work is mainly focused on identifying the user behavior of these communities. Vasilescu et al.~\cite{Vasilescu2013} conducted a study to find associations between software development and crowdsourced knowledge. They found that the Stack Overflow activity rate correlates with the code-changing activity in GitHub. One of their interesting findings is that active GitHub committers ask fewer questions but provide more answers than others. In a study on developers' behavior, Xiong et al.~\cite{Xiong2017} linked developers' activity across GitHub and Stack Overflow. They showed that active issue committers are also active in asking questions. Moreover, for most developers, their contents on GitHub are similar to their questions and answers on Stack Overflow. Tausczik et al.~\citep{TausczikWC17} measured the effect of crowd size on Stack Exchange question quality. They found that among question audience size, contributor audience size, and topic audience size, contributor audience size has the highest effect on solution quality. They classified the problems into three types: error problems, how-to problems, and conceptual problems. Error problems are very specific, and as a result, no matter how large the audience is, 25\% of such problems are never solved.
A large audience provides diverse solutions, which is critical for how-to problems. Conceptual problems are trickier and are rarely solved with a small audience. Srba et al.~\citep{Srba2016} discussed the reasons behind the increasing failure and churn rate of Stack Overflow. In their work, criticizing the existing automatic deletion and classification of posts, they introduced a new reputation system. They also suggested following an answer-oriented approach instead of the current asker-oriented approach, and argued that, instead of focusing on highly expert users, Stack Overflow should engage users of all levels. \section{Research Setting} \label{sec:Research Setting} This section introduces five research questions along with their motivation. \subsection{Research Questions} \noindent \textbf{RQ1.} What are the difficult topics in the questions of new languages in Stack Overflow? \indent Developers of new languages face problems that are rarely answered or get \emph{delayed answers}. By \emph{delayed answer}, we mean an accepted answer that is received after the median answer interval of that month. We want to identify these questions so that special measures can be taken to answer them. \noindent \textbf{RQ2.} Are the questions of new languages in Stack Overflow answered mostly by the developers of predecessor languages? \indent Stack Overflow includes developers from diverse domains. We are interested to see whether the experts in the predecessor of a new language mostly answer the questions of that language (e.g., Objective-C for Swift). \noindent \textbf{RQ3.} When can we expect the availability of adequate resources for the new languages in Stack Overflow? \indent After the introduction of a new language, resources for that language may be absent from QA sites. Gradually, these gaps are filled.
We want to know the time interval after which we can expect the resources of the new languages to be available in Stack Overflow at a satisfactory level. \noindent \textbf{RQ4.} Is there any relation between the growth of a language and the developers' activity pattern for that language? \indent Stack Overflow has become one of the most prominent QA sites over the years. It has been used as a source to gain insight into developers' activity~\citep{Ahmed2017}. We can observe developers' activity from the frequency of questions and answers in Stack Overflow and also from the number of developers and repositories of that language in GitHub. GitHub provides \emph{issues}~\citep{GithubIssue} to keep track of tasks, bugs, and feature requests for a project. Most of the issues of a GitHub project are associated with bugs or feature requests~\citep{Bissyande2013}. Therefore, it can be assumed that resolving the issues leads to the advancement of the project. Hence, issues reflect the growth of the project. Moreover, every new release of a language implies the growth of that language. Thus, developers' activity patterns after a new release of the language can also help us understand the relationship. We want to understand the relation between the growth of a programming language and developers' activity patterns. \noindent \textbf{RQ5.} What are the characteristics of answer patterns for new languages in Stack Overflow? \indent With the evolution of a new language, the number of skilled developers increases. Being the most used programming-related QA site, Stack Overflow is supposed to reflect that change. The increasing number of skilled developers may have an impact on answer patterns such as the expected interval of first and accepted answers and the unanswered question ratio. By answer interval, we mean the delay between a question being posted and an answer being received. We want to know the characteristics of the answer patterns of new languages.
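The \emph{delayed answer} notion used in RQ1 and RQ5 (an accepted answer arriving after the median answer interval of its month) can be sketched as follows; the timestamps below are hypothetical:

```python
from datetime import datetime
from statistics import median

# Hypothetical (question_time, accepted_answer_time) pairs.
pairs = [
    (datetime(2015, 3, 1, 10, 0), datetime(2015, 3, 1, 10, 30)),  # 30 min
    (datetime(2015, 3, 2, 9, 0),  datetime(2015, 3, 2, 11, 0)),   # 2 h
    (datetime(2015, 3, 5, 8, 0),  datetime(2015, 3, 7, 8, 0)),    # 48 h
    (datetime(2015, 4, 1, 12, 0), datetime(2015, 4, 1, 12, 10)),  # 10 min
]

def delayed_answers(pairs):
    """Flag accepted answers arriving after the median interval of their month."""
    by_month = {}
    for asked, answered in pairs:
        by_month.setdefault((asked.year, asked.month), []).append(
            (answered - asked).total_seconds())
    medians = {m: median(v) for m, v in by_month.items()}
    return [(answered - asked).total_seconds() > medians[(asked.year, asked.month)]
            for asked, answered in pairs]

print(delayed_answers(pairs))  # [False, False, True, False]
```

Only the 48-hour answer exceeds its month's median interval, so only that question is counted as receiving a delayed answer.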
\iffalse \noindent \textbf{RQ6.} Is there any change over time in topics of Stack Overflow questions with the growth of the relevant language? \indent Evolution of a language has many phases. These phases must have their footprint in the topics of the questions asked by the developers of the new languages. Hence, we want to know the topics of the questions asked by developers with respect to a timeline so that we can identify those phases. \noindent \textbf{RQ7:} \emph{How do developers respond to the release of a new version of a language?} \indent A new release of a language comes with a new and updated feature set. These updated or new features must have some impact over developer community. It may trigger new thread of question in Stack Overflow. We want to know the after effect of such release. \fi \subsection{RQ1. What are the topics of discussions related to Swift, Go, and Rust?} \label{RQ1} \subsubsection{Motivation} In this work, we explore the SO footprints of three new languages introduced after SO, a platform that has become popular in the developer community for knowledge sharing. Hence, the issues developers face while working with these languages are likely to be reflected in the posts and discussions on these languages in SO. If the queries are organized according to topics and the characteristics of the responses are analyzed accordingly, it would be helpful for the sponsors of those languages. The lack of resources (such as proper documentation) will be revealed for the most visited topics, and the relevant people may address those in an organized way. Hence, our first research question is intended to analyze the discussions on these languages by dividing them into different topics and categories. \subsubsection{Approach.} We used Latent Dirichlet Allocation (LDA), a generative statistical model commonly used for topic modeling~\cite{Blei2003}, to identify the topics of developers' discussions.
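Since LDA topics can vary between runs, topic stability can be quantified by the keyword overlap between runs. The following is a minimal sketch of such a Jaccard-style comparison; the keyword sets are hypothetical, and this is not the exact raw-score implementation used in this study:

```python
# Toy sketch: measure topic stability between two LDA runs by
# best-match keyword overlap (a Jaccard-style index).

def jaccard(a, b):
    """Jaccard index of two keyword collections."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def stability(run1, run2):
    """Average best-match Jaccard overlap between topics of two runs."""
    return sum(max(jaccard(t1, t2) for t2 in run2) for t1 in run1) / len(run1)

# Hypothetical top keywords of two topics from two LDA runs.
run_a = [["swift", "ios", "view"], ["goroutine", "channel", "sync"]]
run_b = [["goroutine", "channel", "select"], ["swift", "ios", "layout"]]

print(round(stability(run_a, run_b), 2))  # 0.5
```

A score of 1.0 would mean every topic in one run has an identical keyword set in the other; lower values indicate that topics drift when the document order or parameters change.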
In LDA, each document is assumed to be a mixture of a small number of topics. LDA topics are known to be sensitive to the order of documents in the dataset: if the order is changed, the topics are likely to change. To mitigate this risk, we calculated the raw score for the LDA topics. The raw score is a modified version of the Jaccard index. In calculating the raw score, the LDA parameters are kept constant while the order of the data is changed. This process is repeated 10 times to identify the raw score of LDA for one set of parameters. To ensure topic stability and determine the best LDA parameters, we filled the LDA parameters with sets of arbitrary values and calculated the raw score for each set. The second run then starts from the best (in terms of raw score) parameters of the current run. This process was continued for three generations, after which we obtained the LDA parameters that ensure the best stability. Next, using those parameters, we ran LDA and received a set of keywords for each topic. To label the topics, we identified the dominant topic of each post. Next, for each topic, we followed this labeling strategy: first, we randomly selected twenty posts whose dominant topic is that topic; then we manually analyzed the topic keywords along with the posts and labeled the topic. The first author first merged the topics into higher-level categories; the merging was then reviewed by the second and third authors, and the issues identified were resolved by a detailed discussion involving all the authors. Furthermore, we extracted some features adopted in prior works~\cite{Ahmed2018, Nadi2016, Bagherzadeh2019} to measure the popularity of topics. \begin{itemize}[leftmargin=10pt] \item\textbf{Average view of post}: SO collects view counts for each post. Using this metric, we can get an indication of public interest.
The intuition is that if many developers view a post, then the post is either very popular or the problem is common among the developers of new languages. For this reason, we collected the average views for each topic. \item \textbf{Average number of posts marked as favourite}: In SO, users can mark a post as a personal favourite if the post is helpful. The favourite facility lets developers note things that are important or interesting to them; developers can return to their favourite posts from the favourite tab in Stack Overflow. We collected the average favourite count for each topic of the new languages. This metric reveals how helpful/aligned the posts are with the developers' goals. \item \textbf{Average score}: In SO, an interesting/unique question or the best solution can be rewarded with an upvote. Whereas the attribute ``favourite'' expresses a developer's individual choice, an upvote tells fellow developers whether the post is useful or not. SO aggregates the votes (the summation of the upvotes less the summation of the downvotes) and presents them as scores. In this study, we summed the scores of all posts and divided them by the number of posts to calculate each topic's average score. The average score of each topic is used as a metric of perceived collective value.
\end{itemize} \input{Tables/raw_score} \input{Tables/coherence_score} \input{Tables/topic_category_popularity_swift} \begin{figure}[t] \centering \includegraphics[scale=0.25]{figures/swift_topic_question_count.eps} \caption{Swift topics and number of their questions.} \label{fig:Swift topic question count} \end{figure} \input{Tables/topic_category_popularity_go} \begin{figure}[t] \centering \includegraphics[scale=0.25]{figures/go_topic_question_count.eps} \caption{Go topics and number of their questions.} \label{fig:Go topic question count} \end{figure} \input{Tables/topic_category_popularity_rust} \begin{figure}[t] \centering \includegraphics[scale=0.25]{figures/rust_topic_question_count.eps} \caption{Rust topics and number of their questions.} \label{fig:Rust topic question count} \end{figure} \subsubsection{Result.} The raw score of our approach reflects the stability of the topics identified by LDA. The higher the raw score, the higher the stability. In this study, we used three words per topic while calculating the raw score. The raw scores achieved in our study are presented in Table~\ref{table:raw score}. We also calculated the topic coherence score for our LDA model. The topic coherence score measures the quality of the extracted topics~\cite{Syed2017}: the higher the coherence score, the higher the quality of the topics. The topic coherence scores are presented in Table~\ref{table:cohernece score}. Tables \ref{table:topic_popularity_swift}, \ref{table:topic_popularity_go}, and \ref{table:topic_popularity_rust} show the discussion topics for each language sorted by average views. They also show the number of posts related to each topic and its popularity through the popularity metrics: views, favourites, and scores received from users in Stack Overflow. \subsubsection*{\textbf{Swift Topics}} The percentage of Swift posts related to each topic is presented in Figure~\ref{fig:Swift topic question count}.
From Table~\ref{table:topic_popularity_swift} we can observe that 5 of the 18 topics of Swift are related to UI. They are User Interface, View Controller Lifecycle, UI Constraint, Gesture Recognition, and Graphics. These topics include questions such as a) how a specific UI functionality can be achieved, b) how to use multiple UI components together, c) how the lifecycle of the view controller that manages the application's UI changes, and d) how to use 2D and 3D graphics components for game development. More than 17\% of the Swift related posts are on these topics. An example of posts under this category is a developer asking on Stack Overflow, ``I am making a game on Xcode using sprite kit it is, and I need to add an angry bird slingshot like to the ball, I don’t know how can I apply it." Swift is mainly used to develop iOS applications and games with state-of-the-art user interfaces; as a result, UI related posts are generally more frequent for Swift. 4 of the 18 topics of Swift are related to Data and Data Structure. They are Data Handling, Type Conversion, Mutability, and Database. These topics include questions such as a) how to save, stream, or receive video data from the network, b) how to perform custom data type conversion and how to write proper syntax for typecasting, c) the use and syntax of immutable data, and d) how to perform CRUD and other data manipulation operations in the portable database and the corresponding framework provided by Swift. More than 17\% of the Swift related posts are on these topics, and more than 6\% of them are related to data type conversion. The posts related to data type conversion have the highest average score (3.1) among the Swift posts, meaning the posts' answers are generally helpful to the developers. An example of posts under this topic is a developer asking on Stack Overflow, ``I am bit confused with typecasting in Swift. Have a small doubt. What is the difference between `as?', `as!' and only `as.'
And can we say `as' is similar to `is'". About 6\% of the Swift related posts are related to Migration. This topic includes questions about two types of migration problems: a) when developers face problems recreating something in Swift while migrating from another language, mostly from Objective-C, and b) when developers face problems in Xcode while migrating an older project. An example of posts under this topic is a developer asking on Stack Overflow, ``I’m not sure if this is something to do with SWIFT or some bug, but I used to be able to call this in objective c". Objective-C is the predecessor language of Swift, and migration between these two is quite common. Xcode is an IDE for Swift that generally introduces a new major version every year, and migration from an older version to a newer one is quite common as well. About 6\% of the Swift related posts are on the remaining topic, Library/SDK. This topic includes questions related to Swift libraries or SDKs, primarily on the Foundation Kit. The Foundation Kit, or just Foundation, is an Objective-C framework that provides basic classes, such as wrapper classes and data structure classes, with a fixed prefix NS. It is part of the Swift standard library. An example of posts under this topic is a developer asking on Stack Overflow, ``My NSLog result shows that the string is there. However I get the error mentioned in the title of this question. When I replace the string `resortName' with `location' and store the whole object instead the error goes away." The Foundation Kit is a fundamental framework that is quite old and mature, so the number of questions (posts) about it is likely to be lower. \boxtext{Swift users mostly discussed application related topics.} \subsubsection*{\textbf{Go Topics}} The percentage of Go posts related to each topic is presented in Figure~\ref{fig:Go topic question count}. From Table~\ref{table:topic_popularity_go} we can observe that 4 of the 13 topics of Go are related to Data and Data Structure.
They are Database, Type Conversion, Marshalling/Unmarshalling, and Go Channel. These topics include questions related to a) problems in using pointers, slicing, and errors related to references or pointers, b) custom types, type conversion, and typecasting in Go, c) converting Go objects/structs to JSON (marshalling), converting JSON to structs (unmarshalling), and pointer marshalling/unmarshalling, and d) the proper usage of Go channels, through which goroutines communicate strictly typed data. More than 22\% of the Go related posts are on these topics, and more than 16\% of them are related to Marshalling/Unmarshalling. The posts related to Marshalling/Unmarshalling have a much higher average score (8.11) than the other Go posts, meaning the posts' answers are generally helpful to the developers. An example of posts under this topic is a developer asking on Stack Overflow, ``what the best way is to perform idiomatic type conversions in Go. Basically my problem lays within automatic type conversions between uint8, uint64, and float64. From my experience with other languages a multiplication of a uint8 with a uint64 will yield a uint64 value, but not so in go." More than 13\% of the Go related posts are related to Memory. This topic includes questions related to problems in memory allocation and sharing. Go supports automatic memory management, such as automatic memory allocation and automatic garbage collection, so it is interesting that developers still face issues related to memory. An example of posts under this topic is a developer asking on Stack Overflow, ``I want to make an array of size N in go, but I don’t know what N will be at compile time, how would I allocate memory for it?". More than 2\% of the Go related posts are related to Build/Compilation. This topic includes questions related to build/compilation problems. Go requires a specific directory structure for compilation, and it seems that this structure is not clear to developers.
An example of posts under this topic is a developer asking on Stack Overflow, ``I noticed the go/ast, go/token, go/parser, etc. packages in the src/pkg/go folder. However, the GCC compiler was based on C files located in src/cmd/gc. My question regards the new go command in Go that builds and runs programs: does this tool depend on the packages I referenced above?". More than 0.3\% of the Go related posts are related to Migration. The posts related to Migration have the second-highest average score (7.4) among the Go posts, meaning the posts' answers are generally helpful to the developers. This topic includes questions related to the problems developers face while migrating their solutions from a different language to Go. An example of posts under this topic is a developer asking on Stack Overflow, ``We want to rewrite kodingen.com backend with Go, which currently is Java, running as a daemon using Jsvc. I have never touched any C in my life; simple requirements give me hope that I can start using this wonderful language. What would you advise? Is C still better?". More than 21\% of the Go related posts are related to I/O. This topic includes questions related to all types of I/O operations in Go. An example of posts under this topic is a developer asking on Stack Overflow, ``I’m trying to write a golang program to control mpv via issuing commands to a unix socket. This should cause mpv to quit but nothing happens". More than 20\% of the Go related posts are related to Library/SDK. This topic includes questions related to different libraries, the majority of them on the ORM library named GORM. An example of posts under this topic is a developer asking on Stack Overflow, ``I'm using Go with the GORM ORM. I have the following structs. The relation is simple. One Town has multiple Places and one Place belongs to one Town. How can i do such query?". More than 6\% of the Go related posts are related to HTTP. This topic includes questions related to serving HTTP requests in Go.
An example of posts under this topic is a developer asking on Stack Overflow, ``What website has some good, up to date resources on using Go HTML/templates, especially in regard to parsing HTML files". \boxtext{Data and data structure related posts are the most discussed among Go developers. They represent 31.2\% of the posts of the Go language.} \subsubsection*{\textbf{Rust Topics}} The percentage of Rust posts related to each topic is presented in Figure~\ref{fig:Rust topic question count}. From Table~\ref{table:topic_popularity_rust} we can observe that 5 of the 9 topics of Rust are related to Data and Data Structure. They are Borrow Mechanism, Use of Trait, Mutability, Use of Struct, and Generic Coding. These topics include questions related to a) the use of the Rust borrowing mechanism to access data without taking ownership, b) the use of Rust traits (similar to interfaces in Java), especially by new developers, c) the use of immutable variables, d) problems in getting the exact behavior from a Rust struct and in destructuring a struct, e) the use of generic programming that deals with generic data types and the use of traits in generic algorithms, and f) the use of persistent data storage, data iterators, database drivers, storing custom objects into a database, etc. More than 72\% of the Rust related posts are on these topics, and more than 27\% of them are related to Use of Trait. The posts related to Generic Coding have the second-highest average score (4.1) among the Rust posts, meaning the posts' answers are generally helpful to the developers. An example of posts under this topic is a developer asking on Stack Overflow, ``I’m trying to learn Rust, I'm wondering if it is possible to declare the reader variable earlier with a generic type ..." More than 11\% of the Rust related posts are related to Library/SDK, primarily on the Rust package manager, Cargo. This topic includes questions related to package dependencies, compilation of packages, and distribution of packages.
An example of posts under this topic is a developer asking on Stack Overflow, ``I am developing a cargo package which has both a library and an executable of the same name in the same directory. How can I specify different dependencies for them both?" 2 of the 9 topics of Rust are related to Parallelization. They are Parallel Execution and Mutex. These topics include questions related to a) parallel execution in Rust and b) the use of mutexes and locks in a multiprocessing environment. More than 11\% of the Rust related posts are on these topics. An example of posts under this topic is a developer asking on Stack Overflow, ``I am having trouble understanding how to modify Option inside a Mutex. Any idea which Rust concept causes this?" More than 4\% of the Rust related posts are related to Migration. This topic includes questions related to the problems developers face in mimicking logic in Rust during migration. An example of posts under this topic is a developer asking on Stack Overflow, ``I am hoping to re-write some parts of a Python project in Rust to speed up things. I am capable of returning complex arrays/structures in Python. And this does not work properly in Rust." \boxtext{Nearly three-quarters (72.05\%) of the posts discussed by Rust developers are related to data and data structure.} \subsection{RQ2. How do the discussed topics evolve over time?} \subsubsection{Motivation} We have the rare opportunity to observe, from the relevant SO posts, the evolution of the discussion on issues belonging to different topics for these three new languages. The community's interest in particular topics is likely to vary over time, as resources and surroundings also evolve continuously. Moreover, as they evolve, languages introduce and abandon features. These changes might be reflected in the developers' discussions. This analysis would also enable us to investigate how the topics change and any possible relation between topic-wise post frequency and real-world dynamics, such as new releases.
\subsubsection{Approach} To compare the evolution of the discussed topics, we used two metrics, \emph{topic popularity} and \emph{absolute topic impact}, introduced in prior works~\cite{Wan2019}. These two metrics are applied to the data received from LDA. The definitions of the two metrics are presented below. Let $(z_1, z_2, \ldots, z_K)$ be the set of topics, $\theta(d_i, z_k)$ the probability of topic $z_k$ in document $d_i$, and $dominant(d_i)$ the dominant topic of document $d_i$. The dominant topic $dominant(d_i)$ is defined as \begin{equation} dominant(d_i) = z_k:\theta(d_i, z_k) = \max(\theta(d_i, z_j)); 1\leq j \leq K \end{equation} Now, the \emph{topic popularity} of each topic $z_k$ in the dataset $c_j$ is defined as \begin{equation} popularity(z_k,c_j) = \frac{|\{d_i \in c_j : dominant(d_i) = z_k\}|}{|c_j|} \end{equation} and the \emph{absolute topic impact} of a topic $z_k$ in month $m$ within corpus $c$ is defined as \begin{equation} impact\textsubscript{absolute}(z_k, m) = \sum_{d_i\in D(m)}\theta(d_i,z_k) \label{eq: absolute impact} \end{equation} where $D(m)$ is the set of posts in month $m$. The \emph{absolute topic impact} shows the absolute proportion of a particular topic in a particular month's posts, whereas the \emph{topic popularity} presents the proportion of a particular topic in the full dataset. \subsubsection{Results} \begin{figure}[htb] \hspace*{-.8cm} \includegraphics[scale=0.29]{figures/swift_topic_popularity.pdf} \caption{Topic absolute impact by the topic categories of Swift along with release of language version. Each release is a vertical gray dashed line.} \label{fig:swift topic popularity} \end{figure} \begin{figure}[htb] \hspace*{-.8cm} \includegraphics[scale=0.29]{figures/go_topic_popularity.pdf} \caption{Topic absolute impact by the topic categories of Go along with release of language version.
Each release is a vertical gray dashed line.} \label{fig:go topic popularity} \end{figure} \begin{figure}[htb] \hspace*{-.8cm} \includegraphics[scale=0.29]{figures/rust_topic_popularity.pdf} \caption{Topic absolute impact by the topic categories of Rust along with release of language version. Each release is a vertical gray dashed line.} \label{fig:rust topic popularity} \end{figure} The topic popularity and absolute topic impact of each topic of each language are presented in Figures \ref{fig:swift topic popularity}, \ref{fig:go topic popularity}, and \ref{fig:rust topic popularity}, along with the releases of new versions. For both Go and Rust, the topics related to the category `Data \& Data Structure' remained popular from the first day of their discussion on Stack Overflow until the last date covered by our data. For the other language (Swift), however, the topics related to the category `Application' remained the most popular over time, followed by the topics related to `UI'. Most Swift developers are interested in the \emph{user interface} topic. This finding is consistent with real-world observation because Swift is primarily used to create GUI-based software. \ra{Overall, for all topics of Swift, we observe a downward trend. The release frequency of Swift is comparatively lower than that of the other two languages. This indicates that Swift developers have more time than developers of the other languages to learn the specific features offered in a given release. As such, if we normalize the number of questions Swift developers asked per release, it is not surprising that the average number of questions per release for Swift is less than for the other two languages. Moreover, in Section~\ref{RQ4} we show that the Swift language achieved maturity on November 1, 2016. That means that after that point, most Swift questions had already been answered, so developers rarely needed to ask new ones.
As a result, the absolute impact of the Swift topics trends downward.} From the topic absolute impact of Go and Rust, the topics related to `Library/SDK' remain the second most discussed throughout the entire timeline. The most commonly discussed library for Rust was Cargo. In Rust, `Parallelization' also ties for the second position along with the `Library/SDK' topic. There are two main reasons for this. First, Cargo is Rust's package manager, and it is clear from the topic absolute impact of the Rust language that Rust developers struggle to use Cargo properly. Second, Rust does not provide specific guidance on how to do concurrency, because Rust simply exposes standard library operating system threads and blocking system calls like any generic language. The discussions around mutexes or parallel execution using Rust in Stack Overflow show the opinions of different developers on these issues and the best practices to handle parallelization in Rust. We anticipated that the release of a major version of the languages might increase the discussion on certain topics. Thus, we collected the release dates from the official website (for Go) and the GitHub repositories (for Swift and Rust) and plotted the topic absolute impact together with the language releases in Figures \ref{fig:swift topic popularity}, \ref{fig:go topic popularity}, and \ref{fig:rust topic popularity}. From the figures, we see spikes in the developers' discussions around the releases of new versions of the three languages at the beginning, i.e., when the three languages were relatively new to the developers. However, the intensity of such spikes has subsided over time as the languages have gotten older. \boxtext{The absolute impact is almost constant for all languages except Swift. In Swift, we have noticed a downward trend in the topic absolute impact.
On the other hand, the release of a new version of a language does not result in any significant change in the values of the topic absolute impact of that particular language as the language gets older.} \subsection{RQ3. How does the difficulty of the topics vary across the languages?} \label{RQ3} \subsubsection{Motivation} A new language is likely to have some topics with new concepts. Hence, programmers experienced in other languages may find those difficult. Consequently, posts/queries on those topics are likely to receive fewer responses from the community. Considering this, we plan to explore those topics for our three languages of interest. The owners/sponsors of these languages can then prioritize enriching their documentation for these difficult topics. Language instructors will also get an idea of where they should focus more. \subsubsection{Approach} To answer this question, we collected two well-known~\cite{Rosen2015,Bagherzadeh2019} metrics for all topics of Section~\ref{RQ1} to measure difficulty. \begin{enumerate}[leftmargin=10pt] \item \textbf{The percentage of posts of a topic without accepted answers (\% w/o accepted answers):} In SO, if the user who asked a question finds that an answer solves the problem, they can mark it as accepted. For each topic, we computed the percentage of posts without an accepted answer. Generally, a topic is considered difficult if the number of accepted answers is low~\cite{Rosen2015,Bagherzadeh2019}. \item \textbf{The median time in minutes for an answer to be accepted (Median Time to Answer (Minutes)):} We calculated the median time to get an accepted answer. The more time it takes for a post to get an accepted answer, the more difficult the post is~\cite{Rosen2015,Bagherzadeh2019}.
\end{enumerate} \subsubsection{Results} \input{Tables/topic_difficulty_swift} \input{Tables/topic_difficulty_go} \input{Tables/topic_difficulty_rust} Tables \ref{table:topic_difficulty_swift}, \ref{table:topic_difficulty_go}, and \ref{table:topic_difficulty_rust} show the percentage of questions without an accepted answer and the median time (in minutes) to receive an accepted solution for each of the topics identified in Section~\ref{RQ1}. The topics in Tables \ref{table:topic_difficulty_swift}, \ref{table:topic_difficulty_go}, and \ref{table:topic_difficulty_rust} are grouped into categories and ordered inside each group based on the percentage of posts without an accepted answer. \input{Tables/topic_corr} To understand the relationship between topic difficulty and popularity, we performed a correlation analysis. We chose the Spearman correlation as it does not assume normality in the distribution of the data. Table \ref{table: topic_correlation} shows the correlations. It is clear from Table \ref{table: topic_correlation} that the correlation between the popularity and difficulty metrics is not statistically significant except for the Go language. It seems that difficult topics are not very popular among Go developers. \begin{figure} \centering \includegraphics[scale=0.5]{figures/TopicChart.eps} \caption{Top six difficult topics of new languages} \label{fig:less_unanswered_topics} \end{figure} \boxtext{Difficult topics are not very popular among Go developers.} \subsection{RQ4. When were adequate resources available for the new programming languages in Stack Overflow?} \label{RQ4} \subsubsection{Motivation} The resources of a programming language and the maturity and performance of its libraries usually take time to stabilize. In the meantime, developers using that language are likely to discuss these issues on community Q\&A sites such as SO.
In this RQ, we would like to inspect the length of time it takes for a language to reach maturity by analyzing its footprint in SO. \subsubsection{Approach} It is hard to define the ``adequate resources'' of a programming language in a Q\&A site. However, we can use an indirect approach to measure adequate resources. Two major types of Stack Overflow questions are \emph{repetitive} questions and \emph{new} questions. By a \emph{repetitive} question we mean that the same question or problem was discussed before, but developers then faced it on another platform or environment~\citep{TausczikWC17}. A decrease in the number of new questions indicates that Stack Overflow already has the answers to most of the questions or problems. From this point of view, we can say that Stack Overflow has ``adequate resources'' for a particular language if the number of new questions is within a limit. However, questions are not the only way developers interact with Stack Overflow; there are other ways, like votes and comments. To consider all types of interactions in determining the expected time for the availability of adequate resources, we followed the approach of Srba et al.~\citep{Srba2016}. Using this approach, we calculated the average post quality (by post we mean both questions and answers) in Stack Overflow. To measure post quality, we need to consider all kinds of interactions within a given time frame. A deadline is needed to ensure that each post (both old and new) receives equal time to collect votes and comments; otherwise, old posts would get more time to receive comments and votes than new posts. In SO, a post may receive a vote long after the date it was created. For example, in our dataset, a post received a vote from a user twelve years later. However, most votes, comments, and answers arrive within a certain period.
As stated by Bhat et al.~\citep{Bhat2014}, 63.5\% of questions receive an answer within one hour, and only 9.98\% of questions receive an answer after one day. To calculate post quality from the votes on answers, accepted answers, and comments, we considered the votes received within thirteen days of the creation of the post. We studied the distributions of the accepted answer time, answer time, and comment time. We found that the \emph{thirteenth day} covers the 85th percentile of the answer time, the 95th percentile of the accepted answer time, and the 90th percentile of the comment time. We also calculated the post quality with longer durations but found that the quality did not change significantly. The \emph{quality score} represents the average post quality over a month, and the \emph{interaction score} represents the average developers' interaction for that language. To calculate the \emph{quality score}, votes on accepted answers are given double weight compared to votes on answers that are not accepted. This practice exists~\citep{Romano2013} to prioritize the contribution of accepted answers. The detailed calculation of the \emph{quality score} and \emph{interaction score} is presented below.
\noindent Let \\ $Q = $ all questions of a month,\\ $A = $ all answers to questions in $Q$ whose creation date is within 13 days of the question,\\ $C = $ all comments on both $Q$ and $A$ created within 13 days of the corresponding post,\\ $S = $ all accepted answers of $Q$ whose creation date is within 13 days of the question,\\ $T(x) = $ the creation time of item $x$.\\ Now, \begin{equation} \begin{split} Interaction\ Score = \dfrac{|Q| + |A| + |C|}{|Q|} \end{split} \label{eq: interaction score} \end{equation} \begin{equation} \begin{split} Quality\ Score = \dfrac{\sum_{Q_i\in Q}\sum_{\substack{Q_v\in Votes\: of\: Q_i\\T(Q_v) \leq T(Q_i)+13}}Q_v}{|Q|}+ \dfrac{\sum_{A_i\in A}\sum_{\substack{A_v\in Votes\: of\: A_i\\T(A_v) \leq T(A_i)+13}}A_v}{|Q|}\\ +\dfrac{\sum_{S_i\in S}\sum_{\substack{S_v\in Votes\: of\: S_i\\T(S_v) \leq T(S_i)+13}}S_v}{|Q|} \end{split} \label{eq: quality score} \end{equation} \subsubsection{Results} \begin{figure*}[t] \centering \subfloat[Post quality of new languages in Stack Overflow\label{fig:Content quality}]{{\includegraphics[scale=0.44]{figures/content_quality.eps} }}%
\enskip \subfloat[Interaction of developers of new languages with Stack Overflow\label{fig:Interaction}]{{\includegraphics[scale=0.44]{figures/interaction.eps}}}%
\caption{Post quality and developers' interaction with new languages vs. time}%
\label{score of languages} \end{figure*} We plotted the quality score and interaction score of the three languages in Figure \ref{fig:Content quality} and Figure \ref{fig:Interaction}, respectively. From Figure~\ref{fig:Content quality}, it is quite clear that after the introduction of a language, the post quality is unstable, and the quality scores are very high. The obvious reason behind this instability is that the language lacks resources, and every new release triggers a set of new questions.
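For illustration, the interaction and quality scores defined above could be computed as in the following minimal sketch. The post records and field names here are hypothetical examples, not the actual data pipeline used in the study.

```python
from datetime import datetime, timedelta

# Hypothetical post records for one month; field names are assumptions.
questions = [
    {"id": 1, "created": datetime(2015, 7, 1),
     "votes": [datetime(2015, 7, 2), datetime(2015, 7, 20)]},  # 2nd vote falls outside the window
]
answers = [
    {"question_id": 1, "created": datetime(2015, 7, 1),
     "accepted": True, "votes": [datetime(2015, 7, 3)]},
]
comments = [
    {"parent_created": datetime(2015, 7, 1), "created": datetime(2015, 7, 2)},
]

WINDOW = timedelta(days=13)

def votes_within_window(post):
    """Count only the votes cast within 13 days of the post's creation."""
    return sum(1 for v in post["votes"] if v <= post["created"] + WINDOW)

# Keep only comments created within 13 days of the commented post.
comments_in_window = [c for c in comments
                      if c["created"] <= c["parent_created"] + WINDOW]

# Interaction score: (|Q| + |A| + |C|) / |Q|
interaction_score = (len(questions) + len(answers)
                     + len(comments_in_window)) / len(questions)

# Quality score: question votes + answer votes + accepted-answer votes
# (accepted answers are counted twice, i.e. given double weight), over |Q|.
q_votes = sum(votes_within_window(q) for q in questions)
a_votes = sum(votes_within_window(a) for a in answers)
s_votes = sum(votes_within_window(a) for a in answers if a["accepted"])
quality_score = (q_votes + a_votes + s_votes) / len(questions)
```

In this toy month, both scores evaluate to 3.0: one question, one answer, and one timely comment give an interaction score of 3, and one in-window vote each on the question, the answer, and the (double-counted) accepted answer give a quality score of 3.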
The questions of the starting years are less repetitive than those of the later years~\citep{Srba2016}, and this is the reason behind the high values of the \emph{quality score}. Gradually, the \emph{quality score} stabilizes around a certain point. For a stable language, users' interaction with Stack Overflow should be minimal and within a range. From Figure~\ref{fig:Interaction}, it is evident that after the first release, the \emph{interaction score} also stabilizes to a point, which supports our conjecture. To effectively measure the difference in quality scores between consecutive months, the \emph{first difference} metric~\citep{Rasheed2011} was applied to the quality score of each language. The first difference of the quality score is the difference between the quality scores of two consecutive months. The \emph{first difference} technique removes unobserved time-invariant variables from the data. Moreover, as the data points are taken at a constant interval, the value of the \emph{first difference} acts like a derivative of the quality score function, from which we can observe the change. The first difference is plotted against the release times in Figure~\ref{First difference and release}. In the beginning, the first difference of the quality score followed the trend of the releases. However, the level of response gradually decreased, which means the language was stabilizing. From this observation, we can detect a stable point for a language. By the stable point, we mean the starting date of the period after the first release of a language when the language is so stable that a single release cannot change or disrupt the development process. If the first difference of the quality score of a language stays within a range, it has two implications. First, the language is stable, and a release does not initiate any significant change in the development lifecycle.
Second, most of the Stack Overflow posts for this language are repetitive, and the contribution of such questions is removed by the first difference process. What remains in the first difference of the quality score is the effect of the change in the frequency of new questions. Therefore, a first difference of the quality score that stays within a range means that developers face few problems that are not already answered in Stack Overflow. Hence, we can say that at this point the new languages have adequate resources in Stack Overflow. \begin{figure*}[t] \centering \subfloat[Swift]{{\includegraphics[scale=0.25]{figures/Swift_First_difference_release.eps} }}%
\subfloat[Go]{{\includegraphics[scale=0.25]{figures/Go_First_difference_release.eps}}}%
\enskip \subfloat[Rust]{{\includegraphics[scale=0.25]{figures/Rust_First_difference_release.eps}}}%
\caption{First difference of the post quality and release of a new version of new languages.}%
\label{First difference and release} \end{figure*} We defined the stable point as the time point after which the value of the first difference always stays between $-1$ and $1$. The stable points for each language are presented in Table~\ref{table:Stable point}. \begin{table}[htbp] \centering \caption{Stable point of the new languages} \begin{tabular}{|l|l|l|} \hline \textbf{Language} & \textbf{Release Date} & \textbf{Stable Point Date} \\ \hline Go & March 1, 2012 & July 1, 2015\\ \hline Swift & September 9, 2014 & November 1, 2016\\ \hline Rust & January 1, 2012 & Not reached \\ \hline \end{tabular} \label{table:Stable point} \end{table} In Stack Overflow, the number of Rust developers is quite low compared to the other two languages. It is quite common in Stack Overflow that a portion of developers leaves or becomes inactive after some time. The post quality of Rust can change quickly after such departures.
However, such departures cannot change the post quality of Go or Swift as frequently, because the departing developers represent a small percentage of the whole community of these languages in Stack Overflow. We also observed that the release frequency of the Rust language is relatively high compared to the other two languages. These may be the reasons why Rust has not reached the stable point yet. \boxtext{In Stack Overflow, we can expect adequate resources for Swift after two years of release, while this period is three years for Go. We have found evidence of inadequate resources for the Rust language in Stack Overflow.} \boxtext{The size of an active community can influence the growth of a new language.} \subsection{RQ5. What are the characteristics of the answer pattern for the languages?} \label{RQ5} \subsubsection{Motivation.} The developer base (programmer base) of the new languages grows over time. Initially, when there is a scarcity of developers/experts for a new language, a question may take time to be answered, and some questions may remain unanswered. With the increase in the number of experts and developers, continuous improvement is expected in these metrics. In this RQ, we investigate the change in the answer pattern with respect to different aspects. \subsubsection{Approach.} In SO, the community size and the age of a language are most likely to influence the answer interval, and the answer interval is likely to decrease over time. To observe the expected change in the intervals of the new languages, we extracted several features from Stack Overflow, such as the frequency of questions, questions without any answer (no answer), questions with an accepted answer (accepted answer), and questions without an accepted answer (no accepted answer). To compare the growth, we plotted those features along with the features of one top-tier language (Java) and one mid-tier language (Python).
Since Java is used in a wide range of projects, we have chosen it as a representative of top-tier languages. As the use of Python has been steadily increasing in recent years, we have selected Python as a representative of mid-tier languages. To check whether there is a significant difference in the answer pattern of a language between its evolutionary period and its maturation period, we have defined two states for languages according to the findings of Section \ref{RQ4}: (1) Evolving State: the language has just been released, and it lacks experts and other resources in Stack Overflow, and (2) Matured State: the language has a stable release with a supporting community. The details of the stable point are presented in Section \ref{RQ4}. \begin{table} \caption{Duration of the evolving and matured state of new languages} \begin{tabular}{|l|l|l|} \hline Language & Evolving State & Matured State \\ \hline Swift & September 2014-October 2016 & November 2016-December 2017 \\ \hline Rust & N/A & N/A \\ \hline Go & March 2012-June 2015 & July 2015-December 2017 \\ \hline \end{tabular}% \label{table:States of languages} \end{table} After that, we measured, for each month and for each language, the median interval between the question and the first answer (first answer interval) and the interval between the question and the accepted answer (accepted answer interval). We hypothesized that there would be a significant difference in the answer interval between the two states of a language. To determine the appropriate method for testing the hypothesis, we conducted the Shapiro-Wilk test~\citep{SHAPIRO1965} and found that the distribution of the first answer interval does not follow a normal distribution. Since non-parametric tests do not assume any distribution, they are widely used in cases where the data do not follow the normal distribution~\citep{Mann1947}.
Therefore, we used the non-parametric Mann-Whitney U test to test the assumption that it takes longer to get an answer in the \emph{evolving state} than in the \emph{matured state}. It is expected that the number of unanswered questions will decrease with the evolution. To verify this assumption, we calculated the unanswered-question ratio of each new language for each month. By the unanswered question ratio, we mean the ratio of questions without any answer to all questions. We have defined the unanswered question ratio as $$ \text{Unanswered question ratio} = \dfrac{\sum \text{Unanswered questions}}{\sum \text{Questions}} $$ To compare the growth between the mature languages and the new languages, we collected the median time to get the first answer and the accepted answer. Furthermore, we also performed an intra-language comparison of the first answer interval and the accepted answer interval. Since the number of experts in a new language is relatively low, it is a general assumption that a new language will take longer to get accepted answers. So the accepted answer interval should be larger than the first answer interval. \subsubsection{Results.} The answer patterns of new and matured languages are presented in Figure~\ref{fig:Evolution of new languages}. It is clear from Figure~\ref{fig:Evolution of new languages} that the evolution pattern of Python is quite similar to Swift, while Java shows a different pattern compared to all three new languages. A natural deduction is that Java was released long before and was already a mature language prior to the inception of SO. Hence, the community interaction of the initial period of Java is missing in our dataset. On the other hand, although Python was also released long before the inception of SO, its use increased significantly after 2010~\citep{TIOBE:index}.
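For illustration, the one-sided Mann-Whitney U comparison used here can be sketched with a normal approximation (ties get half credit, no tie correction); in practice a library routine such as \texttt{scipy.stats.mannwhitneyu} would be used, and the function below is only our simplified stand-in:

```python
import math

def mann_whitney_u(x, y):
    """One-sided Mann-Whitney U test (normal approximation).
    H1: values in x tend to be larger than values in y, e.g.
    x = evolving-state intervals, y = matured-state intervals."""
    nx, ny = len(x), len(y)
    # U counts pairs (x_i, y_j) with x_i > y_j, plus 0.5 per tie.
    u = sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    mean = nx * ny / 2.0
    sd = math.sqrt(nx * ny * (nx + ny + 1) / 12.0)
    z = (u - mean) / sd
    p = 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail p-value
    return u, p
```

A small p-value supports the hypothesis that evolving-state intervals are systematically longer than matured-state intervals.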
\begin{figure} \includegraphics[scale=0.28]{figures/Evolution.eps} \caption{Question, No answer, Accepted answer, and No accepted answer count of languages in Stack Overflow} \label{fig:Evolution of new languages} \end{figure} The result of testing the conjecture that it takes longer to get the first answer and the accepted answer in the \emph{evolving state} than in the \emph{matured state} is presented in Table~\ref{table:Mann-Whitney Test answer interval between states}. \begin{table}[h] \centering \caption{Mann-Whitney U Test result for comparison of the first answer interval between matured and evolving state of new languages} \begin{tabular}{|l|l|} \hline Language & p value \\ \hline Swift & \textless 0.01 \\ \hline Rust & Not found \\ \hline Go & \textless 0.01 \\ \hline \end{tabular}% \label{table:Mann-Whitney Test answer interval between states} \end{table} From the Mann-Whitney test result presented in Table~\ref{table:Mann-Whitney Test answer interval between states}, we found that only the Swift and Go languages reject the null hypothesis, which means that the difference is significant for Swift and Go. \color{red} There may be some differences between the topics discussed in the evolving state and the matured state. To test this conjecture, we divided the questions of each language into the two states and extracted 10 topics from each state. However, no difference was noticed between the topics. That is because, while selecting a few topics from a large set, sub-topics were grouped into common programming-language topics. To overcome this problem, we extracted 50 topics from each state of each language together with their percentage in that state.
If a topic does not appear in both states, or if its percentage difference is greater than or equal to 2, then we considered that the topic belongs to the state in which it is most commonly seen. We tested various thresholds and selected two because, otherwise, some topics that are common to all languages irrespective of the state may appear. Using this criterion, we can identify the topic shift between the states of the languages. \begin{enumerate} \item \textbf{Topics of Swift Language in Evolving State:} During the evolving period, Swift developers mostly talked about language integration and equivalence. Since, in this period, the transition from Objective-C to Swift had just happened, developers were trying to transfer their codebases from Objective-C to Swift. Other popular topics in this state are the use of \emph{alamofire} (an HTTP library) and the audio and video controller APIs of Swift. \item \textbf{Topics of Swift Language in Matured State:} During the matured state, Swift developers are mostly concerned about advanced features like ORM, advanced view controllers (segue), and threading. One thing prominent from the data is that, in the matured state, Swift developers are asking more questions related to game development, such as different types of kits to handle 2D and 3D graphics (sprite kit, scene kit, etc.). The change might be associated with the availability of high-end phones in recent years~\citep{Aleem2016,Gavalas2011}. \item \textbf{Topics of Go Language in Evolving State:} During the evolving state, Go developers discussed compiling issues, compiler paths, data types, and type conversion. It seems that developers were trying to understand the syntax and usage of the language. Other common topics in this state are the use of simple HTTP, TCP, and socket servers and encryption technology. As Go is mostly used on the server side, developers were trying to identify Go's potential by deploying sample projects or mimicking old projects in the new language.
\item \textbf{Topics of Go Language in Matured State:} In the matured state, Go developers were concerned about HTTP web servers like Gin and Martini, advanced features like ORM (GORM), and container deployment systems such as Docker and Kubernetes. It is clear that developers are mostly concerned about delivery and deployment systems in this state. \end{enumerate} \color{black} \begin{figure} \centering \includegraphics[scale=0.38]{figures/UnansweredQuestionRatio.eps} \caption{Unanswered-question ratio in Stack Overflow} \label{fig:Unanswered-question ratio} \end{figure} The unanswered question ratio of the languages is presented in Figure~\ref{fig:Unanswered-question ratio}. We expected that the unanswered question ratio would gradually decrease over time. Figure~\ref{fig:Unanswered-question ratio} shows the opposite of our assumption: as time goes by, the unanswered question ratio increases. In one sense, we can say that the answer patterns of new languages and matured languages are the same, as in both cases the unanswered question ratio increases. However, we can observe two interesting phenomena in these results. First, the curve is smoother for matured languages, while it is jagged for the new languages except Swift. The reason for the jagged curves may be the absence of active expert developers: inactive expert developers can change the unanswered question ratio by becoming active for a short time. That is why the curves for Go and Rust are jagged. As the direct successor of Objective-C, the Swift language has avoided this phenomenon. Second, the Swift language starts its rise from a certain level, caused by topic and question inheritance from Objective-C. Figure~\ref{fig:Answer intervals} presents the median time to get the first answer and the accepted answer of the new languages, a top-tier language (Java), and a mid-tier language (Python).
\begin{figure} \begin{subfigure}{0.65\textwidth} \centering \hspace{-3cm} \includegraphics[scale=0.38]{figures/AcceptedAnswerInterval.eps} \caption{\textbf{Accepted answer}} \label{fig:Accepted Answer interval} \end{subfigure} \begin{subfigure}{0.65\textwidth} \centering \hspace{-3cm} \includegraphics[scale=0.38]{figures/FirstAnswerInterval.eps} \caption{\textbf{First answer}} \label{fig:First Answer interval} \end{subfigure} \caption{Comparison of median answer interval of languages in Stack Overflow} \label{fig:Answer intervals} \end{figure} Our intuition was that, with maturity, the time to get the first answer and the accepted answer would decrease. Figure~\ref{fig:Answer intervals} shows that our assumption is wrong. The answer intervals of Java are increasing in the long run, which may be caused by various reasons. A common reason behind a long answer interval of a Stack Overflow question is its inability to attract an expert developer to answer it~\citep{Asaduzzaman2013}. Moreover, we can observe steps in the accepted answer interval and the first answer interval of Python and Java, which indicate that after a specific time the answer interval increases. It is common in SO that matured communities often face repeated questions, and \emph{hit and run}~\citep{DBLP:journals/corr/ChengDL14,SO:decay} problems significantly reduce the necessity of community collaboration. The increase in the answer intervals of Python and Java can be associated with that. The time to get an accepted answer indirectly represents the growth of developers' expertise. From Figure~\ref{fig:Accepted Answer interval}, it is prominent that Rust has a comparatively longer accepted answer interval than the other two new languages. However, it is interesting that this is not true for the first answer interval: Go has a longer first answer interval than Rust.
Hence, we can say that Rust's accepted answer interval is longer than Go's, but Go's first answer interval is longer than Rust's. A longer accepted answer interval indicates the absence of expert developers. Thus, we can claim that Go developers receive quality answers in the long run, but they have to wait a little longer because active support is absent. That means Stack Overflow needs more \emph{active} Go developers and more \emph{expert} Rust developers. We performed hypothesis testing on the accepted answer interval and the first answer interval of the new languages to strengthen our claim. To find a suitable method for hypothesis testing, we performed the Shapiro-Wilk test~\citep{SHAPIRO1965}, which indicated that the distribution is not normal. Thus, we performed a Mann-Whitney U test on the first answer interval and the accepted answer interval. The result is presented in Table~\ref{table:accepted-first u value}. \input{Tables/accepted-first.tex} From Table~\ref{table:accepted-first u value}, we can reject the null hypothesis for the first answer interval and the accepted answer interval of the Go-Rust and Swift-Rust pairs. It means that the difference between the first answer interval and the accepted answer interval of the Go-Rust and Swift-Rust pairs is statistically significant, which supports our claim about the first and accepted answer intervals of Go and Rust. \boxtext{In Stack Overflow, the unanswered question ratio increases regardless of the age of the language.} \boxtext{In Stack Overflow, it takes significantly longer to get the first answer in the evolving state than in the matured state of a new language.} \boxtext{In Stack Overflow, we found evidence that Go has comparatively less active community support, and Rust has a small number of expert developers.} \subsection{RQ6.
How likely are the questions of the three new languages answered by the developers of predecessor languages?} \label{RQ6} \subsubsection{Motivation.} If new languages share some common features with already well-established languages, the experts of those relevant established languages are likely to contribute to the interested community of the new language. This may be crucial for the new languages because they do not have any community of their own at the initial stage. This research question will let us know the details of the people who contributed significantly to the posts of the new languages, i.e., their prior experiences. \subsubsection{Approach.} To answer this question, we need to find out which users are experts in answering queries of the new languages and then find out their other areas of expertise. We defined \emph{expert developers} as those who have authored at least one accepted answer. First, we identified the expert developers of each language. We then collected the tags of all the questions answered by the expert developers to look for other areas of expertise. After that, the tags were sorted according to their frequency to get the most frequent tags. \subsubsection{Results.} The ten most frequent tags of answers by each language's expert developers are presented in Figure~\ref{fig:dev skills}. \begin{figure} \centering \includegraphics[scale=0.5]{figures/Tagfrequency.eps} \caption{Most frequent 10 tags answered by expert developers of new languages.} \label{fig:dev skills} \end{figure} Rust experts mostly answered C and C++ questions previously. C++ is the predecessor language of Rust~\citep{Uzlu2017}. Hence, we can say that developers who are experts in Rust are also experts in C and C++, and we can infer that Rust is receiving contributions from the C and C++ community base in Stack Overflow. Developers who are experts in Swift have mostly answered Java and Objective-C questions. Objective-C is considered the predecessor of the Swift language.
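The expert-identification and tag-counting procedure described in the approach can be sketched as follows; the data layout (tuples and dictionaries) is our own simplification, not the actual Stack Overflow dump schema:

```python
from collections import Counter

def top_expert_tags(answers, accepted_ids, question_tags, k=10):
    """answers: iterable of (answer_id, user_id, question_id) records.
    accepted_ids: set of answer ids that were marked as accepted.
    question_tags: mapping question_id -> list of tags.
    Experts are users with at least one accepted answer; the result is
    the k most frequent tags over all questions answered by experts."""
    experts = {user for (aid, user, qid) in answers if aid in accepted_ids}
    counts = Counter(
        tag
        for (aid, user, qid) in answers
        if user in experts
        for tag in question_tags[qid]
    )
    return counts.most_common(k)
```

Running this per language over the answers of that language's experts yields the tag frequencies plotted in Figure~\ref{fig:dev skills}.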
\boxtext{New languages benefit from the community base of their predecessor languages.} \subsection{RQ7. Is there any relationship between the growth of the three programming languages and developers' activity patterns?} \label{RQ7} \subsubsection{Motivation} Along with QA sites such as Stack Overflow, repositories like GitHub present the activities of the developers of a language. In this research question, we would like to explore these two sources to understand how language advancement is reflected in the developer activities and engagements on both platforms. \subsubsection{Approach} The most common developers' activities in SO~\citep{Badashian2014} are: \begin{enumerate}[leftmargin=10pt, itemsep=0pt] \item \textbf{Questions:} Developers ask development-related questions. Questions might be moderated based on clarity and duplication. \item \textbf{Answers:} Developers answer questions in their field of expertise. \item \textbf{Comment:} Users can comment on other users' questions and answers. \item \textbf{Up Votes:} Developers can cast votes to increase the score of other users' questions or answers. \item \textbf{Down Votes:} Developers can cast votes to decrease the score of other users' questions or answers. \item \textbf{Question View:} Users can view other users' questions. (SO does not keep this count with a timestamp.) \item \textbf{Answer View:} Users can view other users' answers. (SO does not keep this count with a timestamp.) \end{enumerate} High developer activity helps to expose special cases and rare bugs of a project. Developers use issues to inform the language owners about these problems or particular cases. The solutions to these problems and bugs lead to the growth of the language. Hence, we expect a relationship between issues and developers' activity patterns. Moreover, developers' activity can also be observed from the number of users and repositories of that language on GitHub.
Table~\ref{table:model parameters} summarizes the descriptions of and rationales behind the studied factors. To measure the relationship among the variables, we performed the following steps. \input{Tables/Model_parameters} \begin{enumerate} \item Model Construction (MC) \item Model Analysis (MA) \end{enumerate} These steps are discussed below. \noindent\textbf{Model construction (MC):} We build a regression model to explain the relationship between the dependent and explanatory variables. The regression model fits the dependent variable with respect to the independent variables. We followed the model construction approach of Harrell et al.~\citep{Harrell2015}. While relaxing the linearity assumption, this approach models nonlinear relationships accurately. The steps of model construction are described below. \begin{enumerate}[leftmargin=10pt] \item \emph{Estimation of maximum degrees of freedom:} A critical concern in model building is overfitting. Overfitting is most frequent in models that use more degrees of freedom than the dataset can support. Hence, we fixed the maximum degrees of freedom for our model. As suggested by Harrell et al.~\citep{Harrell2015}, we allowed \(\frac{n}{15} \) degrees of freedom for our model, where $n$ is the number of data points (120) in the dataset. \item \emph{Normality adjustment:} We fit our regression models using the Ordinary Least Squares (OLS) technique. OLS assumes normality in the distribution of the dependent variable. Hence, it is crucial that the distribution of the dependent variable is normal. A widely used approach for conversion toward a normal distribution is applying the $\ln$ function~\citep{pmid25092958}. We have some zero values in our dataset; therefore, in our case, we used $\ln (x+1) $ to lessen the skew and better fit the OLS assumption. \item \emph{Correlation analysis:} Before building the model, we checked for highly correlated explanatory variables.
In this step, we used the Spearman rank correlation, as it is resilient to data that are not normally distributed. We constructed a hierarchical overview of the correlation among the explanatory variables. For sub-hierarchies of explanatory variables with correlation $\rho > 0.9$, we selected only one element of the sub-hierarchy. \item \emph{Fit regression model:} Finally, after selecting the explanatory variables and log-transforming the dependent variables, we fit our regression models to the data. \end{enumerate} \noindent\textbf{Model Analysis (MA):} We calculated the adjusted $R^2$ to measure the goodness of fit of the model. The adjusted $R^2$ accounts for the bias of additional degrees of freedom by penalizing the model for each degree of freedom. The steps of model analysis are described below. \begin{enumerate}[leftmargin=10pt] \item \emph{Assessment of model stability:} The adjusted $R^2$ may overestimate the performance of the model because of overfitting. The overestimation is accounted for by subtracting the average \emph{optimism}~\citep{Efron1986}. The \emph{optimism} is calculated in three steps. First, a bootstrap sample of size $N$ is drawn. Second, a model is fitted on the bootstrap data using the same degrees of freedom. Third, the \emph{optimism}, the difference between the adjusted $R^2$ of the bootstrap model and that of the model built in the previous step (the original model), is calculated. The process is repeated 1000 times to obtain the average optimism. Finally, we subtracted the average optimism from the original adjusted $R^2$ to obtain the optimism-reduced $R^2$. \item \emph{Estimation of the power of explanatory variables:} To measure the impact of an explanatory variable on a model, we measured the difference in performance between the model with all explanatory variables (full model) and the model with all explanatory variables except one (dropped model).
A $\chi^2$ test is applied to the resulting values to detect whether each explanatory variable improves the model performance to a statistically significant degree. To estimate the impact, we performed the Wald $\chi^2$ maximum likelihood test. The larger the Wald $\chi^2$ value, the more significant the impact of that particular explanatory variable is~\citep{McIntosh2015}. \end{enumerate} We can observe the relation between developers' activity patterns and the advancement of the language project from two different perspectives: (1) the question count from Stack Overflow and (2) the repository and user counts of that language from GitHub. Hence, we performed the process of estimating the relationship for each perspective. To estimate the relationship between issue frequency and developers' activity from the Stack Overflow perspective, we used the open issue count, the closed issue count, and the ratio of the open issue count to the total number of issues as explanatory variables and the question count as the dependent variable. We collected 47,710, 23,967, and 14,033 issues for the Rust, Go, and Swift languages, respectively. Developers use issues to ask owners about new features and to seek help with problems. More developers may lead to a higher number of issues. Therefore, from the GitHub perspective, to model the relationship between issues and developers' activity, we used the user count and the repository count of each language as explanatory variables and the open issue count as the dependent variable. We collected the number of repositories and users for each language. To compute the number of new users of each language, we searched for all the users whose account creation date is within a particular month and whose major language is that language.
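Step (MA-1) above can be sketched as follows; this is our reading of the Efron/Harrell optimism procedure with a plain numpy OLS fit, not the exact implementation used in the study:

```python
import numpy as np

def fit_ols(y, X):
    """OLS coefficients of y on X (X includes an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def adj_r2(y, X, beta):
    """Adjusted R^2 of the predictions X @ beta against y."""
    ss_res = float(((y - X @ beta) ** 2).sum())
    ss_tot = float(((y - y.mean()) ** 2).sum())
    n, p = X.shape
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p)

def optimism_corrected_r2(y, X, n_boot=1000, seed=0):
    """Subtract the average bootstrap optimism from the apparent
    adjusted R^2: each replicate refits on a bootstrap sample and
    scores the refitted coefficients back on the original data."""
    rng = np.random.default_rng(seed)
    n = len(y)
    apparent = adj_r2(y, X, fit_ols(y, X))
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample rows with replacement
        yb, Xb = y[idx], X[idx]
        beta_b = fit_ols(yb, Xb)
        optimism.append(adj_r2(yb, Xb, beta_b) - adj_r2(y, X, beta_b))
    return apparent - float(np.mean(optimism))
```

The gap between the apparent and the corrected value corresponds to the optimism reported in the stability tables.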
\subsubsection{Results} According to our approach, the results of estimating the relationship between the growth of the language and the developers' activity from the \textbf{perspective of Stack Overflow} are presented below. \input{Tables/issue_question_relation.tex} \begin{enumerate}[wide=0pt, leftmargin=10pt] \item[(MC-1)] \textbf{Estimation of the maximum degrees of freedom:} We have 120 points in our dataset. Hence, following Harrell et al.~\citep{Harrell2015}, we can allow a maximum of 8 degrees of freedom. \item[(MC-2)] \textbf{Normality adjustment:} The question frequency of the new languages is right-skewed, so we performed the log normality adjustment in this case. \item[(MC-3)] \textbf{Correlation analysis:} We hierarchically clustered the features by the Spearman $|\rho|$ value. We found that the open issue ratio is highly correlated with the closed issue count. For the sake of completeness, we created a model using the closed issue count instead of the open issue ratio and vice versa but did not find any change in the performance of the model. \item[(MA-1)] \textbf{Assessment of model stability:} Table~\ref{table:issue_question relationship} presents the adjusted $R^2$ and the optimism-corrected $R^2$. From Table~\ref{table:issue_question relationship}, we can say that the model is stable for Swift and Rust, where the optimism (the difference between the adjusted $R^2$ and the optimism-reduced $R^2$) is 0.002 and 0.005, respectively. For Go, however, the optimism is 0.011. Though the difference is noteworthy, it does not invalidate our model. \item[(MA-2)] \textbf{Estimation of the power of explanatory variables:} The high $\chi^2$ values of the open issue count and the ratio of open issues to the total number of issues in Table~\ref{table:issue_question relationship} represent the significant role of these parameters. However, they are not as significant for determining the number of Swift and Rust questions in Stack Overflow.
On the other hand, the closed issue count is significant for all three languages in determining the number of questions in Stack Overflow, as shown by the high $\chi^2$ value of the closed issue count in Table~\ref{table:issue_question relationship}. Overall, the $\chi^2$ values of the Swift language are relatively smaller than those of the other two languages. Hence, we can say that GitHub issues provide a meaningful and robust amount of explanatory power in describing the question frequency of the new languages, except for Swift. \end{enumerate} It is quite clear from the adjusted $R^2$ values of Table~\ref{table:issue_question relationship} that there is a relationship between the growth of a language and the number of questions posted. As seen from Table~\ref{table:issue_question relationship}, the number of closed issues is the most impactful explanatory variable of the Rust language model. Hence, we can say that the number of closed issues significantly influences the number of Rust questions in Stack Overflow. The relationship between the growth of a language and the developers' activity from the \textbf{perspective of GitHub} is presented below. \input{Tables/github_question_relation.tex} \begin{enumerate}[wide=0pt, leftmargin=*] \item[(MC-1)] \textbf{Estimation of the maximum degrees of freedom:} To answer this question, we used the same dataset as in the previous step. Hence, we can allow a maximum of 8 degrees of freedom. \item[(MC-2)] \textbf{Normality adjustment:} As in the previous step, we applied a log transform to normalize the dependent variable (open issue count). \item[(MC-3)] \textbf{Correlation analysis:} We used two features to build this model. Hence, instead of hierarchical clustering, we simply calculated the Spearman $|\rho|$ value between the \emph{user count} and the \emph{repository count} and found that they are not correlated.
\item[(MA-1)] \textbf{Assessment of model stability:} Table~\ref{table:github_question relationship} presents the adjusted $R^2$ and the optimism-reduced $R^2$. From Table~\ref{table:github_question relationship}, the optimism for each language is \textless 0.01, which ensures the stability of the model. \item[(MA-2)] \textbf{Estimation of the power of explanatory variables:} Issues are associated with the developers' experience. Hence, we expect the `User' parameter to be an important feature in determining the number of open issues in the official GitHub repositories of the new languages. Table~\ref{table:github_question relationship} shows a high $\chi^2$ value for the user count parameter, which supports our conjecture about the significance of the number of users in determining the number of open issues in GitHub. We also found that Rust has relatively fewer users on GitHub than the other two languages, which is reflected in the $\chi^2$ value of the `User' parameter for Rust. The high $\chi^2$ values of the repository parameter for the Swift and Rust languages represent the significance of the number of repositories in determining the number of Swift and Rust open issues. However, the number of repositories is less significant in determining the number of open issues in the Go GitHub repository than in the other two languages. \end{enumerate} From the adjusted $R^2$ values of Table~\ref{table:github_question relationship}, it is clear that there is a strong relationship between developers' GitHub activity and the number of open issues in the official repository of the respective language.
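The explanatory-power estimate in step (MA-2) can be approximated by a drop-one likelihood-ratio test; we sketch it here with a chi-square(1) tail computed via \texttt{erfc}, as our stand-in for the Wald $\chi^2$ test reported in the tables, not the exact routine used in the study:

```python
import math
import numpy as np

def drop_one_chi2(y, X, j):
    """Likelihood-ratio chi-square (1 degree of freedom) for removing
    column j from an OLS model of y on X; larger values mean the
    dropped variable carried more explanatory power."""
    def rss(M):
        beta, *_ = np.linalg.lstsq(M, y, rcond=None)
        residual = y - M @ beta
        return float(residual @ residual)
    n = len(y)
    stat = n * math.log(rss(np.delete(X, j, axis=1)) / rss(X))
    p_value = math.erfc(math.sqrt(stat / 2.0))  # chi-square(1) tail
    return stat, p_value
```

Variables whose removal produces a large statistic (and a tiny p-value) correspond to the high-$\chi^2$ parameters in the tables.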
\boxtext{There is a relationship between developers' activity patterns and the growth of the language.} \boxtext{The number of closed issues of Rust in GitHub significantly influenced the number of questions on Rust in Stack Overflow.} \boxtext{The open issue count of Swift and Rust is highly dependent on the number of repositories of those languages in GitHub.} Every new release impacts the growth of a programming language. After a new release, the developers' activity can give us an idea about the relationship between developers' activity patterns and the growth of a language. To observe the developers' activity pattern after a new release, we collected all the release dates of the new languages from GitHub and then plotted them alongside the question, issue, and repository counts. \begin{figure*}[t] \centering \subfloat[Swift]{{\includegraphics[scale=0.15]{figures/Swift_release_and_user_behaviour} }}% \enskip \subfloat[Go]{{\includegraphics[scale=0.15]{figures/Go_release_and_user_behaviour}}}% \enskip \subfloat[Rust]{{\includegraphics[scale=0.15]{figures/Rust_release_and_user_behaviour}}}% \caption{Release of a new version and developer activity pattern per language}% \label{fig:Release and user behavior}% \end{figure*} Each sub-figure shows the response after the release of a new version. It is clear that the developers' activity is influenced by the benefits, features, and bugs of the new release. This trend is also visible in the question count, i.e., the question count increases after each release (Figure~\ref{fig:Release and user behavior}). However, we observed that the issue count of Swift is less influenced than that of the other two languages. For tracking bugs and language problems, \href{https://Golang.org/project/}{Go} and \href{https://github.com/Rust-lang/Rust/blob/master/CONTRIBUTING.md}{Rust} use only the GitHub issue tracker.
On the other hand, besides using the GitHub issue tracker, \href{https://Swift.org/contributing/\#reporting-bugs}{Swift} uses its own JIRA~\citep{wiki:JIRA} instance for \href{https://bugreport.apple.com/}{tracking bugs}. This is a likely cause of the difference: the issue count of Swift represents only a portion of the actual issues (bugs), so it is less influenced by a new release than that of the other two languages. We tested this hypothesis using statistical testing. We performed the Wilcoxon signed-rank test between the question, repository, and issue counts of the month before a release and the counts of the month of the release. The result is presented in Table~\ref{table:release_impact}. Only the change in the repository count was significant. The reason behind this significance is that, after each new release, developers create new repositories to test the new features without altering the production version of their software. Hence, the number of repositories increases after a new release. Though we can observe that the question and issue counts \input{Tables/release_impact.tex} respond to the release of a new version, the change is statistically insignificant according to the Wilcoxon signed-rank test. To find the reason behind this insignificance, we conducted a further investigation. We noticed that the spikes in the question and issue count curves do not appear immediately after a release; rather, they appear after a variable time gap. Thus, the change in the question and issue counts is statistically insignificant. \section{Threats to validity} \label{sec:validity} In this section, we discuss the validity of our study. \indent \textbf{Internal validity:} The use of tags to categorize questions by language is an internal threat to validity. A new Stack Overflow user may not add an appropriate tag to a question. However, Stack Overflow questions go through an extensive moderation process, and eventually a question will have the appropriate tags.
In some cases, our identification of posts by tags may not capture the posts of the new languages. To alleviate this threat, we considered the relevance of tags. In this study, we used Stack Overflow as the primary dataset. There are many language-specific developer forums and QA sites, and those sites may contain posts that can help to understand the growth of new languages. However, we believe that the numerous participants and the widespread popularity of Stack Overflow have made it a familiar venue for developers. Hence, the posts of Stack Overflow are considered sufficient to understand the trends of the growth of a new language. \ra{We conducted this study with the Stack Overflow (SO) dump of January 2018, which was the latest dump available during our analysis. Our analysis presents the types of discussions and support developers offer in SO regarding the three new programming languages Go, Swift, and Rust. While these three languages are new compared to languages like Java and C\#, we note that we found at least three years of data for each language (Go, Swift, Rust) in SO in our dump of January 2018. Such a large volume of data can provide us considerable insights into the research questions we explored in our paper. However, the data dump is somewhat old, and replicating this study on a newer dataset may lead to different results. Like any language, over time, a new language is no longer considered new. This can be true for the above three new languages if they are studied for a longer period.
Given that our focus was to understand how these three new languages are discussed and supported, our analysis and results from January 2018 across the three languages are well-suited to accommodate the analysis of newness/freshness of the three languages.} \indent \textbf{External validity:} After the inception of the Stack Overflow (2008), about 35 programming languages have been released~\citep{wiki:Timeline}, whereas this study is focused on three languages (Swift, Go, and Rust). For this reason, our research results may not apply to other new languages. However, in this study, we did not emphasize any specific feature of a particular language. The languages we considered vary in terms of their time of inception and other properties (such as having predecessor language or not). Instead, we focused on the characteristics and trends of the growth of new languages. We compared the growth trends with a top-tier (Java) and mid-tier (Python) language and found that mid-tier language (Python) shows similar characteristics that confirm the generalizability of the findings. The dissimilarity with the top-tier language (Java) is that we missed the community interaction at the initial period of this language. Java was published a long time ago and was already a developed language before the establishment of SO. Therefore, we think our findings are free from any bias in a particular language. \iffalse \indent \textbf{Construct validity:} In this study, votes of accepted answers are given double weight in the calculation of post quality. The weight used may not represent their exact contribution. However, the magnitude of weight does not influence our analysis. Thus, double weight in the accepted answer will not invalidate our claim. \fi \section{Tags of new languages} \label{appendix:tagrelevance} \input{Tables/TagRelevance} \section{Topic listings} \label{appendix:LDA topic} \input{Tables/LDA_topic} \fi \end{document}
\section{Introduction} A pseudo-Anosov homeomorphism $f\colon S\to S$ on a surface determines a complex structure and holomorphic quadratic differential, $(X,q)$, up to Teichm\"uller deformation, for which the vertical and horizontal foliations are the stable and unstable foliations of $f$. The pseudo-Anosov generates an infinite cyclic subgroup of the full group of orientation preserving affine homeomorphisms, ${\rm{Aff}}_+(X,q)$. For a finite type surface $S$, we say that the pseudo-Anosov homeomorphism $f$ is {\em lonely} if $\langle f \rangle < {\rm{Aff}}_+(X,q)$ has finite index. The motivation for this paper is the following; see e.g.~Hubert-Masur-Schmidt-Zorich \cite{HuMaScZo} and Lanneau~\cite{Lanneau}. \begin{conjecture} [Lonely p-As] There exist lonely pseudo-Anosov homeomorphisms. In fact, lonely pseudo-Anosov homeomorphisms are generic. \end{conjecture} There is not an agreed upon notion of ``generic'', and some care must be taken: work of Calta \cite{Calta} and McMullen \cite{McGenus2,McTeichTrace} shows that {\em no} pseudo-Anosov homeomorphism on a surface of genus $2$ with orientable stable/unstable foliations is lonely. In fact, in this case, not only are the pseudo-Anosov homeomorphisms not lonely, but their Veech groups always contain parabolic elements. In this paper, we consider infinite families of pseudo-Anosov homeomorphisms arising as follows; see \S\ref{S:fibered 3-manifolds}. Suppose $f \colon S \to S$ is a pseudo-Anosov homeomorphism of a finite type surface $S$ and $M_f$ the mapping torus (which is hyperbolic by Thurston's Hyperbolization Theorem \cite{Otal-ThurstonHyp}). The connected cross sections of the suspension flow are organized by their cohomology classes (up to isotopy), which are primitive integral classes in the cone on the open fibered face $F \subset H^1(M;\mathbb R)$ of the Thurston norm ball containing the Poincar\'e-Lefschetz dual of the fiber $S$.
Given such an integral class $\alpha$, the first return map to the cross section $S_\alpha$ is a pseudo-Anosov homeomorphism $f_\alpha \colon S_\alpha \to S_\alpha$. When $b_1(M) > 1$, there are infinitely many such pseudo-Anosov homeomorphisms; in fact, $|\chi(S_\alpha)|$ is a linear function of $\alpha$, and hence tends to infinity with $\alpha$. We let $\bar \alpha \in F$ denote the projection of the primitive integral class $\alpha$ in the cone over $F$, and let $F_{\mathbb Q}$ be the set of all such projections, which is precisely the (dense) set of rational points in $F$. \begin{question} \label{Q:lonely fibers} Given a fibered hyperbolic $3$--manifold and fibered face $F$, are the pseudo-Anosov homeomorphisms $f_\alpha$ for $\bar \alpha \in F_{\mathbb Q}$ generically lonely? \end{question} We will provide two pieces of evidence that the answer to this question is `yes'. Write ${\rm{Aff}}_+(X_\alpha,q_\alpha)$ for the orientation preserving affine group containing $f_\alpha$; see \S\ref{S:Veech groups} for more details. \newcommand{\TParabolicsA} {Suppose $F$ is the fibered face of a fibered hyperbolic $3$--manifold. Assuming Lehmer's Conjecture, the set of $\bar \alpha \in F_{\mathbb Q}$ such that ${\rm{Aff}}_+(X_\alpha,q_\alpha)$ contains a parabolic element is discrete in $F$.} \begin{theorem} \label{T:locally finite parabolics} \TParabolicsA \end{theorem} In certain examples, the set of classes whose associated Veech group contains parabolics is actually finite (again, assuming Lehmer's Conjecture); see Theorem~\ref{T:finite parabolics}. In \S\ref{S:examples} we describe some explicit computations that illustrate this finite set. Much of the defining structure survives for non-integral classes $\alpha \in F - F_{\mathbb Q}$; see \S\ref{S:foliations in cone} for details. Briefly, we first recall that every $\alpha \in F - F_{\mathbb Q}$ is represented by a closed $1$--form $\omega_\alpha$ which is positive on the vector field generating the suspension flow.
The kernel of $\omega_\alpha$ is tangent to a foliation $\mathcal F_\alpha$, and the flow can be reparameterized to send leaves of $\mathcal F_\alpha$ to other leaves. There is no longer a first return time, but rather a {\em higher rank abelian group} of return times, $H_\alpha$, to any given leaf $S_\alpha$ of $\mathcal F_\alpha$. Work of McMullen \cite{Mc} associates a {\em leaf-wise} complex structure and quadratic differential $(X_\alpha,q_\alpha)$ to each $\alpha \in F - F_{\mathbb Q}$ so that the leaf-to-leaf maps of the flow are all Teichm\"uller maps. For every leaf $S_\alpha$ of $\mathcal F_\alpha$, the return maps to $S_\alpha$ thus determine an isomorphism from $H_\alpha < \mathbb R$ to a subgroup we denote $H_\alpha^{{\rm{Aff}}} \!\! < {\rm{Aff}}_+(X_\alpha,q_\alpha)$, an abelian group of pseudo-Anosov elements. Our second piece of evidence for a positive answer to Question~\ref{Q:lonely fibers} is the following. \newcommand{\TLonelyLeaves}{If $F$ is a fibered face of a closed, fibered, hyperbolic $3$--manifold, then for all $\alpha \in F- F_{\mathbb Q}$, and any leaf $S_\alpha$ of $\mathcal F_\alpha$, the abelian group $H_\alpha^{{\rm{Aff}}} \!\! < {\rm{Aff}}_+(X_\alpha,q_\alpha)$ has finite index.} \begin{theorem} \label{T:lonely leaves} \TLonelyLeaves \end{theorem} For $\alpha \in F-F_{\mathbb Q}$, the leaves $S_\alpha$ are infinite type surfaces. In general, there is much more flexibility in constructing affine groups for infinite type surfaces, and exotic groups abound.
Indeed, work of Przytycki-Schmith\"usen-Valdez \cite{PrScVa} and Ram\'{\i}rez-Valdez \cite{RaVa} proves that {\em any} countable subgroup of ${\rm{GL}}_2(\mathbb R)$ without contractions is the derivative-image of some affine group. (See also Bowman \cite{Bowman-lonely} for a ``naturally occurring'' lonely pseudo-Anosov homeomorphism on an infinite type surface of finite area.) Theorem~\ref{T:lonely leaves} says that for the leaves $S_\alpha$ of the foliations and their associated quadratic differentials, the situation is much more rigid. \subsection*{Acknowledgements} The authors would like to thank Alan Reid for helpful conversations, and Ferr\'an Valdez for his interest in this project. The first author was partially supported by NSF grant DMS-2106419. The second author was partially supported by NSERC Discovery grant RGPIN 06486. The fifth author was partially supported by an NSERC-PDF Fellowship. \section{Definitions and background} \subsection{Fibered \texorpdfstring{$3$}{3}--manifolds} \label{S:fibered 3-manifolds} Here we explain the setup and background for our work in more detail. For a pseudo-Anosov homeomorphism $f \colon S\to S$ of a finite type surface $S$, let $\lambda(f)$ denote its {\em stretch factor} (also called its {\em dilatation}); see \cite{flp:TTS}. We write \[ M = M_f = S \times [0,1]/(x,1) \sim (f(x),0)\] to denote the mapping torus of the pseudo-Anosov homeomorphism $f$. The suspension flow $\psi_s$ of $f$ is generated by the vector field $\xi = \frac{\partial}{\partial t}$, where $t$ is the coordinate on the $[0,1]$ factor. Alternatively, we have the local flow of the same name $\psi_s(x,t) = (x,t+s)$ on $S \times [0,1]$, defined for $t, s+t \in [0,1]$, which descends to the suspension flow. A {\em cross section} (or just {\em section}) of the flow is a surface $S_\alpha \subset M$ transverse to $\xi$, such that for all $x \in S_\alpha$, $\psi_s(x) \in S_\alpha $ for some $s >0$.
If $s(x) > 0$ is the smallest such number, then the {\em first return map} of $\psi_s$ is the map $f_\alpha \colon S_\alpha \to S_\alpha$ defined by $f_\alpha(x) = \psi_{s(x)}(x)$ for $x \in S_\alpha$. Note that $S (= S \times \{0\}) \subset M$ is a section, and the first return map to $S$ is precisely the map $f = \psi_1|_S$. Cutting open along an arbitrary section $S_\alpha$, we get a product $S_\alpha \times [0,1]$ where the slices $\{x\} \times [0,1]$ are arcs of flow lines. Thus, $M$ can also be expressed as the mapping torus of $f_\alpha$, or alternatively, $M$ fibers over the circle with {\em monodromy} $f_\alpha$. Up to isotopy, the fiber $S_\alpha$ is determined by its Poincar\'e-Lefschetz dual cohomology class $\alpha = [S_\alpha] \in H^1(M; \mathbb Z) \subset H^1(M;\mathbb R) = H^1(M)$. To see how these are organized, we first recall the following theorem of Thurston \cite{ThNorm}. \begin{theorem} \label{T:Thurston cone} For $M = M_f$ as above, there is a finite union of open, convex, polyhedral cones $\mathcal C_1,\ldots,\mathcal C_k \subset H^1(M)$ such that $\alpha \in H^1(M;\mathbb Z)$ is dual to a fiber in a fibration over $S^1$ if and only if $\alpha \in \mathcal C_j$ for some $j$. Moreover, there is a norm $\| \cdot \|_T$ on $H^1(M)$ so that for each $\mathcal C_j$, $\| \cdot \|_T$ restricted to $\mathcal C_j$ is linear, and if $\alpha \in \mathcal C_j \cap H^1(M;\mathbb Z)$ then $\|\alpha\|_T$ is the negative of the Euler characteristic of the fiber dual to $\alpha$. \end{theorem} The unit ball $\mathfrak B$ of $\| \cdot \|_T$ is a polyhedron, and each $\mathcal C_j$ is the cone over the interior of a top dimensional face $F_j$ of $\mathfrak B$. The cones in the theorem are called the {\em fibered cones} of $M$ and the $F_j$ the {\em fibered faces} of $\mathfrak B$.
It follows from Thurston's proof of Theorem~\ref{T:Thurston cone} that each of the sections $S_\alpha$ of $(\psi_s)$ described above must lie in a single one of the fibered cones $\mathcal C$ over a fibered face $F$. The following theorem elaborates on this, combining results of Fried from \cite{Fr,Fr0}. \begin{theorem} \label{T:Fried cone} For $M = M_f$ as above, there is a fibered cone $\mathcal C \subset H^1(M)$ such that $\alpha \in H^1(M;\mathbb Z)$ is dual to a section of $(\psi_s)$ if and only if $\alpha \in \mathcal C$. Moreover, there is a function $\mathfrak h \colon \mathcal C \to \mathbb R_+$ which is continuous, convex, and homogeneous of degree $-1$, with the following properties. \begin{itemize} \item For any $\alpha \in \mathcal C \cap H^1(M;\mathbb Z)$, $f_\alpha$ is pseudo-Anosov and $\mathfrak h(\alpha) = \log(\lambda(f_\alpha))$. \item For any $\{ \alpha_n\} \subset \mathcal C$ with $\alpha_n \to \partial \mathcal C$, we have $\mathfrak h(\alpha_n) \to \infty$. \end{itemize} \end{theorem} We let $\mathcal C_{\mathbb Z} \subset \mathcal C$ denote the primitive integral classes in the fibered cone $\mathcal C$; that is, the integral points which are not nontrivial multiples of another element of $H^1(M;\mathbb Z)$. These correspond precisely to the connected sections of $(\psi_s)$. McMullen \cite{Mc} refined the analysis of $\mathfrak h$, proving for example that it is actually real-analytic. For this, he computed the stretch factors using his \textit{Teichm\"uller polynomial} $\Theta_{\mathcal{C}}$. This polynomial \[\Theta_{\mathcal{C}} = \sum_{g\in G}a_g g\] is an element of the group ring $\mathbb{Z}[G]$ where $G=H_1(M;\mathbb{Z})/\text{torsion}$. For $\alpha \in \mathcal{C}_{\mathbb Z}$, the \textit{specialization} of the Teichm\"uller polynomial is \[\Theta_{\mathcal{C}}^\alpha(t) = \sum_{g\in G} a_g t^{\alpha(g)} \in \mathbb{Z}[t^{\pm 1}]\] where we view $\alpha \in H^1(M;\mathbb Z) \cong {\rm{Hom}}(G,\mathbb Z)$.
Further, $G \cong H \oplus \mathbb{Z}$ where $H=\text{Hom}(H^1(S,\mathbb{Z})^f,\mathbb{Z}) \cong \mathbb{Z}^m$ and $H^1(S,\mathbb{Z})^f$ is the group of $f$--invariant cohomology classes. So we can regard $\Theta_{\mathcal{C}}$ as a Laurent polynomial on the generators $x_1, x_2, \ldots, x_m$ of $H$ and the generator $u$ of $\mathbb{Z}$. Then specialization to the dual of an element $(a_1, a_2, \ldots, a_m, b) \in \mathcal{C} \cap H^1(M;\mathbb{Z})$ amounts to setting $x_i=t^{a_i}$ for $1\leq i \leq m$ and $u=t^b$. McMullen proves that the specializations and the pseudo-Anosov first return maps are related by the following. \begin{theorem} \label{T:Teich poly} For any $\alpha \in \mathcal C_{\mathbb Z}$, the stretch factor $\lambda(f_\alpha)$ is a root of $\Theta_{\mathcal{C}}^{\alpha}$ with the largest modulus. \end{theorem} Combining the linearity of $\| \cdot \|_T$ on $\mathcal C$ together with the homogeneity of $\mathfrak h$, we have the following observation of McMullen; see \cite{Mc}. \begin{corollary} The function $\alpha \mapsto \|\alpha\|_T \mathfrak h(\alpha)$ is continuous and constant on rays from $0$. In particular, if $K \subset \mathcal C$ is any compact subset, then $\| \cdot \|_T \mathfrak h(\cdot)$ is bounded on $\mathbb R_+ K$. \end{corollary} The key corollary for us is the following, also observed by McMullen in the same paper. \begin{corollary} \label{C:asymptotics} If $\{ \alpha_n\}_n \subset \mathcal C_{\mathbb Z}$ is any infinite sequence of distinct elements, then $|\chi(S_{\alpha_n})| \to \infty$, and if the rays $\mathbb R_+ \alpha_n$ do not accumulate on $\partial \mathcal C$, then \[ \log(\lambda(f_{\alpha_n})) \asymp \frac{1}{|\chi(S_{\alpha_n})|}. \] In particular, $\lambda(f_{\alpha_n}) \to 1$. \end{corollary} \begin{remark}One can sometimes promote the final conclusion to {\em any} infinite sequence of distinct elements, without the assumption about non-accumulation to $\partial \mathcal C$; see the examples in \S\ref{S:examples}.
This is not always the case, and the accumulation set of stretch factors can be fairly complicated, as described by work of Landry-Minsky-Taylor \cite{LaMiTay}. \end{remark} \subsection{Foliations in the fibered cone} \label{S:foliations in cone} Fried's work described above \cite{Fr,Fr0} implies that any $\alpha \in \mathcal C$ may be represented by a closed $1$--form $\omega_\alpha$ for which $\omega_\alpha(\xi)>0$ at every point of $M$. For integral classes, $\omega_\alpha$ is the pull-back of the volume form from the fibration over the circle $\mathbb R/\mathbb Z$, and in general, $\omega_\alpha$ is a convex combination of such $1$--forms. The kernel of $\omega_\alpha$ defines a foliation $\mathcal F_\alpha$ transverse to $\xi$ whose leaves are injectively immersed surfaces $S_\alpha \subset M$. We consider the reparameterized flow $\{\psi_s^\alpha\}$ defined by replacing the generating vector field $\xi$ with $\xi/\omega_\alpha(\xi)$. Then for every leaf $S_\alpha \subset M$ of $\mathcal F_\alpha$ and for every $s \in \mathbb R$, the image by the flow $\psi_s^\alpha(S_\alpha)$ is another leaf of $\mathcal F_\alpha$. The subgroup $H_{\alpha} < \mathbb R$ mentioned in the introduction is precisely the set of return times of $\psi_s^\alpha$ to $S_\alpha$. As such, $H_\alpha$ acts on $S_\alpha$ so that $s \in H_\alpha$ acts by $s \cdot x = \psi_s^\alpha(x)$, for all $x \in S_\alpha$. The group $H_\alpha \cong \mathbb Z^n$ for some $n= n_{\alpha} \leq b_1(M)$, and can alternatively be defined as the set of periods of $\alpha$ (i.e.~the $\alpha$--homomorphic image of $H_1(M;\mathbb Z)$). A leaf $S_\alpha$ is a closed surface, and in fact a fiber as above, if and only if $n_\alpha = 1$, in which case $H_\alpha$ is a discrete subgroup of $\mathbb R$ and $\bar \alpha \in F_{\mathbb Q}$. On the other hand, $n_\alpha \geq 2$ if and only if the group of return times $H_\alpha$ is indiscrete, and so $S_\alpha$ is {\em dense} in $M$.
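To illustrate the indiscrete case, consider a hypothetical situation (the choice of class here is ours, purely for illustration): suppose $b_1(M) = 2$, fix an identification $H_1(M;\mathbb Z)/\text{torsion} \cong \mathbb Z^2$, and suppose $\alpha \in \mathcal C$ takes the values $1$ and $\sqrt{2}$ on the two generators. Since $H_\alpha$ is the $\alpha$--homomorphic image of $H_1(M;\mathbb Z)$, we get

```latex
H_\alpha \;=\; \alpha\bigl(H_1(M;\mathbb Z)\bigr)
         \;=\; \{\, m + n\sqrt{2} \;:\; m,n \in \mathbb Z \,\}
         \;\cong\; \mathbb Z^2 ,
```

which is dense in $\mathbb R$ because $\sqrt{2}$ is irrational; thus $n_\alpha = 2$ and every leaf of $\mathcal F_\alpha$ is dense in $M$.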
\subsection{Teichm\"uller flows and Veech groups} \label{S:Veech groups} In \cite{Mc}, McMullen defines a conformal structure and quadratic differential, $(X_\alpha,q_\alpha)$, on the leaves $S_\alpha$ of the foliation $\mathcal F_\alpha$, for all $\alpha \in \mathcal C$, with the following properties. For each $s \in \mathbb R$ and leaf $S_\alpha$, the leaf-to-leaf map $\psi_s^\alpha \colon S_\alpha \to \psi_s^\alpha(S_\alpha)$ is a Teichm\"uller map with initial/terminal quadratic differentials given by $q_\alpha$ on the respective leaves. In fact, there exists some $K_\alpha > 1$ so that $\psi_s^\alpha$ is a $K_\alpha^{|s|}$--Teichm\"uller map, and hence $K_\alpha^{2|s|}$--quasi-conformal. \begin{remark} The notation $(X_\alpha,q_\alpha)$ is somewhat ambiguous: this really denotes a family of structures, one on every leaf, though we abuse notation and also use this same notation to denote the restriction to any given leaf. \end{remark} The vertical and horizontal foliations of $q_\alpha$ on the leaves $S_\alpha$ of $\mathcal F_\alpha$ are obtained by intersecting with a {\em fixed} singular foliation on the $3$--manifold; namely, the suspension of the unstable/stable foliations for the original pseudo-Anosov homeomorphism $f$. In particular, the cone points (i.e.~zeros) of $q_\alpha$ are precisely the intersections of $S_\alpha$ with the $\psi_s$--flowlines through the cone points on the original surface $S$. Consequently, the cone points are isolated, and the cone angles are bounded by those of the original surface, and are hence bounded independent of $\alpha$. For $s \in H_\alpha$, $\psi_s^\alpha \colon S_\alpha \to S_\alpha$ is (a remarking) of the Teichm\"uller map, and thus an affine pseudo-Anosov homeomorphism with respect to $q_\alpha$. 
In this way, we obtain an isomorphism from $H_\alpha$ to a subgroup $H_\alpha^{{\rm{Aff}}} < {\rm{Aff}}_+(X_\alpha,q_\alpha)$, the group of orientation preserving affine homeomorphisms of the leaf $S_\alpha$ with respect to $(X_\alpha,q_\alpha)$. The derivative with respect to the preferred coordinates defines a map \[ D_\alpha \colon {\rm{Aff}}_+(X_\alpha,q_\alpha) \to {\rm{GL}}_2^+(\mathbb R)/\pm I,\] whose image is called the {\em Veech group} of $(X_\alpha,q_\alpha)$. A {\em parabolic} element of ${\rm{Aff}}_+(X_\alpha,q_\alpha)$ is one whose image by $D_\alpha$ is parabolic. \begin{remark} The preferred coordinates for a quadratic differential are only defined up to translation and rotation through angle $\pi$, so the derivative is only defined up to sign. If all affine homeomorphisms are area preserving (e.g.~if the surface has finite area) then the derivative maps to ${\rm{PSL}}_2(\mathbb R) = {\rm{SL}}_2(\mathbb R)/\pm I$. \end{remark} Since the vertical/horizontal foliations are the stable/unstable foliations, the image of $H_\alpha^{\rm{Aff}}$, which we denote $H_\alpha^D = D_\alpha(H_\alpha^{\rm{Aff}})$, is contained in the diagonal subgroup of ${\rm{PSL}}_2(\mathbb R)$: \[ H_\alpha^D < \Delta = \left\{ \left. \left( \begin{array}{cc} a & 0 \\ 0 & \frac1a \end{array} \right) \in {\rm{SL}}_2(\mathbb R) \,\, \right| \, \, a > 0 \, \, \right\}/\pm I.\] Define ${\rm SAff}(X_\alpha,q_\alpha) < {\rm{Aff}}_+(X_\alpha,q_\alpha)$ to be the area preserving subgroup of orientation preserving affine homeomorphisms; this is the preimage of ${\rm{PSL}}_2(\mathbb R)$ under $D_\alpha$. In particular, $H_\alpha^{\rm{Aff}} < {\rm SAff}(X_\alpha,q_\alpha)$. \subsection{Trace fields} A number field is \textit{totally real} if the image of every embedding into $\mathbb{C}$ lies in $\mathbb{R}$. Hubert-Lanneau~\cite{HuLaParabolic} proved the following.
\begin{theorem}\label{theorem: totally real trace field if parabolics} If a nonelementary Veech group contains a parabolic element, then the trace field is totally real. \end{theorem} A pseudo-Anosov $f$ being lonely implies that there are no parabolic elements in the Veech group, but not conversely; see \cite{HuLaMo}. McMullen~\cite[Corollary~9.6]{McTeichTrace} proved the following fact about the trace field of a Veech group; see also Kenyon-Smillie~\cite{KenSmi}. \begin{theorem} \label{T:pA trace generates} The trace field of a Veech group containing a pseudo-Anosov is generated by the trace of that pseudo-Anosov. That is, the trace field is given by $\mathbb Q(\lambda(f) + \lambda(f)^{-1})$. \end{theorem} Thus, this trace field is totally real precisely when the trace of the pseudo-Anosov has only real Galois conjugates. \subsection{Lehmer's Conjecture} Theorem~\ref{T:locally finite parabolics} depends on the validity of what is known as Lehmer's Conjecture \cite{Lehmer}, though Lehmer did not actually conjecture the statement we will use. See \cite{SmythSurvey}. To state this conjecture, we need the following. \begin{defn} Let $p(x) \in \mathbb C[x]$ have factorization over $\mathbb C$ \[ p(x) = a_0\prod_{i=1}^{m}(x-\gamma_i). \] The \textbf{Mahler measure} of $p$ is \[ \mathcal{M}(p) = \left|a_0\right|\prod_{i=1}^{m}\max\{1, |\gamma_i|\}. \] \end{defn} With this definition, we state the conjecture we assume. \begin{conjecture}[Lehmer] \label{conj:Lehmer} There is a constant $\mu > 1$ such that for every $p(x) \in \mathbb{Z}[x]$ with a root that is not a root of unity, we have $\mathcal{M}(p) \geq \mu$. \end{conjecture} \section{Examples} \label{S:examples} Here we provide examples of fibered faces of fibered $3$--manifolds and examine arithmetic features of the Veech groups of the corresponding pseudo-Anosov homeomorphisms.
\subsection{Example 1} \label{ex:hironaka1} Let $\beta = \sigma_1\sigma_2^{-1}$ be an element of the braid group $B_3$ on three strands (viewed as the mapping class group of a four-punctured sphere, $S$), where $\sigma_1$ and $\sigma_2$ denote the standard generators. Let $M$ denote the mapping torus of $\beta$. McMullen computes the Teichm\"uller polynomial for this manifold in detail in \cite{Mc}. See also Hironaka \cite{Hironaka}. Since $\beta$ permutes the strands of the braid cyclically, $b_1(M)=2$. Choosing appropriate bases, we obtain an isomorphism $H^1(M;\mathbb{Z}) \cong \mathbb{Z}^2$ so that the starting fiber surface $S$ is dual to $(0,1)$, the fibered cone is \[ \mathcal{C} = \{(a,b)\in \mathbb{R}^2 : b > 0, -b < a < b\}\] and the Teichm\"uller polynomial for this cone is \[\Theta_{\mathcal{C}}(x,u) = u^2 - u(x + 1 + x^{-1}) - 1.\] Specialization to an integral class $(a,b) \in \mathcal{C}_{\mathbb Z}$ amounts to setting $x=t^a$ and $u=t^b$ and yields \[\Theta_{\mathcal C}^{(a,b)}(t) = \Theta_{\mathcal{C}}(t^a,t^b) = t^{2b}-t^{b+a}-t^{b}-t^{b-a}+1.\] We used the mathematics software system SageMath \cite{sage} to factor $\Theta_{\mathcal C}^{(a,b)}(t)$ for all primitive integral pairs $(a,b) \in \mathcal{C}$ with $b < 50$, to determine the stretch factors $\lambda_{(a,b)}$ of the corresponding monodromies and their minimal polynomials. We then computed the conjugates of the corresponding traces, $\lambda_{(a,b)}+1/\lambda_{(a,b)}$, to determine whether the trace field of each associated Veech group is totally real. The results are shown in Figure \ref{figure: hironaka cone}. Recall that by Theorem \ref{theorem: totally real trace field if parabolics}, when this trace field is not totally real, the Veech group has no parabolic elements. These computations suggest that there are only finitely many pairs $(a,b)$ where the trace field is totally real. This is not a coincidence, as we will see below.
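For readers who wish to experiment, the specializations above are easy to explore numerically. The following Python sketch is an illustrative stand-in for the exact SageMath computation: it uses floating-point root-finding rather than exact factorization, and the function names are ours. It approximates the stretch factor $\lambda_{(a,b)}$ as the largest-modulus root of $\Theta_{\mathcal C}^{(a,b)}(t)$, as justified by Theorem~\ref{T:Teich poly}:

```python
import numpy as np

def specialized_theta(a, b):
    """Coefficients (highest degree first, as numpy.roots expects) of
    the specialization t^{2b} - t^{b+a} - t^b - t^{b-a} + 1, for -b < a < b."""
    c = np.zeros(2 * b + 1)
    c[0] = 1.0                    # leading term t^{2b}
    for k in (b + a, b, b - a):   # exponents of the three -1 terms
        c[2 * b - k] -= 1.0       # "-=" merges coincident exponents (e.g. a = 0)
    c[2 * b] += 1.0               # constant term
    return c

def stretch_factor(a, b):
    """Largest-modulus root of the specialization, a numerical proxy
    for the stretch factor lambda_{(a,b)}."""
    return max(abs(r) for r in np.roots(specialized_theta(a, b)))
```

For the fiber class $(0,1)$ the specialization is $t^2 - 3t + 1$, so this recovers $\lambda(\beta) = (3+\sqrt{5})/2 \approx 2.618$, and for classes such as $(b-1,b)$ with $b$ large the computed values tend toward $1$.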
For this, we record the following improvement on Corollary \ref{C:asymptotics} for the cone $\mathcal C$ for this example. \begin{lemma} \label{L:Hironaka finite} For any sequence $\alpha_n = (a_n,b_n) \in {\mathcal C}_{\mathbb Z}$ of distinct elements, we have $\lambda(f_{\alpha_n}) \to 1$. \end{lemma} \begin{proof} Since $\mathfrak h$ is convex, the maximum value of $\mathfrak h(a,b) = \log(\lambda(f_{(a,b)}))$, for points $(a,b) \in \mathcal C_{\mathbb Z}$ and a fixed $b$, occurs at either $(b-1,b)$ or $(1-b,b)$. First we consider the points of the form $(b-1,b)$. The specialization of $\Theta_{\mathcal C}$ in this case takes the form \[ \Theta_{\mathcal C}^{(b-1,b)}(t) = t^{2b} - t^{2b-1} - t^b - t+1.\] Write $\lambda_b = \lambda(f_{(b-1,b)}) > 1$. As $b \to \infty$, we claim that $\lambda_b \to 1$. Suppose instead that $\lambda_b \geq 1+\epsilon$ for some $\epsilon > 0$ along some subsequence. Then along this subsequence we have \begin{align*} \Theta_{\mathcal C}^{(b-1,b)}(\lambda_b) &= \lambda_b^{2b}(1 - \lambda_b^{-1} - \lambda_b^{-b} - \lambda_b^{1-2b}) + 1 \\ &\geq (1+\epsilon)^{2b}\left(1-(1+\epsilon)^{-1} - (1+\epsilon)^{-b} - (1+\epsilon)^{1-2b}\right). \end{align*} The first factor on the right hand side tends to infinity as $b$ does, while the second factor tends toward $1-(1+\epsilon)^{-1} = \epsilon / (1+\epsilon) > 0$. This implies that $\Theta_{\mathcal C}^{(b-1,b)}(\lambda_b)$ tends to infinity, whereas it is identically equal to $0$. This contradiction proves the claim. For points of the form $(1-b,b)$, the specialization takes the form \[ \Theta_{\mathcal C}^{(1-b,b)}(t) = t^{2b} - t - t^b - t^{2b-1} + 1 = \Theta_{\mathcal C}^{(b-1,b)}(t). \] Therefore, $\lambda(f_{(1-b,b)}) = \lambda(f_{(b-1,b)}) = \lambda_b$ and as $b \to \infty$, these both tend to $1$.
\end{proof} \begin{figure} \includegraphics[width=\linewidth, trim = {0 2cm 0 3cm}, clip]{hironaka_cone.pdf} \caption{Primitive integral elements in a fibered cone for the mapping torus of the three-strand braid $\sigma_1\sigma_2^{-1}$. Elements marked with green triangles have corresponding Veech group with trace field that is not totally real.} \label{figure: hironaka cone} \end{figure} One of the difficulties in the proof of Theorem~\ref{T:locally finite parabolics} is understanding the degrees of the trace fields. This is complicated by the fact that the Teichm\"uller polynomial need not be irreducible in general. For example, when specialized to $(a,b) = (9, 14)$, the Teichm\"uller polynomial in this example splits into the cyclotomic polynomials $t^2 - t + 1$ and $t^4 - t^2 + 1$, together with the minimal polynomial of the corresponding stretch factor. However, in other cases, such as the specialization to $(a,b) = (5,14)$, the Teichm\"uller polynomial remains irreducible. We refer the reader to \cite{FilGar} for more on the factorizations of the specialized polynomials in the example above. As we will see in the example below, the Teichm\"uller polynomial also sometimes admits additional non-cyclotomic factors aside from the minimal polynomial of the corresponding stretch factor. \subsection{Example 2} Let $\beta' = \beta^2$, for $\beta$ from the preceding example. Let $M'$ denote the mapping torus of $\beta'$ and ${\theta'}_{\mathcal{C}'}$ the Teichm\"uller polynomial of the fibered cone $\mathcal{C}'$ containing the dual of $\beta'$. Here we will observe three different splitting behaviors of specializations of the Teichm\"uller polynomial. In particular, we see that certain specializations of ${\theta'}_{\mathcal{C}'}$ split into multiple non-cyclotomic factors, limiting what information can be derived about conjugates of the corresponding stretch factors and their traces by looking at the collection of all roots of ${\theta'}_{\mathcal{C}'}$.
The Teichm\"uller polynomial here is \[{\theta'}_{\mathcal{C}'}(x,u) = u^2 - u(x^2 + 2x + 1 +2x^{-1} + x^{-2}) + 1\] over the cone \[\mathcal{C} = \{(a,b) \in \mathbb{R}^2 : b > 0, -b/2 < a < b/2\}.\] The specialization to $(a,b) = (6,17)$ is irreducible over $\mathbb{Z}$: \[t^{34} - t^{29} - 2t^{23} - t^{17} - 2t^{11} - t^5 + 1,\] while the specialization to $(a,b) = (7, 17)$ splits as a cyclotomic and non-cyclotomic factor: \begin{multline*} (t^4 + t^3 + t^2 + t + 1) (t^{30} - t^{29} - t^{27} + t^{26} + t^{25} - t^{24} - t^{22} + t^{21} - t^{20} + t^{19} - t^{17} + t^{16}\\ - t^{15} + t^{14} - t^{13} + t^{11} - t^{10} + t^9 - t^8 - t^6 + t^5 + t^4 - t^3 - t + 1), \end{multline*} and the specialization to $(a,b) = (7, 18)$ has multiple non-cyclotomic factors: \[(t^2 - t + 1) (t^4 + t^3 + t^2 + t + 1) (t^{12} - t^9 - t^8 + t^7 + t^6 + t^5 - t^4 - t^3 + 1) (t^{18} - t^{16} - t^9 - t^2 + 1).\] Figure \ref{figure: fibered cone squared} shows whether the Veech groups corresponding to elements of $\mathcal{C}'$ have totally real trace field. For all three specializations described in this example, the corresponding Veech group trace field is not totally real. The analog to Lemma \ref{L:Hironaka finite} holds in this example as well. $M'$ is a 2-fold cover of $M$ so the stretch factors in $\mathcal{C}_{\mathbb{Z}}'$ are at most squares of the stretch factors in $\mathcal{C}_{\mathbb{Z}}$. \begin{figure} \includegraphics[width=\linewidth, trim = {0 2cm 0 3cm}, clip]{fibered_cone_1212} \caption{Primitive integral elements in a fibered cone for the mapping torus of the three-strand braid $(\sigma_1\sigma_2^{-1})^2$. 
Elements marked with green triangles correspond to Veech groups whose trace field is not totally real.} \label{figure: fibered cone squared} \end{figure} \section{Most Veech groups have no parabolics} We are now ready for the proof of the first theorem from the introduction.\\ \noindent {\bf Theorem~\ref{T:locally finite parabolics}.} {\em \TParabolicsA} \begin{proof} Consider any sequence of distinct elements $\alpha_n$ in $\mathcal C_{\mathbb Z}$ such that $\bar \alpha_n$ does not accumulate on $\partial F$. We need to show that ${\rm{Aff}}_+(X_{\alpha_n},q_{\alpha_n})$ contains a parabolic for at most finitely many $n$. According to Theorem~\ref{theorem: totally real trace field if parabolics}, it suffices to prove that the trace field is totally real for at most finitely many $n$. Setting $\lambda_n = \lambda(f_{\alpha_n})$, Theorem~\ref{T:pA trace generates} implies that the trace field of ${\rm{Aff}}_+(X_{\alpha_n},q_{\alpha_n})$ is $\mathbb Q(\lambda_n + \lambda_n^{-1})$. Next, let $N$ be the number of terms of the Teichm\"uller polynomial $\Theta_{\mathcal C}$ for $\mathcal C$. The stretch factor $\lambda_n$ is the largest modulus root of the specialization $\Theta_{\mathcal C}^{\alpha_n}(t)$ by Theorem~\ref{T:Teich poly}. We observe that this polynomial has no more nonzero terms than $\Theta_{\mathcal C}$, and thus has at most $N$ terms. Descartes's rule of signs implies that the number of real roots of $\Theta_{\mathcal C}^{\alpha_n}$ is at most $2N-2$. Suppose that $p_n(t)$ is the minimal polynomial of $\lambda_n$, which is thus a factor of $\Theta_{\mathcal C}^{\alpha_n}(t)$ (up to powers of $t$, which we will ignore). In particular, note that $\lambda_n$ bounds the modulus of all other roots of $p_n(t)$. The stretch factors are always algebraic integers, and hence $p_n(t)$ is monic. The Mahler measure of $p_n$ is therefore the product of the moduli of its roots outside the unit circle.
There are at most $2N-2$ real roots of $\Theta_{\mathcal C}^{\alpha_n}(t)$, and hence the same is true of $p_n(t)$. Write \[ \mathcal M(p_n) = A_nB_n \] where $A_n$ is the product of the moduli of the {\em real} roots outside the unit circle and $B_n$ is the product of the moduli of the non-real roots outside the unit circle (in each case, $1$ if there are none). Thus, we have \begin{equation} \label{E:real root bound} A_n \leq \lambda_n^{2N-2}. \end{equation} Now, as $n \to \infty$, we have $|\chi(S_{\alpha_n})| = \| \alpha_n \|_T \to \infty$. Since $\bar \alpha_n$ does not accumulate on $\partial F$, Corollary~\ref{C:asymptotics} implies $\lambda_n = \lambda(f_{\alpha_n}) \to 1$. By \eqref{E:real root bound}, it follows that $A_n \to 1$ as $n \to \infty$. Since we are assuming Lehmer's Conjecture, it follows that $B_n > 1$ for all but finitely many $n$. That is, there is at least one non-real root $\zeta_n$ of $p_n(t)$ outside the unit circle. (In fact, the number of such roots tends to infinity linearly with $|\chi(S_{\alpha_n})|$ since $\lambda_n$ has the maximum modulus of any root of $p_n(t)$.) Therefore, for all but finitely many $n$, the embedding of $\mathbb Q(\lambda_n + \lambda_n^{-1})$ into $\mathbb C$ sending $\lambda_n+\lambda_n^{-1}$ to $\zeta_n + \zeta_n^{-1}$ has non-real image, since $\zeta_n$ is non-real and lies off the unit circle. Therefore, $\mathbb Q(\lambda_n+\lambda_n^{-1})$ is totally real for at most finitely many $n$, as required. \end{proof} \begin{remark} The proof of Theorem~\ref{T:locally finite parabolics} follows a strategy of Craig Hodgson, \cite{Hodgson}, for understanding trace fields under hyperbolic Dehn filling. \end{remark} The key ingredient is that for the sequences $\{\alpha_n\}$ in $\mathcal C_{\mathbb Z}$ considered above, we have $\lambda(f_{\alpha_n}) \to 1$.
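The specializations appearing in the example of the previous section can be reproduced directly, since the specialization $\Theta_{\mathcal C}^{\alpha}(t)$ at an integral class $(a,b)$ is obtained from the Teichm\"uller polynomial by the substitution $x \mapsto t^a$, $u \mapsto t^b$. The following is a minimal computational sketch of this check, our own illustration assuming the SymPy computer algebra library:

```python
# Sanity check (illustration only; assumes SymPy) of the specializations of
# the Teichmueller polynomial
#   theta(x, u) = u^2 - u*(x^2 + 2x + 1 + 2x^{-1} + x^{-2}) + 1
# from the example: the specialization at an integral class (a, b)
# substitutes x -> t^a and u -> t^b; for -b/2 < a < b/2 all exponents of the
# result are nonnegative.
import sympy as sp

t = sp.symbols('t')

def specialize(a, b):
    x, u = t**a, t**b
    return sp.expand(u**2 - u*(x**2 + 2*x + 1 + 2/x + x**-2) + 1)

# (a, b) = (6, 17) gives the displayed degree-34 polynomial.
assert specialize(6, 17) == sp.expand(
    t**34 - t**29 - 2*t**23 - t**17 - 2*t**11 - t**5 + 1)

# (a, b) = (7, 17) is divisible by the cyclotomic factor t^4+t^3+t^2+t+1,
# and (a, b) = (7, 18) by t^2 - t + 1, as in the stated factorizations.
assert sp.rem(specialize(7, 17), t**4 + t**3 + t**2 + t + 1, t) == 0
assert sp.rem(specialize(7, 18), t**2 - t + 1, t) == 0
```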
\begin{theorem} \label{T:finite parabolics} Suppose $F$ is the fibered face of a fibered hyperbolic $3$--manifold and that $1$ is the only accumulation point of the set \[ \{\lambda(f_\alpha) \mid \bar \alpha \in F_{\mathbb Q}\}.\] Assuming Lehmer's Conjecture, the set of $\bar \alpha \in F_{\mathbb Q}$ such that ${\rm{Aff}}_+(X_\alpha,q_\alpha)$ contains a parabolic element is finite. \end{theorem} \begin{proof} This is exactly the same as the proof of Theorem~\ref{T:locally finite parabolics}, except that the assumption that $1$ is the only accumulation point of $\{\lambda(f_\alpha) \mid \bar \alpha \in F_{\mathbb Q}\}$ replaces the references to Corollary~\ref{C:asymptotics}, and does away with the requirement that $\bar \alpha_n$ does not accumulate on $\partial F$. \end{proof} Returning to the examples from Section \ref{S:examples}, Lemma~\ref{L:Hironaka finite} and the discussion in both examples imply that the hypotheses of Theorem~\ref{T:finite parabolics} are satisfied. Thus only finitely many elements $\alpha \in \mathcal C_{\mathbb Z}$ are such that ${\rm{Aff}}_+(X_\alpha,q_\alpha)$ can contain parabolics. We refer the reader to \cite{LaMiTay} for more on the accumulation set of $\{\lambda(f_\alpha) \mid \alpha \in \mathcal C_{\mathbb Z}\}$. \section{Veech groups of leaves} We now turn our attention to the non-integral points in the cone and the second theorem from the introduction. \bigskip \noindent {\bf Theorem~\ref{T:lonely leaves}.} {\em {If $F$ is a fibered face of a closed, fibered, hyperbolic $3$--manifold, then for all $\alpha \in F- F_{\mathbb Q}$, and any leaf $S_\alpha$ of $\mathcal F_\alpha$, the abelian group $H_\alpha^{\Aff} \!\! < \Aff_+(X_\alpha,q_\alpha)$ has finite index.} } \bigskip For the rest of the paper, we assume $M$ is a closed, fibered, hyperbolic $3$--manifold.
The results of this section are only nontrivial if $b_1(M) >1$, since otherwise $F- F_{\mathbb Q} = \emptyset$ for any fibered face $F$ (in that case $F = F_{\mathbb Q}$ is a point). Given $\alpha \in F$, we recall that $\psi_s^\alpha$ is the reparameterized flow as in \S\ref{S:foliations in cone}, that sends leaves of $\mathcal F_\alpha$ to leaves. Furthermore, $(X_\alpha,q_\alpha)$ is the leaf-wise conformal structure and quadratic differential, and there is $K_\alpha > 1$ so that $\psi_s^\alpha$ is the $K_\alpha^{|s|}$--Teichm\"uller map, hence $K_\alpha^{2|s|}$--quasi-conformal and $K_\alpha^{|s|}$--bi-Lipschitz. \begin{lemma} \label{L:compact flow} For any $\alpha \in F - F_{\mathbb Q}$ there exists a compact subsurface $Z \subset S_\alpha$ such that \[ M = \bigcup_{s \in [0,1]} \psi_s^\alpha(Z).\] \end{lemma} \begin{proof} Choose an exhaustion of $S_\alpha$ by a sequence of compact subsurfaces: \[Z_1 \subsetneq Z_2 \subsetneq Z_3 \subsetneq \cdots \subsetneq S_\alpha, \mbox{ with } \bigcup_{n=1}^\infty Z_n = S_\alpha, \] and observe that \[ \left\{ \bigcup_{s \in (0,1)} \psi_s^\alpha(\mbox{int}(Z_n)) \right\}_{n=1}^\infty \] is an open cover of $M$ since every leaf is dense. Since $M$ is compact, the open cover admits a finite subcover of $M$. As the compact surfaces $Z_n$ are nested, there exists an index $N$ such that for $Z = Z_N$ we have \[ M = \bigcup_{s \in [0,1]} \psi_s^\alpha(Z). \qedhere \] \end{proof} The isomorphism $H_\alpha \cong H_\alpha^{\rm{Aff}}$ is given by $s \mapsto \psi_s^\alpha|_{S_\alpha}$. We write \[ H^{\rm{Aff}}_\alpha[0,1]\subset H^{\rm{Aff}}_\alpha\] for the image of $H_\alpha \cap [0,1]$ under this isomorphism. Note that every element of $H^{\rm{Aff}}_\alpha[0,1]$ is $K_\alpha^2$--quasi-conformal and $K_\alpha$--bi-Lipschitz since $0 \leq s \leq 1$. As a consequence of Lemma~\ref{L:compact flow}, we have the following.
\begin{corollary} \label{C:compact translates} For $\alpha \in F- F_{\mathbb Q}$ and $Z \subset S_\alpha$ as in Lemma~\ref{L:compact flow} we have \[ S_\alpha = \bigcup_{h \in H_\alpha^{\rm{Aff}}[0,1]} h(Z).\] \end{corollary} \begin{proof} Let $Z \subset S_\alpha$ be the compact subsurface from Lemma \ref{L:compact flow}, so that for every $x \in S_{\alpha} \subseteq M$, we have $x \in \psi_s^\alpha(Z)$ for some $s \in [0,1]$. Since $x \in S_{\alpha}$, this implies that $s \in H_{\alpha}$. Therefore \[ S_\alpha = \bigcup_{s \in H_\alpha \cap [0,1]} \psi_s^\alpha(Z) = \bigcup_{h \in H_\alpha^{\rm{Aff}}[0,1]} h(Z). \qedhere \] \end{proof} \begin{corollary} \label{C:bounded geometry} For any $\alpha \in F-F_{\mathbb Q}$ there exists $C >0$ so that for any leaf $S_\alpha$ of $\mathcal F_\alpha$, the geometry of $q_\alpha$ is bounded. Specifically, (1) the length of any saddle connection is at least $C$, and in particular the distance between any two cone points is at least $C$, (2) all cone points have finite (uniformly bounded) cone angle, and (3) $(X_\alpha,q_\alpha)$ is complete. \end{corollary} \begin{proof} Let $S_{\alpha}$ be any leaf, and consider the compact surface $Z$ from Corollary \ref{C:compact translates}. By making $Z$ slightly larger, we can assume that no singular points of $q_{\alpha}$ lie on the boundary of $Z$. Denote the set of all singularities of $q_{\alpha}$ by $A$. Let $d_{\partial Z}(a)$ denote the distance of a singularity $a \in A$ to the boundary of $Z$, and let $d_{Z}(a, b)$ denote the minimal length of a saddle connection in $Z$ between two (not necessarily distinct) singularities $a,b \in A \cap Z$. Since $Z$ is compact, we have that \[ \epsilon = \min \left\{ \min_{a, b \in A \cap Z} d_{Z}(a, b), \min_{a \in A \cap Z} d_{\partial Z}(a) \right\} > 0. \] Pick a saddle connection $\omega$ connecting any singularity $a$ to any singularity $b$. There exists an $h \in H_\alpha^{\rm{Aff}}[0,1]$ such that $h(Z)$ contains $a$.
Since $h$ is $K_{\alpha}$--bi-Lipschitz, either $\omega$ is contained in $h(Z)$ and has length at least $\epsilon K_\alpha^{-1}$, or it leaves $h(Z)$ and we deduce that $\omega$ has length at least the distance from $a$ to $\partial h(Z)$, which is at least $\epsilon K_\alpha^{-1}$. In either case, we obtain a uniform lower bound $\epsilon K_\alpha^{-1}$ on the length of $\omega$, proving (1). As was noted in Section \ref{S:Veech groups}, all cone points have finite cone angle, which proves (2). Since $Z$ is compact, there is an $\epsilon'$ so that the $\epsilon'$--neighborhood of $Z$ also has compact closure, which is thus complete. Any Cauchy sequence has a tail that is contained in the $h$-image of the closure of this neighborhood for some $h \in H_\alpha^{\rm{Aff}}[0,1]$. Since this $h$--image is also complete, the Cauchy sequence converges, and we have that $(X_\alpha,q_\alpha)$ is complete, which proves (3). \end{proof} \begin{remark}\label{Rmk:TameSurfaces} Note that Corollary~\ref{C:bounded geometry} implies that our surfaces are tame in the sense of Definition 2.1 of \cite{PrScVa}. \end{remark} An important observation is the following: for any element $g \in {\rm{Aff}}_+(X_\alpha,q_\alpha)$, we can choose some element $h \in H^{\rm{Aff}}_{\alpha}[0,1]$ so that $h \circ g(Z) \cap Z \neq \emptyset$, and furthermore, if $g$ is $K$--quasi-conformal, then $h\circ g$ is $(KK_\alpha^2)$--quasi-conformal. \begin{proposition} \label{P:constant subsequences} Suppose $\alpha \in F-F_{\mathbb Q}$, $K_0 > 1$, and $\{g_n\}_{n=1}^\infty \subset {\rm{Aff}}_+(X_\alpha,q_\alpha)$ is a sequence of elements with $K(g_n) \leq K_0$. Then there is a subsequence $\{g_{n_k}\}_{k=0}^\infty$ and $\{h_{n_k}\}_{k=0}^\infty \subset H_{\alpha}^{\rm{Aff}}[0,1]$ so that $h_{n_k} \circ g_{n_k} = h_{n_0} \circ g_{n_0}$ for all $k \geq 0$.
\end{proposition} \begin{proof} From the observation before the statement, we can find $h_n \in H_{\alpha}^{\rm{Aff}}[0,1]$ so that $h_n \circ g_n(Z) \cap Z \neq \emptyset$. Next, observe that $h_n \circ g_n$ is $(K_0K_{\alpha}^2)$--quasi-conformal, so by compactness of quasi-conformal maps, after passing to a subsequence, $h_{n_k} \circ g_{n_k}$ converges uniformly on compact sets to a map $f$. The maps $h_{n_k} \circ g_{n_k}$ are affine, so they must map cone points to cone points. Since the cone points are uniformly separated by Corollary~\ref{C:bounded geometry}, there are a pair of cone points $a,b$ so that for $k$ sufficiently large $h_{n_k} \circ g_{n_k}(a) = b$. Moreover, if we pick a pair of saddle connections in linearly independent directions emanating from $a$, then for $k$ sufficiently large the maps $h_{n_k} \circ g_{n_k}$ all agree on this pair, again by Corollary~\ref{C:bounded geometry}. But these conditions uniquely determine the affine homeomorphism, and hence $h_{n_k} \circ g_{n_k}$ is eventually constant, and passing to a tail-subsequence of this subsequence completes the proof. \end{proof} From this we can prove a special case of Theorem~\ref{T:lonely leaves}: \begin{proposition}\label{Prop:H_alphaFI} If $\alpha \in F-F_{\mathbb Q}$, then $H_{\alpha}^{\rm{Aff}}$ has finite index in ${\rm SAff}(X_\alpha,q_\alpha)$. \end{proposition} \begin{proof} Suppose that $H_{\alpha}^{{\rm{Aff}}}$ does not have finite index, and consider the closure of the $D_\alpha$--image in ${\rm{PSL}}_2(\mathbb R)$: \[ G = \overline{D_\alpha({\rm SAff}(X_\alpha,q_\alpha))}. \] Since $\alpha \in F- F_{\mathbb Q}$, every leaf $S_\alpha$ of $\mathcal F_\alpha$ is dense in $M$. Therefore $H_{\alpha}^{D}< \Delta \cong \mathbb R$ is an abelian subgroup with rank at least $2$, and hence is dense. Consequently, $\Delta < G$.
By the classification of Lie subalgebras of $\mathfrak{s}\mathfrak{l}_2(\mathbb R)$ (or a direct calculation) we observe that, after replacing $G$ with a finite index subgroup, we must be in one of the following situations: \begin{enumerate} \item $G = {\rm{PSL}}_2(\mathbb R)$, \item $G$ is the subgroup of upper triangular matrices, or \item $G = \Delta$. \end{enumerate} In any case, we claim that there is a sequence of elements $\{g_n\} \subset {\rm SAff}(X_\alpha,q_\alpha)$ such that $D_\alpha(g_n) \to I$ in ${\rm{PSL}}_2(\mathbb R)$ and so that $H_{\alpha}^{{\rm{Aff}}}g_n$ are distinct cosets of $H_{\alpha}^{{\rm{Aff}}}$. Assuming the claim, we prove the proposition. For this, we simply apply Proposition~\ref{P:constant subsequences}, passing to a subsequence (of the same name) so that $h_n \circ g_n = h_0 \circ g_0$ for all $n \geq 0$. This contradicts the fact that $\{H_{\alpha}^{{\rm{Aff}}} g_n\}$ are all distinct cosets. To prove the claim, notice that in the first two cases, a finite index subgroup of $D_\alpha({\rm SAff}(X_\alpha,q_\alpha))$ is dense in the Lie subgroup $G \leq {\rm{PSL}}_2(\mathbb R)$, and $\Delta < G$ is a $1$--dimensional submanifold of $G$, which itself has dimension $3$ or $2$ in cases (1) and (2), respectively. This implies that there exists a sequence $\{ g_n \} \subset {\rm SAff}(X_\alpha,q_\alpha)$ such that $D_{\alpha}(g_n) \rightarrow I$ as $n \rightarrow \infty$ but $D_{\alpha}(g_n) \notin \Delta$. By way of contradiction, suppose that there exists a subsequence $\{ g_{n_i} \}$ such that the $g_{n_i}$ are in the same coset $H_{\alpha}^{{\rm{Aff}}}g$ where $D_{\alpha}(g) \notin \Delta$. This implies that $D_{\alpha}(g_{n_i}) \in \Delta D_{\alpha}(g)$, which is a $1$--manifold parallel to $\Delta$ and does not accumulate at $I$. This contradicts the fact that $D_{\alpha}(g_{n_i}) \rightarrow I$. Therefore, there exists a subsequence of $\{ g_n \}$ such that $\{H_{\alpha}^{{\rm{Aff}}} g_n\}$ are all distinct cosets.
To prove the final case of the claim, we argue two distinct subcases. First, if $H_{\alpha}^D$ has infinite index in $D_{\alpha}({\rm SAff}(X_{\alpha},q_{\alpha}))$, then by definition there exist infinitely many distinct cosets $b_n^{D} H_{\alpha}^D $ of $H_{\alpha}^D$ in $D_{\alpha}({\rm SAff}(X_{\alpha},q_{\alpha}))$. Since $H_{\alpha}^{D}$ is dense in $\Delta$, there are elements \[ a_n^{D} \in H_{\alpha}^{D} \quad\text{such that}\quad b_n^{D} a_n^{D} \rightarrow I \quad\text{as}\quad n \rightarrow \infty. \] Choose a sequence $g_n \in {\rm SAff}(X_{\alpha},q_{\alpha})$ such that $D_{\alpha}(g_n) = b_n^{D} a_n^{D}$. Then $D_{\alpha}(g_n) \rightarrow I$ in ${\rm{PSL}}_2(\mathbb R)$ and $H_{\alpha}^{{\rm{Aff}}}g_{n}$ are distinct cosets of $H_{\alpha}^{{\rm{Aff}}}$. Secondly, suppose $H_{\alpha}^{D}$ has finite index in $D_{\alpha}({\rm SAff}(X_{\alpha},q_{\alpha}))$. Since we are assuming that $H_{\alpha}^{{\rm{Aff}}}$ has infinite index in ${\rm SAff}(X_{\alpha},q_{\alpha})$, we have infinitely many distinct cosets $b_n^{{\rm{Aff}}} H_{\alpha}^{{\rm{Aff}}}$ of $H_{\alpha}^{{\rm{Aff}}}$ in $ {\rm SAff}(X_{\alpha},q_{\alpha})$. Since $H_{\alpha}^{D}$ is dense in $\Delta$, we can find a sequence \[ \big\{a_n^{{\rm{Aff}}}\big\} \subset H_{\alpha}^{{\rm{Aff}}} \quad\text{such that}\quad D_\alpha\big(b_n^{{\rm{Aff}}}\big)D_\alpha\big(a_n^{{\rm{Aff}}}\big) \rightarrow I \quad\text{as}\quad n \rightarrow \infty. \] Let $g_n = a_n^{{\rm{Aff}}} b_n^{{\rm{Aff}}}$. Then $D_{\alpha}(g_n) \rightarrow I$ in ${\rm{PSL}}_2(\mathbb R)$ and $H_{\alpha}^{{\rm{Aff}}}g_{n}$ are distinct cosets of $H_{\alpha}^{{\rm{Aff}}}$. This completes the proof of the claim. Since we already proved the proposition assuming the claim, we are done. \end{proof} To complete the proof of Theorem~\ref{T:lonely leaves}, we need only prove the following. \begin{proposition} $ \mathrm{Aff}_+(X_{\alpha}, q_{\alpha}) = \mathrm{SAff}(X_{\alpha}, q_{\alpha})$.
\end{proposition} \begin{proof} First, observe that ${\rm SAff}(X_\alpha,q_\alpha)$ is a normal subgroup of ${\rm{Aff}}_+(X_\alpha,q_\alpha)$ since it is precisely the kernel of the homomorphism given by the determinant of the derivative. In fact, from this homomorphism, either ${\rm{Aff}}_+(X_\alpha,q_\alpha) = {\rm SAff}(X_\alpha,q_\alpha)$ or else the index $[{\rm{Aff}}_+(X_\alpha,q_\alpha): {\rm SAff}(X_\alpha,q_\alpha)]$ is infinite. After passing to a finite index subgroup, $\Gamma < {\rm{Aff}}_+(X_\alpha,q_\alpha)$, if necessary, the conjugation action of $\Gamma$ on ${\rm SAff}(X_\alpha,q_\alpha)$ preserves the finite index subgroup $H_\alpha^{\rm{Aff}}$ (and without loss of generality, $H_\alpha^{\rm{Aff}} < \Gamma$). It thus suffices to prove $\Gamma < {\rm SAff}(X_\alpha,q_\alpha)$, or equivalently, $D_\alpha(\Gamma) < {\rm{PSL}}_2(\mathbb R)$. Consider any element \[ g = \begin{pmatrix} a & b\\ c & d \end{pmatrix} \in D_\alpha(\Gamma) \quad \mbox{ and } \quad h = \begin{pmatrix} \lambda & 0\\ 0 & \lambda^{-1} \end{pmatrix} \in H_\alpha^D,\] with $\lambda \neq \pm 1$. Then $ghg^{-1} \in H_\alpha^D$, and is given by \begin{equation*} \begin{aligned} ghg^{-1} & = \frac{1}{\mathrm{det}(g)}\begin{pmatrix} a & b\\ c & d \end{pmatrix} \begin{pmatrix} \lambda & 0\\ 0 & \lambda^{-1} \end{pmatrix} \begin{pmatrix} d & -b\\ -c & a \end{pmatrix}\\ &= \frac{1}{\mathrm{det}(g)} \begin{pmatrix} ad \lambda - bc \lambda^{-1} & ab(\lambda-\lambda^{-1})\\ cd (\lambda - \lambda^{-1}) & ad\lambda^{-1}-bc \lambda \end{pmatrix}. \end{aligned} \end{equation*} In order for this element to be in $H_\alpha^D$ (hence diagonal), we must have that $ab = 0$ and $cd = 0$. Suppose that $a = 0$. If $c = 0$, then the first column of $g$ vanishes and $g$ is not invertible, so we must have that $c \neq 0$ and instead that $d = 0$. This gives us that $g$ is a matrix of the form \begin{equation*} g = \begin{pmatrix} 0 & b\\ c & 0 \end{pmatrix}.
\end{equation*} We note that the square of a matrix of this form is a diagonal matrix. Similarly, if $b = 0$, we must have that $c = 0$ and we have that $g$ is a matrix of the form \begin{equation*} g= \begin{pmatrix} a & 0\\ 0 & d \end{pmatrix}. \end{equation*} Together, these two conclusions imply that either $g$ or $g^2$ is diagonal. Now we show that $D_\alpha(\Gamma) < {\rm{PSL}}_2(\mathbb R)$. If not, then there exists $g \in D_\alpha(\Gamma)$ with $0<\det(g) \neq 1$. After squaring and inverting if necessary, we may assume that $g$ is diagonal, \[ g= \left( \begin{matrix} \lambda & 0 \\ 0 & \sigma \end{matrix} \right), \] and $0 < \det(g) = \lambda \sigma < 1$. Without loss of generality, suppose $\lambda < 1$. Notice that there exists an element $h \in H_{\alpha}^D$ of the form \begin{equation*} h = \begin{pmatrix} \mu & 0\\ 0 & \mu^{-1} \end{pmatrix} \end{equation*} and there exist $n,k \in \mathbb{Z}$ so that \begin{equation*} m = g^{n}h^k = \begin{pmatrix} r & 0\\ 0 & s \end{pmatrix} \end{equation*} where $0 < r,s < 1$. Therefore, $m^j$ is a contraction for all $j > 0$, which implies that it is contracting in both directions. Fixing a saddle connection $\omega$ of $q_\alpha$, it follows that the length of $m^j(\omega)$ tends to $0$ as $j \to \infty$. This contradicts Corollary~\ref{C:bounded geometry}, part (1), and thus proves that $D_\alpha(\Gamma) < {\rm{PSL}}_2(\mathbb R)$, as required. \end{proof} \begin{remark} The final contradiction in the proof also follows from Theorem~1.1 of \cite{PrScVa}, since $D_\alpha({\rm{Aff}}_+(X_\alpha,q_\alpha))$ is necessarily of type (i) in that theorem. \end{remark} \bibliographystyle{alpha} \newcommand{\etalchar}[1]{$^{#1}$}
\section{Introduction} This paper can be thought of as a continuation of \cite{BeKr} and \cite{BaBe}, on which we build our main results. The problem addressed in the present work is to extend the approach proposed in \cite{BeKr} and \cite{BaBe} to Stein domains of analytic spaces. More precisely, the main theorems of this paper prove that the characterization of open embeddings with the homotopy monomorphism property, given in \cite{BeKr} for affinoid domains and in \cite{BaBe} for dagger affinoid domains, extends to Stein domains. Since Stein spaces are defined as spaces which have a suitable exhaustion by (dagger) affinoid subdomains, the strategy of our proofs is to rely on the results of \cite{BeKr} and \cite{BaBe}, and deduce the theorems for Stein spaces using their exhaustions. In particular, this strategy naturally leads us to consider projective limits of bornological spaces and the issue of the commutation of the bornological projective tensor product with projective limits. We devote Section~\ref{sec:functional_analysis} to providing basic results about these issues, for which it seems that there is no literature available. Our main difficulty is the fact that neither the projective tensor product nor the injective tensor product of bornological vector spaces commutes with projective limits in general. This is also the main obstacle to the generalization of our results to more general notions of domains, such as dagger quasi-Stein domains. The paper is organized as follows: in Section~\ref{sec:2} we recall the main results from \cite{BaBe} that will be used, we fix the notation and we introduce some key notions that will be used in our main proofs. Section~\ref{sec:functional_analysis} contains our technical results in functional analysis that are required to prove Theorems \ref{thm_stein_homotopy} and \ref{thm:DaggerQStHoEpToImm}.
In Section~\ref{sec:proper} we describe the class of proper bornological spaces and we prove its main property in Proposition \ref{prop:sequence_close}, which shows that the closure of a subspace of a proper bornological space is equal to the set of its (bornological) limit points. Section \ref{sec:nuclear} begins with the definition of nuclear bornological spaces and the study of their main properties. We remark that, although proper and nuclear bornological vector spaces have been previously considered in the literature, see for example \cite{Hog} and \cite{Hog3}, this is the first time, to our knowledge, that a detailed study of their properties is carried out also for non-Archimedean base fields. The main result of Section~\ref{sec:nuclear} is Theorem \ref{thm:strct_exact_nuclear}, which shows that nuclear bornological vector spaces are flat in ${\text{\bfseries\sf{CBorn}}}_k$ with respect to its natural monoidal structure, \emph{i.e. } the endofunctor $(-) \widehat{\otimes}_k F$ is exact if $F$ is nuclear. This is a bornological counterpart of the well-known result of the theory of locally convex spaces. The rest of Section~\ref{sec:functional_analysis} deals with projective limits of bornological vector spaces: in particular, Section \ref{sec:relative_flat} addresses the issue of commutation of projective limits with the bornological complete projective tensor product; Section \ref{sec:exact_sequences} deals with a bornological version of the Mittag-Leffler Lemma for Fr\'echet spaces and Section \ref{sec:der_lim} contains some lemmas about the computation of the derived functor of the projective limit functor for quasi-abelian categories. Section \ref{sec:Stein} starts by providing some results on Fr\'echet bornological algebras and then it continues by giving the definition of Stein spaces and Stein algebras suitable for our context.
The main result of this section, Theorem \ref{prop:quasi_Stein_algebra_spaces}, is a generalization of Forster's Theorem about the anti-equivalence between the category of complex Stein algebras and complex Stein spaces, to arbitrary base fields. Section \ref{sec:Stein_geometry} contains our main results. We characterize the open embeddings of Stein spaces by the maps having the homotopy monomorphism property (see Theorem \ref{thm_stein_homotopy} and Theorem \ref{thm:DaggerQStHoEpToImm}). Given a disjoint union of Stein spaces mapping to a fixed Stein space, we characterize in Theorem \ref{thm:coverings} the surjectivity of such a morphism by a conservativity property for transversal modules. See Definition \ref{defn:RRqcoh} for the notion of transversal module, called an RR-quasicoherent module, after the work of Ramis and Ruget \cite{RR}. The homological treatment of holomorphic functional calculus, as developed by Taylor, leads naturally to derived methods (see for example \cite{Tay}). Let $({\tC}, \overline{\otimes}, \id_{{\tC}})$ be a closed symmetric monoidal elementary quasi-abelian category with enough flat projectives. In \cite{BeKr2} a Grothendieck topology of (homotopy) Zariski open immersions in ${\text{\bfseries\sf{Comm}}}(\text{\bfseries\sf{sC}})^{op}$ is defined, where $\text{\bfseries\sf{sC}}$ is the closed symmetric monoidal model category of simplicial objects in ${\tC}$. The homotopy monomorphism condition that we use in this article (in the case that ${\tC}= {\text{\bfseries\sf{CBorn}}}_{k}$ or ${\text{\bfseries\sf{Ind}}}({\text{\bfseries\sf{Ban}}}_{k})$) is a restriction of Definition 1.2.6.1 (3) of \cite{TVe3} from the homotopy category of ${\text{\bfseries\sf{Comm}}}(\text{\bfseries\sf{sC}})^{op}$ to the opposite category of dagger Stein algebras (thought of as constant simplicial objects). The model structure to be explained in \cite{BeKr2} is compatible in a natural way with the quasi-abelian structure on ${\tC}$.
This allows us to relate our work to the work of To\"{e}n and Vezzosi from \cite{TVe2, TVe3}, as shown in \cite{BeKr2}: ${\text{\bfseries\sf{Comm}}}(\text{\bfseries\sf{sC}})^{op}$ satisfies their axioms on a monoidal model category, so according to their approach one can do derived geometry relative to it and apply their results. In the case that our base field is the complex numbers, some of our results are already present in the work of Pirkovskii, for instance \cite{Pir}. \subsection{Notation} The notation used here will be totally consistent with the notation of \cite{BaBe}, which is the following: \begin{itemize} \item If ${\tC}$ is a category we will use the notation $X \in {\tC}$ to indicate that $X$ is an object of ${\tC}$. \item If ${\tC}$ is a category then ${\text{\bfseries\sf{Ind}}}({\tC})$ will denote the category of Ind-objects of ${\tC}$. \item $k$ will denote a field complete with respect to a fixed non-trivial valuation, Archimedean or non-Archimedean. \item ${\text{\bfseries\sf{Vect}}}_{k}$ is the closed symmetric monoidal category of vector spaces (with no extra structure) over $k$. \item ${\text{\bfseries\sf{SNrm}}}_k$ the category of semi-normed modules over $k$, remarking that, if not otherwise stated, by a semi-normed space over a non-Archimedean base field we mean a $k$-vector space equipped with a non-Archimedean semi-norm. \item ${\text{\bfseries\sf{Nrm}}}_k$ the category of normed modules over $k$. \item ${\text{\bfseries\sf{Ban}}}_k$ the category of Banach modules over $k$. \item For $V\in {\text{\bfseries\sf{SNrm}}}_k$, $V^{s}= V/\overline{(0)}$ is the separation and $\widehat{V} \in {\text{\bfseries\sf{Ban}}}_{k}$ is the separated completion. \item ${\text{\bfseries\sf{Born}}}_{k}$ the category of bornological vector spaces of convex type over $k$ and ${\text{\bfseries\sf{CBorn}}}_{k}$ the category of complete bornological vector spaces of convex type over $k$.
\item For $E \in {\text{\bfseries\sf{Born}}}_{k}$ and $B$ a bounded absolutely convex subset (\emph{i.e. } a bounded disk) of $E$, $E_B$ is the linear subspace of $E$ spanned by elements of $B$ equipped with the gauge semi-norm (also called the Minkowski functional) defined by $B$ (see Remark 3.40 of \cite{BaBe} for a review of the notion of gauge semi-norm). \item For $E \in {\text{\bfseries\sf{Born}}}_{k}$, $\mathcal{D}_{E}$ denotes the category of bounded absolutely convex subsets of $E$. \item ${\text{\bfseries\sf{Afnd}}}^{\dagger}_{k}$ denotes the category of dagger affinoid algebras over $k$. \item For $E \in {\text{\bfseries\sf{Born}}}_{k}$, $\mathcal{D}^{c}_{E}$ denotes the category of bounded absolutely convex subsets $B$ of $E$ for which $E_{B}\in {\text{\bfseries\sf{Ban}}}_{k}$. \item The notation $\mathop{\lim\limits_{\displaystyle\rightarrow}}$ refers to a colimit (also known as inductive or direct limit) of some functor in a category. \item The notation $\mathop{\lim\limits_{\displaystyle\leftarrow}}$ refers to a limit (also known as projective or inverse limit) of some functor in a category. \item For polyradii $\rho = (\rho_i) \in \mathbb R_+^n$, the notation $\rho < \rho'$ means that $\rho$ and $\rho'$ have the same number of components and every component of $\rho$ is strictly smaller than the corresponding component of $\rho'$. \item With the notation ${\tC}^{\tD}$ we will denote the category of covariant functors ${\tD} \to {\tC}$. In particular, if $I$ is a filtered set we will denote by ${\tC}^I$ the category of functors $I \to {\tC}$, when $I$ is thought of as a category. \item A cofiltered projective system $\{ E_i \}_{i \in I}$ of objects of a category is said to be \emph{epimorphic} if for any $i < j$ the system map $E_j \to E_i$ is an epimorphism. Similarly, a filtered direct system $\{ E_i \}_{i \in I}$ is said to be \emph{monomorphic} if for any $i < j$ the system map $E_i \to E_j$ is a monomorphism.
\end{itemize} \section{Bornological Algebraic Geometry} \label{sec:2} \subsection{Quasi-abelian categories, bornological spaces and dagger analytic geometry} We suppose that the reader is familiar with the theory of quasi-abelian categories as developed in \cite{SchneidersQA}. In this section $({\tC}, \overline{\otimes}, \id_{{\tC}})$ will be a closed symmetric monoidal quasi-abelian category and $\underline{\Hom}$ will denote the internal hom functor. To any closed symmetric monoidal category is associated a category of commutative monoids, denoted ${\text{\bfseries\sf{Comm}}}({\tC})$, and a category of affine schemes ${\text{\bfseries\sf{Aff}}}({\tC}) = {\text{\bfseries\sf{Comm}}}({\tC})^{op}$. The duality functor ${\text{\bfseries\sf{Comm}}}({\tC}) \to {\text{\bfseries\sf{Aff}}}({\tC})$ is denoted by $\spec$. To any $A \in {\text{\bfseries\sf{Comm}}}({\tC})$ we can associate the category of $A$-modules ${\text{\bfseries\sf{Mod}}}(A)$, which is quasi-abelian closed symmetric monoidal with respect to a bifunctor $\overline{\otimes}_A$, naturally induced by $\overline{\otimes}$. Moreover, since ${\text{\bfseries\sf{Mod}}}(A)$ is quasi-abelian we can always associate to $A$ the derived category of ${\text{\bfseries\sf{Mod}}}(A)$, denoted $D(A)$, and using the left t-structure on $D(A)$ we define $D^{\leq 0}(A)$, $D^{\geq 0}(A)$ and $D^b(A)$. Notice also that in Proposition 2.1.18 (c) of \cite{SchneidersQA} it is shown that if ${\tC}$ is elementary quasi-abelian then ${\text{\bfseries\sf{Mod}}}(A)$ is elementary quasi-abelian. \begin{defn} A morphism $\spec(B)\to \spec(A)$ is said to be a \emph{homotopy monomorphism} if the canonical functor $D^{\leq 0}(B)\longrightarrow D^{\leq 0}(A)$ is fully faithful. In a dual way, we say that the corresponding morphism of monoids $A \to B$ is a \emph{homotopy epimorphism}. \end{defn} The following characterization of homotopy monomorphisms is the useful one for practical purposes.
\begin{lem} \label{lem_HomotopyMon} Assume that $p:\spec(B)\to \spec(A)$ is a morphism in ${\text{\bfseries\sf{Aff}}}({\tC})$ and that the functor ${\text{\bfseries\sf{Mod}}}(A) \to {\text{\bfseries\sf{Mod}}}(B)$ given by tensoring with $B$ over $A$ is left derivable to a functor $D^{\leq 0}(A)\to D^{\leq 0}(B)$. Then, $p$ is a homotopy monomorphism if and only if $B\overline{\otimes}^{\mathbb{L}}_{A} B\cong B$. \end{lem} {\bf Proof.} See Lemma 2.24 of \cite{BaBe}. \ \hfill $\Box$ \begin{lem} \label{lem:composition_HomotopyMon} Let $f: \spec(A) \to \spec(B)$, $g: \spec(B) \to \spec(C)$ be two morphisms of affine schemes such that $g \circ f$ and $g$ are homotopy monomorphisms. Then $f$ is also a homotopy monomorphism. \end{lem} {\bf Proof.} The hypotheses mean that we have a diagram of functors \[ \begin{tikzpicture} \matrix(m)[matrix of math nodes, row sep=2.6em, column sep=2.8em, text height=1.5ex, text depth=0.25ex] { D^{\leq 0}(A) & & D^{\leq 0}(B) \\ & D^{\leq 0}(C) \\}; \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$f_*$} (m-1-3); \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$(g\circ f)_*$} (m-2-2); \path[<-,font=\scriptsize] (m-2-2) edge node[auto] {$g_*$} (m-1-3); \end{tikzpicture} \] such that $g_*$ is fully faithful and $g_* \circ f_*$ is fully faithful. Hence for any $V,W \in D^{\leq 0}(A)$ \[ \Hom_{D^{\leq 0}(A)}(V, W) \cong \Hom_{D^{\leq 0}(C)}((g\circ f)_*(V), (g \circ f)_*(W)) \cong \Hom_{D^{\leq 0}(B)}(f_*(V), f_*(W)) \] which precisely means that $f_*$ is fully faithful. \ \hfill $\Box$ We recall the following notion from \cite{BaBe} and \cite{BeKr}. \begin{defn}\label{defn:RRqcoh} Consider an object $A \in {\text{\bfseries\sf{Comm}}}({\text{\bfseries\sf{CBorn}}}_k)$.
We define a sub-category ${\text{\bfseries\sf{Mod}}}^{RR}(A)$ of ${\text{\bfseries\sf{Mod}}}(A)$ consisting of those modules $M$ which satisfy the property that the natural morphism $M \widehat{\otimes}^{\mathbb{L}}_{A} B \to M \widehat{\otimes}_{A}B$ is an isomorphism in $D^{\le 0} (B)$, for all homotopy epimorphisms $A \to B$. We call these modules \emph{RR-quasicoherent modules}. \end{defn} Homotopy epimorphisms are the morphisms that we use to endow ${\text{\bfseries\sf{Comm}}}({\tC})$ with a Grothendieck topology. The following definition is based on definitions in \cite{TV} and \cite{TVe3}. \begin{defn}\label{defn:homotopy_Zariski} Consider a full sub-category ${\text{\bfseries\sf{A}}} \subset {\text{\bfseries\sf{Aff}}}({\tC})$ such that the base change of a homotopy monomorphism in ${\text{\bfseries\sf{A}}}$ is a homotopy monomorphism. On ${\text{\bfseries\sf{A}}}$ we can define the \emph{homotopy Zariski topology}, which has as its covers collections $\{\spec (B_i)\to \spec(A)\}_{i\in I}$ where there exists a finite subset $J \subset I$ such that \begin{itemize} \item for each $i\in J$, the morphism $A\to B_i$ is of finite presentation and the resulting functor $D^{\leq 0}(B_i)\to D^{\leq 0}(A)$ is fully faithful; \item a morphism in ${\text{\bfseries\sf{Mod}}}^{RR}(A)$ is an isomorphism if and only if it becomes an isomorphism in each ${\text{\bfseries\sf{Mod}}}^{RR}(B_j)$ for $j\in J$ after applying the functor $M \mapsto M\overline{\otimes}_{A}^{\mathbb{L}} B_{j}$. Such a family is called \emph{conservative}. \end{itemize} One can drop the requirement that the maps of the covering $A\to B_i$ be of finite presentation, obtaining another topology called the \emph{formal homotopy Zariski topology}. \end{defn} Later on, we will relax the condition on coverings by allowing the subset $J \subset I$ to be countable. We will discuss this issue at the end of Section~\ref{sec:Stein_geometry}. We will also make extensive use of flat objects in ${\tC}$, in the following sense.
\begin{defn}\label{defn:flat}Let $({\tC}, \overline{\otimes}, \id_{{\tC}})$ be a closed, symmetric monoidal, quasi-abelian category. We call an object $F$ of ${\tC}$ \emph{flat} if for any strictly exact sequence \[0 \to E' \to E \to E'' \to 0 \] the resulting sequence \[0 \to E'\overline{\otimes} F \to E\overline{\otimes} F \to E'' \overline{\otimes} F \to 0 \] is strictly exact, \emph{i.e. } if the endofunctor $E \mapsto E \overline{\otimes} F$ is an exact functor in the terminology of \cite{SchneidersQA}. \end{defn} We conclude this section by defining free resolutions in closed symmetric monoidal quasi-abelian categories and by proving some of their properties. \begin{defn} Let $({\tC}, \overline{\otimes}, \id_{{\tC}})$ be a closed symmetric monoidal quasi-abelian category and let $A \in {\text{\bfseries\sf{Comm}}}({\tC})$. An object $E\in {\text{\bfseries\sf{Mod}}}(A)$ is called \emph{free} if \[ E \cong A \overline{\otimes} V \] for some $V \in {\tC}$. \end{defn} \begin{defn} \label{def:free_resolution} Let $({\tC}, \overline{\otimes}, \id_{{\tC}})$ be a closed symmetric monoidal quasi-abelian category and let $A \in {\text{\bfseries\sf{Comm}}}({\tC})$. A \emph{free resolution} of $E \in {\text{\bfseries\sf{Mod}}}(A)$ is the data of a strict complex \[ \cdots \to L^{2}(E) \to L^{1}(E) \to L^{0}(E) \to 0 \] and a strict quasi-isomorphism \[ L^\bullet(E) \cong E \] where each $L^i(E)$ is free in ${\text{\bfseries\sf{Mod}}}(A)$ and $E$ is thought of as a complex concentrated in degree $0$. \end{defn} \begin{lem} \label{lemma:flat_res} Let $({\tC}, \overline{\otimes}, \id_{{\tC}})$ be a closed symmetric monoidal quasi-abelian category with enough projectives. Let $A \in {\text{\bfseries\sf{Comm}}}({\tC})$ and $E \in {\text{\bfseries\sf{Mod}}}(A)$. Then, $E$ admits a free resolution. If, in addition, both $E$ and $A$ are flat as objects in ${\tC}$, then each term of the free resolution can be chosen to be a flat object in ${\text{\bfseries\sf{Mod}}}(A)$.
\end{lem} {\bf Proof.} Consider \[ \mathscr{L}_{A}^{n}(E) = A \overline{\otimes} (\underbrace{A \overline{\otimes} \cdots \overline{\otimes} A}_{n \text{ times}} \overline{\otimes} E) \] where we regard the first $A$ factor as an $A$-module and the other factors as objects of ${\tC}$. In this way $\mathscr{L}_{A}^{n}(E)$ is by definition a free $A$-module. We define the differentials $d_n: \mathscr{L}_{A}^{n}(E) \to \mathscr{L}_{A}^{n-1}(E)$ in the following way: let $m_A: A \overline{\otimes} A \to A$ denote the multiplication map of $A$ and $\rho_E: A \overline{\otimes} E \to E$ the action of $A$ on $E$; then \begin{equation} \label{eqn:diff_bar} d_n = \sum_{i = 0}^{n-1} (-1)^i id_A \overline{\otimes} \cdots \overline{\otimes} m_A \overline{\otimes} \cdots \overline{\otimes} id_A \overline{\otimes} id_E + (-1)^n id_A \overline{\otimes} \cdots \overline{\otimes} id_A \overline{\otimes} \rho_E, \end{equation} where $m_A$ is at the $i$th place. Standard computations show that $\mathscr{L}_{A}^\bullet(E)$ is a complex of free $A$-modules. It is a resolution of $E$ because it has a splitting over ${\tC}$ given by the maps \begin{equation} \label{eqn:splitting} \mathscr{L}_{A}^{n - 1}(E) \to \mathscr{L}_{A}^{n}(E), \qquad 1_A \overline{\otimes} id_{\mathscr{L}_{A}^{n - 1}(E)}, \end{equation} where $1_A$ is the constant morphism to the identity of $A$. Therefore, we can deduce that the cone of the map $\mathscr{L}_{A}^{\bullet}(E) \to E$ is a strictly exact complex in $D^{\leq 0}({\tC})$. By Proposition 1.5.1 of \cite{SchneidersQA} a morphism in ${\text{\bfseries\sf{Mod}}}(A)$ is strict if and only if it is strict as a morphism in ${\tC}$, hence the cone of $\mathscr{L}_{A}^{\bullet}(E) \to E$ is also a strictly exact complex in $D^{\leq 0}(A)$. It remains to show the claim about the flatness of ${\mathscr L}_A^n(E)$.
Since $\overline{\otimes}_A$ is right exact, we only need to show the left exactness of the functor $(-) \overline{\otimes}_A {\mathscr L}_A^n(E)$. Consider a strictly exact sequence of morphisms of ${\text{\bfseries\sf{Mod}}}(A)$ \[ 0 \to F \to G \to H. \] Applying $(-) \overline{\otimes}_A {\mathscr L}_A^n(E)$ we obtain the sequence \begin{equation} \label{eqn:flat_bar_res} 0 \to F \overline{\otimes}_A {\mathscr L}_A^n(E) \to G \overline{\otimes}_A {\mathscr L}_A^n(E) \to H \overline{\otimes}_A {\mathscr L}_A^n(E) \end{equation} which can be rewritten as \[ 0 \to F \overline{\otimes} (\underbrace{A \overline{\otimes} \cdots \overline{\otimes} A}_{n \text{ times}} \overline{\otimes} E) \to G \overline{\otimes} (\underbrace{A \overline{\otimes} \cdots \overline{\otimes} A}_{n \text{ times}} \overline{\otimes} E) \to H \overline{\otimes} (\underbrace{A \overline{\otimes} \cdots \overline{\otimes} A}_{n \text{ times}} \overline{\otimes} E). \] Therefore, the hypothesis that $A$ and $E$ are flat objects of ${\tC}$ directly implies that the sequence (\ref{eqn:flat_bar_res}) is strictly exact. \ \hfill $\Box$ \begin{rem} The splitting maps of equation (\ref{eqn:splitting}) are not $A$-linear in general. \end{rem} \begin{defn}\label{defn:Bar} Let $({\tC}, \overline{\otimes}, \id_{{\tC}})$ be a closed symmetric monoidal quasi-abelian category. Let $A \in {\text{\bfseries\sf{Comm}}}({\tC})$ and $E \in {\text{\bfseries\sf{Mod}}}(A)$. We define the \emph{Bar resolution} of $E$ to be the free resolution introduced in Lemma \ref{lemma:flat_res}. \end{defn} Using the Bar resolution we can prove the following important lemma. \begin{lem} \label{lem:ind_limit_homotopy_epi} Let ${\tC}$ be elementary quasi-abelian. Let $\{A_i\}_{i \in I}$ be a filtered inductive system in ${\text{\bfseries\sf{Comm}}}({\tC})$ such that all system morphisms are homotopy epimorphisms.
Then, for any $j \in I$ the canonical maps \[ A_j \to \mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} A_i \] are homotopy epimorphisms. \end{lem} {\bf Proof.} Fix $j \in I$. To show that $A_j \to \underset{i \in I}\mathop{\lim\limits_{\displaystyle\rightarrow}} A_i$ is a homotopy epimorphism we need to check that \[ (\mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} A_i) \overline{\otimes}_{A_j}^\mathbb L (\mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} A_i )\cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} A_i \] in $D^{\le 0}({\tC})$. Consider the Bar resolution $\mathscr{L}_{A_j}^{\bullet}(\mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} A_i)$. Then, the complex \[ (\mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} A_i) \overline{\otimes}_{A_j} \mathscr{L}_{A_j}^{\bullet}(\underset{i \in I}\mathop{\lim\limits_{\displaystyle\rightarrow}} A_i) \] is a representative of $(\underset{i \in I}\mathop{\lim\limits_{\displaystyle\rightarrow}} A_i )\overline{\otimes}_{A_j}^\mathbb L( \underset{i \in I}\mathop{\lim\limits_{\displaystyle\rightarrow}} A_i)$.
More explicitly, for each $n \in \mathbb N$ \[ (\mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} A_i) \overline{\otimes}_{A_j} \mathscr{L}_{A_j}^{n}(\underset{i \in I}\mathop{\lim\limits_{\displaystyle\rightarrow}} A_i) = (\mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} A_i) \overline{\otimes}_{A_j} A_j \overline{\otimes} (\underbrace{A_j \overline{\otimes} \cdots \overline{\otimes} A_j}_{n \text{ times}} \overline{\otimes} (\mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} A_i)) \] which simplifies to \[ (\mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} A_i) \overline{\otimes} (\underbrace{A_j \overline{\otimes} \cdots \overline{\otimes} A_j}_{n \text{ times}} \overline{\otimes} (\mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} A_i)) \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} (A_i \overline{\otimes} \underbrace{A_j \overline{\otimes} \cdots \overline{\otimes} A_j}_{n \text{ times}} \overline{\otimes} A_i). \] Now, since $I$ is filtered, $\underset{i \in I}\mathop{\lim\limits_{\displaystyle\rightarrow}}$ is an exact functor by Proposition 2.1.16 (c) of \cite{SchneidersQA}, and hence $\underset{i \in I}\mathop{\lim\limits_{\displaystyle\rightarrow}} \cong \mathbb{L}\underset{i \in I}\mathop{\lim\limits_{\displaystyle\rightarrow}}$. Therefore, \[( \mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} A_i )\overline{\otimes}_{A_j}^\mathbb L (\mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} A_i) \cong (\mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} A_i) \overline{\otimes}_{A_j} \mathscr{L}_{A_j}^{\bullet}(\underset{i \in I}\mathop{\lim\limits_{\displaystyle\rightarrow}} A_i) \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I}(A_i \overline{\otimes}_{A_j} \mathscr{L}_{A_j}^{\bullet}(A_i)) \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I}(A_i \overline{\otimes}_{A_j}^\mathbb L A_i) \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} A_i.
\] \ \hfill $\Box$ We also introduce the following complex needed for computing \v{C}ech cohomology in the theories we will develop. \begin{defn} Given a collection of morphisms $\mathfrak{U}=\{A\to B_{i}\}_{i \in I}$ in ${\text{\bfseries\sf{Comm}}}({\tC})$ and $M \in {\text{\bfseries\sf{Mod}}}(A)$ we have the \v{C}ech-Amitsur complex ${\mathscr C}_{A}^\bullet(M,\mathfrak{U})$ \[ \prod_{i} (M \widehat{\otimes}_{A} B_{i}) \to \prod_{i,j} (M \widehat{\otimes}_{A} B_{i}\widehat{\otimes}_{A} B_{j}) \to \cdots \] and its augmented version, which we call the Tate complex ${\mathscr T}_{A}^\bullet(M,\mathfrak{U})$, \[0 \to M \to \prod_{i} (M \widehat{\otimes}_{A} B_{i}) \to \prod_{i,j} (M \widehat{\otimes}_{A} B_{i}\widehat{\otimes}_{A} B_{j}) \to \cdots \] where we use the degree convention \[\mathscr{C}^{d}_{A}(M,\mathfrak{U})= \mathscr{T}^{d}_{A}(M,\mathfrak{U})= \prod_{i_1,\dots, i_d} (M \widehat{\otimes}_{A} B_{i_1}\widehat{\otimes}_{A} \cdots \widehat{\otimes}_{A} B_{i_d})\] for $d \geq 1$. \end{defn} \subsection{Bornological vector spaces} In this section we recall very briefly the theory of bornological vector spaces. We refer the reader to Section 3.3 of \cite{BaBe} for more details. \begin{defn} Let $X$ be a set. A \emph{bornology} on $X$ is a collection $\mathcal{B}$ of subsets of $X$ such that \begin{enumerate} \item $\mathcal{B}$ is a covering of $X$, \emph{i.e. } $\forall x \in X, \exists B \in \mathcal{B}$ such that $x \in B$; \item $\mathcal{B}$ is stable under inclusions, \emph{i.e. } $A \subset B \in \mathcal{B} \Rightarrow A \in \mathcal{B}$; \item $\mathcal{B}$ is stable under finite unions, \emph{i.e. } for each $n \in \mathbb N$ and $B_1, ..., B_n \in \mathcal{B}$, $\underset{i = 1}{\overset{n}\bigcup} B_i \in \mathcal{B}$. \end{enumerate} The pair $(X, \mathcal{B})$ is called a \emph{bornological set}, and the elements of $\mathcal{B}$ are called \emph{bounded subsets} of $X$ (with respect to $\mathcal{B}$, if it is necessary to specify).
A family of subsets $\mathcal{A} \subset \mathcal{B}$ is called a \emph{basis} for $\mathcal{B}$ if for any $B \in \mathcal{B}$ there exist $A_1, \dots, A_n \in \mathcal{A}$ such that $B \subset A_1 \cup \dots \cup A_n$. A \emph{morphism} of bornological sets $\varphi: (X, \mathcal{B}_X) \to (Y, \mathcal{B}_Y)$ is defined to be a bounded map $\varphi: X \to Y$, \emph{i.e. } a map of sets such that $\varphi(B) \in \mathcal{B}_Y$ for all $B \in \mathcal{B}_X$. \end{defn} \begin{defn} A \emph{bornological vector space} over $k$ is a $k$-vector space $E$ along with a bornology on the underlying set of $E$ for which the maps $(\lambda, x) \mapsto \lambda x$ and $(x, y) \mapsto x + y$ are bounded. \end{defn} \begin{defn} A bornological vector space is said to be \emph{of convex type} if it has a basis made of absolutely convex subsets. We will denote by ${\text{\bfseries\sf{Born}}}_k$ the category whose objects are the bornological vector spaces of convex type and whose morphisms are bounded linear maps between them. \end{defn} \begin{rem} \label{rem:gauge} For every bornological vector space of convex type $E$ there is an isomorphism \[ E \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{B \in \mathcal{D}_E} E_B \] where $B$ varies over the family of bounded absolutely convex subsets of $E$ and $E_B$ is the vector subspace of $E$ spanned by the elements of $B$, equipped with the gauge semi-norm (also called the Minkowski functional) defined by $B$; each $E_B$ carries the bornology induced by this semi-norm. \end{rem} Reasoning as in the last remark, one can show that there is a functor ${\rm diss\,}: {\text{\bfseries\sf{Born}}}_k \to {\text{\bfseries\sf{Ind}}}({\text{\bfseries\sf{SNrm}}}_k)$ which is fully faithful, which commutes with all projective limits and direct sums and whose essential image is the sub-category of essential monomorphic objects of ${\text{\bfseries\sf{Ind}}}({\text{\bfseries\sf{SNrm}}}_k)$, \emph{i.e. 
} ind-objects isomorphic to systems which have monomorphisms as system maps. The functor ${\rm diss\,}$ does not commute with cokernels in general. \begin{defn} \label{defn:separated_born} A bornological vector space over $k$ is said to be \emph{separated} if its only bounded linear subspace is the trivial subspace $\{0\}$. \end{defn} \begin{defn} \label{defn:complete_born} A bornological vector space $E$ over $k$ is said to be \emph{complete} if there is a small filtered category $I$ and a functor $I \to {\text{\bfseries\sf{Ban}}}_{k}$, $i \mapsto E_i$, whose system morphisms are all injective, together with an isomorphism \[ E \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} E_i \] where the colimit is calculated in the category ${\text{\bfseries\sf{Born}}}_k$. \end{defn} Let ${\text{\bfseries\sf{CBorn}}}_k$ be the full subcategory of ${\text{\bfseries\sf{Born}}}_k$ consisting of complete bornological vector spaces. The following lemma collects several facts from the theory of bornological vector spaces; see Section 3.3 of \cite{BaBe} for a more detailed account. \begin{lem}\label{lem:CBornProps} ${\text{\bfseries\sf{CBorn}}}_k$ has all limits and colimits and the inclusion functor ${\text{\bfseries\sf{CBorn}}}_k \to {\text{\bfseries\sf{Born}}}_k$ commutes with projective limits. Both ${\text{\bfseries\sf{Born}}}_k$ and ${\text{\bfseries\sf{CBorn}}}_k$ are closed symmetric monoidal elementary quasi-abelian categories with enough projectives. \end{lem} The monoidal structure of ${\text{\bfseries\sf{Born}}}_k$ is denoted by $\otimes_{\pi, k}$ and is called the \emph{projective tensor product}; that of ${\text{\bfseries\sf{CBorn}}}_k$ is denoted by $\widehat{\otimes}_{\pi, k}$ (often written simply as $\widehat{\otimes}_{k}$) and is called the \emph{completed projective tensor product}.
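As a basic illustration of this monoidal structure (stated here only as an aside, and checkable directly from the definitions), the completed projective tensor product of two Banach spaces, regarded as complete bornological vector spaces, recovers the classical construction. If $E$ and $F$ are $k$-Banach spaces endowed with their von Neumann bornologies, then $E \widehat{\otimes}_{\pi, k} F$ may be identified with the completion of $E \otimes_k F$ with respect to the projective tensor norm, which for Archimedean $k$ is given by \[ \| x \| = \inf \left\{ \sum_{i} \|a_i\| \cdot \|b_i\| \ : \ x = \sum_{i} a_i \otimes b_i \right\}, \] the sums being replaced by maxima in the non-Archimedean case.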
We conclude by noticing that the fact that ${\text{\bfseries\sf{CBorn}}}_k$ has enough projectives implies that we can always derive right exact functors from ${\text{\bfseries\sf{CBorn}}}_k$ to a quasi-abelian category, as happens in the theory of abelian categories. \subsection{Tensor product of unbounded complexes} In this section we extend to the quasi-abelian setting some results of \cite{BN} on the derived functor of the tensor product for abelian categories. We see how the derived functor $(-)\overline{\otimes}^\mathbb L(-): D^{\le 0}({\tC}) \times D^{\le 0}({\tC}) \to D^{\le 0}({\tC})$ (discussed extensively in \cite{BaBe}) extends to a functor $(-)\overline{\otimes}^\mathbb L(-): D({\tC}) \times D({\tC}) \to D({\tC})$. Since we are not looking for the utmost generality of the result, we suppose in this section that ${\tC}$ is an elementary quasi-abelian closed symmetric monoidal category, although it would be sufficient to suppose only that ${\tC}$ has exact direct sums and enough projectives instead of supposing it to be elementary. Recall that the hypothesis of ${\tC}$ being elementary implies that ${\tC}$ has enough projectives, cf. Proposition 2.1.15 (c) of \cite{SchneidersQA}. \begin{lem} \label{lem:homotopy_colim} Let $X \in D({\tC})$; then the morphism \[ \mathop{\lim\limits_{\displaystyle\rightarrow}}_{n \in \mathbb N} \tau^{\le n}(X) \to {\rm ho\,}\mathop{\lim\limits_{\displaystyle\rightarrow}}_{n \in \mathbb N} \tau^{\le n}(X) \] is an isomorphism. \end{lem} {\bf Proof.} We can apply the dual statement of Remark 2.3 of \cite{BN} because, by Proposition 2.1.15 (a) of \cite{SchneidersQA}, $LH({\tC})$ satisfies the axiom AB4${}^*$. \ \hfill $\Box$ \begin{lem} \label{lem:homotopy_iso} Every object in $D({\tC})$ is strictly quasi-isomorphic to a complex of projective objects. \end{lem} {\bf Proof.} Let $X \in D({\tC})$. Consider the truncated complex $\tau^{\le n}(X)$ (recall that we are using the left t-structure of $D({\tC})$).
There is a canonical map $\tau^{\le n}(X) \to X$. Since ${\tC}$ has enough projectives we can always find strict quasi-isomorphisms $P^{\le n} \to \tau^{\le n}(X)$, where $P^{\le n}$ is a complex of projectives which is zero in degree $> n$. The diagram \[ \begin{tikzpicture} \matrix(m)[matrix of math nodes, row sep=2.6em, column sep=2.8em, text height=1.5ex, text depth=0.25ex] { P^{\le n} & P^{\le n + 1} \\ \tau^{\le n}(X) & \tau^{\le n + 1}(X) \\}; \path[->,font=\scriptsize] (m-2-1) edge node[auto] {$$} (m-2-2); \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$$} (m-2-1); \path[->,font=\scriptsize] (m-1-2) edge node[auto] {$$} (m-2-2); \end{tikzpicture} \] defines a morphism $P^{\le n} \to P^{\le n + 1}$ in $D({\tC})$, because $P^{\le n + 1} \to \tau^{\le n + 1}(X)$ is a strict quasi-isomorphism, and this morphism can be realized as a morphism of complexes because $P^{\le n}$ and $P^{\le n + 1}$ are made of projectives. So, there is a sequence of morphisms \[ \mathop{\lim\limits_{\displaystyle\rightarrow}}_{n \in \mathbb N} \tau^{\le n}(X) \to {\rm ho\,}\mathop{\lim\limits_{\displaystyle\rightarrow}}_{n \in \mathbb N} \tau^{\le n}(X) \to {\rm ho\,}\mathop{\lim\limits_{\displaystyle\rightarrow}}_{n \in \mathbb N} P^{\le n} \] where $\underset{n \in \mathbb N}\mathop{\lim\limits_{\displaystyle\rightarrow}} \tau^{\le n}(X) \cong X$ and the last map is an isomorphism because it is a homotopy colimit of strict quasi-isomorphisms. The first map is an isomorphism by Lemma \ref{lem:homotopy_colim}. Therefore, $X \cong {\rm ho\,}\underset{n \in \mathbb N}\mathop{\lim\limits_{\displaystyle\rightarrow}} P^{\le n}$, which by construction is a complex of projectives. \ \hfill $\Box$ \begin{defn} Let ${\tT}$ be a triangulated category with direct sums. We say that a subcategory of ${\tT}$ is \emph{localizing} if it is closed under direct sums.
\end{defn} We use the following notation: $K({\tC})$ denotes the homotopy category of ${\tC}$, and $K({\tP})$ the smallest localizing subcategory of $K({\tC})$ containing the complexes made of projectives. \begin{rem} Notice that if ${\tC}$ is elementary, then both $K({\tC})$ and $D({\tC})$ have all direct sums. Indeed, the claim on $K({\tC})$ is trivial and, for the one on $D({\tC})$, using Proposition 2.1.12 one deduces that ${\tC}$ is derived equivalent to an elementary abelian category, and therefore $D({\tC})$ has all direct sums as a consequence of Corollary 1.7 of \cite{BN}. \end{rem} \begin{lem} \label{lem:homotopy_cat} The composition functor \[ K({\tP}) {\hookrightarrow} K({\tC}) \to D({\tC}) \] is an equivalence. \end{lem} {\bf Proof.} Since $K({\tP})$ is a thick subcategory of $K({\tC})$, we can use the dual statements of Lemmata 2.8-11 of \cite{BN} to deduce that $K({\tP})$ is a localizing subcategory of $K({\tC})$. Using Lemma \ref{lem:homotopy_iso} we see that all objects of $D({\tC})$ are isomorphic to objects of $K({\tP})$. \ \hfill $\Box$ \begin{thm} \label{thm:extend_tensor} The tensor product can be derived to a functor \[ (-)\overline{\otimes}^\mathbb L(-): D({\tC}) \times D({\tC}) \to D({\tC}). \] The restriction to $D^{\le 0}({\tC}) \times D^{\le 0}({\tC})$ agrees with the derived tensor product based on projective resolutions. \end{thm} {\bf Proof.} The tensor product $(-)\overline{\otimes}(-)$ extends to a functor $K({\tC}) \times K({\tC}) \to K({\tC})$. By Lemma \ref{lem:homotopy_cat}, under our hypothesis, $D({\tC})$ is equivalent to a subcategory of $K({\tC})$. Therefore, we can define $(-)\overline{\otimes}^\mathbb L(-)$ as the restriction to $K({\tP})$ of the extension of $(-)\overline{\otimes}(-)$ to $K({\tC})$. \ \hfill $\Box$ \section{Some results in functional analysis} \label{sec:functional_analysis} This section is devoted to proving results in the theory of bornological vector spaces needed in the rest of the paper.
\subsection{Proper bornological vector spaces} \label{sec:proper} \begin{defn} Let $E$ be a bornological $k$-vector space and $\{x_n\}$ a sequence of elements of $E$. We say that $\{x_n\}_{n \in \mathbb N}$ \emph{converges (bornologically) to $0$} if there exists a bounded subset $B \subset E$ such that for every $\lambda \in k^\times$ there exists an $n = n(\lambda)$ for which \[ x_m \in \lambda B, \forall m > n. \] We say that $\{ x_n \}_{n \in \mathbb N}$ converges (bornologically) to $a \in E$ if $\{x_n - a\}_{n \in \mathbb N}$ converges (bornologically) to zero. \end{defn} The idea of this definition is due to Mackey, and this notion of convergence is sometimes called Mackey convergence. \begin{defn} Let $E$ be a bornological vector space over $k$. \begin{itemize} \item a sequence $\{x_n\}_{n \in \mathbb N} \subset E$ is called \emph{Cauchy-Mackey} if the double sequence $\{x_n - x_m\}_{n,m \in \mathbb N}$ converges to zero; \item a subset $U \subset E$ is called \emph{(bornologically) closed} if every sequence of elements of $U$ which is bornologically convergent in $E$ converges bornologically to an element of $U$. \end{itemize} \end{defn} \begin{defn} A bornological vector space is called \emph{semi-complete} if every Cauchy-Mackey sequence is convergent. \end{defn} The notion of semi-completeness is not as useful as the notion of completeness in the theory of topological vector spaces. We remark that any complete bornological vector space is semi-complete, but the converse is false.
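For instance, on a semi-normed space bornological convergence is simply norm convergence; we record this standard fact as an example (here a semi-normed space carries the bornology of its norm-bounded subsets, cf. Section \ref{sec:normal}). \begin{example} Let $E$ be a semi-normed space over $k$, equipped with the bornology of norm-bounded subsets, and suppose that the valuation of $k$ is non-trivial. Then $\{x_n\}_{n \in \mathbb N}$ converges bornologically to $0$ if and only if $\|x_n\| \to 0$. Indeed, if $\|x_n\| \to 0$ one can take $B$ to be the unit ball of $E$ in the definition above; conversely, any bounded subset $B$ is contained in a ball of some radius $r > 0$, so $x_m \in \lambda B$ implies $\|x_m\| \le |\lambda| r$, and $|\lambda|$ can be taken arbitrarily small. \end{example}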
\begin{rem} \label{rem:born_conv} The notion of bornological convergence on a bornological vector space of convex type $E = \underset{B \in \mathcal{D}_E}\mathop{\lim\limits_{\displaystyle\rightarrow}} E_B$, where $\mathcal{D}_E$ denotes the family of bounded disks of $E$, can be restated in the following way: $\{ x_n \}_{n \in \mathbb N}$ converges to zero in the sense of Mackey if and only if there exist $B \in \mathcal{D}_E$ and $N \in \mathbb N$ such that for all $n > N$, $x_n \in E_B$ and $x_n \to 0$ in $E_B$ (equipped with the semi-norm induced by $B$). \end{rem} \begin{rem}\label{rem:BornTop} The notion of bornologically closed subset induces a topology on $E$, but this topology is in general neither a linear topology nor a group topology. However, an arbitrary intersection of bornologically closed subsets of a bornological vector space is bornologically closed. \end{rem} It follows from Remark \ref{rem:BornTop} that the following definition is well posed. \begin{defn} Let $U \subset E$ be a subset of a bornological vector space. The (bornological) closure of $U$ is the smallest bornologically closed subset of $E$ in which $U$ is contained. We denote the closure of $U$ by $\overline{U}$. \end{defn} \begin{defn} \label{defn:born_dense} Let $E$ be a bornological vector space over $k$. We will say that a subset $U \subset E$ is \emph{bornologically dense} if the bornological closure of $U$ is equal to $E$.
\end{defn} \begin{prop} \label{prop:strict_morphisms_born_close} Let $f: E \to F$ be a morphism in ${\text{\bfseries\sf{CBorn}}}_k$; then \begin{itemize} \item $f$ is a monomorphism if and only if it is injective; \item $f$ is an epimorphism if and only if $f(E)$ is bornologically dense in $F$; \item $f$ is a strict epimorphism if and only if it is surjective and $F$ is endowed with the quotient bornology; \item $f$ is a strict monomorphism if and only if it is injective, the bornology on $E$ agrees with the bornology induced from $F$ and $f(E)$ is a bornologically closed subspace of $F$. \end{itemize} \end{prop} {\bf Proof.} See Proposition 5.6 (a) of \cite{PrSc}. \ \hfill $\Box$ \begin{defn} \label{def:proper_born} A bornological vector space is called \emph{proper} if its bornology has a basis of bornologically closed subsets. \end{defn} \begin{rem} \label{rem:proper} \begin{enumerate} \item All bornological vector spaces considered in \cite{Bam} and \cite{BaBe} are proper. \item If $E$ is a separated bornological vector space (\emph{i.e. } $\{0\}$ is a closed subset of $E$), then the morphism $E \to \widehat{E}$ is injective (cf. Proposition 17, page 113 of \cite{H2}). Indeed, in this case, writing $E \cong \underset{B \in \mathcal{D}_E}\mathop{\lim\limits_{\displaystyle\rightarrow}} E_B$, we have \[ \widehat{E} = \widehat{\underset{B \in \mathcal{D}_E}\mathop{\lim\limits_{\displaystyle\rightarrow}} E_B} \cong \underset{B \in \mathcal{D}_E}\mathop{\lim\limits_{\displaystyle\rightarrow}} \widehat{E_B}. \] \end{enumerate} \end{rem} The main drawback, for our purposes, of proper bornological vector spaces is that although they form a closed symmetric monoidal category, this category is not quasi-abelian. We will also need the following property of proper bornological vector spaces. \begin{prop} \label{prop:proper_proj} Any projective limit of proper objects in ${\text{\bfseries\sf{Born}}}_{k}$ is proper. \end{prop} {\bf Proof.} See \cite{H2}, Proposition 12, page 113.
\ \hfill $\Box$ \begin{lem} \label{cor:proper_subspace} Any subspace of a proper bornological vector space, endowed with the induced bornology, is proper. \end{lem} {\bf Proof.} Let $E$ be a proper bornological vector space and let $\{B_i\}_{i \in I}$ be a basis for its bornology made of bornologically closed bounded subsets. Given a subspace $F \subset E$, the family $\{B_i \cap F\}_{i \in I}$ is a basis for the bornology that $E$ induces on $F$. Consider a sequence of elements $\{x_n\}_{n \in \mathbb N} \subset B_i \cap F$ converging bornologically to $x \in F$. Since the inclusion map $F \to E$ is bounded, it preserves bornological limits; therefore, $\{x_n\}_{n \in \mathbb N}$ converges to $x$ also as a sequence of elements of $E$, and since $B_i$ is closed, $x \in B_i$. It follows that $x \in F \cap B_i$, and the claim is proved. \ \hfill $\Box$ We have to warn the reader that the closure of a subset $X \subset E$ of a bornological vector space is rarely equal to the set of limit points of convergent sequences of elements of $X$. So, we introduce the following notation. \begin{defn} \label{def:limit_points} Let $E$ be a bornological vector space and $X \subset E$; then $X^{(1)}$ denotes the set of elements of $E$ that are bornological limits of sequences of elements of $X$. \end{defn} Thus, we have the (in general strict) inclusion $X^{(1)} \subset \overline{X}$. We now show that this inclusion is an equality in an important special case: linear subspaces of complete, proper bornological spaces. We need a preliminary lemma. \begin{lem} \label{lemma:closed_complete} Let $F \subset E$ be a linear subspace of a complete bornological vector space over $k$. Then, $F$ is (bornologically) closed if and only if $F$ is complete. \end{lem} {\bf Proof.} This is a classical result in the theory of bornological vector spaces. For the convenience of the reader we reproduce the proof that can be found in \cite{Hog}, chapter 3, Proposition 3.2.1.
Moreover, our proof also covers the non-Archimedean case of the lemma, which is not treated in \cite{Hog}. Suppose that $F \subset E$ is a complete subspace. Let $\{x_n\}_{n \in \mathbb N}$ be a sequence of elements of $F$ that converges bornologically to $x \in E$. By Remark \ref{rem:born_conv} there exists a bounded disk $B \subset E$ such that $x_n \to x$ in $E_B$. Since $B \cap F$ is a bounded disk in $F$ and $F$ is complete, we can find a disk $B' \subset F$ such that $B \cap F \subset B'$ and $F_{B'}$ is Banach. The sequence $\{x_n\}_{n \in \mathbb N}$ is Cauchy in $F_{B \cap F}$, hence also Cauchy in $F_{B'}$, because the map $F_{B \cap F} \to F_{B'}$ is by construction bounded. Hence, the limit of $\{x_n\}_{n \in \mathbb N}$ is an element of $F$, and $F$ is therefore closed. For the converse, suppose that $F \subset E$ is a closed subspace. Let $\mathcal{D}^{c}_{E}$ denote the family of completant bounded disks of $E$, \emph{i.e. } bounded disks $B$ for which $E_B$ is Banach. It is enough to show that for any $B \in \mathcal{D}^{c}_{E}$ the bounded disk $B \cap F$ of $F$ is completant. So, let $\{ x_n \}_{n \in \mathbb N}$ be a Cauchy sequence in $F_{B \cap F}$. This implies that $\{ x_n \}_{n \in \mathbb N}$ is a Cauchy sequence in $E_B$ and therefore it must converge to a point of $E_B$. But $F_{B \cap F}$ is a closed subspace of $E_B$, since it is the intersection of $E_B \subset E$ with a closed subspace of $E$, hence the limit of $\{ x_n \}_{n \in \mathbb N}$ is in $F_{B \cap F}$. This proves that $F_{B \cap F}$ is a $k$-Banach space and concludes the proof. \ \hfill $\Box$ \begin{prop} \label{prop:sequence_close} Let $F \subset E$ be a subspace of a complete, proper bornological vector space over $k$; then $F^{(1)} = \overline{F}$, where $F^{(1)}$ is as in Definition \ref{def:limit_points}. \end{prop} {\bf Proof.} Let $E = \underset{i \in I}\mathop{\lim\limits_{\displaystyle\rightarrow}} E_i$ be a presentation of $E$ as a direct limit of Banach spaces over $k$ as in Definition \ref{defn:complete_born}.
The bornology on $F$ induced by the inclusion $F \subset E$ can be described by \[ F \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} (F \cap E_i), \] where each $F \cap E_i$ is endowed with the norm induced by $E_i$. By Lemma \ref{cor:proper_subspace}, $F$ endowed with this bornology is a proper bornological vector space, which is easily seen to be separated. By Remark \ref{rem:proper} (2), the inductive system $\{\widehat{F \cap E_i} \}_{i \in I}$, obtained by applying the completion functor of Banach spaces, is monomorphic and hence defines a complete and proper bornological vector space $\widehat{F} = \underset{i \in I}\mathop{\lim\limits_{\displaystyle\rightarrow}} \widehat{F \cap E_i}$ in which $F$ embeds. Since $\widehat{F}$ is the completion of $F$, the inclusion $F \to E$ must factor through a map \[ \phi: \widehat{F} \to E. \] Moreover, $\phi$ is a strict monomorphism, because for each $i \in I$ the maps \[ F \cap E_i \to \widehat{F \cap E_i} \to E_i \] are strict, since all these spaces are endowed with the restriction of the same norm, by construction. Since $\widehat{F}$ is complete, Lemma \ref{lemma:closed_complete} implies that $\widehat{F}$ is a closed linear subspace of $E$. This implies that $\overline{F} \subset \im(\phi) \cong \widehat{F}$. Finally, let $x \in \im(\phi)$; then $x = \phi(y)$ for some $y \in \widehat{F \cap E_i}$ and some $i \in I$. So, there exists a sequence of elements $\{x_n\}_{n \in \mathbb N} \subset F \cap E_i$ which converges in norm to $x$, which shows that $ \widehat{F} \cong \im(\phi) \subset F^{(1)} \subset \overline{F}$. \ \hfill $\Box$ We remark that in Proposition \ref{prop:sequence_close} the hypothesis that $F \subset E$ is a (linear) subspace is crucial. It is possible to construct subsets $S\subset E$ for which the inclusion $S^{(1)} \subset \overline{S}$ is strict even when $E$ is a proper object of ${\text{\bfseries\sf{CBorn}}}_{k}$ (see \cite{Hog2}, Theorem 1, Section II.7).
\begin{rem} Proposition \ref{prop:sequence_close} in the case $k =\mathbb C$ was proven in \cite{M2}, cf. Proposition 4.14. The arguments of \cite{M2} are similar to ours, but the terminology and the context are very different. \end{rem} \subsection{Relations between bornological and topological vector spaces} \label{sec:normal} We recall the definitions of two functors from \cite{H2}, $(-)^{t}: {\text{\bfseries\sf{Born}}}_k \to {\text{\bfseries\sf{Tc}}}_k$ and $(-)^b: {\text{\bfseries\sf{Tc}}}_k \to {\text{\bfseries\sf{Born}}}_k$, where ${\text{\bfseries\sf{Tc}}}_k$ is the category of locally convex topological vector spaces over $k$. To a bornological vector space $E$ we associate the topological vector space $E^t$ in the following way: we equip the underlying vector space of $E$ with the topology for which a basis of $0$-neighborhoods is given by the \emph{bornivorous subsets}, \emph{i.e. } the subsets that absorb all bounded subsets of $E$. If $E$ is a locally convex space, $E^b$ is defined to be the bornological vector space obtained by equipping the underlying vector space of $E$ with the \emph{von Neumann} (also called canonical) bornology, whose bounded subsets are the subsets of $E$ absorbed by all $0$-neighborhoods. In chapter 1 of \cite{H2} one can find the details of these constructions and their properties; the main one is that \begin{equation}\label{eqn:tbadj} (-)^t : {\text{\bfseries\sf{Born}}}_k \leftrightarrows {\text{\bfseries\sf{Tc}}}_k : (-)^b \end{equation} is an adjoint pair of functors. \begin{lem}\label{lem:GivesEquiv} It is shown in \cite{H2} (cf. Chapter 2, \S 2 $n^\circ$ 6) that there are full sub-categories of ${\text{\bfseries\sf{Born}}}_k$ and ${\text{\bfseries\sf{Tc}}}_k$ on which these functors give an equivalence of categories.
\end{lem} \begin{defn}\label{defn:normal_space} We call the objects of the categories mentioned in Lemma \ref{lem:GivesEquiv} normal bornological vector spaces or normal locally convex spaces (depending on the ambient category we are thinking of them in). We call the corresponding full sub-categories of normal objects ${\text{\bfseries\sf{NBorn}}}_{k} \subset {\text{\bfseries\sf{Born}}}_{k}$ and ${\text{\bfseries\sf{NTc}}}_{k} \subset {\text{\bfseries\sf{Tc}}}_{k}$. \end{defn} For two elements $E, F \in {\text{\bfseries\sf{NTc}}}_k$ (or $E, F \in {\text{\bfseries\sf{NBorn}}}_k$) the notions of boundedness and continuity of a linear map $f: E \to F$ are equivalent. \begin{rem} \label{rem:closed_top_bor_nor} Any normal bornological vector space is proper (cf. Definition \ref{def:proper_born}). One can show this fact by first noticing that in any topological vector space the closure of a bounded subset is bounded. Hence, the von Neumann bornology of a locally convex topological vector space always has a basis made of closed (for the topology) subsets. Moreover, every subset which is closed for the topology is also bornologically closed for the von Neumann bornology. For more details on this topic see \cite{H2}, Chapter 2, \S 3, $n^\circ$ 6 and 7. \end{rem} \begin{obs}\label{obs:tbequiv} The functor ${(-)}^b: {\text{\bfseries\sf{Tc}}}_k \to {\text{\bfseries\sf{Born}}}_k$ commutes with projective limits and ${(-)}^t: {\text{\bfseries\sf{Born}}}_k \to {\text{\bfseries\sf{Tc}}}_k$ commutes with inductive limits. \end{obs} \begin{obs}\label{obs:Dense2Dense}Let $E$ be a bornological vector space over $k$. If a subset $U \subset E$ is bornologically dense, then $U \subset E^{t}$ is topologically dense.
Indeed, all bornologically closed subsets for the von Neumann bornology of $E^{t}$ must be bornologically closed for the bornology of $E$ and, as noted in Remark \ref{rem:closed_top_bor_nor}, all topologically closed subsets of $E^{t}$ are bornologically closed for the von Neumann bornology. Therefore, the closure of $U$ in $E^{t}$ contains the closure of $U$ in $E$. \end{obs} \begin{example}\label{example:normal} \begin{itemize} \item All semi-normed spaces over $k$ are in an obvious way normal objects in ${\text{\bfseries\sf{Born}}}_{k}$ and ${\text{\bfseries\sf{Tc}}}_{k}$. \item All metrizable locally convex vector spaces and metrizable bornological vector spaces of convex type are normal. See the beginning of page 109 of \cite{H2} or Proposition 3 at page 50 of \cite{Hog}. \item The underlying object in ${\text{\bfseries\sf{CBorn}}}_{k}$ of a $k$-dagger affinoid algebra is a normal bornological vector space (see \cite{Bam}, Chapter 3). \end{itemize} \end{example} We introduce some classes of normal spaces that will be used throughout the paper. \begin{defn} \label{defn:frechet} A \emph{Fr\'echet space} over $k$ is a complete, metrizable, locally convex topological vector space over $k$. A \emph{bornological Fr\'echet space} over $k$ is a bornological vector space $E \in {\text{\bfseries\sf{CBorn}}}_k$ such that $E \cong F^b$ for a Fr\'echet space $F \in {\text{\bfseries\sf{Tc}}}_k$. \end{defn} \begin{defn} An \emph{LB space} (respectively an \emph{LF space}) over $k$ is a locally convex topological vector space $E$ over $k$ such that \begin{equation} \label{eqn:LF} E \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{n \in \mathbb N} E_n \end{equation} where each $E_n$ is a Banach space (respectively a Fr\'echet space) and the system maps are injective. \end{defn} All LB spaces are normal locally convex spaces but the functor ${(-)}^b$ may not commute with the colimit (\ref{eqn:LF}).
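A standard example may help fix ideas (it is not taken from the references above, and we only sketch the argument): the space of eventually zero sequences \[ k^{(\mathbb N)} \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{n \in \mathbb N} k^n, \] with the canonical inclusions $k^n \to k^{n+1}$, is an LB space, and every von Neumann bounded subset of the inductive limit is contained and bounded in some step $k^n$, because each $k^n$ is a closed subspace of $k^{n+1}$. Hence, in this case ${(-)}^b$ does commute with the colimit (\ref{eqn:LF}), and this LB space is regular in the sense of Definition \ref{defn:regular_LF} below.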
\begin{defn} \label{defn:regular_LF} An LB space or an LF space over $k$ is said to be \emph{regular} if \[ E^b \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{n \in \mathbb N} E_n^b. \] \end{defn} \begin{defn} We say that $E \in {\text{\bfseries\sf{Born}}}_k$ is a \emph{bornological LB space} if $E \cong F^b$ where $F \in {\text{\bfseries\sf{Tc}}}_k$ is a regular LB space. \end{defn} The following criterion will be very important in the following sections. \begin{prop} \label{prop:limpro_commutation_t} Consider a projective system $\{ E_{i} \}_{i \in I}$ of objects of ${\text{\bfseries\sf{NBorn}}}_{k}$. The following conditions are equivalent: \begin{enumerate} \item $\underset{i \in I}\mathop{\lim\limits_{\displaystyle\leftarrow}} (E_i^t)$ is normal in ${\text{\bfseries\sf{Tc}}}_k$; \item $\underset{i \in I}\mathop{\lim\limits_{\displaystyle\leftarrow}} (E_i^t )\cong (\underset{i \in I}\mathop{\lim\limits_{\displaystyle\leftarrow}} E_i)^t$. \end{enumerate} \end{prop} {\bf Proof.} If $\underset{i \in I}\mathop{\lim\limits_{\displaystyle\leftarrow}} (E_i^t)$ is normal, then by Observation \ref{obs:tbequiv} \[ \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in I} (E_i^t) \cong ((\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in I} E_i^t)^b)^t \cong (\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in I} (E_i^{tb}))^t \cong (\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in I} E_i)^t.
\] On the other hand, the same calculation also shows that if $\underset{i \in I}\mathop{\lim\limits_{\displaystyle\leftarrow}}( E_i^t) \cong (\underset{i \in I}\mathop{\lim\limits_{\displaystyle\leftarrow}} E_i)^t$ then \[ ((\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in I} E_i^t)^b)^t \cong (\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in I} (E_i^{tb}))^t \cong (\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in I} E_i)^t \cong \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in I} (E_i^t) .\] Therefore, $\underset{i \in I}\mathop{\lim\limits_{\displaystyle\leftarrow}} (E_i^t)$ is normal. \ \hfill $\Box$ Finally, in the next section we will use the following class of bornological vector spaces. \begin{defn}\label{defn:regular} A bornological vector space $E$ is said to be \emph{regular} if it has a basis for its bornology made of subsets which are closed with respect to the topology of $E^t$. \end{defn} \begin{rem} The condition of Definition \ref{defn:regular} is strictly stronger than the condition of properness of Definition \ref{def:proper_born}, where the requirement is that $E$ has a basis of bornologically closed subsets. \end{rem} \subsection{Nuclear bornological spaces and flatness} \label{sec:nuclear} The functor ${\text{\bfseries\sf{CBorn}}}_k \to {\text{\bfseries\sf{CBorn}}}_k$, given by $E \mapsto E \widehat{\otimes}_{\pi, k} F$ where $F$ is a complete bornological vector space, is right exact (even strongly right exact) but not left exact, in general. In order to have a sufficient condition to ensure the exactness of such functors, we use the notion of nuclear operator. This notion was introduced by Grothendieck in the context of locally convex spaces over Archimedean base fields. We will also need to introduce the injective tensor product over maximally complete fields (non-Archimedean or not). \begin{defn} \label{defn:BanNucMap} Consider a morphism $f: E \to F$ in ${\text{\bfseries\sf{Ban}}}_{k}$.
The morphism $f$ is called \emph{nuclear} if there exist two sequences $\{ \alpha_n \} \subset \underline{\Hom}(E, k)$ and $\{ f_n \} \subset F$ where \begin{itemize} \item $\underset{n = 0}{\overset{\infty}\sum} \|\alpha_n\| \|f_n\| < \infty$ if $k$ is Archimedean; \item $\|\alpha_n\| \|f_n\| \to 0, \text{ for } n \to \infty $ if $k$ is non-Archimedean; \end{itemize} such that \[ f(x) = \sum_{n = 0}^\infty \alpha_n(x) f_n \] for all $x \in E$. \end{defn} \begin{rem}\label{rem:CatDefCompose}A morphism $f$ is nuclear in the sense of Definition \ref{defn:BanNucMap} if and only if $f$ is in the image of the canonical morphism \[ \Hom(k, E^{\vee}\widehat{\otimes}_{k}F) \to \Hom(k,\underline{\Hom}(E,F)) \cong \Hom(E, F), \] where $E^{\vee} = \underline{\Hom}(E,k)$. This definition makes sense in any closed, symmetric monoidal category. In this abstract context, the pre-composition of a nuclear morphism with any morphism is a nuclear morphism. Similarly, the post-composition of a nuclear morphism with any morphism is nuclear (see Proposition 2.1 of \cite{HR} for a proof of these facts). \end{rem} \begin{defn} \label{defn:compactoid} Let $V$ be a locally convex $k$-vector space. A subset $B \subset V$ is called \emph{compactoid} if for any neighborhood of the origin $U$ there is a finite subset $F \subset V$ such that \[ B \subset U + \Gamma(F), \] where $\Gamma(F)$ denotes the absolutely convex hull of $F$. \end{defn} Recall that a subset $B \subset V$ of a locally convex $k$-vector space is called \emph{pre-compact} if for any neighborhood of the origin $U$ \[ B \subset U + F \] for a finite set $F \subset V$. \begin{prop}\label{prop:CompactoidPrecompact} Let $V$ be a locally convex $k$-vector space. If $k$ is locally compact then a subset $B \subset V$ is compactoid if and only if it is pre-compact. \end{prop} {\bf Proof.} Let $B \subset V$ be a compactoid subset and $U \subset V$ be an absolutely convex neighborhood of zero.
Then, there is a finite set $F = \{x_1, \ldots, x_n \} \subset V$ such that $B \subset U + \Gamma(F)$. The map $k^n \to V$ given by $(\lambda_1, \dots, \lambda_n) \mapsto \underset{i = 1}{\overset{n}\sum} \lambda_i x_i$ is continuous and it maps the compact set $(k^{\circ})^n$ onto $\Gamma(F)$. Therefore, $\Gamma(F)$ is compact. This means that there is a finite set $F' \subset V$ such that $\Gamma(F) \subset U + F'$, so \[ B \subset U + U + F'. \] If $k$ is non-Archimedean we can conclude that \[ B \subset U + F' \] proving that $B$ is pre-compact. If $k$ is Archimedean, we can notice that the same reasoning can be done with $\frac{U}{2}$ in place of $U$, so that \[ B \subset \frac{U}{2} + \frac{U}{2} + F' \subset U + F' \] proving that $B$ is pre-compact also in this case. The converse statement, that every pre-compact subset is compactoid, is obvious. \ \hfill $\Box$ Thanks to Proposition \ref{prop:CompactoidPrecompact}, a compactoid subset in the case when $k$ is Archimedean is simply a pre-compact subset. The notion of compactoid is necessary when the base field is not locally compact, because in this case the definition of pre-compact subset is not useful. \begin{defn} The family of compactoid subsets of a locally convex topological vector space $V$ over $k$ forms a bornology, which is denoted by ${\text{\bfseries\sf{Cpt}}}(V)$. \end{defn} Compactoid subsets are always bounded subsets in the sense of von Neumann, but the converse is often false. For example, one can show that if on a Banach space every bounded subset is compactoid, then the Banach space must be finite dimensional. \begin{defn} \label{defn:compactoid_map} Consider a morphism $f: E \to F$ in ${\text{\bfseries\sf{Ban}}}_{k}$. The morphism $f$ is called \emph{compactoid} if the image of the unit ball of $E$ is a compactoid (in the sense of Definition \ref{defn:compactoid}) subset of $F$.
\end{defn} \begin{rem}\label{rem:BoundedMor} A morphism $f: E \to F$ in ${\text{\bfseries\sf{Ban}}}_{k}$ is compactoid if and only if it is a bounded morphism when considered as a map of bornological spaces $f: E^{b} \to {\text{\bfseries\sf{Cpt}}}(F)$. \end{rem} Notice that, in the case in which $k$ is Archimedean, what we call a compactoid map is in the literature simply called a compact map. If $k$ is Archimedean a nuclear morphism is compact but there exist compact morphisms which are not nuclear. However, the non-Archimedean case is simpler. \begin{prop} \label{prop:compactor_nuclear_map} Let $k$ be non-Archimedean and $f: E \to F$ a bounded morphism in ${\text{\bfseries\sf{Ban}}}_{k}$. Then $f$ is nuclear if and only if it is compactoid. \end{prop} {\bf Proof.} In classical terminology, what we called nuclear maps over a non-Archimedean field are called completely continuous maps. With this terminology, the proposition is proved as Proposition 2 on page 92 of \cite{Gruson}. \ \hfill $\Box$ \begin{defn}\label{defn:nuclear_born} An object of ${\text{\bfseries\sf{CBorn}}}_{k}$ is said to be \emph{nuclear} if there exists an isomorphism \[ E \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} E_i \] as in Definition \ref{defn:complete_born} where for any $i < j$ in $I$ the map $E_i \to E_j$ is a nuclear monomorphism of $k$-Banach spaces. Therefore, by definition, nuclear bornological spaces are always complete bornological spaces. \end{defn} We now limit the discussion to the case where $k$ is maximally complete. This restriction will be removed later. \begin{defn}\label{defn:injective_norm} Let $k$ be a maximally complete valuation field. Let $E, F$ be objects of ${\text{\bfseries\sf{Ban}}}_{k}$ and let $E^{\vee}=\underline{\Hom}_{{\text{\bfseries\sf{Ban}}}_{k}}(E,k)$ and $F^{\vee}=\underline{\Hom}_{{\text{\bfseries\sf{Ban}}}_{k}}(F,k)$.
Then, the \emph{(completed) injective tensor product} is defined to be the completion of the algebraic tensor product $E\otimes_{k}F$ equipped with the semi-norm \[ \| \sum e_i \otimes f_i \|_{\epsilon,k} = \underset{\alpha \in (E^{\vee})^\circ, \beta \in (F^{\vee})^\circ}{\rm sup\,} |\sum \alpha(e_i) \beta(f_i)| \] where $(E^{\vee})^\circ$ and $(F^{\vee})^\circ$ denote the unit balls. We denote the injective tensor product by $E \otimes_{\epsilon,k} F$ and the completed one by $E \widehat{\otimes}_{\epsilon,k} F$. \end{defn} The following is Theorem 1 on page 212 of \cite{H2}. \begin{lem}\label{lem:DifferentTensors} Let $k$ be a non-Archimedean, maximally complete valuation field. Then, for any $E,F \in {\text{\bfseries\sf{Ban}}}_{k}$, the natural morphism in ${\text{\bfseries\sf{Ban}}}_{k}$ \[ E \otimes_{\pi,k} F \to E \otimes_{\epsilon,k} F \] is an isomorphism. \end{lem} \begin{defn}\label{defn:ITPborn} Let $k$ be a maximally complete valuation field and let $E, F$ be two objects of ${\text{\bfseries\sf{CBorn}}}_{k}$. We define the \emph{injective tensor product} of $E$ and $F$ as the colimit in ${\text{\bfseries\sf{CBorn}}}_{k}$ of the monomorphic inductive system: \begin{equation} \label{eqn:inj_tens_born} E \otimes_{\epsilon,k} F = \mathop{\lim\limits_{\displaystyle\rightarrow}}_{B_E \in \mathcal{D}_E, B_F \in \mathcal{D}_F} E_{B_E} \otimes_{\epsilon,k} F_{B_F}, \end{equation} where $\mathcal{D}_E$ and $\mathcal{D}_F$ are the families of bounded disks of $E$ and $F$ respectively. The \emph{completed injective tensor product} of $E$ and $F$ is defined as \[ E \widehat{\otimes}_{\epsilon,k} F = \widehat{E \otimes_{\epsilon,k} F}. \] \end{defn} \begin{rem} The inductive system (\ref{eqn:inj_tens_born}) is monomorphic because $(-) \otimes_{\epsilon,k} (-)$ is a strongly left exact functor, cf. Corollary 2 page 210 of \cite{H2}.
\end{rem} Definition \ref{defn:ITPborn} differs from the one given in \cite{H2} chapter 4, \S 2, but is equivalent to it thanks to the following lemma. \begin{lem} \label{lemma:limind_injective_tensor} Let $k$ be a maximally complete valuation field. Let $\{ E_i \}_{i \in I}$ and $\{ F_j \}_{j \in J}$ be two filtered inductive systems of complete bornological vector spaces such that all $E_i$ and $F_j$ are separated and regular (in the sense of Definition \ref{defn:regular}) and all system morphisms are injective. Then, writing $E = \underset{i \in I}\mathop{\lim\limits_{\displaystyle\rightarrow}} E_i$ and $F = \underset{j \in J}\mathop{\lim\limits_{\displaystyle\rightarrow}} F_j$, in ${\text{\bfseries\sf{CBorn}}}_{k}$ we have that \[ E \otimes_{\epsilon,k} F \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{(i,j) \in I \times J} ( E_i \otimes_{\epsilon,k} F_j ) \cong ( \mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} E_i ) \otimes_{\epsilon,k} (\mathop{\lim\limits_{\displaystyle\rightarrow}}_{j \in J} F_j), \] where on the right hand side the injective tensor product is the one considered in \cite{H2}. \end{lem} {\bf Proof.} See \cite{H2} Proposition 4, page 210. \ \hfill $\Box$ \begin{rem} The injective tensor product does not commute with filtered inductive limits in general. \end{rem} Since all Banach spaces are obviously regular and complete bornological vector spaces are essentially monomorphic objects in ${\text{\bfseries\sf{Ind}}}({\text{\bfseries\sf{Ban}}}_k)$, Lemma \ref{lemma:limind_injective_tensor} readily implies that our definition of injective tensor product agrees with the definition of \cite{H2} page 208, for proper complete bornological vector spaces. \begin{prop} \label{prop:nuclear_proj_equal_injecive} Let $k$ be a maximally complete valuation field. Let $E$ and $F$ be objects of ${\text{\bfseries\sf{CBorn}}}_{k}$. In the Archimedean case we assume that $F$ is nuclear. Then, the natural morphism \[ E \widehat{\otimes}_{\pi,k} F \longrightarrow E \widehat{\otimes}_{\epsilon,k} F \] in ${\text{\bfseries\sf{CBorn}}}_{k}$ is an isomorphism.
\end{prop} {\bf Proof.} We start with the case in which $k$ is non-Archimedean. Write $E=\underset{i \in I}\mathop{\lim\limits_{\displaystyle\rightarrow}} E_i$ and $F=\underset{j \in J}\mathop{\lim\limits_{\displaystyle\rightarrow}} F_j$ as in Definition \ref{defn:complete_born}. The isomorphisms $E_{i} \otimes_{\pi,k} F_{j} \to E_{i} \otimes_{\epsilon,k} F_{j}$ from Lemma \ref{lem:DifferentTensors} give an isomorphism of systems \[ \mathop{\lim\limits_{\displaystyle\rightarrow}}_{(i,j) \in I \times J}(E_{i} \otimes_{\pi,k} F_{j}) \to \mathop{\lim\limits_{\displaystyle\rightarrow}}_{(i,j) \in I \times J} (E_{i} \otimes_{\epsilon,k} F_{j}). \] The projective tensor product commutes with all colimits and by Lemma \ref{lemma:limind_injective_tensor} the injective tensor product also commutes with the colimit we are calculating. Hence, we get an isomorphism \[(\mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} E_{i} )\otimes_{\pi,k} (\mathop{\lim\limits_{\displaystyle\rightarrow}}_{j \in J} F_{j} )\to ( \mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} E_{i}) \otimes_{\epsilon,k} (\mathop{\lim\limits_{\displaystyle\rightarrow}}_{j \in J} F_{j}) \] and the functoriality of the completion yields the claimed isomorphism. We now consider the case in which $k$ is Archimedean. First recall the following fact: let $f: E_1 \to E_2$ be a nuclear morphism of Banach spaces and let $V$ be another Banach space; then the linear map of vector spaces \[f \otimes_{k} \text{id}_V: E_1 \otimes_{\epsilon,k} V \to E_2 \otimes_{\pi,k} V \] is bounded. This is precisely the content of Proposition 2.4, page I-15 of \cite{DV}, where the complex case is discussed, but the same arguments work also over $\mathbb R$.
Then, consider $F = \mathop{\lim\limits_{\displaystyle\rightarrow}}\limits_{j \in J} F_j$ and $E = \mathop{\lim\limits_{\displaystyle\rightarrow}}\limits_{i \in I} E_i$ presentations of $F$ and $E$ as in Definition \ref{defn:nuclear_born} and as in Definition \ref{defn:complete_born} respectively. For any $(i_1, j_1) < (i_2, j_2)$ the morphisms \[ \a_{i_1, j_1}: E_{i_1} \otimes_{\epsilon,k} F_{j_1} \to E_{i_2} \otimes_{\epsilon,k} F_{j_1} \to E_{i_2} \otimes_{\pi,k} F_{j_2} \] are bounded morphisms of Banach spaces because of the mentioned Proposition 2.4 of \cite{DV}. By Lemma \ref{lemma:limind_injective_tensor} we have that the map \[\begin{split} E \otimes_{\epsilon,k} F &\cong (\mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} E_i) \otimes_{\epsilon,k} (\mathop{\lim\limits_{\displaystyle\rightarrow}}_{j \in J} F_j) \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{(i,j) \in I \times J} (E_i \otimes_{\epsilon,k} F_j) \stackrel{\a}{\to} \\ & \mathop{\lim\limits_{\displaystyle\rightarrow}}_{(i,j) \in I \times J} (E_i \otimes_{\pi,k} F_j) \cong (\mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} E_i) \otimes_{\pi,k} (\mathop{\lim\limits_{\displaystyle\rightarrow}}_{j \in J} F_j) \cong E \otimes_{\pi,k} F \end{split} \] obtained from the morphisms $\a_{i_1, j_1}$ is bounded, and it is easy to check that composing $\a$ with the canonical map \[ E \otimes_{\pi,k} F \to E \otimes_{\epsilon,k} F \] in either order yields the identity. We conclude the proof using again the functoriality of the completion of bornological vector spaces. \ \hfill $\Box$ \begin{lem} \label{lem:injective_tensor_strict_mono} Let $k$ be a maximally complete field and $F \in {\text{\bfseries\sf{CBorn}}}_k$. The functor $(-) \otimes_{\epsilon,k} F$ is strongly left exact. \end{lem} {\bf Proof.} See \cite{H2} Corollary 2, page 210.
\ \hfill $\Box$ \begin{lem} \label{lem:exact_completion} The completion functor $\widehat{(-)}: {\text{\bfseries\sf{Nrm}}}_k \to {\text{\bfseries\sf{Ban}}}_k$ is exact. \end{lem} {\bf Proof.} Proposition 4.1.13 of \cite{Pr2} shows that the completion functor is exact for locally convex spaces. Since the functor $\widehat{(-)}: {\text{\bfseries\sf{Nrm}}}_k \to {\text{\bfseries\sf{Ban}}}_k$ agrees precisely with the completion of the underlying locally convex spaces, we can use that proposition to deduce our lemma. Notice that \cite{Pr2} deals only with Archimedean base fields, but the same reasoning works also for non-Archimedean base fields since only the uniform structures of the spaces are considered. \ \hfill $\Box$ \begin{rem} $\widehat{(-)}: {\text{\bfseries\sf{Nrm}}}_k \to {\text{\bfseries\sf{Ban}}}_k$ is not strongly left exact and does not preserve monomorphisms. \end{rem} \begin{lem}\label{lem:nArchBanFlat} If $k$ is non-Archimedean, then any object in the category of non-Archimedean Banach spaces over $k$ is flat. \end{lem} {\bf Proof.} If $k$ is maximally complete, then the claim follows from the combination of Lemma \ref{lem:injective_tensor_strict_mono}, Lemma \ref{lem:DifferentTensors} and Lemma \ref{lem:exact_completion}. If $k$ is not maximally complete we can choose a maximal completion $K/k$. For every $E \in {\text{\bfseries\sf{Ban}}}_k$ the canonical map $\iota_E: E \to E \otimes_k K$ is a strict monomorphism (even an isometry onto its image, cf. Lemma 3.1 of \cite{Poi} applied with $A = k, B = K$ and $C = E$). Consider a strict exact sequence \begin{equation} \label{eqn:flat_ban} 0 \to \ker f \to E \stackrel{f}{\to} F \end{equation} in ${\text{\bfseries\sf{Ban}}}_k$. We can always suppose that $\ker f$ is equipped with the restriction of the norm of $E$, in which case $\ker f \to E$ is an isometric embedding.
Applying again Lemma 3.1 of \cite{Poi}, we obtain a diagram \[ \begin{tikzpicture} \matrix(m)[matrix of math nodes, row sep=2.6em, column sep=2.8em, text height=1.5ex, text depth=0.25ex] { 0 & \ker f & E & F \\ 0 & \ker f \otimes_k K & E \otimes_k K & F \otimes_k K \\}; \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$$} (m-1-2); \path[->,font=\scriptsize] (m-2-1) edge node[auto] {$$} (m-2-2); \path[->,font=\scriptsize] (m-1-2) edge node[auto] {$$} (m-1-3); \path[->,font=\scriptsize] (m-1-2) edge node[auto] {$$} (m-2-2); \path[->,font=\scriptsize] (m-2-2) edge node[auto] {$$} (m-2-3); \path[->,font=\scriptsize] (m-1-3) edge node[auto] {$$} (m-2-3); \path[->,font=\scriptsize] (m-1-3) edge node[auto] {$f$} (m-1-4); \path[->,font=\scriptsize] (m-2-3) edge node[auto] {$$} (m-2-4); \path[->,font=\scriptsize] (m-1-4) edge node[auto] {$$} (m-2-4); \end{tikzpicture} \] where all vertical maps are isometric embeddings. The bottom row is exact algebraically. It is also strictly exact, because we can apply Lemma 3.1 of \cite{Poi} to the isometric embedding $\ker f \to E$ (precisely using $A = \ker f, B = E$ and $C = K$ this time) to deduce that $\ker f \otimes_k K \to E \otimes_k K$ is an isometric embedding. Then, consider a $G \in {\text{\bfseries\sf{Ban}}}_k$.
Tensoring (\ref{eqn:flat_ban}) with $G$ we obtain a diagram \[ \begin{tikzpicture} \matrix(m)[matrix of math nodes, row sep=2.6em, column sep=2.8em, text height=1.5ex, text depth=0.25ex] { 0 & \ker f \otimes_k G & E \otimes_k G & F \otimes_k G \\ 0 & (\ker f \otimes_k G) \otimes_k K & (E \otimes_k G) \otimes_k K & (F \otimes_k G) \otimes_k K \\}; \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$$} (m-1-2); \path[->,font=\scriptsize] (m-2-1) edge node[auto] {$$} (m-2-2); \path[->,font=\scriptsize] (m-1-2) edge node[auto] {$$} (m-1-3); \path[->,font=\scriptsize] (m-1-2) edge node[auto] {$$} (m-2-2); \path[->,font=\scriptsize] (m-2-2) edge node[auto] {$$} (m-2-3); \path[->,font=\scriptsize] (m-1-3) edge node[auto] {$$} (m-2-3); \path[->,font=\scriptsize] (m-1-3) edge node[auto] {$f$} (m-1-4); \path[->,font=\scriptsize] (m-2-3) edge node[auto] {$$} (m-2-4); \path[->,font=\scriptsize] (m-1-4) edge node[auto] {$$} (m-2-4); \end{tikzpicture} \] where the top row is algebraically exact. We need only to check that $\ker f \otimes_k G \to E \otimes_k G$ is a strict monomorphism. 
Using the isomorphisms \begin{eqnarray*} (E \otimes_k G) \otimes_k K \cong (E \otimes_k K) \otimes_K (G \otimes_k K), \\ (F \otimes_k G) \otimes_k K \cong (F \otimes_k K) \otimes_K (G \otimes_k K), \\ (\ker f \otimes_k G) \otimes_k K \cong (\ker f \otimes_k K) \otimes_K (G \otimes_k K) \end{eqnarray*} the last diagram becomes \[ \begin{tikzpicture} \matrix(m)[matrix of math nodes, row sep=2.6em, column sep=2.8em, text height=1.5ex, text depth=0.25ex] { 0 & \ker f \otimes_k G & E \otimes_k G & F \otimes_k G \\ 0 & (\ker f \otimes_k K) \otimes_K (G \otimes_k K) & (E \otimes_k K) \otimes_K (G \otimes_k K) & (F \otimes_k K) \otimes_K (G \otimes_k K) \\}; \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$$} (m-1-2); \path[->,font=\scriptsize] (m-2-1) edge node[auto] {$$} (m-2-2); \path[->,font=\scriptsize] (m-1-2) edge node[auto] {$$} (m-1-3); \path[->,font=\scriptsize] (m-1-2) edge node[auto] {$$} (m-2-2); \path[->,font=\scriptsize] (m-2-2) edge node[auto] {$$} (m-2-3); \path[->,font=\scriptsize] (m-1-3) edge node[auto] {$$} (m-2-3); \path[->,font=\scriptsize] (m-1-3) edge node[auto] {$f$} (m-1-4); \path[->,font=\scriptsize] (m-2-3) edge node[auto] {$$} (m-2-4); \path[->,font=\scriptsize] (m-1-4) edge node[auto] {$$} (m-2-4); \end{tikzpicture} \] where the bottom row is strictly exact because $K$ is maximally complete.
Hence, in the diagram \[ \begin{tikzpicture} \matrix(m)[matrix of math nodes, row sep=2.6em, column sep=2.8em, text height=1.5ex, text depth=0.25ex] { \ker f \otimes_k G & E \otimes_k G \\ (\ker f \otimes_k K) \otimes_K (G \otimes_k K) & (E \otimes_k K) \otimes_K (G \otimes_k K) \\}; \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$$} (m-1-2); \path[->,font=\scriptsize] (m-2-1) edge node[auto] {$$} (m-2-2); \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$$} (m-2-1); \path[->,font=\scriptsize] (m-1-2) edge node[auto] {$$} (m-2-2); \end{tikzpicture} \] all maps are known to be strict monomorphisms except the top horizontal one, which is therefore a strict monomorphism too. This shows that the functor $(-) \otimes_{\pi, k} (-)$ is strongly exact, and applying Lemma \ref{lem:exact_completion} we deduce that $(-) \widehat{\otimes}_{\pi, k} (-)$ is exact. \ \hfill $\Box$ We now look at flatness in the closed symmetric monoidal category ${\text{\bfseries\sf{CBorn}}}_{k}$. \begin{thm} \label{thm:strct_exact_nuclear} Let $F$ be a complete bornological vector space over $k$. If $k$ is non-Archimedean then $F$ is flat with respect to the category $({\text{\bfseries\sf{CBorn}}}_{k}, \widehat{\otimes}_{k},k)$. If $k$ is Archimedean then the same conclusion holds provided that $F$ is nuclear. \end{thm} {\bf Proof.} When $k$ is Archimedean, it is of course maximally complete. So, the theorem follows immediately from combining Lemma \ref{lem:injective_tensor_strict_mono} and Proposition \ref{prop:nuclear_proj_equal_injecive}. Suppose now that $k$ is a non-Archimedean complete valuation field and fix a representation $F = \underset{j \in J}\mathop{\lim\limits_{\displaystyle\rightarrow}} F_j$ as a monomorphic filtered inductive system of Banach spaces.
Unravelling the definitions, the functor $(-) \widehat{\otimes}_{\pi, k} F$ can be written as \[ (-) \widehat{\otimes}_{\pi, k} F = \widehat{(-) \otimes_{\pi, k} F} = \widehat{\mathop{\lim\limits_{\displaystyle\rightarrow}}_{j \in J}(-) \otimes_{\pi, k} F_j} = \mathop{\lim\limits_{\displaystyle\rightarrow}}_{j \in J}\widehat{(-) \otimes_{\pi, k} F_j} \] where the last colimit is calculated in ${\text{\bfseries\sf{CBorn}}}_k$, meaning that it is the separated colimit of bornological vector spaces. Since ${\text{\bfseries\sf{CBorn}}}_k$ is an elementary quasi-abelian category, filtered colimits are exact. Therefore, applying Lemma \ref{lem:nArchBanFlat} and Lemma \ref{lem:exact_completion} we see that the functor $(-) \widehat{\otimes}_{\pi, k} F$ can be written as a composition of exact functors. \ \hfill $\Box$ We conclude this section with some results about nuclearity needed in subsequent sections. \begin{prop} \label{prop:nuclear_proj} Any countable projective limit in ${\text{\bfseries\sf{CBorn}}}_{k}$ of nuclear objects is nuclear (and therefore by Theorem \ref{thm:strct_exact_nuclear} flat). \end{prop} {\bf Proof.} The proposition is proved in \cite{Hog3} Theorem 4.1.1, page 200, for the case in which $k$ is an Archimedean base field. So, we check the claim only for non-Archimedean base fields. First we show that a closed subspace of a nuclear bornological space is nuclear. Let $E \subset F$ be a closed subspace of a nuclear space and let $F \cong \underset{i \in I}\mathop{\lim\limits_{\displaystyle\rightarrow}} F_i$ be as in Definition \ref{defn:nuclear_born}. As bornological vector spaces, there is an isomorphism \[ E \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} ( E \cap F_i ) \] and the $E \cap F_i$ are $k$-Banach spaces. It is enough to check that the morphisms $E \cap F_i \to E \cap F_j$ are nuclear for each $i < j$.
Thanks to Proposition \ref{prop:compactor_nuclear_map} this is equivalent to checking that the maps $E \cap F_i \to E \cap F_j$ are compactoid. This follows from \cite{PGS}, Theorem 8.1.3 (ii) and (iii). Now, we check that countable products of nuclear bornological spaces are nuclear. Thus, suppose that $\{ E_n \}_{n \in \mathbb N}$ is a family of nuclear bornological spaces and put $E = \underset{n \in \mathbb N}\prod E_n$. Fix for any $n$ a completant bounded disk $B_n \subset E_n$ such that there exists a completant bounded disk $A_n \subset E_n$ for which $B_n \subset A_n$ and the inclusion $E_{B_n} \to E_{A_n}$ is a nuclear map (or equivalently compactoid, cf. Proposition \ref{prop:compactor_nuclear_map}). Let $B = \underset{n \in \mathbb N}\prod B_n$ and $A = \underset{n \in \mathbb N}\prod A_n$. It is clear that, by varying $B_n$ over a final family of bounded completant disks of $E_n$ for each $n$, the disks $B \subset E$ and $A \subset E$ obtained in this way form a final family of bounded disks for the bornology of $E$. By construction the map $E_B \to E_A$ is bounded, where $E_A$ is the subspace of $\underset{n \in \mathbb N}\prod E_{A_n}$ consisting of bounded sequences equipped with the supremum norm, and $E_B$ is the analogous subspace of $\underset{n \in \mathbb N}\prod E_{B_n}$. We need to check that $E_B \to E_A$ sends the unit ball of $E_B$ to a compactoid subset of $E_A$. Fix $\a \in |k^\times|$ with $\a < 1$. By rescaling the norms of $E_{B_n}$ and $E_{A_n}$ we can assume that $B_n$ is contained in the ball of radius $\a^n$ of $E_{A_n}$. Since $B_n$ is compactoid in $E_{A_n}$, by Theorem 3.8.25 of \cite{PGS} there exists a zero sequence $\{ x_i^{(n)} \}_{i \in \mathbb N}$ such that $B_n \subset \Gamma(\{ x_i^{(n)} \}_{i \in \mathbb N})$, where $\Gamma$ denotes the absolutely convex hull. Then, we can consider the sequence \[ x^{(n)} = \{ (x_i^{(n)}) \}_{i \in \mathbb N} \subset E_A.
\] Since, by hypothesis, for each $i$ we have that \[ \lim_{n \to \infty} |x_i^{(n)}|_{E_{A_n}} = 0, \] the sequence $x^{(n)}$ is a zero sequence of $E_A$ whose absolutely convex hull contains $B$. So, applying Theorem 3.8.24 of \cite{PGS} we can deduce that $B$ is compactoid in $E_A$. \ \hfill $\Box$ \begin{prop} \label{prop:nuclear_ind_lim} Any small inductive limit in ${\text{\bfseries\sf{CBorn}}}_{k}$ of nuclear objects is nuclear. \end{prop} {\bf Proof.} It is an easy consequence of Definition \ref{defn:nuclear_born} that any monomorphic filtered inductive limit of bornological nuclear spaces is a nuclear bornological space. Then, by the description of coproducts in ${\text{\bfseries\sf{CBorn}}}_k$ of Lemma 2.7 of \cite{BaBe} (which agree with coproducts in ${\text{\bfseries\sf{Ind}}}({\text{\bfseries\sf{Ban}}}_k)$) and applying Proposition \ref{prop:nuclear_proj} to finite coproducts, we obtain that coproducts of bornological nuclear spaces are nuclear bornological spaces. Therefore, it remains only to check that quotients of nuclear spaces are nuclear. The Archimedean case of the proposition is proved in Theorem 4.1.1, page 200 of \cite{Hog3}. So, let $k$ be non-Archimedean, $F$ a nuclear bornological space over $k$ and $E \subset F$ a closed subspace. Fixing an isomorphism $F \cong \underset{i \in I}\mathop{\lim\limits_{\displaystyle\rightarrow}} F_i$ as in Definition \ref{defn:nuclear_born}, we can write \[ \frac{F}{E} \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{i \in I} \frac{F_i}{F_i \cap E}. \] Thanks to Proposition \ref{prop:compactor_nuclear_map} it is enough to prove that the system morphisms $\phi_{i, j}: \frac{F_i}{F_i \cap E} \to \frac{F_j}{F_j \cap E}$ for any $i \le j$ are compactoid maps. By hypothesis $F_i \to F_j$ is a compactoid map and, since images of compactoid subsets under bounded maps are compactoid subsets, $F_i \to \frac{F_j}{F_j \cap E}$ is also a compactoid map.
By Theorem 8.1.3 (xi) of \cite{PGS} this is equivalent to saying that $\phi_{i,j}$ is a compactoid map. \ \hfill $\Box$ We recall that a dagger affinoid algebra is a bornological algebra which is isomorphic to a quotient of the algebra $\mathcal{W}_k^n(\rho)$ of over-convergent analytic functions on the polycylinder of polyradius $\rho$. The isomorphism \[ \mathcal{W}_k^n(\rho) \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{r > \rho} \mathcal{T}_k^n(r), \] where $\mathcal{T}_k^n(r)$ are the $k$-Banach algebras of strictly convergent power series on the polycylinder of polyradius $r$, endows $\mathcal{W}_k^n(\rho)$ with a bornology, and dagger affinoid algebras are endowed with the quotient bornology. We refer to Section 4 of \cite{BaBe} or Chapter 3 of \cite{Bam} for more details. \begin{prop} \label{prop:dagger_nuclear} The underlying bornological vector spaces of dagger affinoid algebras are nuclear. \end{prop} {\bf Proof.} Since by Proposition \ref{prop:nuclear_ind_lim} quotients of nuclear bornological spaces are nuclear, it is enough to check that $\mathcal{W}_k^n(\rho)$ is nuclear. We first look at the case in which $k$ is non-Archimedean. It is easy to check that the canonical restriction maps $\mathcal{T}_k^n(\rho) \to \mathcal{T}_k^n(\rho')$, for $\rho' < \rho$, are compactoid maps, which proves the claim. For details the reader can see Theorem 11.4.2 and Remark 11.4.3 of \cite{PGS}, where $\mathcal{A}^\dagger(\rho)$ denotes what in our notation is $\mathcal{W}_k^n(\rho)$. If $k$ is Archimedean one can write $\mathcal{W}_k^n(\rho)$ as the direct limit of the filtered system of Fr\'echet spaces of holomorphic functions on open polydisks of polyradius bigger than $\rho$; see Section 3.3 of \cite{Bam} for a detailed proof of this fact.
Thanks to a celebrated theorem of Montel these Fr\'echet spaces are nuclear Fr\'echet spaces (see for example Proposition 4 on page 26 of \cite{DV} for a proof of this fact) and by Lemma \ref{lem:frechet_nuclear} this is equivalent to saying that these spaces are nuclear as bornological spaces. So, since by Proposition \ref{prop:nuclear_ind_lim} direct limits of nuclear bornological spaces are nuclear, we deduce that $\mathcal{W}_k^n(\rho)$ is nuclear. \ \hfill $\Box$ \subsection{A relative flatness lemma} \label{sec:relative_flat} In the last section we proved that if $F$ is a nuclear bornological vector space then the functor $(-) \widehat{\otimes}_{k} F$ is exact. Now we want to find conditions on $(-) \widehat{\otimes}_{k} F$ to preserve a different kind of projective limit: the cofiltered ones. The notion of projective tensor product was introduced by Grothendieck, in the category of locally convex spaces, in order to find a topological tensor product which commutes with projective limits. This is the origin of the name projective tensor product. But the bornological projective tensor product does not commute with projective limits in general. So, we need to find some sufficient conditions that ensure this commutation for the projective limits we will study in Section \ref{sec:Stein_geometry}. This study involves comparing the bornological and topological projective tensor products. Thus, we start by recalling the notion of topological projective tensor product. \begin{defn} \label{defn:top_proj_tensor} Given $E, F \in {\text{\bfseries\sf{Tc}}}_k$ one defines $E \otimes_{\pi,k} F \in {\text{\bfseries\sf{Tc}}}_k$ as the algebraic tensor product equipped with the locally convex topology whose base of neighborhoods of zero is given by the family of absolutely convex hulls of subsets of $E \otimes_{k} F$ of the form \[ U \otimes V = \{ x \otimes y \ | \ x \in U, y \in V \} \] where $U$ and $V$ vary over the families of neighborhoods of zero of $E$ and $F$ respectively.
The \emph{complete projective tensor product} is defined as the separated completion of the space $E \otimes_{\pi,k} F$. The space obtained in this way is denoted $E \widehat{\otimes}_{\pi,k} F$. \end{defn} \begin{defn} \label{defn:top_nuclear} $E \in {\text{\bfseries\sf{Tc}}}_k$ is said to be \emph{nuclear} if $E \cong \underset{i \in I} \mathop{\lim\limits_{\displaystyle\leftarrow}} E_i$ for a cofiltered epimorphic projective system of semi-normed spaces $\{E_{i}\}_{i \in I}$ such that, for each $i < j$ in $I$, the maps $\widehat{E}_j \to \widehat{E}_i$ of Banach spaces induced on the separated completions are nuclear maps. \end{defn} \begin{defn} \label{defn:top_inj_tensor} Given $E, F \in {\text{\bfseries\sf{Tc}}}_k$ one defines the \emph{injective tensor product} $E \otimes_{\epsilon,k} F \in {\text{\bfseries\sf{Tc}}}_k$ as the algebraic tensor product equipped with the semi-norms $\{ \| \cdot \|_{p_i \otimes_\epsilon q_j} \}_{(i,j) \in I \times J}$, where for each $(i,j) \in I \times J$ the semi-norm $\| \cdot \|_{p_i \otimes_\epsilon q_j}$ is defined as in Definition \ref{defn:injective_norm}, $\{ p_i \}_{i \in I}$ is a family of semi-norms defining the topology of $E$ and $\{ q_j \}_{j \in J}$ is a family of semi-norms defining the topology of $F$. The separated completion of $E \otimes_{\epsilon,k} F$ is by definition the \emph{complete injective tensor product} and is denoted $E \widehat{\otimes}_{\epsilon,k} F$. \end{defn} \begin{lem} \label{lem:top_inj_pro} If $E \in {\text{\bfseries\sf{Tc}}}_k$ is nuclear then the functors $(-) \widehat{\otimes}_{\epsilon,k} E$ and $(-) \widehat{\otimes}_{\pi,k} E$ are naturally isomorphic. \end{lem} {\bf Proof.} For the Archimedean case one can refer to Theorem 1 on page 25 of \cite{DV}. For the non-Archimedean case the result is a direct consequence of Theorem 8.5.1 and Theorem 10.2.7 of \cite{PGS}.
\ \hfill $\Box$ \begin{defn} \label{defn:binuclear} Given $E \in {\text{\bfseries\sf{NBorn}}}_k$, we say that $E$ is \emph{binuclear} if $E$ is a nuclear bornological space and $E^t$ is a nuclear topological vector space. \end{defn} \begin{lem} \label{lem:LB_nuclear} Let $E \in {\text{\bfseries\sf{Tc}}}_k$. Suppose there exists a countable monomorphic compactoid inductive system of Banach spaces $\{E_n\}$ such that $E \cong \underset{n \in \mathbb N} \mathop{\lim\limits_{\displaystyle\rightarrow}} E_n$ (in particular $E$ is a normal space, cf. Definition \ref{defn:normal_space}). Then, $E$ is nuclear as a locally convex space and $E^b$ is nuclear as a bornological vector space. \end{lem} {\bf Proof.} If $k$ is Archimedean, then this lemma is proved in \cite{Hog3} Theorem 7, page 160, since compactoid inductive limits are complete. Let $k$ be non-Archimedean. Then, by Theorem 11.3.5 (v) of \cite{PGS} $E$ is a regular LB-space (in the sense of Definition \ref{defn:regular_LF}), hence $E^b \cong \underset{n \in \mathbb N} \mathop{\lim\limits_{\displaystyle\rightarrow}} E_n$ as a bornological vector space, so $E^b$ is nuclear. And by Theorem 11.3.5 (ix) of \cite{PGS} $E$ is nuclear as a locally convex space. \ \hfill $\Box$ For Fr\'echet spaces the situation is a bit more complicated. \begin{lem} \label{lem:frechet_nuclear} If $k$ is Archimedean, then a Fr\'echet space $E \in {\text{\bfseries\sf{Tc}}}_k$ is nuclear as a locally convex vector space if and only if $E^b$ is nuclear as a bornological vector space. For non-Archimedean base fields, if a Fr\'echet space $E \in {\text{\bfseries\sf{Tc}}}_k$ is nuclear as a locally convex vector space then $E^b$ is nuclear as a bornological space. \end{lem} {\bf Proof.} If $k$ is Archimedean then this lemma is proved in \cite{Hog3} Theorem 7 (i) and (ii), page 160. If $k$ is non-Archimedean then the nuclear Fr\'echet space $E$ is Montel by Corollary 8.5.3 of \cite{PGS}.
Therefore, by Theorem 8.4.5 ($\delta$) of \cite{PGS}, each bounded subset of $E$ is compactoid, \emph{i.e. } there is an isomorphism of bornological vector spaces ${\text{\bfseries\sf{Cpt}}}(E) \cong E^b$. Let's write \[ E^b \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{B \in \mathcal{D}_E} E_B \] as in Remark \ref{rem:gauge}, where $\mathcal{D}_E$ is the family of bounded disks of $E$, and \[ {\text{\bfseries\sf{Cpt}}}(E) \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{B \in {\text{\bfseries\sf{Cpt}}}_E} E_B \] where ${\text{\bfseries\sf{Cpt}}}_E$ is the family of compactoid disks of $E$. The family ${\text{\bfseries\sf{Cpt}}}_E$ is characterized as the family of bounded disks $B$ of $E$ for which there exists another bounded disk $B'$ such that $B \subset B'$ and the canonical map \[ E_B \to E_{B'} \] is compactoid. This characterization, in combination with the isomorphism \[ \mathop{\lim\limits_{\displaystyle\rightarrow}}_{B \in {\text{\bfseries\sf{Cpt}}}_E} E_B \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{B \in \mathcal{D}_E} E_B, \] implies that for every bounded disk $B$ of $E$ there exists another bounded disk $B'$ such that $B \subset B'$ and the canonical map \[ E_B \to E_{B'} \] is compactoid. Since $E$ is complete we can always suppose that $B$ and $B'$ are Banach disks (\emph{i.e. } that $E_B$ and $E_{B'}$ are Banach). This proves that $E^b$ satisfies the conditions of Definition \ref{defn:nuclear_born}. \ \hfill $\Box$ \begin{rem} If $k$ is non-Archimedean there exists a Fr\'echet space $E$ for which $E^b$ is nuclear whereas $E$ is not nuclear. This is due to the fact that, for non-Archimedean base fields, a morphism of Banach spaces is nuclear if and only if it is compactoid. But the Archimedean analogue of the notion of compactoid map is what is called a compact map (or operator) in classical functional analysis over $\mathbb R$ and $\mathbb C$.
Therefore, a nuclear Fr\'echet space over a non-Archimedean base field, following this terminology, is analogous to what in Archimedean functional analysis is called a Schwartz-Fr\'echet space (see \cite{TER} and the first chapter of \cite{Hog3} for a detailed account of the properties of Schwartz-Fr\'echet spaces). In the Archimedean case it is known that a Fr\'echet space is bornologically Schwartz if and only if it is Montel (cf. Theorem 8 on page 20 of \cite{Hog3}). Also in the non-Archimedean case a Fr\'echet space is bornologically nuclear if and only if it is Montel (cf. Corollary 8.5.3 of \cite{PGS}). One can prove (in both settings) that there exist Fr\'echet-Montel spaces which are not Schwartz. For explicit examples of such spaces, see Counterexamples 9.8.2 (vi) of \cite{PGS}, for $k$ non-Archimedean, and \S 4 of \cite{TER}, for $k$ Archimedean. \end{rem} We will be interested in finding conditions under which the functors ${(-)}^t$ and ${(-)}^b$ intertwine the complete bornological and the complete topological projective tensor products. In general ${(-)}^t$ and ${(-)}^b$ preserve neither the complete nor the incomplete projective tensor products. \begin{prop} \label{prop:tensor_t_b_frechet} Let $E, F \in {\text{\bfseries\sf{Born}}}_k$ be bornological Fr\'echet spaces, one of which is binuclear. Then \[ (E \widehat{\otimes}_{\pi,k} F)^t \cong E^t \widehat{\otimes}_{\pi,k} F^t. \] If $E, F \in {\text{\bfseries\sf{Tc}}}_k$ are Fr\'echet spaces, one of which is nuclear, then \[ (E \widehat{\otimes}_{\pi,k} F)^b \cong E^b \widehat{\otimes}_{\pi,k} F^b. \] \end{prop} {\bf Proof.} On page 215 of \cite{H2} it is proved that for metrizable topological or bornological spaces \[ (E \otimes_{\epsilon,k} F)^t \cong E^t \otimes_{\epsilon,k} F^t \] and \[ (E \otimes_{\epsilon,k} F)^b \cong E^b \otimes_{\epsilon,k} F^b.
\] Since for metrizable topological or bornological spaces the bornological and the topological notions of convergence agree (see the last lines of page 108 of \cite{H2} or Proposition 1.17 of \cite{Bam2} for a detailed proof of this fact), it follows that for metrizable spaces the notions of bornological and topological completeness agree (see also Corollary 1.18 of \cite{Bam2}). Therefore, we deduce that \[ (E \widehat{\otimes}_{\epsilon,k} F)^t \cong E^t \widehat{\otimes}_{\epsilon,k} F^t \] and \[ (E \widehat{\otimes}_{\epsilon,k} F)^b \cong E^b \widehat{\otimes}_{\epsilon,k} F^b. \] Finally, since one of $E$ or $F$ is binuclear, the complete injective and projective tensor products coincide by Lemma \ref{lem:top_inj_pro} and Proposition \ref{prop:nuclear_proj_equal_injecive} (in both categories ${\text{\bfseries\sf{Tc}}}_k$ and ${\text{\bfseries\sf{Born}}}_k$), obtaining the required isomorphisms. \ \hfill $\Box$ We also underline that the Archimedean version of Proposition \ref{prop:tensor_t_b_frechet} is discussed in Theorem 2.3 of \cite{M2}. \begin{lem} \label{lem:reduced_loc_conv} Let $\{ E_i \}_{i\in I}$ be a cofiltered projective system in ${\text{\bfseries\sf{Tc}}}_{k}$ whose projective limit we call $E$, and let $F \in {\text{\bfseries\sf{Tc}}}_{k}$. Then \[ E \widehat{\otimes}_{\pi,k} F = (\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i\in I} E_i) \widehat{\otimes}_{\pi, k} F \cong \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in I}(E_i \widehat{\otimes}_{\pi, k} F). \] \end{lem} {\bf Proof.} Cf. Proposition 9 on page 192 of \cite{H2}. \ \hfill $\Box$ \begin{cor} \label{cor:proj_lim_1} Let $\{ E_i\}_{i\in \mathbb N}$ be a cofiltered projective system of Banach vector spaces such that $\underset{i\in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} E_i$ is a binuclear Fr\'echet bornological vector space, and let $F$ be a Fr\'echet bornological vector space over $k$.
Then, the canonical map \[ E \widehat{\otimes}_{\pi,k} F = (\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i\in \mathbb N} E_i) \widehat{\otimes}_{\pi, k} F \to \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N}(E_i \widehat{\otimes}_{\pi, k} F) \] is an isomorphism of bornological vector spaces. \end{cor} {\bf Proof.} The claim follows from the chain of isomorphisms \[ E \widehat{\otimes}_{\pi, k} F \stackrel{\ref{example:normal}}{\cong} (E \widehat{\otimes}_{\pi, k} F)^{t b} \stackrel{\ref{prop:tensor_t_b_frechet}}{\cong} (E^t \widehat{\otimes}_{\pi, k} F^t)^b \stackrel{\ref{lem:reduced_loc_conv}}{\cong} (\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i\in \mathbb N} (E_i^t \widehat{\otimes}_{\pi, k} F^t))^b \stackrel{}{\cong} \] \[ \stackrel{\ref{obs:tbequiv}}{\cong} \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i\in \mathbb N} (E_i^t \widehat{\otimes}_{\pi, k} F^t)^b \stackrel{}{\cong} \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i\in \mathbb N} (E_i \widehat{\otimes}_{\pi, k} F), \] where the last isomorphism is immediate from the definition of the projective tensor product (compare Definition \ref{defn:top_proj_tensor} with Definition 3.57 of \cite{BaBe}). \ \hfill $\Box$ \begin{cor} \label{cor:proj_lim_2} Let $\{ E_i\}_{i\in \mathbb N}$ be a cofiltered projective system of binuclear Fr\'echet bornological vector spaces and $F$ a bornological Fr\'echet vector space over $k$. Then, the canonical map \[ E \widehat{\otimes}_{\pi,k} F = (\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i\in \mathbb N} E_i) \widehat{\otimes}_{\pi, k} F \to \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N}(E_i \widehat{\otimes}_{\pi, k} F) \] is an isomorphism of bornological vector spaces. \end{cor} {\bf Proof.} Notice that Proposition \ref{prop:nuclear_proj}, together with the well-known fact that a projective limit of nuclear spaces in ${\text{\bfseries\sf{Tc}}}_k$ is a nuclear space and Proposition \ref{prop:limpro_commutation_t}, implies that $E$ is binuclear.
So, also for this corollary we have the chain of isomorphisms \[ E \widehat{\otimes}_{\pi, k} F \stackrel{\ref{example:normal}}{\cong} (E \widehat{\otimes}_{\pi, k} F)^{t b} \stackrel{\ref{prop:tensor_t_b_frechet}}{\cong} (E^t \widehat{\otimes}_{\pi, k} F^t)^b \stackrel{\ref{lem:reduced_loc_conv}}{\cong} (\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i\in \mathbb N} (E_i^t \widehat{\otimes}_{\pi, k} F^t))^b \stackrel{}{\cong} \] \[ \stackrel{\ref{obs:tbequiv}}{\cong} \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i\in \mathbb N} (E_i^t \widehat{\otimes}_{\pi, k} F^t)^b \stackrel{\ref{prop:tensor_t_b_frechet}}{\cong} \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i\in \mathbb N} (E_i \widehat{\otimes}_{\pi, k} F). \] \ \hfill $\Box$ \subsection{Strict exact sequences in bornological and topological settings} \label{sec:exact_sequences} This section contains some results, needed later, about how the notions of strict short exact sequence in ${\text{\bfseries\sf{Born}}}_k$ and ${\text{\bfseries\sf{Tc}}}_k$ are related. \begin{lem} \label{lem:compactoid_quotient} Let $f: E \to F$ be a surjective continuous map between Fr\'echet spaces. For any compactoid subset $B \subset F$, there exists a compactoid subset $B' \subset E$ such that $f(B') = B$. In particular, the functor ${\text{\bfseries\sf{Cpt}}}: {\text{\bfseries\sf{Tc}}}_k \to {\text{\bfseries\sf{CBorn}}}_k$ preserves strict short exact sequences of Fr\'echet spaces. \end{lem} {\bf Proof.} When $k = \mathbb R, \mathbb C$, see \cite{M} Theorem 1.62, and for the non-Archimedean case see Theorem 3.8.33 of \cite{PGS}. \ \hfill $\Box$ \begin{lem} \label{lem:compactoid_neumann} Let $F$ be a nuclear Fr\'echet space. Then the von Neumann and the compactoid bornologies coincide, \emph{i.e. } the identity map gives an isomorphism $F^b \cong {\text{\bfseries\sf{Cpt}}}(F)$ in ${\text{\bfseries\sf{CBorn}}}_{k}$.
\end{lem} {\bf Proof.} In both cases one can show that nuclear Fr\'echet spaces are Montel spaces, and for Montel spaces the von Neumann bornology and the compactoid bornology agree (by definition). Possible references for these results are Sections 8.4 and 8.5 of \cite{PGS} for non-Archimedean base fields and Theorem 8 on page 20 of \cite{Hog3} for Archimedean base fields. \ \hfill $\Box$ \begin{lem} \label{lem:compactoid_conv} Let $E$ be a nuclear Fr\'echet space and $\{ x_n \}$ a sequence of elements of $E$. Then, the following are equivalent: \begin{enumerate} \item $\{ x_n \}$ converges topologically to $0$; \item $\{ x_n \}$ converges bornologically to $0$ for the bornology of $E^b$; \item $\{ x_n \}$ converges bornologically to $0$ for the bornology of ${\text{\bfseries\sf{Cpt}}}(E)$. \end{enumerate} \end{lem} {\bf Proof.} This result is proven for Archimedean base fields in \cite{M2} Corollary 3.8. For the general case: for the equivalence of (1) and (2) we can refer to \cite{Bam2}, Proposition 1.17. Finally, conditions (2) and (3) are equivalent by Lemma \ref{lem:compactoid_neumann}. \ \hfill $\Box$ \begin{rem} We conjecture that Lemma \ref{lem:compactoid_conv} holds for all Fr\'echet spaces, without the nuclearity hypothesis. Indeed, the proof given in \cite{M2} for Archimedean base fields does not use the nuclearity hypothesis, but it adapts to the non-Archimedean case only when the base field is locally compact. Notice that the only missing implication for a general non-Archimedean base field is (3) implies (2) or (3) implies (1). \end{rem} \begin{cor} \label{lem:compactoid_epi} Let $f: E \to F$ be a morphism of nuclear Fr\'echet spaces.
Then, the following are equivalent: \begin{enumerate} \item $f$ is an epimorphism in the category of complete locally convex vector spaces; \item $f: E^b \to F^b$ is an epimorphism in ${\text{\bfseries\sf{CBorn}}}_{k}$; \item $f: {\text{\bfseries\sf{Cpt}}}(E) \to {\text{\bfseries\sf{Cpt}}}(F)$ is an epimorphism in ${\text{\bfseries\sf{CBorn}}}_{k}$; \item $f: E^b \to F^b$ has bornologically dense image. \end{enumerate} {\bf Proof.} The characterization of epimorphisms given in Proposition \ref{prop:strict_morphisms_born_close}, in combination with Lemma \ref{lem:compactoid_neumann}, yields that conditions (2), (3), and (4) are equivalent. Then, Lemma \ref{lem:compactoid_conv} implies that (1) is equivalent to (4), because Fr\'echet spaces are metric spaces, so the topological closure of the image of $E$ in $F$ is equal to its set of limit points (for the topology of $F$). And since $F^b$ is a proper bornological vector space (because it is normal) we can apply Theorem \ref{prop:sequence_close} to deduce that the bornological closure of $E$ in $F$ agrees with its set of bornological limit points. \ \hfill $\Box$ \begin{lem} \label{lem:short_exact_t} If a short sequence of bornological Fr\'echet spaces \[ 0 \to E \to F \to G \to 0 \] is strictly exact in ${\text{\bfseries\sf{CBorn}}}_{k}$ then \[ 0 \to E^t \to F^t \to G^t \to 0 \] is strictly exact in ${\text{\bfseries\sf{Tc}}}_k$. \end{lem} {\bf Proof.} The ${(-)}^t$ functor is a left adjoint, so it preserves all colimits. Therefore, we only have to check that it preserves kernels of morphisms of Fr\'echet spaces. Let's consider the kernel of the strict morphism \[ f^t: F^t \to G^t. \] $\ker(f^t)$ is a Fr\'echet space, and therefore it is a normal locally convex space. Hence, we can apply Proposition \ref{prop:limpro_commutation_t} to deduce that $\ker(f^t) \cong \ker(f)^t \cong E^t$, and the lemma is proved.
\ \hfill $\Box$ One problem that complicates our work is that the converse statement of Lemma \ref{lem:short_exact_t} does not hold in general. However, we have the following result. \begin{lem} \label{lem:compactoid_short_exact} A short sequence of nuclear Fr\'echet spaces \[ 0 \to E \to F \to G \to 0 \] is strictly exact in ${\text{\bfseries\sf{Tc}}}_k$ if and only if \[ 0 \to E^b \to F^b \to G^b \to 0 \] is strictly exact in ${\text{\bfseries\sf{CBorn}}}_k$. \end{lem} {\bf Proof.} By Lemma \ref{lem:short_exact_t} the strict exactness of the second sequence implies the strict exactness of the first one. Applying the functor ${(-)}^b$ to the first sequence we obtain a strict exact sequence \[ 0 \to E^b \to F^b \to G^b \] because ${(-)}^b$ is a right adjoint functor. Since the sequence \[ 0 \to E^b \to F^b \to G^b \to 0 \] is manifestly algebraically exact, it remains to check that $F^b \to G^b$ is a strict epimorphism. By the nuclearity hypothesis on $E, F$ and $G$ we can apply Lemma \ref{lem:compactoid_neumann} to deduce that the von Neumann bornology of these spaces coincides with the compactoid one. Finally, we can apply Lemma \ref{lem:compactoid_quotient} to conclude the proof. \ \hfill $\Box$ \begin{rem} The functor ${(-)}^b$ does not preserve strict exactness of short exact sequences in general. It is known that there exist Fr\'echet-Montel spaces $V$ whose quotient $V/W$ by a closed subspace $W$ is not Montel. Since for such a $V$ one has $V^b \cong {\text{\bfseries\sf{Cpt}}}(V)$ whereas $(V/W)^b \not\cong {\text{\bfseries\sf{Cpt}}}(V/W)$, Lemma \ref{lem:compactoid_quotient} implies that \[ {\text{\bfseries\sf{Cpt}}}(V/W) \cong \frac{V^b}{W^b}. \] So, ${(-)}^b$ does not preserve cokernels in this case.
\end{rem} \subsection{Derived functors of the inverse limit functor} \label{sec:der_lim} We assume the reader is familiar with the notion of a family of injective objects with respect to a functor between quasi-abelian categories, in the sense of Schneiders (Definition 1.3.2 of \cite{SchneidersQA}). In this section we recall how to derive the inverse limit functors in quasi-abelian categories, and then focus on the case of ${\text{\bfseries\sf{CBorn}}}_k$. Conditions for the existence of the derived functor of the inverse limit functors are given in Proposition \ref{prop:inv_prosmans}, which is proven in Prosmans \cite{Pr3}. We will then discuss a bornological version of the classical Mittag-Leffler Lemma for Fr\'echet spaces, which is an important example of the application of homological algebra to functional analysis. There is an extensive literature on the study of the derived functors of the inverse limit functor in the category of locally convex spaces: Palamodov \cite{Pal}, Vogt \cite{Vogt}, and Retakh \cite{Ret} are only some of the main contributions. In this section we restrict ourselves to obtaining bornological versions of the basic results for Fr\'echet spaces, without looking for the utmost generality. \begin{prop} \label{prop:inv_prosmans} Let $I$ be a small category and ${\tC}$ a quasi-abelian category with exact products. Then, the objects in ${\tC}^{I^{op}}$ which are Roos-acyclic form a $\underset{i\in I}\mathop{\lim\limits_{\displaystyle\leftarrow}}$-acyclic family. In particular, the functor \[\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i\in I}:{\tC}^{I^{op}} \to {\tC} \] is right derivable to a functor \[D^{+}({\tC}^{I^{op}} ) \to D^{+}({\tC} ) \] and for any object $V \in {\tC}^{I^{op}}$, we have a canonical isomorphism \begin{equation}\label{eqn:Derived2Roos} \mathbb R \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i\in I} V_{i}\cong Roos(V) \end{equation} where the right hand side is the Roos complex of $V$. \end{prop} {\bf Proof.} See Section 3.3 of \cite{Pr3}.
\hfill $\Box$ \begin{cor}The family of $\underset{i \in I}\mathop{\lim\limits_{\displaystyle\leftarrow}}$-acyclic objects forms a family of injective objects relative to the functor $\underset{i \in I}\mathop{\lim\limits_{\displaystyle\leftarrow}}: {\tC}^{I} \to {\tC}$. \end{cor} In the case of a tower (\emph{i.e. } when $I = \mathbb N$ with its natural order), the situation is easier to deal with. Consider a functor $V:\mathbb{N}^{op}\to {\tC}$ where ${\tC}$ is a quasi-abelian category with exact products. We can consider the complex in degrees $0$ and $1$ \begin{equation}\label{eqn:Weibel} \prod_{i=0}^{\infty} V_{i} \stackrel{\Delta}\longrightarrow \prod_{i=0}^{\infty} V_{i} \end{equation} defined by \[\Delta(\dots ,a_2,a_1,a_{0}) = (\dots, a_2 - \overline{a_3},a_1 - \overline{a_2},a_0 - \overline{a_1}) \] where $\overline{a_i}$ denotes the image of $a_i\in V_i$ inside $V_{i-1}$. In this particular case, the Roos complex reduces to the complex of equation (\ref{eqn:Weibel}). \begin{lem}\label{lem:RoosML} Let ${\tC}$ be a quasi-abelian category with exact products and consider a functor $V:\mathbb{N}^{op}\to {\tC}$. Then $\mathbb R \underset{i\in \mathbb{N}}\mathop{\lim\limits_{\displaystyle\leftarrow}} V_{i}$ is isomorphic to the complex in degrees $0$ and $1$ given in equation (\ref{eqn:Weibel}). \end{lem} {\bf Proof.} Since the cardinality of the natural numbers is less than the second infinite cardinal, Theorem 3.10 of \cite{Pr4} implies that $LH^{n}(\mathbb R \underset{i\in \mathbb{N}}\mathop{\lim\limits_{\displaystyle\leftarrow}} V_{i})=0$ for all $n\geq 2$. The fact that $\mathbb R \underset{i\in \mathbb{N}}\mathop{\lim\limits_{\displaystyle\leftarrow}} V_{i}$ is represented by (\ref{eqn:Weibel}) follows from the general definition of the Roos complex and the isomorphism (\ref{eqn:Derived2Roos}).
\hfill $\Box$ \begin{lem}\label{lem:TopML} Let $V$ be a projective system of Fr\'{e}chet spaces in ${\text{\bfseries\sf{Tc}}}_k$ indexed by $\mathbb N$ where all system morphisms have dense images. Then, the complex \begin{equation} 0 \to \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb{N}}V_{i} \to \prod_{i \in \mathbb{N}}V_{i} \stackrel{\Delta}\to \prod_{i \in \mathbb{N}}V_{i} \to 0 \end{equation} is strictly exact. \end{lem} {\bf Proof.} We can first show exactness in the category ${\text{\bfseries\sf{Vect}}}_{k}$ by applying the standard Mittag-Leffler theorem for (topological) Fr\'{e}chet spaces (see for example \cite{DV}, Lemma 1 on page 45) to \[\xymatrix{0 \ar[d] & 0 \ar[l] \ar[d]& 0 \ar[l] \ar[d]& \cdots \ar[l] \ar[d] \\ V_1 \ar[d] & V_2 \ar[l] \ar[d]& V_3 \ar[l] \ar[d]& \cdots \ar[l] \ar[d] \\ V_1 \ar[d] & V_1 \times V_2 \ar[l] \ar[d]& V_1 \times V_2 \times V_3\ar[d] \ar[l] & \cdots \ar[l] \ar[d]\\ 0 \ar[d] & V_1 \ar[d] \ar[l]& V_1 \times V_2 \ar[d] \ar[l]& \cdots \ar[d] \ar[l] \\ 0 & 0 \ar[l] & 0 \ar[l] & \cdots \ar[l] .} \] The lemma in \cite{DV} treats only the case $k = \mathbb C$, but it is easy to check that the proof works over any base field, since the only hypothesis used is that the spaces involved are endowed with a metric for which they are complete. It is an immediate consequence of the open mapping theorem for Fr\'echet spaces (see \cite{H2} Section 4.4.7, page 61) that the morphism $\Delta$ is a strict epimorphism and therefore the sequence is strictly exact. \hfill $\Box$ We can now state our bornological version of the Mittag-Leffler Lemma. \begin{lem} (Mittag-Leffler) \\ \label{lemma:mittag_frechet} Let $E, F, G \in {\text{\bfseries\sf{CBorn}}}_k^{\mathbb N^{op}}$ be projective systems of bornological Fr\'echet spaces over $k$ indexed by $i \in \mathbb N$.
Let \begin{equation} \label{eqn:short_ml} 0 \to \{ E_i \}_{i \in \mathbb N} \stackrel{\eta}{\to} \{ F_i \}_{i \in \mathbb N} \stackrel{\psi}{\to} \{ G_i \}_{i \in \mathbb N} \to 0 \end{equation} be a short exact sequence of systems where each $\eta_{i}$ and $\psi_{i}$ is strict in ${\text{\bfseries\sf{CBorn}}}_k$. Suppose also that \begin{enumerate} \item $\{E_i \}_{i \in \mathbb N} $ is an epimorphic system; \item $\underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} E_i$, $\underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} F_i$ and $\underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} G_i$ are nuclear bornological spaces. \end{enumerate} Then, the resulting sequence \[ 0 \to \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} E_i \to \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} F_i \to \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} G_i \to 0 \] is strictly exact in ${\text{\bfseries\sf{CBorn}}}_k$, where the limits are calculated in ${\text{\bfseries\sf{CBorn}}}_k$. \end{lem} {\bf Proof.} The datum of the short exact sequence (\ref{eqn:short_ml}) is equivalent to a sequence of strictly exact sequences \[ i \mapsto (0 \to E_i \stackrel{\eta_{i}}\to F_i \stackrel{\psi_{i}}\longrightarrow G_i \to 0) \] whose morphisms are compatible with the system morphisms of the projective systems. Then, applying the functor ${(-)}^t$ to these sequences \[ i \mapsto (0 \to E_i^t \stackrel{\eta_{i}^t}\to F_i^t \stackrel{\psi_{i}^t}\longrightarrow G_i^t \to 0) \] yields strictly exact sequences in ${\text{\bfseries\sf{Tc}}}_k$, thanks to Lemma \ref{lem:short_exact_t}. The system maps $E_{j}^{t} \to E_{i}^{t}$ have topologically dense set-theoretic image by Observation \ref{obs:Dense2Dense}. Therefore, we can apply the Mittag-Leffler lemma for Fr\'echet spaces (cf.
Lemma \ref{lem:TopML}) to the systems $\{E_i^t \}_{i \in \mathbb N}$ to get a strictly exact sequence in ${\text{\bfseries\sf{Tc}}}_k$ \begin{equation} \label{eqn:short_ml2} 0 \to \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} (E_i^t) \to \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} (F_i^t) \to \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} (G_i^t) \to 0 \end{equation} of Fr\'echet spaces. By Proposition \ref{prop:limpro_commutation_t}, the functors $\underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}}$ and ${(-)}^t$ commute in (\ref{eqn:short_ml2}). Applying Lemma \ref{lem:compactoid_short_exact} we deduce that the strict exactness of the sequence \[ 0 \to (\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} E_i)^t \to (\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} F_i)^t \to (\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} G_i)^t \to 0 \] implies the strict exactness of the sequence \[ 0 \to \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} E_i \to \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} F_i \to \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} G_i \to 0, \] concluding the proof. \ \hfill $\Box$ \begin{cor}\label{cor:BornML} Let $V$ be a projective system of bornological Fr\'{e}chet spaces indexed by $\mathbb N$ where all system morphisms have dense images and $\underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} V_i$ is nuclear. Then, the complex \begin{equation} 0 \to \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb{N}}V_{i} \to \prod_{i \in \mathbb{N}}V_{i} \stackrel{\Delta}\to \prod_{i \in \mathbb{N}}V_{i} \to 0 \end{equation} is strictly exact. \end{cor} {\bf Proof.} Using the bornological version of the Mittag-Leffler Lemma, \emph{i.e. } Lemma \ref{lemma:mittag_frechet}, we can use the same proof as for Lemma \ref{lem:TopML} to deduce the corollary.
\hfill $\Box$ \begin{cor}\label{cor:BornML2} Let $V$ be a projective system of bornological Fr\'{e}chet spaces indexed by $\mathbb N$ where all system morphisms have dense images and $\underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} V_i$ is nuclear. Then \begin{equation} \label{eqn:ML_Roos} \mathbb R {\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb{N}}} V_{i} \cong \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb{N}}V_{i}. \end{equation} \end{cor} {\bf Proof.} Corollary \ref{cor:BornML} is equivalent to saying that the Roos complex of $V$ has cohomology only in degree $0$. Therefore, Proposition \ref{prop:inv_prosmans} implies (\ref{eqn:ML_Roos}). \hfill $\Box$ \begin{lem}\label{lem:LongShort} In any quasi-abelian category ${\tC}$, a complex \[\cdots \to V^{n+1} \stackrel{d^{n+1}}\to V^{n} \stackrel{d^{n}}\to V^{n-1} \to \cdots \] is strictly exact if and only if the sequences \begin{equation} \label{eqn:exact_sequence} 0 \to \ker(d^{n}) \to V^{n} \to \ker(d^{n-1}) \to 0 \end{equation} are strictly exact for each $n$. Given a projective system of strictly exact complexes \begin{equation} \label{eqn:exact_complex} \cdots \to V_{i}^{n+1} \stackrel{d_{i}^{n+1}}\to V_{i}^{n} \stackrel{d_{i}^{n}}\to V_{i}^{n-1} \to \cdots \end{equation} where both $\{\ker(d^{n}_{i})\}_{i \in \mathbb N}$ and $\{V_{i}^{n}\}_{i \in \mathbb N}$ are $\underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}}$-acyclic systems, the projective limit \[\cdots \to \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N}V_{i}^{n+1} \stackrel{\mathop{\lim\limits_{\displaystyle\leftarrow}} d_{i}^{n+1}}\to \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N}V_{i}^{n} \stackrel{\mathop{\lim\limits_{\displaystyle\leftarrow}} d_{i}^{n}}\to\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} V_{i}^{n-1} \to \cdots \] is strictly exact.
\end{lem} {\bf Proof.} By Corollary 1.2.20 of \cite{SchneidersQA}, the complex (\ref{eqn:exact_complex}) is strictly exact if and only if $LH^{n}(V)=0$ for all $n$. This is equivalent to the two term complexes $0 \to \coim(d^{n + 1}) \to \ker(d^n) \to 0$ vanishing in the left heart of ${\tC}$ (which is a full subcategory of the derived category, cf. Corollary 1.2.20 of ibid.). By this explicit description of the left heart, the condition $LH^{n}(V)=0$ is equivalent to the canonical morphism $\coim(d^{n + 1}) \to \ker(d^n)$ being an isomorphism. Therefore, the strict exactness of the sequences \[ 0 \to \ker(d^{n}) \to V^{n} \to \coim(d^{n}) \to 0 \] implies the strict exactness of the sequences (\ref{eqn:exact_sequence}). For the second statement, observe that the sequences \[0 \to \ker{d^{n}} \to \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i\in \mathbb{N}} V^{n}_{i} \to \ker{d^{n-1}} \to 0 \] are strictly exact, being the application of $\mathbb{R} \underset{i\in \mathbb{N}}\mathop{\lim\limits_{\displaystyle\leftarrow}}$ to the strict short exact sequences \[0 \to \ker{d_{i}^{n}} \to V^{n}_{i} \to \ker{d_{i}^{n-1}} \to 0 \] of $\underset{i\in \mathbb{N}}\mathop{\lim\limits_{\displaystyle\leftarrow}}$-acyclic objects, thought of as an exact triangle in $D^{+}({\tC}^{\mathbb{N}})$. \hfill $\Box$ \section{Stein domains} \label{sec:Stein} \subsection{Bornological Fr\'echet algebras} The notion of multiplicatively convex bornological algebra (in short m-algebra) introduced in \cite{BaBe} (cf. Definition 4.1 of ibid.) is not general enough for all purposes of analytic geometry. For example, the bornological Fr\'echet algebras of analytic functions on open subsets of $\mathbb A_k^n$ are not multiplicatively convex in this sense. So, we introduce here a generalization of multiplicatively convex bornological algebras which also encompasses bornological Fr\'echet algebras. We start by recalling what the spectrum of a general bornological algebra is.
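As a concrete example to keep in mind (a sketch: the notation $D^-(0,1)$ for the open unit disc and the presentation below are illustrative and not fixed elsewhere in this section), the Fr\'echet algebra of analytic functions on the open unit disc can be presented as a countable projective limit of dagger affinoid, hence multiplicatively convex, algebras:

```latex
% Illustrative presentation (the notation D^-(0,1) is ours): analytic
% functions on the open unit disc as a projective limit, over radii
% r < 1, of the dagger affinoid algebras k<r^{-1}X>^dagger along the
% restriction maps, which have dense images.
\mathcal{O}(D^-(0,1)) \;\cong\; \mathop{\lim\limits_{\displaystyle\leftarrow}}_{r < 1} k \langle r^{-1} X \gt^\dagger
```

In the terminology introduced below, such an algebra is densely defined and pro-multiplicatively convex, even though it is not itself a bornological m-algebra.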
\begin{defn} Let $A$ be a bornological algebra, \emph{i.e. } an object of ${\text{\bfseries\sf{Comm}}}({\text{\bfseries\sf{Born}}}_k)$. We define the \emph{spectrum} of $A$ as the set of equivalence classes of bounded algebra morphisms from $A$ to valued extensions of $k$. The spectrum is denoted by $\mathcal{M}(A)$ and it is equipped with the weak topology, \emph{i.e. } the weakest topology for which all maps of the form $\chi \mapsto |\chi(f)| \in \mathbb R_{\ge 0}$, for all $f \in A$, are continuous. \end{defn} This definition extends the one given in Definition 4.2 of \cite{BaBe} for m-algebras, but the spectrum of a general bornological algebra is not as well behaved as the spectrum of a bornological m-algebra. For example, there exist bornological algebras whose underlying bornological vector space is complete and whose spectrum is empty. So, it is important to single out a suitable sub-category of the category of bornological algebras for which the notion of spectrum is not pathological. \begin{defn}\label{defn:ProComplete} Let $A$ be an object of ${\text{\bfseries\sf{Comm}}}({\text{\bfseries\sf{CBorn}}}_{k})$. We say that $A$ is \emph{pro-multiplicatively convex} (or a \emph{pro m-algebra}) if there is an isomorphism \[ A \cong \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in I} A_i \] in ${\text{\bfseries\sf{Comm}}}({\text{\bfseries\sf{Born}}}_{k})$, where $I$ is a cofiltered small category and the $A_i$ are complete bornological m-algebras. \end{defn} \begin{defn}\label{defn:DenseStrictlyDense} Let $A $ be a pro-multiplicatively convex object of ${\text{\bfseries\sf{Comm}}}({\text{\bfseries\sf{CBorn}}}_{k})$. $A$ is called \emph{densely defined} if there exists an isomorphism $A \cong \underset{i \in I}\mathop{\lim\limits_{\displaystyle\leftarrow}} A_i$ of bornological algebras as in Definition \ref{defn:ProComplete} such that for any $i \in I$ the canonical map $\pi_i:A \to A_i$ has dense set-theoretic image.
\end{defn} We remark that the condition in Definition \ref{defn:DenseStrictlyDense} can be stated purely categorically by requiring that the morphisms $\pi_i: A \to A_i$ are epimorphisms of the underlying complete bornological vector spaces (cf. Proposition \ref{prop:strict_morphisms_born_close}). \begin{prop} \label{prop:frechet_spectrum} Let $A$ be a densely defined pro-multiplicatively convex bornological algebra, such that \[ A \cong \mathop{\lim\limits_{\displaystyle\leftarrow}}_{n \in \mathbb N} A_n \] with $A_n$ Banach algebras. Then, $A$ is a bornological Fr\'echet algebra whose spectrum coincides with $\mathcal{M}(A^t)$, \emph{i.e. } \[ \mathcal{M}(A) = \bigcup_{n \in \mathbb N} \mathcal{M}(A_n) \] topologically. \end{prop} {\bf Proof.} $A$ is a bornological Fr\'echet algebra, \emph{i.e. } the underlying bornological vector space of $A$ is that of a bornological Fr\'echet space. Indeed, $\underset{n \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} (A_n^t)$ is a Fr\'echet space as a consequence of Proposition \ref{prop:limpro_commutation_t}, which states that $\underset{n \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} (A_n^t) \cong (\underset{n \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} A_n)^t = A^t$. Thus, \[ (A^t)^b \cong \mathop{\lim\limits_{\displaystyle\leftarrow}}_{n \in \mathbb N} ((A_n^t)^b) \cong A. \] This implies that a character $A \to K$ is bounded if and only if it is continuous for $A^t$. The densely defined hypothesis implies that \[ \mathcal{M}(A) \cong \mathcal{M}(A^t) = \bigcup_{n \in \mathbb N} \mathcal{M}(A_n) \] because $\pi_i: A \to A_i$ is an epimorphism if and only if $\pi_i^t: A^t \to A_i^t$ is an epimorphism, since the $A_i$ are Banach. The identification \[ \mathcal{M}(A^t) = \bigcup_{n \in \mathbb N} \mathcal{M}(A_n) \] is a well-known result of the classical theory of Fr\'echet algebras (\emph{e.g. }, see Section 2.5 of \cite{Bam}).
\ \hfill $\Box$ We end this section by recalling a result from \cite{Bam2} that will be used in the next subsection. \begin{defn} \label{defn:born_web} Let $E$ be a separated bornological vector space of convex type over $k$. A pair $(\mathcal{V}, b)$ consisting of mappings $\mathcal{V} : \underset{j \in \mathbb N}\bigcup \mathbb N^j \to \mathcal{P}(E)$ and $b : \mathbb N^\mathbb N \to (|k^\times|)^\mathbb N$ is called a \emph{bornological web} if all of the conditions below hold: \begin{enumerate} \item The image of $\mathcal{V}$ consists of disks. \item $\mathcal{V}({\rm \varnothing}) = E$. \item For any finite sequence $(n_0, \dots, n_j)$, the set $\mathcal{V}(n_0, \dots, n_j)$ is absorbed by \[ \bigcup_{n \in \mathbb N} \mathcal{V}(n_0, \dots, n_j, n). \] \item For every $s: \mathbb N \to \mathbb N$ the series $\underset{n \in \mathbb N}\sum \lambda(s)_n x_n$, with $\lambda(s)_n \in k$, converges bornologically in $E$, whenever we choose $x_n \in \mathcal{V}(s(0), \dots ,s(n))$ and $|\lambda(s)_n| = b(s)_n$. \end{enumerate} \end{defn} \begin{lem} \label{lem:closed_graph} Let $A, B$ be objects of ${\text{\bfseries\sf{Comm}}}({\text{\bfseries\sf{Born}}}_{k})$ for which the underlying bornological vector space of $A$ is complete and that of $B$ is a webbed bornological vector space. Let $\phi: A \to B$ be a morphism of the underlying objects in ${\text{\bfseries\sf{Comm}}}({\text{\bfseries\sf{Vect}}}_{k})$. Suppose that in $B$ there is a family of ideals $\Im$ such that \begin{enumerate} \item each $I \in \Im$ is (bornologically) closed in $B$ and each $\phi^{-1}(I)$ is closed in $A$; \item for each $I \in \Im$ one has $\dim_k B/I < \infty$; \item $\underset{I \in \Im} \bigcap I = (0)$. \end{enumerate} Then, $\phi$ is bounded. \end{lem} {\bf Proof.} The proof can be found in \cite{Bam2}, Proposition 4.23.
\ \hfill $\Box$ And finally the following lemma permits us to apply Lemma \ref{lem:closed_graph} to the morphisms of bornological algebras we will study in the next subsection. \begin{lem} \label{lem:webbed} The underlying bornological vector space of every $k$-dagger affinoid algebra is a webbed bornological vector space. \end{lem} {\bf Proof.} The assertion about dagger affinoid algebras is a direct consequence of Example 2.3 (2) of \cite{Bam2}. \ \hfill $\Box$ \begin{rem} The notion of pro-multiplicative algebra is more general than what is strictly needed in this paper. However, this notion is very convenient when both Fr\'echet algebras and LB algebras are considered in the same discussion. Moreover, the material in this section will be a reference for future works in which we will analyse dagger quasi-Stein algebras, for which the use of the notion of pro-multiplicative bornological algebra is unavoidable. \end{rem} \subsection{Stein algebras and Stein spaces} \begin{defn} A \emph{Weierstrass localization} of a $k$-dagger affinoid algebra $A$ is a morphism of the form \[ A \to \frac{ A \langle r_1^{-1} X_1, \ldots, r_n^{-1} X_n \gt^\dagger}{(X_1 - f_1, \ldots, X_n - f_n)} \] for some $f_1, \dots, f_n \in A$, $(r_i) \in \mathbb R_+^n$. \end{defn} \begin{defn} Let $A, B, C$ be $k$-dagger affinoid algebras such that $B$ and $C$ are $A$-algebras. Let $f: B \to C$ be a bounded morphism of $A$-algebras. $f$ is called \emph{inner with respect to $A$} if there exists a strict epimorphism $\pi: A \langle r_1^{-1} T_1, \dots, r_n^{-1} T_n \gt^\dagger \to B$ such that \[ \rho_C(f(\pi(T_i))) < r_i \] for all $1 \le i \le n$, where $\rho_{C}$ is the spectral semi-norm of $C$. \end{defn} \begin{defn} Let $\phi: \mathcal{M}(A) = X \to \mathcal{M}(B) = Y$ be a morphism of $k$-dagger affinoid spaces. The \emph{relative interior of $\phi$} is the set \[ {\text{\bfseries\sf{Int}}}(X/Y) = \{ x \in X | A \to \mathcal{H}(x) \text{ is inner with respect to $B$} \}.
\] The complement of ${\text{\bfseries\sf{Int}}}(X/Y)$ is called the \emph{relative boundary of $\phi$} and denoted by $\partial(X/Y)$. If $B = k$, the sets ${\text{\bfseries\sf{Int}}}(X/Y)$ and $\partial(X/Y)$ are denoted by ${\text{\bfseries\sf{Int}}}(X)$ and $\partial(X)$ and called the \emph{interior} and the \emph{boundary} of $X$. \end{defn} \begin{defn} \label{defn:stein_algebra} A \emph{dagger Stein algebra} over $k$ is a complete bornological algebra $A$ over $k$ which is isomorphic to an inverse limit of $k$-dagger affinoid algebras \begin{equation} \label{eqn:stein} \cdots \longrightarrow A_4 \longrightarrow A_3 \longrightarrow A_2 \longrightarrow A_1 \longrightarrow A_0 \end{equation} in the category ${\text{\bfseries\sf{CBorn}}}_{k}$ where each morphism is a Weierstrass localization and $\mathcal{M}(A_i)$ is contained in the interior of $\mathcal{M}(A_{i+1})$, for each $i$. The category of dagger Stein algebras is the full sub-category of ${\text{\bfseries\sf{Comm}}}({\text{\bfseries\sf{CBorn}}}_{k})$ identified by dagger Stein algebras over $k$ and it is denoted by ${\text{\bfseries\sf{Stn}}}_k$. \end{defn} \begin{defn} A $k$-dagger analytic space $X$ is called a \emph{dagger Stein space} if it admits a dagger affinoid covering $U_1 \subset U_2 \subset \cdots $ such that \[ X = \bigcup_{i \in \mathbb N} U_i \] and the restriction morphisms $\mathcal{O}_X(U_{i+1}) \to \mathcal{O}_X(U_i)$ are Weierstrass localizations and $U_i$ is contained in the interior of $U_{i+1}$, for each $i \in \mathbb N$. The category of dagger Stein spaces is the full sub-category of the category of $k$-analytic spaces of this form. \end{defn} \begin{example} \label{exa:stein} \begin{enumerate} \item The open polycylinders are the most basic example of Stein spaces, whose exhaustion by Weierstrass subdomains is given by the closed polycylinders of smaller radius. \item Analytifications of affine algebraic varieties are Stein spaces, both for Archimedean and non-Archimedean base fields.
\item Closed subspaces of Stein spaces are Stein spaces (see \cite{KI}). \end{enumerate} \end{example} \begin{lem} \label{lemma:regular_quasi_Stein} Let $A$ be a dagger Stein algebra over $k$. Then, $A$ is a densely defined, pro-multiplicatively convex, bornological Fr\'echet algebra over $k$. \end{lem} {\bf Proof.} Let $\underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} A_i$ be a presentation of $A$ as in the definition of dagger Stein algebra. Let $\tilde{A}_i$ be the completion of $A_i$ with respect to the norm \[ \|f\| = \inf_{B \in \mathcal{D}_{A_i}} |f|_{(A_i)_B}, \] where $|\cdot|_{(A_i)_B}$ denotes the gauge semi-norm, as explained in Remark \ref{rem:gauge}. Notice that $\|\cdot\|$ is a norm because by hypothesis $\mathcal{M}(A_i)$ has non-empty interior, therefore $\|f\| = 0 \Rightarrow f = 0$ (more precisely, since $\mathcal{M}(A_i)$ has non-empty interior one can find an injective bounded morphism from $A_i$ to a Banach algebra, which implies that $A_i$ is separated). The requirement that $\mathcal{M}(A_i)$ lies in the interior of $\mathcal{M}(A_{i+1})$ can be restated by saying that there are (bounded) diagonal morphisms such that the following diagram is commutative \begin{equation} \label{eqn:diag_A_A_tilde} \begin{tikzpicture} \matrix(m)[matrix of math nodes, row sep=2.6em, column sep=2.8em, text height=1.5ex, text depth=0.25ex] { \cdots & A_3 & A_2 & A_1 \\ \cdots & \tilde{A}_3 & \tilde{A}_2 & \tilde{A}_1 & \\}; \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$$} (m-1-2); \path[->,font=\scriptsize] (m-1-2) edge node[auto] {$$} (m-1-3); \path[->,font=\scriptsize] (m-1-3) edge node[auto] {$$} (m-1-4); \path[->,font=\scriptsize] (m-2-1) edge node[auto] {$$} (m-2-2); \path[->,font=\scriptsize] (m-2-2) edge node[auto] {$$} (m-2-3); \path[->,font=\scriptsize] (m-2-3) edge node[auto] {$$} (m-2-4); \path[->,font=\scriptsize] (m-1-2) edge node[auto] {$$} (m-2-2); \path[->,font=\scriptsize] (m-1-3) edge node[auto] {$$} (m-2-3); \path[->,font=\scriptsize] (m-1-4)
edge node[auto] {$$} (m-2-4); \path[->,font=\scriptsize] (m-2-2) edge node[auto] {$$} (m-1-3); \path[->,font=\scriptsize] (m-2-3) edge node[auto] {$$} (m-1-4); \end{tikzpicture} \end{equation} where the vertical maps are the completions. Hence, following the diagonal maps we obtain a projective system \begin{equation} \label{eqn:system_A_A_tilde} \cdots \to A_3 \to \tilde{A}_3 \to A_2 \to \tilde{A}_2 \to A_1 \to \tilde{A}_1, \end{equation} which shows that the sub-system \[ \cdots \to \tilde{A}_3 \to \tilde{A}_2 \to \tilde{A}_1 \] is cofinal in (\ref{eqn:system_A_A_tilde}), and hence \begin{equation} \label{eqn:frechet_arens} A \cong \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} \tilde{A}_i. \end{equation} Then, by Proposition \ref{prop:limpro_commutation_t} we have $\underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} (\tilde{A}_i^t) \cong (\underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} (\tilde{A}_i))^{t}$, because a countable projective limit of Banach spaces in ${\text{\bfseries\sf{Tc}}}_k$ is a Fr\'echet space and hence normal (see Example \ref{example:normal}). So, from the relation \[A= \underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} (\tilde{A}_i) \cong (\underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} (\tilde{A}_i))^{tb} \cong (\underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} (\tilde{A}_i^t))^{b} \] we can deduce that $A$ is a Fr\'echet space in ${\text{\bfseries\sf{Born}}}_k$, in the sense of Definition \ref{defn:frechet}. Moreover, it is easy to see that all the maps of the system (\ref{eqn:frechet_arens}) have dense images because in the diagram (\ref{eqn:diag_A_A_tilde}) all maps but the bottom horizontal ones are clearly epimorphisms, which implies that the bottom horizontal ones are epimorphisms too.
\hfill $\Box$ \begin{rem} \label{rem:strict_affinoid} For non-Archimedean base fields, we could have defined Stein algebras using classical affinoid algebras in place of dagger affinoid algebras (which is what we get in Lemma \ref{lemma:regular_quasi_Stein} with the algebras $\tilde{A}_i$). The cofinality argument of Lemma \ref{lemma:regular_quasi_Stein} shows that the bornological structure we defined on a dagger Stein algebra agrees with the Fr\'echet structures defined classically, for example in \cite{Ber1990}, \cite{Ch}, \cite{Cr} for the non-Archimedean theory and in \cite{GR} for the Archimedean theory. \end{rem} \begin{lem} \label{lemma:stein_nuclear} Let $A$ be a dagger Stein algebra over $k$. Then, the underlying bornological vector space of $A$ is binuclear. \end{lem} {\bf Proof.} By Proposition \ref{prop:dagger_nuclear} the underlying bornological vector spaces of dagger affinoid algebras are nuclear. We can apply Proposition \ref{prop:nuclear_proj} to any system of the form (\ref{eqn:stein}) to deduce that the underlying bornological vector space of $A$ is nuclear. It remains to see that the locally convex space $A^t$ is nuclear. Notice that if we choose a presentation $A \cong \underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} A_i$, we have that $A^t \cong \underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} (A_i^t)$, as a consequence of Proposition \ref{prop:limpro_commutation_t}, and by Lemma \ref{lem:LB_nuclear} the $A_i^t$ are nuclear locally convex spaces. This implies that $A^t$ is nuclear because projective limits of nuclear locally convex spaces are always nuclear (for a proof of this fact we can refer to Proposition 3.2 at page 23 of \cite{DV} in the case in which $k$ is Archimedean, and for the non-Archimedean case to Theorem 8.5.7 of \cite{PGS}). \hfill $\Box$ \begin{lem} \label{lemma:locally_compact_spectrum} Let $A$ be a dagger Stein algebra over $k$.
Then, $\mathcal{M}(A)$ is a dagger Stein space over $k$. \end{lem} {\bf Proof.} The spectrum $\mathcal{M}(A)$ is a hemi-compact topological space because, by Lemma \ref{lemma:regular_quasi_Stein}, we can apply Proposition \ref{prop:frechet_spectrum} to deduce that $\mathcal{M}(A) = \underset{i \in \mathbb N}\bigcup \mathcal{M}(A_i)$ (topologically), for a presentation $A \cong \underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} A_i$ (because, in the notation of Lemma \ref{lemma:regular_quasi_Stein}, $\mathcal{M}(A_i) \cong \mathcal{M}(\tilde{A}_i)$). The condition of $\mathcal{M}(A_i)$ being contained in the Berkovich interior of $\mathcal{M}(A_{i+1})$ readily implies that $\{\mathcal{M}(A_i) \}_{i \in \mathbb N}$ is a cofinal family of compact subsets of $\mathcal{M}(A)$ (cf. Exercise 3.8.C (b) of \cite{Eng}). This family is therefore a Berkovich net on $\mathcal{M}(A)$ which induces a structure of $k$-dagger analytic space. It is straightforward to check that in this way $\mathcal{M}(A)$ is endowed with a structure of dagger Stein space because by the definition of dagger Stein algebra the morphisms $A_{i + 1} \to A_i$ are Weierstrass localizations and $\mathcal{M}(A_i)$ are mapped into ${\text{\bfseries\sf{Int}}}(\mathcal{M}(A_{i + 1}))$ by these localizations. \hfill $\Box$ \begin{lem} \label{lemma:Stein_algebra_strict} Let $A$ be a dagger Stein algebra over $k$. Then, we can always find a presentation \[ A \cong \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} A_i \] as in Definition \ref{defn:stein_algebra} such that $A_i$ are strictly dagger affinoid algebras. \end{lem} {\bf Proof.} Let $A \cong \underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} A_i$ be a presentation of $A$ as in the definition of dagger Stein algebra. It is enough to show that each localization $A_{i + 1} \to A_i$ factors through Weierstrass localizations $A_{i + 1} \to B_i \to A_i$ where $B_i$ are strictly dagger affinoid. 
It is easy to check that Lemma 2.5.11 of \cite{Ber1990} holds also for dagger affinoid algebras. Therefore, for any $0 < \epsilon < 1$ we can find a $\rho = (\rho_i) \in \mathbb R_+^n$ and a strict epimorphism \[ \pi: k \langle \rho_1^{-1} X_1, \ldots, \rho_n^{-1} X_n \gt^\dagger \to A_{i + 1} \] such that the image of $\mathcal{M}(A_i)$ is contained in the image of $\mathcal{M}(A_{i + 1} \langle (\epsilon \rho_1)^{-1} \pi(X_1), \ldots, (\epsilon \rho_n)^{-1} \pi(X_n) \gt^\dagger)$ in $\mathcal{M}(A_{i + 1})$. Since the diagram \[ \begin{tikzpicture} \matrix(m)[matrix of math nodes, row sep=2.6em, column sep=2.8em, text height=1.5ex, text depth=0.25ex] { k \langle \rho_1^{-1} X_1, \ldots, \rho_n^{-1} X_n \gt^\dagger & A_{i + 1} \\ k \langle (\epsilon \rho_1)^{-1} X_1, \ldots, (\epsilon \rho_n)^{-1} X_n \gt^\dagger & A_{i + 1} \langle (\epsilon \rho_1)^{-1} \pi(X_1), \ldots, (\epsilon \rho_n)^{-1} \pi(X_n) \gt^\dagger \\}; \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$\pi$} (m-1-2); \path[->,font=\scriptsize] (m-2-1) edge node[auto] {$$} (m-2-2); \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$$} (m-2-1); \path[->,font=\scriptsize] (m-1-2) edge node[auto] {$\pi'$} (m-2-2); \end{tikzpicture} \] is a push-out square, we see that the morphism $\pi'$ is a strict epimorphism, because tensoring preserves strict epimorphisms. Therefore, choosing $(\epsilon \rho_i)^{-1} \in \sqrt{|k^\times|}$ for all $i$, which is always a possible choice because $\sqrt{|k^\times|}$ is dense in $\mathbb R_+$, we see that $A_{i + 1} \langle (\epsilon \rho_1)^{-1} \pi(X_1), \ldots, (\epsilon \rho_n)^{-1} \pi(X_n) \gt^\dagger$ can always be chosen to be a strictly dagger affinoid algebra, and this is our choice for $B_i$. Moreover, the morphism $A_{i + 1} \to B_i$ is by construction a Weierstrass subdomain embedding and $A_{i + 1} \to A_i$ is by hypothesis a Weierstrass subdomain embedding, which immediately implies that $B_i \to A_i$ is a Weierstrass subdomain embedding, concluding the proof.
\hfill $\Box$ \begin{rem} In Lemma \ref{lemma:Stein_algebra_strict} we only discussed how to find a presentation of a dagger Stein algebra as an inverse limit of strictly dagger affinoid algebras, but the same argument clearly works in the classical affinoid case. Notice that this lemma crucially depends on the hypothesis that $k$ is non-trivially valued. \end{rem} \begin{lem} \label{lemma:quasi_Stein_algebra_closed_ideals} Let $A$ be a dagger Stein algebra over $k$ and $\mathfrak{m} \subset A$ a finitely generated maximal ideal. Then, $\mathfrak{m}$ is bornologically closed in $A$. \end{lem} {\bf Proof.} Let $A \cong \underset{i \in \mathbb{N}}\mathop{\lim\limits_{\displaystyle\leftarrow}} A_i$ be a presentation of $A$ as given by Lemma \ref{lemma:Stein_algebra_strict}. Consider the extensions of $\mathfrak{m} \subset A$ to $A_i$, denoted $\mathfrak{m} A_i$, for every $i \in \mathbb N$. There are two possible cases to consider: either $\mathfrak{m} A_i$ is a proper ideal of $A_i$, for some $i$, or $\mathfrak{m} A_i = A_i$ for all $i$. In the first case, since $\mathfrak{m}$ is maximal in $A$, the ideal $\pi_i^{-1}(\mathfrak{m} A_i)$ must be a proper ideal containing $\mathfrak{m}$, and therefore must be equal to $\mathfrak{m}$. Since $\mathfrak{m} A_i$ is bornologically closed in $A_i$ (cf. Theorem 4.9 (2) of \cite{BaBe}), $\pi_i$ is a bounded map and the pre-image of a bornologically closed set under a bounded map is bornologically closed, it follows that $\mathfrak{m}$ is bornologically closed in $A$. On the other hand, suppose that $\mathfrak{m} A_i = A_i$ for every $i$, and let $f_1, \ldots, f_n$ denote a set of generators of $\mathfrak{m}$. Since $\mathfrak{m} A_i = A_i$, the elements $\pi_i(f_1), \ldots, \pi_i(f_n)$ have no common zeros in $\mathcal{M}(A_i)$. Because this is true for any $i$ and $\mathcal{M}(A) = \underset{i \in \mathbb{N}}\bigcup \mathcal{M}(A_i)$, we can deduce that $f_1, \ldots, f_n$ have no common zeros in $\mathcal{M}(A)$. Using Theorem A for dagger Stein spaces (cf.
Theorem 3.2 of \cite{GK} for the case in which $k$ is non-Archimedean; for $k$ Archimedean the classical theorem of Cartan applies) and reasoning in the same way as Theorem V.5.4 and Theorem V.5.5 at page 161 of \cite{GR}, we can deduce that the condition that $f_1, \ldots, f_n$ have no common zeros implies that there exist $g_1, \ldots, g_n \in A$ such that $1 = \underset{i = 1}{\overset{n}\sum} f_i g_i$ and hence $\mathfrak{m} = A$. But this is impossible because by hypothesis $\mathfrak{m}$ is a proper ideal of $A$. This contradiction shows that there must exist an $i$ such that $\mathfrak{m} A_i \ne A_i$. \hfill $\Box$ \begin{rem} The second part of the proof of Lemma \ref{lemma:quasi_Stein_algebra_closed_ideals} works only for finitely generated ideals. Indeed, dagger Stein algebras can have non-finitely generated maximal ideals. It follows that such ideals are necessarily non-closed and bornologically dense subsets. \end{rem} \begin{rem} \label{rem:immersion_A_i_A} Notice that the morphism of Grothendieck locally ringed spaces $\mathcal{M}(A_i) \to \mathcal{M}(A)$, induced from the projections $\pi_i: A \to A_i$ discussed so far, is an open immersion (in the sense of Definition 4.18 of \cite{Bam}), when one considers the Berkovich G-topology of analytic domains (cf. chapter 6 of \cite{Bam} or chapter 1 of \cite{Ber1993} for details about this G-topology). This is because we endow $\mathcal{M}(A)$ with the structure of $k$-dagger analytic space given by the Berkovich net $\underset{i \in \mathbb N}\bigcup \mathcal{M}(A_i)$. Therefore, each $\mathcal{M}(A_i)$ is an analytic domain in $\mathcal{M}(A)$ and $\mathcal{M}(A_i)$ is identified with its image in $\mathcal{M}(A)$.
\end{rem} \begin{lem}\label{lem:FactorizationQSt} Any morphism of $k$-dagger Stein algebras comes from a morphism in ${\text{\bfseries\sf{Pro}}}({\text{\bfseries\sf{Afnd}}}^{\dagger}_{k})$ by applying the functor \[\mathop{\lim\limits_{\displaystyle\leftarrow}} :{\text{\bfseries\sf{Pro}}}({\text{\bfseries\sf{Afnd}}}^{\dagger}_{k}) \to {\text{\bfseries\sf{Comm}}}({\text{\bfseries\sf{Born}}}_{k}).\] \end{lem} {\bf Proof.} Let $f: A \to B$ be a morphism of dagger Stein algebras and let $A \cong \underset{i \in \mathbb{N}}\mathop{\lim\limits_{\displaystyle\leftarrow}} A_i$, $B \cong \underset{j \in \mathbb{N}}\mathop{\lim\limits_{\displaystyle\leftarrow}} B_j$ be two fixed presentations as in the definition of dagger Stein algebra. Let $\mathcal{M}(f): \mathcal{M}(B) \to \mathcal{M}(A)$ denote the morphism of dagger Stein spaces induced by $f$. Since the family $\{ \mathcal{M}(A_i)\}_{i \in \mathbb N}$ is cofinal in the family of compact subsets of $\mathcal{M}(A)$, every morphism $\mathcal{M}(B_i) \to \mathcal{M}(A)$ must factor through some $\mathcal{M}(A_i)$, because the image of $\mathcal{M}(B_i)$ in $\mathcal{M}(A)$ is compact. Therefore, we get morphisms of $k$-dagger analytic spaces $\mathcal{M}(f_i): \mathcal{M}(B_i) \to \mathcal{M}(A_i)$ (possibly by re-indexing the systems in a suitable way). These morphisms are compatible with $\mathcal{M}(f): \mathcal{M}(B) \to \mathcal{M}(A)$ and satisfy $\underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\rightarrow}} \mathcal{M}(f_i) = \mathcal{M}(f)$. By Remark 4.16 of \cite{BaBe}, there exist maps $f_i: A_i \to B_i$ of $k$-dagger affinoid algebras induced by the morphisms of $k$-dagger affinoid spaces $\mathcal{M}(f_i)$ and for these maps we get $\underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} f_i = f$, as required.
\ \hfill $\Box$ \begin{thm} \label{prop:quasi_Stein_algebra_spaces} The category of dagger Stein algebras over $k$ is anti-equivalent to the category of dagger Stein spaces over $k$. Moreover, the forgetful functor from the category of dagger Stein algebras over $k$ to ${\text{\bfseries\sf{Comm}}}({\text{\bfseries\sf{Vect}}}_{k})$ is fully faithful. \end{thm} {\bf Proof.} It is clear that to any dagger Stein algebra one can associate a dagger Stein space and vice versa functorially, as explained so far (cf. Lemma \ref{lemma:locally_compact_spectrum}). So, we check the claim about the boundedness of every algebra morphism between dagger Stein algebras. Fix two dagger Stein algebras with presentations $A \cong \underset{i \in \mathbb N} \mathop{\lim\limits_{\displaystyle\leftarrow}} A_i$, $B \cong \underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} B_i$, and let $\phi: A \to B$ be a morphism of their underlying objects in ${\text{\bfseries\sf{Comm}}}({\text{\bfseries\sf{Vect}}}_{k})$. We also suppose that all $A_i$ and $B_i$ are strictly dagger affinoid algebras, which is always a possible choice (cf. Lemma \ref{lemma:Stein_algebra_strict}). The data of the morphism $\phi$ is equivalent to a system of morphisms $\phi_i: A \to B_i$ in ${\text{\bfseries\sf{Comm}}}({\text{\bfseries\sf{Vect}}}_{k})$ with the obvious commutation relations with the system morphisms of the presentation of $B$ we fixed. To show that $\phi$ is bounded, it is enough to show that each $\phi_i$ is bounded. Let $\mathfrak{m} \subset B_i$ be a maximal ideal. Then, $\mathfrak{m}$ is finitely generated and $\frac{B_i}{\mathfrak{m}}$ is a finite valued extension of $k$, because $B_i$ is strictly dagger affinoid.
Then, the composition of maps \[ A \to B_i \to \frac{B_i}{\mathfrak{m}} \] identifies $\frac{A}{\phi_i^{-1}(\mathfrak{m})}$ with a sub-ring of $\frac{B_i}{\mathfrak{m}}$ so \[k \subset \frac{A}{\phi_i^{-1}(\mathfrak{m})} \subset \frac{B_i}{\mathfrak{m}}.\] This implies of course that $\frac{A}{\phi_i^{-1}(\mathfrak{m})}$ is a field. Therefore $\phi_i^{-1}(\mathfrak{m})$ is a maximal, finitely generated ideal of $A$, which by Lemma \ref{lemma:quasi_Stein_algebra_closed_ideals} must be closed. It follows that the quotient bornology of $\frac{A}{\phi_i^{-1}(\mathfrak{m})}$ is separated, and hence complete; since $\frac{A}{\phi_i^{-1}(\mathfrak{m})}$ as a vector space over $k$ is finite dimensional, its bornology is necessarily isomorphic to the product bornology of finitely many copies of $k$. Lemma \ref{lemma:locally_compact_spectrum} yields \[ \mathcal{M}(A) = \bigcup_{i\in \mathbb{N}} \mathcal{M}(A_i) \] which readily implies \[ {\rm Max\,}(A) = \bigcup_{i\in \mathbb{N}} {\rm Max\,}(A_i), \] where on the left-hand side only finitely generated maximal ideals are considered. So, there exists a $j_0$ such that, for any $j \ge j_0$, $\phi_i^{-1}(\mathfrak{m})A_j$ is a maximal ideal of $A_j$. It follows that there is a canonical isomorphism \[ \frac{A}{\phi_i^{-1}(\mathfrak{m})} \cong \frac{A_j}{\phi_i^{-1}(\mathfrak{m})A_j}. \] Now consider $\mathfrak{m}^n$, for any $n \in \mathbb N$. By elementary commutative algebra one can see that $\phi_i^{-1}(\mathfrak{m})^n \subset \phi_i^{-1}(\mathfrak{m}^n)$, which implies that there exist canonical quotient maps \[ A \to \frac{A}{\phi_i^{-1}(\mathfrak{m})^n} \to \frac{A}{\phi_i^{-1}(\mathfrak{m}^n)}. \] Since $\mathcal{M}(A_j) {\hookrightarrow} \mathcal{M}(A)$ is an immersion (\emph{i.e. } induces isomorphisms on stalks), using the same argument as Proposition 7.2.2/1 of \cite{BGR} we can deduce that \[ \frac{A}{\phi_i^{-1}(\mathfrak{m})^n} \cong \frac{A_j}{\phi_i^{-1}(\mathfrak{m})^n A_j}, \ \ \forall n \in \mathbb N.
\] Notice also that $\frac{A_j}{\phi_i^{-1}(\mathfrak{m})^n A_j}$ is a finite dimensional $k$-Banach algebra. Therefore, $\frac{A}{\phi_i^{-1}(\mathfrak{m}^n)}$ is a quotient of a finite dimensional $k$-Banach algebra, and hence is itself a $k$-Banach algebra. This shows that $\phi_i^{-1}(\mathfrak{m}^n)$ is closed in $A$, for any maximal ideal of $B_i$ and any $n \in \mathbb N$. We have shown that the family of all powers of maximal ideals of $B_i$ satisfies the hypothesis of Lemma \ref{lem:closed_graph} (which can be applied to $\phi_i$ thanks to Lemma \ref{lem:webbed}), proving that $\phi_i$, and hence $\phi$, are bounded maps. To conclude the proof it remains to show that every morphism of dagger Stein spaces is induced by a morphism of dagger Stein algebras. A morphism $X \to Y$ of dagger Stein spaces is by definition a morphism of locally ringed Grothendieck topological spaces between $X = \underset{i \in \mathbb{N}}\bigcup \mathcal{M}(B_i)$ and $Y= \underset{i\in \mathbb{N}}\bigcup \mathcal{M}(A_i)$, when they are equipped with their maximal Berkovich nets generated by the nets $\{\mathcal{M}(B_i)\}_{i \in \mathbb N}$ and $\{ \mathcal{M}(A_i) \}_{i \in \mathbb N}$, respectively (cf. chapter 6 of \cite{Bam}). All the elements of the maximal nets which give the structure of $k$-analytic spaces to $X$ and $Y$ must be compact subsets of $X$ and $Y$ respectively, and hence, by the fact that $X$ and $Y$ are hemi-compact, all the elements of the maximal nets must be contained in some $\mathcal{M}(A_i)$ or $\mathcal{M}(B_i)$, respectively, as explained in the proof of Lemma \ref{lemma:locally_compact_spectrum}. Therefore, every morphism of $k$-dagger analytic spaces must come from a morphism of systems $i \mapsto (A_{i} \to B_{i})$, of the form we described in Lemma \ref{lem:FactorizationQSt}. This shows that to give a morphism between dagger Stein algebras is the same as to give a morphism of dagger Stein spaces and vice versa.
\ \hfill $\Box$ The following corollary is our version of Forster's classical theorem. A similar statement holds for the category of affinoid algebras or in the dagger affinoid setting as discussed in Remark 4.18 of \cite{BaBe}. \begin{cor} \label{cor:forster} The functor defined by $\mathcal{M}$ from the opposite category of $k$-dagger Stein algebras to the category of locally ringed Grothendieck topological spaces is fully faithful. Therefore, we can identify the category opposite to $k$-dagger Stein algebras with a full sub-category of the category of locally ringed Grothendieck topological spaces. \end{cor} Thanks to Theorem \ref{prop:quasi_Stein_algebra_spaces} we can give the following definition. \begin{defn} \label{defn:stein_localization} A morphism $A \to B$ between dagger Stein algebras is called a \emph{localization} if it can be written as a projective limit of dagger localizations. \end{defn} \begin{defn} \label{defn:stein_open_immersion} A morphism $f: X \to Y$ between dagger Stein spaces is called an \emph{open immersion} if it is a homeomorphism of $X$ with $f(X)$ and for each $x \in X$ the induced morphism of stalks $\mathcal{O}_{Y, f(x)} \to \mathcal{O}_{X, x}$ is an isomorphism. \end{defn} \begin{rem} Using Theorem \ref{prop:quasi_Stein_algebra_spaces} and Proposition 4.22 of \cite{BaBe}, one can see that $A \to B$ is a localization of Stein algebras if and only if the associated morphism of dagger Stein spaces $\mathcal{M}(B) \to \mathcal{M}(A)$ is an open immersion. \end{rem} \section{Homological characterization of the Stein topology} \label{sec:Stein_geometry} We now prove the last lemmata needed to deduce the main results of this work. \begin{lem} \label{lemma:resolution} Let $A$ be an object of ${\text{\bfseries\sf{Comm}}}({\text{\bfseries\sf{CBorn}}}_{k})$ and $\mathscr{L}_{A}^{\bullet}(E)$ be the Bar resolution (see Definition \ref{defn:Bar}) of an object $E\in {\text{\bfseries\sf{Mod}}}(A)$.
If both $A$ and $E$ are nuclear, as objects of ${\text{\bfseries\sf{CBorn}}}_{k}$, then $\mathscr{L}_{A}^{\bullet}(E)$ is isomorphic to $E$ in $D^{\leq 0}(A)$. \end{lem} {\bf Proof.} It is enough to check that we can apply Lemma \ref{lemma:flat_res}. This is possible because, by Theorem \ref{thm:strct_exact_nuclear}, nuclear objects of ${\text{\bfseries\sf{CBorn}}}_{k}$ are flat for $\widehat{\otimes}_k$. \ \hfill $\Box$ Let $A \in {\text{\bfseries\sf{Comm}}}({\text{\bfseries\sf{CBorn}}}_k)$ and $E, F \in {\text{\bfseries\sf{Mod}}}(A)$. In Definition \ref{defn:Bar} we introduced the Bar resolution of $F$ as the complex $\cdots \to \mathscr{L}_{A}^{2}(F) \to \mathscr{L}_{A}^{1}(F) \to \mathscr{L}_{A}^0(F) \to F \to 0$, where \[ \mathscr{L}_{A}^{n}(F) = A \widehat{\otimes}_{k} (A^{\widehat{\otimes}^{n}_{k}} \widehat{\otimes}_{k} F) = A \widehat{\otimes}_k (\underbrace{A \widehat{\otimes}_k \cdots \widehat{\otimes}_k A}_{n \text{ times}} \widehat{\otimes}_k F) \] with differentials given in equation (\ref{eqn:diff_bar}). Applying to this complex the functor $E \widehat{\otimes}_A (-)$ one obtains the complex \[ \cdots \to E \widehat{\otimes}_A \mathscr{L}_{A}^{2}(F) \to E \widehat{\otimes}_A \mathscr{L}_{A}^{1}(F) \to E \widehat{\otimes}_A \mathscr{L}_{A}^{0}(F) \to E \widehat{\otimes}_A F \to 0 \] which is a representative of $E \widehat{\otimes}_A^\mathbb L F$ in $D^{\le 0}(A)$. \begin{rem} \label{rmk:bar_complex} The terms of this complex can be written in a simplified form as follows \[ E \widehat{\otimes}_A \mathscr{L}_{A}^{n}(F) = E \widehat{\otimes}_A A \widehat{\otimes}_k (A^{\widehat{\otimes}^{n}_{k}} \widehat{\otimes}_k F) \cong E \widehat{\otimes}_k A^{\widehat{\otimes}^{n}_{k}} \widehat{\otimes}_k F. \] \end{rem} Therefore, we will introduce the notation \begin{equation} \label{eqn:bar_notation} \mathscr{L}_{A}^{n}(E, F) = E \widehat{\otimes}_k A^{\widehat{\otimes}^{n}_{k}} \widehat{\otimes}_k F.
\end{equation} \begin{lem} \label{lemma:new} Let $A$, $B$ be dagger Stein algebras and let $X = \mathcal{M}(B)$, $Y = \mathcal{M}(A)$ be the corresponding dagger Stein spaces. Let $f: A \to B$ be a morphism and $\mathcal{M}(f): X = \mathcal{M}(B) \to Y = \mathcal{M}(A)$ the corresponding morphism of dagger Stein spaces. Let also $C$ be another dagger Stein algebra and $g:A \to C$ an arbitrary morphism in ${\text{\bfseries\sf{Comm}}}({\text{\bfseries\sf{CBorn}}}_{k})$. Then, if $\mathcal{M}(f)$ is an open immersion there exist projective systems of Banach algebras $\{ A_i \}_{i \in \mathbb N}$, $\{ B_i \}_{i \in \mathbb N}$ and $\{ C_i \}_{i \in \mathbb N}$ and morphisms of systems $\{ f_i: A_i \to B_i \}_{i \in \mathbb N}$, $\{ g_i:A_i \to C_i \}_{i \in \mathbb N}$ such that \begin{enumerate} \item $\underset{i \in \mathbb N}\mathop{\lim\limits_{\displaystyle\leftarrow}} A_i \cong A$, $\underset{i \in \mathbb N} \mathop{\lim\limits_{\displaystyle\leftarrow}} B_i \cong B$, $\underset{i \in \mathbb N} \mathop{\lim\limits_{\displaystyle\leftarrow}} C_i \cong C$ and the projection maps $A \to A_i$, $B \to B_i$, $C \to C_i$ have dense images; \item $\mathop{\lim\limits_{\displaystyle\leftarrow}} f_i = f$, $\mathop{\lim\limits_{\displaystyle\leftarrow}} g_i = g$; \item for every $i \in \mathbb N$ there is an isomorphism \begin{equation} \label{eqn:der_iso} \mathscr{L}_{A_i}^{\bullet}(C_i, B_i) \to C_i \widehat{\otimes}_{A_i} B_i \end{equation} of objects of $D^{\le 0}(B_i)$. \end{enumerate} \end{lem} {\bf Proof.} By Lemma \ref{lem:FactorizationQSt} every morphism of Stein algebras can be represented as a system morphism of dagger affinoid algebras.
So, let \[ X = \bigcup_{i \in \mathbb N} U_i, \ \ Y = \bigcup_{i \in \mathbb N} V_i, \ \ Z = \mathcal{M}(C) = \bigcup_{i \in \mathbb N} W_i \] be representations of $X, Y$ and $Z$, with $U_i = \mathcal{M}(B_i'), V_i = \mathcal{M}(A_i')$ and $W_i= \mathcal{M}(C_i')$ where $A'_i$, $B_i'$ and $C_i'$ are dagger affinoid algebras and the systems are chosen in a way that both morphisms $f$ and $A \to C$ have the same index set (for finite diagrams without loops this re-indexing can always be performed; for more details see Proposition 3.3 of the Appendix of \cite{ArMaz}). Moreover, using Lemma \ref{lemma:Stein_algebra_strict} we can suppose that all $A'_i$, $B_i'$ and $C_i'$ are strictly dagger affinoid algebras. Consider the morphisms $f_i': A_i' \to B_i'$ and $g_i': A_i' \to C_i'$, induced by $f$ and $g: A \to C$ respectively. It is easy to check that $\mathcal{M}(f_i')$ are open immersions of dagger affinoid spaces. Applying Proposition 4.22 of \cite{BaBe} we can deduce that $f_i'$ are dagger affinoid localizations. Applying Theorem 5.7 of \cite{BaBe}, we obtain a strict isomorphism of complexes of $B_i'$-modules \begin{equation} \label{eqn:iso_hom_epi} C_i' \widehat{\otimes}_{A_i'}^\mathbb L B_i' \cong C_i' \widehat{\otimes}_{A_i'} B_i'. \end{equation} The object $C_i' \widehat{\otimes}_{A_i'}^\mathbb L B_i'$ can be represented by the complex $\mathscr{L}_{A_i'}^{\bullet}(C_i', B_i')$: \begin{equation} \label{eqn:iso_hom_epi2} \cdots \to \mathscr{L}_{A_i'}^{2}(C_i', B_i') \to \mathscr{L}_{A_i'}^{1}(C_i', B_i') \to \mathscr{L}_{A_i'}^{0}(C_i', B_i') \end{equation} using the notation of equation (\ref{eqn:bar_notation}). It is clear that the explicit description of the differentials of the Bar resolution given in equation (\ref{eqn:diff_bar}) implies that the Bar resolution of $B_i'$ admits a uniform indexing as a diagram of ind-Banach modules, once one fixes a representation of $B_i'$ as an ind-Banach object.
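For the reader's orientation we recall that, with the standard conventions for the two-sided Bar complex (the signs here are only indicative and may differ from those fixed in equation (\ref{eqn:diff_bar})), the differentials act on elementary tensors as \[ d^n(c \otimes a_1 \otimes \cdots \otimes a_n \otimes b) = c a_1 \otimes a_2 \otimes \cdots \otimes a_n \otimes b + \sum_{l=1}^{n-1} (-1)^{l}\, c \otimes a_1 \otimes \cdots \otimes a_l a_{l+1} \otimes \cdots \otimes a_n \otimes b + (-1)^{n}\, c \otimes a_1 \otimes \cdots \otimes a_{n-1} \otimes a_n b, \] so each differential is built only out of the multiplication of $A_i'$ and its module actions on $C_i'$ and $B_i'$, and is therefore given by the same formula at every level of the inductive systems.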
Moreover, by Theorem 4.9 (4) of \cite{BaBe} morphisms of dagger affinoid algebras can always be written as morphisms of inductive systems of Banach algebras. Therefore, we can find representations of $A_i' \cong \underset{\rho > 1}\mathop{\lim\limits_{\displaystyle\rightarrow}} (A_\rho'')_i$ and $B_i' \cong \underset{\rho > 1}\mathop{\lim\limits_{\displaystyle\rightarrow}} (B_\rho'')_i$, $C_i' \cong \underset{\rho > 1}\mathop{\lim\limits_{\displaystyle\rightarrow}} (C_\rho'')_i$ such that the isomorphism (\ref{eqn:iso_hom_epi}) lifts to an isomorphism of complexes of Banach modules for any $\rho > 1$ small enough. Indeed, to choose such a $\rho$ we can proceed as follows. Let $\rho' > 1$ be small enough such that $f_i': A_i' \to B_i'$ is representable as a morphism of inductive systems $(A''_{\rho'})_i \to (B''_{\rho'})_i$ and let $\rho'' > 1$ be such that $g_i': A_i' \to C_i'$ can be represented as a morphism of inductive systems $(A''_{\rho''})_i \to (C''_{\rho''})_i$. Then, any $\rho$ such that $1 <\rho < \min \{ \rho', \rho''\}$ is a suitable choice. In fact, all the maps of the complex of equation (\ref{eqn:iso_hom_epi2}) can be written as morphisms of inductive systems for such small $\rho$, because these maps are obtained from the differentials of the Bar resolution by tensoring $(B_\rho'')_i$ (which becomes an $(A_\rho'')_i$-module) over $(A_\rho'')_i$ with $(C_\rho'')_i$ (which becomes an $(A_\rho'')_i$-module through the map $(A''_{\rho})_i \to (C''_{\rho})_i$). Therefore, we have a quasi-isomorphism of complexes of Banach modules \begin{equation} \label{eqn:iso_hom_epi_ban} [\cdots \to \mathscr{L}_{(A''_\rho)_i }^{2}((C''_\rho)_i , (B''_\rho)_i ) \to \mathscr{L}_{(A''_\rho)_i }^{1}((C''_\rho)_i , (B''_\rho)_i ) \to \mathscr{L}_{(A''_\rho)_i }^{0}((C''_\rho)_i , (B''_\rho)_i ) ]\to (C''_\rho)_i \widehat{\otimes}_{(A''_\rho)_i } (B''_\rho)_i \end{equation} for $\rho$ small enough.
Moreover, by the fact that the dagger subdomain embeddings \[ U_i \subset U_{i + 1}, \ \ V_i \subset V_{i + 1}, \ \ W_i \subset W_{i + 1} \] are inner for all $i \in \mathbb N$, we can find $\rho_U > 1$, $\rho_V > 1$, $\rho_W > 1$ small enough such that \[ \mathcal{M}((B_{\rho_U}'')_i) \subset U_{i + 1}, \ \mathcal{M}((A_{\rho_V}'')_i) \subset V_{i + 1}, \mathcal{M}((C_{\rho_W}'')_i) \subset W_{i + 1}. \] Therefore, fixing any $\rho > 1$ such that $ \min \{ \rho_U, \rho_V, \rho_W, \rho', \rho'' \} > \rho > 1$, we set for each $i$ \[ A_i = (A_\rho'')_i, \ \ B_i = (B_\rho'')_i, \ \ C_i = (C_\rho'')_i. \] The properties (1), (2) and (3) are satisfied by the system so obtained, as direct consequences of the choices we made. \hfill $\Box$ \begin{rem} Notice that the isomorphism (\ref{eqn:der_iso}) does not imply that $C_i \widehat{\otimes}_{A_i}^\mathbb L B_i \to C_i \widehat{\otimes}_{A_i} B_i$ is an isomorphism in general, because for Archimedean base fields Banach spaces might not be flat in ${\text{\bfseries\sf{CBorn}}}_k$, and therefore the Bar resolution is not a flat resolution. \end{rem} \begin{thm} \label{thm_stein_homotopy} Let $i: U \to V$ be an open immersion of dagger Stein spaces corresponding to a morphism $A_V\to B_U$ of dagger Stein algebras over $k$. For any dagger Stein $A_V$-algebra $C_W$ the morphism \[ C_W \widehat{\otimes}_{A_V}^\mathbb L B_U \to C_W \widehat{\otimes}_{A_V} B_U \] is an isomorphism in $D^{\le 0}(B_U)$. Also (in the case that $C_W = B_U$) $A_{V} \to B_{U}$ is a homotopy epimorphism.
\end{thm} {\bf Proof.} Lemma \ref{lemma:new}, applied to $A_V, B_U, C_W$ and the morphism $i: U \to V$ yields three systems of Banach algebras such that \[ \underset{i \in \mathbb N} \mathop{\lim\limits_{\displaystyle\leftarrow}} A_i \cong A_V, \ \ \underset{i \in \mathbb N} \mathop{\lim\limits_{\displaystyle\leftarrow}} B_i \cong B_U, \ \ \underset{i \in \mathbb N} \mathop{\lim\limits_{\displaystyle\leftarrow}} C_i \cong C_W \] and such that the morphisms opposite to $i$ and the morphism $A_V \to C_W$ can be written as morphisms of these projective systems. Moreover, for each $i \in \mathbb N$, Lemma \ref{lemma:new} also yields a strictly exact complex \[ \cdots \to \mathscr{L}_{A_i}^{2}(C_i, B_i) \to \mathscr{L}_{A_i}^{1}(C_i, B_i) \to \mathscr{L}_{A_i}^{0}(C_i, B_i) \to C_i \widehat{\otimes}_{A_i} B_i \to 0. \] The following commutative diagram \[ \begin{tikzpicture} \matrix(m)[matrix of math nodes, row sep=2.6em, column sep=2.8em, text height=1.5ex, text depth=0.25ex] { B_j \widehat{\otimes}_k C_j & B_i \widehat{\otimes}_k C_i \\ B_j \widehat{\otimes}_{A_j} C_j & B_i \widehat{\otimes}_{A_i} C_i \\}; \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$$} (m-1-2); \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$$} (m-2-1); \path[->,font=\scriptsize] (m-1-2) edge node[auto] {$$} (m-2-2); \path[->,font=\scriptsize] (m-2-1) edge node[auto] {$$} (m-2-2); \end{tikzpicture} \] shows that the bottom horizontal map is an epimorphism because the rest of the arrows are. Continuing to tensor we get an epimorphism $\mathscr{L}_{A_j}^{n}(C_j, B_j) \to \mathscr{L}_{A_i}^{n}(C_i, B_i)$ for any $n$.
It is easy to check that these morphisms commute with the differentials, giving a projective system of morphisms of complexes \begin{equation} \label{eqn:system_complex} \begin{tikzpicture} \matrix(m)[matrix of math nodes, row sep=2.6em, column sep=2.8em, text height=1.5ex, text depth=0.25ex] { \cdots &\mathscr{L}_{A_j}^{1}(C_j, B_j) & \mathscr{L}_{A_j}^{0}(C_j, B_j) & C_j \widehat{\otimes}_{A_j} B_j & 0 \\ \cdots &\mathscr{L}_{A_i}^{1}(C_i, B_i) & \mathscr{L}_{A_i}^{0}(C_i, B_i) & C_i \widehat{\otimes}_{A_i} B_i & 0 \\}; \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$d_j^2$} (m-1-2); \path[->,font=\scriptsize] (m-1-2) edge node[auto] {$d_j^1$} (m-1-3); \path[->,font=\scriptsize] (m-1-3) edge node[auto] {$d_j^0$} (m-1-4); \path[->,font=\scriptsize] (m-1-4) edge node[auto] {$$} (m-1-5); \path[->,font=\scriptsize] (m-1-2) edge node[auto] {$$} (m-2-2); \path[->,font=\scriptsize] (m-1-3) edge node[auto] {$$} (m-2-3); \path[->,font=\scriptsize] (m-1-4) edge node[auto] {$$} (m-2-4); \path[->,font=\scriptsize] (m-2-1) edge node[auto] {$d_i^2$} (m-2-2); \path[->,font=\scriptsize] (m-2-2) edge node[auto] {$d_i^1$} (m-2-3); \path[->,font=\scriptsize] (m-2-3) edge node[auto] {$d_i^0$} (m-2-4); \path[->,font=\scriptsize] (m-2-4) edge node[auto] {$$} (m-2-5); \end{tikzpicture} \end{equation} whose limit we denote by \begin{equation} \label{eqn:limit_complex} \cdots \longrightarrow \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} (C_{i}\widehat{\otimes}_{k} A_{i} \widehat{\otimes}_{k} B_i) \longrightarrow \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} (C_{i}\widehat{\otimes}_{k} B_i) \longrightarrow \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} (C_{i}\widehat{\otimes}_{A_i} B_i) \longrightarrow 0.
\end{equation} Using the isomorphisms $\coim(d_i^{n+1}) \cong \ker (d_i^n)$ and $\coim(d_j^{n+1}) \cong \ker(d_j^n)$ the commutative squares \begin{equation} \xymatrix{ \mathscr{L}_{A_j}^{n}(C_j, B_j)\ar[r] \ar[d] & \ker(d_j^{n-1}) \ar[d] \\ \mathscr{L}_{A_i}^{n}(C_i, B_i)\ar[r] & \ker(d_i^{n-1}) } \end{equation} are seen to have all arrows epimorphisms except the one between the kernels. Therefore, it is also an epimorphism. Now both the systems $\{ \ker(d_i^{n}) \}_{i \in \mathbb{N}}$ and $\{\mathscr{L}_{A_i}^{n}(C_i, B_i)\}_{i \in \mathbb{N}}$ are epimorphic. Notice that \[\underset{i \in \mathbb{N}}\mathop{\lim\limits_{\displaystyle\leftarrow}} \ker(d_i^{n}) \cong \ker(\underset{i \in \mathbb{N}}\mathop{\lim\limits_{\displaystyle\leftarrow}} d_{i}^{n}). \] Proposition \ref{prop:nuclear_proj} implies that $\underset{i \in \mathbb{N}}\mathop{\lim\limits_{\displaystyle\leftarrow}} \mathscr{L}_{A_i}^{n}(C_i, B_i)$ is nuclear. Therefore, $\underset{i \in \mathbb{N}}\mathop{\lim\limits_{\displaystyle\leftarrow}} \ker(d_i^{n})$ is nuclear as it can be identified with a closed subspace of $\mathscr{L}_{A_V}^{n}(C_W, B_U)$. Therefore, Corollary \ref{cor:BornML2} implies that $\{ \ker(d_i^{n}) \}_{i \in \mathbb{N}}$ and $\{\mathscr{L}_{A_i}^{n}(C_i, B_i)\}_{i \in \mathbb{N}}$ are $\underset{i \in \mathbb{N}}\mathop{\lim\limits_{\displaystyle\leftarrow}}$ acyclic. Applying Lemma \ref{lem:LongShort} to the system of complexes (\ref{eqn:system_complex}) we deduce that the complex of equation (\ref{eqn:limit_complex}) is strictly acyclic.
Moreover, applying Corollary \ref{cor:proj_lim_1} to each term in degree strictly less than zero of the complex (\ref{eqn:system_complex}), we see that (\ref{eqn:limit_complex}) is strictly isomorphic to the complex \begin{equation} \label{eqn:limit_complex_2} \cdots \longrightarrow (\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} C_{i} )\widehat{\otimes}_{k} (\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} A_{i}) \widehat{\otimes}_{k} (\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} B_i) \longrightarrow ( \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} C_{i} )\widehat{\otimes}_{k} (\mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} B_i ) \longrightarrow \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} (C_{i}\widehat{\otimes}_{A_i} B_i) \longrightarrow 0. \end{equation} Thus we showed that the complex \[\cdots \longrightarrow C_{W}\widehat{\otimes}_{k} A_{V} \widehat{\otimes}_{k} B_U \longrightarrow C_{W}\widehat{\otimes}_{k} B_U \longrightarrow \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} (C_{i}\widehat{\otimes}_{A_i} B_i) \longrightarrow 0 \] is strictly exact. Since $\coker(\mathscr{L}_{A_V}^{1}(C_W, B_U) \to \mathscr{L}_{A_V}^{0}(C_W, B_U)) \cong C_W \widehat{\otimes}_{A_V} B_U$ we have shown the isomorphism in $D^{\le 0}(B_U)$ \[C_W \widehat{\otimes}_{A_V}^\mathbb L B_U \to \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N} (C_{i}\widehat{\otimes}_{A_i} B_i) \cong C_W \widehat{\otimes}_{A_V} B_U. \] In the special case that the system $\{B_{i}\}_{i \in \mathbb N}$ and the system $\{C_{i}\}_{i \in \mathbb N}$ are equal we have $B_{U}=C_{W}$ and that $B_{i} \widehat{\otimes}^{\mathbb{L}}_{A_i} B_{i} \cong B_{i} \widehat{\otimes}_{A_i} B_{i} \to B_{i}$ is an isomorphism by Theorem 5.11 of \cite{BaBe}. 
We can infer that \[ B_U \widehat{\otimes}_{A_V}^\mathbb L B_U \to \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N}(B_{i} \widehat{\otimes}_{A_{i}}B_{i}) \cong \mathop{\lim\limits_{\displaystyle\leftarrow}}_{i \in \mathbb N}B_{i} \cong B_{U} \] is an isomorphism, concluding the proof of the theorem. \ \hfill $\Box$ \begin{lem}\label{lemma:proj_homotopy_epi} Let $A$ be a dagger Stein algebra presented by a system of (strictly) dagger affinoid algebras $A_{i}$. Then the projection maps $A \to A_i$ are homotopy epimorphisms. \end{lem} {\bf Proof.} We can write $A_i$ as a direct limit \[ A_i \cong \mathop{\lim\limits_{\displaystyle\rightarrow}}_{\rho > 1} (A_i)_\rho \] where $(A_i)_\rho$ are Stein algebras of Stein spaces that admit closed embeddings in polydisks and where each morphism $(A_i)_\rho \to (A_i)_{\rho'}$, for $\rho' < \rho$, corresponds geometrically to an open embedding. Such a system of Stein spaces can be found via a presentation of $A_i \cong \frac{W_k^n}{I}$ and writing $W_k^n$ as the direct limit of the Stein algebras of open polydisks of radius bigger than one, which form a base of neighborhoods of the closed unit polydisk. Applying Theorem \ref{thm_stein_homotopy} we can infer that each $(A_i)_\rho \to (A_i)_{\rho'}$ is a homotopy epimorphism and applying Lemma \ref{lem:ind_limit_homotopy_epi} we can deduce that the canonical morphisms $(A_i)_\rho \to A_i$ are homotopy epimorphisms. Since $A$ is a Stein algebra and $A \to A_i$ corresponds geometrically to an open embedding, there exists a $\rho > 1$ small enough such that $A \to A_i$ factors through $A \to (A_i)_\rho$ and such that $A \to (A_i)_\rho$ corresponds geometrically to an open embedding. Applying Theorem \ref{thm_stein_homotopy} we can deduce that $A \to (A_i)_\rho$ is a homotopy epimorphism and therefore $A \to A_i$ is a homotopy epimorphism because it can be written as a composition of two homotopy epimorphisms.
\ \hfill $\Box$ Finally, we prove the last result characterizing open immersions of Stein spaces. \begin{thm}\label{thm:DaggerQStHoEpToImm} Let $f: A_V \to B_U$ be a morphism of dagger Stein algebras over $k$. If $f$ is a homotopy epimorphism as a morphism of ${\text{\bfseries\sf{Comm}}}({\text{\bfseries\sf{CBorn}}}_k)$, then it is a localization. \end{thm} {\bf Proof.} The condition of $f$ being a homotopy epimorphism means that \[ B_U \widehat{\otimes}_{A_V}^\mathbb L B_U \cong B_U \widehat{\otimes}_{A_V} B_U \cong B_{U}. \] Let $\underset{i \in \mathbb N} \mathop{\lim\limits_{\displaystyle\leftarrow}} A_i \cong A_V, \underset{i \in \mathbb N} \mathop{\lim\limits_{\displaystyle\leftarrow}} B_i \cong B_U$ be representations of $A_V$ and $B_U$ such that $f$ can be written as a morphism of projective systems $f_i:A_i \to B_i$ of dagger affinoid algebras. By Lemma \ref{lemma:proj_homotopy_epi} the projections $A_V \to A_i$ and $B_U \to B_i$ are homotopy epimorphisms. Lemma \ref{lem:composition_HomotopyMon} applied to the bottom horizontal map of the commutative diagram \[ \begin{tikzpicture} \matrix(m)[matrix of math nodes, row sep=2.6em, column sep=2.8em, text height=1.5ex, text depth=0.25ex] { A & B \\ A_i & B_i \\}; \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$$} (m-1-2); \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$$} (m-2-1); \path[->,font=\scriptsize] (m-1-2) edge node[auto] {$$} (m-2-2); \path[->,font=\scriptsize] (m-2-1) edge node[auto] {$$} (m-2-2); \end{tikzpicture} \] implies that $A_i \to B_i$ is a homotopy epimorphism for every $i \in \mathbb N$. This is equivalent to saying that $f$ can be written as a projective system of homotopy epimorphisms of dagger affinoid algebras. Applying Theorem 5.11 of \cite{BaBe} we obtain that the morphisms $A_i \to B_i$ are open immersions of dagger affinoid spaces.
Therefore, $\mathcal{M}(f): \mathcal{M}(B_U) \to \mathcal{M}(A_V)$ can be written as a filtered inductive limit of open embeddings, and it is easy to check that this implies that $\mathcal{M}(f)$ is an open immersion. \ \hfill $\Box$ To conclude our homological and categorical characterization of the topology of Stein spaces it remains to characterize coverings. Consider a Stein space $X$ and an arbitrary covering \[ X = \bigcup_{i \in I} Y_i \] of $X$ made of Stein spaces $Y_i$. By definition the topology of $X$ is hemi-compact and $X$ is also paracompact. Therefore, the family $\{ Y_i \}_{i \in I}$ admits a countable sub-family $\{ Y_j \}_{j \in J}$, where $J \subset I$, such that \[ X = \bigcup_{j \in J} Y_j. \] \begin{lem} \label{lem:cover_one_way} Let $A$ be a dagger Stein algebra and let $\{f_i: A \to A_{V_i} \}_{i\in I}$ be a family of localizations such that for some countable subset $J \subset I$ the corresponding family of functors \[ \{F_{i}: {\text{\bfseries\sf{Mod}}}^{RR}(A) \to {\text{\bfseries\sf{Mod}}}^{RR}(A_{V_i})\}_{i \in J} \] is conservative. Then, the morphism $\phi: \underset{i \in J} \coprod {\rm Max\,}(A_{V_i} ) \to {\rm Max\,}(A)$ is surjective. \end{lem} {\bf Proof.} Assume that the family of functors $\{F_i\}_{i \in J}$ is conservative and that $\phi: \underset{i \in J}\coprod {\rm Max\,}(A_{V_i} ) \to {\rm Max\,}(A)$ is not surjective. We will deduce a contradiction. Let $\mathfrak{m}_x \in {\rm Max\,}(A)$ be a point which is not in the image of $\phi$. Consider the quotient $A/\mathfrak{m}_x$. This is a Stein algebra: see Example \ref{exa:stein}. By Theorem \ref{thm_stein_homotopy} $A/\mathfrak{m}_x\in {\text{\bfseries\sf{Mod}}}^{RR}(A)$ and it is a non-trivial module. We also have \[ A_{V_i} \widehat{\otimes}_A (A/\mathfrak{m}_x) = 0 \] for all $i$, because the extension of $\mathfrak{m}_x$ to $A_{V_i}$ is equal to the improper ideal for all $i \in J$. This proves that the family $\{F_i\}_{i \in J}$ is not conservative, giving the desired contradiction.
\ \hfill $\Box$ \begin{defn}\label{defn:RRqcoh_F} We denote with ${\text{\bfseries\sf{Mod}}}_F^{RR}(A)$ the full sub-category of ${\text{\bfseries\sf{Mod}}}^{RR}(A)$ consisting of those objects $M \in {\text{\bfseries\sf{Mod}}}^{RR}(A)$ that are bornological Fr\'echet spaces. \end{defn} \begin{lem} \label{prop:proj_lim_A} Let $\{ E_i \}_{i\in I}$ be a countable set of binuclear bornological Fr\'echet modules over $A$ and $F$ a bornological Fr\'echet module over $A$, where $A$ is a nuclear Fr\'echet algebra. Then, the canonical map \[ E \widehat{\otimes}_{A} F = (\prod_{i\in I} E_i) \widehat{\otimes}_{A} F \to \prod_{i \in I}(E_i \widehat{\otimes}_{A} F) \] is an isomorphism of bornological modules. \end{lem} {\bf Proof.} We have \[ \prod_{i \in I} F \widehat{\otimes}_A E_i = \prod_{i \in I} \coker ( F \widehat{\otimes}_k A \widehat{\otimes}_k E_i \stackrel{d_1}\to F \widehat{\otimes}_k E_i) \] where $d_1$ is induced by the differential in degree $1$ of the Bar resolution. Since direct products of bornological spaces preserve cokernels (see Proposition 1.9 of \cite{PrSc} for a proof of this fact for bornological spaces over $\mathbb C$ and Proposition 1.2.12 of \cite{Bam} for the same proof worked out in a more general setting) we see that \[ \prod_{i \in I} \coker ( F \widehat{\otimes}_k A \widehat{\otimes}_k E_i \stackrel{d_1}\to F \widehat{\otimes}_k E_i) \cong \coker ( \prod_{i \in I} ( F \widehat{\otimes}_k A \widehat{\otimes}_k E_i) \stackrel{d_1}\to \prod_{i \in I} (F \widehat{\otimes}_k E_i)) \] to which we can apply Corollary \ref{cor:proj_lim_2} (cofiltering the infinite direct product by its finite products) to deduce that \[ \coker ( \prod_{i \in I} (F \widehat{\otimes}_k A \widehat{\otimes}_k E_i) \stackrel{d_1}\to \prod_{i \in I} (F \widehat{\otimes}_k E_i)) \cong \coker ( F \widehat{\otimes}_k A \widehat{\otimes}_k (\prod_{i \in I} E_i) \stackrel{d_1}\to F \widehat{\otimes}_k (\prod_{i \in I} E_i)) \cong F \widehat{\otimes}_{A} E.
\] \ \hfill $\Box$ \begin{cor} \label{cor:proj_lim_A} Under the same hypotheses as Lemma \ref{prop:proj_lim_A} we have that for any countable set $I$, \[ E \widehat{\otimes}_{A}^\mathbb L F = (\prod_{i\in I} E_i) \widehat{\otimes}_{A}^\mathbb L F \to \prod_{i \in I}(E_i \widehat{\otimes}_{A}^\mathbb L F) \] is an isomorphism of bornological modules. \end{cor} {\bf Proof.} The same reasoning of Lemma \ref{prop:proj_lim_A} can be extended to $E \widehat{\otimes}_{A}^\mathbb L F$ by representing it with the complex ${\mathscr L}_A^\bullet(E, F)$, using the notation introduced so far. Therefore, for each $n \in \mathbb N$ we have that \[ {\mathscr L}_A^n(E, F) = E \widehat{\otimes}_k A^{\widehat{\otimes} n} \widehat{\otimes}_k F = (\prod_{i \in I} E_i )\widehat{\otimes}_k A^{\widehat{\otimes} n} \widehat{\otimes}_k F. \] When $I$ is finite, using the fact that the completed projective tensor product commutes with finite products, we can deduce that ${\mathscr L}_A^n(E, F) \cong \underset{i \in I}\prod (E_i \widehat{\otimes}_k A^{\widehat{\otimes} n} \widehat{\otimes}_k F)$. By writing a countable product as a cofiltered projective system of finite products, we can use Corollary \ref{cor:proj_lim_2} to deduce that \[ {\mathscr L}_A^n(E, F) \cong \prod_{i \in I} (E_i \widehat{\otimes}_k A^{\widehat{\otimes} n} \widehat{\otimes}_k F) \] for general countable collections. Now $\underset{i \in I} \prod$ defines a functor $\underset{i \in I} \prod D^{\le 0}({\text{\bfseries\sf{Mod}}}(A)) \to D^{\le 0}({\text{\bfseries\sf{Mod}}}(A))$ because direct products are exact and so we get a strict quasi-isomorphism of the complexes representing $E \widehat{\otimes}_{A}^\mathbb L F $ and $\underset{i\in I}\prod(E_i \widehat{\otimes}_{A}^\mathbb L F)$. \ \hfill $\Box$ \begin{lem} \label{lem:cover_the_other_way} Let $A$ be a Stein algebra and $\{ V_i \}_{i \in \mathbb N}$ a countable collection of Stein domains that covers $X = {\rm Max\,}(A)$.
Then, the corresponding family of functors ${\text{\bfseries\sf{Mod}}}_F^{RR}(A) \to {\text{\bfseries\sf{Mod}}}_F^{RR}(A_{V_i})$ is conservative. \end{lem} {\bf Proof.} Let $f: M \to N$ in ${\text{\bfseries\sf{Mod}}}_F^{RR}(A)$ be any morphism such that $f_{i}: M \widehat{\otimes}_{A}A_{V_i} \to N \widehat{\otimes}_{A}A_{V_i} $ are isomorphisms for all $i$. The \v{C}ech-Amitsur complex \begin{equation} \label{eqn:cech_amistur} 0 \to A \to \prod_{i_1 \in \mathbb N} A_{V_{i_1}} \to \prod_{i_1, i_2 \in \mathbb N} A_{V_{i_1}} \widehat{\otimes}_{A} A_{V_{i_2}} \to \cdots \end{equation} is strictly exact as a consequence of Theorem B for Stein spaces (cf. the Fundamental Theorem on page 124 of \cite{GR} for the Archimedean version of Theorem B and Satz 2.4 of \cite{KI} for the non-Archimedean one). Theorem \ref{thm:extend_tensor} permits us to apply the functor $M \widehat{\otimes}_{A}^{\mathbb{L}}(-)$ to the complex (\ref{eqn:cech_amistur}) because it permits us to extend $M \widehat{\otimes}_{A}^{\mathbb{L}}(-)$ to a functor on the derived category of unbounded complexes. Therefore, by applying the derived functor $M \widehat{\otimes}_{A}^{\mathbb{L}}(-)$ we are left with an object strictly quasi-isomorphic to zero. Notice that in this case $M \widehat{\otimes}^{\mathbb{L}}_{A} (-)$ commutes with the relevant countable products (by Corollary \ref{cor:proj_lim_A}) because $M$ is Fr\'echet and products respect cokernels. Furthermore, as $M$ is RR-quasi-coherent, we have \[M \widehat{\otimes}^{\mathbb{L}}_{A} (A_{V_{i_1}}\widehat{\otimes}_{A} \cdots \widehat{\otimes}_{A} A_{V_{i_n}} ) \cong M \widehat{\otimes}_{A} (A_{V_{i_1}}\widehat{\otimes}_{A} \cdots \widehat{\otimes}_{A} A_{V_{i_n}} ) \] for each $i_1, \dots, i_n$.
Therefore, we can apply Corollary \ref{cor:proj_lim_2} and, by a small computation, obtain a strictly exact complex \[0 \to M \to \prod_{i_1 \in \mathbb N} (M \widehat{\otimes}_{A} A_{V_{i_1}}) \to \prod_{i_1,i_2 \in \mathbb N} (M \widehat{\otimes}_{A} A_{V_{i_1}} \widehat{\otimes}_{A} A_{V_{i_2}}) \to \cdots \] We can do the same thing for $N$ and this yields that the morphisms $f_{i}$ extend uniquely to morphisms of the complexes resolving $M$ and $N$. Therefore $f$ is an isomorphism. \ \hfill $\Box$ \begin{thm} \label{thm:coverings} A collection of homotopy epimorphisms of Stein algebras $\{A_V \to A_{V_i}\}_{i \in I}$ is a covering of ${\rm Max\,}(A_V)$ if and only if the family of functors $\{{\text{\bfseries\sf{Mod}}}_F^{RR}(A) \to {\text{\bfseries\sf{Mod}}}_F^{RR}(A_{V_i})\}_{i \in J}$ is conservative, with $J \subset I$ a countable subset. \end{thm} {\bf Proof.} The assertion of the theorem is simply the combination of Lemma \ref{lem:cover_one_way} and Lemma \ref{lem:cover_the_other_way}. \ \hfill $\Box$ We summarize the main results of this section in the following corollary. \begin{cor} \label{cor:main_results} Consider $({\text{\bfseries\sf{CBorn}}}_k, \widehat{\otimes}_k, k)$ as a closed symmetric monoidal elementary quasi-abelian category. The natural inclusion of categories ${\text{\bfseries\sf{Stn}}}_k {\hookrightarrow} {\text{\bfseries\sf{Comm}}}({\text{\bfseries\sf{CBorn}}}_k)$ permits us to define a countable version of the formal homotopy Zariski topology on ${\text{\bfseries\sf{Stn}}}_k$ as in Definition \ref{defn:homotopy_Zariski}.
The coverings of Stein spaces by Stein spaces (in the usual sense for Archimedean base fields, and in the rigid sense for non-Archimedean base fields) correspond precisely to the families of morphisms $\{A_V \to A_{V_i}\}_{i \in I}$ in the category ${\text{\bfseries\sf{Comm}}}({\text{\bfseries\sf{CBorn}}}_{k})$ for which the family of functors $\{{\text{\bfseries\sf{Mod}}}_F^{RR}(A) \to {\text{\bfseries\sf{Mod}}}_F^{RR}(A_{V_i})\}_{i \in J}$ is conservative, and $A_V \to A_{V_i}$ is a homotopy epimorphism for each $i \in J$ with $J \subset I$ some countable subset. \end{cor} {\bf Proof.} Theorems \ref{thm:DaggerQStHoEpToImm} and \ref{thm_stein_homotopy} precisely mean that in ${\text{\bfseries\sf{Stn}}}_k$ a morphism is a homotopy epimorphism if and only if it is an open immersion. The claim on the coverings is obtained in Theorem \ref{thm:coverings}. \ \hfill $\Box$ \begin{rem} If, instead, we want to consider finite covers of Stein spaces by Stein spaces (so that $J$ is finite), then the analogue of Corollary \ref{cor:main_results} gives a description of the formal homotopy Zariski topology, and in this case one can replace ${\text{\bfseries\sf{Mod}}}_F^{RR}$ with ${\text{\bfseries\sf{Mod}}}^{RR}$, using all RR-quasicoherent modules instead of just the Fr\'echet ones. Notice also that within the proof of Lemma \ref{lem:cover_the_other_way} we have shown that, given a countable open Stein cover of a Stein space and any $M \in {\text{\bfseries\sf{Mod}}}_F^{RR}(A)$, the complex \[0 \to M \to \prod_{i_1 \in \mathbb N} (M \widehat{\otimes}_{A} A_{V_{i_1}}) \to \prod_{i_1,i_2 \in \mathbb N} (M \widehat{\otimes}_{A} A_{V_{i_1}} \widehat{\otimes}_{A} A_{V_{i_2}}) \to \cdots \] is strictly exact. The same holds for finite open Stein covers of a Stein space and any $M \in {\text{\bfseries\sf{Mod}}}^{RR}(A)$. \end{rem} \bibliographystyle{amsalpha}
\section{Introduction} \label{sec:intro} \subsection{Background and Problem Definition} \label{ssec:background} Halfspaces are Boolean functions $h_{\mathbf{w}}: \mathbb{R}^d \to \{ \pm 1\}$ of the form $h_{\mathbf{w}}(\mathbf{x}) = \mathrm{sign} \left(\langle \mathbf{w}, \mathbf{x} \rangle \right)$, where $\mathbf{w} \in \mathbb{R}^d$ is the associated weight vector. (The function $\mathrm{sign}: \mathbb{R} \to \{ \pm 1\}$ is defined as $\mathrm{sign}(u)=1$ if $u \geq 0$ and $\mathrm{sign}(u)=-1$ otherwise.) The problem of learning an unknown halfspace with a margin condition (in the sense that no example is allowed to lie too close to the separating hyperplane) is as old as the field of machine learning --- starting with Rosenblatt's Perceptron algorithm~\cite{Rosenblatt:58} --- and has arguably been one of the most influential problems in the development of the field, with techniques such as SVMs~\cite{Vapnik:98} and AdaBoost~\cite{FreundSchapire:97} coming out of its study. In this paper, we study the problem of learning $\gamma$-margin halfspaces in the {\em agnostic} PAC model~\cite{Haussler:92, KSS:94}. Specifically, there is an unknown distribution $\mathcal{D}$ on $\mathbb{B}_d \times \{ \pm 1\}$, where $\mathbb{B}_d$ is the unit ball on $\mathbb{R}^d$, and the learning algorithm $\mathcal{A}$ is given as input a training set $S = \{(\mathbf{x}^{(i)}, y^{(i)}) \}_{i=1}^m$ of i.i.d. samples drawn from $\mathcal{D}$. The goal of $\mathcal{A}$ is to output a hypothesis whose error rate is competitive with the $\gamma$-margin error rate of the optimal halfspace. In more detail, the {\em error rate} (misclassification error) of a hypothesis $h: \mathbb{R}^d \to \{\pm 1\}$ (with respect to $\mathcal{D}$) is $\mathrm{err}_{0-1}^{\mathcal{D}}(h) \stackrel{{\mathrm {\footnotesize def}}}{=} \mathbf{Pr}_{(\mathbf{x}, y) \sim \mathcal{D}}[h(\mathbf{x}) \neq y]$. 
For $\gamma \in (0, 1)$, the {\em $\gamma$-margin error rate} of a halfspace $h_{\mathbf{w}}(\mathbf{x})$ with $\|\mathbf{w}\|_2 \leq 1$ is $\mathrm{err}^{\mathcal{D}}_{\gamma}(\mathbf{w}) \stackrel{{\mathrm {\footnotesize def}}}{=} \mathbf{Pr}_{(\mathbf{x}, y) \sim \mathcal{D}} \left[y \langle \mathbf{w}, \mathbf{x} \rangle \leq \gamma \right]$. We denote by $\mathrm{OPT}_{\gamma}^{\mathcal{D}} \stackrel{{\mathrm {\footnotesize def}}}{=} \min_{\|\mathbf{w}\|_2 \leq 1} \mathrm{err}^{\mathcal{D}}_{\gamma}(\mathbf{w})$ the minimum $\gamma$-margin error rate achievable by any halfspace. We say that $\mathcal{A}$ is an {\em $\alpha$-agnostic learner}, $\alpha \geq 1$, if it outputs a hypothesis $h$ that with probability at least $1-\tau$ satisfies $\mathrm{err}_{0-1}^{\mathcal{D}}(h) \leq \alpha \cdot \mathrm{OPT}_{\gamma}^{\mathcal{D}} +\epsilon$. (For $\alpha = 1$, we obtain the standard notion of agnostic learning.) If the hypothesis $h$ is itself a halfspace, we say that the learning algorithm is {\em proper}. This work focuses on proper learning algorithms. \subsection{Related and Prior Work} \label{ssec:related-work} In this section, we summarize the prior work that is directly related to the results of this paper. First, we note that the sample complexity of our learning problem (ignoring computational considerations) is well-understood. In particular, the ERM rule that minimizes the number of {\em $\gamma$-margin errors} over the training set (subject to a norm constraint) is known to be an agnostic learner ($\alpha = 1$), assuming the sample size is $\Omega(\log(1/\tau)/(\epsilon^2\gamma^2))$. Specifically, $\Theta(\log(1/\tau)/(\epsilon^2\gamma^2))$ samples\footnote{To avoid clutter in the expressions, we will henceforth assume that the failure probability $\tau = 1/10$.
Recall that one can always boost the confidence probability with an $O(\log(1/\tau))$ multiplicative overhead in the sample complexity.} are known to be sufficient and necessary for this learning problem (see, e.g.,~\cite{BartlettM02, McAllester03}). In the realizable case ($\mathrm{OPT}_\gamma^{\mathcal{D}} = 0$), i.e., if the data is linearly separable with margin $\gamma$, the ERM rule above can be implemented in $\mathrm{poly}(d, 1/\epsilon, 1/\gamma)$ time using the Perceptron algorithm. The non-realizable setting ($\mathrm{OPT}_\gamma^{\mathcal{D}} >0$) is much more challenging computationally. The agnostic version of our problem ($\alpha=1$) was first considered in \cite{BenDavidS00}, who gave a {\em proper} learning algorithm with runtime $\mathrm{poly}(d) \cdot (1/\epsilon)^{\tilde{O}(1/\gamma^2)}$. It was also shown in \cite{BenDavidS00} that agnostic proper learning with runtime $\mathrm{poly}(d, 1/\epsilon, 1/\gamma)$ is NP-hard. A question left open by their work was characterizing the computational complexity of proper learning as a function of $1/\gamma$. Subsequent works focused on {\em improper} learning. The $\alpha=1$ case was studied in~\cite{SSS09, SSS10} who gave a learning algorithm with sample complexity $\mathrm{poly}(1/\epsilon) \cdot 2^{\tilde{O}(1/\gamma)}$ -- i.e., {\em exponential} in $1/\gamma$ -- and computational complexity $\mathrm{poly}(d/\epsilon) \cdot 2^{\tilde{O}(1/\gamma)}$. The increased sample complexity is inherent in their approach, as their algorithm works by solving a convex program over an expanded feature space. \cite{BirnbaumS12} gave an $\alpha$-agnostic learning algorithm for all $\alpha \geq 1$ with sample complexity $\mathrm{poly}(1/\epsilon) \cdot 2^{\tilde{O}(1/(\alpha \gamma))}$ and computational complexity $\mathrm{poly}(d/\epsilon) \cdot 2^{\tilde{O}(1/(\alpha \gamma))}$. (We note that the Perceptron algorithm is known to achieve $\alpha = 1/\gamma$~\cite{Servedio:01lnh}. 
Prior to \cite{BirnbaumS12}, \cite{LS:11malicious} gave a $\mathrm{poly}(d, 1/\epsilon, 1/\gamma)$ time algorithm achieving $\alpha = \Theta ((1/\gamma)/\sqrt{\log(1/\gamma)})$.) \cite{BirnbaumS12} posed as an open question whether their upper bounds for improper learning can be achieved with a proper learner. \new{A related line of work~\cite{KLS09, ABL17, DKKLMS16, LaiRV16, DKK+17, DKKLMS18-soda, DKS18a, KlivansKM18, DKS19, DKK+19-sever} has given polynomial time robust estimators for a range of learning tasks. Specifically,~\cite{KLS09, ABL17, DKS18a, DKK+19-sever} obtained efficient PAC learning algorithms for halfspaces with malicious noise~\cite{Valiant:85short, keali93}, under the assumption that the uncorrupted data comes from a ``tame'' distribution, e.g., Gaussian or isotropic log-concave. It should be noted that the class of $\gamma$-margin distributions considered in this work is significantly broader and can be far from satisfying the structural properties required in the aforementioned works. A growing body of theoretical work has focused on \emph{adversarially robust learning} (e.g.,~\cite{BubeckLPR19,MontasserHS19,DegwekarNV19,Nakkiran19}). In adversarially robust learning, the learner seeks to output a hypothesis with small \emph{$\gamma$-robust misclassification error}, which for a hypothesis $h$ and a norm $\|\cdot\|$ is typically defined as $\mathbf{Pr}_{(\mathbf{x}, y) \sim \mathcal{D}}[\exists \mathbf{x}' \textrm{ with } \|\mathbf{x}' - \mathbf{x}\| \leq \gamma \textrm{ s.t. } h(\mathbf{x}') \ne y]$. Notice that when $h$ is a halfspace and $\|\cdot\|$ is the Euclidean norm, the $\gamma$-robust misclassification error coincides with the $\gamma$-margin error in our context. (It should be noted that most of the literature on adversarially robust learning focuses on the $\ell_{\infty}$-norm.)
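For a halfspace with a unit weight vector, the optimal $\ell_2$ perturbation of size $\gamma$ simply shifts $\mathbf{x}$ to $\mathbf{x} - y \gamma \mathbf{w}$, which makes this coincidence easy to verify numerically. The following is an illustrative sketch on synthetic data (the data and all names are ours, not from the paper):

```python
import numpy as np

def sgn(u):
    # sign(u) = 1 if u >= 0, and -1 otherwise (the paper's convention).
    return np.where(u >= 0, 1, -1)

def margin_err(w, X, y, gamma):
    # gamma-margin error: fraction of samples with y<w, x> <= gamma.
    return np.mean(y * (X @ w) <= gamma)

def robust_err(w, X, y, gamma):
    # gamma-robust misclassification error under the l2 norm: for unit w,
    # the adversary's best perturbation of size gamma is x' = x - y*gamma*w.
    Xp = X - (gamma * y)[:, None] * w
    return np.mean(sgn(Xp @ w) != y)

rng = np.random.default_rng(0)
w = np.array([1.0, 0.0])                         # unit weight vector
X = rng.normal(size=(500, 2))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # unit ball
y = rng.choice([-1, 1], size=500)
assert margin_err(w, X, y, 0.1) == robust_err(w, X, y, 0.1)
```

(For continuous data the two quantities agree almost surely; they can differ only on the measure-zero boundary event $y \langle \mathbf{w}, \mathbf{x} \rangle = \gamma$.)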
However, the objectives of the two learning settings are slightly different: in adversarially robust learning, the learner would like to output a hypothesis with small $\gamma$-robust misclassification error, whereas in our context the learner only has to output a hypothesis with small zero-one misclassification error. Nonetheless, as we point out in Remark~\ref{remark:adv-learning}, our algorithms can be adapted to provide guarantees in line with the adversarially robust setting as well. Finally, in the distribution-independent agnostic setting without margin assumptions, there is compelling complexity-theoretic evidence that even weak learning of halfspaces is computationally intractable~\cite{GR:06, FGK+:06short, DOSW:11, Daniely16, BhattacharyyaGS18}. } \subsection{Our Contributions} \label{ssec:our-results} We study the complexity of {\em proper} $\alpha$-agnostic learning of $\gamma$-margin halfspaces on the unit ball. Our main result nearly characterizes the complexity of constant factor approximation to this problem: \begin{theorem} \label{thm:constant-factor-bounds} There is an algorithm that uses $O(1/(\epsilon^2\gamma^2))$ samples, runs in time $\mathrm{poly}(d/\epsilon) \cdot 2^{\tilde{O}(1/\gamma^2)}$ and is an $\alpha = 1.01$-agnostic proper learner \new{for $\gamma$-margin halfspaces} with confidence probability $9/10$. Moreover, assuming the Randomized Exponential Time Hypothesis, any proper learning algorithm that achieves \new{any} constant factor approximation has runtime $\mathrm{poly}(d/\epsilon) \cdot \Omega(2^{(1/\gamma)^{2 - o(1)}})$. \end{theorem} The reader is referred to Theorems~\ref{thm:constant-factor-alg} and~\ref{thm:run-time} for detailed statements of the upper and lower bound respectively. A few remarks are in order: First, we note that the approximation ratio of $1.01$ in the above theorem statement is not inherent. 
Our algorithm achieves $\alpha = 1+\delta$, for any $\delta>0$, with runtime $\mathrm{poly}(d/\epsilon) \cdot 2^{\tilde{O}(1/(\delta \gamma^2))}$. The runtime of our algorithm significantly improves on the runtime of the best known agnostic proper learner~\cite{BenDavidS00}, achieving fixed polynomial dependence on $1/\epsilon$, independent of $\gamma$. This gain in runtime comes at the expense of losing a small constant factor in the error guarantee. It is natural to ask whether there exists a $1$-agnostic proper learner matching the runtime of our Theorem~\ref{thm:constant-factor-bounds}. In Theorem~\ref{thm:param}, we establish a computational hardness result implying that such an improvement is unlikely. The runtime dependence of our algorithm scales as $2^{\tilde{O}(1/\gamma^2)}$ (which is nearly best possible for proper learners), as opposed to $2^{\tilde{O}(1/\gamma)}$ in the best known improper learning algorithms~\cite{SSS09, BirnbaumS12}. In addition to the interpretability of proper learning, we note that the sample complexity of our algorithm is quadratic in $1/\gamma$ (which is information-theoretically optimal), as opposed to exponential for known improper learners. Moreover, for moderate values of $\gamma$, our algorithm may be faster than known improper learners, as it only uses spectral methods and ERM, as opposed to convex optimization. Finally, we note that the lower bound part of Theorem~\ref{thm:constant-factor-bounds} implies a computational separation between proper and improper learning for our problem. In addition, we explore the complexity of $\alpha$-agnostic learning for large $\alpha>1$.
The following theorem summarizes our results in this setting: \begin{theorem} \label{thm:alpha-factor-bounds} There is an algorithm that uses $\tilde O(1/(\epsilon^2\gamma^2))$ samples, runs in time $\mathrm{poly}(d) \cdot (1/\epsilon)^{\tilde{O}(1/(\alpha \gamma)^2)}$ and is an $\alpha$-agnostic proper learner \new{for $\gamma$-margin halfspaces} with confidence probability $9/10$. Moreover, assuming NP $\ne$ RP and the Sliding Scale Conjecture, there exists an absolute constant $c > 0$, such that no $(1/\gamma)^c$-agnostic proper learner runs in $\mathrm{poly}(d,1/\varepsilon,1/\gamma)$ time. \end{theorem} The reader is referred to Theorem~\ref{alphaTheorem} for the upper bound and Theorem~\ref{thm:inapx} for the lower bound. In summary, we give an $\alpha$-agnostic proper learning algorithm with runtime exponential in $1/(\alpha\gamma)^2$, as opposed to $1/\gamma^2$, and we show that achieving $\alpha = (1/\gamma)^{\Omega(1)}$ is computationally hard. (Assuming only NP $\ne$ RP, we can rule out polynomial time $\alpha$-agnostic proper learning for $\alpha = (1/\gamma)^{\frac{1}{\text{polyloglog}(1/\gamma)}}$.) \new{ \begin{remark} \label{remark:adv-learning} {\em While not stated explicitly in the subsequent analysis, our algorithms (with a slight modification to the associated constant factors) not only give a halfspace $\mathbf{w}^{\ast}$ with zero-one loss at most $\alpha \cdot \mathrm{OPT}_{\gamma}^{\mathcal{D}} +\epsilon$, but this guarantee holds for the $0.99\gamma$-margin error\footnote{Here the constant $0.99$ can be replaced by any constant less than one, with an appropriate increase to the algorithm's running time.} of $\mathbf{w}^{\ast}$ as well. 
Thus, our learning algorithms also work in the adversarially robust setting (under the Euclidean norm) with a small loss in the ``robustness parameter'' (margin) from the one used to compute the optimum (i.e., $\gamma$) to the one used to measure the error of the output hypothesis (i.e., $0.99\gamma$).} \end{remark} } \subsection{Our Techniques} \label{ssec:techniques} \paragraph{Overview of Algorithms.} For the sake of this intuitive explanation, we provide an overview of our algorithms when the underlying distribution $\mathcal{D}$ is explicitly known. The finite sample analysis of our algorithms follows from standard generalization bounds (see Section~\ref{sec:alg}). Our constant factor approximation algorithm relies on the following observation: Let $\mathbf{w}^{\ast}$ be the optimal weight vector. The assumption that $|\langle \mathbf{w}^{\ast}, \mathbf{x} \rangle |$ is large for almost all $\mathbf{x}$ (by the margin property) implies a relatively strong condition on $\mathbf{w}^{\ast}$, which will allow us to find a relatively small search space containing a near-optimal solution. A first idea is to consider the matrix $\mathbf{M} = \mathbf{E}_{(\mathbf{x}, y) \sim \mathcal{D}}[\mathbf{x} \mathbf{x}^T]$ and note that ${\mathbf{w}^{\ast}}^T \mathbf{M} \mathbf{w}^{\ast} = \Omega(\gamma^2)$. This in turn implies that $\mathbf{w}^{\ast}$ has a large component on the subspace spanned by the eigenvectors corresponding to the largest $O(1/(\epsilon\gamma^2))$ eigenvalues of $\mathbf{M}$. This idea suggests a basic algorithm that computes a net over unit-norm weight vectors on this subspace and outputs the best answer. \new{This basic algorithm has runtime $\mathrm{poly}(d) \cdot 2^{\tilde O(1/(\epsilon\gamma^2))}$ and is analyzed in Section~\ref{ssec:alg-basic}.} To obtain our $\mathrm{poly}(d/\epsilon) \cdot 2^{\tilde O(1/\gamma^2)}$ time constant factor approximation algorithm (establishing the upper bound part of Theorem~\ref{thm:constant-factor-bounds}), we use a refinement of the above idea.
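The basic observation, namely that $\mathbf{w}^{\ast}$ retains most of its mass on the top eigenspace of $\mathbf{M}$ under a margin condition, can be checked numerically. The following sketch uses synthetic data; the dataset construction and the choice of $k$ are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, gamma, k = 50, 2000, 0.3, 10
w_star = np.eye(d)[0]                     # the unknown unit vector, here e^1

# Synthetic points in the unit ball with |<w_star, x>| >= gamma (up to a
# small renormalization), so the margin condition holds on every sample.
X = rng.normal(size=(m, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
X[:, 0] = np.where(X[:, 0] >= 0, 1.0, -1.0) * np.maximum(np.abs(X[:, 0]), gamma)
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))

M = (X.T @ X) / m                         # second moment matrix E[x x^T]
eigvals, eigvecs = np.linalg.eigh(M)      # eigenvalues in ascending order
V = eigvecs[:, -k:]                       # top-k eigenspace
proj = V @ (V.T @ w_star)                 # projection of w_star onto it

assert w_star @ M @ w_star >= gamma**2 / 2   # the quadratic form is Omega(gamma^2)
assert np.linalg.norm(proj) >= 0.9           # w_star lies mostly in the top eigenspace
```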
Instead of trying to guess the projection of $\mathbf{w}^{\ast}$ onto the space of large eigenvectors {\em all at once}, we will do so in stages. In particular, it is not hard to see that $\mathbf{w}^{\ast}$ has a non-trivial projection onto the subspace spanned by the eigenvectors corresponding to the top $O(1/\gamma^2)$ eigenvalues of $\mathbf{M}$. If we guess this projection, we will have some approximation to $\mathbf{w}^{\ast}$, but unfortunately not a sufficiently good one. However, we note that the difference between $\mathbf{w}^{\ast}$ and our current hypothesis $\mathbf{w}$ will have a large average squared inner product with the misclassified points. This suggests an iterative algorithm that in the $i$-th iteration considers the second moment matrix $\mathbf{M}^{(i)}$ of the points not correctly classified by the current hypothesis $\mathrm{sign}(\langle \mathbf{w}^{(i)}, \mathbf{x} \rangle)$, guesses a vector $\mathbf{u}$ in the space spanned by the top few eigenvectors of $\mathbf{M}^{(i)}$, and sets $\mathbf{w}^{(i+1)} = \mathbf{u} + \mathbf{w}^{(i)}$. This procedure can be shown to produce a candidate set of weights with cardinality $2^{\tilde O (1/\gamma^2)}$ one of which has the desired misclassification error. \new{This algorithm and its analysis are given in Section~\ref{ssec:alg-main}.} Our general $\alpha$-agnostic algorithm (upper bound in Theorem~\ref{thm:alpha-factor-bounds}) relies on approximating the {\em Chow parameters} of the target halfspace $f_{\mathbf{w}^{\ast}}$, i.e., the $d$ numbers $\mathbf{E}[f_{\mathbf{w}^{\ast}}(\mathbf{x}) \mathbf{x}_i]$, $i \in [d]$. A classical result~\cite{Chow:61} shows that the exact values of the Chow parameters of a halfspace (over any distribution) uniquely define the halfspace. Although this fact is not very useful under an arbitrary distribution, the margin assumption implies a strong {\em approximate identifiability} result (Lemma~\ref{lem:chow-vs-dist}).
Combining this with an algorithm of~\cite{DeDFS14}, we can efficiently compute an approximation to the halfspace $f_{\mathbf{w}^{\ast}}$ given an approximation to its Chow parameters. In particular, if we can approximate the Chow parameters to $\ell_2$-error \new{$\nu \cdot \gamma$}, we can approximate $f_{\mathbf{w}^{\ast}}$ within error \new{$\mathrm{OPT}_{\gamma}^{\mathcal{D}}+\nu$}. A naive approach to approximate the Chow parameters would be via the empirical Chow parameters, namely $\mathbf{E}_{(\mathbf{x}, y) \sim \mathcal{D}}[y \mathbf{x}]$. In the realizable case, this quantity indeed corresponds to the vector of Chow parameters. Unfortunately, however, this method does not work in the agnostic case and it can introduce an error of $\omega(\mathrm{OPT}_{\gamma}^{\mathcal{D}})$. To overcome this obstacle, we note that in order for a small fraction of errors to introduce a large error in the empirical Chow parameters, it must be the case that there is some direction $\mathbf{w}$ in which many of these erroneous points introduce a large error. If we can guess some error that correlates well with $\mathbf{w}$ and also guess the correct projection of our Chow parameters onto this vector, we can correct a decent fraction of the error between the empirical and true Chow parameters. We show that, by making the correct guesses $\tilde O (1/(\gamma \alpha)^2)$ times, we can reduce the empirical error sufficiently so that it can be used to find an accurate hypothesis. Once again, we can compute a hypothesis for each sequence of guesses and return the best one. \new{See Section~\ref{ssec:alg-bicrit} for a detailed analysis.} \paragraph{Overview of Computational Lower Bounds.} Our hardness results are shown via two reductions. These reductions take as input an instance of a computationally hard problem and produce a distribution $\mathcal{D}$ on $\mathbb{B}_d \times \{\pm 1\}$.
If the starting instance is a YES instance of the original problem, then $\mathrm{OPT}_{\gamma}^{\mathcal{D}}$ is small for an appropriate value of $\gamma$. On the other hand, if the starting instance is a NO instance of the original problem, then $\mathrm{OPT}_{0-1}^{\mathcal{D}}$ is large\footnote{We use $\mathrm{OPT}_{0-1}^{\mathcal{D}} \stackrel{{\mathrm {\footnotesize def}}}{=} \min_{\mathbf{w} \in \mathbb{R}^d} \mathrm{err}^{\mathcal{D}}_{0-1}(\mathbf{w})$ to denote the minimum error rate achievable by any halfspace.}. As a result, if there is a ``too fast'' ($\alpha$-)agnostic proper learner for $\gamma$-margin halfspaces, then we would also get a ``too fast'' algorithm for the original problem as well, which would violate the corresponding complexity assumption. To understand the margin parameter $\gamma$ we can achieve, we need to first understand the problems we start with. For our reductions, the original problems can be viewed in the following form: select $k$ items from $v_1, \dots, v_N$ that satisfy certain ``local constraints''. For instance, in our first construction, the reduction is from the $k$-Clique problem: Given a graph $G$ and an integer $k$, the goal is to determine whether $G$ contains a $k$-clique as a subgraph. For this problem, $v_1, \dots, v_N$ correspond to the vertices of $G$ and the ``local'' constraints are that every pair of selected vertices induces an edge. Roughly speaking, our reduction produces a distribution $\mathcal{D}$ on $\mathbb{B}_d \times \{\pm 1\}$ in dimension $d = N$, with the $i$-th dimension corresponding to $v_i$. The ``ideal'' solution in the YES case is to set $\mathbf{w}_i = \frac{1}{\sqrt{k}}$ iff $v_i$ is selected and set $\mathbf{w}_i = 0$ otherwise. In our reductions, the local constraints are expressed using ``sparse'' sample vectors (i.e., vectors with only a constant number of non-zero coordinates all having the same magnitude). 
For example, in the case of $k$-Clique, the constraints can be expressed as follows: For every non-edge $(i, j)$, we must have $\left(\frac{1}{\sqrt{2}}\mathbf{e}^i + \frac{1}{\sqrt{2}}\mathbf{e}^j\right) \cdot \mathbf{w} \leq \frac{1}{\sqrt{2k}}$, where $\mathbf{e}^i$ and $\mathbf{e}^j$ denote the $i$-th and $j$-th vectors in the standard basis. A main step in both of our proofs is to show that the reduction still works even when we ``shift'' the right hand side by a small multiple of $\frac{1}{\sqrt{k}}$. For instance, in the case of $k$-Clique, it is possible to show that, even if we replace $\frac{1}{\sqrt{2k}}$ with, say, $\frac{0.99}{\sqrt{2k}}$, the correctness of the construction remains, and we also get the added benefit that now the constraints are satisfied with a margin of $\gamma = \Theta(\frac{1}{\sqrt{k}})$ for our ideal solution in the YES case. In the case of $k$-Clique, the above idea yields a reduction to 1-agnostic learning $\gamma$-margin halfspaces with margin $\gamma = \Theta(\frac{1}{\sqrt{k}})$, where the dimension $d$ is $N$ (and $\varepsilon = \frac{1}{\mathrm{poly}(N)}$). As a result, if there is an $f(\frac{1}{\gamma})\mathrm{poly}(d,\frac{1}{\varepsilon})$-time algorithm for the latter for some function $f$, then there also exists a $g(k)\mathrm{poly}(N)$-time algorithm for $k$-Clique for some function $g$. The latter statement is considered unlikely, as it would break a widely-believed hypothesis in the area of parameterized complexity. Ruling out $\alpha$-agnostic learners, for $\alpha>1$, is slightly more complicated, since we need to produce the ``gap'' of $\alpha$ between $\mathrm{OPT}_{\gamma}^{\mathcal{D}}$ in the YES case and $\mathrm{OPT}_{0-1}^{\mathcal{D}}$ in the NO case. 
To create such a gap, we appeal to the PCP Theorem~\cite{AroraS98,AroraLMSS98}, which can be thought of as an NP-hardness proof of the following ``gap version'' of 3SAT: given a 3CNF formula as input, distinguish between the case that the formula is satisfiable and the case that the formula is not even $0.9$-satisfiable\footnote{In other words, for any assignment to the variables, at least a $0.1$ fraction of the clauses are unsatisfied.}. Moreover, further strengthened versions of the PCP Theorem~\cite{Dinur07,MR10} actually imply that this Gap-3SAT problem cannot even be solved in time $O(2^{n^{0.999}})$, where $n$ denotes the number of variables in the formula, assuming the Exponential Time Hypothesis (ETH)\footnote{ETH states that the \emph{exact} version of 3SAT cannot be solved in $2^{o(n)}$ time.}. Once again, (Gap-)3SAT can be viewed in the form of ``item selection with local constraints''. However, the number of selected items $k$ is now equal to $n$, the number of variables of the formula. By a similar line of reasoning as above, the margin we get is now $\gamma = \Theta(\frac{1}{\sqrt{k}}) = \Theta(\frac{1}{\sqrt{n}})$. As a result, if there is, say, a $2^{(1/\gamma)^{1.99}}\mathrm{poly}(d,\frac{1}{\varepsilon})$-time $\alpha$-agnostic proper learner for $\gamma$-margin halfspaces (for an appropriate $\alpha$), then there is an $O(2^{n^{0.995}})$-time algorithm for Gap-3SAT, which would violate ETH. Unfortunately, the idea described above only gives a ``gap'' $\alpha$ slightly larger than $1$, because the gap that we start with in the Gap-3SAT problem is already quite small. To achieve larger gaps, our actual reduction starts from a generalization of 3SAT, called constraint satisfaction problems (CSPs), whose gap problems are hard even for very large gaps. This concludes the outline of the main intuitions in our reductions.
\new{The detailed proofs are given in Section~\ref{sec:lb}.} \subsection{Preliminaries} \label{ssec:prelims} For $n \in \mathbb{Z}_+$, we denote $[n] \stackrel{{\mathrm {\footnotesize def}}}{=} \{1, \ldots, n\}$. We will use small boldface characters for vectors and capital boldface characters for matrices. For a vector $\mathbf{x} \in \mathbb{R}^d$, and $i \in [d]$, $\mathbf{x}_i$ denotes the $i$-th coordinate of $\mathbf{x}$, and $\|\mathbf{x}\|_2 \stackrel{{\mathrm {\footnotesize def}}}{=} (\mathop{\textstyle \sum}_{i=1}^d \mathbf{x}_i^2)^{1/2}$ denotes the $\ell_2$-norm of $\mathbf{x}$. We will use $\langle \mathbf{x}, \mathbf{y} \rangle$ for the inner product between $\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$. For a matrix $\mathbf{M} \in \mathbb{R}^{d \times d}$, we will denote by $\|\mathbf{M}\|_2$ its spectral norm and by $\mathrm{tr}(\mathbf{M})$ its trace. Let $\mathbb{B}_d = \{ \mathbf{x} \in \mathbb{R}^d: \|\mathbf{x}\|_2 \leq 1 \}$ be the unit ball and $\mathbb{S}_{d-1} = \{ \mathbf{x} \in \mathbb{R}^d: \|\mathbf{x}\|_2 = 1 \}$ be the unit sphere in $\mathbb{R}^d$. An origin-centered halfspace is a Boolean-valued function $h_{\mathbf{w}}: \mathbb{R}^d \to \{\pm 1\}$ of the form $h_{\mathbf{w}}(\mathbf{x}) = \mathrm{sign} \left(\langle \mathbf{w}, \mathbf{x} \rangle \right)$, where $\mathbf{w} \in \mathbb{R}^d$. (Note that we may assume w.l.o.g. that $\|\mathbf{w}\|_2 =1$.) Let $\mathcal{H}_{d} = \left\{ h_{\mathbf{w}}(\mathbf{x}) = \mathrm{sign} \left(\langle \mathbf{w}, \mathbf{x} \rangle \right), \mathbf{w} \in \mathbb{R}^d \right\}$ denote the class of all origin-centered halfspaces on $\mathbb{R}^d$. Finally, we use $\mathbf{e}^i$ to denote the $i$-th standard basis vector, i.e., the vector whose $i$-th coordinate is one and the remaining coordinates are zero. 
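To fix the conventions above concretely, here is a minimal numerical sketch (illustrative only; the data points and function names are ours, not the paper's) of the sign convention and the empirical $\gamma$-margin error:

```python
import numpy as np

def sign(u):
    # The paper's convention: sign(u) = 1 if u >= 0, and -1 otherwise.
    return np.where(np.asarray(u) >= 0, 1, -1)

def halfspace(w):
    # Origin-centered halfspace h_w(x) = sign(<w, x>).
    return lambda X: sign(X @ w)

def margin_error(w, X, y, gamma):
    # Empirical gamma-margin error: fraction of samples with y<w, x> <= gamma.
    return np.mean(y * (X @ w) <= gamma)

w = np.array([0.6, 0.8])                    # unit vector: ||w||_2 = 1
X = np.array([[1.0, 0.0],                   # three points in the unit ball
              [0.0, -1.0],
              [0.05, 0.0]])
y = np.array([1, -1, 1])
h = halfspace(w)
assert list(h(X)) == [1, -1, 1]             # all three points correctly labeled
assert margin_error(w, X, y, 0.5) == 1/3    # only the third point violates margin 0.5
```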
\section{Efficient Proper Agnostic Learning of Halfspaces with a Margin} \label{sec:alg} \subsection{Warm-Up: Basic Algorithm} \label{ssec:alg-basic} In this subsection, we present a basic algorithm that achieves $\alpha = 1$ and whose runtime is $\mathrm{poly}(d) \cdot 2^{\tilde{O}(1/(\epsilon\gamma^2))}$. Despite its slow runtime, this algorithm serves as a warm-up for our more sophisticated constant factor approximation algorithm in the next subsection. We begin by establishing a basic structural property of this setting, which motivates our basic algorithm, starting with the following simple claim: \begin{claim} \label{claim:M-spectral-norm} Let $\mathbf{M}^{\mathcal{D}} = \mathbf{E}_{{(\mathbf{x},y)\sim \mathcal{D}}} [\mathbf{x} \mathbf{x}^T]$ and $\mathbf{w}^{\ast}$ be a unit vector such that $\mathrm{err}^{\mathcal{D}}_{\gamma}(\mathbf{w}^{\ast}) \leq \mathrm{OPT}_{\gamma}^{\mathcal{D}} \leq 1/2$. Then, we have that $\| \mathbf{M}^{\mathcal{D}}\|_2 \geq {\mathbf{w}^{\ast}}^T \mathbf{M}^{\mathcal{D}} \mathbf{w}^{\ast} \geq \gamma^2/2$. \end{claim} \begin{proof} By assumption, $\mathbf{Pr}_{(\mathbf{x},y)\sim \mathcal{D}}[ \left| \langle \mathbf{w}^{\ast} , \mathbf{x} \rangle \right| \geq \gamma] \geq 1/2$, which implies that $\mathbf{E}_{(\mathbf{x},y)\sim \mathcal{D}}[ \left(\langle \mathbf{w}^{\ast} , \mathbf{x} \rangle \right)^2] \geq \gamma^2/2$. The claim follows from the fact that $\mathbf{v}^{T} \mathbf{M}^{\mathcal{D}} \mathbf{v} = \mathbf{E}_{(\mathbf{x},y)\sim \mathcal{D}}[ \left(\langle \mathbf{v} , \mathbf{x} \rangle \right)^2]$, for any $\mathbf{v} \in \mathbb{R}^d$, and the definition of the spectral norm. \end{proof} Claim~\ref{claim:M-spectral-norm} allows us to obtain an approximation to the optimal halfspace by projecting onto the subspace corresponding to the large eigenvalues of $\mathbf{M}^{\mathcal{D}}$.
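The claim is easy to sanity-check numerically. The following sketch uses synthetic data of our own choosing (the sampling scheme and parameters are not from the paper); it enforces $\mathbf{Pr}[|\langle \mathbf{w}^{\ast}, \mathbf{x} \rangle| \geq \gamma] \geq 1/2$ and then verifies the two inequalities:

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, gamma = 20, 5000, 0.25
w_star = np.eye(d)[0]                  # a unit vector playing the role of w*

# Synthetic points in the unit ball; we force |<w_star, x>| = gamma on half
# of them, so that Pr[|<w_star, x>| >= gamma] >= 1/2 as in the claim.
X = rng.uniform(-1, 1, size=(m, d))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True)) * np.sqrt(2)
X[: m // 2, 0] = gamma * rng.choice([-1.0, 1.0], size=m // 2)

M = (X.T @ X) / m                      # empirical second moment matrix
quad = w_star @ M @ w_star             # equals E[<w_star, x>^2]
spec = np.linalg.eigvalsh(M).max()     # spectral norm (M is PSD)

# Claim: ||M||_2 >= w*^T M w* >= gamma^2 / 2.
assert spec >= quad >= gamma**2 / 2
```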
We will need the following terminology: For $\delta>0$, let $V^{\mathcal{D}}_{\geq \delta}$ be the space spanned by the eigenvectors of $\mathbf{M}^{\mathcal{D}}$ whose eigenvalues are at least $\delta$, and let $V^{\mathcal{D}}_{< \delta}$ be its orthogonal complement. Let $\mathrm{Proj}_V(\mathbf{v})$ denote the projection operator of vector $\mathbf{v}$ on subspace $V$. Then, we have the following: \begin{lemma} \label{lem:proj-approx} Let $\delta>0$ and $\mathbf{w}' = \mathrm{Proj}_{V^{\mathcal{D}}_{\geq \delta}} (\mathbf{w}^{\ast})$. Then, we have that $\mathrm{err}^{\mathcal{D}}_{\gamma/2}(\mathbf{w}') \leq \mathrm{err}^{\mathcal{D}}_{\gamma}(\mathbf{w}^{\ast}) + 4\delta/\gamma^2.$ \end{lemma} \begin{proof} Let $\mathbf{w}^{\ast} = \mathbf{w}'+ \mathbf{w}''$, where $\mathbf{w}'' = \mathrm{Proj}_{V^{\mathcal{D}}_{< \delta}} (\mathbf{w}^{\ast})$. Observe that for any $(\mathbf{x}, y)$, if $y \langle \mathbf{w}', \mathbf{x} \rangle \leq \gamma/2$ then $y \langle \mathbf{w}^{\ast}, \mathbf{x} \rangle \leq \gamma$, unless $|\langle \mathbf{w}'', \mathbf{x} \rangle| \geq \gamma/2$. Hence, $\mathrm{err}^{\mathcal{D}}_{\gamma/2}(\mathbf{w}') \leq \mathrm{err}^{\mathcal{D}}_{\gamma}(\mathbf{w}^{\ast}) + \mathbf{Pr}_{(\mathbf{x}, y) \sim \mathcal{D}}[|\langle \mathbf{w}'' , \mathbf{x} \rangle| \geq \gamma/2]$. By definition of $\mathbf{w}''$ and $\mathbf{M}^{\mathcal{D}}$, we have that $\mathbf{E}_{{(\mathbf{x}, y) \sim \mathcal{D}}}[(\langle \mathbf{w}'' , \mathbf{x} \rangle)^2] \leq \delta$. By Markov's inequality, we thus obtain $\mathbf{Pr}_{(\mathbf{x}, y) \sim \mathcal{D}}[(\langle \mathbf{w}'' , \mathbf{x} \rangle)^2 \geq \gamma^2/4] \leq 4\delta/\gamma^2$, completing the proof of the lemma. \end{proof} Motivated by Lemma~\ref{lem:proj-approx}, the idea is to enumerate over $V_{\geq \delta}^{\mathcal{D}}$, for $\delta = \Theta(\epsilon \gamma^2)$, and output a vector $\mathbf{v}$ with smallest empirical $\gamma/2$-margin error.
To turn this into an actual algorithm, we work with a finite sample set and enumerate over an appropriate cover of the space $V_{\geq \delta}^{{\mathcal{D}}}$. The pseudocode is as follows: \begin{algorithm} \caption{Basic $1$-Agnostic Proper Learning Algorithm} \begin{algorithmic}[1] \State Draw a multiset $S = \{(\mathbf{x}^{(i)}, y^{(i)}) \}_{i=1}^m$ of i.i.d. samples from $\mathcal{D}$, where $m = \Omega(\log(1/\tau)/(\epsilon^2\gamma^2))$. \State Let $\widehat{\mathcal{D}}_m$ be the empirical distribution on $S$. \State Let $\mathbf{M}^{\widehat{\mathcal{D}}_m} = \mathbf{E}_{{(\mathbf{x},y)\sim \widehat{\mathcal{D}}_m}} [\mathbf{x} \mathbf{x}^T]$. \State Set $\delta = \epsilon \gamma^2/16$. Use SVD to find a basis of $V^{\widehat{\mathcal{D}}_m}_{\geq \delta}$. \State Compute a $\delta/2$-cover, $C_{\delta/2}$, in $\ell_2$-norm, of $V^{\widehat{\mathcal{D}}_m}_{\geq \delta} \cap \mathbb{S}_{d-1}$. \State Let $\mathbf{v} \in \mathrm{argmin}_{\mathbf{w} \in C_{\delta/2}} \mathrm{err}_{\gamma/4}^{\widehat{\mathcal{D}}_m}(\mathbf{w})$. \State \Return $h_{\mathbf{v}}(\mathbf{x}) = \mathrm{sign}(\langle \mathbf{v}, \mathbf{x} \rangle)$. \end{algorithmic} \end{algorithm} First, we analyze the runtime of our algorithm. The SVD of $\mathbf{M}^{\widehat{\mathcal{D}}_m}$ can be computed in $\mathrm{poly}(d/\delta)$ time. Note that $V^{\widehat{\mathcal{D}}_m}_{\geq \delta}$ has dimension at most $1/\delta$. This follows from the fact that $\mathbf{M}^{\widehat{\mathcal{D}}_m}$ is PSD and its trace is $\sum_{i=1}^d \lambda_i = \mathrm{tr}(\mathbf{M}^{\widehat{\mathcal{D}}_m}) = \mathbf{E}_{{(\mathbf{x},y)\sim \widehat{\mathcal{D}}_m}}[\mathrm{tr}(\mathbf{x} \mathbf{x}^T)] \leq 1$, where we used that $\|\mathbf{x}\|_2 \leq 1$ with probability $1$ over $\widehat{\mathcal{D}}_m$.
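The dimension bound just used is an instance of Markov's inequality on the spectrum: since $\mathbf{M}^{\widehat{\mathcal{D}}_m}$ is PSD with trace at most $1$, at most $1/\delta$ of its eigenvalues can have size at least $\delta$. A minimal numerical sketch (synthetic data; the choices of $d$, $m$, $\delta$ are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
d, m, delta = 100, 500, 0.05

# Empirical second moment matrix of points in the unit ball.
X = rng.normal(size=(m, d))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))
M = (X.T @ X) / m

eigvals = np.linalg.eigvalsh(M)
assert np.trace(M) <= 1.0 + 1e-9           # tr(M) = E[||x||_2^2] <= 1
# Markov on the eigenvalues: #{lambda_i >= delta} <= tr(M)/delta <= 1/delta.
assert np.sum(eigvals >= delta) <= 1 / delta
```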
Therefore, the unit sphere of $V^{\widehat{\mathcal{D}}_m}_{\geq \delta}$ has a $\delta/2$-cover $C_{\delta/2}$ of size $(2/\delta)^{O(1/\delta)} = 2^{\tilde{O}(1/(\epsilon \gamma^2))}$ that can be computed in time polynomial in its size. We now prove correctness. The main idea is to apply Lemma~\ref{lem:proj-approx} for the empirical distribution $\widehat{\mathcal{D}}_m$ combined with \new{the following statistical bound:} \begin{fact}[\cite{BartlettM02, McAllester03}]\label{fact:erm-margin} Let $S = \{(\mathbf{x}^{(i)}, y^{(i)}) \}_{i=1}^m$ be a multiset of i.i.d. samples from $\mathcal{D}$, where $m = \Omega(\log(1/\tau)/(\epsilon^2\gamma^2))$, and $\widehat{\mathcal{D}}_m$ be the empirical distribution on $S$. Then with probability at least $1-\tau$ over $S$, simultaneously for all unit vectors $\mathbf{w}$ and margins $\gamma>0$, if $h_{\mathbf{w}}(\mathbf{x}) = \mathrm{sign}(\langle \mathbf{w}, \mathbf{x} \rangle)$, we have that $\mathrm{err}^{\mathcal{D}}_{0-1} (h_{\mathbf{w}}) \leq \mathrm{err}_{\gamma}^{\widehat{\mathcal{D}}_m} (\mathbf{w}) + \epsilon$. \end{fact} We proceed with the formal proof. First, we claim that for $m = \Omega(\log(1/\tau)/\epsilon^2)$, with probability at least $1-\tau/2$ over $S$, we have that $\mathrm{err}_{\gamma}^{\widehat{\mathcal{D}}_m}(\mathbf{w}^{\ast}) \leq \mathrm{err}_{\gamma}^{\mathcal{D}} (\mathbf{w}^{\ast}) +\epsilon/8$. To see this, note that $\mathrm{err}_{\gamma}^{\widehat{\mathcal{D}}_m} (\mathbf{w}^{\ast})$ can be viewed as the average of $m$ i.i.d. Bernoulli random variables, each with expectation $\mathrm{err}_{\gamma}^{\mathcal{D}} (\mathbf{w}^{\ast})$. Hence, the claim follows by a Chernoff bound. By an argument similar to that of Lemma~\ref{lem:proj-approx}, we have that $\mathrm{err}_{\gamma/4}^{\widehat{\mathcal{D}}_m}(\mathbf{v}) \leq \mathrm{err}_{\gamma/2}^{\widehat{\mathcal{D}}_m}(\mathbf{w}') + \epsilon/2$. Indeed, we can write $\mathbf{v} = \mathbf{w}' + \mathbf{r}$, where $\|\mathbf{r}\|_2 \leq \delta/2$, and follow the same argument.
In summary, we have the following sequence of inequalities: \begin{eqnarray*} \mathrm{err}_{\gamma/4}^{\widehat{\mathcal{D}}_m}(\mathbf{v}) &\leq& \mathrm{err}_{\gamma/2}^{\widehat{\mathcal{D}}_m}(\mathbf{w}') + \epsilon/2 \leq \mathrm{err}_{\gamma}^{\widehat{\mathcal{D}}_m}(\mathbf{w}^{\ast}) + \epsilon/2 + \epsilon/4 \\ &\leq& \mathrm{err}_{\gamma}^{\mathcal{D}}(\mathbf{w}^{\ast}) + \epsilon/2 + \epsilon/4 + \epsilon/8 \;, \end{eqnarray*} where the second inequality uses Lemma~\ref{lem:proj-approx} for $\widehat{\mathcal{D}}_m$ and the third uses the Chernoff-based claim above. Finally, we use Fact~\ref{fact:erm-margin} for $\gamma/4$ and $\epsilon/8$ to obtain that $\mathrm{err}^{\mathcal{D}}_{0-1} (h_{\mathbf{v}}) \leq \mathrm{err}_{\gamma/4}^{\widehat{\mathcal{D}}_m}(\mathbf{v})+\epsilon/8 \leq \mathrm{OPT}_{\gamma}^{\mathcal{D}} +\epsilon$. The proof follows by a union bound. \subsection{Main Algorithm: Near-Optimal Constant Factor Approximation} \label{ssec:alg-main} In this section, we establish the following theorem, which gives the upper bound part of Theorem~\ref{thm:constant-factor-bounds}: \begin{theorem} \label{thm:constant-factor-alg} Fix $0< \delta \leq 1$. There is an algorithm that uses $O(1/(\epsilon^2\gamma^2))$ samples, runs in time $\mathrm{poly}(d/\epsilon) \cdot 2^{\tilde{O}(1/(\delta \gamma^2))}$ and is a $(1+\delta)$-agnostic proper learner \new{for $\gamma$-margin halfspaces} with confidence probability $9/10$. \end{theorem} Our algorithm in this section produces a finite set of candidate weight vectors and outputs the one with the smallest empirical $\gamma/2$-margin error. For the sake of this intuitive description, we will assume that the algorithm knows the distribution $\mathcal{D}$ in question supported on $\mathbb{B}_d \times \{ \pm 1\}$. By assumption, there is a unit vector $\mathbf{w}^{\ast}$ so that $\mathrm{err}_{\gamma}^{\mathcal{D}}(\mathbf{w}^{\ast}) \leq \mathrm{OPT}_{\gamma}^{\mathcal{D}}$.
We note that if a hypothesis $h_{\mathbf{w}}$ defined by vector $\mathbf{w}$ has $\gamma/2$-margin error greater than $(1+\delta)\mathrm{OPT}_{\gamma}^{\mathcal{D}}$, then there must be a large number of points correctly classified with $\gamma$-margin by $h_{\mathbf{w}^\ast}$, but not correctly classified with $\gamma/2$-margin by $h_\mathbf{w}$. For all of these points, we must have that $|\langle \mathbf{w}^{\ast}-\mathbf{w}, \mathbf{x} \rangle| \geq \gamma/2$. This implies that the $\gamma/2$-margin-misclassified points of $h_\mathbf{w}$ have a large second moment in the $\mathbf{w}^{\ast}-\mathbf{w}$ direction. In particular, we have: \begin{claim} \label{clm:spectral-norm-diff} Let $\mathbf{w} \in \mathbb{R}^d$ be such that $\mathrm{err}_{\gamma/2}^{\mathcal{D}}(\mathbf{w}) > (1+\delta) \mathrm{OPT}_{\gamma}^{\mathcal{D}}$. Let $\mathcal{D}'$ be $\mathcal{D}$ conditioned on $y \langle \mathbf{w}, \mathbf{x} \rangle\leq \gamma/2$. Let $\mathbf{M}^{\mathcal{D}'} = \mathbf{E}_{(\mathbf{x},y)\sim \mathcal{D}'}[\mathbf{x} \mathbf{x}^T]$. Then $(\mathbf{w}^{\ast}-\mathbf{w})^T \mathbf{M}^{\mathcal{D}'} (\mathbf{w}^{\ast}-\mathbf{w}) \geq \delta \gamma^2/8.$ \end{claim} \begin{proof} We claim that with probability at least $\delta/2$ over $(\mathbf{x}, y) \sim \mathcal{D}'$ we have that $y \langle \mathbf{w}, \mathbf{x} \rangle \leq \gamma/2$ and $y \langle \mathbf{w}^{\ast}, \mathbf{x} \rangle \geq \gamma$. To see this, we first note that $\mathbf{Pr}_{(\mathbf{x}, y) \sim \mathcal{D}'}[y \langle \mathbf{w}, \mathbf{x} \rangle >\gamma/2]=0$ holds by definition of $\mathcal{D}'$.
Hence, we have that $$\mathbf{Pr}_{(\mathbf{x}, y) \sim \mathcal{D}'}[y \langle \mathbf{w}^{\ast}, \mathbf{x} \rangle \leq \gamma] \leq \frac{\mathbf{Pr}_{(\mathbf{x}, y) \sim \mathcal{D}}[y \langle \mathbf{w}^{\ast}, \mathbf{x} \rangle \leq \gamma]}{\mathbf{Pr}_{(\mathbf{x}, y) \sim \mathcal{D}}[y \langle \mathbf{w}, \mathbf{x} \rangle\leq \gamma/2]} < \frac{\mathrm{OPT}_\gamma^{\mathcal{D}}}{(1+\delta) \mathrm{OPT}_{\gamma}^{\mathcal{D}}} = \frac{1}{(1+\delta)} \;.$$ By a union bound, we obtain $\mathbf{Pr}_{(\mathbf{x}, y) \sim \mathcal{D}'}[(y \langle \mathbf{w}, \mathbf{x} \rangle >\gamma/2) \cup (y \langle \mathbf{w}^{\ast}, \mathbf{x} \rangle \leq \gamma)] \leq \frac{1}{(1+\delta)}$. Therefore, with probability at least $\delta/(1+\delta) \geq \delta/2$ (since $\delta \leq 1$) over $(\mathbf{x}, y) \sim \mathcal{D}'$ we have that $y \langle \mathbf{w}^{\ast}-\mathbf{w}, \mathbf{x}\rangle \geq \gamma/2$, which implies that $\langle \mathbf{w}^{\ast}-\mathbf{w}, \mathbf{x} \rangle^2 \geq \gamma^2/4$. Thus, $(\mathbf{w}^{\ast}-\mathbf{w})^T \mathbf{M}^{\mathcal{D}'} (\mathbf{w}^{\ast}-\mathbf{w}) = \mathbf{E}_{(\mathbf{x},y)\sim \mathcal{D}'}[(\langle \mathbf{w}^{\ast}-\mathbf{w}, \mathbf{x} \rangle)^2] \geq \delta \gamma^2/8$, completing the proof. \end{proof} Claim~\ref{clm:spectral-norm-diff} says that $\mathbf{w}^{\ast}-\mathbf{w}$ has a large component on the large eigenvalues of $\mathbf{M}^{\mathcal{D}'}$. Building on this claim, we obtain the following result: \begin{lemma} \label{lem:large-projection-k} Let $\mathbf{w}^{\ast},\mathbf{w},\mathbf{M}^{\mathcal{D}'}$ be as in Claim~\ref{clm:spectral-norm-diff}. There exists $k \in \mathbb{Z}_+$ so that if $V_k$ is the span of the top $k$ eigenvectors of $\mathbf{M}^{\mathcal{D}'}$, we have that $\|\mathrm{Proj}_{V_k}(\mathbf{w}^{\ast}-\mathbf{w})\|_2^2 \geq k \delta \gamma^2/8$. 
\end{lemma} \begin{proof} Note that the matrix $\mathbf{M}^{\mathcal{D}'}$ is PSD and let $\lambda_{\max} = \lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_d \geq 0$ be its eigenvalues. We will denote by $V_{\geq t}$ the space spanned by the eigenvectors of $\mathbf{M}^{\mathcal{D}'}$ corresponding to eigenvalues at least $t$. Let $d_t = \dim(V_{\geq t})$ be the dimension of $V_{\geq t}$, i.e., the number of $i \in [d]$ with $\lambda_i \geq t$. Since $\mathbf{x}$ is supported on the unit ball, for $(\mathbf{x}, y) \sim \mathcal{D}'$, we have that $\mathrm{tr}(\mathbf{M}^{\mathcal{D}'}) = \mathbf{E}_{(\mathbf{x}, y) \sim \mathcal{D}'}[\mathrm{tr}(\mathbf{x} \mathbf{x}^T)] \leq 1$. Since $\mathbf{M}^{\mathcal{D}'}$ is PSD, we have that $\mathrm{tr}(\mathbf{M}^{\mathcal{D}'}) = \mathop{\textstyle \sum}_{i=1}^d \lambda_i$ and we can write \begin{equation} \label{eqn:ena} 1\geq \mathrm{tr}(\mathbf{M}^{\mathcal{D}'}) = \mathop{\textstyle \sum}_{i=1}^d \lambda_i = \mathop{\textstyle \sum}_{i=1}^d \mathop{\textstyle \int}_0^{\lambda_i} 1 dt =\mathop{\textstyle \sum}_{i=1}^d \mathop{\textstyle \int}_0^{\lambda_{\max}} \mathbf{1}_{\lambda_i \geq t} dt = \mathop{\textstyle \int}_0^{\lambda_{\max}} d_t dt, \end{equation} where the last equality follows by changing the order of the summation and the integration.
If the projection of $(\mathbf{w}^{\ast}-\mathbf{w})$ onto the $i$-th eigenvector of $\mathbf{M}^{\mathcal{D}'}$ has $\ell_2$-norm $a_i$, we have that \begin{equation} \label{eqn:dyo} \delta \gamma^2/8 \leq (\mathbf{w}^{\ast}-\mathbf{w})^T \mathbf{M}^{\mathcal{D}'} (\mathbf{w}^{\ast}-\mathbf{w}) = \mathop{\textstyle \sum}_{i=1}^d \lambda_i a_i^2 = \mathop{\textstyle \sum}_{i=1}^d \mathop{\textstyle \int}_0^{\lambda_{\max}} a_i^2 \mathbf{1}_{\lambda_i \geq t} dt =\mathop{\textstyle \int}_0^{\lambda_{\max}} \|\mathrm{Proj}_{V_{\geq t}}(\mathbf{w}^{\ast}-\mathbf{w})\|_2^2 dt, \end{equation} where the first inequality uses Claim~\ref{clm:spectral-norm-diff}, the first equality follows from the spectral decomposition of $\mathbf{M}^{\mathcal{D}'}$, and the last equality follows by changing the order of the summation and the integration together with the Pythagorean theorem. Combining \eqref{eqn:ena} and \eqref{eqn:dyo}, we obtain $\mathop{\textstyle \int}_0^{\lambda_{\max}} \|\mathrm{Proj}_{V_{\geq t}}(\mathbf{w}^{\ast}-\mathbf{w})\|_2^2 dt \geq (\delta \gamma^2/8) \mathop{\textstyle \int}_0^{\lambda_{\max}} d_t dt $. By an averaging argument, there exists $0\leq t \leq \lambda_{\max}$ such that $\|\mathrm{Proj}_{V_{\geq t}}(\mathbf{w}^{\ast}-\mathbf{w})\|_2^2 \geq (\delta \gamma^2/8) d_t$. Letting $k=d_t$ and noting that $V_{\geq t} = V_k$ completes the proof. \end{proof} Lemma~\ref{lem:large-projection-k} suggests a method for producing an approximation to $\mathbf{w}^{\ast}$, or more precisely a vector that achieves empirical $\gamma/2$-margin error at most $(1+\delta)\mathrm{OPT}_{\gamma}^{\mathcal{D}}$. We start by describing a non-deterministic procedure, which we will then turn into an actual algorithm. The method proceeds in a sequence of stages. At stage $i$, we have a hypothesis weight vector $\mathbf{w}^{(i)}$. (At stage $i=0$, we start with $\mathbf{w}^{(0)}=\mathbf{0}$.)
At any stage $i$, if $\mathrm{err}_{\gamma/2}^{\mathcal{D}} (\mathbf{w}^{(i)}) \leq (1+\delta) \mathrm{OPT}_{\gamma}^{\mathcal{D}}$, then $\mathbf{w}^{(i)}$ is a sufficiently good hypothesis and we are done. Otherwise, we consider the matrix $\mathbf{M}^{(i)} = \mathbf{E}_{(\mathbf{x},y)\sim \mathcal{D}^{(i)}} [\mathbf{x}\bx^T]$, where $\mathcal{D}^{(i)}$ is $\mathcal{D}$ conditioned on $y \langle \mathbf{w}^{(i)}, \mathbf{x} \rangle \leq \gamma/2$. By Lemma~\ref{lem:large-projection-k}, we know that for some positive integer value $k^{(i)}$, we have that the projection of $\mathbf{w}^{\ast}-\mathbf{w}^{(i)}$ onto $V_{k^{(i)}}$ has squared norm at least $\delta k^{(i)} \gamma^2/8$. Let $\mathbf{p}^{(i)}$ be this projection. We set $\mathbf{w}^{(i+1)}=\mathbf{w}^{(i)}+\mathbf{p}^{(i)}$. Since $\mathbf{p}^{(i)}$ is orthogonal to the residual $(\mathbf{w}^{\ast}-\mathbf{w}^{(i)})-\mathbf{p}^{(i)}$, we have \begin{equation} \label{eqn:norm-decreases} \|\mathbf{w}^{\ast}-\mathbf{w}^{(i+1)}\|_2^2 = \|\mathbf{w}^{\ast}-\mathbf{w}^{(i)}\|_2^2 - \|\mathbf{p}^{(i)}\|_2^2 \leq \|\mathbf{w}^{\ast}-\mathbf{w}^{(i)}\|_2^2 - \delta k^{(i)}\gamma^2/8 \;, \end{equation} where the inequality uses the fact that $\|\mathbf{p}^{(i)} \|_2^2 \geq k^{(i)} \delta \gamma^2/8$ (as follows from Lemma~\ref{lem:large-projection-k}). Let $s$ be the total number of stages. We can write $$1 \geq \|\mathbf{w}^{\ast}-\mathbf{w}^{(0)}\|_2^2 - \|\mathbf{w}^{\ast}-\mathbf{w}^{(s)}\|_2^2 = \mathop{\textstyle \sum}_{i=0}^{s-1} \left(\|\mathbf{w}^{\ast}-\mathbf{w}^{(i)}\|_2^2 - \|\mathbf{w}^{\ast}-\mathbf{w}^{(i+1)}\|_2^2\right) \geq (\delta \gamma^2/8) \mathop{\textstyle \sum}_{i=0}^{s-1} k^{(i)} \;,$$ where the first inequality uses that $\|\mathbf{w}^{\ast}-\mathbf{w}^{(0)}\|_2^2 =1$ and $\|\mathbf{w}^{\ast}-\mathbf{w}^{(s)}\|_2^2 \geq 0$, the equality is a telescoping sum, and the second inequality uses \eqref{eqn:norm-decreases}. Since each $k^{(i)} \geq 1$, we thus have that $s \leq \mathop{\textstyle \sum}_{i=0}^{s-1} k^{(i)} \leq 8/(\delta \gamma^2)$.
Therefore, the above procedure terminates after at most $8/(\delta \gamma^2)$ stages at some $\mathbf{w}^{(s)}$ with $\mathrm{err}_{\gamma/2}^{\mathcal{D}} (\mathbf{w}^{(s)}) \leq (1+\delta) \mathrm{OPT}_{\gamma}^{\mathcal{D}}$. We now describe how to turn the above procedure into an actual algorithm. Our algorithm tries to simulate the above-described procedure by making appropriate guesses. In particular, we start by guessing a sequence of positive integers $k^{(i)}$ whose sum is at most $8/(\delta \gamma^2)$. This can be done in $2^{O(1/(\delta \gamma^2))}$ ways. Next, given this sequence, our algorithm guesses the vectors $\mathbf{w}^{(i)}$ over all $s$ stages in order. In particular, given $\mathbf{w}^{(i)}$, the algorithm computes the matrix $\mathbf{M}^{(i)}$ and the subspace $V_{k^{(i)}}$, and guesses the projection $\mathbf{p}^{(i)} \in V_{k^{(i)}}$, which then gives $\mathbf{w}^{(i+1)}$. Of course, we cannot expect our algorithm to guess $\mathbf{p}^{(i)}$ exactly (as there are infinitely many points in $V_{k^{(i)}}$), but we can guess it to within $\ell_2$-error $\mathrm{poly}(\delta\gamma)$, by taking an appropriate net. This involves an additional guess of size $(1/(\delta\gamma))^{O(k^{(i)})}$ in each stage. In total, our algorithm makes $2^{\tilde O(1/(\delta \gamma^2))}$ many different guesses. We note that the sample version of our algorithm is essentially identical to the idealized version described above, by replacing the distribution $\mathcal{D}$ by its empirical version and leveraging \new{Fact~\ref{fact:erm-margin}.} \noindent The pseudo-code is given in Algorithm~\ref{alg:opt-constant} below. \medskip \begin{algorithm} \caption{\label{alg:opt-constant} Near-Optimal $(1+\delta)$-Agnostic Proper Learner} \begin{algorithmic}[1] \State Draw a multiset $S = \{(\mathbf{x}^{(i)}, y^{(i)}) \}_{i=1}^m$ of i.i.d. samples from $\mathcal{D}$, where $m = \Omega(\log(1/\tau)/(\epsilon^2\gamma^2))$.
\State Let $\widehat{\mathcal{D}}_m$ be the empirical distribution on $S$. \For{all sequences $k^{(0)}, k^{(1)},\ldots, k^{(s-1)}$ of positive integers with sum at most $8/(\delta \gamma^2)+2$} \State Let $\mathbf{w}^{(0)}=\mathbf{0}$. \For{$i=0, 1, \ldots, s-1$} \State Let $\mathcal{D}^{(i)}$ be $\widehat{\mathcal{D}}_m$ conditioned on $y \langle \mathbf{w}^{(i)}, \mathbf{x} \rangle \leq \gamma/2$. \State Let $\mathbf{M}^{(i)} = \mathbf{E}_{(\mathbf{x},y)\sim \mathcal{D}^{(i)}} [\mathbf{x}\bx^T]$. \State Use SVD on $\mathbf{M}^{(i)}$ to find a basis for $V_{k^{(i)}}$, the span of the top $k^{(i)}$ eigenvectors. \State Let $C^{(i)}$ be a $\delta \gamma^3$-cover, in $\ell_2$-norm, of $V_{k^{(i)}}\cap \mathbb{B}_d$ of size $(1/(\delta \gamma))^{O(k^{(i)})}$. \State For each $\mathbf{p}^{(i)} \in C^{(i)}$ repeat the next step of the for loop with $\mathbf{w}^{(i+1)}=\mathbf{w}^{(i)}+\mathbf{p}^{(i)}$. \EndFor \EndFor \State Let $C$ denote the set of all $\mathbf{w}^{(i)}$ generated in the above loop. \State Let $\mathbf{v} \in \mathrm{argmin}_{\mathbf{w} \in C} \mathrm{err}_{\gamma/2}^{\widehat{\mathcal{D}}_m}(\mathbf{w})$. \State \Return $h_{\mathbf{v}}(\mathbf{x}) = \mathrm{sign}(\langle \mathbf{v}, \mathbf{x} \rangle)$. \end{algorithmic} \end{algorithm} To show the correctness of the algorithm, we begin by arguing that the set $C$ of candidate weight vectors produced has size $2^{\tilde O(1/(\delta \gamma^2))}$.
This is because there are only $2^{O(1/(\delta \gamma^2))}$ many possibilities for the sequence of $k^{(i)}$, and for each such sequence the product of the sizes of the $C^{(i)}$ is $(1/(\delta \gamma))^{O(\sum k^{(i)})} = 2^{\tilde O(1/(\delta \gamma^2))}.$ We note that, by the aforementioned analysis, for any choice of $k^{(0)},\ldots,k^{(i-1)}$ and $\mathbf{w}^{(i)}$, we either have that $\mathrm{err}_{\gamma/2}^{\widehat{\mathcal{D}}_m} (\mathbf{w}^{(i)}) \leq (1+\delta) \mathrm{OPT}_{\gamma}^{\widehat{\mathcal{D}}_m}$ or there is a choice of $k^{(i)}$ and $\mathbf{p}^{(i)} \in C^{(i)}$ such that $$\|\mathbf{w}^{\ast}-\mathbf{w}^{(i)}-\mathbf{p}^{(i)}\|_2^2 \leq \|\mathbf{w}^{\ast}-\mathbf{w}^{(i)} \|_2^2 - \delta k^{(i)} \gamma^2/8 + O(\delta^2 \gamma^6) \;,$$ where we used \eqref{eqn:norm-decreases} and the fact that $C^{(i)}$ is a $\delta \gamma^3$-cover of $V_{k^{(i)}}$. Following the execution path of the algorithm, we either find some $\mathbf{w}^{(i)}$ with $\mathrm{err}_{\gamma/2}^{\widehat{\mathcal{D}}_m} (\mathbf{w}^{(i)}) \leq (1+\delta) \mathrm{OPT}_{\gamma}^{\widehat{\mathcal{D}}_m}$, or we find a $\mathbf{w}^{(i)}$ with $$\|\mathbf{w}^{\ast}-\mathbf{w}^{(i)} \|_2^2 \leq 1-\left(\mathop{\textstyle \sum}_{j=0}^{i-1} k^{(j)}\right) \delta \gamma^2/8 + O(\delta \gamma^4) \;,$$ where the last term is an upper bound for $\left(\mathop{\textstyle \sum}_{j=0}^{i-1} k^{(j)}\right) \cdot O(\delta^2 \gamma^6)$. Note that this sequence must terminate within $O(1/(\delta \gamma^2))$ stages, since having $\mathop{\textstyle \sum}_j k^{(j)} > 8/(\delta \gamma^2)+1$ would make the right-hand side above negative. Thus, the output of our algorithm must contain some weight vector $\mathbf{v}$ with $\mathrm{err}_{\gamma/2}^{\widehat{\mathcal{D}}_m} (\mathbf{v}) \leq (1+\delta) \mathrm{OPT}_{\gamma}^{\widehat{\mathcal{D}}_m}$. The proof now follows by an application of Fact~\ref{fact:erm-margin}.
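The counting of the outer guesses can be illustrated concretely. The sketch below (our own code, not part of the formal algorithm) enumerates every sequence of positive integers $k^{(0)}, \ldots, k^{(s-1)}$ with bounded sum; there are exactly $2^K - 1$ nonempty such sequences with sum at most $K$, i.e., $2^{O(1/(\delta\gamma^2))}$ guesses for $K = 8/(\delta\gamma^2)+2$.

```python
# Our own sketch of the outer guessing loop: enumerate every sequence
# k^(0), ..., k^(s-1) of positive integers with sum at most K.

def bounded_compositions(K):
    """Yield every nonempty tuple of positive integers with sum <= K."""
    def rec(prefix, remaining):
        if prefix:
            yield tuple(prefix)
        for k in range(1, remaining + 1):
            yield from rec(prefix + [k], remaining - k)
    yield from rec([], K)

seqs = list(bounded_compositions(3))
# For K = 3: (1,), (1, 1), (1, 1, 1), (1, 2), (2,), (2, 1), (3,).
```

In general the count is $2^K - 1$, e.g., $31$ sequences for $K = 5$.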
\new{This completes the proof of Theorem~\ref{thm:constant-factor-alg}.} \subsection{$\alpha$-Agnostic Proper Learning Algorithm} \label{ssec:alg-bicrit} In this section, we show that if one wishes to obtain an $\alpha$-agnostic proper learner for some large $\alpha \gg 1$, one can obtain runtime exponential in $1/(\alpha\gamma)^2$ rather than $1/\gamma^2$. Formally, we prove: \begin{theorem}\label{alphaTheorem} There is an algorithm that uses $\tilde O(1/(\epsilon^2\gamma^2))$ samples, runs in time $\mathrm{poly}(d) \cdot (1/\epsilon)^{\tilde{O}(1/(\alpha \gamma)^2)}$ and is an $\alpha$-agnostic proper learner for $\gamma$-margin halfspaces with probability $9/10$. \end{theorem} Let $\mathcal{D}$ be a distribution over $\mathbb{B}_d\times \{-1,1\}$. Suppose that there exists a unit vector $\mathbf{w}^{\ast} \in \mathbb{R}^d$ such that $\mathbf{Pr}_{(\mathbf{x},y) \sim \mathcal{D}}[y \langle \mathbf{w}^{\ast}, \mathbf{x} \rangle \geq \gamma] \geq 1- \mathrm{OPT}_{\gamma}^{\mathcal{D}}$ for some $\mathrm{OPT}_{\gamma}^{\mathcal{D}}>0$. Suppose additionally that $\gamma,\epsilon>0$ and $\alpha>1$. We will describe an algorithm that given sample access to $\mathcal{D}$ along with $\gamma,\alpha,\epsilon$ and $\mathrm{OPT}_{\gamma}^{\mathcal{D}}$, draws $O(\log(\alpha/\epsilon)/(\gamma\epsilon)^2)$ samples, runs in time $\mathrm{poly}(d) \cdot (1/\gamma\epsilon)^{\tilde O(1/(\alpha\gamma)^2)}$ and with probability at least $9/10$ returns a $\mathbf{w}$ with $$\mathbf{Pr}_{(\mathbf{x},y)\sim\mathcal{D}}[\mathrm{sign}(\langle \mathbf{w}, \mathbf{x} \rangle) \neq y] < O(\alpha \cdot \mathrm{OPT}_{\gamma}^{\mathcal{D}} +\epsilon) \;.$$ We begin by giving an algorithm that works if the distribution $\mathcal{D}$ is known explicitly. We will be able to reduce to this case by using the empirical distribution over a sufficiently large set of samples. 
That is, we start by establishing the following: \begin{proposition}\label{alphaAlgProp} Let $\mathcal{D}$ be an explicit distribution over $\mathbb{B}_d\times \{-1,1\}$. Suppose there exists a unit vector $\mathbf{w}^{\ast}$ so that $\mathbf{Pr}_{(\mathbf{x},y)\sim \mathcal{D}}[y \langle \mathbf{w}^{\ast}, \mathbf{x} \rangle \geq \gamma] \geq 1- \mathrm{OPT}_{\gamma}^{\mathcal{D}}$ for some $\mathrm{OPT}_{\gamma}^{\mathcal{D}}>0$. Additionally, let $\gamma>0$ and $\alpha>1$. There exists an algorithm that given $\mathcal{D}$ along with $\gamma,\alpha,\mathrm{OPT}_{\gamma}^{\mathcal{D}}$, runs in time $\mathrm{poly}(d) \cdot (|{\mathsf{supp}}(\mathcal{D})|/(\alpha\gamma \cdot \mathrm{OPT}_{\gamma}^{\mathcal{D}}))^{\tilde O(1/(\alpha\gamma)^2)}$ and returns a weight vector $\mathbf{w}$ with $\mathbf{Pr}_{(\mathbf{x},y)\sim\mathcal{D}}[\mathrm{sign}(\langle \mathbf{w}, \mathbf{x} \rangle )\neq y] < O(\alpha \cdot \mathrm{OPT}_{\gamma}^{\mathcal{D}}).$ \end{proposition} Our main technical tool here will be the vector of Chow parameters~\cite{Chow:61, OS11:chow, DeDFS14}, i.e., the vector of degree-$1$ ``Fourier coefficients'', of the target halfspace: \begin{definition} \label{def:chow} Given a Boolean function $f: \mathbb{B}_d \to \{ \pm 1\}$ and a distribution $\mathcal{D}_{\mathbf{x}}$ on $\mathbb{B}_d$, the {\em Chow parameters vector} of $f$ is the vector $\mathbf{Chow}(f)$ given by the expectation $\mathbf{E}_{ \mathbf{x} \sim \mathcal{D}_{\mathbf{x}}}[f(\mathbf{x}) \mathbf{x}]$. \end{definition} \new{It is well-known~\cite{Chow:61} that the vector of Chow parameters uniquely identifies any halfspace within the class of all Boolean functions. Several robust versions of this fact are known (see, e.g.,~\cite{Goldberg:06b, OS11:chow, DiakonikolasServedio:09, DeDFS14, DKS18a, DK19-degd}) under various structural assumptions on the underlying distribution. Here we leverage the margin assumption to obtain a robust version of this fact.
Specifically,} we show that learning the Chow parameters of the halfspace $f_{\mathbf{w}^\ast}(\mathbf{x})=\mathrm{sign}(\langle \mathbf{w}^\ast, \mathbf{x} \rangle)$ determines the function $f_{\mathbf{w}^\ast}$ up to small error. \new{In the following, we will denote by $\mathcal{D}_{\mathbf{x}}$ the marginal distribution of $\mathcal{D}$ on $\mathbb{B}_d$.} We have the following simple lemma: \begin{lemma} \label{lem:chow-vs-dist} Let $g: \mathbb{B}_d \to \{\pm 1\}$ be any Boolean function that satisfies $\mathbf{Pr}_{\mathbf{x} \sim \mathcal{D}_{\mathbf{x}}}[f_{\mathbf{w}^\ast}(\mathbf{x}) \neq g(\mathbf{x})]\geq \nu + \mathrm{OPT}_{\gamma}^{\mathcal{D}}$, for some $\nu > 0$. Then, we have that $\|\mathbf{Chow}(f_{\mathbf{w}^\ast})-\mathbf{Chow}(g)\|_2 \geq \nu \cdot \gamma$. \end{lemma} \begin{proof} We can write \begin{align*} \|\mathbf{Chow}(f_{\mathbf{w}^\ast})-\mathbf{Chow}(g)\|_2 &\geq \langle \mathbf{w}^\ast, \mathbf{Chow}(f_{\mathbf{w}^\ast})-\mathbf{Chow}(g) \rangle\\ & = \mathbf{E}_{\mathbf{x}\sim \mathcal{D}_{\mathbf{x}}}[\langle \mathbf{w}^\ast, \mathbf{x} \rangle (f_{\mathbf{w}^\ast}(\mathbf{x})-g(\mathbf{x}))]\\ & = 2\mathbf{E}_{\mathbf{x}\sim \mathcal{D}_{\mathbf{x}}}[|\langle \mathbf{w}^\ast, \mathbf{x} \rangle | \cdot \textbf{1}_{f_{\mathbf{w}^\ast}(\mathbf{x})\neq g(\mathbf{x})}] \;. \end{align*} Recalling our assumptions $\mathbf{Pr}_{(\mathbf{x},y)\sim \mathcal{D}}[y \langle \mathbf{w}^{\ast}, \mathbf{x} \rangle \geq \gamma] \geq 1- \mathrm{OPT}_{\gamma}^{\mathcal{D}}$ and $\mathbf{Pr}_{\mathbf{x} \sim \mathcal{D}_{\mathbf{x}}}[f_{\mathbf{w}^\ast}(\mathbf{x}) \neq g(\mathbf{x})]\geq \nu + \mathrm{OPT}_{\gamma}^{\mathcal{D}}$, we note that there is at least a \new{$\nu$} probability \new{over $(\mathbf{x}, y) \sim \mathcal{D}$} that $f_{\mathbf{w}^\ast}(\mathbf{x})\neq g(\mathbf{x})$ and $y \langle \mathbf{w}^\ast, \mathbf{x} \rangle \geq \new{\gamma}$, which implies that $|\langle \mathbf{w}^\ast, \mathbf{x} \rangle |\geq$ \new{$\gamma$}. 
Therefore, the above expectation is at least $\nu \cdot \gamma$. \end{proof} Lemma~\ref{lem:chow-vs-dist}, combined with the algorithms in~\cite{TTV:09short, DeDFS14}, implies that learning an approximation to $\mathbf{Chow}(f_{\mathbf{w}^\ast})$ is sufficient to learn a good hypothesis. \begin{lemma} \label{ChowToWLem} There is a polynomial time algorithm that given an explicit distribution $\mathcal{D}$ and a vector $\mathbf{c}$ with $\|\mathbf{Chow}(f_{\mathbf{w}^\ast}) - \mathbf{c}\|_2 \leq \nu \cdot \gamma$, returns a vector $\mathbf{w}$ that with high probability satisfies $\mathbf{Pr}_{(\mathbf{x}, y)\sim \mathcal{D}}[f_{\mathbf{w}}(\mathbf{x}) \neq f_{\mathbf{w}^\ast}(\mathbf{x})] \leq O(\nu+\mathrm{OPT}_{\gamma}^{\mathcal{D}})$. In particular, for this $\mathbf{w}$ we have that $\mathbf{Pr}_{(\mathbf{x},y)\sim \mathcal{D}}[\mathrm{sign}(\langle \mathbf{w}, \mathbf{x} \rangle) \neq y] = O(\nu +\mathrm{OPT}_{\gamma}^{\mathcal{D}}).$ \end{lemma} Thus, it will suffice to approximate the Chow parameters of $f_{\mathbf{w}^{\ast}}$ to error \new{$\alpha \gamma \cdot \mathrm{OPT}_{\gamma}^{\mathcal{D}}$}. One might consider using the empirical Chow parameters, namely $P=\mathbf{E}_{(\mathbf{x},y)\sim \mathcal{D}}[y\mathbf{x}]$, for this purpose. In the realizable case, this would be the right thing to do, but this naive approach fails in the agnostic setting. Instead, our approach hinges on the following observation: Since $y=f_{\mathbf{w}^\ast}(\mathbf{x})$ for all but an $\mathrm{OPT}_{\gamma}^{\mathcal{D}}$-fraction of $\mathbf{x}$'s, and since the $\mathbf{x}$'s are supported in the unit ball, the error $P - \mathbf{Chow}(f_{\mathbf{w}^\ast})$ has $\ell_2$-norm at most $2\,\mathrm{OPT}_{\gamma}^{\mathcal{D}}$.
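The observation above can be made concrete with a minimal sketch (our own helper names and toy data, not the paper's code): the Chow vector of a halfspace and the empirical vector $P$ differ only through the points whose labels disagree with $f_{\mathbf{w}^\ast}$, each of which lies in the unit ball.

```python
# Illustrative sketch (our own helper names, toy data): the Chow vector
# E[f(x) x] of a halfspace versus the empirical vector P = E[y x]. They
# differ only on the noisy labels, so the gap has small l2-norm when the
# fraction of noisy labels is small. (We take sign(0) = 1 by convention.)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sign(t):
    return 1 if t >= 0 else -1

def chow(samples, w):
    """Chow parameters of f_w under the empirical marginal: E[sign(<w,x>) x]."""
    d = len(samples[0][0])
    return [sum(sign(dot(w, x)) * x[i] for x, _y in samples) / len(samples)
            for i in range(d)]

def empirical_chow(samples):
    """P = E[y x], the label-weighted mean of the points."""
    d = len(samples[0][0])
    return [sum(y * x[i] for x, y in samples) / len(samples) for i in range(d)]

w_star = (1.0, 0.0)
# One of the four labels disagrees with f_{w_star} (the last point is noisy).
S = [((0.9, 0.1), 1), ((0.5, -0.4), 1), ((-0.7, 0.2), -1), ((0.8, 0.0), -1)]
gap = [a - b for a, b in zip(chow(S, w_star), empirical_chow(S))]
```

Here a single noisy point at $(0.8, 0)$ shifts $P$ by $2 \cdot 0.8 / 4 = 0.4$ in the first coordinate, so the gap is $(0.4, 0)$.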
In fact, if we have some vector $\mathbf{w}$ so that $\langle \mathbf{w}, P-\mathbf{Chow}(f_{\mathbf{w}^{\ast}}) \rangle \geq \alpha \gamma \cdot \mathrm{OPT}_{\gamma}^{\mathcal{D}} $, then there must be some $(\mathbf{x},y)$ in the support of $\mathcal{D}$ with $|\langle \mathbf{x}, \mathbf{w} \rangle| \geq \alpha\gamma$. The idea is to guess this $\mathbf{w}$ and then guess the true projection of $\mathbf{Chow}(f_{\mathbf{w}^{\ast}})$ onto $\mathbf{w}$. We present the pseudo-code for the algorithm establishing Proposition~\ref{alphaAlgProp} as Algorithm~\ref{alg:alpha-finite-support} below. \begin{algorithm} \caption{\label{alg:alpha-finite-support} $\alpha$-Agnostic Proper Learner of Proposition \ref{alphaAlgProp}} \begin{algorithmic}[1] \State Let $m = \lceil \log(1/\alpha\gamma)/(\alpha\gamma)^2 \rceil$. \State Let $P=\mathbf{E}_{(\mathbf{x},y)\sim \mathcal{D}}[y\mathbf{x}]$. \For{every sequence $\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(m)}$ from ${\mathsf{supp}}(\mathcal{D})$} \State Let $V$ be the span of $\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(m)}$. \State Let $\mathcal{C}$ be an $(\alpha\gamma\cdot \mathrm{OPT}_{\gamma}^{\mathcal{D}})$-cover of the unit ball of $V$. \For{each $g\in\mathcal{C}$} \State Let $P'$ be obtained by replacing the projection of $P$ onto $V$ with $g$. In particular, $P' = P-\mathrm{Proj}_V(P)+g$. \State Run the algorithm of Lemma \ref{ChowToWLem} with $\mathbf{c}=P'$ to find a hypothesis $\mathbf{w}$. \label{line:best-chow} \EndFor \EndFor \State \Return The hypothesis that produces the smallest empirical error among all $\mathbf{w}$'s in Line~\ref{line:best-chow}. \end{algorithmic} \end{algorithm} \begin{proof}[Proof of Proposition \ref{alphaAlgProp}] Firstly, note that the runtime of this algorithm is clearly $\mathrm{poly}(d)\cdot\left(\frac{|{\mathsf{supp}}(\mathcal{D})|}{\mathrm{OPT}_{\gamma}^{\mathcal{D}} \cdot \alpha\gamma} \right)^{\tilde O(1/(\alpha\gamma)^2)}$. It remains to show correctness.
We note that by Lemma~\ref{ChowToWLem} it suffices to show that some $P'$ is within $O(\alpha \gamma\cdot \mathrm{OPT}_{\gamma}^{\mathcal{D}})$ of $\mathbf{Chow}(f_{\mathbf{w}^{\ast}})$. For this it suffices to show that there is a sequence $\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(m)}$ so that $\|\mathrm{Proj}_{V^\perp}(\mathbf{Chow}(f_{\mathbf{w}^{\ast}})-P)\|_2 = O(\alpha \gamma \cdot \mathrm{OPT}_{\gamma}^{\mathcal{D}})$. To show this, let $V_i$ be the span of $\mathbf{x}^{(1)}, \mathbf{x}^{(2)},\ldots,\mathbf{x}^{(i)}$. We claim that if $\|\mathrm{Proj}_{V_i^\perp}(\mathbf{Chow}(f_{\mathbf{w}^{\ast}})-P)\|_2 \geq \alpha \gamma \cdot \mathrm{OPT}_{\gamma}^{\mathcal{D}}$, then there exists an $\mathbf{x}^{(i+1)}$ in the support of $\mathcal{D}$ such that $$ \|\mathrm{Proj}_{V_{i+1}^\perp}(\mathbf{Chow}(f_{\mathbf{w}^{\ast}})-P)\|_2^2 \leq \|\mathrm{Proj}_{V_i^\perp}(\mathbf{Chow}(f_{\mathbf{w}^{\ast}})-P)\|_2^2 \cdot (1 - (\alpha\gamma)^2). $$ To show this, we let $\mathbf{w}$ be the unit vector in the direction of $\mathrm{Proj}_{V_i^\perp}(\mathbf{Chow}(f_{\mathbf{w}^{\ast}})-P)$. We note that $$\|\mathrm{Proj}_{V_i^\perp}(\mathbf{Chow}(f_{\mathbf{w}^{\ast}})-P)\|_2 = \langle \mathbf{w}, \mathbf{Chow}(f_{\mathbf{w}^{\ast}})-P \rangle = \mathbf{E}_{(\mathbf{x},y)\sim \mathcal{D}}[\langle \mathbf{w}, \mathbf{x} \rangle (\mathrm{sign}(\langle \mathbf{w}^{\ast}, \mathbf{x} \rangle)-y)] \;.$$ Since $\mathrm{sign}(\langle \mathbf{w}^{\ast}, \mathbf{x} \rangle)-y$ is $0$ for all but an $\mathrm{OPT}_{\gamma}^{\mathcal{D}}$-fraction of $(\mathbf{x},y)$, we have that there must be some $\mathbf{x}^{(i+1)}$ so that $\langle \mathbf{x}^{(i+1)}, \mathbf{w} \rangle \geq \|\mathrm{Proj}_{V_i^\perp}(\mathbf{Chow}(f_{\mathbf{w}^{\ast}})-P)\|_2/\mathrm{OPT}_{\gamma}^{\mathcal{D}} \geq \alpha \gamma$.
If we choose this $\mathbf{x}^{(i+1)}$, we have that \begin{align*} \|\mathrm{Proj}_{V_{i+1}^\perp}(\mathbf{Chow}(f_{\mathbf{w}^{\ast}})-P)\|_2^2 & \leq \|\mathrm{Proj}_{V_{i}^\perp}(\mathbf{Chow}(f_{\mathbf{w}^{\ast}})-P)\|_2^2 - \langle \mathbf{x}^{(i+1)}, \mathrm{Proj}_{V_{i}^\perp}(\mathbf{Chow}(f_{\mathbf{w}^{\ast}})-P) \rangle ^2\\ & = \|\mathrm{Proj}_{V_{i}^\perp}(\mathbf{Chow}(f_{\mathbf{w}^{\ast}})-P)\|_2^2 \cdot (1-\langle \mathbf{x}^{(i+1)}, \mathbf{w} \rangle^2)\\ & \leq \|\mathrm{Proj}_{V_i^\perp}(\mathbf{Chow}(f_{\mathbf{w}^{\ast}})-P)\|_2^2 \cdot (1 - (\alpha\gamma)^2). \end{align*} Therefore, unless $\|\mathrm{Proj}_{V_i^\perp}(\mathbf{Chow}(f_{\mathbf{w}^{\ast}})-P)\|_2^2 < \alpha\gamma \cdot \mathrm{OPT}_{\gamma}^{\mathcal{D}}$ already for some $i < m$, there exists a sequence $\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(m)}$ such that \begin{align*} \|\mathrm{Proj}_{V_m^\perp}(\mathbf{Chow}(f_{\mathbf{w}^{\ast}})-P)\|_2^2 &\leq \|P-\mathbf{Chow}(f_{\mathbf{w}^{\ast}})\|_2^2 \cdot (1 - (\alpha \gamma)^2)^{m} \\ &\leq \|P-\mathbf{Chow}(f_{\mathbf{w}^{\ast}})\|_2^2 \cdot \exp(-m \cdot (\alpha\gamma)^2)\\ &\leq \mathrm{OPT}_{\gamma}^{\mathcal{D}} \cdot \exp(\log(\alpha \gamma)) \\ &= \mathrm{OPT}_{\gamma}^{\mathcal{D}} \cdot \alpha\gamma. \end{align*} So in either case, we have some sequence of $\mathbf{x}$'s so that the projection onto $V^\perp$ of $\mathbf{Chow}(f_{\mathbf{w}^{\ast}})-P$ is sufficiently small. This completes our analysis. \end{proof} In order to extend this to a proof of Theorem \ref{alphaTheorem}, we will need to reduce to solving the problem on a finite sample set. This result can be obtained from Proposition \ref{alphaAlgProp} by some fairly simple reductions. Firstly, we note that we can assume that $\mathrm{OPT}_{\gamma}^{\mathcal{D}} \geq \epsilon/\alpha$, as increasing it to this value does not change the problem.
Secondly, we note that if we let $\widehat{\mathcal{D}}$ be the empirical distribution over a set of $\Omega(d/\epsilon^2)$ random samples, then with at least $2/3$ probability we have the following: \begin{itemize} \item $\mathbf{Pr}_{(\mathbf{x},y)\sim \widehat{\mathcal{D}}}[y \langle \mathbf{w}^{\ast}, \mathbf{x} \rangle \geq \gamma] \geq 1- O(\mathrm{OPT}_{\gamma}^{\mathcal{D}})$. \item For any vector $\mathbf{w}$, $\mathbf{Pr}_{(\mathbf{x},y)\sim\mathcal{D}}[\mathrm{sign}( \langle \mathbf{w}, \mathbf{x} \rangle )\neq y] = \mathbf{Pr}_{(\mathbf{x},y)\sim \widehat{\mathcal{D}}}[\mathrm{sign}( \langle \mathbf{w}, \mathbf{x} \rangle)\neq y] + O(\epsilon)$. \end{itemize} The first statement here is by applying the Markov inequality to the probability that $y \langle \mathbf{w}^{\ast}, \mathbf{x} \rangle < \gamma$, and the second is by the VC-inequality~\cite{DL:01}. We note that if the above hold, applying the algorithm from Proposition \ref{alphaAlgProp} to $\widehat{\mathcal{D}}$ will produce an appropriate $\mathbf{w}$. This produces an algorithm that uses $O(d/\epsilon^2)$ samples and has runtime $O(d /\gamma\epsilon)^{\tilde O(1/(\alpha\gamma)^2)}.$ Unfortunately, this algorithm is not quite satisfactory as the runtime and sample complexity scale poorly with the dimension $d$. In order to fix this, we will make use of an idea from~\cite{KlivansServedio:04coltmargin}. Namely, we will first apply dimension reduction to a smaller number of dimensions before applying our algorithm. 
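The kind of random projection used in this reduction can be sketched as follows (our own minimal pure-Python code; a matrix with i.i.d. Gaussian entries is one standard choice of Johnson-Lindenstrauss family, and the normalization matches the map $h_A$ used below).

```python
import math
import random

# Our own sketch (not the paper's code): a Gaussian Johnson-Lindenstrauss map
# A : R^d -> R^m and the normalization h_A(v) = Av/||Av||_2, which makes
# h_A(v) a unit vector while approximately preserving inner products.

def jl_matrix(m, d, rng):
    """m x d matrix with i.i.d. N(0, 1/m) entries (one standard JL family)."""
    s = 1.0 / math.sqrt(m)
    return [[rng.gauss(0.0, s) for _ in range(d)] for _ in range(m)]

def apply_map(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def h_A(A, v):
    """Normalized image of v, a unit vector in R^m."""
    Av = apply_map(A, v)
    n = math.sqrt(sum(t * t for t in Av))
    return [t / n for t in Av]

rng = random.Random(0)
A = jl_matrix(64, 1000, rng)            # project R^1000 down to R^64
v = [1.0 if i == 0 else 0.0 for i in range(1000)]
u = h_A(A, v)
unit_norm = math.sqrt(sum(t * t for t in u))  # equals 1 by construction
```

Applying `h_A` to all sample points reduces the learning problem to dimension $m$, at the cost of shrinking the margin by a constant factor.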
In particular, we will make use of the Johnson-Lindenstrauss lemma: \begin{lemma}[\cite{JohnsonLindenstrauss:84}] There exists a probability distribution over linear transformations $A:\mathbb{R}^d\rightarrow \mathbb{R}^m$ with $m=O(\log(1/\delta)/\epsilon^2)$ so that for any unit vectors $\mathbf{v}, \mathbf{w}\in \mathbb{R}^d$, $\mathbf{Pr}_A[|\langle \mathbf{v}, \mathbf{w} \rangle - \langle A \mathbf{v}, A\mathbf{w} \rangle | > \epsilon] < \delta.$ Additionally, there are efficient algorithms to sample from such distributions over $A$. \end{lemma} We note that this implies in particular that $\|A\mathbf{v}\|_2 = 1\pm \epsilon$ except with probability $\delta$. Thus, by tweaking the parameters a little bit and letting $h_A(\mathbf{v}) = A\mathbf{v}/\|A\mathbf{v}\|_2$, we have that $h_A(\mathbf{v})$ is always a unit vector and that $\langle h_A(\mathbf{v}), h_A(\mathbf{w}) \rangle= \langle \mathbf{v}, \mathbf{w}\rangle \pm \epsilon$ except with probability $\delta$. Next, we note that by taking $\epsilon=\gamma/2$ and $\delta = \mathrm{OPT}_{\gamma}^{\mathcal{D}}$ in the above we have that \begin{eqnarray*} &&\mathbf{Pr}_{A,(\mathbf{x},y)\sim \mathcal{D}}[y \langle h_A(\mathbf{w}^{\ast}), h_A(\mathbf{x}) \rangle < \gamma/2] \\ &\leq& \mathbf{Pr}_{(\mathbf{x},y)\sim \mathcal{D}}[y\langle \mathbf{w}^{\ast}, \mathbf{x} \rangle <\gamma] + \mathbf{Pr}_{A,(\mathbf{x},y)\sim \mathcal{D}}[| \langle h_A(\mathbf{w}^{\ast}), h_A(\mathbf{x}) \rangle - \langle \mathbf{w}^{\ast}, \mathbf{x} \rangle|>\gamma/2] \\ &=& O(\mathrm{OPT}_{\gamma}^{\mathcal{D}}). \end{eqnarray*} Thus, by the Markov inequality, with large constant probability over $A$ we have that $$ \mathbf{Pr}_{(\mathbf{x},y) \sim \mathcal{D}}[y \langle h_A(\mathbf{w}^{\ast}), h_A(\mathbf{x})\rangle < \gamma/2] = O(\mathrm{OPT}_{\gamma}^{\mathcal{D}}).
$$ But this means that the distribution of $(h_A(\mathbf{x}),y)$ satisfies the assumptions for our algorithm (with $\gamma$ replaced by $\gamma/2$ and $\mathrm{OPT}_{\gamma}^{\mathcal{D}}$ by $O(\mathrm{OPT}_{\gamma}^{\mathcal{D}})$), but in dimension $m=O(\log(\alpha/\epsilon)/\gamma^2)$. Running the algorithm described above on this set will find a vector $\mathbf{w}$ so that $$ \mathbf{Pr}_{(\mathbf{x},y)\sim \mathcal{D}}[\mathrm{sign}(\langle \mathbf{w}, h_A(\mathbf{x}) \rangle)\neq y] = O(\alpha \cdot \mathrm{OPT}_{\gamma}^{\mathcal{D}}+\epsilon). $$ However, it should be noted that $$\mathrm{sign}(\langle \mathbf{w}, h_A(\mathbf{x}) \rangle) = \mathrm{sign}(\langle \mathbf{w}, A\mathbf{x} \rangle /\|A\mathbf{x}\|_2) = \mathrm{sign}(\langle \mathbf{w}, A\mathbf{x} \rangle) = \mathrm{sign}(\langle A^T \mathbf{w}, \mathbf{x}\rangle) \;.$$ Thus, $A^T\mathbf{w}$ satisfies the necessary conditions. Our final algorithm is given below: \begin{algorithm} \caption{$\alpha$-Agnostic Proper Learner of Theorem~\ref{alphaTheorem}} \begin{algorithmic}[1] \State Pick $A:\mathbb{R}^d\rightarrow \mathbb{R}^m$ with $m=O(\log(\alpha/\epsilon)/\gamma^2)$ from an appropriate Johnson-Lindenstrauss family and define $h_A$ appropriately. \State Take $O(m/\epsilon^2)$ random samples and let $\widehat{\mathcal{D}}$ be the uniform distribution over $(A\mathbf{x}/\|A\mathbf{x}\|_2,y)$ for samples $(\mathbf{x},y)$ from this set. \State Run the algorithm from Proposition \ref{alphaAlgProp} on $\widehat{\mathcal{D}}$ using $\gamma/2$ instead of $\gamma$ to find a vector $\mathbf{w}$. \State \Return $A^T\mathbf{w}$. \end{algorithmic} \end{algorithm} \section{Computational Hardness Results} \label{sec:lb} In this section, we provide several computational lower bounds for agnostic learning of halfspaces with a margin.
To clarify the statements below, we note that we say ``there is no algorithm that runs in time $T(d, \frac{1}{\gamma}, \frac{1}{\varepsilon})$'' to mean that no $T(d, \frac{1}{\gamma}, \frac{1}{\varepsilon})$-time algorithm works for \emph{all} combinations of parameters $d$, $\gamma$, and $\varepsilon$. (Note that we discuss the lower bounds with stronger quantifiers in Section~\ref{sec:lb-strong-quantifier}.) Moreover, we also ignore the dependency on $\tau$ (the failure probability of the learner), since we only use a fixed $\tau$ (say $1/3$) in all the bounds below. First, we show that, for any constant $\alpha > 1$, $\alpha$-agnostic learning of $\gamma$-margin halfspaces requires $2^{(1/\gamma)^{2- o(1)}}\mathrm{poly}(d,1/\varepsilon)$ time. Up to the lower order term $\gamma^{o(1)}$ in the exponent, this matches the runtime of our algorithm (in Theorem~\ref{thm:constant-factor-alg}). In fact, we show an even stronger result, namely that if the dependency of the running time on the margin is, say, $2^{(1/\gamma)^{1.99}}$, then one has to pay a nearly exponential dependence on $d$, i.e., $2^{d^{1 - o(1)}}$. This result holds assuming the so-called (randomized) exponential time hypothesis (ETH)~\cite{ImpagliazzoP01,ImpagliazzoPZ01}, which postulates that there is no (randomized) algorithm that can solve 3SAT in time $2^{o(n)}$, where $n$ denotes the number of variables. ETH is a standard hypothesis used in proving (tight) running time lower bounds. We do not discuss ETH further here, but interested readers may refer to a survey by Lokshtanov et al.~\cite{LokshtanovMS11} for an in-depth discussion and several applications of ETH.
Our first lower bound can be stated more precisely as follows: \begin{theorem} \label{thm:run-time} Assuming the (randomized) ETH, for any universal constant $\alpha \geq 1$, there is no proper $\alpha$-agnostic learner for $\gamma$-margin halfspaces that runs in time $O(2^{(1/\gamma)^{2-o(1)}}2^{d^{1 - o(1)}})f(\frac{1}{\varepsilon})$ for any function $f$. \end{theorem} Secondly, we address the question of whether we can achieve $\alpha = 1$ (standard agnostic learning) while retaining running time similar to that of our algorithm. We answer this in the negative (assuming a standard parameterized complexity assumption): there is no $f(\frac{1}{\gamma}) \mathrm{poly}(d,\frac{1}{\varepsilon})$-time $1$-agnostic learner for any function $f$ (e.g., even for $f(\frac{1}{\gamma}) = 2^{2^{2^{1/\gamma}}}$). This demonstrates a stark contrast between what we can achieve with and without approximation. \begin{theorem} \label{thm:param} Assuming W[1] is not contained in randomized FPT, there is no proper $1$-agnostic learner for $\gamma$-margin halfspaces that runs in time $f(\frac{1}{\gamma})\mathrm{poly}(d,\frac{1}{\varepsilon})$ for any function $f$. \end{theorem} Finally, we explore the other extreme of the trade-off between the running time and approximation ratio, by asking: what is the best approximation ratio we can achieve if we only consider proper learners that run in $\mathrm{poly}(d,\frac{1}{\varepsilon},\frac{1}{\gamma})$-time? On this front, it is known~\cite{Servedio:01lnh} that the perceptron algorithm achieves $1/\gamma$-approximation. We show that a significant improvement over this is unlikely, by showing that $(1/\gamma)^{\frac{1}{\text{polyloglog}(1/\gamma)}}$-approximation is not possible unless NP = RP. If we additionally assume the so-called Sliding Scale Conjecture~\cite{BGLR94}, this ratio can be improved to $(1/\gamma)^{c}$ for some constant $c > 0$. 
\begin{theorem} \label{thm:inapx} Assuming NP $\ne$ RP, there is no proper $(1/\gamma)^{1/\text{polyloglog}(1/\gamma)}$-agnostic learner for $\gamma$-margin halfspaces that runs in time $\mathrm{poly}(d,\frac{1}{\varepsilon},\frac{1}{\gamma})$. Furthermore, assuming NP $\ne$ RP and the Sliding Scale Conjecture (Conjecture~\ref{conj:ssc}), there is no proper $(1/\gamma)^c$-agnostic learner for $\gamma$-margin halfspaces that runs in time $\mathrm{poly}(d,\frac{1}{\varepsilon},\frac{1}{\gamma})$ for some constant $c > 0$. \end{theorem} We note here that the constant $c$ in Theorem~\ref{thm:inapx} is not explicit, i.e., it depends on the constant from the Sliding Scale Conjecture (SSC). Moreover, even assuming the most optimistic parameters of SSC, the constant $c$ we can get is still very small. For instance, it is still possible that a, say, $\sqrt{1/\gamma}$-agnostic learning algorithm that runs in polynomial time exists, and this remains an interesting open question. We remark that Daniely et al.~\cite{DanielyLS14b} have made partial progress in this direction by showing that any $\mathrm{poly}(d,\frac{1}{\varepsilon},\frac{1}{\gamma})$-time learner that belongs to a ``generalized linear family'' cannot achieve approximation ratio $\alpha$ better than $\Omega\left(\frac{1/\gamma}{\mathrm{polylog}(1/\gamma)}\right)$. We note that the inapproximability ratio of \cite{DanielyLS14b} is close to being tight for a natural, yet restricted, family of improper learners. On the other hand, our proper hardness result holds against {\em all} proper learners under a widely believed worst-case complexity assumption. \subsection{Lower Bounds with Stronger Quantifiers on Parameters} \label{sec:lb-strong-quantifier} Before we proceed to our proofs, let us first state a running time lower bound with stronger quantifiers. Recall that previously we only rule out algorithms that work \emph{for all} combinations of $d, \gamma, \varepsilon$.
Below we relax the quantifier so that we need the \emph{for all} quantifier only for $d$. \begin{lemma} \label{lem:strong-quantifier} Assuming the (randomized) ETH, for any universal constant $\alpha \geq 1$, there exists $\varepsilon_0 = \varepsilon_0(\alpha)$ such that there is no $\alpha$-agnostic learner for $\gamma$-margin halfspaces that runs in time $O(2^{(1/\gamma)^{2 - o(1)}})\mathrm{poly}(d)$ for all $d$ and for some $0 < \varepsilon < \varepsilon_0$ and $\frac{1}{d^{0.5 - o(1)}} \leq \gamma = \gamma(d) \leq \frac{1}{(\log d)^{0.5 + o(1)}}$ that satisfies $\frac{\gamma(d + 1)}{\gamma(d)} \geq \Omega(1)$. \end{lemma} We remark here that the lower and upper bounds on $\gamma$ are essentially (i.e., up to lower order terms) the best possible. On the upper bound front, if $\gamma \geq \tilde{O}\left(\frac{1}{\sqrt{\log d}}\right)$, then our algorithmic result (Theorem~\ref{thm:constant-factor-alg}) already gives a $\mathrm{poly}(d, \frac{1}{\varepsilon})$-time $\alpha$-agnostic learner for $\gamma$-margin halfspaces (for all constant $\alpha > 1$). On the other hand, if $\gamma \leq O(\frac{1}{d^{0.5 + o(1)}})$, then the trivial algorithm that exactly solves ERM for $m = O\left(\frac{d}{\varepsilon^2}\right)$ samples only takes $2^{O(d/\varepsilon^2)}$ time, which is already asymptotically faster than $2^{(1/\gamma)^{2 - o(1)}}$. The last condition that $\frac{\gamma(d + 1)}{\gamma(d)}$ is not too small is a sanity-check condition that prevents ``sudden jumps'' in $\gamma(d)$ such as $\gamma(d) = \frac{1}{(\log d)^{0.1}}$ and $\gamma(d + 1) = \frac{1}{(d + 1)^{0.1}}$; note that the condition is satisfied by ``typical functions'' such as $\gamma(d) = \frac{1}{d^c}$ or $\gamma(d) = \frac{1}{(\log d)^c}$ for some constant $c$. As for $\varepsilon$, we only require the algorithm to work for any $\varepsilon$ that is not \emph{too large}, i.e., no larger than $\varepsilon_0(\alpha)$. This latter number is just a constant (when $\alpha$ is a constant).
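To make the second comparison concrete, consider the representative setting $\gamma(d) = d^{-(1/2 + c)}$ for a constant $c > 0$ and a constant $\varepsilon$ (this particular choice of parameters is ours, for illustration). Then
\[
2^{O(d/\varepsilon^2)} = 2^{O(d)} \ll 2^{d^{(1 + 2c)(1 - o(1))}} = 2^{(1/\gamma)^{2 - o(1)}},
\]
so in this regime the exhaustive ERM bound is indeed asymptotically smaller than the expression appearing in the lower bound.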
We note that it is still an interesting open question to make this requirement as mild as possible; specifically, is it possible to only require the algorithm to work for any $\varepsilon < 1/2$? \subsection{Reduction from $k$-Clique and Proof of Theorem~\ref{thm:param}} We now proceed to the proofs of our results, starting with Theorem~\ref{thm:param}. To prove Theorem~\ref{thm:param}, we reduce from the $k$-Clique problem. In $k$-Clique, we are given a graph $G$ and an integer $k$, and the goal is to determine whether the graph $G$ contains a $k$-clique (as a subgraph). We take the perspective of \emph{parameterized complexity}. Recall that a parameterized problem with parameter $k$ is said to be \emph{fixed parameter tractable (FPT)} if it can be solved in time $f(k)\mathrm{poly}(n)$ for some computable function $f$, where $n$ denotes the input size. It is well-known that $k$-Clique is complete for the class W[1]~\cite{DowneyF95}. In other words, under the (widely-believed) assumption that W[1] does not collapse to FPT (the class of fixed parameter tractable problems), we cannot solve $k$-Clique in time $f(k) \mathrm{poly}(n)$ for any computable function $f$. We shall not formally define the class W[1] here; interested readers may refer to the book of Downey and Fellows for an in-depth treatment of the topic~\cite{DowneyF13}. Our reduction starts with an instance of $k$-Clique and produces an instance of agnostic learning with margin $\gamma$ such that $\gamma = \Omega(1/\sqrt{k})$ (and the dimension is polynomial): \begin{lemma} \label{lem:clique-red} There exists a polynomial-time algorithm that takes as input an $n$-vertex graph instance $G$ and an integer $k$, and produces a distribution $\mathcal{D}$ over $\mathbb{B}_d \times \{\pm 1\}$ and $\gamma, \kappa \in [0, 1]$ such that \begin{itemize} \item (Completeness) If $G$ contains a $k$-clique, then $\mathrm{OPT}_{\gamma}^{\mathcal{D}} \leq \kappa$.
\item (Soundness) If $G$ does not contain a $k$-clique, then $\mathrm{OPT}_{0-1}^{\mathcal{D}} > \kappa + \frac{0.001}{n^3}$. \item (Margin Parameter) $\gamma \geq \Omega(\frac{1}{\sqrt{k}})$. \end{itemize} \end{lemma} We remark here that, in Lemma~\ref{lem:clique-red} and throughout the remainder of the section, we say that an algorithm produces a distribution $\mathcal{D}$ over $\mathbb{B}_d \times \{\pm 1\}$ to mean that it outputs the set of samples $\{(\mathbf{x}^{(i)}, y^{(i)})\}_{i \in [m]}$ and numbers $p_i$ for each $i \in [m]$ representing the probability of $(\mathbf{x}^{(i)}, y^{(i)})$ with respect to $\mathcal{D}$. Note that this is stronger than needed since, to prove hardness of learning, it suffices to have an oracle that can sample from $\mathcal{D}$, but here we actually explicitly produce a full description of $\mathcal{D}$. Moreover, note that this implies that the support of $\mathcal{D}$ is of polynomial size (and hence, for any given $h$, $\mathrm{err}_{\gamma}^{\mathcal{D}}(h)$ and $\mathrm{err}_{0-1}^{\mathcal{D}}(h)$ can be efficiently computed). As stated above, Lemma~\ref{lem:clique-red} immediately implies Theorem~\ref{thm:param} because, if we can agnostically learn $\gamma$-margin halfspaces in time $f(\frac{1}{\gamma})\mathrm{poly}(d,\frac{1}{\varepsilon})$, then we can solve $k$-Clique in $f(O(\sqrt{k})) \mathrm{poly}(n)$ time, which would imply that W[1] is contained in (randomized) FPT. This is formalized below. \begin{proof}[Proof of Theorem~\ref{thm:param}] Suppose that we have an $f(\frac{1}{\gamma})\mathrm{poly}(d,\frac{1}{\varepsilon})$-time agnostic learner for $\gamma$-margin halfspaces. Given an instance $(G, k)$ of $k$-Clique, we run the reduction from Lemma~\ref{lem:clique-red} to produce a distribution $\mathcal{D}$. We then run the learner on $\mathcal{D}$ with $\varepsilon = \frac{0.001}{n^3}$ (and with failure probability $\tau = 1/3$).
Note that the learner runs in time $f(O(\sqrt{k}))\mathrm{poly}(n)$ and produces a halfspace $h$. We then compute $\mathrm{err}_{0-1}^{\mathcal{D}}(h)$; if it is no more than $\kappa + \frac{0.001}{n^3}$, then we output YES. Otherwise, we output NO. The algorithm described above solves $k$-Clique (correctly with probability 2/3) in FPT time. Since $k$-Clique is W[1]-complete, this implies that W[1] is contained in randomized FPT. \end{proof} We now move on to prove Lemma~\ref{lem:clique-red}. Before we do so, let us briefly describe the ideas behind it. The dimension $d$ will be set to $n$, the number of vertices of $G$. Each coordinate $\mathbf{w}_i$ is associated with a vertex $i \in V(G)$. In the completeness case, we would like to set $\mathbf{w}_i = \frac{1}{\sqrt{k}}$ iff $i$ is in the $k$-clique and $\mathbf{w}_i = 0$ otherwise. To enforce a solution to be of this form, we add two types of samples that induce the following constraints: \begin{itemize} \item \emph{Non-Edge Constraint:} for every \emph{non-}edge $(i, j)$, we should have $\mathbf{w}_i + \mathbf{w}_j \leq \frac{1}{\sqrt{k}}$. That is, we should ``select'' at most one vertex among $i, j$. \item \emph{Vertex Selection Constraint:} each coordinate of $\mathbf{w}$ is at least $\frac{1}{\sqrt{k}}$. Note that we will violate such constraints for all vertices, except those that are ``selected''. \end{itemize} If we select the probabilities in $\mathcal{D}$ so that the non-edge constraints are weighted much more heavily than the vertex selection constraints, then it is always better not to violate any constraints of the first type. When this is the case, the goal will now be to violate as few vertex selection constraints as possible, which is the same as finding a maximum clique, as desired.
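As a sanity check, the following toy computation (with helper names of our own choosing) instantiates the un-shifted constraint system above for a candidate vertex set $S$ and counts the violated constraints of each type; the actual reduction below additionally introduces a constant coordinate and a margin shift.

```python
import itertools, math

def count_violations(n, edges, S, k):
    # w_i = 1/sqrt(k) for "selected" vertices i in S, and 0 otherwise.
    w = [1 / math.sqrt(k) if i in S else 0.0 for i in range(n)]
    thresh = 1 / math.sqrt(k)
    # Non-edge constraints: w_i + w_j <= 1/sqrt(k) for every non-edge (i, j).
    non_edge = sum(1 for i, j in itertools.combinations(range(n), 2)
                   if (i, j) not in edges and (j, i) not in edges
                   and w[i] + w[j] > thresh + 1e-12)
    # Vertex selection constraints: w_i >= 1/sqrt(k) for every vertex i.
    selection = sum(1 for i in range(n) if w[i] < thresh - 1e-12)
    return non_edge, selection

# A triangle plus an isolated vertex: selecting the 3-clique violates no
# non-edge constraint and exactly n - k = 1 vertex selection constraint.
edges = {(0, 1), (0, 2), (1, 2)}
print(count_violations(4, edges, {0, 1, 2}, 3))  # -> (0, 1)
```

Selecting a set that is not a clique instead violates the (heavily weighted) non-edge constraints, which is exactly what the probability weighting in the reduction penalizes.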
While the above paragraph describes the core idea of the reduction, there are two additional issues we have to resolve: \begin{itemize} \item \emph{Constant Coordinate: } first, notice that we cannot actually quite write a constraint of the form $\mathbf{w}_i + \mathbf{w}_j \leq \frac{1}{\sqrt{k}}$ using the samples because there is no way to express a value like $\frac{1}{\sqrt{k}}$ directly. To overcome this, we have a ``constant coordinate'' $\mathbf{w}_*$, which is supposed to be a constant, and replace the right hand side of non-edge constraints by $\frac{\mathbf{w}_*}{\sqrt{k}}$ (instead of $\frac{1}{\sqrt{k}}$). The new constraint can now be represented by a sample. \item \emph{Margin:} in the above reduction, there was no margin at all! To get the appropriate margin, we ``shift'' the constraint slightly so that there is a margin. For instance, instead of $\frac{\mathbf{w}_*}{\sqrt{k}}$ for a non-edge constraint, we use $\frac{1.1 \mathbf{w}_*}{\sqrt{k}}$. We now have a margin of $\approx \frac{0.1}{\sqrt{k}}$ and it is still possible to argue that the best solution is still to select a clique. \end{itemize} The reduction, which follows the above outline, is formalized below. \begin{proof}[Proof of Lemma~\ref{lem:clique-red}] Given a graph $G = (V, E)$, we use $n$ to denote the number of vertices $|V|$ and we rename its vertices so that $V = [n]$. We set $d = n + 1$; we name the first coordinate $*$ and each of the remaining coordinates $i \in [n]$. For brevity, let us also define $\beta = 1 - \frac{0.01}{n^2}$. The distribution $\mathcal{D}$ is defined as follows: \begin{itemize} \item Add a labeled sample $(-\mathbf{e}^*, -1)$ with probability $\frac{\beta}{2}$ in $\mathcal{D}$. We refer to this as the \emph{positivity constraint for $*$}. 
\item For every pair of distinct vertices $i, j$ that do not induce an edge in $E$, add a labeled sample $(\frac{1}{2}\left(\frac{1.1}{\sqrt{k}}\mathbf{e}^* - \mathbf{e}^{i} - \mathbf{e}^{j}\right), 1)$ with probability $\frac{\beta}{2\left(\binom{n}{2} - |E|\right)}$ in $\mathcal{D}$. We refer to this as the \emph{non-edge constraint for $(i, j)$}. \item For every vertex $i$, add a labeled sample $(\frac{1}{2}\left(\mathbf{e}^{i} - \frac{0.9}{\sqrt{k}}\mathbf{e}^*\right), 1)$ with probability $\frac{0.01}{n^3}$ in $\mathcal{D}$. We refer to this as the \emph{vertex selection constraint for $i$}. \end{itemize} Finally, let $\gamma = \frac{0.1}{2\sqrt{2k}}$, $\kappa = (n - k)\left(\frac{0.01}{n^3}\right)$. It is obvious that the reduction runs in polynomial time. \paragraph{Completeness.} Suppose that $G$ contains a $k$-clique; let $S \subseteq V$ denote the set of its vertices. We define $\mathbf{w}$ by $\mathbf{w}_* = \frac{1}{\sqrt{2}}$ and, for every $i \in V$, $\mathbf{w}_i = \frac{1}{\sqrt{2k}}$ if $i \in S$ and $\mathbf{w}_i = 0$ otherwise. It is clear that $\|\mathbf{w}\|_2 = 1$ and that, for every $(\mathbf{x}, y) \in {\mathsf{supp}}(\mathcal{D})$, we have $|\left<\mathbf{w}, \mathbf{x}\right>| \geq \frac{0.1}{2\sqrt{2k}}$. Finally, observe that all the first two types of constraints are satisfied, and a vertex selection constraint for $i$ is unsatisfied iff $i \notin S$. Thus, we have $\mathrm{err}_\gamma^{\mathcal{D}}(\mathbf{w}) = (n - k)\left(\frac{0.01}{n^3}\right) = \kappa$, which implies that $\mathrm{OPT}_\gamma^{\mathcal{D}} \leq \kappa$ as desired. \paragraph{Soundness.} Suppose contrapositively that $\mathrm{OPT}_{0-1}^{\mathcal{D}} \leq \kappa + \frac{0.001}{n^3}$; that is, there exists $\mathbf{w}$ such that $\mathrm{err}_{0-1}^{\mathcal{D}}(\mathbf{w}) \leq \kappa + \frac{0.001}{n^3}$. Observe that each labeled sample of the first two types of constraints has probability more than $\frac{\beta}{2n^2} > \kappa + \frac{0.001}{n^3}$. 
As a result, $\mathbf{w}$ must correctly classify these samples. Since $\mathbf{w}$ correctly classifies $(-\mathbf{e}^*, -1)$, it must be that $\mathbf{w}_* > 0$. Now, let $T$ be the set of vertices $i$ such that $\mathbf{w}$ mislabels the vertex selection constraint for $i$. Observe that $|T| < \frac{\left(\kappa + \frac{0.001}{n^3}\right)}{\frac{0.01}{n^3}} < n - k + 1$. In other words, $S := V \setminus T$ is of size at least $k$. We claim that $S$ induces a clique in $G$. To see that this is true, consider a pair of distinct vertices $i, j \in S$. Since $\mathbf{w}$ satisfies the vertex selection constraints for $i$ and for $j$, we must have $\mathbf{w}_i, \mathbf{w}_j \geq \frac{0.9}{\sqrt{k}} \cdot \mathbf{w}_*$. This implies that $(i, j)$ is an edge, as otherwise $\mathbf{w}$ would mislabel the non-edge constraint for $(i, j)$. As a result, $G$ contains a $k$-clique as desired. \end{proof} \subsection{Reduction from $k$-CSP and Proofs of Theorems~\ref{thm:run-time},~\ref{thm:inapx} and Lemma~\ref{lem:strong-quantifier}} In this section, we will prove Theorems~\ref{thm:run-time} and~\ref{thm:inapx} by reducing from the hardness of approximation of constraint satisfaction problems (CSPs), given by PCP Theorems. \subsubsection{CSPs and PCP Theorem(s)} Before we can state our reductions, we have to formally define CSPs and formally state the PCP theorems we will use. We start with the definition of $k$-CSP: \begin{definition}[$k$-CSP] For any integer $k \in \mathbb{N}$, a $k$-CSP instance $\mathcal{L} = (V, \Sigma, \{\Pi_S\}_{S \in \mathcal{Q}})$ consists of \begin{itemize} \item The variable set $V$, \item The alphabet $\Sigma$, whose elements we sometimes refer to as labels, \item The constraint set $\{\Pi_S\}_{S \in \mathcal{Q}}$, where $\mathcal{Q} \subseteq \binom{V}{k}$ is a collection of $k$-size subsets of $V$. For each subset $S = \{v_1, \dots, v_k\}$, $\Pi_S \subseteq \Sigma^S$ is the set of accepting answers for the constraint $\Pi_S$.
Here we think of each $f \in \Sigma^S$ as a function $f: S \to \Sigma$. \end{itemize} A $k$-CSP instance is said to be \emph{regular} if each variable appears in the same number of constraints. An assignment $\phi$ is a function $\phi: V \to \Sigma$. Its value, denoted by $\text{val}_{\mathcal{L}}(\phi)$, is the fraction of constraints $S \in \mathcal{Q}$ such that\footnote{We use $\phi|_S$ to denote the restriction of $\phi$ to the domain $S$.} $\phi|_S \in \Pi_S$. Such constraints are said to be \emph{satisfied by $\phi$.} The value of $\mathcal{L}$, denoted by $\text{val}(\mathcal{L})$, is the maximum value among all assignments, i.e., $\text{val}(\mathcal{L}) := \max_{\phi} \text{val}_{\mathcal{L}}(\phi)$. In the \textsc{$\nu$-Gap-$k$-CSP} problem, we are given a regular instance $\mathcal{L}$ of $k$-CSP, and we want to distinguish between $\text{val}(\mathcal{L}) = 1$ and $\text{val}(\mathcal{L}) < \nu$. \end{definition} Throughout this subsection, we use $n$ to denote the instance size of the $k$-CSP, that is, $n = \sum_{S \in \mathcal{Q}} |\Pi_S|$. The celebrated PCP theorem~\cite{AroraS98,AroraLMSS98} is equivalent to the NP-hardness of approximating $\nu$-Gap-$k$-CSP for some constant $k$ and $\nu < 1$. Since we would like to prove (tight) running time lower bounds, we need versions of the PCP Theorem that provide strong running time lower bounds as well. For this task, we turn to the Moshkovitz-Raz PCP theorem, which not only achieves an arbitrarily small constant $\nu > 0$ but also yields an almost exponential running time lower bound. \begin{theorem}[Moshkovitz-Raz PCP~\cite{MR10}] \label{thm:mr-pcp} Assuming ETH, for any $0 < \nu < 1$, $\nu$-Gap-2-CSP cannot be solved in time $O(2^{n^{1 - o(1)}})$, even for instances with $|\Sigma| = O_\nu(1)$. \end{theorem} As for our hardness of approximation result (Theorem~\ref{thm:inapx}), we are aiming to get as large a ratio as possible.
For this purpose, we will use a PCP Theorem of Dinur, Harsha and Kindler, which achieves $\nu = \frac{1}{\mathrm{poly}(n)}$ but needs $k$ to be $\text{polyloglog}(n)$. \begin{theorem}[Dinur-Harsha-Kindler PCP~\cite{DHK15}] \label{thm:dhk-pcp} $n^{-\Omega(1)}$-Gap-$\text{polyloglog}(n)$-CSP is NP-hard. \end{theorem} Finally, we state the Sliding Scale Conjecture (SSC) of Bellare et al.~\cite{BGLR94}, which says that the NP-hardness with $\nu = \frac{1}{\mathrm{poly}(n)}$ holds even in the case where $k$ is constant: \begin{conj}[Sliding Scale Conjecture~\cite{BGLR94}] \label{conj:ssc} For some constant $k$, $n^{-\Omega(1)}$-Gap-$k$-CSP is NP-hard. \end{conj} \subsubsection{Reducing from $k$-CSP to Agnostically Learning Halfspaces with Margin} Having set up the notation, we now move on to the reduction from $k$-CSP to agnostic learning of halfspaces with margin. Our reduction can be viewed as a modification of the reduction from~\cite{ABSS97}; compared to~\cite{ABSS97}, we have to (1) be more careful so that we can get the margin in the completeness case and (2) modify the reduction to work even for $k > 2$. Before we precisely state the formal properties of the reduction, let us give some brief informal intuition behind it. Given an instance $\mathcal{L} = (V, \Sigma, \{\Pi_S\}_{S \in \mathcal{Q}})$ of $k$-CSP, we will create a distribution $\mathcal{D}$ on $\mathbb{B}_d \times \{\pm 1\}$, where the dimension $d$ is equal to $n$. Each coordinate is associated with an accepting answer of a constraint; that is, each coordinate is named $(S, f)$, where $S \in \mathcal{Q}$ and $f \in \Pi_S$. In the completeness case, where we have a perfect assignment $\phi$, we would like the halfspace's normal vector $\mathbf{w}$ to have $\mathbf{w}_{(S, f)} = 1$ iff $f$ is the assignment to predicate $S$ in $\phi$ (i.e., $f = \phi|_S$), and zero otherwise.
To enforce this, we add three types of constraints: \begin{itemize} \item \emph{Non-negativity Constraint}: each coordinate of $\mathbf{w}$ should be non-negative. \item \emph{Satisfiability Constraint}: for each $S \in \mathcal{Q}$, $\mathbf{w}_{(S, f)}$ is positive for at least one $f \in \Pi_S$. \item \emph{Selection Constraint}: for each variable $v \in V$ and label $\sigma \in \Sigma$, we add a constraint that the sum of all $\mathbf{w}_{(S, f)}$, over all $S$ that $v$ appears in and all $f$ that assign $\sigma$ to $v$, is non-positive. \end{itemize} Notice that, in the completeness case, we satisfy the first two types of constraints, and we violate the selection constraint for $(v, \sigma)$ only when $\phi(v) = \sigma$. Intuitively, in the soundness case, we will not be able to ``align'' the positive $\mathbf{w}_{(S, f)}$ from different $S$'s together, and we will have to violate a lot more selection constraints. Of course, there are many subtle points that the above sketch does not address, such as the margin; on this front, we add one more special coordinate $\mathbf{w}_*$, which we think of as being equal to 1, and we add/subtract $\delta$ times this coordinate in each of the constraints, which will create the margin for us. Another issue is that the normal vector of the halfspace (and the samples) as above may have norm more than one. Indeed, the intended normal vector in the completeness case has norm $O(\sqrt{n})$. Hence, we have to scale the normal vector down by a factor of $O(\sqrt{n})$, which results in a margin of $\gamma = \Omega(1/\sqrt{n})$. This is the reason why we arrive at the running time lower bound of the form $2^{(1/\gamma)^{2 - o(1)}}$.
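Concretely, in the notation above, the margin determines the exponent in the lower bound: a margin of $\gamma = \Theta(1/\sqrt{n})$ means $n = \Theta(1/\gamma^2)$, so an ETH-based running time lower bound of $2^{n^{1-o(1)}}$ for the underlying CSP translates into
\[
2^{n^{1 - o(1)}} = 2^{\left(1/\gamma^2\right)^{1 - o(1)}} = 2^{(1/\gamma)^{2 - o(1)}}
\]
for the learning problem.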
The properties and parameter dependencies of the reduction are encapsulated in the following theorem: \begin{theorem} \label{thm:red} There exists a polynomial time reduction that takes as input a regular instance $\mathcal{L} = (V, \Sigma, \{\Pi_S\}_{S \in \mathcal{Q}})$ of $k$-CSP and a real number $\nu > 0$, and produces a distribution $\mathcal{D}$ over $\mathbb{B}_d \times \{\pm 1\}$ and positive real numbers $\gamma, \kappa, \varepsilon, \alpha$ such that \begin{itemize} \item (Completeness) If $\mathcal{L}$ is satisfiable, then $\mathrm{OPT}_{\gamma}^{\mathcal{D}} \leq \kappa$. \item (Soundness) If $\text{val}(\mathcal{L}) < \nu$, then $\mathrm{OPT}_{0-1}^{\mathcal{D}} > \alpha \cdot \kappa + \varepsilon$. \item (Margin Parameter) $\gamma = \Omega\left(\frac{1}{\Delta |\Sigma|^{3k}\sqrt{|\mathcal{Q}|}}\right)$, where $\Delta$ denotes the number of constraints each variable appears in. \item (Approximation Ratio) $\alpha = \Omega\left(\frac{(1/\nu)^{1/k}}{k}\right)$. \item (Error Parameter) $\varepsilon = \Omega\left(\frac{1}{\Delta |\Sigma|^k}\right) \cdot \alpha$. \item (Dimension) $d = n + 1$. \end{itemize} \end{theorem} \begin{proof} Before we define $\mathcal{D}$, let us specify the parameters: \begin{itemize} \item First, we let $d$ be $1 + n$. We name the first coordinate $*$ and each of the remaining coordinates $(S, f)$ for a constraint $S \in \mathcal{Q}$ and $f \in \Pi_S$. \item Let $Z := 2\left(|V| \cdot |\Sigma| + 2k|\mathcal{Q}| + 2k\sum_{S \in \mathcal{Q}} |\Pi_S|\right)$ be our ``normalizing factor'', which will be used below to normalize the probabilities. \item Let $\delta := \frac{0.1}{\Delta |\Sigma|^{2k}}$ be the ``shift parameter''. Note that this is not the margin $\gamma$ (which will be defined below). \item Let $s := 10\Delta|\Sigma|^k$ be the scaling factor, which we use to make sure that all our samples lie within the unit ball. \item Let the gap parameter $\alpha$ be $\frac{(1/\nu)^{1/k}}{40k}$.
\item Finally, let $\kappa = \frac{|V|}{Z}$ and $\varepsilon = \kappa \cdot \alpha$. \end{itemize} Note that $\alpha$ as defined above can be less than one. However, this is not a problem: in the subsequent proofs of Theorems~\ref{thm:run-time} and~\ref{thm:inapx}, we will always choose the settings of parameters so that $\alpha > 1$. We are now ready to define the distribution $\mathcal{D}$ on $\mathbb{B}_d \times \{\pm 1\}$, as follows: \begin{enumerate} \item Add a labeled sample $(-\mathbf{e}^*, -1)$ with probability $1/2$ to $\mathcal{D}$. This corresponds to the constraint $\mathbf{w}_* > 0$; we refer to this as the \emph{positivity constraint for $*$}. \item Next, for each coordinate $(S, f)$, add a labeled sample $\left(\frac{1}{s}\left(\mathbf{e}^{(S, f)} + \delta \cdot \mathbf{e}^{*}\right), 1\right)$ with probability $2k/Z$ to $\mathcal{D}$. This corresponds to $\mathbf{w}_{(S, f)} + \delta \cdot \mathbf{w}_{*} \geq 0$ scaled down by a factor of $1/s$ so that the vector is in the unit ball; we refer to this as the \emph{non-negativity constraint for $(S, f)$.} \label{bullet:pos} \item For every $S \in \mathcal{Q}$, add a labeled sample $\left(\frac{1}{s}\left(\sum_{f \in \Pi_S} \mathbf{e}^{(S, f)} - (1 - \delta) \mathbf{e}^*\right), 1\right)$ with probability $2k/Z$ to $\mathcal{D}$. This corresponds to the constraint $\sum_{f \in \Pi_S} \mathbf{w}_{(S, f)} \geq (1 - \delta) \mathbf{w}_*$, scaled down by a factor of $1/s$. We refer to this constraint as the \emph{satisfiability constraint for $S$}. \label{bullet:sat} \item For every variable $v \in V$ and $\sigma \in \Sigma$, add a labeled sample \\ $\left(\frac{1}{s}\left(\sum_{S \in \mathcal{Q}: v \in S} \sum_{f \in \Pi_S: f(v) = \sigma} \mathbf{e}^{(S, f)} - \delta \mathbf{e}^{*}\right), -1\right)$ with probability $1/Z$ to $\mathcal{D}$. This corresponds to the constraint $\sum_{S \in \mathcal{Q}: v \in S} \sum_{f \in \Pi_S: f(v) = \sigma} \mathbf{w}_{(S, f)} < \delta \cdot \mathbf{w}_{*}$, scaled down by a factor of $1/s$.
We refer to this as the \emph{selection constraint for $(v, \sigma)$}. \end{enumerate} \paragraph{Completeness.} Suppose that there exists an assignment $\phi: V \to \Sigma$ that satisfies all the constraints of $\mathcal{L}$. Consider the halfspace with normal vector $\mathbf{w}$ defined by $\mathbf{w}_{*} = \zeta$ and \begin{align*} \mathbf{w}_{(S, f)} = \begin{cases} \zeta &\text{ if } f = \phi|_S, \\ 0 &\text{ otherwise,} \end{cases} \end{align*} where $\zeta := \frac{1}{\sqrt{1 + |\mathcal{Q}|}}$ is the normalization factor. It is easy to see that the positivity, non-negativity, and satisfiability constraints are satisfied with margin at least $\gamma = \zeta \cdot \delta/s = \Omega\left(\frac{1}{\Delta|\Sigma|^{3k}\sqrt{|\mathcal{Q}|}}\right)$. Finally, observe that the sum $\sum_{S \in \mathcal{Q}: v \in S} \sum_{f \in \Pi_S: f(v) = \sigma} \mathbf{w}_{(S, f)}$ is zero if $\phi(v) \ne \sigma$; in this case, the selection constraint for $(v, \sigma)$ is also satisfied with margin at least $\gamma$. As a result, we only incur an error (with respect to margin $\gamma$) for the selection constraints for $(v, \phi(v))$, $v \in V$; hence, we have $\mathrm{err}_{\gamma}^{\mathcal{D}}(\mathbf{w}) \leq \frac{1}{Z} \cdot |V| = \kappa$ as desired. \paragraph{Soundness.} Suppose contrapositively that there exists $\mathbf{w}$ with $\mathrm{err}_{0-1}^{\mathcal{D}}(\mathbf{w}) \leq \alpha \cdot \kappa + \varepsilon = 2\alpha\kappa$. We will ``decode'' from $\mathbf{w}$ an assignment of value more than $\nu$ for the CSP. To do so, first observe that from the positivity constraint for $*$, we must have $\mathbf{w}_* > 0$, as otherwise we would already incur an error of $1/2 > 2\alpha\kappa$ with respect to $\mathcal{D}$. Now, since scaling (by a positive factor) does not change the fraction of samples violated, we may assume w.l.o.g. that $\mathbf{w}_* = 1$.
Next, we further claim that we may assume without loss of generality that $\mathbf{w}$ does not violate any non-negativity constraints (\ref{bullet:pos}) or satisfiability constraints (\ref{bullet:sat}). The reason is that, if $\mathbf{w}$ violates a non-negativity constraint for $(S = \{v_1, \dots, v_k\}, f)$, then we may simply change $\mathbf{w}_{(S, f)}$ to zero. This reduces the error by $2k/Z$, while it may only violate $k$ additional selection constraints for $(v_1, f(v_1)), \dots, (v_k, f(v_k))$, which weigh $k/Z$ in total with respect to $\mathcal{D}$. As a result, this change reduces the total error. Similarly, if the satisfiability constraint of $S$ is unsatisfied, we may change $\mathbf{w}_{(S, f)}$ for some $f \in \Pi_S$ to a sufficiently large number so that this constraint is satisfied; once again, the total error decreases. Hence, we may assume that the non-negativity constraints (\ref{bullet:pos}) and satisfiability constraints (\ref{bullet:sat}) all hold. Now, for every variable $v$, let $L_v \subseteq \Sigma$ denote the set of labels $\sigma \in \Sigma$ such that the selection constraint for $(v, \sigma)$ is violated. Since we assume that $\mathrm{err}_{0-1}^{\mathcal{D}}(\mathbf{w}) \leq 2\alpha\kappa$, we must have $\sum_{v \in V} |L_v| \leq (2\alpha\kappa) / (1/Z) = 2\alpha|V|$. Next, let $V_{\text{small}}$ denote the set of all variables $v \in V$ such that $|L_v| \leq 20\alpha k$. From the bound we just derived, we must have $|V_{\text{small}}| \geq \left(1 - \frac{1}{10k}\right)|V|$. Another ingredient we need is the following claim: \begin{claim} \label{claim:decode} For every constraint $S = \{v_1, \dots, v_k\} \in \mathcal{Q}$, there exist $\sigma_1 \in L_{v_1}, \dots, \sigma_k \in L_{v_k}$ that induce an accepting assignment for $\Pi_S$ (i.e., $f \in \Pi_S$ where $f$ is defined by $f(v_i) = \sigma_i$).
\end{claim} \begin{proof} Suppose for the sake of contradiction that no such $\sigma_1 \in L_{v_1}, \dots, \sigma_k \in L_{v_k}$ exists. In other words, for every $f \in \Pi_S$, there must exist $i \in [k]$ such that the selection constraint for $(v_i, f(v_i))$ is not violated. Writing $v = v_i$ and $\sigma = f(v_i)$, this means that \begin{align*} \delta = \delta \cdot \mathbf{w}_{*} &> \sum_{S' \in \mathcal{Q}: v \in S'} \sum_{f' \in \Pi_{S'}: f'(v) = \sigma} \mathbf{w}_{(S', f')} \\ &\geq \mathbf{w}_{(S, f)} + \sum_{S' \in \mathcal{Q}: v \in S'} \sum_{f' \in \Pi_{S'}: f'(v) = \sigma} -\delta \cdot \mathbf{w}_* \\ &\geq \mathbf{w}_{(S, f)} - \delta \cdot \Delta |\Sigma|^k \;, \end{align*} where the second inequality comes from our assumption that the non-negativity constraints are satisfied. Hence, by summing this up over all $f \in \Pi_S$, we get \begin{align*} \sum_{f \in \Pi_S} \mathbf{w}_{(S, f)} \leq \delta \cdot (\Delta |\Sigma|^k + 1) \cdot |\Sigma|^k < (1 - \delta), \end{align*} which means that the satisfiability constraint for $S$ is violated, a contradiction. \end{proof} We can now define an assignment $\phi: V \to \Sigma$ for $\mathcal{L}$ as follows. For every $v \in V$, let $\phi(v)$ be a random label in $L_v$. Notice here that, by Claim~\ref{claim:decode}, the probability that a constraint $S = \{v_1, \dots, v_k\}$ is satisfied is at least $\prod_{i \in [k]} |L_{v_i}|^{-1}$. Hence, the expected total number of satisfied constraints is at least \begin{align*} \sum_{S = \{v_1, \dots, v_k\} \in \mathcal{Q}} \prod_{i \in [k]} |L_{v_i}|^{-1} &\geq \sum_{S = \{v_1, \dots, v_k\} \in \mathcal{Q}: v_1, \dots, v_k \in V_{\text{small}}} \prod_{i \in [k]} |L_{v_i}|^{-1} \\ &\geq \sum_{S = \{v_1, \dots, v_k\} \in \mathcal{Q}: v_1, \dots, v_k \in V_{\text{small}}} (20\alpha k)^{-k}. \end{align*} Recall that we earlier bounded $|V_{\text{small}}|$ from below by $\left(1 - \frac{1}{10k}\right)|V|$.
Hence, the fraction of constraints that involve some variable outside of $V_{\text{small}}$ is at most $\left(\frac{1}{10k}\right) \cdot (k) = 0.1$. Plugging this into the above inequality, we get that the expected total number of satisfied constraints is at least \begin{align*} 0.9|\mathcal{Q}| \cdot (20\alpha k)^{-k} > |\mathcal{Q}| \cdot \nu, \end{align*} where the last inequality comes from our choice of $\alpha$. In other words, we have $\text{val}(\mathcal{L}) > \nu$ as desired. \end{proof} \subsubsection{Proofs of Theorems~\ref{thm:run-time},~\ref{thm:inapx} and Lemma~\ref{lem:strong-quantifier}} \label{sec:hardness-main-proofs} We now prove Theorem~\ref{thm:run-time} by simply applying Theorem~\ref{thm:red} with appropriate parameters on top of the Moshkovitz-Raz PCP. \begin{proof}[Proof of Theorem~\ref{thm:run-time}] Suppose contrapositively that, for some constants $\tilde{\alpha} \geq 1$ and $\zeta > 0$, we have an $O(2^{{(1/\gamma)}^{2-\zeta}}2^{d^{1-\zeta}})f(\frac{1}{\varepsilon})$ time $\tilde{\alpha}$-agnostic proper learner for $\gamma$-margin halfspaces. Let $\nu > 0$ be a sufficiently small constant so that the parameter $\alpha$ (when $k = 2$) from Theorem~\ref{thm:red} is at least $\tilde{\alpha}$. (In particular, we pick $\nu = \frac{1}{C(\tilde{\alpha})^k}$ for some sufficiently large constant $C$.) Given an instance $\mathcal{L}$ of $\nu$-Gap-2-CSP, we run the reduction from Theorem~\ref{thm:red} to produce a distribution $\mathcal{D}$. We then run the learner on $\mathcal{D}$ with error parameter $\varepsilon$ as given by Theorem~\ref{thm:red} (and with $\delta = 1/3$). Note that the learner runs in $O(2^{{(1/\gamma)}^{2-\zeta}}2^{d^{1-\zeta}})f(\frac{1}{\varepsilon}) = 2^{O(n^{1 - \zeta/2})}$ time, and produces a halfspace $h$. We compute $\mathrm{err}_{0-1}^{\mathcal{D}}(h)$; if it is no more than $\alpha \cdot \kappa + \varepsilon$, then we output YES. Otherwise, output NO.
The algorithm described above solves $\nu$-Gap-2-CSP (correctly with probability 2/3) in $2^{O(n^{1 - \zeta/2})}$ time, which, by Theorem~\ref{thm:mr-pcp}, violates (randomized) ETH. \end{proof} Next, we prove Lemma~\ref{lem:strong-quantifier}. The main difference from the above proof is that the algorithm now works only \emph{for some} margin $\gamma = \gamma(d)$. We will select the dimension $d$ to be as large as possible so that $\gamma(d)$ is still smaller than the margin given by Theorem~\ref{thm:red}. This dimension $d$ will be larger than the dimension given by Theorem~\ref{thm:red}; however, this is not an issue since we can simply ``pad'' the remaining dimensions by setting the additional coordinates to zeros. This is formalized below. \begin{proof}[Proof of Lemma~\ref{lem:strong-quantifier}] Let $\tilde{\alpha} \geq 1$ be any constant. Let $\nu > 0$ be a sufficiently small constant so that the parameter $\alpha$ (when $k = 2$) from Theorem~\ref{thm:red} is at least $\tilde{\alpha}$. (In particular, we pick $\nu = \frac{1}{C(\tilde{\alpha})^k}$ for some sufficiently large constant $C$.) Let $\varepsilon_0 = \varepsilon_0(\tilde{\alpha})$ be the parameter $\varepsilon$ given by Theorem~\ref{thm:red}. Suppose contrapositively that, for some $\zeta > 0$, there is an $\tilde{\alpha}$-agnostic learner $\mathcal{A}$ for $\gamma(\tilde{d})$-margin halfspaces that runs in time $O(2^{(1/\gamma(\tilde{d}))^{2 - \zeta}}) \mathrm{poly}(\tilde{d})$ for all dimensions $\tilde{d}$ and for some $0 < \varepsilon^* < \varepsilon_0(\tilde{\alpha})$ and $\gamma(\tilde{d})$ that satisfies \begin{align} \label{eq:gamma-bound} \frac{1}{\tilde{d}^{0.5 - \zeta}} \leq \gamma(\tilde{d}) \leq \frac{1}{(\log \tilde{d})^{0.5 + \zeta}} \end{align} and \begin{align} \label{eq:gamma-consec} \frac{\gamma(\tilde{d} + 1)}{\gamma(\tilde{d})} \geq \zeta. \end{align} We may assume without loss of generality that $\zeta < 0.1$.
We create an algorithm $\mathcal{B}$ for $\nu$-Gap-2-CSP as follows: \begin{itemize} \item Given an instance $\mathcal{L}$ of $\nu$-Gap-2-CSP of size $n$, we first run the reduction from Theorem~\ref{thm:red} with $\nu$ as selected above to produce a distribution $\mathcal{D}$ on $\mathbb{B}_d \times \{\pm 1\}$ (where $d = n + 1$). Let the margin parameter $\gamma$ be as given in Theorem~\ref{thm:red}; observe that $\gamma = \Omega_{\nu}(1/\sqrt{n})$. \item Let $\tilde{d}$ be the largest integer so that $\gamma(\tilde{d}) \geq \gamma$. Observe that, from the lower bound in~\eqref{eq:gamma-bound}, we have $\gamma(d) \geq \frac{1}{d^{0.5 - \zeta}}$. Hence, for a sufficiently large $d$, $\gamma(d)$ is larger than $\gamma$ (which is $O_{\nu}(1/\sqrt{d})$). In other words, we have $\tilde{d} \geq d$. \item Create a distribution $\mathcal{D}'$ as follows: for each $(\mathbf{x}, y) \in {\mathsf{supp}}(\mathcal{D})$, we create a sample $(\mathbf{x}', y)$ in $\mathcal{D}'$ with the same probability and where $\mathbf{x}' \in \mathbb{B}_{\tilde{d}}$ is $\mathbf{x}$ concatenated with $0$s in the last $\tilde{d} - d$ coordinates. \item Run the learner $\mathcal{A}$ on $\mathcal{D}'$ with parameters $\gamma(\tilde{d})$ and $\varepsilon^*$. Suppose that it outputs a halfspace $h$. We compute $\mathrm{err}_{0-1}^{\mathcal{D}'}(h)$; if this is no more than $\alpha \cdot \kappa + \varepsilon_0(\tilde{\alpha})$, then output YES. Otherwise, output NO. \end{itemize} It is simple to see that, in the completeness case, we must have $\mathrm{OPT}_{\gamma(\tilde{d})}^{\mathcal{D}'} \leq \mathrm{OPT}_{\gamma}^{\mathcal{D}'} = \mathrm{OPT}_{\gamma}^{\mathcal{D}} \leq \kappa$; hence, $\mathcal{A}$ would (with probability 2/3) output a halfspace $h$ with 0-1 error at most $\alpha \cdot \kappa + \varepsilon_0(\tilde{\alpha})$, and we output YES.
On the other hand, in the soundness case, we have $\mathrm{OPT}_{0-1}^{\mathcal{D}'} = \mathrm{OPT}_{0-1}^{\mathcal{D}} > \alpha \cdot \kappa + \varepsilon_0(\tilde{\alpha})$, and we always output NO. Hence, the algorithm is correct with probability 2/3. Next, to analyze the running time of $\mathcal{B}$, let us make a couple of additional observations. First, from~\eqref{eq:gamma-consec}, we have \begin{align} \gamma(\tilde{d}) \leq \gamma / \zeta \leq O(1/\sqrt{n}). \end{align} Furthermore, from the upper bound in~\eqref{eq:gamma-bound}, we have \begin{align} \tilde{d} \leq 2^{(1/\gamma(\tilde{d}))^{\frac{1}{0.5+\zeta}}} \leq 2^{O(n^{\frac{1}{1 + 2\zeta}})} \leq 2^{O(n^{1 - \zeta})}, \end{align} where the last inequality follows from $\zeta < 0.1$. As a result, the algorithm runs in time $O(2^{(1/\gamma(\tilde{d}))^{2 - \zeta}})\mathrm{poly}(\tilde{d}) \leq 2^{O(n^{1 - \zeta/2})}$, which from Theorem~\ref{thm:mr-pcp} would break the (randomized) ETH. \end{proof} Finally, we prove Theorem~\ref{thm:inapx}, which again follows by simply applying Theorem~\ref{thm:red} to the Dinur-Harsha-Kindler PCP and the Sliding Scale Conjecture. \begin{proof}[Proof of Theorem~\ref{thm:inapx}] By applying our reduction from Theorem~\ref{thm:red} to Theorem~\ref{thm:dhk-pcp}, we get that it is NP-hard to, given a distribution $\mathcal{D}$, distinguish between $\mathrm{OPT}_{\gamma}^{\mathcal{D}} \leq \kappa$ or $\mathrm{OPT}_{0-1}^{\mathcal{D}} > \alpha \cdot \kappa + \Omega(\frac{1}{\mathrm{poly}(d)})$, where $\gamma = \frac{1}{d^{\text{polyloglog}(d)}}$ and $\alpha = d^{1/\text{polyloglog}(d)} = (1/\gamma)^{1/\text{polyloglog}(1/\gamma)}$. In other words, if we have a polynomial time $\alpha$-agnostic learner for $\gamma$-margin halfspaces for this parameter regime, then NP = RP.
Similarly, by applying our reduction to the Sliding Scale Conjecture, we get that it is NP-hard to, given a distribution $\mathcal{D}$, distinguish between $\mathrm{OPT}_{\gamma}^{\mathcal{D}} \leq \kappa$ or $\mathrm{OPT}_{0-1}^{\mathcal{D}} > \alpha \cdot \kappa + \Omega(\frac{1}{\mathrm{poly}(d)})$, where $\gamma = 1/d^{O(1)}$ and $\alpha = d^{\Omega(1)} = (1/\gamma)^{\Omega(1)}$. In other words, if we have a polynomial time $\alpha$-agnostic learner for $\gamma$-margin halfspaces for this parameter regime, then NP = RP. \end{proof} \new{ \section{Conclusions and Open Problems} \label{sec:conc} This work gives nearly tight upper and lower bounds for the problem of $\alpha$-agnostic proper learning of halfspaces with a margin, for $\alpha = O(1)$. Our upper and lower bounds for $\alpha = \omega(1)$ are far from tight. Closing this gap is an interesting open problem. Characterizing the fine-grained complexity of the problem for improper learning algorithms remains a challenging open problem. More broadly, an interesting direction for future work would be to generalize our agnostic learning results to broader classes of geometric functions. Finally, we believe that finding further connections between the problem of agnostic learning with a margin and adversarially robust learning is an intriguing direction to be explored. } \bibliographystyle{alpha}
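The probabilistic decoding used in the soundness argument above can be checked on a small instance. The following sketch uses a hypothetical toy CSP (the candidate lists and constraints are invented for illustration, not taken from the reduction): it verifies that every constraint admits an accepting tuple drawn from the candidate lists $L_v$, and compares the exact satisfaction probability of a uniformly random labeling $\phi(v) \in L_v$ against the product lower bound $\prod_i |L_{v_i}|^{-1}$ from the expectation computation.

```python
import itertools

# Toy illustration (hypothetical instance) of the decoding step in the
# soundness proof: each variable v keeps a candidate list L_v, and if every
# constraint S admits an accepting tuple drawn from these lists (as in the
# "decode" claim), then a uniformly random labeling phi(v) in L_v satisfies
# S with probability at least prod_{v in S} 1/|L_v|.

L = {0: ["a", "b"], 1: ["a", "c"], 2: ["b", "c"]}   # candidate lists L_v
constraints = [                                     # (variables of S, Pi_S)
    ((0, 1), {("a", "a"), ("b", "c")}),
    ((1, 2), {("c", "b")}),
]

# Decode claim: every Pi_S contains a tuple drawn from the candidate lists.
for S, Pi in constraints:
    assert any(all(f[i] in L[v] for i, v in enumerate(S)) for f in Pi)

# Exact satisfaction probability under a uniformly random phi, versus the
# product lower bound used in the expected-value computation of the proof.
for S, Pi in constraints:
    tuples = list(itertools.product(*(L[v] for v in S)))
    p = sum(t in Pi for t in tuples) / len(tuples)
    bound = 1.0
    for v in S:
        bound /= len(L[v])
    assert p >= bound
    print(S, p, bound)
```

On this toy instance, $p \geq \prod_i |L_{v_i}|^{-1}$ holds for every constraint, matching the lower bound on the expected number of satisfied constraints.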
\section{Introduction} We consider the following stochastic differential equation (SDE in short) driven by fractional Brownian motion (fBm in short) \begin{equation}\label{eq1} \left\{ \begin{aligned} \mathrm{d}X_{t}&=b(X_{t})\mathrm{d}t+\sigma\mathrm{d}B^{H}_{t},\\ X_{0}&=x, \end{aligned} \right. \end{equation} where $x\in\mathbb{R}$ is the initial value of the process $\{X_{t}, t\geq 0\}$, $b:\mathbb{R}\rightarrow\mathbb{R}$ is an unknown continuous function, $\sigma\in \mathbb{R}$ is a constant and $\{B^{H}_{t}, t\geq 0\}$ is a fBm of Hurst index $H\in(\frac{1}{2},1)$ defined on a complete probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_{t}\}_{t\geq0}, \mathbb{P})$, where $\mathcal{F}_{t}$ is the $\sigma$-algebra generated by $\{B_{s}^{H}, s\leq t\}$. We consider the stochastic process $\{X_{t}\}$ observed at regularly spaced time points $\{t_{k}=k\alpha_{n}, k=0, 1, \cdots, n\}$, where $\alpha_{n}$ is the time frequency and $n$ is the sample size. Based on the high-frequency observations, we aim to estimate the drift function $b(\cdot)$. With the development of technology, the parameter estimation problem for SDEs has gained much attention in recent years due to its increasing applications in a broad range of fields. In many real-world applications, the parameters that characterize the SDE system must be estimated from the observed data. Over the last decades, a huge literature has been devoted to problems of parameter estimation. When the drift term is linear, that is, $b(X_{t})=-\theta X_{t}$, then $X$ is a fractional Ornstein–Uhlenbeck process and the estimation of $\theta$ has been widely studied. \cite{Le Breton(2002)} propose a maximum likelihood estimator via the approximated likelihood ratio based on continuous or discrete observations. On the other hand, \cite{Hu(2010)} propose a least squares estimator; they consider the integral with respect to $B^{H}$ for $H\in[\frac{1}{2},\frac{3}{4})$.
They extended this work and established a central limit theorem for the least squares estimator for $H\in(0,\frac{3}{4}]$ and a noncentral limit theorem for $H\in(\frac{3}{4},1)$ \citep{Hu(2019)}. Moreover, an ergodic-type estimator for $\theta$ has also been proposed \citep{Hu(2010),Hu(2019)}. When the drift term is $\theta b(X_{t})$, the maximum likelihood estimator of $\theta$ has been studied with both continuous and discrete observations \citep{Tudor(2007)}. \cite{Hu(2018)} propose a least squares estimator for $\theta$ with $H\in (\frac{1}{4},1)$; they also derive a maximal inequality for It\^{o}–Skorokhod integrals. When the drift term is $b(\theta, X_{t})$, \cite{Neuenkirch(2014)} propose a least squares estimator for $\theta$ based on discrete observations of the process $\{X_{t}, t\geq 0\}$. In practice, the drift function is seldom known; hence it is important to construct a reasonable nonparametric estimator. There are only a few references on the nonparametric estimation of the drift in Eq. (\ref{eq1}). The most popular tool is the kernel function $K_{h}(\cdot)$, where $h$ is the bandwidth. In \cite{Saussereau(2014)}, a Nadaraya–Watson estimator is defined at time $t_{n}$ as \begin{equation}\label{eq2} \hat{b}_{t_{n}, h}(x)=\frac{\sum_{k=0}^{n-1}\left(t_{n}-t_{k}\right)^{1-2 H} K\left(\left(X_{t_{k}}-x\right) / h\right)\left(X_{t_{k+1}}-X_{t_{k}}\right)}{\sum_{k=0}^{n-1}\left(t_{n}-t_{k}\right)^{1-2 H} K\left(\left(X_{t_{k}}-x\right) / h\right)\left(t_{k+1}-t_{k}\right)}. \end{equation} They obtain the consistency of this estimator under a one-sided dissipative Lipschitz condition which ensures the ergodic property of the solution of Eq. (\ref{eq1}).
In \cite{Fabienne(2019)}, the authors propose a Nadaraya–Watson estimator as \begin{equation*} \widehat{b}_{T, h}(x):=\frac{\int_{0}^{T} K\left(\frac{X(s)-x}{h}\right) \mathrm{d} X(s)}{\int_{0}^{T} K\left(\frac{X(s)-x}{h}\right) \mathrm{d} s}, \end{equation*} where the stochastic integral with respect to $X$ is taken in the It\^{o}–Skorokhod sense, which is similar to Eq. (\ref{eq2}). In \cite{Prakasa(2011)}, the author proposes an estimator of the trend function $b_{t}=b(x^{0}_{t})$ as follows \begin{equation}\label{eq3} \hat{b}_{t}=\frac{1}{h} \int_{0}^{T} K\left(\frac{\tau-t}{h}\right) \mathrm{d} X_{\tau}, \end{equation} where $x^{0}_{t}$ is the solution of Eq. (\ref{eq1}) when $\sigma=0$. They obtain the asymptotic behaviour of the estimator in Eq. (\ref{eq3}) as $\sigma\rightarrow0$. One can refer to \cite{Kutoyants(2004)} for nonparametric estimation in diffusion processes. As is well known, for a nonparametric drift $b(\cdot)$, one usually considers the kernel-smoothed approximation \citep{N(1964),W(1964)} \begin{equation}\label{eq4} \hat{b}(x)=\int_{-\infty}^{\infty}K_{h}(y-x)b(y)\mathrm{d}y. \end{equation} Motivated by the aforementioned works, in this paper we propose a Nadaraya–Watson estimator of the drift function $b(\cdot)$ based on the discrete observations as \begin{equation}\label{eq5} \hat{b}_{n,h}(x)=\sum_{k=0}^{n-1}W_{n,k}(X_{t_{k}}, x)\diamond\Big(\frac{X_{t_{k+1}}-X_{t_{k}}}{\alpha_{n}}\Big), \end{equation} where `$\diamond$' is the Wick product, which is associated with integrals of It\^{o}–Skorokhod type \citep{Hu(1996),Holden(1996)}, and the weight function $W_{n,k}(X_{t_{k}},x)$ is given by \begin{equation*} W_{n,k}(X_{t_{k}},x)=\frac{K_{h}(X_{t_{k}}-x)}{\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)}, \end{equation*} where $K_{h}(\cdot)=\frac{1}{h}K(\cdot/h)$ for $k=0, 1, \cdots, n-1$, $K(\cdot)$ is a kernel function and $h$ is the bandwidth. The estimator defined in Eq. (\ref{eq5}) is an analogue of Eq. (\ref{eq4}). This paper is organized as follows.
In Section \ref{sec2}, we describe some preliminary results on stochastic integrals. Under some assumptions, we obtain the ergodic properties of $X_{t}$ and a bound for the Malliavin derivative of $X_{t}$. Our main results are presented in Section \ref{sec3}; based on several preparatory lemmas, we obtain the consistency of the estimator. \section{Preliminaries}\label{sec2} In this section, we give some notations and state our assumptions. The fBm $\left(B_{t}^{H}, t \in \mathbb{R}\right)$ with Hurst parameter $H \in(0,1)$ is a zero mean Gaussian process with covariance \begin{equation*} \mathbb{E}\left(B_{t}^{H} B_{s}^{H}\right)=R_{H}(s, t)=\frac{1}{2}\left(|t|^{2 H}+|s|^{2 H}-|t-s|^{2 H}\right). \end{equation*} On any finite interval, all paths of fBm are $\alpha$-H\"{o}lder continuous for any $\alpha<H$. Denote by $\eta_{T}$ the H\"{o}lder coefficient of the fBm on the interval $[0,T]$; that is, \begin{equation}\label{eq6} |B^{H}_{t}-B^{H}_{s}|\leq\eta_{T}|t-s|^{\alpha}, \quad t, s\in [0,T]. \end{equation} Furthermore, we have $\mathbb{E}|\eta_{T}|^{p}=T^{p(H-\alpha)}\mathbb{E}|\eta_{1}|^{p}$ by the self-similarity property of fBm for any $p>1$. Let $f ,g :\mathbb{R}_{+}\rightarrow \mathbb{R}$ be Borel measurable functions. Denote by $\mathcal{H}$ the Hilbert space equipped with the inner product \begin{equation*} \langle f,g\rangle_{\mathcal{H}}=H(2H-1)\int_{0}^{\infty}\int_{0}^{\infty}f(t)g(s)|t-s|^{2H-2}\mathrm{d}t\mathrm{d}s. \end{equation*} Then $\mathcal{H}$ is a Banach space with the norm $\|\cdot\|_{\mathcal{H}}$. Furthermore, for any $f\in L^{\frac{1}{H}}([0,\infty); \mathbb{R})$, we have \begin{equation}\label{eq7} \|f\|_{\mathcal{H}}\leq c_{H}\|f\|_{L^{\frac{1}{H}}}, \end{equation} where $c_{H}$ is a constant depending only on $H$, defined in \cite{Nualart(2006)}. Let $B^{H}(\phi)=\int_{0}^{\infty}\phi_{t}\mathrm{d}B_{t}^{H}$.
For a smooth and cylindrical random variable $F=f(B^{H}(\phi_{1}),B^{H}(\phi_{2}),\cdots,B^{H}(\phi_{d}))$, we define its Malliavin derivative as the $\mathcal{H}$-valued random variable \begin{equation*} \mathrm{D}F=\sum_{i=1}^{d}\frac{\partial f}{\partial x_{i}}\left(B^{H}(\phi_{1}),B^{H}(\phi_{2}),\cdots,B^{H}(\phi_{d})\right)\phi_{i}. \end{equation*} Let $f\in L^{p}(\Omega, \mathcal{F}, \mathbb{P})$ for each $p\geq 1$. The It\^{o}-type stochastic integral is the limit of Riemann sums defined in terms of the Wick product \citep{Duncan(2000)} \begin{equation}\label{eq8} \int_{0}^{T}f_{s}\mathrm{d}B^{H}_{s}=\lim_{\alpha_{n}\rightarrow 0}\sum_{k=0}^{n-1}f_{t_{k}}\diamond(B^{H}_{t_{k+1}}-B^{H}_{t_{k}}). \end{equation} We make use of the notation $\delta(u)=\int_{0}^{\infty}u_{t}\delta B^{H}_{t}$ and call $\delta(u)$ the divergence integral of $u$ with respect to the fBm $B_{\cdot}^{H}$. The relationship between the divergence integral and the pathwise integral is stated in the following formula \begin{equation*} \int_{0}^{T}u_{t}\mathrm{d}B_{t}^{H}=\int_{0}^{T}u_{t}\delta B_{t}^{H}+H(2H-1)\int_{0}^{T}\int_{0}^{t}\mathrm{D}_{s}u_{t}|t-s|^{2H-2}\mathrm{d}s\mathrm{d}t, \end{equation*} where `$\delta$' indicates that the integral is taken in the Skorokhod (divergence) sense, while the integral on the left-hand side is the pathwise one. Let $\|f\|_{\infty}=\sup_{x\in \mathbb{R}} |f(x)|$. We will make use of the following assumptions: \begin{enumerate}[i.] \item\label{asp1} The drift function $b(\cdot)$ satisfies a global Lipschitz condition; that is, there exists a positive constant $L>0$ such that \begin{equation*} |b(x)-b(y)|\leq L|x-y|, \quad x, y \in \mathbb{R}. \end{equation*} \item\label{asp2} The drift function $b(\cdot)$ is continuously differentiable, and $b(\cdot)$ and its derivative $b^{\prime}(\cdot)$ satisfy a polynomial growth condition: there exist a constant $C>0$ and $m\in \mathbb{N}$ such that \begin{equation*} |b(x)|+|b^{\prime}(x)|\leq C(1+|x|^{m}), \quad x\in \mathbb{R}.
\end{equation*} \item\label{asp3} There exists a constant $M>0$ such that \begin{equation*} \left(b(x)-b(y)\right)(x-y)\leq -M|x-y|^{2}, \quad x, y\in \mathbb{R}. \end{equation*} \item\label{asp4} The observation time frequency $\alpha_{n}=O(n^{-1+\frac{1}{\gamma}})$, $t_{n}=n\alpha_{n}\rightarrow \infty$ as $n\rightarrow \infty$, where $\gamma>\max\{1+m^{2}H,2\}$. \item\label{asp5} The kernel function $K(\cdot)$ is continuously differentiable and non-negative with support $[-1, 1]$. The bandwidth $h$ satisfies $h\rightarrow 0$ as $n\rightarrow \infty$. \item\label{asp6} $\|b(\cdot)\|_{\infty}+\|b^{\prime}(\cdot)\|_{\infty}<\infty$ and $\|K\|_{\infty}<\infty$. \end{enumerate} We define the shift operators $\theta_{t}: \Omega\rightarrow\Omega$ as \begin{equation*} \theta_{t}w(\cdot)=w(\cdot+t)-w(t), \quad t\in\mathbb{R}, w\in\Omega. \end{equation*} We summarize the ergodic property of the process $\{X_{t}, t\geq0\}$ in the following theorem; for a proof, see \cite{Saussereau(2014)}. See \cite{Hairer(2005),Hairer(2007),Haier(2011)} for more on the ergodic properties of diffusion processes. \begin{thm}\label{th1} Assume conditions \ref{asp2}, \ref{asp3} and \ref{asp4} hold. Let $\varphi$ be a continuously differentiable function such that \begin{equation*} |\varphi(x)|+|\varphi^{\prime}(x)|\leq C_{\varphi}(1+|x|^{p}), \quad x \in \mathbb{R}, \end{equation*} for some $C_{\varphi}>0$ and $p\in \mathbb{N}$. There exists a random variable $\bar{X}$ with finite moments of any order such that \begin{enumerate} \item We have \begin{equation*} \lim _{T \rightarrow \infty} \frac{1}{T} \int_{0}^{T} \varphi\left(X_{s}\right) \mathrm{d} s=\mathbb{E}(\varphi(\bar{X})), \end{equation*} almost surely.
\item If $\gamma>1+\left(m^{2}+p\right) H$ and $\gamma>p+1$, then \begin{equation}\label{eq9} \lim _{n \rightarrow \infty} \frac{1}{t_{n}} \int_{0}^{t_{n}}\left\{\sum_{k=0}^{n-1} \varphi\left(X_{t_{k}}\right) \mathbf{1}_{\left[t_{k}, t_{k+1}\right)}(s)\right\} \mathrm{d} s=\mathbb{E}(\varphi(\bar{X})), \end{equation} almost surely. \end{enumerate} \end{thm} Next, we give an upper bound for the Malliavin derivative of $X_{t}$. For more estimates of the $p$-th moment of $X_{t}$, one can refer to Proposition 4.1 in \cite{Hu(2019)}. \begin{prop}\label{prop1} Under conditions \ref{asp1}, \ref{asp2} and \ref{asp3}, the Malliavin derivative of the solution $X_{t}$ of Eq. (\ref{eq1}) satisfies, for all $0\leq s\leq t$, \begin{equation*} |\mathrm{D}_{s}X_{t}|\leq |\sigma|e^{-M(t-s)}. \end{equation*} \end{prop} \noindent\textbf{Proof of Proposition \ref{prop1}}. Note that \begin{equation*} \mathrm{D}_{s}X_{t}=\int_{s}^{t}b^{\prime}(X_{u})\mathrm{D}_{s}X_{u}\mathrm{d}u+\sigma. \end{equation*} Denote $Z_{t}=\mathrm{D}_{s}X_{t}$ for $t>s$. We can write the above equation as the following ordinary differential equation for $t \geq s$: \begin{equation*} \mathrm{d}Z_{t}=b^{\prime}(X_{t})Z_{t}\mathrm{d}t, \quad\quad Z_{s}=\sigma. \end{equation*} The solution of this equation is \begin{equation*} \mathrm{D}_{s}X_{t}=\sigma e^{\int_{s}^{t}b^{\prime}(X_{r})\mathrm{d}r}. \end{equation*} Moreover, differentiating $|Z_{t}|^{2}$ with respect to $t$ and using \ref{asp3} (which implies $b^{\prime}(x)\leq -M$), we have \begin{equation*} \frac{\mathrm{d}|Z_{t}|^{2}}{\mathrm{d}t}=2 b^{\prime}(X_{t})Z^{2}_{t}\leq -2M|Z_{t}|^{2}. \end{equation*} Then we obtain the desired result directly by Gronwall's inequality. \hfill$\square$ \section{Asymptotic behavior of the estimator}\label{sec3} In this section, we study the asymptotic behavior of the Nadaraya–Watson estimator of the drift function defined in Eq. (\ref{eq5}).
First note that \begin{equation*} X_{t_{k+1}}-X_{t_{k}}=\int_{t_{k}}^{t_{k+1}}\left(b(X_{s})-b(X_{t_{k}})\right)\mathrm{d}s+b(X_{t_{k}})\alpha_{n}+\sigma(B_{t_{k+1}}^{H}-B_{t_{k}}^{H}), \end{equation*} then the N-W estimator defined in Eq. (\ref{eq5}) can be decomposed into three components as follows: \begin{equation}\label{eq10} \begin{aligned} \hat{b}_{n,h}(x) =&\frac{\frac{1}{n\alpha_{n}}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\diamond(X_{t_{k+1}}-X_{t_{k}})}{\frac{1}{n}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)}\\ =&\frac{\frac{1}{n\alpha_{n}}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\int_{t_{k}}^{t_{k+1}}\left(b(X_{s})-b(X_{t_{k}})\right)\mathrm{d}s}{\frac{1}{n}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)}\\ &+\frac{\frac{1}{n}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)b(X_{t_{k}})}{\frac{1}{n}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)}\\ &+\frac{\frac{1}{n\alpha_{n}}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\left(\sigma\diamond(B_{t_{k+1}}^{H}-B_{t_{k}}^{H})\right)}{\frac{1}{n}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)}\\ =&\frac{I+II+III}{S_{n,h}(x)}.\\ \end{aligned} \end{equation} We first state some lemmas in order to obtain our main consistency results. \begin{lem}\label{lem1} Under the conditions \ref{asp1}-\ref{asp5}, we have \begin{equation*} S_{n,h}(x)\rightarrow \mathbb{E}\big(K_{h}(\bar{X}-x)\big), \end{equation*} almost surely, as $n\rightarrow\infty$. \end{lem} \noindent\textbf{Proof of Lemma \ref{lem1}}. By the properties of the kernel function $K(\cdot)$, there exists a constant $C_{K}<\infty$ such that, with $p=1$, \begin{equation*} |K(x)|+|K^{\prime}(x)|\leq C_{K}\big(1+|x|\big).
\end{equation*} Then by Theorem \ref{th1}, we have \begin{equation*} \begin{aligned} \frac{1}{n}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)&=\frac{1}{n\alpha_{n}}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\alpha_{n} =\frac{1}{t_{n}}\sum_{k=0}^{n-1}\int_{t_{k}}^{t_{k+1}}K_{h}(X_{t_{k}}-x)\mathrm{d}s\\ &=\frac{1}{t_{n}}\int_{0}^{t_{n}}\left\{\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\mathbf{1}_{\left[t_{k}, t_{k+1}\right)}(s)\right\}\mathrm{d}s. \end{aligned} \end{equation*} Therefore, the desired convergence result follows directly from Eq. (\ref{eq9}). \hfill$\square$ The following lemma gives a bound on the increments of the process $X$. \begin{lem}\label{lem2} Under the conditions \ref{asp1}, \ref{asp2} and \ref{asp4}, we have, for all $s\leq t\in [0,t_{n}]$, \begin{equation*} |X_{t}-X_{s}|\leq \left|\sigma\eta_{T}(t-s)^{\alpha}+b(X_{s})(t-s)\right|e^{L(t-s)}, \end{equation*} almost surely. \end{lem} \noindent\textbf{Proof of Lemma \ref{lem2}}. By using the Lipschitz condition on $b(\cdot)$ and Eq. (\ref{eq6}), we have \begin{equation*} \begin{aligned} &\sup_{s\leq t\in [0,t_{n}]}|X_{t}-X_{s}|\\ &\leq\sup_{s\leq t\in [0,t_{n}]}\int_{s}^{t}b(X_{u})\mathrm{d}u+\sigma\eta_{T}|t-s|^{\alpha}\\ &=\sup_{s\leq t\in [0,t_{n}]}\int_{s}^{t}\left(b(X_{u})-b(X_{s})\right)\mathrm{d}u+b(X_{s})|t-s|+\sigma\eta_{T}|t-s|^{\alpha}\\ &\leq L\int_{s}^{t}\sup_{s\leq t\in [0,t_{n}]}|X_{t}-X_{s}|\mathrm{d}u+b(X_{s})|t-s|+\sigma\eta_{T}|t-s|^{\alpha}. \end{aligned} \end{equation*} By Gronwall's inequality, we have \begin{equation*} \sup_{s\leq t\in [0,t_{n}]}|X_{t}-X_{s}|\leq \left|\sigma\eta_{T}(t-s)^{\alpha}+b(X_{s})(t-s)\right|e^{L(t-s)}, \end{equation*} which completes the proof. \hfill$\square$ Based on Lemma \ref{lem2}, we immediately obtain the following lemma, which shows that $I\rightarrow 0$ as $n\rightarrow \infty$.
\begin{lem}\label{lem3} Under the conditions \ref{asp1}-\ref{asp5}, we have \begin{equation*} \frac{1}{n\alpha_{n}}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\int_{t_{k}}^{t_{k+1}}\left(b(X_{s})-b(X_{t_{k}})\right)\mathrm{d}s\rightarrow 0 \quad \text{a.s.}. \end{equation*} \end{lem} \noindent\textbf{Proof of Lemma \ref{lem3}}. Based on the Lipschitz condition on $b(\cdot)$, we have \begin{equation*} \begin{aligned} &\frac{1}{n\alpha_{n}}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\int_{t_{k}}^{t_{k+1}}\left(b(X_{s})-b(X_{t_{k}})\right)\mathrm{d}s\\ \leq&\frac{L}{n\alpha_{n}}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\int_{t_{k}}^{t_{k+1}}\left|X_{s}-X_{t_{k}}\right|\mathrm{d}s\\ \leq&\frac{L}{n}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\sup_{s\in [t_{k},t_{k+1})}\left|X_{s}-X_{t_{k}}\right|. \end{aligned} \end{equation*} By using Lemma \ref{lem2}, $\alpha_{n}<1$ and $\alpha<H$, we have \begin{equation*} \begin{aligned} \sup_{s\in [t_{k},t_{k+1})}|X_{s}-X_{t_{k}}| &\leq \left|\sigma\eta_{T}\alpha_{n}^{\alpha}+b(X_{t_{k}})\alpha_{n}\right|e^{L\alpha_{n}}\\ &\leq |\sigma\eta_{T}+b(X_{t_{k}})|\alpha_{n}^{\alpha}e^{L\alpha_{n}}. \end{aligned} \end{equation*} By Theorem \ref{th1}, we have \begin{equation*} \begin{aligned} &\frac{L}{n}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\sup_{s\in [t_{k},t_{k+1})}\left|X_{s}-X_{t_{k}}\right|\\ \leq& LS_{n,h}(x)\sup_{k}\sup_{s\in [t_{k},t_{k+1})}|X_{s}-X_{t_{k}}|\\ =&O(\alpha_{n}^{\alpha}), \end{aligned} \end{equation*} which goes to $0$ as $n\rightarrow\infty$. \hfill$\square$ Based on a change of variables and the properties of $K(\cdot)$, we give the following lemma. \begin{lem}\label{lem4} Under the conditions \ref{asp5} and \ref{asp6}, we have \begin{equation*} \frac{\frac{1}{n}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)b(X_{t_{k}})}{\frac{1}{n}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)}\rightarrow b(x) \quad \text{a.s.}. \end{equation*} \end{lem} \noindent\textbf{Proof of Lemma \ref{lem4}}.
Write $X_{t_{k}}=x+u_{t_{k}}h$; by a change of variables, we have \begin{equation*} \begin{aligned} \frac{II}{S_{n,h}(x)}=&\frac{\frac{1}{n}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)b(X_{t_{k}})}{\frac{1}{n}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)}\\ =&\frac{\frac{1}{n}\sum_{k=0}^{n-1}K(u_{t_{k}})b(x+u_{t_{k}}h)}{\frac{1}{n}\sum_{k=0}^{n-1}K(u_{t_{k}})}. \end{aligned} \end{equation*} Taylor expanding $b(x+u_{t_{k}}h)$ around $x$, we obtain \begin{equation*} \begin{aligned} \frac{II}{S_{n,h}(x)}=&\frac{\sum_{k=0}^{n-1}K(u_{t_{k}})\left(b(x)+u_{t_{k}}hb^{\prime}(x+u_{t_{k}}h)+o(h)\right)}{\sum_{k=0}^{n-1}K(u_{t_{k}})}\\ =&b(x)+h\frac{\sum_{k=0}^{n-1}K(u_{t_{k}})u_{t_{k}}b^{\prime}(x+u_{t_{k}}h)}{\sum_{k=0}^{n-1}K(u_{t_{k}})}+o(h)\\ \rightarrow& b(x) \quad a.s., \end{aligned} \end{equation*} which completes the proof. \hfill$\square$ The following lemma shows that $III\rightarrow 0$, as $n\rightarrow\infty$. \begin{lem}\label{lem5} Under conditions \ref{asp4}, \ref{asp5} and \ref{asp6}, we have \begin{equation*} \frac{1}{n\alpha_{n}}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\left(\sigma \diamond(B_{t_{k+1}}^{H}-B_{t_{k}}^{H})\right)\rightarrow 0, \end{equation*} in probability, as $n\rightarrow\infty$. \end{lem} \noindent\textbf{Proof of Lemma \ref{lem5}}. Note that the It\^{o}–Skorokhod integral can be defined as the limit of Riemann sums via the Wick product, Eq. (\ref{eq8}); hence \begin{equation*} III=\frac{\sigma}{t_{n}}\int_{0}^{t_{n}}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\mathbf{1}_{\left[t_{k}, t_{k+1}\right)}(t)\mathrm{d}B_{t}^{H}. \end{equation*} From the properties of the It\^{o}–Skorokhod integral, we have \begin{equation*} \mathbb{E}\big(\int_{0}^{t_{n}}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\mathbf{1}_{\left[t_{k}, t_{k+1}\right)}(t)\mathrm{d}B^{H}_{t}\big)=0.
\end{equation*} As a consequence of Meyer's inequality \citep{Hu(2005)}, we have \begin{equation*} \begin{aligned} &\mathbb{E}\bigg(\int_{0}^{t_{n}}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\mathbf{1}_{\left[t_{k}, t_{k+1}\right)}(t)\mathrm{d}B^{H}_{t}\bigg)^{2}\\ \leq&C_{p}\big(\mathbb{E}\|K_{h}\|^{2}_{\mathcal{H}}+\mathbb{E}\|DK_{h}\|^{2}_{\mathcal{H}\otimes\mathcal{H}}\big), \end{aligned} \end{equation*} where $C_{p}$ is a constant and $K_{h}$ stands for the step-process integrand above. By Eq. (\ref{eq7}), we have \begin{equation*} \begin{aligned} &\mathbb{E}\bigg(\int_{0}^{t_{n}}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\mathbf{1}_{\left[t_{k}, t_{k+1}\right)}(t)\mathrm{d}B^{H}_{t}\bigg)^{2}\\ \leq&C_{p,H}\bigg[\bigg(\int_{0}^{t_{n}}\mathbb{E}\Big|\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\mathbf{1}_{[t_{k}, t_{k+1})}(t)\Big|^{\frac{1}{H}}\mathrm{d}t\bigg)^{2H}\\ &+\mathbb{E}\bigg(\int_{0}^{t_{n}}\int_{0}^{t}\Big|\sum_{k=0}^{n-1}K^{\prime}_{h}(X_{t_{k}}-x)\mathbf{1}_{[t_{k}, t_{k+1})}(t)\mathrm{D}_{s}X_{t_{k}}\Big|^{\frac{1}{H}}\mathrm{d}s\mathrm{d}t\bigg)^{2H}\bigg]. \end{aligned} \end{equation*} Under condition \ref{asp5}, we have \begin{equation*} \begin{aligned} \bigg(\int_{0}^{t_{n}}\mathbb{E}\Big|\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\mathbf{1}_{[t_{k}, t_{k+1})}(t)\Big|^{\frac{1}{H}}\mathrm{d}t\bigg)^{2H}\leq \|K_{h}\|^{2}_{\infty}t_{n}^{2H}=t_{n}^{2H}h^{-2}\|K\|^{2}_{\infty}. \end{aligned} \end{equation*} Under condition \ref{asp6}, by Proposition \ref{prop1} we have \begin{equation*} \begin{aligned} &\bigg(\int_{0}^{t_{n}}\int_{0}^{t}\Big|\sum_{k=0}^{n-1}K^{\prime}_{h}(X_{t_{k}}-x)\mathbf{1}_{[t_{k}, t_{k+1})}(t)\mathrm{D}_{s}X_{t_{k}}\Big|^{\frac{1}{H}}\mathrm{d}s\mathrm{d}t\bigg)^{2H}\\ \leq&\bigg(\int_{0}^{t_{n}}\int_{0}^{t}\Big|\sum_{k=0}^{n-1}K^{\prime}_{h}(X_{t_{k}}-x)\mathbf{1}_{[t_{k}, t_{k+1})}(t)\sigma e^{-M(t_{k}-s)}\Big|^{\frac{1}{H}}\mathrm{d}s\mathrm{d}t\bigg)^{2H} \\ \leq&\frac{1}{M^{2H}}\|K_{h}^{\prime}\|_{\infty}^{2}\sigma^{2}\bigg(\int_{0}^{t_{n}}(1-e^{-\frac{Mt}{H}})\mathrm{d}t\bigg)^{2H}\\ =&O(h^{-4}t_{n}^{2H}).
\end{aligned} \end{equation*} Combining the above two estimates, we have \begin{equation*} \mathbb{E}\bigg(\int_{0}^{t_{n}}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\mathbf{1}_{\left[t_{k}, t_{k+1}\right)}(t)\mathrm{d}B^{H}_{t}\bigg)^{2}\leq O(h^{-4}t_{n}^{2H}). \end{equation*} By Chebyshev's inequality, for $\gamma\in(H-1,0)$ we have \begin{equation*} \begin{aligned} &\mathbb{P}\bigg(\bigg|\frac{1}{t_{n}}\sum_{k=0}^{n-1}K_{h}(X_{t_{k}}-x)\left(\sigma\diamond(B^{H}_{t_{k+1}}-B^{H}_{t_{k}})\right)\bigg|\geq t_{n}^{\gamma}h^{-2}\bigg)\leq O(t_{n}^{2H-2-2\gamma}), \end{aligned} \end{equation*} which goes to $0$ as $n\rightarrow\infty$. \hfill$\square$ The following theorem establishes the consistency of the estimator. \begin{thm}\label{th2} Under conditions \ref{asp1}-\ref{asp6}, the estimator $\hat{b}_{n,h}(x)$ defined in Eq. (\ref{eq5}) converges to $b(x)$ in probability, as $n\rightarrow\infty$. \end{thm} \noindent\textbf{Proof of Theorem \ref{th2}}. For $H\in(\frac{1}{2},1)$, the estimator defined in Eq. (\ref{eq5}) is equivalent to the estimator defined in Eq. (\ref{eq10}). By Lemma \ref{lem1} and Lemma \ref{lem3}, we have \begin{equation*} \frac{I}{S_{n,h}(x)}\rightarrow 0 \quad \text{a.s.} \end{equation*} By Lemma \ref{lem1} and Lemma \ref{lem4}, we have \begin{equation*} \frac{II}{S_{n,h}(x)}\rightarrow b(x) \quad \text{a.s.} \end{equation*} By Lemma \ref{lem1} and Lemma \ref{lem5}, we have \begin{equation*} \frac{III}{S_{n,h}(x)}\rightarrow 0 \quad \text{in probability}. \end{equation*} Combining the above three limits, we obtain the desired result. \hfill$\square$
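Although the setting above is a fractional SDE, the estimator itself is straightforward to compute. The following sketch is ours and not part of the paper's results: it evaluates a Nadaraya--Watson-type kernel drift estimator of the form analyzed above on a noise-free Euler path of $\mathrm{d}X_t=-\theta X_t\,\mathrm{d}t$ (the Gaussian kernel, the test drift, and all parameter values are illustrative assumptions), where the output should be close to $b(x)=-\theta x$.

```python
import numpy as np

def nw_drift_estimator(X, dt, x, h):
    """Nadaraya-Watson-type drift estimator at the point x with bandwidth h:
    a weighted average of the finite-difference drift proxies (X_{k+1}-X_k)/dt,
    with (unnormalized) Gaussian kernel weights centered at x."""
    incr = np.diff(X) / dt              # (X_{k+1} - X_k) / dt
    u = (X[:-1] - x) / h
    w = np.exp(-0.5 * u ** 2)           # Gaussian kernel weights
    return np.sum(w * incr) / np.sum(w)

# Noise-free Euler path of dX_t = -theta * X_t dt (sigma = 0 for determinism)
theta, dt, n = 1.0, 1e-3, 20000
X = np.empty(n + 1)
X[0] = 2.0
for k in range(n):
    X[k + 1] = X[k] - theta * X[k] * dt

b_hat = nw_drift_estimator(X, dt, x=1.0, h=0.05)
print(b_hat)  # close to b(1.0) = -1.0
```

With the fractional noise term switched on, the same function applies unchanged; consistency then requires $t_n\rightarrow\infty$ and $h\rightarrow 0$ at suitable rates, as in Theorem \ref{th2}.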
\section{Introduction} The set $\mathcal{Q}$ of quasi-alternating links was defined by Ozsv\'ath and Szab\'o \cite{Osvath-Szabo:DoubleCovers} as the smallest set of links satisfying the following:\\ \begin{minipage}{3.8in} \begin{itemize} \item the unknot is in $\mathcal{Q}$ \item if a link $\mathcal{L}$ has a diagram $L$ with a crossing $c$ such that \begin{enumerate} \item both smoothings of $c$, $L_0$ and $L_\infty$, are in $\mathcal{Q}$ \item $\det(L_0)\neq 0 \neq \det(L_\infty)$ \item $\det(L)=\det(L_0)+\det(L_\infty)$ \end{enumerate} then $\mathcal{L}$ is in $\mathcal{Q}$. \end{itemize} \end{minipage} \begin{minipage}{2.4in} \begin{center} \includegraphics[width=2.4 in]{smoothing.pdf} \end{center} \end{minipage}\\ The set $\mathcal{Q}$ includes the class of non-split alternating links. Like alternating links, quasi-alternating links are homologically thin for both Khovanov homology and knot Floer homology \cite{Manolescu}. The branched double covers of quasi-alternating links are L-spaces \cite{Osvath-Szabo:DoubleCovers}. These properties make $\mathcal{Q}$ an interesting class to study from the knot homological point of view. However, the recursive definition makes it difficult to decide whether a given knot or link is quasi-alternating. The first author and Kofman showed that the quasi-alternating property is preserved by replacing a quasi-alternating crossing by any rational tangle extending the crossing, and they used this to give a sufficient condition for pretzel links to be quasi-alternating \cite{Champanerkar}. Subsequently, Greene showed that this condition was necessary and provided the first examples of homologically thin, non-quasi-alternating links \cite{Greene}. Together, the results in \cite{Champanerkar} and \cite{Greene} provide a complete classification of quasi-alternating pretzel links.
Using the structure and symmetry of Montesinos links and their determinants, we generalize the sufficient conditions given in \cite{Champanerkar} and \cite{Greene} to provide a sufficient condition for Montesinos links to be quasi-alternating, in terms of their rational parameters (Theorem \ref{thm:qam}). Using recent results on left-orderable groups, Heegaard Floer L-spaces and branched double covers of Montesinos links, we also obtain conditions on the rational parameters under which a Montesinos link is non-quasi-alternating (Theorem \ref{thm:nqam}). Furthermore, we discuss families of examples which are not covered by the above results. Our results include all known classes of quasi-alternating links appearing in \cite{Champanerkar}, \cite{Greene} and \cite{Widmer}. See also the recent preprint of Qazaqzeh, Chbili, and Qublan \cite{Khaled}.\footnote{The results in this paper were obtained independently of \cite{Khaled}.} Watson gives an iterative construction for obtaining every quasi-alternating Montesinos link using surgery on a strongly invertible L-space knot \cite{Watson}. It is an interesting problem to determine the relation between Watson's construction and the conditions in Theorem \ref{thm:qam}. This paper is organized as follows: Section 2 defines Montesinos links and related notation, Section 3 proves results about the structure and symmetry of Montesinos links, Section 4 proves the determinant formula for Montesinos links, and Sections 5 and 6 prove the main theorems and discuss examples. \subsection*{Acknowledgements} The authors would like to thank Joshua Greene, Liam Watson and Steven Boyer for helpful conversations. \section{Notation} \subsection{ Fractions} For integers $a_i, 1\leq i \leq m$, $a_1 \neq 0$, let $[a_m,a_{m-1},\dots,a_1]$ denote the continued fraction $$[a_m,a_{m-1},\dots,a_1]:= a_m+\cfrac{1}{a_{m-1}+\cfrac{1}{\ddots+\cfrac{1}{a_1}}} \ .
$$ Let $\displaystyle{t=\frac{\alpha}{\beta}}\in \mathbb{Q}$ with $\alpha,\beta$ relatively prime and $\beta > 0$. The \emph{floor} of $t$ is $\displaystyle{\left\lfloor t \right\rfloor = \frac{\alpha - (\alpha \mod \beta)}{\beta}}$, and the \emph{fractional part} of $t$ is $\displaystyle{\{t\} = \frac{\alpha \mod \beta}{\beta} < 1}$. For $t \neq 0$, define $ \displaystyle{\widehat{t} = \frac{1}{\{\frac{1}{t}\} } > 1}$. For example, if $t=\frac{-29}{9}$ then $\left\lfloor t \right\rfloor = -4,\ \{ t \} = \frac{7}{9}, \mathrm{\ and \ } \widehat{t}=\frac{29}{20}$. Note that if $t > 1$ then $\widehat{t}=t$. \subsection{ Rational tangles} We follow the original exposition due to Conway \cite{Conway}. A \emph{tangle} is a portion of a link diagram enclosed by a circle that meets the link in exactly four points. The four ends of a tangle are identified with the compass directions NW, NE, SW, SE. Given a pair of tangles $s$ and $t$, the \emph{tangle sum}, denoted $s+t$, is formed by joining the NE, SE ends of $s$ to the NW, SW ends, respectively, of $t$. The elementary tangles $0,\pm 1, \infty$ are shown in Figure \ref{fig:tangleexample}. Adding $n$ copies of the tangle $1$ and $\overline{1}=-1$ results in the \emph{integral tangles} $n=1+1+\dots+1$ and $\overline{n}=-n=\overline{1}+\overline{1}+\dots+\overline{1}$, respectively. The \emph{tangle product}, denoted $st$, is the tangle obtained by first reflecting the diagram of $s$ in the plane through its NW-SE axis and then adding $t$. If $a_1,\dots,a_m$ are integral tangles, the tangle $a_1 a_2 \dots a_m:=((\dots(a_1 a_2)a_3\dots a_{m-1})a_m)$ is called a \emph{rational tangle}. See Figure \ref{fig:tangleexample}. 
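For instance, the tangle $324$, whose closures appear in Figure \ref{fig:tangleclosure}, is the iterated product $(32)4$; under the correspondence with continued fractions described below, it carries the fraction $$[4,2,3]=4+\cfrac{1}{2+\cfrac{1}{3}}=\frac{31}{7},$$ which is the first rational parameter of the Montesinos link in Figure \ref{fig:montesinos}.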
\begin{figure}[htbp] \begin{center} \includegraphics{tangleexample.pdf} \caption{Rational tangles are composed of sums and products of elementary tangles.} \label{fig:tangleexample} \end{center} \end{figure} Conway devised the following correspondence between rational tangles and continued fractions: Let $t\neq 0, \pm 1$ be a rational number, and let $a_1, \ldots , a_m$ be integers such that $a_1 \geq 2$, $a_k \geq 1$ for $k=2,\ldots, m-1$, and $a_m \geq 0$. If $t=[a_m,a_{m-1},\dots,a_1]$, then $t$ corresponds to the positive rational tangle $a_1a_2\ldots a_m$. If $t=[-a_m,-a_{m-1},\dots,-a_1]$, then $t$ corresponds to the negative rational tangle $\overline{a_1}\overline{a_2}\ldots \overline{a_m}$, where $\overline{a}$ denotes $-a$. Every rational tangle, except for the elementary tangles $0$, $\pm 1$, and $\infty$, has a continued fraction expansion as one of the above \cite{Conway}. The product of a rational tangle with the zero tangle inverts the associated fraction; if $a_1\dots a_m$ corresponds to $t$, then $a_1\dots a_m0$ corresponds to $1/t$. Notice that $a_1\dots a_m a_{m+1}$ is equivalent to $a_1\dots a_m0 + a_{m+1}$, which corresponds to the fraction $a_{m+1}+1/t$, and this explains the correspondence between rational tangles and continued fractions. A \emph{flype} is a tangle equivalence between tangles $1+t$ and $t_h+1$, where $t_h$ is the rotation of tangle $t$ about its horizontal axis. A \emph{positive flype} is the operation that replaces $t$ by the equivalent tangle $1+t_h+\overline{1}$, and a \emph{negative flype} results in $\overline{1}+t_h+1$. For a rational tangle $t$, the tangle $t_h$ can be seen to be equivalent to $t$ by a sequence of generalized flype moves that transpose tassels above and below the horizontal axis. There are two ways to join the free ends of a tangle (without introducing further crossings) to form a link. 
Let $1*t$ denote the \emph{vertical closure} of tangle $t$ obtained by joining the NW end to the NE end and joining the SW end to the SE end. Joining the NW/SW ends and the NE/SE ends produces the \emph{horizontal closure} of $t$, which is isotopic to $1*\overline{t}0$. See Figure \ref{fig:tangleclosure}. A \emph{rational link} is the closure of a rational tangle. \begin{figure}[t] \begin{center} \includegraphics{tangleclosures.pdf} \caption{The vertical closure (left) and horizontal closure (center) of tangle $324$, the latter of which is equivalent to the vertical closure of $\overline{324}0$ (right).} \label{fig:tangleclosure} \end{center} \end{figure} \subsection{ Montesinos links } For $i=1,\ldots,p$, let $t_{i}\neq 0, \pm 1$ be a rational number with a continued fraction expansion as above, and let $e$ be an integer. A \emph{Montesinos link} is defined, using Conway notation, as $M(e;t_1, \ldots, t_p)=1*(e+t_10+\ldots +t_p0)$. See Figure \ref{fig:montesinos}; the dotted circle labeled $t_i$ contains tangle $t_i0$. We define $\displaystyle{\varepsilon= e + \sum_{i=1}^p \left\lfloor\frac{1}{t_i}\right\rfloor}$. Note that this presentation of Montesinos links differs slightly from that of Burde-Zieschang \cite{Burde-Zieschang} and the one used by Greene \cite{Greene} in that the sign of $e$ is reversed. For example, Figure \ref{fig:montesinosvar} illustrates the isotopy taking the link of Figure \ref{fig:montesinos} into the form of a Montesinos link used in \cite{Greene}. If $t_i$ were $\pm 1$, then the application of a flype would move the crossing left, where it could be absorbed by the parameter $e$. \begin{figure}[h!]
\begin{center} \includegraphics{montesinosexample.pdf} \caption{Montesinos link $M(3;\frac{31}{7},\frac{5}{16},\frac{-29}{9}) =M(3;324,530,\overline{2}\overline{4}\overline{3})=M(5;\frac{31}{7},\frac{5}{1},\frac{29}{20})$.} \label{fig:montesinos} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[height=3in]{montesinosvariant.pdf} \caption{Rotate the link of Figure \ref{fig:montesinos} clockwise 90 degrees, isotope the $-e$ crossings to the left side, and apply flypes to put each tangle into braid form.} \label{fig:montesinosvar} \end{center} \end{figure} \section{Classification of Montesinos links} Let $L$ be the Montesinos link $M(e;t_1,\dots,t_p)=M(e;\alpha_1/\beta_1,\ldots,\alpha_p/\beta_p)$. \begin{prop}\label{prop:2tangle} If $p<3$, then $L$ is isotopic to a rational link. \end{prop} \begin{proof} Let $[a_k,\ldots,a_1]$ and $[b_{\ell},\ldots,b_1]$ be continued fraction expansions of $t_1$ and $t_2$, respectively. Applying $e$ flypes to the first tangle moves the $e$ crossings between the two tangles. Applying isotopies and flypes to the tassels $b_\ell, b_{\ell-1},\dots,b_1$, in that order, puts the diagram into rational tangle form. See Figure \ref{fig:2tanglemontesinos}. The parity of $k+\ell$ determines the appropriate tangle closure. It follows that $L$ is isotopic to the rational link $1*t$ if $k+\ell$ is odd and $1*\overline{t}0$ if $k+\ell$ is even, where $t=[b_1,\dots,b_\ell,e,a_k,\dots,a_1]$. \end{proof} \begin{figure}[htbp] \begin{center} \includegraphics{2tanglemontesinos.pdf} \caption{Two-tangle Montesinos links are rational links.} \label{fig:2tanglemontesinos} \end{center} \end{figure} If $p\geq 3$ then $L$ is classified by the rational number $e+\sum_{i=1}^p \beta_i/\alpha_i$ and the ordered set of fractions $\left(\left\{\beta_1/\alpha_1\right\},\ldots,\left\{\beta_p/\alpha_p\right\}\right)$ up to cyclic permutation and reversal of order \cite{Bonahon} (see also \cite{Burde-Zieschang}).
It follows that $\varepsilon$ defined above is an invariant of Montesinos links. \begin{prop} \label{lem:mont-reduce} The Montesinos link $ M(e; t_1, \dots, t_p)$ is isotopic to $M(\varepsilon;\widehat{t_1}, \ldots, \widehat{t_p})$. \end{prop} \begin{proof} A rational tangle $t$ is equivalent to the tangle sum of its integral and fractional parts. In particular, a positive rational tangle $a_1\dots a_m$ is equivalent to $a_m + a_1\dots a_{m-1}0$, where $a_m$ and $a_1\dots a_{m-1}0$ correspond to $\lfloor t\rfloor$ and $\{t\}$, respectively. A negative rational tangle $\overline{a_1}\dots\overline{a_m}$ is equivalent to $\overline{a_m}+\overline{a_1}\dots\overline{a_{m-1}}0$, and a negative flype results in the equivalent tangle $\overline{1}+(\overline{a_m}+\overline{a_1}\dots\overline{a_{m-1}}0)+1$. But $(\overline{1}+\overline{a_m})$ and $(\overline{a_1}\dots\overline{a_{m-1}}0+1)$ are equivalent to $\lfloor t\rfloor$ and $\{t\}$. The proposition follows by applying the above tangle decomposition to each tangle of the Montesinos link: \begin{align*} M(e; t_1, \dots, t_p) &= 1*(e + t_10 + \dots + t_p0) \\ & = 1*\left(e + \left(\left\lfloor t_10 \right\rfloor + \{t_10 \} \right) + \dots + \left(\left\lfloor t_p0 \right\rfloor + \{t_p0 \} \right) \right) \\ &=1*\left( \varepsilon + \widehat{t_1}0 + \dots + \widehat{t_p}0 \right) = M(\varepsilon;\widehat{t_1}, \dots, \widehat{t_p}).\qedhere \end{align*} \end{proof} The link $M(\varepsilon;\widehat{t_1}, \ldots, \widehat{t_p})$ is known as the {\it reduced form} of the Montesinos link $M(e; t_1, \dots, t_p)$. \begin{lemma} \label{lem:flype} Let $t_i=\frac{\alpha}{\beta}$, for some $i$, and let $e$ be any integer. \begin{enumerate} \item (Positive flype) If $t_i > 0$, then $M(e;t_1,\ldots,t_p)=M(e+1;t_1,\ldots,t_{i-1},t_i^f,t_{i+1},\ldots,t_p)$, where $\displaystyle{t_i^f=\frac{\alpha}{\beta- \alpha}}$.
\item (Negative flype) If $t_i < 0$, then $M(e;t_1,\ldots,t_p)=M(e-1;t_1,\ldots,t_{i-1},t_i^f,t_{i+1},\ldots,t_p)$, where $\displaystyle{t_i^f=\frac{\alpha}{\beta + \alpha}}$. \end{enumerate} \end{lemma} \begin{proof} Suppose $t_i>0$. In Conway notation, $$M(e;t_1,\ldots,t_p)=1*(e+t_10+\ldots+t_{i-1}0+t_i0+t_{i+1}0+\dots +t_p0).$$ A positive flype of the first $i$ rational tangles $t_1,\ldots,t_i$ results in the equivalent link $$1*(e+1+(t_10+\ldots+t_{i-1}0+t_i0)_h+\overline{1}+t_{i+1}0+\dots +t_p0).$$ The horizontal rotation of a tangle sum is the sum of the summands horizontally rotated. This fact and the invariance of rational tangles under horizontal rotation imply that the link is equivalent to $$1*(e+1+t_10+\ldots+t_{i-1}0+t_i0+\overline{1}+t_{i+1}0+\dots +t_p0).$$ Furthermore, $$t_i0+\overline{1}=(\alpha/\beta)0+\overline{1}=\beta/\alpha-1=(\beta-\alpha)/\alpha=(\alpha/(\beta-\alpha))0=t_i^f0,$$ as required. The case $t_i<0$ follows similarly, using a negative flype. \end{proof} \begin{prop} \label{prop:adq-alt} Let $L=M(e;t_1, \ldots, t_p)$ be a Montesinos link and $\varepsilon$ as above. \begin{enumerate} \item If $\displaystyle{\left|\varepsilon + \frac{p}{2}\right|>\frac{p}{2}-1}$, then $L$ has an alternating diagram. \item If $\displaystyle{\left|\varepsilon + \frac{p}{2}\right|<\frac{p}{2}-1}$, then $L$ has a non-alternating and adequate diagram. \end{enumerate} \end{prop} \begin{proof} $L$ is equivalent to $ L'=M(\varepsilon;\widehat{t_1}, \ldots, \widehat{t_p})$ by Proposition \ref{lem:mont-reduce} above. The inequality $\left|\varepsilon + \frac{p}{2}\right|>\frac{p}{2}-1$ implies that $\varepsilon\geq 0$ or $\varepsilon\leq -p$. If $\varepsilon\geq 0$, then the reduced form $L'$ is alternating since the tangles $\widehat{t_i}$ are positive for all $i=1,\ldots, p$.
If $\displaystyle{\varepsilon \leq -p}$, then applying a positive flype to each of the $p$ tangles of $L'$, as in Lemma \ref{lem:flype}, yields an alternating diagram with all negative tangles. This proves the first case. For the second case, suppose $t_i=\alpha_i/\beta_i$, where $\alpha_i>0$ for all $i=1,\dots,p$. Then $L$ is equivalent to $L' = M\left(\varepsilon; \alpha_1/(\beta_1\mod{\alpha_1}), \dots,\alpha_p/(\beta_p\mod{\alpha_p})\right)$. Since $\left|\varepsilon + \frac{p}{2}\right|<\frac{p}{2}-1$, we have $-p+1 < \varepsilon < -1$, and hence $1 < |\varepsilon| < p-1$. Applying a positive flype to each of the last $m=|\varepsilon|$ tangles of $L'$ results in an equivalent link $\displaystyle{L''=M\left(0;r_1,\dots,r_n,s_1,\dots,s_m\right)}$, where $n=p-m$, $\displaystyle{r_i=\frac{\alpha_i}{\beta_i\mod{\alpha_i}}>0}$ for $i=1,\dots,n$, and $\displaystyle{s_j=\frac{\alpha_j}{(\beta_j\mod{\alpha_j})-\alpha_j}<0}$ for $j=1, \dots, m$. Hence $L''$ has at least two positive tangles and at least two negative tangles. It follows that the reduced form for $L''$ is non-alternating and adequate. \end{proof} For a rational tangle $t=a_1\ldots a_m$ as above, let $\overline{t}=\overline{a_1}\overline{a_2}\ldots\overline{a_m}$ denote its reflection. \begin{lemma} \label{lem:ref-sym} Let $L=M(e;t_1,\ldots,t_p)$ be a Montesinos link and $L^r=M(-e;\overline{t_1},\ldots,\overline{t_p})$ denote its reflection. Then $\varepsilon(L^r)=-\varepsilon(L)-p$. \end{lemma} \begin{proof} The continued fraction expansion of $t$ implies that the reflection of $t$ is $\overline{t}=-t$. It follows that $\left\lfloor 1/\,\overline{t}\right\rfloor=\left\lfloor -1/t\right\rfloor=-\left\lfloor 1/t\right\rfloor-1$. Hence $$\varepsilon(L^r)= -e+\sum_{i=1}^p \left\lfloor 1/\,\overline{t_i}\right\rfloor = -e+\sum_{i=1}^p \left( -\left\lfloor 1/t_i\right\rfloor-1 \right) =\left(-e-\sum_{i=1}^p \left\lfloor 1/t_i\right\rfloor\right) -p = -\varepsilon(L)-p.
\qedhere$$ \end{proof} \section{Determinant of Montesinos links} The determinant of rational and Montesinos links follows directly from Conway's \emph{determinant fraction} identities for the tangle sum $t_{a+b}$ and product $t_{ab}$ given in \cite{Conway}: \begin{equation*} \label{det-sum} \frac{det(1* t_{a+b})}{det(1* t_{a+b}0)}=\frac{det(1* t_{a})}{det(1* t_{a}0)}+\frac{det(1* t_{b})}{det(1* t_{b}0)} \ \mathrm{and} \ \frac{det(1* t_{ab})}{det(1* t_{ab}0)}=\frac{det(1* t_{a}0)}{det(1* t_{a})}+\frac{det(1* t_{b})}{det(1* t_{b}0)}, \end{equation*} where $det(K)$ is Conway's determinant. The usual determinant is $\det(K) = |det(K)|$ (Section 7 in \cite{Conway}). We derive the formula for the determinant of Montesinos links (see also \cite{Asaeda}). \begin{prop} $\displaystyle{\det\left(M\left(e;\frac{\alpha_1}{\beta_1},\ldots,\frac{\alpha_p}{\beta_p}\right)\right)= \left| \left( \prod_{i=1}^p \alpha_i \right) \left(e+\sum_{i=1}^p \frac{\beta_i}{\alpha_i} \right) \right |}$. \end{prop} \begin{proof} Let $t=\alpha/\beta$ be the rational tangle $a_1 a_2 \dots a_m$, as above. Then it follows by induction on $m$ and by the determinant fraction identity for the product that $det(1*t)=\alpha$ and $det(1*t0)=\beta$. Let $\displaystyle{t_i=\frac{\alpha_i}{\beta_i}}$, so $\displaystyle{\frac{det(1*t_i)}{det(1*t_i0)}=\frac{\alpha_i}{\beta_i}}$. Since $1*(e+t_10+\ldots+t_p0)0= 1*t_1 \ \# \ldots \# \ 1*t_p$, we have $det(1*(e+t_10+\ldots+t_p0)0)=det(1*t_1) \times \ldots \times det(1*t_p) = \prod_{i=1}^p \alpha_i$. Using the determinant fraction identity for the sum, we get \begin{align*} \frac{det(1*(e+t_10+\ldots+t_p0))}{det(1*(e+t_10+\ldots+t_p0)0)}&= \frac{det(1*e)}{det(1*e0)}+\sum_{i=1}^p \frac{det(1*t_i0)}{det(1*t_i00)},\\ \det(1*(e+t_10+\ldots+t_p0))&= \left| \left( \prod_{i=1}^p \alpha_i \right) \left( e + \sum_{i=1}^p \frac{\beta_i}{\alpha_i} \right)\right|.
\qedhere \end{align*} \end{proof} \section{Quasi-alternating Montesinos links} \begin{prop} Let $s, r_i\neq 1$ be positive rational numbers for $i=1,\ldots, n$. Then the Montesinos link $M(0;r_1, \ldots, r_n, -s)$ is quasi-alternating if $s > \min\{r_1,\ldots, r_n\}$. The statement is true for any position of the tangle $-s$. \label{thm:e=0} \end{prop} \begin{proof} We will prove the statement by induction on $n$. For $n=1$, Proposition \ref{prop:2tangle} implies that $M(0;r,-s)$ is a rational link, which is quasi-alternating when $r \neq s$ (for $r=s$, this gives the two-component unlink, which is not quasi-alternating). This proves the base case. Let $s > \min\{r_1,\ldots,r_n\}$. By the induction hypothesis, $M(0;r_1,\ldots, r_n, -s)$ is quasi-alternating. Let $L$ be the diagram $M(0;r_1, \ldots, r_n, 1,-s)$ and let $c$ be the single crossing to the left of the $-s$ tangle. $L_{\infty}=M(0;r_1,\ldots, r_n, -s)$. $L_0$ naturally splits as a connect sum of horizontal closures of tangles of the type $t0$, for a rational tangle $t$. Since the horizontal closure of $t0$ is isotopic to $1*\overline{t}$, we have $L_0=1*\overline{r_1} \# 1*\overline{r_2}\# \ldots \# 1*\overline{r_n} \# 1*s$. See Figure \ref{fig:connectsum}. \begin{figure}[htbp] \begin{center} \includegraphics[height=2in]{connectsum.pdf} \caption{The link $L_0$ is a connect sum of horizontal closures of rational tangles.} \label{fig:connectsum} \end{center} \end{figure} Let $r_i = \alpha_i/\beta_i$ and $s=\alpha/\beta$, where all the $\alpha$'s and $\beta$'s are positive integers.
By the formula for the determinant of rational and Montesinos links, \begin{align*} \mathrm{det}(L_0)= \mathrm{det}(1*s) \left (\prod_{i=1}^n \mathrm{det}(1*\overline{r_i}) \right ) = \left|\alpha \prod_{i=1}^n \alpha_i\ \right| ,\ \ \mathrm{det}(L_{\infty})= \left| \alpha \prod_{i=1}^n \alpha_i \left ( \sum_{i=1}^n \frac{\beta_i}{\alpha_i}-\frac{\beta}{\alpha} \right ) \right |, \end{align*} and $\det(L_\infty)\neq 0$ because $L_\infty = M(0;r_1,\dots,r_n,-s)$ is assumed to be quasi-alternating. Since $s > \min\{r_1,\ldots,r_n\}$, we have $\displaystyle{ \left( \sum_{i=1}^n \frac{1}{r_i}-\frac{1}{s} \right ) = \left( \sum_{i=1}^n \frac{\beta_i}{\alpha_i}-\frac{\beta}{\alpha} \right ) >0}$. Hence, \begin{align*} \mathrm{det}(L_0)+\mathrm{det}(L_{\infty}) = \alpha \prod_{i=1}^n \alpha_i \left (1+ \sum_{i=1}^n \frac{\beta_i}{\alpha_i}-\frac{\beta}{\alpha} \right ) = \mathrm{det}(L). \end{align*} $L_0$ is quasi-alternating by Lemma 2.3 in \cite{Champanerkar} and $L_{\infty}$ is quasi-alternating by the induction hypothesis, hence $L$ is quasi-alternating at the crossing $c$. Using Theorem 2.1 in \cite{Champanerkar}, we can extend $c$ to a rational tangle. This shows that if $s > \min\{r_1,\ldots, r_{n+1}\}$ then $M(0;r_1,\ldots, r_{n+1}, -s)$ is quasi-alternating. Since the argument did not use the position of the tangle $-s$, the same conclusion holds for any position of that tangle. \end{proof} \begin{remark} Unlike the statement of Proposition 2.2 in \cite{Greene}, the condition $s>\min\{r_1,\dots,r_n\}$ appearing above is not a necessary condition. For example, $$ M(0;2,7,-4)= M(-1;2,7,4/3)=M(0;2,-7/6,4/3).$$ The leftmost diagram satisfies the condition of Proposition \ref{thm:e=0}, and, hence, it is quasi-alternating. However, the rightmost diagram fails to satisfy the above condition.
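In detail, both equalities are instances of Lemma \ref{lem:flype}: a negative flype applied to the tangle $-4=\frac{-4}{1}$ replaces it by $$\frac{-4}{1+(-4)}=\frac{4}{3}$$ and decreases $e$ by one, giving $M(-1;2,7,4/3)$, while a positive flype applied to the tangle $7=\frac{7}{1}$ replaces it by $\frac{7}{1-7}=-\frac{7}{6}$ and increases $e$ by one, giving $M(0;2,-7/6,4/3)$.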
\end{remark} We will use Proposition \ref{thm:e=0} to prove a sufficient condition for any Montesinos link to be quasi-alternating, in terms of the invariant $\varepsilon$ defined above. Recall that for $0\neq t=\displaystyle{\alpha/\beta} \in \mathbb{Q}$, $$\widehat{t}=\frac{1}{\{ \frac{1}{t} \} },\quad t^f=\frac{\alpha}{\beta- \alpha} \mathrm{\ if\ } t >0,\quad t^f=\frac{\alpha}{\beta + \alpha} \mathrm{\ if\ } t <0,\quad \varepsilon= e + \sum_{i=1}^p \left\lfloor\frac{1}{t_i}\right\rfloor. $$ \begin{theorem} \label{thm:qam} Let $L=M(e;t_1, \ldots, t_p)$ be a Montesinos link. Then $L$ is quasi-alternating if \begin{enumerate} \item $\varepsilon > -1$, or \item $\varepsilon=-1$ and $|\widehat{t_{i}}^f| > \widehat{t_j}$ for some $i \neq j$, or \item $\varepsilon < 1-p$, or \item $\varepsilon=1-p$ and $|\widehat{t_{i}}^f| < \widehat{t_j}$ for some $i \neq j$. \end{enumerate} \end{theorem} \begin{proof} Since reflections of quasi-alternating links are quasi-alternating, it is enough to consider $L$ or its reflection. By the symmetry of $\varepsilon$ under reflections proved in Lemma \ref{lem:ref-sym}, it suffices to consider the case when $\varepsilon \geq -p/2$. Cases (3) and (4) follow from cases (1) and (2), respectively. If $\varepsilon > -1$ then, by Proposition \ref{prop:adq-alt}, $L$ has an alternating diagram, and hence it is quasi-alternating. If $\varepsilon=-1$ then $L=M(-1;\widehat{t_1},\ldots,\widehat{t_p})$. The condition $|\widehat{t_{i}}^f| > \widehat{t_{j}}$ for some $i \neq j$ implies that we can use a positive flype on the tangle $\widehat{t_{i}}$ to convert $L$ to an equivalent link which satisfies the condition in Proposition \ref{thm:e=0}. \end{proof} \subsection{Examples} \begin{enumerate} \item $M(3;\frac{31}{7},\frac{5}{16},\frac{-29}{9})$, for which $\varepsilon = 3 + \lfloor\frac{7}{31}\rfloor + \lfloor\frac{16}{5}\rfloor + \lfloor\frac{9}{-29}\rfloor = 3 + 0 + 3 - 1 = 5 > -1$; this link is quasi-alternating by case 1 of Theorem \ref{thm:qam}.
\\ \item $M(-1; \frac{3}{2}, \frac{4}{3}, \frac{7}{4})$, which is in reduced form; i.e., $\widehat{t_i}=t_i$. $|\widehat{t_1}^f|=\frac{3}{1}$, $|\widehat{t_2}^f|=\frac{4}{1}$, $|\widehat{t_3}^f|=\frac{7}{3}$. Since $|\widehat{t_1}^f|>\widehat{t_2}$, this link is quasi-alternating by case 2 of Theorem \ref{thm:qam}. In particular, $M(-1; \frac{3}{2}, \frac{4}{3}, \frac{7}{4}) = M(0;\frac{-3}{1},\frac{4}{3}, \frac{7}{4})$, by applying a positive flype to the first tangle. The resulting link is quasi-alternating by Proposition \ref{thm:e=0}. \end{enumerate} \section{Non-quasi-alternating Montesinos links} \begin{theorem} Let $L=M(e;t_1, \ldots, t_p)$ be a Montesinos link with $p \geq 3$. Then $L$ is non-quasi-alternating if \begin{enumerate} \item $1-p < \varepsilon < -1$, or \item $\varepsilon=-1$ and $\widehat{t_i}>2$ for all $i=1,\dots, p$, or \item $\varepsilon=1-p$ and $|\widehat{t_i}^f|>2$ for all $i=1,\dots, p$. \end{enumerate} \label{thm:nqam} \end{theorem} \begin{proof} Case (1) implies that $|\varepsilon+p/2|<p/2-1$ and, by Proposition \ref{prop:adq-alt}, $L$ has a non-alternating and adequate diagram. The Khovanov homology of a link $L$ with such a diagram is thick \cite{Khovanov}, which implies that $L$ is not quasi-alternating \cite{Manolescu}. For case (2), assume that $L$ is in reduced form with $\varepsilon=-1$ and $t_i=\widehat{t_i}>2$ for all $i=1,\ldots,p$. We will show that the double branched cover $\Sigma(L)$ is not an $L$-space. A closed, connected 3-manifold $Y$ is an \emph{$L$-space} if it is a rational homology sphere with the property that the rank of its Heegaard Floer homology $HF(Y)$ equals $|H_1(Y;\mathbb{Z})|$. Recall that the branched double cover $\Sigma(L)$ of a Montesinos link $L$ is the orientable Seifert fibered space $S(0;\varepsilon,t_1,\dots,t_p)$ with base orbifold $S^2$ \cite{Montesinos}. The manifold $\Sigma(L)$ is a rational homology sphere iff $\det(L)\neq 0$ (see for example \cite{lickorish}). 
If $\det(L)=0$ then $L$ is non-quasi-alternating. Otherwise, the following theorem provides the $L$-space obstruction. First, define a group $G$ to be \emph{left-orderable} if there exists a left invariant strict total ordering on $G$. \begin{theorem}[\cite{Boyer-Gordon-Watson}] A closed connected Seifert fibered 3-manifold $X$ is not an $L$-space iff $\pi_1(X)$ is left-orderable. \label{thm:ls}\end{theorem} \noindent The next result offers the exact conditions for an orientable Seifert fibered space to have a left-orderable fundamental group. \begin{theorem}[\cite{Boyer-Rolfsen-Wiest}] Let $X$ be an orientable Seifert fibered 3-manifold which is a rational homology sphere. Then $\pi_1(X)$ is left-orderable iff $\pi_1(X)$ is infinite, the base orbifold of $X$ is the 2-sphere with cone points, and $X$ admits a horizontal foliation. \label{thm:lo}\end{theorem} \noindent The fundamental group $\pi_1(\Sigma(L))$ is infinite if $\sum_{i=1}^p 1/\alpha_i \leq p-2$, where $t_i=\alpha_i/\beta_i$ \cite{Burde-Zieschang}. This condition is satisfied for $p\geq 3$ and $t_i>2$. Thus it remains to show that the space $\Sigma(L)$ admits a \emph{horizontal foliation}; i.e., a foliation which is everywhere transverse to the Seifert fibers. The following result provides the conditions under which a Seifert fibered space admits a horizontal foliation. \begin{theorem}[\cite{Eisenbud-Hirsch-Neumann},\cite{Jankins-Neumann},\cite{Naimi}] Let $S=S(0;-1, \alpha_1/\beta_1,\dots, \alpha_n/\beta_n)$ be an orientable Seifert fibered 3-manifold, where $n \geq 3$ and $\alpha_i/\beta_i> 1$ are rational numbers. Then $S$ admits a horizontal foliation iff there exist relatively prime integers $0 < a < m$ such that $$ \frac{\alpha_{\sigma(1)}}{\beta_{\sigma(1)}} >\frac{m}{a}, \quad \frac{\alpha_{\sigma(2)}}{\beta_{\sigma(2)}} >\frac{m}{m-a}, \quad \frac{\alpha_{\sigma(i)}}{\beta_{\sigma(i)}} >m, $$ where $3\leq i \leq n$ and $\sigma$ is a permutation of $\{1,2,\dots,n\}$.
\label{thm:hf}\end{theorem} \noindent Given $\varepsilon=-1$ and $t_i>2$ for all $i=1,\dots, p$, Theorem \ref{thm:hf} implies that, for the choice of $m=2$ and $a=1$, the Seifert fibered space $\Sigma(L)$ admits a horizontal foliation. The fundamental group $\pi_1(\Sigma(L))$ is left-orderable according to Theorem \ref{thm:lo}. Finally, $\Sigma(L)$ is not an L-space by Theorem \ref{thm:ls}, and, therefore, $L$ is non-quasi-alternating. For case (3), assume that $L$ is in reduced form with $\varepsilon = 1-p$ and $|\widehat{t_i}^f| = |t_i^f| > 2$ for all $i=1,\ldots,p$. Its reflection is $L^r=M(p-1;\overline{t_1}, \ldots, \overline{t_p})$. Note that \begin{equation} t=\frac{\alpha}{\beta}>1,\ \ \ \ \overline{t}=\frac{-\alpha}{\beta},\ \ \ \ \overline{t}^f=\frac{\alpha}{\alpha - \beta} = |t^f| > 1. \label{eqn:tbarf} \end{equation} These relations together with Lemma \ref{lem:flype} imply that the application of $p$ negative flypes on $L^r$ yields $M(-1;|t_1^f|, \ldots, |t_p^f|)$, which is in reduced form by Equation (\ref{eqn:tbarf}). Case (3) now follows from case (2) and the fact that reflections of quasi-alternating links are quasi-alternating. \end{proof} \begin{remark} There are more families of non-quasi-alternating Montesinos links accessible by the proof of Theorem \ref{thm:nqam}. For example, by choosing $m=3$ and $a=2$ in the notation of Theorem \ref{thm:hf}, one easily shows that if $\widehat{t_1}>3/2$ and $\widehat{t_i} > 3$ for $i\geq 2$, then $L$ is not quasi-alternating. In fact, for any choice of $m$ with $a=m-1$, the link is non-quasi-alternating for $\widehat{t_1}>m/(m-1)$ and $\widehat{t_i}>m$, $i\geq 2$. In general, any Montesinos link whose parameters satisfy the hypothesis of Theorem \ref{thm:hf} is non-quasi-alternating by the same argument. \end{remark} The proof of Theorem \ref{thm:nqam} offers an alternative to the 4-manifold techniques Greene used to establish obstructions to the quasi-alternating property of pretzel links.
A key step in the classification of quasi-alternating pretzel links is Proposition 2.2 in \cite{Greene}, which states that the pretzel $P(0;p_1,\dots,p_n,-q)$ is quasi-alternating iff $q>\min\{p_1,\dots,p_n\}$, where $n\geq 2$, $p_1,\dots,p_n\geq 2$, and $q\geq 1$. We obtain an alternative obstruction in most cases. \begin{prop} Under the same conditions as above, the pretzel $P(0;p_1,\dots,p_n,-q)$ is non-quasi-alternating if $q+1<\min\{p_1,\ldots, p_n\}$. \end{prop} \begin{proof} If $q=1$, the pretzel $P(0;p_1,\dots,p_n,-q)$ is equivalent to the reduced Montesinos link $M(-1; p_1,\dots,p_n)$ with $2 < \min\{p_1,\dots,p_n\}$. According to case (2) of Theorem \ref{thm:nqam}, this link is non-quasi-alternating. If $q>1$, then the pretzel is equivalent to the reduced Montesinos link $M(-1;p_1,\dots,p_n,q/(q-1))$. The fact that $q/(q-1) > (q+1)/q$ implies that choosing $m=q+1$ and $a=q$, in the notation of Theorem \ref{thm:hf}, demonstrates that the branched double cover admits a horizontal foliation; as in the proof of Theorem \ref{thm:nqam}, the link is therefore non-quasi-alternating. \end{proof} \subsection*{6.1 Further questions and examples} It is a natural question to ask whether the condition given in Theorem \ref{thm:qam} is necessary. Indeed, Qazaqzeh, Chbili, and Qublan have asserted the following: \begin{conjecture}[\cite{Khaled}] A Montesinos link is quasi-alternating if and only if it satisfies the conditions of Theorem \ref{thm:qam}. \end{conjecture} Theorem \ref{thm:nqam} partially resolves the conjecture. It remains to investigate Montesinos links whose parameters satisfy neither the conditions of Theorem \ref{thm:qam} nor the conditions for admitting a horizontal foliation given in Theorem \ref{thm:hf}. These are Montesinos links that may be non-quasi-alternating but whose double branched covers are L-spaces. Below we discuss several families of such links.
\\ \begin{enumerate} \item A Montesinos link $M(-1;t_1,t_2,\ldots, t_n)$ in reduced form such that $|t_i^f|= t_j$, where $t_i$ and $t_j$ are the least and second least among the parameters $t_1,t_2,\ldots, t_n$. Greene's first example of a non-quasi-alternating knot with thin homology, $11n50 = M(-1;5/2,3,5/3)$, is an example of such a link. In the preprint, he remarks that his proof generalizes to show that the infinite family $M(0;(m^2+1)/m,n,-(m^2+1)/m)= M(-1;(m^2+1)/m,n,(m^2+1)/(m^2-m+1))$ for positive integers $m, n\geq 2$ is non-quasi-alternating \cite{Greene}.\\ \item A pretzel link $P(0;p_1,\dots,p_n,-q)=M(-1;p_1,\dots,p_n,q/(q-1))$ that satisfies the condition $q=\min\{p_1,\ldots, p_n\}$. Any such link is known to be non-quasi-alternating \cite{Greene}. \\ \item A pretzel link $P(0;p_1,\dots,p_n,-q)=M(-1;p_1,\dots,p_n,q/(q-1))$ for which $q+1=\min\{p_1,\ldots, p_n\}$ and $p_i = q+1$ for all $i$. However, if $p_i$ exceeds the numerator of a rational number between $q$ and $q+1$ for all $i$ except one, then the link will satisfy the conditions of Theorem \ref{thm:hf}. In general, the pretzels $P(0;p_1,\dots,p_n,-q)=M(-1;p_1,\dots,p_n,q/(q-1))$ for which $q+1=\min\{p_1,\ldots, p_n\}$ are known to be non-quasi-alternating \cite{Greene}. \\ \item The pretzel $P(0; 3, 3, 3, -2) = 11n81$ is one such example, and it has thick Khovanov homology. Since adding rational tangles preserves the width of Khovanov homology (\cite{lowrance, watson-1}), one may obtain infinite families of Montesinos links that satisfy neither set of conditions. Having thick Khovanov homology, these are non-quasi-alternating. \\ \item Watson pointed us to another family of the form $M(0;(2n+1)/2, n+1, (-2n-1)/2 ) = M(-1;(2n+1)/2, n+1, (2n+2)/(2n-1) )$, where $n\geq 2$. See Figure \ref{fig:liam-examples}. Their quasi-alternating status is undetermined.
\end{enumerate} \begin{figure}[h] \begin{center} \includegraphics[height=1.5in]{liamexamples.pdf} \caption{Montesinos links $M(0;(2n+1)/2, n+1, (-2n-1)/2 )$ for $n\geq2$ do not satisfy the conditions of Theorems \ref{thm:qam} and \ref{thm:hf}. } \label{fig:liam-examples} \end{center} \end{figure} \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} In the United States, numerous laws have been enacted to protect very specific classes of information about individuals: the Fair Credit Reporting Act (FCRA) protects information used to determine credit worthiness~\cite{fcra:ftc}; the Health Insurance Portability and Accountability Act (HIPAA) protects medical records~\cite{hipaa03summary}; the Family Educational Rights and Privacy Act (FERPA) protects education records~\cite{ferpa:ed}. However, the U.S.\ lacks comprehensive privacy protections. In particular, databases used for marketing purposes lack accountability or oversight. Numerous marketing databases and marketing data brokers provide information with little or no restrictions, and sometimes even on a free-trial basis. The database records often contain names, addresses, email addresses, interests, and numerous other pieces of information about an individual. These databases are often created by the bulk purchasing of commercial transaction records and/or aggregating public records~\cite{ftc14report}. Considered in the abstract and by themselves, these data sources are unlikely to trouble most individuals' expectations of privacy. However, the aggregation and cross-referencing of this data from multiple sources is likely to raise privacy concerns. The piecemeal privacy regulations in the U.S. currently do not cover data used for this purpose. Indeed, from a business standpoint, the biggest constraint on this data is that it \emph{must not} be used for purposes that would bring it under the FCRA (or other regulations). That is, the brokers purposefully avoid regulations that might bring accountability. At the same time, these brokers are aware of how unsettling their practices are to consumers: while purchasing this type of data, we observed that some data brokers insist on source confidentiality.
That is, if a consumer inquires, the recipient must not disclose the identity of the data broker from which the data was purchased. Despite these databases being beyond the reach of privacy regulations in the U.S., such as the FCRA, and doing their best to remain as such, the information they collect is potentially dangerous. We investigate some possible criminal uses of these marketing databases, especially when combined with information obtained from web searches. We studied two possible attacks. First, we utilized email-to-mailing-address mappings in conjunction with YouTube searches. Next, we purchased a list of ``brides to be'' from a data broker and correlated this information with publicly available data on Facebook. Both experiments reproduce previous ``cybercasing''~\cite{Friedland2010,Friedland2011} experiments, which examined how geotags could be abused. Unlike geotagging, however, a person has effectively no control over the release of data by marketing data brokers. We hope our study will encourage regulators to look at regulating such marketing databases more closely. \section{Prior Work} Friedland and Sommer previously performed several studies to examine how publicly available data could be misused~\cite{Friedland2010}. Their studies focused on high-accuracy location information attached as meta-data to audio, image, and video files. Specifically, they examined how geotags could be used for ``cybercasing,'' using online data and services to mount real-world attacks. The first scenario involved tracking a specific person, in this case TV show host Adam Savage, an active Twitter user. It turned out that most images posted to his feed contained an exact geolocation attached by his smartphone, allowing them to locate his studio, places where he walks his dog, his home, and also where he met with other celebrities while traveling. In the second scenario, the authors inspected a random sample of Craigslist postings containing geotagged images.
They examined all postings to the San Francisco Bay Area's \emph{For Sale} section over a period of four days, in total collecting 68,729 images, of which about 1.3\,$\%$ were tagged with GPS coordinates. A fair number of the geotagged postings offered high-value goods, such as diamonds apparently photographed at home, making them potential targets for burglars. In addition, many posters even offered specifics about when and how they wanted to be contacted (``please call Sunday after 3pm''), which allowed for speculation about when that person might or might not be at home. In a third scenario, the authors examined whether one can semi-automatically identify the home addresses of people who normally live in a certain area but are currently on vacation. Such knowledge offers opportunities for burglars to break into their unoccupied houses. They wrote a script using the YouTube API that, given a home location, a radius, and a keyword, finds a set of matching videos shot within this radius and containing the keyword. For all the videos found, the script then gathers the associated YouTube user names and downloads all of their videos that are a certain \emph{vacation distance} away but have been uploaded within the last couple of days. The home location was set to be in downtown Berkeley, CA, and the radius to 100\,km. The authors searched for the keyword ``kids'' since many people publish home videos of their children. The vacation distance was 1000\,miles. Even though only about 3\,$\%$ of the YouTube content was geotagged at the time, the script reported 1000 hits (the maximum number the site returns for any query) for the initial set of matching videos. These then expanded to about 50,000 total videos in the second step identifying all other videos from the corresponding users. 106 of these turned out to have been taken more than 1000 miles away and uploaded the same week.
Sifting quickly through the titles of these videos, the authors easily found that about a dozen looked promising for a successful burglary. Friedland and Choi built on this work by removing the need for geotags using an automatic location estimation system~\cite{SC2011}. Their approach to location estimation was a machine-learning and semantic-web driven method based on the open service \url{GeoNames.org}. GeoNames covers all countries and contains 8 million entries of place names and corresponding geo-coordinates. It provides a web-based search engine and an API that returns a list of matching entries ordered by their relevance to the query. They showed how geotags are unnecessary for cybercasing by searching for videos that contained keywords of known cities, and then correlating any names found in the videos with phone book data. Friedland et al.\ examined these methods more generally by showing how multiple data sources could be aggregated to make better inferences~\cite{Friedland2011}. In this manner, they showed how criminals could use multiple public data sources to increase the likelihood that a potential burglary target will not be home, improve a stalker's reach, or even frame someone. While prior work in this area examined combining data from multiple sources to increase a criminal's effectiveness, no one has yet explored how data brokers might fit into this ecosystem. Unlike prior work, in which more privacy awareness would have been beneficial for users (e.g., disabling geotagging), there is not much users can do about the release of consumer data by data brokers~\cite{ftc14report}. We posit that the market for consumer data creates a potential boon for criminals well beyond the control of consumers and what has previously been discussed in the literature. \section{The Ecology of Data} \textbf{The Light Side:} Whenever a user interacts with a company, such as buying a product or filling out a sweepstakes form, that interaction creates a data footprint.
A product order tells the company the person's name, shipping address, billing address, email address, and what they purchased. The company might then sell this data to data brokers without the customer's knowledge or explicit permission. These data brokers coalesce, analyze, filter, aggregate, and resell the resulting data, with each broker attempting to create a more accurate profile of all individuals in their data set. While data can sometimes end up corrupted, especially for those with ``common-name@big-provider'' email addresses that others might mistakenly use, it can often provide accurate information about some individuals. Some estimates suggest there are 4000 separate companies involved in this process~\cite{databroker_estimate}, and many brokers make the data available to any buyer willing to pay. Access to a broker's data usually occurs in one of two forms: an append interface or provided lists~\cite{ftc14report}. With an append interface, the buyer provides a list of records, and the data broker annotates each record with additional features where available, charging for each successful annotation. For example, the data customer might provide a list of email addresses, to which the broker will append features such as demographic information, purchase habits, income estimates, home ownership, or other fields. In particular, some brokers specifically support annotating mailing addresses, allowing the data customer to associate email addresses with mailing addresses. Some companies also offer interactive access for appending information. Rapleaf provides an API-based interface where a customer can provide either email addresses or mailing addresses and receive a demographic profile in a claimed 50\,ms of processing time~\cite{Rapleaf}. Demographic parameters include gender, age range, income level, home ownership, and various interests such as sports, travel, pets, outdoor and adventure, and whether the person tends to donate to charitable causes.
The second form of access is to provide lists of individuals matching given criteria, such as ``brides to be'' or ``rape victims'' (in a notorious case, which the provider subsequently insisted was simply a test, not an actual list~\cite{hill13forbes}), often with additional constraints such as zip code or domain-specific data such as wedding date. The data broker then provides an agreed upon number of matching entries and the specified fields, such as email address and mailing address. \textbf{The Dark Side:} Criminals have discovered the benefits of aggregating and reselling identifying and financial data for the purposes of identity theft. Credit report data is remarkably cheap, with a full target credit report costing a reported \$15 and giving the target's full name, address, date of birth, and social security number (SSN)~\cite{krebs:ssndob:cost}. The service also offers to provide someone's SSN and date of birth given their name and address for just \$1.50. There was also a report back in 2007 of criminals using marketing lists to find elderly scam victims for telemarketing fraud~\cite{criminal_use}. While marketing lists are certainly not a new technology, we have recently reached a point where ubiquitous online data can be augmented with these lists to create unprecedented views into every aspect of an individual's life. In an attempt to draw attention to this issue, we performed two experiments using the information that we purchased from data brokers. \section{Study One: How's The Trip?} We initially set out to determine whether we could reproduce our previous cybercasing~\cite{Friedland2010} experiment without utilizing geotags. In the previous study, we searched for vacation videos with geotags, and then discovered home videos from the same account with geotags. In our new study, we began by searching for videos based on the vacation topic (list) and extracting the Google username.
After excluding obviously bogus usernames, we submitted 2824 names to a data broker as an append request. (We do not name the data broker we used since our contract with it appears to prohibit disclosing its identity.) The overall cost was a \$500 setup fee plus an additional \$0.10 for each match successfully appended. The result was surprisingly negative: out of the 2824 addresses submitted, only 9 were successfully appended. We believe this is due to three factors: a lack of correlation with purchasing behavior, the list-focused nature of the data broker we utilized, and the relative quality of this particular data broker. First, if a user doesn't utilize their Google account for making purchases, there will be no link between the mailing address and email address available to a data broker to sell. Google itself may have information about the user's address, but Google has no incentive to sell this information to others as it represents a competitive advantage. Second, list-focused data brokers do not prioritize complete coverage as highly as more traditional data brokers, such as credit reporting agencies. In credit reporting, a small number of brokers strive for complete coverage. If a credit agency only had information on 50\% of the population, it would not be competitive in the marketplace since the data consumers select the queried names. A list-centric data broker's incentives are different: they don't need complete coverage, rather they need quality in the data they have since the broker gets to select its best data matching the requested criteria to share with the data consumer. 
This broker in particular operates as a reseller of lists on a wide variety of topics, including religious affiliation (Catholic, Jewish, Islamic, etc.), economic profile (credit score), ethnicity, political donation habits, holders of handgun concealed carry permits, and even such esoteric lists as ``boat owners in Laguna Niguel, California.'' Third, we selected this broker mostly due to setup cost. Most data brokers are only interested in large orders. Even this broker required a \$500 setup cost for the append query, and this may represent a case of ``you get what you pay for.'' We also did some spot checking on results, and found that append data may be of marginal quality. For example, although it correctly identified one author's father's address and one cousin, the addresses for another cousin and the author himself were completely wrong: not even in the correct state. \section{Study Two: What a Happy Bride} List purchases, however, don't suffer the same defects: not only do the providers claim high accuracy (often over 90\%, and sometimes as high as 95\%), but the nature of list construction prevents the ``null entry'' problem faced when purchasing append data. Thus, we considered the possibility of lists as targets for theft. Numerous criteria, ranging from known gun owners to any selector for high income, might have potential. For our study, we chose ``brides to be,'' with name, mailing address, email address, and date of wedding. The cost of this data was only \$0.20 an entry for 5000 entries. Having the email address allows some additional searching for ancillary data and indicates an online presence, the mailing address gives the person's location, and the wedding date itself provides a day when the person's home will likely be empty. We explore this ancillary data as a means of estimating list accuracy. Our first check was to determine whether we could find bridal registries for the listed names, as a lower bound on the correctness.
We utilized \url{registry.weddingchannel.com}, a bridal registry aggregator service that indexes multiple bridal registries and allows search by name, with the returned information including city, state, and date of wedding. We used several matching criteria: a strong match, in which the full name, city, state, and wedding date all matched, and a weaker match, in which the full name, city, and state matched but the wedding date did not. One quarter of the names featured a strong match, with an additional 7.4\% obtaining a weaker match. Of particular note, however, is that the pair of first name and last name did not match at all in 41\% of the cases. Given the breadth of the wedding registry aggregator itself (16 separate registries), this suggests that either there are other items feeding into the bridal list (i.e., beyond companies selling their bridal registries to data brokers) or that the bridal list has an error rate significantly higher than that claimed by the data broker. We also checked whether some registries seemed overrepresented in the data. The most significant matches were with Bed Bath \& Beyond (942), Target (643), and Macy's (406). We could not draw any conclusions about the broker's sources given the overall popularity of these stores and the lack of domination by any one of them. Facebook is also a rich source of ancillary data. Previously, Facebook's Graph API enabled searching by email, but this interface is now deprecated. Instead, we searched on a combination of first name, last name, and city, which is a fuzzier match. First, we retrieved a list of user matches using the Facebook Graph API with queries '[first name] [last name] [city name]' with 'user' as the search type. Then, for each user in the retrieved list, we parsed the user's timeline HTML to gather the current city and state information, since these pieces of information are not available when only using the Graph API.
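Both the registry comparison and the Facebook name/city/state lookup reduce to field-wise matching. A minimal sketch of the tiers described above (the field names and sample record are our own, illustrative choices, not the format of any actual data source):

```python
# Classify a purchased-list entry against a record found elsewhere, using the
# tiers from the text: "strong" if name, city, state, and wedding date all
# agree; "weak" if only the wedding date differs. Field names are illustrative.
def classify_match(list_entry, found_record):
    keys = ("first_name", "last_name", "city", "state")
    if any(list_entry[k].lower() != found_record[k].lower() for k in keys):
        return "no match"
    if list_entry["wedding_date"] == found_record["wedding_date"]:
        return "strong"
    return "weak"

# Fabricated example record.
entry = {"first_name": "Ann", "last_name": "Lee", "city": "Berkeley",
         "state": "CA", "wedding_date": "2014-06-21"}
assert classify_match(entry, entry) == "strong"
assert classify_match(entry, {**entry, "wedding_date": "2014-07-05"}) == "weak"
assert classify_match(entry, {**entry, "city": "Oakland"}) == "no match"
```

In practice the comparison would also need normalization (nicknames, abbreviations, Unicode), which is why the name/city match is inherently fuzzier than an email lookup.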
We were able to obtain candidate Facebook accounts for 17.1\% of the list entries that had a first name, last name, city and state match. We then manually examined a portion of the Facebook matches. We examined 129 data-list entries that also had a strong match on the bridal registry list. Amongst them, we found 64 (50\%) to have clear indications of an upcoming wedding. We also examined 290 data-list entries that lacked a matching registry entry. We found 107 (37\%) to have such indications. Of particular note, 50 Facebook pages included photographs of the bride's wedding or engagement ring, an indicator of income level, while 5 pages included information about the bride's honeymoon plans. \section{The Rise of Criminal Brokers?} Criminal brokers already exist for financial data, acting as ``append'' services, but there is nothing stopping similar services from developing for the sorts of non-financial data we examined. Given the lower barriers to access, it would be straightforward for criminal groups to set up their own data brokers. The likeliest offering would be criminal lists, akin to the marketing lists, of high-potential victims. For burglary or similar activity, a subscription service could provide lists by zip code of possible candidates with associated profiles. The lists themselves don't need to be too expensive to be profitable. Since our purchases cost~\$0.20 a name, if only 1/50th of the names are salable, the selling cost of such lists needs to be only \$10/name for the criminal broker to break even. Likewise, the consumer of the list doesn't need to obtain much more than \$10 of value from a name for a \$10 purchase to be worthwhile. The best target is probably gun ownership. Within the criminal black market, guns represent a unique product, where there isn't a substantial loss in value when attempting to fence.
Although a gun-ownership list doesn't provide a set of times when a target is away, it does provide a list of homes which contain particularly valuable items. Obtaining the lists we used was straightforward: a legitimate-looking email address (we used our own \url{.edu} address when purchasing data) and a credit card to buy the data. The biggest obstacle was learning the correct terms when communicating with the data brokers. Overall, the biggest limitation is list accuracy. We have trouble believing the 90\% accuracy rate claimed by the list brokers, but it's also clear from the bridal data that the actual accuracy appears to be roughly 50\%. List inaccuracy increases the overall cost to the attacker, as any false positive in the list represents wasted resources when an attacker evaluates the result. For marketeers, inaccuracy is a minor, tolerable cost: a false match is a wasted mailer, but the total cost per mismatch is only a dollar or so. Criminal uses may have a higher penalty for mismatch: if someone needs to investigate a target in person, a false match might have a cost measured in tens or perhaps even hundreds of dollars. We've shown that, to at least some degree, list inaccuracy can be countered with ancillary data. For our bridal example, we used Facebook or registry services to validate portions of the raw data. The ancillary data, especially if it has a one-sided error (people seldom post about a nonexistent wedding on Facebook), is of particular use since it acts to ensure a sub-list of true positives. Low-cost validation strategies may depend on the context but, when available, can produce a much cleaner data stream. One of the most powerful tools that we did not investigate is Google Maps Street View.
The lists already contain the target's address, which makes it a simple matter of entering the address into Google to instantly gauge any obvious security systems (such as signs), the income level, and secondary signals such as the bumper stickers of cars in the driveway (as it is highly unlikely for a truck with an NRA bumper sticker to be owned by a non-gun owner). \section{Conclusion} \label{sec:conclusions} We live in a soup of data, producing little eddies of information with every action we take. A whole host of data brokers exist to slurp up this information, process it, and digest it into a form enjoyed by marketeers and merchants. Yet this data, although not generally regulated by the U.S.\ government, is not without its risk. We showed the ability to partially replicate the previous cybercasing result without requiring any geotagged data, an exercise that will probably grow in precision as marketeers attempt to map email to physical location on a more regular basis. We also showed the possibility of creating criminal lists derived from a publicly purchased list of brides to be, and how such lists can be both enhanced and cleaned using ancillary search data such as Facebook profiles and bridal registry information. Overall, we believe that marketing data is not necessarily harmless: there is significant potential for abuse. \section*{Acknowledgements} This research was supported by the U.S.\ National Science Foundation (NSF) grants CNS 1065240 and CNS 1514509. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of any sponsoring institution, the U.S.\ government or any other entity. \bibliographystyle{plain}
\section*{\large Supplemental Material:\\ From semimetal to chiral Fulde-Ferrell superfluids} \indent In this Supplementary Material we provide the details of the model realization, and the proof of the generic theorem to determine the topology, i.e. the Chern number of a chiral superfluid/superconductor. \section{Effective Hamiltonian} \begin{figure}[h] \centerline{\includegraphics[width=\columnwidth]{RealizationS.pdf}} \caption{(a) Proposed experimental setting of realization. The standing wave lights formed by $\mathbf{E}_{1x,1z}$ generate a blue-detuned square lattice. The incident polarization of $\mathbf{E}_{1z}$ is $\hat{e}_\bot = \alpha \hat{e}_x + i \beta \hat{e}_y$, and the $\lambda/4$-wave plate changes the polarization of the reflected field to $\hat{e}'_\bot = \alpha \hat{e}_x - i \beta \hat{e}_y$. (b,c) As illustrated for $\hspace*{0pt}^{40}K$ fermions, the lattice potential for the state $\left| F=9/2, m_F=9/2 (7/2) \right>$ is generated by $\mathbf{E}_{1x,1z}$. (d) The Raman coupling is generated by $\mathbf{E}_{1z}$ and $\mathbf{E}_2$, which has a tilt angle $\theta$ with respect to $x$-axis in the $x$-$y$ plane.} \label{Realization} \end{figure} \subsection{Atomic States} The spin-1/2 pseudospin is defined by the two atomic states $\left| g_\uparrow \right> = \left| F=9/2 , m_F = 9/2 \right>$ and $\left| g_\downarrow \right> = \left| F=9/2 , m_F = 7/2 \right>$ of the atom $\hspace*{0pt}^{40}K$. They are then coupled to the excited states in the manifold $\hspace*{0pt}^2P_{3/2}$ and $\hspace*{0pt}^2P_{1/2}$ through various two-photon processes depicted in Fig.~\ref{Realization}(b-d). 
\\ \subsection{Light fields} The electric field of the standing-wave lights for the present realization can be written as: \begin{eqnarray} \mathbf{E} &=& E_{1x} \left(e^{i\left(k_0 x + \phi_1\right)} + e^{i\left(-k_0 x + \phi_2\right)}\right)\hat{e}_z + E_{1z} \left(\alpha \left( e^{i\left(k_0 z + \phi_3\right)} + e^{i\left(-k_0 z + \phi_4\right)} \right)\hat{e}_x + \beta \left(e^{i\left(k_0 z + \phi_3\right)} + e^{i\left(-k_0 z + \phi_4 + \pi\right)}\right) \hat{e}_y \right) \nonumber \\ &&+ E_2 e^{i \left(k_1 x + \phi_5\right)} \hat{e}_x \nonumber \\ &=& 2 E_{1x}e^{i\left(\frac{\phi_1+\phi_2}{2}\right)}\cos \left(k_0 x + \frac{\phi_1-\phi_2}{2}\right)\hat{e}_z + 2E_{1z} \left(\alpha e^{i\left(\frac{\phi_3+\phi_4}{2}\right)}\cos \left(k_0 z + \frac{\phi_3-\phi_4}{2}\right)\hat{e}_x \right.\nonumber \\ &&+ \left. i\beta e^{i\left(\frac{\phi_3+\phi_4}{2}\right)}\sin \left(k_0 z + \frac{\phi_3-\phi_4}{2}\right)\hat{e}_y \right) + E_2 e^{i\left(k_1 x + \phi_5\right)} \hat{e}_x \nonumber \\ &=& 2e^{i\phi_B}E_{1x} \cos k_0 x \hat{e}_z + 2E_{1z} \left( \alpha \cos k_0 z \hat{e}_x + i\beta \sin k_0 z \hat{e}_y\right) + E_2 e^{i\left(k_1x + \phi_A\right)} \hat{e}_x, \end{eqnarray} where $\phi_A = -\frac{k_1}{k_0}\frac{\phi_1-\phi_2}{2}-\frac{\phi_3+\phi_4}{2}+\phi_5$, $\phi_B = \frac{\phi_1+\phi_2}{2}-\frac{\phi_3+\phi_4}{2}$, and $k_1 = k_0\cos\theta$. In the last line of the above equation we have made the change of variables $x\rightarrow x-\left(\phi_1-\phi_2\right)/\left(2k_0\right)$ and $z\rightarrow z-\left(\phi_3-\phi_4\right)/\left(2k_0\right)$ and multiplied by an overall phase factor $e^{-i\left(\phi_3+\phi_4\right)/2}$. The Rabi-frequencies of the transitions described in Fig.~\ref{Realization}(b-d) can now be derived. The light denoted by the black line has the Rabi-frequency $\Omega_\pi = 2\Omega_\pi' E_{1x}\cos k_0 x$.
Those lights depicted as red lines, coupling to excited states with $\Delta F=\pm 1$, have the Rabi-frequency $\Omega_\pm = \frac{\Omega_\pm'}{\sqrt{2}} 2E_{1z}\left(\mp \alpha \cos k_0z + \beta \sin k_0z \right)$, and the one depicted as a blue line is $\Omega_p = \Omega_p' E_2 e^{ik_0x\cos\theta} = \Omega_p' E_2 e^{ik_1x}$. The phases $\phi_{A,B}$ are irrelevant and so are omitted here. The quantities $\Omega_i'$ are the dipole matrix elements $\left<g\left|\mathbf{r}_q\right|e\right>$, where $g$ and $e$ are the corresponding ground and excited states, and $\mathbf{r}_q$ is the spherical component of the vector $\mathbf{r}$. \begin{table} \begin{equation} \begin{array}{|c|c|c|c|c|c|c|c|c|} \cline{1-4}\cline{6-9} \frac{1}{2} \rightarrow \frac{3}{2}; \frac{9}{2} & q=-1 & q=0 & q=1 & \hspace*{50pt}& \frac{1}{2} \rightarrow \frac{3}{2}; \frac{7}{2} & q=-1 & q=0 & q=1\\ \cline{1-4}\cline{6-9} F'=\frac{11}{2} & \sqrt{\frac{1}{2}} & -\sqrt{\frac{1}{11}} & \sqrt{\frac{1}{110}} & & F'=\frac{11}{2} & \sqrt{\frac{9}{22}} & -\sqrt{\frac{9}{55}} & \sqrt{\frac{3}{110}}\\ \cline{1-4}\cline{6-9} F'=\frac{9}{2} & & \sqrt{\frac{8}{33}} & -\sqrt{\frac{16}{297}} & & F'=\frac{9}{2} & \sqrt{\frac{16}{297}} & \sqrt{\frac{392}{2673}} & -\sqrt{\frac{256}{2673}}\\ \cline{1-4}\cline{6-9} F'=\frac{7}{2} & & & -\sqrt{\frac{14}{135}} & & F'=\frac{7}{2} & & \sqrt{\frac{28}{1215}} & \sqrt{\frac{98}{1215}}\\ \cline{1-4}\cline{6-9} \end{array} \nonumber \end{equation} \begin{equation} \begin{array}{|c|c|c|c|c|c|c|c|c|} \cline{1-4}\cline{6-9} \frac{1}{2} \rightarrow \frac{1}{2}; \frac{9}{2} & q=-1 & q=0 & q=1 & \hspace*{50pt}& \frac{1}{2} \rightarrow \frac{1}{2}; \frac{7}{2} & q=-1 & q=0 & q=1\\ \cline{1-4}\cline{6-9} F'=\frac{11}{2} & & \sqrt{\frac{1}{3}} & -\sqrt{\frac{2}{27}} & & F'=\frac{11}{2} & \sqrt{\frac{2}{27}} & \sqrt{\frac{49}{243}} & -\sqrt{\frac{32}{243}}\\ \cline{1-4}\cline{6-9} F'=\frac{9}{2} & & & \sqrt{\frac{16}{27}} & & F'=\frac{9}{2} & & \sqrt{\frac{32}{243}} & \sqrt{\frac{112}{243}}\\ \cline{1-4}\cline{6-9} \end{array} \nonumber \end{equation} \caption{The values of the ratio of $\left<g\left|\mathbf{r}_q\right|F',m_F'=m_F-q\right>$ to $\left|\left<J=1/2 \left|\left| e\mathbf{r} \right|\right| J' \right> \right| = \alpha_i $. The top-left corner of each table contains three numbers denoting $J$, $J'$ and $m_F$, respectively, where $J$ and $J'$ are the quantum numbers of $\mathbf{J}$ for the ground and excited states, respectively, and $\left|g\right> = \left|F=\frac{9}{2},m_F\right>$.} \label{DipoleT} \end{table} \subsection{Lattice and Raman potentials} The lattice potential is generated by the two-photon processes that connect states within the manifold $\hspace*{0pt}^2S_{1/2}$ (depicted in Fig.~\ref{Realization}(b,c) by the black and red lines). Each process gives a contribution $\sum_j \left|\Omega_i\right|^2/\Delta_j$ to the lattice potential, where $\Omega_i$ is the corresponding Rabi-frequency, $j$ runs through all possible atomic states, and $\Delta_j = \Delta$ or $\Delta+\Delta_s$ if the intermediate state considered is in the manifold $\hspace*{0pt}^2P_{1/2}$ or $\hspace*{0pt}^2P_{3/2}$. According to experimental data (for reference, see~\cite{TGK40}), all the different $\Delta$ or $\Delta+\Delta_s$ corresponding to intermediate states of different $m_F$ are of the same order of magnitude (of order THz), and the differences within each group are negligible (of order 100\,MHz to 1\,GHz).
The lattice potential for both spin states is \begin{equation} V\left(x,z\right) = \frac{4}{3}\left(\frac{2\alpha_2^2}{\Delta}+\frac{\alpha_1^2}{\Delta+\Delta_s}\right)\left(E_{1x}^2 \cos^2 k_0x+\left(\alpha^2-\beta^2\right) E_{1z}^2 \cos^2 k_0z\right), \end{equation} where we have omitted the constant term, and $\alpha_i = \left|\left< J=\frac{1}{2}\left|\left|e\mathbf{r}\right|\right|J=i-\frac{1}{2}\right>\right|$ ($i=1,2$) are the reduced matrix elements between the ground manifold with total angular momentum $J=1/2$ and the excited manifolds with $J'=1/2$ and $J'=3/2$. (The coefficients in $V_0$ can be calculated with the help of Table~\ref{DipoleT} in a straightforward way.) Notice that the coupling between the spin-up $\left|9/2,9/2\right>$ and spin-down $\left|9/2,7/2\right>$ states through these processes is negligible since $E_{g_\uparrow} - E_{g_\downarrow} \gg \left| \Omega^2/\Delta \right|$, where $E_{g_s}$ is the energy of the spin-up or spin-down state. The Raman lattice is generated via only one Raman process (depicted in Fig.~\ref{Realization}(d)). The Raman potential generated is given by the formula $\sum_{j}\Omega_-^* \Omega_p / \Delta_j$. In this case, the generated potential is \begin{equation} M_{\rm eff}=M_0\left(\alpha\cos k_0z+\beta \sin k_0z \right)e^{ik_1x}|\uparrow\rangle\langle\downarrow|, \label{RamanF} \end{equation} where $M_0=\frac{1}{9}\left(\frac{\alpha_2^2}{\Delta}-\frac{\alpha_1^2}{\Delta+\Delta_s}\right)\sqrt{2}E_2E_{1z}$. The Zeeman term $m_z \sigma_z$ is generated by a small off-resonance in the Raman process, where $m_z = \tilde{\delta}/2 = \left(E_{g_\uparrow} - E_{g_\downarrow} - \omega_1+\omega_2\right)/2$. \subsection{Tight-binding Model} We derive the tight-binding model by considering the hopping contributed by the lattice and Raman potentials, respectively.
The lattice potential contributes the spin-conserved hopping terms \begin{equation} H_{\rm TB}^{(1)}=-t_x' \sum_{j_x,j_z} \left(c^\dagger_{j_x,j_z,\uparrow}c_{j_x+1,j_z,\uparrow}+c^\dagger_{j_x,j_z,\downarrow}c_{j_x+1,j_z,\downarrow}\right)-t_z \sum_{j_x,j_z} \left(c^\dagger_{j_x,j_z,\uparrow}c_{j_x,j_z+1,\uparrow}+c^\dagger_{j_x,j_z,\downarrow}c_{j_x,j_z+1,\downarrow}\right)+h.c., \end{equation} where $t_x' = -V_0E_{1x}^2 \int dxdz\, \phi^*_{0,0}\left(x,z\right) \cos^2 k_0x\, \phi_{1,0}\left(x,z\right)$, $t_z = -V_0\left(\alpha^2-\beta^2\right)E_{1z}^2 \int dxdz\, \phi^*_{0,0}\left(x,z\right) \cos^2 k_0z\, \phi_{0,1}\left(x,z\right)$, $k_0a=\pi$, and $\phi_{j_x,j_z}$ is the Wannier function centered at $\left(x,z\right)=\left(j_x,j_z\right)a$. On the other hand, for the spin-flip term induced by the Raman coupling, the first term of~\eqref{RamanF} provides spin-flip hopping along the $z$-direction of strength \begin{eqnarray} &&\alpha M_0\int dxdz \phi^*_{j_x,j_z}\left(x,z\right) \cos k_0z e^{ik_1x} \phi_{j_x,j_z\pm 1}\left(x,z\right) \nonumber \\ &=& \alpha M_0\int dxdz \phi^*_{0,0}\left(x,z\right) \cos k_0\left(z+j_za\right) e^{ik_1(x+j_xa)} \phi_{0,\pm 1}\left(x,z\right) \nonumber \\ &=& \left(-1\right)^{j_z}e^{i\frac{k_1}{k_0}\pi j_x} \alpha M_0\int dxdz \phi^*_{0,0}\left(x,z\right) \cos k_0z e^{ik_1x} \phi_{0,\pm 1}\left(x,z\right) \nonumber \\ &=& \mp \left(-1\right)^{j_z}e^{i\frac{k_1}{k_0}\pi j_x} t_{so}, \nonumber \end{eqnarray} where $t_{so}=-\alpha M_0\int dxdz \phi^*_{0,0}\left(x,z\right) \cos k_0z e^{ik_1x} \phi_{0,1}\left(x,z\right) $. The same term gives no contribution to the hopping along the $x$-direction, since $\cos k_0z$ is antisymmetric in $z$ about the local minima of the lattice potential, so the corresponding overlap integral vanishes by parity.
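The parity argument can be checked numerically with a hypothetical Gaussian approximation to the lowest-band Wannier functions (all parameter values below are arbitrary assumptions, and the lattice sites are placed at the half-integer positions where $\cos k_0 z$ is odd about each site center): the spin-flip matrix element vanishes for hopping along $x$, while along $z$ it is finite and changes sign between the $+z$ and $-z$ neighbors, consistent with the $\mp t_{so}$ structure above.

```python
import numpy as np

# Gaussian stand-in for the lowest-band Wannier function (an assumption, not
# the actual band calculation).  Sites sit at half-integer multiples of
# a = pi/k0, where cos(k0*z) is odd about each site center.
k0, k1 = 1.0, 0.7
a = np.pi / k0
sigma = 0.35 * a                       # hypothetical Wannier width

def g(u):
    return (np.pi * sigma**2) ** (-0.25) * np.exp(-u**2 / (2 * sigma**2))

x = np.linspace(-4.0 * a, 5.0 * a, 801)
z = np.linspace(-4.0 * a, 5.0 * a, 801)
dx, dz = x[1] - x[0], z[1] - z[0]
X, Z = np.meshgrid(x, z, indexing="ij")

def overlap(djx, djz):
    """<phi_{0,0}| cos(k0 z) e^{i k1 x} |phi_{djx,djz}> on the grid."""
    s0 = 0.5 * a                       # site-center offset
    integrand = (g(X - s0) * g(Z - s0) * np.cos(k0 * Z) * np.exp(1j * k1 * X)
                 * g(X - s0 - djx * a) * g(Z - s0 - djz * a))
    return integrand.sum() * dx * dz

I_x = overlap(1, 0)      # spin-flip hopping along x: zero by parity in z
I_zp = overlap(0, 1)     # spin-flip hopping to j_z + 1
I_zm = overlap(0, -1)    # spin-flip hopping to j_z - 1
print(abs(I_x), abs(I_zp), abs(I_zp + I_zm))
```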
Likewise, the second term of~(\ref{RamanF}) gives rise to an onsite Zeeman term \begin{eqnarray} &&\beta M_0\int dxdz \phi^*_{j_x,j_z}\left(x,z\right) \sin k_0z e^{ik_1x} \phi_{j_x,j_z}\left(x,z\right) \nonumber \\ &=&\beta M_0\int dxdz \phi^*_{0,0}\left(x,z\right) \sin k_0\left(z+j_za\right) e^{ik_1\left(x+j_xa\right)} \phi_{0,0}\left(x,z\right) \nonumber \\ &=&\left(-1\right)^{j_z}e^{i\frac{k_1}{k_0}\pi j_x} m_x,\nonumber \end{eqnarray} where $m_x = \beta M_0 \int dxdz \phi^*_{0,0}\left(x,z\right) \sin k_0z e^{ik_1x} \phi_{0,0}\left(x,z\right)$. This term gives a negligible contribution to the hopping terms since $\beta \ll 1$. Therefore, in the tight-binding model, the Raman potential contributes \begin{eqnarray} H_{\rm TB}^{(2)}=\sum_{j_x,j_z}\left(-1\right)^{j_z}e^{i\frac{k_1}{k_0}\pi j_x}\left[t_{so}\left(c^\dagger_{j_x,j_z,\uparrow}c_{j_x,j_z+1,\downarrow}-c^\dagger_{j_x,j_z,\uparrow}c_{j_x,j_z-1,\downarrow}\right) + m_x c^\dagger_{j_x,j_z,\uparrow}c_{j_x,j_z,\downarrow} \right]+h.c. \end{eqnarray} The total tight-binding Hamiltonian reads $H_{\rm TB}=H_{\rm TB}^{(1)}+H_{\rm TB}^{(2)}$, which can be simplified by applying the gauge transformation \begin{equation} c_{j_x,j_z,\uparrow/\downarrow} \rightarrow (-i)^{j_z+j_x} e^{\pm i\left(\frac{k_1\pi}{2k_0} j_x+\frac{\pi}{2} j_z\right)} c_{j_x,j_z,\uparrow/\downarrow}.
\end{equation} With the Fourier transformation $c_{j_x,j_z\sigma} \rightarrow \frac{1}{\sqrt{N}} c_{k\sigma} e^{i k \cdot \left(j_x,j_z\right)a}$, we finally obtain the Bloch Hamiltonian of the tight-binding model \begin{eqnarray} \mathcal{H}_{\text{TB}} &=& \left( m_z - 2t_z \cos k_z\right) \sigma_z +2t_{so} \sin k_z \sigma_y + m_x \sigma_x + 2t_x'\left(\begin{matrix} \cos \left(k_x+\frac{\pi k_1}{2k_0}-\frac{\pi}{2}\right) & 0\\ 0 & \cos \left(k_x-\frac{\pi k_1}{2k_0}-\frac{\pi}{2}\right) \end{matrix}\right) \nonumber \\ &=& \left( m_z - 2t_x \cos k_x - 2t_z \cos k_z\right) \sigma_z +2t_{so} \sin k_z \sigma_y + t_{xI} \sin k_x \sigma_0 + m_x \sigma_x, \end{eqnarray} where $t_x = t_x' \sin \left( \pi \cos \theta/2 \right)$ and $t_{xI} = 2t_x' \cos \left( \pi \cos \theta/2 \right)$. All the parameters are independently tunable, except that $t_x$ and $t_{xI}$ are related by $t_x^2 + t_{xI}^2/4=t_x'^2$. It can be seen that the inversion symmetry is controlled by the tilt angle $\theta$, and the gap opening at the Dirac points is controlled by $\beta$, the $\hat e_y$-component of the $\mathbf{E}_{1z}$ field. Note that $m_x$ is induced by the onsite spin-flip transition. Thus a small $\beta$-term in Eq.~\eqref{RamanF} can generate a relatively large $m_x$. For our purpose, we shall consider a $\beta$-term small compared with the $\alpha$-term. Thus the $\mathbf{E}_{1z}$ field is mainly polarized in the $\hat e_x$ direction. \section{BdG Hamiltonian} An attractive Hubbard interaction can be described effectively as $H_U=-U\sum_i n_{i\uparrow} n_{i\downarrow}$. We introduce three order parameters $\Delta_{\pm 2Q}$ and $\Delta_0$ when considering superconducting pairing, where $\Delta_{2q} = \left(-U/N\right) \sum_k \left< c_{q-k\downarrow} c_{q+k\uparrow} \right>$ and $q=\pm Q$ or $0$.
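Before turning to pairing, the tight-binding Bloch Hamiltonian derived above can be checked numerically; the sketch below (with arbitrarily chosen parameter values) verifies hermiticity and that the band splitting closes at the Dirac points when $m_x=0$ and equals $2|m_x|$ when $m_x\neq 0$, the tilt term $t_{xI}\sin k_x\,\sigma_0$ shifting both bands equally.

```python
import numpy as np

# H_TB(k) = (mz - 2 tx cos kx - 2 tz cos kz) sz + 2 tso sin kz sy
#           + txI sin kx s0 + mx sx, with hypothetical parameter values.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_tb(kx, kz, mz, mx, tx, tz, tso, txI):
    return ((mz - 2 * tx * np.cos(kx) - 2 * tz * np.cos(kz)) * sz
            + 2 * tso * np.sin(kz) * sy
            + txI * np.sin(kx) * s0
            + mx * sx)

def min_splitting(mx, n=60, mz=1.0, tx=1.0, tz=1.0, tso=1.0, txI=0.8):
    """Minimum direct band splitting over an n x n Brillouin-zone grid."""
    ks = 2 * np.pi * np.arange(n) / n
    gap = np.inf
    for kx in ks:
        for kz in ks:
            e = np.linalg.eigvalsh(h_tb(kx, kz, mz, mx, tx, tz, tso, txI))
            gap = min(gap, e[1] - e[0])
    return gap

g0 = min_splitting(0.0)   # gapless Dirac points
g1 = min_splitting(0.3)   # gap 2|m_x| = 0.6
print(g0, g1)
```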
If only one of them is non-zero, the BdG Hamiltonian can be written as $H_{\text{BdG}} = \sum_k \Psi_k^\dagger \mathcal{H}_{\text{BdG}} \Psi_k /2$, where \begin{equation} \mathcal{H}_{\text{BdG}} = \left( \begin{matrix} {H}_{\text{TB}}(k) & \Delta_{2q}\\ \Delta_{2q}^\dag & -{H}_{\text{TB}}^T(2q-k) \end{matrix} \right) \label{BdGNotFolded} \end{equation} and the basis of $\Psi_k$ is $\left(c_{k\uparrow}, c_{k\downarrow}, c_{2q-k\uparrow}^\dagger, c_{2q-k\downarrow}^\dagger\right)^T$. The pairing matrix $\Delta_{2q}$ is assumed to be real valued, which can be arranged by a change of the overall phase of $c$. Here, $q$ is the center-of-mass momentum of the non-zero pairing. If more than one of them is non-zero, we need to fold the Brillouin zone. Let $q=Q=m\pi/n$, where $m$ and $n$ are coprime integers. Then the BdG Hamiltonian density can be written as \begin{equation} \mathcal{H}_{\text{BdG}}= \left(\begin{matrix} H_0 & 0 & 0 & \cdots & 0 & \widetilde{\Delta}_0 & \widetilde{\Delta}_{-2Q} & 0 & \cdots & \widetilde{\Delta}_{2Q} \\ 0 & H_{2Q} & 0 & \cdots & 0 & \widetilde{\Delta}_{2Q} & \widetilde{\Delta}_0 & \widetilde{\Delta}_{-2Q} & \cdots & 0 \\ 0 & 0 & H_{4Q} & \cdots & 0 & 0 & \widetilde{\Delta}_{2Q} & \widetilde{\Delta}_0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & H_{-2Q} & \widetilde{\Delta}_{-2Q} & 0 & 0 & \cdots & \widetilde{\Delta}_0\\ \widetilde{\Delta}_0^\dagger & \widetilde{\Delta}_{2Q}^\dagger & 0 & \cdots & \widetilde{\Delta}_{-2Q}^\dagger & H'_{0} & 0 & 0 & \cdots & 0\\ \widetilde{\Delta}_{-2Q}^\dagger & \widetilde{\Delta}_0^\dagger & \widetilde{\Delta}_{2Q}^\dagger & \cdots & 0 & 0 & H'_{-2Q} & 0 & \cdots & 0\\ 0 &\widetilde{\Delta}_{-2Q}^\dagger & \widetilde{\Delta}_0^\dagger & \cdots & 0 & 0 & 0 & H'_{-4Q} & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ \widetilde{\Delta}_{2Q}^\dagger & 0 & 0 & \cdots &
\widetilde{\Delta}_0^\dagger & 0 & 0 & 0 & \cdots & H'_{2Q}\\ \end{matrix}\right) \label{BdGFolded}, \end{equation} where $H^{(')}_{rQ} = \pm H^{\left(T\right)}\left(\pm k_x+rQ,\pm k_z\right)$, $\widetilde{\Delta}_{2pQ} = \left(\begin{matrix}0 & \Delta_{2pQ} \\-\Delta_{2pQ} & 0\end{matrix}\right)$, $r = 0,1,2,\cdots,n-1$ and $p=0$ or $\pm 1$. The Nambu basis in the folded Brillouin zone, where $k_x \in \left[0,2\pi/n\right]$, is \begin{equation} \left(\begin{matrix} c_{k_x,k_z} & c_{k_x+2Q,k_z} & \cdots & c_{k_x-2Q,k_z} & c_{-k_x,-k_z}^\dagger & c_{-k_x-2Q,-k_z}^\dagger & \cdots & c_{-k_x+2Q,-k_z}^\dagger \end{matrix}\right)^T \nonumber. \end{equation} In this basis, the gap equation can be derived by diagonalizing~(\ref{BdGNotFolded}) or~(\ref{BdGFolded}) and then calculating the expectation values of the order parameters. Therefore, for a given value of $U$, the order parameters can be determined self-consistently. \section{BKT Transition} The BKT temperature can be calculated from the approximate BKT criterion \begin{equation} k_B T_{\text{BKT}} = \frac{\pi}{2} \rho_s \left(T_{\text{BKT}}\right) \approx \frac{\pi}{2} \rho_s \left(T=0\right), \end{equation} where $\rho_s$ is the superfluid density, which can be calculated as $\rho_s=j_s/\delta q$. Here $j_s$ is the supercurrent density; when the system cuts only one FS, the two paired fermions have center-of-mass momentum $-Q$. $j_s$ is calculated from the variation of the zero-temperature energy, i.e., $j_s=\delta E_{\text{Total}}/\delta q$. The total energy $E_{\text{Total}}$ can be calculated through diagonalization of the total Hamiltonian in the Nambu basis, assuming the center-of-mass momentum of the pairing to be $-Q+\delta q$.
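As a toy illustration of this procedure (a single band with uniform $s$-wave pairing at fixed $|\Delta|$, rather than the full folded BdG problem, and with hypothetical parameter values), one can diagonalize the Nambu Hamiltonian with the pair center-of-mass momentum shifted by $\delta q$ and extract $\rho_s$ from the curvature of the zero-temperature energy:

```python
import numpy as np

# Toy model: xi_k = -2t(cos kx + cos kz) - mu, fixed s-wave Delta, pair
# center-of-mass momentum 2q along x.  rho_s ~ d^2 E / d q^2 at q = 0;
# all parameter values are arbitrary assumptions.
t, mu, Delta = 1.0, -1.0, 0.5
L = 60
ks = 2 * np.pi * np.arange(L) / L

def ground_energy(q):
    """(1/2) * sum over k of the negative Bogoliubov branch, per site."""
    E = 0.0
    for kx in ks:
        for kz in ks:
            xi_p = -2 * t * (np.cos(kx + q) + np.cos(kz)) - mu   # xi_{k+q}
            xi_m = -2 * t * (np.cos(-kx + q) + np.cos(kz)) - mu  # xi_{-k+q}
            h = np.array([[xi_p, Delta], [Delta, -xi_m]])
            E += 0.5 * np.linalg.eigvalsh(h)[0]                  # lower branch
    return E / L**2

dq = 1e-2
rho_s = (ground_energy(dq) + ground_energy(-dq) - 2 * ground_energy(0.0)) / dq**2
T_bkt = 0.5 * np.pi * rho_s          # k_B T_BKT ~ (pi/2) rho_s(T=0)
print(rho_s, T_bkt)
```

Here $\rho_s$ is defined up to lattice normalization conventions; positivity of the energy curvature signals a stable uniform condensate.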
The numerical result is shown in Fig.~\ref{Fig:TBKT}. \begin{figure}[h] \centerline{\includegraphics[width=.4\columnwidth]{TBKT.pdf}} \caption{The BKT temperature for $m_z=2.92t_z$, $t_x=0.92t_z$, $t_{so}=t_z$, $t_{xI}=0.8t_z$, $m_x=0.3t_z$.} \label{Fig:TBKT} \end{figure} \section{Generic Theory for Chiral Topological Superfluid} We first present the generic result, and then show, step by step, that the topology of a 2D chiral superfluid/superconductor can be determined precisely from only the properties of the Fermi surfaces and the topology of the other normal bands away from the Fermi surfaces. \subsection{Generic formalism} For simplicity, we do not consider the case in which a normal band barely touches the Fermi energy. We classify the normal bands into three groups: upper (lower) bands, which are those with energy above (below) the Fermi energy, and middle bands, which are crossed by the Fermi energy. Let $n_{L}$ ($n_U$) be the total Chern number of the lower (upper) bands and $n_F^{(i_M)}$ be the Chern number of the $i_M$-th middle band. Each middle band may form multiple Fermi surfaces. We denote the pairing on the $(i_M, j)$-th Fermi surface, projected onto the $i_M$-th band, by $\Delta^{(i_M,j)}_k$, and its phase by $\theta_k^{(i_M,j)} = \arg \Delta^{(i_M,j)}_k$. We denote by $\vec{\mathcal{S}}_{i_M,j}$ the region enclosed by the Fermi surface $\partial\vec{\mathcal{S}}_{i_M,j}$ and by $\vec{\mathcal{S}}_{i_M,\text{out}}$ the remaining unenclosed region. Notice that within each of the regions $\vec{\mathcal{S}}_{i_M,j}$ or $\vec{\mathcal{S}}_{i_M,\text{out}}$, the energy of the $i_M$-th band has a definite sign.
We shall show that, in general, the Chern number of the negative-energy states after including the superconducting pairing can be written as \begin{equation} Ch_1 = n_L - n_U +\sum_{i_M} \left( (-1)^{q_{i_M}} n_F^{(i_M)} + \sum_j (-1)^{q'_{i_M,j}}\int_{\partial \vec{\mathcal{S}}_{i_M,j}} \nabla_k \theta_k^{(i_M,j)} \cdot dk \right), \end{equation} where the integral in the bracket is the winding number of $\theta_k^{(i_M,j)}$, evaluated in the direction given by the right-hand rule. The phase factors $(-1)^{q'_{i_M,j}}$ and $(-1)^{q_{i_M}}$ are $1$ if the energy of the $i_M$-th band is negative in the regions $\vec{\mathcal{S}}_{i_M,j}$ and $\vec{\mathcal{S}}_{i_M,\text{out}}$, respectively, and $-1$ if the corresponding energy is positive. We have chosen a gauge such that the eigenvectors of the $i_M$-th band are continuous throughout the regions $\vec{\mathcal{S}}_{i_M,j}$. The case of an alternative gauge is discussed below. This quantity is non-zero if the system is topological. \subsection{One band, One FS} First, we consider the case in which there is only one band in the system and only one FS. The Berry curvature, and thus the Chern number, of the normal states is then zero, since we can choose a gauge of real eigenvectors in which the Berry connection vanishes identically.\\ We can calculate the Berry connection, after including the superconducting pairing, by direct calculation over each region $\vec{\mathcal{S}}$.
The BdG Hamiltonian density and the Berry connection can be written as \begin{eqnarray} \mathcal{H}_{\text{BdG}}(k) &=& \left( \begin{matrix} \epsilon_k-\mu & \Delta_k \\ \Delta^*_k & -\epsilon_{-k}+\mu \end{matrix} \right); \nonumber\\ \mathcal{A}_{k\pm} &=& i \left( \begin{matrix} \alpha_\pm^*(k) & \beta_\pm^*(k) \\ \end{matrix}\right) \nabla_k \left( \begin{matrix} \alpha_\pm(k) \\ \beta_\pm(k) \end{matrix}\right) \nonumber, \end{eqnarray} where the $\pm$ sign denotes the upper (lower) band. $\alpha$ and $\beta$ can be found by diagonalizing $\mathcal{H}_{\text{BdG}}$: \begin{eqnarray} \alpha_{\pm}(k)&=&\frac{\frac{\epsilon_k + \epsilon_{-k}}{2}-\mu \pm \sqrt{\left| \Delta_k\right|^2+\left(\frac{\epsilon_k + \epsilon_{-k}}{2}-\mu \right)^2}}{N_{\pm}(k)}; \nonumber\\ \beta_{\pm}(k)&=&\frac{\Delta_k^*}{N_{\pm}(k)}, \nonumber \end{eqnarray} where $N_{\pm}(k)$ is the normalization constant ensuring $\left|\alpha_{\pm}(k)\right|^2+\left|\beta_{\pm}(k)\right|^2=1$. So the Berry connection of the lower band is: \begin{equation} \mathcal{A}_{k}=i\frac{\Delta_k \nabla_k \Delta_k^* - \Delta_k^* \nabla_k \Delta_k}{4\sqrt{\left( \frac{\epsilon_k + \epsilon_{-k}}{2}-\mu \right)^2 + \left| \Delta_k\right|^2}\left(\sqrt{\left( \frac{\epsilon_k + \epsilon_{-k}}{2}-\mu \right)^2 + \left| \Delta_k\right|^2} - \left(\frac{\epsilon_k + \epsilon_{-k}}{2}-\mu\right)\right)}\nonumber. \end{equation} Since the Chern number remains unchanged under continuous deformations, we can replace $\Delta_k$ by $\gamma \Delta_k$ and let $\gamma \rightarrow 0^+$. We can also continuously deform the system so that the FS is symmetric.
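The expressions for $\alpha_\pm$ and $\beta_\pm$ can be verified directly by acting with the $2\times 2$ BdG block on the corresponding vectors (the numerical values below are arbitrary):

```python
import numpy as np

# Check that (alpha_pm, beta_pm) are eigenvectors of
# [[eps_k - mu, Delta], [Delta*, -(eps_{-k} - mu)]] with eigenvalues
# (eps_k - eps_{-k})/2 +/- sqrt(|Delta|^2 + ((eps_k + eps_{-k})/2 - mu)^2).
eps_k, eps_mk, mu = 0.7, 0.3, 0.2   # arbitrary test values
Delta = 0.4 + 0.1j
H = np.array([[eps_k - mu, Delta], [np.conj(Delta), -(eps_mk - mu)]])

s = 0.5 * (eps_k + eps_mk) - mu
E0 = np.sqrt(abs(Delta) ** 2 + s ** 2)
checks = []
for sign in (+1, -1):
    num = np.array([s + sign * E0, np.conj(Delta)])   # unnormalized (alpha, beta)
    v = num / np.linalg.norm(num)                     # N_pm normalization
    lam = 0.5 * (eps_k - eps_mk) + sign * E0
    checks.append(np.allclose(H @ v, lam * v))
print(checks)
```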
In this case, the Berry curvature is \begin{equation} \mathcal{B}_{k} = \nabla_k \times \mathcal{A}_{k} = \nabla_k \times \left(\frac{i\theta_{\pm\vec{\mathcal{S}}} }{2}\frac{\Delta_k \nabla_k \Delta_k^{*} - \Delta_k^{*} \nabla_k \Delta_k}{\left| \Delta_k \right|^2} \right) \nonumber, \end{equation} where $\theta_{\pm \vec{\mathcal{S}}}$ denotes the indicator function equal to 1 inside (outside) the region $\vec{\mathcal{S}}$ and 0 outside (inside), and the upper (lower) sign means that the region $\vec{\mathcal{S}}$ has positive (negative) energy. The Chern number can be calculated through an integral of $\mathcal{B}_{k}$, \begin{equation} Ch_1 = \int \mathcal{B}_{k} d^2k = \mp \int_{\partial \vec{\mathcal{S}}} \nabla_k \theta_k \cdot dk, \label{SIChern1} \end{equation} where $\theta_k=\arg\Delta_k$; the Berry curvature is localized on the boundary of $\vec{\mathcal{S}}$. \subsection{One band, multiple FS} \label{Sec:OMFS} To generalize equation~(\ref{SIChern1}), we notice that when $\gamma \rightarrow 0$, the momentum modulation is unimportant. Thus the Chern number of the lower band depends only on the pairing on each FS. Therefore, the total Chern number is the sum of $\mp n^{(j)}$, where $n^{(j)}$ is the winding number of $\theta_k^{(j)} =\arg\Delta_k^{(j)}$ and $\Delta_k^{(j)}$ is the pairing order of the $j$-th FS; when the energy of the normal states is positive (negative) within the region $\vec{\mathcal{S}}_j$, we take the negative (positive) sign in $\mp n^{(j)}$. It is worthwhile to note that, in general, there may be order parameters connecting two different FSs (like the BCS order $\Delta_0$ in the main text). To simplify our discussion, we assume that the system contains the minimal set of order parameters needed to be fully gapped. Without loss of generality, we can assume that the FS looks like Fig.~\ref{GenTS}(b). (If we have $E<0$ inside $\vec{\mathcal{S}}$, we can just flip the sign as indicated by~(\ref{SIChern1}).)
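Equation~(\ref{SIChern1}) can be illustrated numerically. For a lattice $p+ip$ model (a standard example, with arbitrary parameter values), the Chern number of the lower BdG band computed with the Fukui-Hatsugai-Suzuki lattice field-strength method has unit magnitude when a Fermi surface is present, matching the unit winding of $\Delta_k$ around the FS, and vanishes when the chemical potential lies outside the band:

```python
import numpy as np

# Lattice p+ip superconductor: xi_k = -2t(cos kx + cos ky) - mu,
# Delta_k = D0 (sin kx + i sin ky) winds once around the Fermi surface.
t, D0 = 1.0, 0.5

def hk(kx, ky, mu):
    xi = -2 * t * (np.cos(kx) + np.cos(ky)) - mu
    Dk = D0 * (np.sin(kx) + 1j * np.sin(ky))
    return np.array([[xi, Dk], [np.conj(Dk), -xi]])

def chern_lower(mu, N=60):
    """Chern number of the lower band via the lattice field strength."""
    ks = 2 * np.pi * np.arange(N) / N
    u = np.empty((N, N, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(hk(kx, ky, mu))
            u[i, j] = v[:, 0]
    flux = 0.0
    for i in range(N):
        for j in range(N):
            u1, u2 = u[i, j], u[(i + 1) % N, j]
            u3, u4 = u[(i + 1) % N, (j + 1) % N], u[i, (j + 1) % N]
            U = (np.vdot(u1, u2) * np.vdot(u2, u3)
                 * np.vdot(u3, u4) * np.vdot(u4, u1))
            flux += np.angle(U)
    return flux / (2 * np.pi)

c_topo = chern_lower(-2.0)   # FS present: |Ch| = 1
c_triv = chern_lower(-5.0)   # mu below the band, no FS: Ch = 0
print(c_topo, c_triv)
```

The overall sign depends on orientation and gauge conventions, so only the magnitude is compared here.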
Folding the Brillouin zone, we find that the two FSs coincide with each other and that \begin{equation} \left( \begin{matrix} \epsilon_{k-Q} & 0 & \Delta_0\left(k-Q\right) & \Delta_{-Q}\left(k\right) \\ 0 & \epsilon_{k+Q} & \Delta_Q\left(k\right) & \Delta_0\left(k+Q\right) \\ \Delta_0^*\left(k-Q\right) & \Delta_{Q}^*\left(k\right) & -\epsilon_{-k+Q} & 0 \\ \Delta_{-Q}^*\left(k\right) & \Delta_0^*\left(k+Q\right) & 0 & -\epsilon_{-k-Q} \\ \end{matrix} \right) \label{FBZChern} \end{equation} is the subspace of the BdG Hamiltonian in the folded Brillouin zone, where $\pm Q$ are the center-of-mass momenta of the two FSs and $k$ runs from $-Q/n$ to $Q/n$, with $n$ an integer. The minimal set of order parameters that fully gaps the system can be $\Delta_{\pm Q}$, which we have discussed, or $\Delta_0$ alone. When only $\Delta_0$ is non-zero, the winding numbers of $\Delta_0$ are calculated around $Q$ and $-Q$ (as indicated by the diagonal terms in the $\Delta$ matrix). \begin{figure}[h] \centerline{\includegraphics[width=.9\columnwidth]{GenTS.pdf}} \caption{(a) Schematic diagram showing how the winding number should be counted. (b) Schematic diagram of the configuration considered in the proof for multiple FSs in one band. (c) Schematic diagram of the case in which two FSs coincide with one another in the folded Brillouin zone. Blue and red denote the parts of the two bands that form the FSs; thick and thin lines denote particles and their hole duplicates, respectively. The green and orange dashed lines indicate how the band structure changes when the pairing between the bands denoted by the blue and red lines is introduced.} \label{GenTS} \end{figure} \subsection{Multiple bands, One FS} If there are many bands in the system, we denote the total Chern number of the lower (upper) bands by $n_{L(U)}$. These bands contribute $n_L-n_U$ to the total Chern number of the negative-energy states in the doubled (Nambu) space.
The arguments for the one-band system need generalization, since the Berry curvature of the normal bands is non-zero and the intuitive generalization of $\Delta_k^{(i_M)}$ to $\left<u_k^{(i_M)}\right| \Delta(k) \left|u_{-k}^{(i_M)*} \right>$, where $\left|u_k^{(i_M)} \right>$ is the eigenvector of the single-particle Hamiltonian, is now gauge dependent. The single-particle Hamiltonian density $\mathcal{H}$ has eigenvectors $u_k^{(i)}$ such that $\mathcal{H}(k) u_k^{(i)} = \epsilon_k^{(i)} u_k^{(i)}$, and the BdG Hamiltonian density can be written as \begin{equation} \mathcal{H}_{\text{BdG}}(k) = \left( \begin{matrix} \mathcal{H}(k)-\mu & \Delta(k) \\ \Delta^\dag(k) & -\mathcal{H}^T(-k)+\mu \end{matrix} \right). \nonumber \end{equation} Writing the BdG Hamiltonian density in the bases $ \left(\begin{matrix} u_k^{(i)} & 0 \end{matrix}\right)^T$ and $ \left(\begin{matrix} 0 & u_{-k}^{(i)*} \end{matrix}\right)^T$, $\mathcal{H}_\text{BdG}$ has diagonal entries with values $\epsilon_k^{(i)}-\mu$ and $-\epsilon_{-k}^{(i)}+\mu$ and off-diagonal entries due to $\Delta(k)$, i.e., $\Delta_k^{(i,j)} = \left<u_k^{(i)}\right| \Delta(k) \left| u_{-k}^{(j)*}\right>$. Replacing $\Delta$ by $\gamma \Delta$ and letting $\gamma \rightarrow 0$, the upper and lower bands are diagonalized automatically.
The Hamiltonian in the subspace of the middle bands can also be diagonalized, as follows: \begin{eqnarray} \mathcal{H}_{\text{BdG}}(k) \left( \begin{matrix} \alpha_\pm^{(i_M)}(k) u_k^{(i_M)} \\ \beta_\pm^{(i_M)}(k) u_{-k}^{(i_M)*} \end{matrix}\right) &=& \left(\frac{\epsilon_k^{(i_M)} - \epsilon_{-k}^{(i_M)}}{2} \pm \sqrt{\left| \Delta_k^{(i_M)}\right|^2+\left(\frac{\epsilon_k^{(i_M)} + \epsilon_{-k}^{(i_M)}}{2}-\mu \right)^2}\right) \left( \begin{matrix} \alpha_\pm^{(i_M)}(k) u_k^{(i_M)} \\ \beta_\pm^{(i_M)}(k) u_{-k}^{(i_M)*} \end{matrix}\right); \nonumber\\ \alpha_{\pm}^{(i_M)}(k)&=&\frac{\frac{\epsilon_k^{(i_M)} + \epsilon_{-k}^{(i_M)}}{2}-\mu \pm \sqrt{\left| \Delta_k^{(i_M)}\right|^2+\left(\frac{\epsilon_k^{(i_M)} + \epsilon_{-k}^{(i_M)}}{2}-\mu \right)^2}}{N_{\pm}^{(i_M)}(k)}; \nonumber\\ \beta_{\pm}^{(i_M)}(k)&=&\frac{\Delta_k^{(i_M)*}}{N_{\pm}^{(i_M)}(k)}, \nonumber \end{eqnarray} where $\Delta_k^{(i_M)}=\Delta_k^{(i_M,i_M)}$ and $N_{\pm}^{(i_M)}(k)$ is the normalization constant ensuring $\left|\alpha_{\pm}^{(i_M)}(k)\right|^2+\left|\beta_{\pm}^{(i_M)}(k)\right|^2=1$.\\ Therefore, when the $i_M$-th middle band is duplicated and gapped by $\Delta(k)$, the Berry connections of the upper and lower bands are \begin{eqnarray} \mathcal{A}^{(i_M)}_{k\pm} &=& i \left( \begin{matrix} \alpha_\pm^{(i_M)*}(k) u_k^{(i_M)\dagger} & \beta_\pm^{(i_M)*}(k) u_{-k}^{(i_M)T} \\ \end{matrix}\right) \nabla_k \left( \begin{matrix} \alpha_\pm^{(i_M)}(k) u_k^{(i_M)} \\ \beta_\pm^{(i_M)}(k) u_{-k}^{(i_M)*} \end{matrix}\right).
\nonumber \\ &=& i \alpha_{\pm}^{(i_M)*} \nabla_k \alpha_{\pm}^{(i_M)} + i \left| \alpha_{\pm}^{(i_M)} \right|^2 u_k^{(i_M)\dagger} \nabla_k u_k^{(i_M)} + i \beta_{\pm}^{(i_M)*} \nabla_k \beta_{\pm}^{(i_M)} + i \left| \beta_{\pm}^{(i_M)} \right|^2 u_{-k}^{(i_M)T} \nabla_k u_{-k}^{(i_M)*} \nonumber\\ &=& i \left(\left| \alpha_{\pm}^{(i_M)}\right|^2 - \left| \beta_{\pm}^{(i_M)} \right|^2 \right) u_k^{(i_M)\dagger} \nabla_k u_k^{(i_M)} \nonumber \\ && + i\frac{\Delta_k^{(i_M)} \nabla_k \Delta_k^{(i_M)*} - \Delta_k^{(i_M)*} \nabla_k \Delta_k^{(i_M)}}{4\sqrt{\left( \frac{\epsilon_k^{(i_M)} + \epsilon_{-k}^{(i_M)}}{2}-\mu \right)^2 + \left| \Delta_k^{(i_M)}\right|^2}\left(\sqrt{\left( \frac{\epsilon_k^{(i_M)} + \epsilon_{-k}^{(i_M)}}{2}-\mu \right)^2 + \left| \Delta_k^{(i_M)}\right|^2} \pm \left(\frac{\epsilon_k^{(i_M)} + \epsilon_{-k}^{(i_M)}}{2}-\mu\right)\right)}\nonumber \end{eqnarray} There are three contributions to the Berry curvature, which is the curl of this quantity: (i) the curvature from the normal band; (ii) the extra curvature due to the mixing of the two bands (from the derivative of $|\alpha|^2-|\beta|^2$); (iii) the curvature due to the superconducting pairing.
After continuously deforming the FS so that it is symmetric, we can write the Berry curvature of the lower band as \begin{eqnarray} \mathcal{B}^{(i_M)}_{k\pm} &=& \pm\left(\left(\theta_{-\vec{\mathcal{S}}}-\theta_{\vec{\mathcal{S}}}\right) \widetilde{\mathcal{B}}_k^{(i_M)} + \left( \nabla_k \left(\theta_{-\vec{\mathcal{S}}}-\theta_{\vec{\mathcal{S}}}\right) \right) \times \widetilde{\mathcal{A}}_k^{(i_M)} \right) \nonumber \\ &+& \nabla_k \times \left(\frac{i\theta_{\pm\vec{\mathcal{S}}} }{2}\frac{\Delta_k^{(i_M)} \nabla_k \Delta_k^{(i_M)*} - \Delta_k^{(i_M)*} \nabla_k \Delta_k^{(i_M)}}{\left| \Delta_k^{(i_M)} \right|^2} \right) \nonumber, \end{eqnarray} where the upper (lower) sign is taken when the $i_M$-th band has positive (negative) energy in the region $\vec{\mathcal{S}}$, $\theta_{\pm\vec{\mathcal{S}}}$ denotes the indicator function equal to 1 inside (outside) $\vec{\mathcal{S}}$ and 0 outside (inside), and $\widetilde{\mathcal{A}}_k^{(i_M)}$ ($\widetilde{\mathcal{B}}_k^{(i_M)}$) denotes the Berry connection (curvature) of the corresponding normal band.
\\ The integral of the Berry curvature can be simplified to (writing $\theta_k^{(i_M)} = \arg \Delta_k^{(i_M)}$): \begin{eqnarray} \int \mathcal{B}^{(i_M)}_{k\pm} d^2k &=& \mp\left(2\int_{\vec{\mathcal{S}}} \widetilde{\mathcal{B}}_k^{(i_M)}d^2k-n_F^{(i_M)}-2 \int_{\partial\vec{\mathcal{S}}} \widetilde{\mathcal{A}}_k^{(i_M)} \cdot dk + \int_{\partial \vec{\mathcal{S}}} \frac{i}{2}\frac{\Delta_k^{(i_M)} \nabla_k \Delta_k^{(i_M)*} - \Delta_k^{(i_M)*} \nabla_k \Delta_k^{(i_M)}}{\left| \Delta_k^{(i_M)} \right|^2} \cdot dk \right)\nonumber \\ &=& \mp\left(2\int_{\vec{\mathcal{S}}} \widetilde{\mathcal{B}}_k^{(i_M)}d^2k-n_F^{(i_M)}-2 \int_{\partial\vec{\mathcal{S}}} \widetilde{\mathcal{A}}_k^{(i_M)} \cdot dk + \int_{\partial \vec{\mathcal{S}}} \nabla_k \theta_k^{(i_M)}\cdot dk \right). \nonumber \end{eqnarray} The third and fourth terms are separately gauge dependent, but their sum is gauge independent. We now specify the gauge to finish the calculation. We require that \begin{equation} \int_{\partial\vec{\mathcal{S}}} \widetilde{\mathcal{A}}_k^{(i_M)} \cdot dk = \int_{\vec{\mathcal{S}}} \widetilde{\mathcal{B}}_k^{(i_M)}d^2k \nonumber. \end{equation} We call this the $\vec{\mathcal{S}}$ gauge and denote the corresponding eigenfunctions by $u_{k[\vec{\mathcal{S}}]}^{(i_M)}$. Then the phase of $\Delta_k^{(i_M)}$ is defined unambiguously (i.e., we define the matrix element in the $\vec{\mathcal{S}}$ gauge: $\Delta_{k[\vec{\mathcal{S}}]}^{(i,j)} = \left<u_{k[\vec{\mathcal{S}}]}^{(i)}\right| \Delta(k) \left| u_{-k[\vec{\mathcal{S}}]}^{(j)*}\right>$). This gauge choice simply means that the eigenvectors $u_k^{(i_M)}$ are continuous within the region $\vec{\mathcal{S}}$. For any other choice of gauge, we count the number of singularities (with signs determined by the right-hand rule) within $\vec{\mathcal{S}}$.
Each singularity contributes $-2$ extra windings of $\theta_k^{(i_M)}$, which are canceled by the subtraction of the integral of the Berry connection from $2\int_{\vec{\mathcal{S}}} \widetilde{\mathcal{B}}_k^{(i_M)}d^2k$. Therefore, the windings in different gauges are related as \begin{equation} \int_{\partial \vec{\mathcal{S}}} \nabla_k \theta_{k[\vec{\mathcal{S}}]}^{(i_M)} \cdot dk= \int_{\partial \vec{\mathcal{S}}} \nabla_k \theta_{k[ \mathcal{G}]}^{(i_M)} \cdot dk + 2n_{s,\vec{\mathcal{S}}[\mathcal{G}]}, \end{equation} where $n_{s,\vec{\mathcal{S}}[\mathcal{G}]}$ is the number of singularities inside $\vec{\mathcal{S}}$ in the gauge $\mathcal{G}$. Therefore, the total Chern number of the negative-energy states is \begin{equation} Ch_1=n_L - n_U \pm \left( n_F^{(i_M)} - \int_{\partial\vec{\mathcal{S}}} \nabla_k \theta_{k[\vec{\mathcal{S}}]}^{(i_M)} \cdot dk \right) = n_L - n_U \pm \left( n_F^{(i_M)} - \int_{\partial \vec{\mathcal{S}}} \nabla_k \theta_{k[\mathcal{G}]}^{(i_M)} \cdot dk - 2n_{s,\vec{\mathcal{S}}[\mathcal{G}]} \right) \label{MOChern}, \end{equation} for a general gauge $\mathcal{G}$. \subsection{Multiple bands, Multiple FS} Note that each particular middle band $i_M$ contributes to the Chern number \begin{equation} n_M^{(i_M)} = (-1)^{q_{i_M}} n_F^{(i_M)} + \sum_j (-1)^{q'_{i_M,j}} \left( \int_{\partial \vec{\mathcal{S}}_{i_M,j}} \nabla_k \theta_{k[\mathcal{G}]}^{(i_M,j)} \cdot dk + 2n_{s,\vec{\mathcal{S}}_{i_M,j}[\mathcal{G}]} \right), \end{equation} where the phase factors $(-1)^{q'_{i_M,j}}$ and $(-1)^{q_{i_M}}$ are $1$ if the energy of the band is negative in the regions $\vec{\mathcal{S}}_{i_M,j}$ and $\vec{\mathcal{S}}_{i_M,\text{out}}$, respectively, and $-1$ if the corresponding energy is positive. Therefore, the contribution from all bands can be written as \begin{equation} Ch_1=n_L - n_U +\sum_{i_M} n_M^{(i_M)}. \end{equation} It is worth noting that, in this case, one further complication arises.
As in~(\ref{FBZChern}), we consider order parameters that connect two FSs: \begin{equation} \left( \begin{matrix} \epsilon_{k-Q}^{(i_M)} & 0 & \Delta_0^{(i_M,j_M)}\left(k-Q\right) & \Delta_{-Q}^{(i_M,i_M)}\left(k\right) \\ 0 & \epsilon_{k+Q}^{(j_M)} & \Delta_Q^{(j_M,j_M)}\left(k\right) & \Delta_0^{(j_M,i_M)}\left(k+Q\right) \\ \Delta_0^{(i_M,j_M)*}\left(k-Q\right) & \Delta_{Q}^{(j_M,j_M)*}\left(k\right) & -\epsilon_{-k+Q}^{(j_M)} & 0 \\ \Delta_{-Q}^{(i_M,i_M)*}\left(k\right) & \Delta_0^{(j_M,i_M)*}\left(k+Q\right) & 0 & -\epsilon_{-k-Q}^{(i_M)} \\ \end{matrix} \right). \end{equation} If $\epsilon_{k-Q}^{(i_M)}$ and $\epsilon_{k+Q}^{(j_M)}$ have the same sign in the coinciding region of the folded Brillouin zone, the argument in Sec.~\ref{Sec:OMFS} goes through, with only the analogous generalizations concerning the allotment of the Chern number and the choice of gauge. If $\epsilon_{k-Q}^{(i_M)}$ and $\epsilon_{k+Q}^{(j_M)}$ have opposite signs, the minimal order-parameter set cannot be $\Delta_0$ alone (see Fig.~\ref{GenTS}(c) for an illustration). This is exactly the case for the model discussed in the main text, where $\Delta_0$ alone cannot gap the system. We state without proof here that, if only $\Delta_0$ and one of the $\Delta_{\pm Q}$ are non-zero, the system can still be fully gapped. Without loss of generality, suppose $\Delta_{Q}$ is non-zero; then the $j_M$-th band connects with its hole partner directly, and the $i_M$-th band connects through two intermediate steps. The Chern number in this situation can also be calculated. The direct pairing is straightforward, while for the indirect pairing (the one containing two intermediate steps) we can count the winding number of the effective $\Delta$.
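The fact used below, that the winding of a product of nonvanishing complex functions around a loop is the sum of their individual windings, can be checked numerically with toy functions (all arbitrary):

```python
import numpy as np

# Winding numbers via the unwrapped phase along a closed parameter loop.
th = np.linspace(0.0, 2.0 * np.pi, 2001)

def winding(f):
    """Winding number of f(theta) around 0, from the unwrapped phase."""
    ph = np.unwrap(np.angle(f))
    return int(round((ph[-1] - ph[0]) / (2.0 * np.pi)))

f1 = np.exp(1j * th) + 0.3          # encircles the origin once: winding +1
f2 = np.exp(-2j * th) + 0.1         # winding -2
f3 = 1.5 + 0.4 * np.exp(1j * th)    # never encircles the origin: winding 0

print(winding(f1), winding(f2), winding(f3), winding(f1 * f2 * f3))
```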
In this case, the effective pairing between $-\epsilon_{-k-Q}^{(i_M)}$ and $\epsilon_{k-Q}^{(i_M)}$ is proportional to the product of $\Delta_{0}^{(j_M,i_M)}$, $\Delta_{Q}^{(j_M,j_M)*}$ and $\Delta_{0}^{(i_M,j_M)}$; therefore, the winding of the effective $\Delta$ is the sum of the windings of these three quantities. The problem concerning the gauge and the allotment of the original Chern number can be investigated similarly. \end{document}
\section{Introduction} One of the most significant challenges in wireless networks is to prolong the lifetime of energy-constrained networks powered by finite-capacity batteries. Although the lifetime of such networks can be extended by replacing or recharging the batteries, doing so may be inconvenient and costly. Therefore, energy harvesting has been considered a promising technique to prolong the network lifetime \cite{1,2}, since it provides wireless devices with the capability of perpetually charging their batteries by harvesting energy from the surrounding environment. In this context, mobile devices can harvest energy from different natural sources, e.g., solar, thermal, vibrational, electromagnetic, etc. \cite{3,4,5,6}. RF energy harvesting has recently become a growing research thrust, enabled by the design of novel harvesting circuitries which allow wireless devices to continuously harvest energy from the ambient radio environment. Significant research has been conducted on interference alignment networks with wireless energy transfer \cite{inter1,inter2,inter3,inter4}. Exploiting the fact that RF signals bear both energy and information at the same time, a dynamic simultaneous wireless information and power transfer scheme called SWIPT has been proposed in \cite{7,8,9,10,11,12}. SWIPT was first studied from an information-theoretic perspective in \cite{7,8}. The fundamental trade-off between simultaneously transmitting information and harvesting energy is characterized for narrowband noisy channels in \cite{7} and for frequency-selective channels in \cite{8}. Afterwards, from a communication-theoretic perspective, the fundamental trade-off between transmitting energy and transmitting information over a point-to-point noisy link was studied in \cite{9}.
Motivated by the fact that energy harvesting circuits are unable to harvest energy and decode information at the same time, the authors in \cite{10} proposed two practical receiver designs, namely, time switching and power splitting. In the time switching scheme, a receiving antenna periodically switches between the energy harvesting receiver and the information decoding receiver. On the other hand, in the power splitting scheme, the received signal is split into two streams with different power levels; one is sent to the energy harvesting receiver and the other to the information decoding receiver. In addition, \cite{11} introduced dynamic power splitting as a general SWIPT operation and proposed two practical SWIPT receiver architectures: 1) separated information and energy receivers and 2) integrated information and energy receivers. Moreover, SWIPT has been proposed and studied for orthogonal frequency division multiplexing (OFDM) systems in \cite{12}. Another line of research has recently considered RF-powered cognitive radio networks \cite{13,14,15}, whereby the secondary users are assumed to have RF energy harvesting capability so that they can harvest energy either from the RF signals of the primary users or from other ambient RF sources. The amount of harvested energy is then used for data transmission. First, \cite{13} studied the optimal mode selection policy of whether the secondary users should harvest RF energy or access the spectrum in each time slot in order to maximize the expected total throughput. The optimal spectrum sensing policy was investigated in \cite{14} to maximize the expected total throughput subject to two constraints, namely, an energy causality constraint and a collision constraint. The former guarantees that the total consumed energy is less than or equal to the total harvested energy, while the latter protects the primary user by guaranteeing a minimum QoS requirement.
In \cite{15}, the optimal transmission power and density for the cognitive nodes were derived in order to maximize the secondary network throughput under given outage probability constraints in both the primary and secondary networks. A new type of wireless networks, namely WPCNs, has been studied recently in \cite{16,17,18,19,20}. In WPCNs, wireless devices use the harvested RF energy to communicate with each other. WPCNs have been studied under various network setups; the wireless powered cellular network was investigated in \cite{16}, where power beacons are deployed randomly to charge the mobile devices. On the other hand, wireless powered sensor networks were investigated in \cite{17,18}, where a mobile charging vehicle moves around in order to continuously provide sensor nodes with wireless energy. Moreover, \cite{19} proposed a new routing metric for wireless powered sensor networks based on the charging ability of the sensor nodes. In addition, the optimal charging and transmission cycles, with the objective of enhancing the lifetime of the network under user-specified end-to-end constraints (throughput and latency), have been characterized. Motivated by the fact that wireless energy transfer directly impacts data communication, since they both share the same frequency band, \cite{20} proposed a distributed medium access protocol for efficiently sharing the radio resources between these two major functions. An alternative model for WPCNs has recently attracted considerable attention in the literature \cite{21,22,23,24,25,26,27}. In this particular model, users first harvest RF energy on the downlink from wireless energy signals broadcast by a base station (BS) or hybrid access point (HAP). Afterwards, users transmit their information signals to the HAP on the uplink using the energy harvested in the downlink phase, e.g., using TDMA in \cite{21}.
In addition, \cite{22} introduced user cooperation as a solution to the doubly near-far phenomenon that results in unfair rate allocation among users, as observed in \cite{21}. Furthermore, a full-duplex WPCN scheme has been introduced in \cite{23}. Taking into consideration energy causality constraints of practical significance, \cite{24} studied full-duplex WPCNs in which a user can only consume energy harvested before its allocated uplink time for data transmission. Cognitive radio WPCNs have been introduced in \cite{25}, where the WPCN shares the same spectrum, for both downlink wireless energy transfer and uplink data transmissions, with the primary wireless communication system. In addition, the authors proposed two models for spectrum sharing, namely, underlay and overlay based cognitive WPCN, depending on the type of information available to the cognitive WPCN about the primary wireless communication system. Motivated by the fact that the locations of HAPs and wireless energy nodes (WENs) have a significant impact on the WPCN performance, the optimal node placement has been investigated in \cite{26}. The network deployment cost was minimized by characterizing the minimum number of HAPs and WENs needed to achieve the performance requirements of wireless devices. WPCNs with two types of nodes, with and without RF energy harvesting capability, were introduced in \cite{27}. In this paper, we generalize conventional TDMA wireless networks to a new type of wireless networks coined g-WPCNs, where nodes are assumed to be equipped with RF energy harvesting circuitries along with constant energy supplies.
The prime motivation for this work is twofold: i) quantify the performance gains attributed to RF energy harvesting, when available to conventional TDMA wireless networks studied before, and ii) relax the strong assumption adopted widely in prior WPCN studies, whereby the user devices are solely operated by the inherently limited RF energy harvesting with no other sources of energy. Due to the limited amount of RF energy and the modest efficiency of harvesting circuitries, we argue that RF harvesting would predominantly serve as a supplementary energy source. Our prime objective is to optimize the design of g-WPCNs and characterize the gains obtained by the assumption that nodes have RF energy harvesting capabilities along with constant energy supplies, compared to conventional TDMA wireless networks (with only constant energy supplies, yet no energy harvesting) and WPCNs with only RF energy harvesting nodes \cite{21}. Our main contribution in this paper is multi-fold. First, we introduce a new, more realistic wireless network setting, coined g-WPCNs, in which all nodes are equipped with RF energy harvesting circuitries along with constant energy supplies. To the best of the authors' knowledge, this is the first generalized WPCN optimization framework in the open literature. Second, we formulate an optimization problem to maximize the sum throughput under the generalized problem setting. Furthermore, we show that the generalized optimization problem seamlessly reduces to two extreme special cases in the literature, namely, conventional TDMA wireless networks with no RF energy harvesting capability and standard WPCNs with only RF energy harvesting nodes. Third, we introduce WPCNs with two types of nodes, with and without RF energy harvesting capability, and characterize their optimal resource allocation policy in closed form.
Fourth, motivated by the fairness problem known for the sum throughput maximization objective, we formulate a maxmin problem for the generalized system setting. Finally, we establish the convexity of the formulated problems and solve efficiently for the optimal policy using standard techniques. Our numerical results show that the two extreme network settings, namely, WPCNs with only RF energy harvesting nodes and conventional TDMA no-harvesting wireless networks, serve as lower bounds on the performance of the generalized problem setting in terms of the maximum sum throughput and maxmin throughput. Moreover, the results reveal valuable insights and throughput-fairness trade-offs unique to our new problem setting. The rest of the paper is organized as follows. The system model is presented in Section~\ref{sec:sys}. In Section~\ref{sec:gen}, the sum throughput maximization problem for the generalized system model is formulated, convexity is established and an efficient algorithm is proposed to solve it. Furthermore, we show that formulations for extreme scenarios studied earlier in the literature fall as special cases of the generalized problem formulation proposed here. In Section~\ref{sec: WPCNs with two type of nodes}, the sum throughput maximization problem of WPCNs with two types of nodes, with and without RF energy harvesting capability, is formulated. Furthermore, the optimal resource allocation policy is characterized in closed form. The maxmin throughput optimization problem is formulated, its convexity is established and it is solved efficiently in Section~\ref{sec:maxmin}. Numerical results are presented in Section~\ref{sec:num}. Finally, Section~\ref{sec:con} concludes the paper and points out potential directions for future research.
\begin{figure} \centering \includegraphics[width=9cm,height= 6 cm]{general_system_model.pdf} \caption{Generalized WPCN where nodes are powered with two energy sources.} \label{fig:1} \end{figure} \vspace{-0.5 cm} \section{System Model} \label{sec:sys} \vspace{-0.2 cm} We study a generalized wireless powered communication network consisting of one BS and $K$ users, as shown in Fig. \ref{fig:1}. It is assumed that the BS and all users are each equipped with a single antenna, operate over the same frequency channel, and that all radios are half-duplex. Each user, denoted by $U_{i}$ for $i=1,\cdots, K$, is assumed to be equipped with a constant energy supply, and thus has an allowable amount of energy to be consumed in each slot, denoted by $E_{i}^{b}$ \cite{30,31,32}. Furthermore, each user is assumed to be equipped with an RF energy harvesting circuitry. In this paper, one of our main objectives is to characterize the performance gains attributed to having RF energy harvesting capabilities, beyond conventional TDMA-based networks with no harvesting capabilities. The network operates in a TDMA fashion. For convenience, we assume the block (slot) duration is normalized to one. During the first $\tau_{0} \in [0,1]$ fraction of time, the BS broadcasts an energizing signal over the downlink so that each $U_{i}$ can harvest a certain amount of energy. The remaining $1-\tau_{0}$ fraction of time is allocated to uplink data transmissions, where $U_{i}$ is assigned a certain portion of time denoted by $\tau_{i}$\footnote{Note that slot time allocations are assumed to take continuous values. This, in turn, requires accurate synchronization methods to implement such a scheme in realistic systems.}, for $i=1, \cdots, K$. Hence, the slot is split as follows. \begin{equation} \label{eq1} \sum_{i=0}^{K}{\tau_{i}} \leq 1.
\end{equation} The downlink channel coefficient from the BS to $U_{i}$ and the uplink channel coefficient from $U_{i}$ to the BS are denoted by the complex random variables $h_{i}^{\prime}$ and $g_{i}^{\prime}$, respectively, with channel power gains $h_{i} = \vert h_{i}^{\prime}\vert^{2}$ and $g_{i} = \vert g_{i}^{\prime} \vert^{2}$. It is assumed that all downlink and uplink channels are quasi-static flat fading, i.e., they remain constant over a time slot, but can change independently from one slot to another. The BS has perfect knowledge of the channel state information (CSI) to all users (i.e., all channel coefficients) at the beginning of each slot\footnote{The assumption that CSI is perfectly pre-estimated at the BS at the beginning of each slot is an idealization of actual practical systems. This calls for estimators with high accuracy to sufficiently reduce the potential estimation errors.}. The transmitted energy signal from the BS to all users, over the downlink, is denoted by $x_{B}$ with fixed average power, $P_B$, i.e., $\mathbb{E}\left( \vert x_{B} \vert^{2}\right) = P_{B}$. Hence, the energy harvested by an arbitrary node, $U_{i}$, in the downlink phase is given by \begin{equation} \label{eq2} E^h_{i} = \eta_{i} P_{B} h_{i} \tau_{0} , \end{equation} where $\eta_{i} \in (0,1)$\footnote{Note that this paper falls within the context of WPCNs where the efficiency of energy harvesting circuitries is assumed to be linear \cite{21,22,23,24,25,26,27}. Incorporating non-linear energy harvesting efficiency into our model is a challenging direction for future work.} is the efficiency of the RF energy harvesting circuitry \cite{28,29} at $U_{i}$. The value of $\eta_{i}$ depends on the efficiency of the harvesting antenna, the impedance matching circuit and the voltage multipliers.
Therefore, the consumed energy per slot for uplink data transmission by $U_{i}$, $E_{i}$, is limited by \begin{equation} \label{eq3} E_{i} \leq E^b_{i} + E^h_{i},\; i=1, \cdots, K. \end{equation} \begin{table}[t!]\caption{Table of notation} \centering \begin{center} \resizebox{0.5\textwidth}{!}{ \begin{tabular}{ {c} | {c} } \hline\hline \textbf{Notation} & \textbf{Description} \\ \hline $E_{i}^{b}$; $E_{i}^{h}$ & Allowable amount of energy to be consumed by $U_{i}$ in each slot; amount of harvested energy by $U_{i}$ \\ \hline $\tau_{0}$; $\tau_i$ & Downlink energy transfer fraction of time; allocated portion of time to $U_{i}$ for uplink data transmissions\\ \hline $h_{i}$; $g_{i}$ & Downlink channel power gain from the BS to $U_{i}$; uplink channel power gain from $U_{i}$ to the BS\\ \hline $P_{B}$; $E_{i}; \sigma^2$ & Downlink energy transmit power by the BS; uplink consumed energy by $U_{i}$ for data transmission; noise power\\ \hline $E_{max}$ & Maximum allowable consumed energy by all users per slot\\ \hline $\tau_{1,i}$; $\tau_{2,j}$ & Uplink allocated time for $U_{1,i}$; uplink allocated time for $U_{2,j}$\\ \hline $\bar{E}$ & Amount of energy drawn by each $U_{2,j}$ from its dedicated energy supply within its assigned $\tau_{2,j}$\\ \hline $\eta_{i}$; $\beta$ & Efficiency of $U_{i}$'s RF energy harvesting circuitry; pathloss exponent\\ \hline $\Gamma$ & Signal to noise ratio gap due to a practical modulation and coding scheme used.\\ \hline\hline \end{tabular}} \end{center} \label{tab:TableOfNotations} \end{table} According to Shannon's formula, the achievable uplink throughput of $U_{i}$ in bits/second/Hz is given by \begin{equation} \label{eq5} \begin{aligned} R_{i} \left(E_{i},\tau_{i}\right) & = \tau_{i} \log_{2} \left(1 + \dfrac{g_{i} E_{i}}{\Gamma \sigma^{2} \tau_{i}}\right)\\ & =\tau_{i} \log_{2} \left(1 + \alpha_{i} \dfrac{E_{i}}{\tau_{i}}\right), \end{aligned} \end{equation} where $\sigma^{2}$ is the noise power at the BS, $\alpha_{i} = 
\dfrac{g_{i}}{\Gamma \sigma^{2}}$ for $i=1, \cdots, K,$ and $\Gamma$ denotes the signal to noise ratio gap due to the practical modulation and coding scheme used. The notation used in this paper is summarized in Table~\ref{tab:TableOfNotations}. \section{Sum throughput maximization} \label{sec:gen} In this section, we formulate the sum throughput maximization problem for the generalized WPCN setting shown in Fig.~\ref{fig:1}, i.e., a generalized setting of conventional TDMA-based wireless networks whereby all nodes have RF energy harvesting capabilities along with constant energy supplies, and establish its convexity, which facilitates an efficient solution using standard optimization solvers. In particular, we find the optimal duration $\tau_0$ for harvesting as well as the durations, $\tau_{i}$, for uplink data transmissions and the optimal energy consumed by each user per slot, $E_{i}$, that maximize the system sum throughput subject to a system energy constraint \cite{27} on the total allowable energy consumed by all users per slot, denoted by $E_{max}$, the transmission slot duration constraint and the total allowable energy consumed by each user per slot. The motivation behind introducing the system energy constraint is twofold: i) it guarantees a fair comparison between our proposed g-WPCNs and prior wireless networks, namely, conventional TDMA-based wireless networks (no RF energy harvesting) and WPCNs with RF energy harvesting nodes only, by setting $E_{max}$ to the average total amount of consumed energy in those prior wireless networks, and ii) it characterizes the maximum sum throughput that can be achieved by g-WPCNs via allocating more energy to the users that are closer to the BS, and hence experience better channels, compared to other users, as will be highlighted in Section~\ref{sec:num}.
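Before formulating the problem, note that the per-user quantities of Section~\ref{sec:sys}, i.e., the harvested energy in (\ref{eq2}) and the throughput in (\ref{eq5}), can be evaluated directly. The following Python sketch does so; the function names and all numeric values are our own illustrative choices, not part of the system model.

```python
import math

def harvested_energy(eta, P_B, h, tau0):
    # E_i^h = eta_i * P_B * h_i * tau_0 (linear harvesting model)
    return eta * P_B * h * tau0

def throughput(E, tau, g, Gamma=1.0, sigma2=1e-3):
    # R_i = tau_i * log2(1 + alpha_i * E_i / tau_i), alpha_i = g_i / (Gamma * sigma2)
    if tau <= 0.0:
        return 0.0
    alpha = g / (Gamma * sigma2)
    return tau * math.log2(1.0 + alpha * E / tau)

# Hypothetical numbers: eta = 0.5, P_B = 10, h = 0.05, tau_0 = 0.2
Eh = harvested_energy(0.5, 10.0, 0.05, 0.2)  # energy harvested during tau_0
R = throughput(Eh, 0.4, 0.03)                # rate achieved over tau_i = 0.4
```

As expected from (\ref{eq5}), the rate grows with the consumed energy $E_i$ and vanishes as the allocated time $\tau_i$ goes to zero.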
Therefore, based on (\ref{eq1}) - (\ref{eq5}), the problem of maximizing the sum throughput per slot can be formulated as follows. \begin{align} \nonumber & \textbf{P1}: \hspace{0.5cm} && \nonumber\underset{\mathbf{E},\pmb{\tau}}{\text{max}}\;\; \sum_{i=1}^{K}{\tau_{i} \log_{2} \left(1 + \alpha_{i} \dfrac{E_{i}}{\tau_{i}}\right)} \\ \label{eq6a}&\text{s.t.} &&\sum_{i=1}^{K}{E_{i}} \leq E_{max}, \\ \label{eq6b} &&&\sum_{i=0}^{K}{\tau_{i}} \leq 1, \\ \label{eq6c} &&& \pmb{\tau} \succeq \mathbf{0}, \\ \label{eq6d}&&& 0 \leq E_{i} \leq E^b_{i} + \eta_{i} P_{B} h_{i} \tau_{0},\hspace{0.5 cm} i=1, \cdots,K, \end{align} where $\pmb{\tau}=[\tau_{0}, \cdots,\tau_{K}]$, $\mathbf{E}=[E_{1}, \cdots, E_{K}]$, $\mathbf{0}$ is a vector of zeros that has the same size as $\pmb{\tau}$ and the symbol $\succeq$ represents the element-wise inequality.\\ \vspace{-0.4 cm} \begin{theorem}\label{th:1} \textbf{P1} is a convex optimization problem. \end{theorem} \begin{IEEEproof} Please refer to Appendix A. \end{IEEEproof} Based on Theorem~\ref{th:1}, \textbf{P1} is a convex optimization problem and, hence, can be solved efficiently using standard convex optimization solvers. Furthermore, it can be easily shown that there exists a $[\mathbf{E}\;\pmb{\tau}]$ policy that strictly satisfies all constraints of $\textbf{P1}$. Hence, according to Slater's condition \cite{35}, strong duality holds for this problem; therefore, the Karush-Kuhn-Tucker (KKT) conditions are necessary and sufficient for the global optimality of $\textbf{P1}$. However, due to the complexity of the problem, there are no closed-form expressions that solve the KKT conditions. Therefore, in order to gain more insight into the optimal policy, we propose an algorithm based on an alternating optimization approach for solving \textbf{P1}. First, we investigate the optimal time allocations ($\pmb{\tau}^{*}$) for a given $\mathbf{E}$ that satisfies (\ref{eq6a}) and $0 \leq E_{i} < E^b_{i} + \eta_{i} P_{B} h_{i}, i=1, \cdots,K$.
Next, we get the optimal consumed energy allocations ($\mathbf{E}^{*}$) for a given $\pmb{\tau}$ that satisfies (\ref{eq6b}) - (\ref{eq6d}). Finally, the optimal time and energy allocations for \textbf{P1} are obtained by employing the alternating optimization procedure, as established by the following two Theorems and Algorithm 1. \begin{theorem}\label{th:4} Given $\mathbf{E}$ that satisfies (\ref{eq6a}) and $0 \leq E_{i} < E^b_{i} + \eta_{i} P_{B} h_{i}, i=1, \cdots,K$, the optimal time allocations are given by \begin{equation} \label{eq66} \tau_{0}^{*} = \text{min} \left[ \left(\underset{i}{\text{max}}\lbrace\dfrac{E_{i} - E^b_{i}}{\eta_{i} P_{B} h_{i}}\rbrace\right)^{+},\; 1 \right], \end{equation} \begin{equation} \label{eq67} \tau_{i}^{*}=\dfrac{\alpha_{i} E_{i} \left(1 - \tau_{0}^{*}\right)}{\sum_{j=1}^{K}{\alpha_{j} E_{j}}},\; i= 1, \cdots,K, \end{equation} where $(x)^{+} = \text{max}(0,x)$. \end{theorem} \begin{IEEEproof} Please refer to Appendix B. \end{IEEEproof} \begin{theorem}\label{th:5} Given $\pmb{\tau}$ that satisfies (\ref{eq6b}) - (\ref{eq6d}), the optimal energy allocations are given by \begin{equation} \label{eq68} E_{i}^{\ast} = \begin{cases} E^b_{i} + \eta_{i} P_{B} h_{i} \tau_{0},\;\text{if} \;E_{max} \geq E_{tot}\\ \text{min}\left[\left(- \dfrac{\tau_{i}}{\alpha_{i}}\left(\dfrac{\alpha_{i}}{\lambda^{*} \ln(2)} + 1\right)\right)^{+},\;E^{b}_{i} + \eta_{i} P_{B} h_{i} \tau_{0} \right],\;\text{otherwise} \end{cases} \end{equation} for $i = 1,\cdots, K$, where $E_{tot} = \sum_{j = 1}^{K}{\left(E^b_{j} + \eta_{j} P_{B} h_{j} \tau_{0}\right)}$ is the total amount of energy available for all users to be consumed per slot, and $\lambda^{*}$ satisfies the equality constraint $\sum_{i=1}^{K}{E_{i}^{*}} = E_{max}$. \end{theorem} \begin{IEEEproof} Please refer to Appendix C. \end{IEEEproof} \begin{algorithm}[h] \caption{\textbf{P1} solver.}\label{euclid} \begin{algorithmic} \STATE 1. Initialize: $t = 0$, $\mathbf{E} = \mathbf{E}^{(t)}$. \STATE 2. 
Repeat \STATE \hspace{1cm} (1) Compute $\pmb{\tau}^{(t+1)}$ from (\ref{eq66}) and (\ref{eq67}) with given $\mathbf{E}^{(t)}$ . \STATE \hspace{1cm} (2) Compute $\mathbf{E}^{(t+1)}$ from (\ref{eq68}) with given $\pmb{\tau}^{(t+1)}$. \STATE 3. Until $[\pmb{\tau}^{(t+1)}\; \pmb{E}^{(t+1)}]$ converges to a predetermined accuracy. \STATE 4. Set $\pmb{\tau}^{*} = \pmb{\tau}^{(t+1)}$ and $\mathbf{E}^{*} = \mathbf{E}^{(t+1)}$. \end{algorithmic} \end{algorithm} According to Theorem~\ref{th:4} and Theorem~\ref{th:5}, for initial energy allocations ($\mathbf{E}^{(0)}$), the optimal time allocations $\pmb{\tau}^{(1)}$ can be obtained by (\ref{eq66}) and (\ref{eq67}). Afterwards, $\pmb{\tau}^{(1)}$ can be used to obtain $\mathbf{E}^{(1)}$ from (\ref{eq68}), and so on until $[\pmb{\tau}^{(t+1)}\; \pmb{E}^{(t+1)}]$ converges to a predetermined accuracy. Therefore, $\pmb{\tau}^{(t+1)}$ and $\mathbf{E}^{(t+1)}$ will be the optimal time and energy allocations for \textbf{P1}, respectively. The proposed alternating optimization approach is guaranteed to converge to the optimal solution of \textbf{P1} \cite{boyd2011alternating} since the objective function of \textbf{P1} is: 1) a concave function jointly in $\pmb{\tau}$ and $\mathbf{E}$ and 2) a smooth function in both $\pmb{\tau}$ and $\mathbf{E}$. At each iteration of Algorithm 1, the computational complexity of step 2.(1) is $\mathcal{O}(K+1)$ \cite{23} to obtain $\pmb{\tau}$ using (\ref{eq66}) and (\ref{eq67}). Furthermore, in step 2.(2), $\mathcal{O}(K)$ computations are required for computing $\mathbf{E}$ using (\ref{eq68}). Therefore, the complexity of one iteration of Algorithm 1 is $\mathcal{O}(K+1)$, i.e., linear in the number of users. 
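As a concrete illustration of Algorithm 1, the following Python sketch alternates between the time-allocation step of (\ref{eq66})-(\ref{eq67}) and the energy-allocation step of (\ref{eq68}), with the multiplier $\lambda^{*}$ found by bisection on $\sum_{i}E_{i}^{*}=E_{max}$. All function names, the initialization rule and the numeric inputs in the usage example are our own illustrative assumptions.

```python
import math

def solve_P1(Eb, eta, h, g, P_B, E_max, Gamma=1.0, sigma2=1e-3, iters=50):
    # Alternating optimization sketch for P1 (Algorithm 1).
    K = len(Eb)
    alpha = [g[i] / (Gamma * sigma2) for i in range(K)]

    def tau_step(E):
        # Optimal time allocations for fixed energies E
        tau0 = min(max(max((E[i] - Eb[i]) / (eta[i] * P_B * h[i])
                           for i in range(K)), 0.0), 1.0)
        s = sum(alpha[i] * E[i] for i in range(K))
        tau = [alpha[i] * E[i] * (1.0 - tau0) / s for i in range(K)]
        return tau0, tau

    def energy_step(tau0, tau):
        # Optimal energies: cap at E_i^b + eta_i P_B h_i tau_0, then
        # bisect on lambda so that the energies sum to E_max.
        cap = [Eb[i] + eta[i] * P_B * h[i] * tau0 for i in range(K)]
        if E_max >= sum(cap):
            return cap
        def E_of(lam):
            return [min(max(-tau[i] / alpha[i]
                            * (alpha[i] / (lam * math.log(2)) + 1.0), 0.0),
                        cap[i]) for i in range(K)]
        lo, hi = -1e9, -1e-12  # lambda* < 0; sum(E_of) increases with lambda
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if sum(E_of(mid)) < E_max:
                lo = mid
            else:
                hi = mid
        return E_of(hi)

    # initial feasible energies strictly below each user's maximum budget
    E = [min(Eb[i] + 0.5 * eta[i] * P_B * h[i], E_max / K) for i in range(K)]
    for _ in range(iters):
        tau0, tau = tau_step(E)
        E = energy_step(tau0, tau)
    rate = sum(tau[i] * math.log2(1.0 + alpha[i] * E[i] / tau[i])
               for i in range(K) if tau[i] > 0.0)
    return tau0, tau, E, rate
```

Each iteration costs $\mathcal{O}(K)$ work per step (plus the bisection on $\lambda$), matching the linear per-iteration complexity noted above.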
Next, we demonstrate the generality of \textbf{P1} through characterizing the conditions under which the sum throughput maximization problem for extreme scenarios known in the literature become special cases of our generalized formulation, namely, conventional TDMA-based wireless networks (no RF energy harvesting) and WPCNs with RF energy harvesting nodes only. \subsection{Prior formulations as special cases of \textbf{P1}} \label{sec:special} A salient feature of the problem formulation in \textbf{P1} is its generality manifested through capturing the fact that wireless nodes in envisioned WPCNs are typically powered using multiple energy sources, namely, two sources (constant energy supplies and RF energy harvesting circuitries). This, in turn, gives rise to the key observation that related prior work would fall as special cases of \textbf{P1}. In this section, we present two conventional scenarios studied earlier in the literature as special cases of \textbf{P1} and introduce a third, more practical, special case in Section~\ref{sec: WPCNs with two type of nodes}. \subsubsection{Conventional TDMA-based wireless networks (no RF energy harvesting)} In this scenario, all wireless nodes are legacy and, hence, are not equipped with RF energy harvesting circuitries, $(\tau^*_0=0)$, yet, have constant energy supplies. Hence, each user has an allowable amount of energy to be consumed in each slot, $E_{i}^{b}$ \cite{30,31,32}. Therefore, \textbf{P1} will reduce to the sum throughput maximization problem in conventional TDMA-based wireless networks as follows. 
\begin{equation} \label{eq7} \begin{aligned} & \textbf{P2}: \hspace{0.5 cm} &&\underset{\mathbf{E},\pmb{\tau^{\prime}}}{\text{max}}\;\; \sum_{i=1}^{K}{\tau_{i} \log_{2} \left(1 + \alpha_{i} \dfrac{E_{i}}{\tau_{i}}\right)} \\ &\text{s.t.} &&\sum_{i=1}^{K}{\tau_{i}} \leq 1, \\ &&&\sum_{i=1}^{K}{E_{i}} \leq E_{max}, \\ &&& \pmb{\tau^{\prime}} \succeq 0,\\ &&& 0 \leq E_{i} \leq E^b_{i},\hspace{0.5 cm} i=1, \cdots,K, \end{aligned} \end{equation} where $\pmb{\tau^{\prime}}=[\tau_{1}, \cdots,\tau_{K}]$. Based on Theorem \ref{th:1}, $\textbf{P2}$ is a convex optimization problem, and thus can be solved using standard convex optimization techniques. Following the proofs of Theorem \ref{th:4} and Theorem~\ref{th:5}, Algorithm 1 can solve $\textbf{P2}$ using the following expressions \begin{equation} \tau_{i}^{*}=\dfrac{\alpha_{i} E_{i}}{\sum_{j=1}^{K}{\alpha_{j} E_{j}}},\; i= 1, \cdots,K, \end{equation} \begin{equation} E_{i}^{\ast} = \begin{cases} E^b_{i},\; \text{if} \;E_{max} \geq \bar{E}_{tot}\\ \text{min}\left[\left(- \dfrac{\tau_{i}}{\alpha_{i}}\left(\dfrac{\alpha_{i}}{\lambda^{*} \ln(2)} + 1\right)\right)^{+},\;E^{b}_{i} \right],\; \text{otherwise} \end{cases} \end{equation} for $i = 1,\cdots, K$, where $\bar{E}_{tot} = \sum_{j = 1}^{K}{E^b_{j}}$ is the total amount of energy available for all users to be consumed per slot, and $\lambda^{*}$ satisfies the equality constraint $\sum_{i=1}^{K}{E_{i}^{*}} = E_{max}$. \subsubsection{WPCNs with RF energy harvesting nodes only} According to the setting of \cite{21}, all nodes have RF energy harvesting capability only, with no constant energy supplies, which implies that $E^b_{i} = 0$ in \textbf{P1}. Furthermore, all energy harvested by a user in a slot is fully consumed for uplink data transmission in the same slot, i.e., $E_{i}^{*} = \eta_{i} P_{B} h_{i} \tau_{0}$. In addition, there is no limitation on the allowable consumed energy per slot, i.e., starting from \textbf{P1}, we have that $E_{max} = \infty$.
Therefore, \textbf{P1} reduces to the optimal time allocation problem maximizing the sum throughput in WPCNs with RF energy harvesting nodes only \cite{21} as follows. \begin{equation} \label{eq10} \begin{aligned} & \textbf{P3}: \hspace{0.5 cm} && \underset{\pmb{\tau}}{\text{max}}\;\; \sum_{i=1}^{K}{\tau_{i} \log_{2} \left(1 + \gamma_{i} \dfrac{\tau_{0}} {\tau_{i}}\right)} \\ & \text{s.t.} & & \sum_{i=0}^{K}{\tau_{i}} \leq 1, \\ &&& \pmb{\tau} \succeq \mathbf{0}, \\ \end{aligned} \end{equation} where $\gamma_{i} = \dfrac{\eta_{i} h_{i} g_{i} P_{B}}{\Gamma \sigma^{2}}$. The optimal time allocations of \textbf{P3} are given, according to \cite{21}, by \begin{equation} \label{eq11} \tau_{i}^{\ast} = \begin{cases} \dfrac{x^{*} - 1}{A + x^{*} - 1},\; \; i = 0 \\ \dfrac{\gamma_{i}}{A + x^{*} - 1},\; \; i = 1,\cdots,K, \end{cases} \end{equation} where $A = \sum_{i = 1}^{K}{\gamma_{i}}$ and $x^{*} > 0$ is the solution of $x\ln{x} - x + 1 = A$. It is obvious by now that the problem formulation \textbf{P1} is, indeed, a generalized formulation that encompasses two well-known problem settings in the literature, namely, conventional TDMA with no RF energy harvesting capability at the nodes and WPCNs with all nodes relying solely on RF energy harvesting. Furthermore, we show in the next section that \textbf{P1} extends to cover a scenario of practical significance, introduced in \cite{27}, with two types of nodes, namely, RF energy harvesting nodes and legacy (no RF energy harvesting capability) nodes.
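Evaluating the closed form in (\ref{eq11}) requires solving $x\ln x - x + 1 = A$; since the left-hand side is strictly increasing for $x > 1$, a simple bisection suffices. The Python sketch below does this for hypothetical $\gamma_i$ values (the function name and inputs are illustrative assumptions).

```python
import math

def solve_P3(gammas):
    # Optimal time allocations for P3: invert x*ln(x) - x + 1 = A, A = sum(gamma_i)
    A = sum(gammas)
    f = lambda x: x * math.log(x) - x + 1.0
    lo, hi = 1.0, 2.0
    while f(hi) < A:          # bracket the root; f is increasing for x > 1
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < A:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    denom = A + x - 1.0
    return (x - 1.0) / denom, [gi / denom for gi in gammas]

# Hypothetical channel figures: gamma = [1.0, 2.0]
tau0, taus = solve_P3([1.0, 2.0])
```

Note that the allocations in (\ref{eq11}) use the entire slot, i.e., $\tau_0^* + \sum_i \tau_i^* = 1$, and each data-transmission share is proportional to $\gamma_i$.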
\begin{figure} \centering \includegraphics[width=9 cm,height= 6 cm]{WPCN_with_two_types_of_nodes.pdf} \caption{WPCN with heterogeneous nodes.} \label{fig:2} \end{figure} \section{WPCNs with heterogeneous nodes} \label{sec: WPCNs with two type of nodes} Motivated by the fact that RF energy harvesting is a new technology that may not be available to all the nodes in the network, we study in this section a practically viable network setting, namely, WPCNs with heterogeneous nodes. This constitutes an important step towards studying more realistic WPCNs, since RF energy harvesting technology is expected to penetrate the wireless industry only gradually. As shown in Fig. \ref{fig:2}, the network consists of two types of nodes; one type is assumed to have RF energy harvesting capability and no other energy sources (Type I), denoted by $U_{1,i}$ for $i=1,\cdots,M$, while the other type comprises legacy nodes that are assumed not to have RF energy harvesting capability and are equipped with continuous energy supplies (Type II), denoted by $U_{2,j}$ for $j=1,\cdots,N$. Following the WPCN operational regime, the BS with fixed power $(P_{B})$ broadcasts an energizing signal in the downlink over a $\tau_{0}$ fraction of time. Afterwards, $U_{1,i}$ and $U_{2,j}$ are allocated portions of time for uplink data transmission, denoted by $\tau_{1,i}$ and $\tau_{2,j}$, respectively. It then follows that \begin{equation} \label{eq12} \tau_{0}+\sum_{i=1}^{M}{\tau_{1,i}}+\sum_{j=1}^{N}{\tau_{2,j}} \leq 1. \end{equation} The downlink channel power gain from the BS to $U_{1,i}$, the uplink channel power gain from $U_{1,i}$ to the BS and the uplink channel power gain from $U_{2,j}$ to the BS are denoted by $h_{1,i}$, $g_{1,i}$ and $g_{2,j}$, respectively.
Therefore, the achievable uplink throughputs of $U_{1,i}$ and $U_{2,j}$ in bits/second/Hz are given by \begin{equation} \label{eq13} \begin{aligned} R_{1,i} \left(\tau_{0},\tau_{1,i}\right) & = \tau_{1,i} \log_{2} \left(1 + \dfrac{\eta_{i} P_{B} h_{1,i} g_{1,i} \tau_{0}}{\Gamma \sigma^{2}\tau_{1,i}}\right)\\ & =\tau_{1,i} \log_{2} \left(1 + \gamma_{i} \dfrac{\tau_{0}}{\tau_{1,i}}\right), \end{aligned} \end{equation} \begin{equation} \label{eq14} \begin{aligned} R_{2,j} \left(\bar{E},\tau_{2,j}\right) & = \tau_{2,j} \log_{2} \left(1 + \dfrac{g_{2,j} \bar{E}}{\Gamma \sigma^{2}\tau_{2,j}}\right) \\ & =\tau_{2,j} \log_{2} \left(1 + \theta_{j} \dfrac{\bar{E}}{\tau_{2,j}}\right), \end{aligned} \end{equation} respectively, where $\bar{E}$ is the energy drawn by each $U_{2,j}$ from its dedicated energy supply within its assigned $\tau_{2,j}$ fraction of time, $\gamma_{i} = \dfrac{\eta_{i} h_{1,i} g_{1,i} P_{B}}{\Gamma \sigma^{2}}$ and $\theta_{j} = \dfrac{g_{2,j}}{\Gamma \sigma^{2}}$ for $i=1, \cdots, M$, $j=1, \cdots, N$. Therefore, from (\ref{eq13}) and (\ref{eq14}), the generalized formulation \textbf{P1} reduces to \begin{equation} \label{eq15} \begin{aligned} & \textbf{P4}: \; &&\underset{\pmb{\tau^{\prime \prime}},\bar{E}}{\text{max}} \;\; \sum_{i=1}^{M}{R_{1,i} \left(\tau_{0},\tau_{1,i}\right)} + \sum_{j=1}^{N}{R_{2,j} \left(\bar{E},\tau_{2,j}\right)} \\ & \text{s.t.} & & \tau_{0}+\sum_{i=1}^{M}{\tau_{1,i}}+\sum_{j=1}^{N}{\tau_{2,j}} \leq 1, \\ &&& a\tau_{0} + N \bar{E} \leq E_{max}, \\ &&& \pmb{\tau^{\prime \prime}} \succeq \mathbf{0}, \\ &&& \bar{E} \geq 0, \end{aligned} \end{equation} where $\pmb{\tau^{\prime \prime}}=[\tau_{0}, \tau_{1,1}, \cdots,\tau_{1,M}, \tau_{2,1}, \cdots,\tau_{2,N}]$ and $a = \sum_{i=1}^{M}{\eta_{i} P_{B} h_{1,i}}$.\\ In the following theorem, we characterize the optimal solution for \textbf{P4} in closed form, which is one of the main contributions of this paper.
\begin{theorem} \label{th:3} For $E_{max} > 0$, the optimal time and energy allocations of \textbf{P4} are given by (\ref{eq16}) - (\ref{eq19}) \begin{figure*} \begin{equation} \label{eq16} \tau_{0}^{\ast} = \begin{cases} \begin{aligned} &\min \left[\dfrac{x^{*} - 1}{A_{1} + x^{*} - 1} , \dfrac{E_{max}}{a}\right],&& \text{if} \; E_{max} \leq \dfrac{a(x_{1}^{*} - 1)}{A_{1} + x_{1}^{*} - 1} \; \text{and} \;A_{1} \geq\dfrac{a}{N} A_{2} \\ &\dfrac{N\left(x_{1}^{\ast} - 1 \right)- E_{max} A_{2} }{N\left(x_{1}^{\ast} - 1 + A_{1}\right) - a A_{2} },&& \text{if} \; \dfrac{a(x_{1}^{*} - 1)}{A_{1} + x_{1}^{*} - 1} \leq E_{max} \leq \dfrac{N}{A_{2}}(x_{1}^{*} - 1) \; \text{and} \;A_{1} \geq \dfrac{a}{N} A_{2}\\ &0 ,&& \text{if} \; \left( E_{max} \geq \dfrac{N}{A_{2}}(x_{1}^{*} - 1)\; \text{and} \;A_{1} \geq\dfrac{a}{N} A_{2} \right) \\ &&& \hspace{2 cm}\text{or} \;\left( A_{1} < \dfrac{a}{N} A_{2}\right ) \end{aligned} \end{cases} \end{equation} \begin{equation} \label{eq17} \tau_{1,i}^{\ast} = \begin{cases} \begin{aligned} &\max \left[\dfrac{\gamma_{i}}{A_{1} + x^{*} - 1} , \dfrac{\gamma_{i}}{A_{1}}\left(1 - \dfrac{E_{max}}{a}\right)\right],&&\text{if} \; E_{max} \leq \dfrac{a(x_{1}^{*} - 1)}{A_{1} + x_{1}^{*} - 1} \; \text{and} \;A_{1} \geq\dfrac{a}{N} A_{2} \\ &\dfrac{\gamma_{i} \left(N\left(x_{1}^{\ast}- 1\right) - E_{max} A_{2} \right) }{\left(x_{1}^{\ast} - 1\right) \left(N\left(x_{1}^{\ast} - 1 + A_{1}\right) - a A_{2} \right)},&&\text{if} \; \dfrac{a(x_{1}^{*} - 1)}{A_{1} + x_{1}^{*} - 1} \leq E_{max} \leq \dfrac{N}{A_{2}}(x_{1}^{*} - 1) \\ &&& \hspace{2 cm}\text{and} \;A_{1} \geq \dfrac{a}{N} A_{2} \\ &0,&&\text{if} \; \left(E_{max} \geq \dfrac{N}{A_{2}}(x_{1}^{*} - 1)\; \text{and} \;A_{1} \geq\dfrac{a}{N} A_{2}\right)\\ &&& \hspace{2 cm}\text{or}\left(A_{1} < \dfrac{a}{N} A_{2}\right) \end{aligned} \end{cases} \end{equation} \begin{equation} \label{eq18} \tau_{2,j}^{\ast} = \begin{cases} \begin{aligned} &0,&&\text{if} \; E_{max} \leq \dfrac{a(x_{1}^{*} - 1)}{A_{1} + 
x_{1}^{*} - 1} \; \text{and} \;A_{1} \geq\dfrac{a}{N} A_{2} \\ &\dfrac{\theta_{j} \left(E_{max}\left(x_{1}^{\ast} - 1 + A_{1} \right) - a \left(x_{1}^{\ast} - 1\right)\right) }{\left(x_{1}^{\ast} - 1\right)\left(N\left(x_{1}^{\ast} - 1 + A_{1}\right) - a A_{2} \right)}, &&\text{if} \; \dfrac{a(x_{1}^{*} - 1)}{A_{1} + x_{1}^{*} - 1} \leq E_{max} \leq \dfrac{N}{A_{2}}(x_{1}^{*} - 1) \\ &&& \hspace{2 cm}\text{and} \;A_{1} \geq \dfrac{a}{N} A_{2} \\ &\dfrac{\theta_{j}}{A_{2}},&&\text{if} \; \left(E_{max} \geq \dfrac{N}{A_{2}}(x_{1}^{*} - 1)\; \text{and} \;A_{1} \geq\dfrac{a}{N} A_{2}\right)\\ &&& \hspace{2 cm}\text{or}\left(A_{1} < \dfrac{a}{N} A_{2}\right) \end{aligned} \end{cases} \end{equation} \begin{equation} \label{eq19} \bar{E}^{\ast} = \begin{cases} \begin{aligned} &0,&&\text{if} \; E_{max} \leq \dfrac{a(x_{1}^{*} - 1)}{A_{1} + x_{1}^{*} - 1} \; \text{and} \;A_{1} \geq\dfrac{a}{N} A_{2} \\ &\dfrac{E_{max}\left(x_{1}^{\ast} - 1 + A_{1} \right) - a \left(x_{1}^{\ast} - 1\right) }{N\left(x_{1}^{\ast} - 1 + A_{1}\right) - a A_{2}},&&\text{if} \; \dfrac{a(x_{1}^{*} - 1)}{A_{1} + x_{1}^{*} - 1} \leq E_{max} \leq \dfrac{N}{A_{2}}(x_{1}^{*} - 1) \\ &&& \hspace{2 cm}\text{and} \;A_{1} \geq \dfrac{a}{N} A_{2} \\ &\dfrac{E_{max}}{N},&&\text{if} \; \left(E_{max} \geq \dfrac{N}{A_{2}}(x_{1}^{*} - 1)\; \text{and} \;A_{1} \geq\dfrac{a}{N} A_{2}\right)\\ &&& \hspace{2 cm}\text{or} \;\left(A_{1} < \dfrac{a}{N} A_{2}\right) \end{aligned} \end{cases} \end{equation} \hrulefill \end{figure*} for $i=1,\cdots,M$ and $j=1,\cdots, N$, where $A_{1} = \sum_{i=1}^{M}{\gamma_{i}}$, $A_{2} = \sum_{j=1}^{N}{\theta_{j}}$, $x_{1}^{\ast} > 1$ is the solution of $f(x_{1}) = A_{1} - \dfrac{a}{N} A_{2}$ and $x^{\ast} > 1$ is the solution of $f(x) = A_{1}$, where \begin{equation} \label{eq20} f(x) = x \ln(x) - x + 1.\\ \end{equation} \end{theorem} \begin{IEEEproof} Please refer to Appendix D. 
\end{IEEEproof} To gain more insight into the solution given in Theorem~\ref{th:3}, we next consider a simple WPCN with only two users; one user of each type mentioned before. \subsection*{A Two-User Example} With the objective of capturing the optimality criteria of $\textbf{P4}$, we study a simple WPCN of only two nodes where $M=1$ and $N=1$. Referring to Theorem~\ref{th:3}, a few key observations about the optimal solution are now in order. First, the energy harvesting node is allocated a portion of the slot duration (either for harvesting $\tau_{0}$ or for data transmission $\tau_{1,1}$) only if its uplink channel power gain $(g_{1,1})$ is greater than the channel power gain of the legacy node $(g_{2,1})$. Otherwise, the whole slot and the total allowable energy consumption per slot $(E_{max})$ are assigned to the legacy node. Second, for $g_{1,1} \geq g_{2,1}$, the portion of time allocated to the energy harvesting node depends on $E_{max}$. Based on the value of the maximum system energy consumption allowed per slot, $E_{max}$, three different cases arise as follows. For small $E_{max} \leq \dfrac{a(x_{1}^{*} - 1)}{\gamma_{1} + x_{1}^{*} - 1}$, the energy harvesting node is allocated the whole slot and consumes the entire $E_{max}$. On the other hand, for large $E_{max} \geq \dfrac{1}{\theta_{1}}(x_{1}^{*} - 1)$, the whole slot and $E_{max}$ are assigned to the legacy node, as intuition suggests. Finally, for $\dfrac{a(x_{1}^{*} - 1)}{\gamma_{1} + x_{1}^{*} - 1} \leq E_{max} \leq \dfrac{1}{\theta_{1}}(x_{1}^{*} - 1)$, each user is assigned a slot portion for uplink data transmission that is proportional to its uplink channel power gain. In light of these two observations, the sum throughput maximization problem leads to unfair rate allocation among the different users.
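The closed form in Theorem~\ref{th:3} is easy to evaluate numerically. The sketch below (Python; the parameter values in the usage note are illustrative choices, not taken from the paper) solves $f(x) = x\ln x - x + 1$ by bisection, which is valid since $f$ is increasing for $x > 1$ with $f(1) = 0$, and then evaluates (\ref{eq16})--(\ref{eq19}) for the two-user case $M = N = 1$:

```python
import math

def f(x):
    # f(x) = x*ln(x) - x + 1, as in (20); increasing for x > 1 with f(1) = 0
    return x * math.log(x) - x + 1

def solve_f(target):
    """Solve f(x) = target for x > 1 by bisection."""
    lo, hi = 1.0, 2.0
    while f(hi) < target:          # grow the bracket until it covers the target
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

def theorem3_two_user(a, gamma1, theta1, E_max):
    """Evaluate the Theorem 3 closed form for M = N = 1, so A1 = gamma1,
    A2 = theta1, N = 1.  Returns (tau0, tau_{1,1}, tau_{2,1}, E_bar)."""
    A1, A2, N = gamma1, theta1, 1
    if A1 < a * A2 / N:                       # legacy node gets everything
        return 0.0, 0.0, theta1 / A2, E_max / N
    x1 = solve_f(A1 - a * A2 / N)             # x1* > 1
    E_lo = a * (x1 - 1) / (A1 + x1 - 1)
    E_hi = N * (x1 - 1) / A2
    if E_max <= E_lo:                         # harvesting node gets everything
        x = solve_f(A1)                       # x* > 1
        tau0 = min((x - 1) / (A1 + x - 1), E_max / a)
        tau11 = max(gamma1 / (A1 + x - 1), gamma1 / A1 * (1 - E_max / a))
        return tau0, tau11, 0.0, 0.0
    if E_max >= E_hi:                         # legacy node gets everything
        return 0.0, 0.0, theta1 / A2, E_max / N
    D = N * (x1 - 1 + A1) - a * A2            # shared-slot regime
    tau0 = (N * (x1 - 1) - E_max * A2) / D
    tau11 = gamma1 * (N * (x1 - 1) - E_max * A2) / ((x1 - 1) * D)
    tau21 = theta1 * (E_max * (x1 - 1 + A1) - a * (x1 - 1)) / ((x1 - 1) * D)
    E_bar = (E_max * (x1 - 1 + A1) - a * (x1 - 1)) / D
    return tau0, tau11, tau21, E_bar
```

For example, with $a = 0.5$, $\gamma_{1} = 10$, $\theta_{1} = 4$ and $E_{max} = 1$, the intermediate regime applies, and the returned allocations satisfy the slot and energy budgets with equality, i.e., $\tau_{0} + \tau_{1,1} + \tau_{2,1} = 1$ and $a\tau_{0} + N\bar{E} = E_{max}$.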
In addition, we note that each node is allocated an uplink transmission time that depends not only on its uplink channel power gain, as in WPCNs with energy harvesting nodes only, but also on the amount of allowable energy consumption per slot. Fig. \ref{fig:3} shows the optimal time allocation for a WPCN with two nodes, where $M=1$ and $N=1$, vs. $E_{max}$ for different values of $\dfrac{g_{1,1}}{g_{2,1}}$ ($\dfrac{g_{1,1}}{g_{2,1}} =$ 2, 2.5, 3, 3.5 and 4). We fix $a = 5$ and $\theta_{1} = 20$. It is observed that as $\dfrac{g_{1,1}}{g_{2,1}}$ increases, the range of $E_{max}$ values for which both users are allocated portions of the slot duration for data transmission, given by $\dfrac{a(x_{1}^{*} - 1)}{\gamma_{1} + x_{1}^{*} - 1} \leq E_{max} \leq \dfrac{1}{\theta_{1}}(x_{1}^{*} - 1)$, expands. It is also worth noting that $\tau_{2,1}$ monotonically increases and $\tau_{1,1}$ monotonically decreases as $E_{max}$ increases over the shown range. \begin{figure} \centering \includegraphics[width=9 cm, height= 6cm]{closedformillustration-eps-converted-to.pdf} \caption{Optimal time allocation behavior with $E_{max}$ for a two user system; one user of each type.} \label{fig:3} \end{figure} It is clear by now that the optimal resource allocation policy that maximizes the sum throughput in WPCNs with heterogeneous nodes depends on two major factors: 1) the total amount of allowable energy consumption per slot $(E_{max})$ and 2) the channel power gains of the different nodes. This, in turn, leads to unfair rate allocation among different users as shown above. In the next section, we propose to maximize the minimum throughput to tackle the fairness problem. \section{Fair rate allocation in generalized WPCNs} \label{sec:maxmin} In this section, we shift our attention to the fair rate allocation problem in generalized WPCNs. This is motivated by the fairness challenges faced by \textbf{P1} as discussed next.
In particular, we formulate a maxmin rate allocation problem. \subsection{Motivation} Given the sum throughput maximization problem in $\textbf{P1}$, the total allowable consumed energy per slot constraint in (\ref{eq6a}) allocates more energy, and hence more uplink transmission time, to nodes with better channel power gains. This leads to unfair rate allocation among different users. In Fig. \ref{fig:4}, Jain's fairness index (JFI) \cite{33} is plotted for the optimal solution of \textbf{P1} against the pathloss exponent for a WPCN with two users, $K = 2$. Generally, JFI is defined as $\dfrac{\left( \sum_{i=1}^{K}{R_{i}}\right)^{2}}{K \sum_{i=1}^{K}{R_{i}^{2}}}$, where $R_i$ is the rate allocated to $U_{i}$. The channel power gains are modeled as $h_{i} = g_{i} = 10^{-3} \rho_{i}^{2} d_{i}^{-\beta}$ for $i=1, \cdots, K$, where $d_{i}$ denotes the distance between $U_{i}$ and the BS, $\beta$ denotes the pathloss exponent and $\rho_{i}$ is the standard Rayleigh short-term fading coefficient; therefore, $\rho_{i}^{2}$ is an exponentially distributed random variable with unit mean. In addition, $P_{B} = 20$ dBm, $E^b_{1} = E^b_{2} = 10^{-7}$ joules, $\sigma^{2} = -160$ dBm/Hz, $\eta_{1} = \eta_{2} = 0.5$, $\Gamma = 9.8$ dB, $ d_{1} = \dfrac{d_{2}}{2} = 5$ meters, $E_{max} = 10^{-6} $ joules and the bandwidth is set to 1 MHz. In addition, each throughput value is obtained by averaging over 1000 randomly generated channel realizations. For a wireless network of two users, the JFI ranges from 0.5 (worst case) to 1 (best case) and it is maximum when the two users achieve the same throughput. It is observed that the fairness index monotonically decreases as the pathloss exponent increases until it nearly approaches its worst value (0.5) when $\beta = 4$. This happens since the gap between the users' channel power gains increases as the pathloss exponent increases.
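For concreteness, the JFI and the channel model above can be sketched as follows (Python; the seed and sample values in the usage note are illustrative only):

```python
import random

def jain_index(rates):
    """Jain's fairness index (sum R_i)^2 / (K * sum R_i^2); it equals 1 when
    all rates are equal and 1/K when a single user gets all the throughput."""
    s = sum(rates)
    s2 = sum(r * r for r in rates)
    return s * s / (len(rates) * s2)

def channel_gain(d, beta, rng):
    """h_i = g_i = 1e-3 * rho_i^2 * d^(-beta), where rho_i^2 ~ Exp(1)
    models standard Rayleigh short-term fading."""
    return 1e-3 * rng.expovariate(1.0) * d ** (-beta)
```

For two users, `jain_index` ranges from 0.5 (one user gets everything) to 1 (equal rates), matching the discussion above; e.g., `jain_index([1.0, 1.0])` returns 1.0 and `jain_index([1.0, 0.0])` returns 0.5.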
This, in turn, highlights one instance of the fundamental throughput-fairness trade-off, where the maximum sum throughput is achieved at the expense of degraded fairness among users. \begin{figure} \centering \includegraphics[width=9 cm, height= 5.5cm]{jainsfairnessindexp1-eps-converted-to.pdf} \caption{Jain's fairness index for the optimal solution of $\textbf{P1}$ vs. the pathloss exponent for $K=2$ users.} \label{fig:4} \end{figure} \subsection{Generalized Maxmin fairness formulation} Motivated by the fairness limitations of $\textbf{P1}$, we propose an alternative generalized optimization problem targeting fairness in the well-known maxmin sense \cite{34}, subject to the same constraints of $\textbf{P1}$, as follows. \begin{equation} \label{eq25} \begin{aligned} & \textbf{P1}^{\text{Maxmin}}: \hspace{0.5 cm} &&\underset{\mathbf{E},\pmb{\tau}}{\text{max}} \; \; \underset{i}{\text{min}} \left(R_{i} \left(E_{i},\tau_{i}\right)\right) \\ &\text{s.t.} &&\sum_{i=0}^{K}{\tau_{i}} \leq 1, \\ &&&\sum_{i=1}^{K}{E_{i}} \leq E_{max}, \\ &&& \pmb{\tau} \succeq \mathbf{0},\\ &&& 0 \leq E_{i} \leq E^b_{i} + \eta_{i} P_{B} h_{i} \tau_{0},\hspace{0.5 cm} i=1, \cdots,K. \end{aligned} \end{equation} Based on Theorem~\ref{th:1}, it follows that the objective function of problem $\textbf{P1}^{\text{Maxmin}}$, which is the minimum of a set of concave functions, i.e., $R_{i} \left(E_{i},\tau_{i}\right)$ for $i=1,\cdots,K$, is a concave function. Therefore, $\textbf{P1}^{\text{Maxmin}}$ is a convex optimization problem. Note that, for the same conditions discussed in sections~\ref{sec:gen} and~\ref{sec: WPCNs with two type of nodes}, under which \textbf{P1} reduces to the sum throughput maximization problem for extreme scenarios known in the literature, $\textbf{P1}^{\text{Maxmin}}$ also reduces to the maxmin problem in these extreme cases. An equivalent optimization problem to $\textbf{P1}^{\text{Maxmin}}$ can be cast as follows.
\begin{equation} \label{eq26} \begin{aligned} & \textbf{P1}^{-\text{Maxmin}}: & & \underset{t,\mathbf{E},\pmb{\tau}}{\text{max}} \; t \\ & \text{s.t.} & & \tau_{i} \log_{2} \left(1 + \alpha_{i} \dfrac{E_{i}}{\tau_{i}}\right) \geq t,\hspace{0.5 cm} i=1, \cdots,K, \\ &&&\sum_{i=0}^{K}{\tau_{i}} \leq 1, \\ &&&\sum_{i=1}^{K}{E_{i}} \leq E_{max}, \\ &&& \pmb{\tau} \succeq \mathbf{0}, \\ &&& 0 \leq E_{i} \leq E^b_{i} + \eta_{i} P_{B} h_{i} \tau_{0}, \hspace{0.5 cm}i=1, \cdots,K, \end{aligned} \end{equation} where $t$ is an auxiliary variable that denotes the minimum throughput achieved by each user. With the purpose of obtaining more insight into the optimal policy of $\textbf{P1}^{-\text{Maxmin}}$, we provide the following theorem, which shows that the optimal policy of $\textbf{P1}^{-\text{Maxmin}}$ must satisfy the condition that all users achieve the same throughput. \begin{theorem}\label{th:6} The optimal policy of $\textbf{P1}^{-\text{Maxmin}}$ satisfies $R_i\left(E_{i}^*,\tau_{i}^*\right) = t^*$ for $i = 1,\cdots, K$. \end{theorem} \begin{IEEEproof} The proof is by contradiction. Without loss of generality, assume that the optimal policy satisfies $R_{i}\left(E_{i}^{*},\tau_{i}^{*}\right) = t_{1}$, $i = 1,\cdots, K-1$, and $R_{K}\left(E_{K}^*,\tau_{K}^*\right) = t_{2}$. Furthermore, assume that $t_{1} < t_{2}$, and hence $t^{*} = t_{1}$. The monotonicity of each individual $R_{i}(E_{i},\tau_{i})$ in both $(E_{i},\tau_{i})$ guarantees that we can find $[\mathbf{E}^{\prime}\; \pmb{\tau}^{\prime}]$ which improves the minimum throughput achievable by all users. This can be achieved by decreasing $E_{K}^{*}$ or $\tau_{K}^{*}$ while increasing $E_{i}^{*}$ or $\tau_{i}^{*}$, $i = 1,\cdots, K-1$, until all users achieve a common throughput $t^{\prime}$ ($t_{1} < t^{\prime} < t_{2}$), beyond which no further improvement is possible.
Therefore, the throughput achievable by all users using $[\mathbf{E}^{\prime}\; \pmb{\tau}^{\prime}]$ will be $t^{\prime} > t_{1}$, which contradicts the assumption that $t_{1}$ is the maxmin throughput. This establishes the proof. \end{IEEEproof} Due to the convexity of $\textbf{P1}^{-\text{Maxmin}}$ and based on Theorem~\ref{th:6}, $\textbf{P1}^{-\text{Maxmin}}$ can be solved efficiently using standard convex optimization techniques, e.g., the subgradient approach along with the alternating optimization procedure. Details are omitted due to space limitations. A subgradient-based algorithm for solving the maxmin problem in WPCNs with RF energy harvesting nodes only is proposed in \cite{21}. In the next section, we compare the two generalized formulations with respect to the total system throughput and the individual user throughput in order to highlight the merits and limitations of both. \begin{table}[t!]\caption{Table of simulation parameters} \centering \begin{center} \resizebox{0.5\textwidth}{!}{ \begin{tabular}{ {c} | {c} } \hline\hline \textbf{Parameter} & \textbf{Value} \\ \hline $h_{i} = g_{i}$ & $10^{-3} \rho_{i}^{2} d_{i}^{-\beta}$ ($\rho_{i}^{2}$ is an exponentially distributed random variable with unit mean)\\ \hline $d_{1} = d_{1,1}$; $d_{2} = d_{2,1}$ & $10$ meters; $5$ meters\\ \hline $\sigma^2$; bandwidth & $-160$ dBm/Hz; $1$ MHz\\ \hline $\eta_{1} = \eta_{2}$; $\Gamma$ & $0.5$; $9.8$ dB\\ \hline\hline \end{tabular}} \end{center} \label{tab:TableOfsimulation} \end{table} \section{Numerical results} \label{sec:num} \subsection{System setup} We provide numerical results showing the merits of the formulated optimization problems and the associated trade-offs. Motivated by the convexity of the maxmin problem formulated in Section~\ref{sec:maxmin}, we use standard optimization solvers, e.g., CVX \cite{35}, to obtain its optimal solution. We denote the maxmin formulation of \textbf{Pi} by $\textbf{Pi}^{\text{Maxmin}}$, $i \in \{1,2,3,4\}$.
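The equal-throughput property of Theorem~\ref{th:6} also lends itself to a simple numerical sanity check. The sketch below (Python) bisects on the common throughput $t$ for a simplified time-only subproblem (fixed per-user energies, $\tau_{0} = 0$, $\sum_{i}\tau_{i} = 1$), not the full $\textbf{P1}^{-\text{Maxmin}}$; it exploits the monotonicity of $R_{i}$ in $\tau_{i}$ and confirms that the optimal rates coincide:

```python
import math

def rate(tau, alpha, E):
    # R_i = tau * log2(1 + alpha_i * E_i / tau); tends to 0 as tau -> 0
    return 0.0 if tau <= 0 else tau * math.log2(1 + alpha * E / tau)

def min_time_for_rate(t, alpha, E, hi=1.0):
    """Smallest tau with rate(tau) >= t (rate is increasing in tau)."""
    if rate(hi, alpha, E) < t:
        return float('inf')          # target rate unreachable even with tau = 1
    lo = 0.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rate(mid, alpha, E) < t else (lo, mid)
    return hi

def maxmin_time_split(alphas, energies):
    """Bisection on the common throughput t: t is feasible iff the minimal
    per-user times needed to reach t fit into the unit slot."""
    lo = 0.0
    hi = max(rate(1.0, a, E) for a, E in zip(alphas, energies))
    for _ in range(100):
        t = 0.5 * (lo + hi)
        taus = [min_time_for_rate(t, a, E) for a, E in zip(alphas, energies)]
        if sum(taus) <= 1.0:
            lo = t
        else:
            hi = t
    return lo, [min_time_for_rate(lo, a, E) for a, E in zip(alphas, energies)]
```

At the optimum the slot is fully used and every user's rate equals the common value $t$, consistent with the theorem.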
Unless otherwise stated, we consider the same parameters as in \cite{21}: $P_{B} = 30$ dBm, $\sigma^{2} = -160$ dBm/Hz, $\eta_{i}= 0.5$ for $i=1,\cdots,K$, $\Gamma = 9.8$ dB and the bandwidth is set to 1 MHz. In addition, we consider the same model for the channel power gains as in Fig. \ref{fig:4}. Moreover, each throughput curve shown later is obtained by averaging over 1000 randomly generated channel realizations. In Figs. 5, 6, 7, 8 and 9, we consider the same scenario for all studied networks. The WPCN with two types of nodes is assumed to have $N=1$, $M=1$, $d_{1,1} = 10$ meters and $d_{2,1} = 5$ meters. In addition, the other wireless networks are considered to have two users with the same $d_{1,1}$ and $d_{2,1}$ given above. The average maximum sum throughput and maxmin throughput of the generalized problem setting (\textbf{P1} and $\textbf{P1}^{\text{Maxmin}}$) and conventional TDMA-based wireless networks (\textbf{P2} and $\textbf{P2}^{\text{Maxmin}}$) are plotted for different values of $E^b_{i}$ ($E^b_{1}$ = $E^b_{2}$ = $3\times10^{-7}$, $7\times10^{-7}$ and $5\times10^{-6}$ joules). The simulation parameter values used are summarized in Table \ref{tab:TableOfsimulation}. Our objective is to fairly compare the performance of the generalized problem setting with the performance of the different wireless networks discussed in sections~\ref{sec:gen} and~\ref{sec: WPCNs with two type of nodes}, namely, conventional TDMA-based wireless networks, WPCNs with energy harvesting nodes only and WPCNs with two types of nodes, subject to the same amount of available resources. Towards this objective, for the sum throughput optimization problems, the average amount of harvested energy over the 1000 channel realizations for the WPCN with only energy harvesting nodes (\textbf{P3}) is set to $E_{max}$ in \textbf{P1}, \textbf{P2} and \textbf{P4} (i.e., per slot system energy constraint).
This, in turn, results in the same long-term average energy consumption in all systems. Similarly, for the maxmin throughput optimization problems, the average amount of harvested energy over the 1000 channel realizations for the WPCN with only energy harvesting nodes ($\textbf{P3}^{\text{Maxmin}}$) is set to $E_{max}$ in $\textbf{P1}^{\text{Maxmin}}$, $\textbf{P2}^{\text{Maxmin}}$ and $\textbf{P4}^{\text{Maxmin}}$. Finally, in Fig. 10, we show the impact of replacing a number of Type I nodes with Type II nodes on the performance of WPCNs with heterogeneous nodes (\textbf{P4}). \begin{figure}[t!] \centering \includegraphics[width=9 cm, height= 7cm]{sumthroughputvsPathloss-eps-converted-to.pdf} \caption{Average maximum sum throughput for all systems with two nodes vs. the Pathloss Exponent, $\beta$.} \label{fig:5} \end{figure} \subsection{Performance results} In Fig. \ref{fig:5}, we compare the maximum sum throughput, averaged over 1000 channel realizations, for the four studied systems vs. the pathloss exponent $(\beta)$. A number of observations are now in order. First, we note that the average maximum sum throughput of the four studied systems monotonically decreases as the pathloss exponent increases. This is due to the fact that the channel power gains become worse as $\beta$ increases. Therefore, the amount of energy harvested by each user becomes lower and, hence, the average maximum sum throughput decreases. Second, when $E_{i}^{b} = 3 \times 10^{-7} \; \text{and}\; 7 \times 10^{-7}$ joules, the average maximum sum throughput attained by \textbf{P1} is notably larger than that of \textbf{P2} for $\beta \leq 3$. This, in turn, highlights the great influence of the RF energy harvesting capability on the performance of generalized WPCNs, compared to conventional TDMA wireless networks (no RF energy harvesting).
More specifically, when $\beta \leq 3$, both users experience good channels, and thus the amount of harvested energy is so large that \textbf{P1} greatly outperforms \textbf{P2}. On the other hand, as $\beta$ increases $(\beta > 3)$, the channel power gains become worse, and, hence, the effectiveness of the RF energy harvesting capability on the network performance decreases. Therefore, the performance of \textbf{P1} approaches that of \textbf{P2}. Third, when $E^b_{i}$ is large, i.e., $E_{i}^b = 5 \times 10^{-6}$ joules, both $\textbf{P1}$ and $\textbf{P2}$ achieve the same average maximum sum throughput. This happens since, when $E_{i}^{b}$ is large, the average maximum sum throughput of \textbf{P1} is attained by allocating the entire slot duration to uplink data transmissions, i.e., $\tau_{0} = 0$. Finally, the average maximum sum throughput achieved by \textbf{P4} is higher than that achieved by \textbf{P3} due to the fact that the total allowable energy consumption per slot constraint allocates more energy to the user with higher channel power gains, that is, the legacy node in our scenario, to maximize the sum throughput. Therefore, in our scenario, the average maximum sum throughput is attained by allocating more energy to the legacy node than to the energy harvesting node and, hence, reducing $\tau_{0}$. This is in contrast to the WPCN with energy harvesting nodes only (\textbf{P3}), where the amount of energy harvested by the farther user cannot be efficiently utilized for uplink data transmissions and cannot be reduced via reducing $\tau_{0}$ as in \textbf{P4}. This is attributed to the fact that, under \textbf{P3}, the user closer to the BS is also an energy harvesting node which harvests its energy during the same $\tau_{0}$ fraction of time.
This, in turn, brings an interesting, and perhaps surprising at first glance, insight: the more realistic WPCNs with heterogeneous nodes outperform (in terms of the average maximum sum throughput) WPCNs with energy harvesting nodes only, assuming both are subject to the same overall system constraints. \begin{figure} \centering \includegraphics[width=9 cm, height= 7cm]{sumthroughputvsPA-eps-converted-to.pdf} \caption{Average maximum sum throughput for all systems with two nodes vs. the BS power, $P_{B}$.} \label{fig:6} \end{figure} In Fig. \ref{fig:6}, the average maximum sum throughput is plotted for the four systems under consideration against the BS power, $P_{B}$, considering the same scenario as in Fig. \ref{fig:5} and using $\beta = 2$. We note that the average maximum sum throughput of the four systems monotonically increases as $P_{B}$ increases. This is intuitive since the average amount of energy harvested by both users in WPCNs with only RF energy harvesting nodes (\textbf{P3}) increases with $P_{B}$. Therefore, $E_{max}$ (the average amount of harvested energy in \textbf{P3}) in \textbf{P1}, \textbf{P2} and \textbf{P4} increases with $P_{B}$. This naturally results in a higher average maximum sum throughput. It is observed that the average maximum sum throughput attained by \textbf{P1} and \textbf{P2}, for the used values of $E_{i}^b$, is the same when $P_{B} \leq 15$ dBm. This is attributed to the fact that if $P_{B} \leq 15$ dBm, the average amount of harvested energy in \textbf{P3} ($E_{max}$ in \textbf{P1}, \textbf{P2} and \textbf{P4}) is so low that \textbf{P1} achieves the average maximum sum throughput by allocating the entire slot duration to uplink data transmissions (no need for harvesting energy). As $P_{B}$ increases, i.e., $P_{B} > 15$ dBm, the average amount of harvested energy in \textbf{P3} becomes larger, and, hence, the RF energy harvesting capability has a greater impact on the performance attained by \textbf{P1}.
Therefore, we note that \textbf{P1} outperforms \textbf{P2} in terms of the achievable average maximum sum throughput when $E_{i}^{b} = 3 \times 10^{-7} \; \text{and}\; 7 \times 10^{-7}$ joules. In addition, the average maximum sum throughput of \textbf{P2} saturates. \begin{figure}[t!] \centering \includegraphics[width=9 cm, height= 7cm]{maxminvsPathloss-eps-converted-to.pdf} \caption{Average maxmin throughput for all systems with two nodes vs. the Pathloss Exponent, $\beta$.} \label{fig:7} \end{figure} Motivated by the inherent unfairness witnessed for the sum throughput maximization formulation for the four studied systems, Fig. \ref{fig:7} shows the average maxmin throughput comparison with the same set of parameters as in Fig. \ref{fig:5}. First, it is noticed that the average maxmin throughput attained by the generalized formulation ($\textbf{P1}^{\text{Maxmin}}$), for the used values of $E^b_{i}$, along with the conventional TDMA-based wireless network, for $E_{i}^{b} = 3 \times 10^{-7} \; \text{and}\; 7 \times 10^{-7}$ joules, and the WPCN with two types of nodes, all outperform the WPCN with energy harvesting nodes only. It is also observed that twice the average maxmin throughput of each system (which is the average sum throughput, given that we have only two users, based on Theorem~\ref{th:6}), at each pathloss exponent value, is less than the average maximum sum throughput of the same system (Fig. \ref{fig:5}). This, in turn, demonstrates the fundamental trade-off between achieving maximum sum throughput and achieving fair throughput allocation among different users. In Fig. \ref{fig:8}, the average maxmin throughput is plotted for the four systems against $P_{B}$, considering the same scenario as in Fig. \ref{fig:6}. It is observed that the average maxmin throughput of the four systems monotonically increases with $P_{B}$.
For small values of $P_{B}$, i.e., $P_{B} \leq 10$ dBm, $\textbf{P1}^{\text{Maxmin}}$ and $\textbf{P2}^{\text{Maxmin}}$, for the different values of $E^b_{i}$, achieve the highest average maxmin throughput. In addition, the range of $P_{B}$ values over which the performance of $\textbf{P2}^{\text{Maxmin}}$ closely follows that of $\textbf{P1}^{\text{Maxmin}}$ expands as $E^b_{i}$ increases. In Fig. \ref{fig:10}, our objective is to emphasize the impact of the users' distances, $d_{1,1}$ and $d_{2,1}$, from the BS on the network performance. Towards this objective, the average maximum sum throughput of the four systems under consideration is plotted against $d_{1,1}$. Furthermore, we fix $d_{2,1} = 5$ meters, $\beta = 2$ and $P_{B} = 20$ dBm. We note that the average maximum sum throughput of the four studied systems monotonically decreases as $d_{1,1}$ increases. This is due to the fact that as $d_{1,1}$ increases, $U_{1,1}$ experiences a worse channel in both the uplink and the downlink, and, thus, harvests less energy from the BS and requires more energy for uplink data transmissions. Furthermore, similar to Fig. \ref{fig:5} and Fig. \ref{fig:6}, we observe that the average maximum sum throughputs attained by \textbf{P2} and \textbf{P3} constitute lower bounds on the performance attained by the generalized setting in \textbf{P1}. Fig. \ref{fig:9} shows the impact of replacing a number of Type I nodes with Type II nodes on the performance of WPCNs with heterogeneous nodes (\textbf{P4}), by comparing the average maximum sum throughput of $\textbf{P4}$ for different combinations of $M$ and $N$. Towards this objective, we consider a network with six users at the same distance $d = \frac{10}{6}$ meters. Note that the insight revealed in Fig. \ref{fig:9} remains valid for all different scenarios with randomized sets of users' distances, as demonstrated in Fig. \ref{fig:5} and Fig. \ref{fig:10}.
Thus, we focus on the scenario of all users having the same distance to emphasize this effect on the network performance. In addition, we use $P_{B} = 20$ dBm. It is observed that as the number of Type II nodes ($N$) increases, the average maximum sum throughput increases, since a larger $N$ reduces the time allocated for energy harvesting $(\tau_{0})$ and assigns that reduction in $\tau_{0}$ to uplink data transmission by Type II nodes. Therefore, it is clear that the highest and lowest average maximum sum throughput are obtained by the extreme cases of $N=6$, $M=0$ and $N=0$, $M=6$ (\textbf{P3}), respectively, as shown in the figure. \begin{figure} \centering \includegraphics[width=9 cm, height= 7cm]{maxminvsPA-eps-converted-to.pdf} \caption{Average maxmin throughput for all systems with two nodes vs. the BS power, $P_{B}$.} \label{fig:8} \end{figure} \begin{figure} \centering \includegraphics[width=9 cm, height= 7cm]{distance_Effect-eps-converted-to.pdf} \caption{Average maximum sum throughput for all systems with two nodes vs. $U_{1}$'s distance, $d_{1,1}$.} \label{fig:10} \end{figure} \begin{figure} \centering \includegraphics[width=9 cm, height= 7cm]{large-eps-converted-to.pdf} \caption{ Average maximum sum throughput for $\textbf{P4}$ vs. the Pathloss Exponent, $\beta$, for different mixes of node types.} \label{fig:9} \end{figure} \section{Conclusion} This paper introduces a new, more realistic wireless network setting, coined generalized wireless powered communication networks. Under this setting, each node has two energy sources: a constant energy supply and an RF energy harvesting circuitry. We formulate two optimization problems to investigate the maximum sum throughput and the maxmin throughput.
Moreover, we show that different known wireless networks arise as special cases of the proposed system model, namely, conventional TDMA-based wireless networks, WPCNs with only RF energy harvesting nodes and WPCNs with heterogeneous nodes. Our numerical results highlight the great impact of the RF energy harvesting capability on the performance of the generalized problem, compared to conventional TDMA-based wireless networks. Furthermore, they reveal that the performance of the generalized problem approaches that of conventional TDMA-based wireless networks as the amount of allowable energy consumed from the constant supply per slot increases. They also demonstrate the fundamental trade-off between achieving maximum sum throughput and achieving fairness among different users. In addition, the results reveal the superiority of WPCNs with heterogeneous nodes compared to traditional WPCNs with RF energy harvesting nodes only. As part of future work, we would like to extend the current framework to multiple BSs. \label{sec:con} \section*{Appendix A} Since the perspective of a concave function is also concave \cite{35}, and $\tau_{i} \log_{2} \left(1 + \alpha_{i} \dfrac{E_{i}} {\tau_{i}}\right)$ is the perspective function of the concave function $\log_{2} \left(1 + \alpha_{i} E_{i}\right)$, each $R_{i}$ is concave with respect to $(E_{i},\tau_{i})$. Since the non-negative weighted sum of concave functions is also concave \cite{35}, the objective function of $\textbf{P1}$, which is the non-negative weighted summation of the concave functions $R_{i}$ for $i=1, \cdots,K$, is a concave function in $(\mathbf{E},\pmb{\tau})$. In addition, all constraints of $\textbf{P1}$ are affine in $(\mathbf{E},\pmb{\tau})$. This establishes the proof. \section*{Appendix B} For a given $\mathbf{E}$ that satisfies (\ref{eq6a}) and $0 \leq E_{i} < E^b_{i} + \eta_{i} P_{B} h_{i}, i=1, \cdots,K$, \textbf{P1} reduces as follows.
\begin{align} \nonumber & \textbf{P1}^{\prime}: \nonumber&& \underset{\pmb{\tau}}{\text{max}}\;\; \sum_{i=1}^{K}{\tau_{i} \log_{2} \left(1 + \alpha_{i} \dfrac{E_{i}}{\tau_{i}}\right)} \\ \label{eq70a}&\text{s.t.} &&\sum_{i=1}^{K}{\tau_{i}} \leq 1 - \tau_{0}, \\ \label{eq70b} &&& \pmb{\tau} \succeq \mathbf{0},\\ \label{eq70c} &&& \tau_{0} \geq \dfrac{E_{i} - E^b_{i}}{\eta_{i} P_{B} h_{i}},\hspace{0.5 cm} i=1, \cdots,K. \end{align} It can be easily shown that $R_{i} = \tau_{i} \log_{2} \left(1 + \alpha_{i} \dfrac{E_{i}}{\tau_{i}}\right)$ is a monotonically increasing function in $(E_{i}, \tau_{i})$ [25, Lemma 3.2], $i=1,\cdots,K$. Therefore, the constraint in (\ref{eq70a}) should hold with equality at the optimality (otherwise, the objective function can be further increased by increasing some $\tau_{i}$'s). Hence, from (\ref{eq70c}), the optimal harvesting time duration is given by \begin{equation}\label{eq71} \tau_{0}^{*} = \text{min} \left[ \left(\underset{i}{\text{max}}\lbrace\dfrac{E_{i} - E^b_{i}}{\eta_{i} P_{B} h_{i}}\rbrace\right)^{+},\; 1 \right]. \end{equation} Hence, $\textbf{P1}^{\prime}$ reduces to \begin{align} \nonumber & \textbf{P1}^{\prime \prime}: \hspace{0.5 cm} \nonumber&& \underset{\pmb{\tau^{\prime}}}{\text{max}}\;\; \sum_{i=1}^{K}{\tau_{i} \log_{2} \left(1 + \alpha_{i} \dfrac{E_{i}}{\tau_{i}}\right)} \\ \label{eq72a}&\text{s.t.} &&\sum_{i=1}^{K}{\tau_{i}} = 1 - \tau_{0}^{*}, \\ \label{eq72b} &&& \pmb{\tau^{\prime}} \succeq \mathbf{0}. \end{align} Recall that $\pmb{\tau^{\prime}}=[\tau_{1}, \cdots,\tau_{K}]$. 
Based on Theorem \ref{th:1}, $\textbf{P1}^{\prime \prime}$ is a convex optimization problem and its Lagrangian is given by \begin{dmath} \label{eq73} \mathcal{L}\left(\pmb{\tau^{\prime}},\mu\right) = R_{sum}\left(\pmb{\tau^{\prime}}\right) + \mu \left(\sum_{i=1}^{K}{\tau_{i}}- \left(1 - \tau_{0}^{*}\right)\right) , \end{dmath} where $R_{sum}\left(\pmb{\tau^{\prime}}\right) = \sum_{i=1}^{K}{\tau_{i} \log_{2} \left(1 + \alpha_{i} \dfrac{E_{i}}{\tau_{i}}\right)} $ and $\mu$ is the Lagrangian dual variable associated with the total slot duration constraint (\ref{eq72a}). It can be easily shown that there exists a $\pmb{\tau^{\prime}}$ that strictly satisfies all constraints of $\textbf{P1}^{\prime \prime}$. Hence, according to Slater's condition \cite{35}, strong duality holds for this problem; therefore, the KKT conditions are necessary and sufficient for the global optimality of $\textbf{P1}^{\prime \prime}$, which are given by \begin{align} \label{eq74} \dfrac{\partial}{\partial \tau_{i}^{*}} \mathcal{L}\left(\pmb{\tau^{\prime *}},\mu^{*}\right) = \log_{2} \left(1 + \alpha_{i} \dfrac{E_{i}}{\tau_{i}^{*}}\right) - \dfrac{\alpha_{i} \dfrac{E_{i}}{ \tau_{i}^{*}}}{\ln(2)\left(1 + \alpha_{i} \dfrac{E_{i}}{\tau_{i}^{*}}\right)} = - \mu^{*}, \end{align} $i=1, \cdots,K,$ \begin{equation} \label{eq75} \sum_{i=1}^{K}{\tau_{i}^{*}} = 1 - \tau_{0}^{*}, \end{equation} where $\pmb{\tau^{\prime *}}$ and $\mu^{\ast}$ denote, respectively, the optimal primal and dual solutions of $\textbf{P1}^{\prime \prime}$. Therefore, from (\ref{eq74}) and (\ref{eq75}), we have \begin{equation} \label{eq76} \alpha_{1} \dfrac{E_{1}}{\tau_{1}^{*}} = \alpha_{2} \dfrac{E_{2}}{\tau_{2}^{*}} = \cdots = \alpha_{K} \dfrac{E_{K}}{\tau_{K}^{*}} = \dfrac{\sum_{j=1}^{K}{\alpha_{j} E_{j}}}{1 - \tau_{0}^{*}}.
\end{equation} Thus, from (\ref{eq76}), the optimal time allocations are given by \begin{equation} \label{eq78} \tau_{i}^{*}=\dfrac{\alpha_{i} E_{i} \left(1 - \tau_{0}^{*}\right)}{\sum_{j=1}^{K}{\alpha_{j} E_{j}}},\; i= 1, \cdots,K. \end{equation} This establishes the proof. \section*{Appendix C} For a given $\pmb{\tau}$ that satisfies (\ref{eq6b}) - (\ref{eq6d}), \textbf{P1} reduces as follows. \begin{align} \nonumber & \textbf{P1}^{\dagger}: \hspace{0.5 cm} \nonumber&& \underset{\mathbf{E}}{\text{max}}\; \; \sum_{i=1}^{K}{\tau_{i} \log_{2} \left(1 + \alpha_{i} \dfrac{E_{i}}{\tau_{i}}\right)} \\ \label{eq79a}&\text{s.t.} &&\sum_{i=1}^{K}{E_{i}} \leq E_{max}, \\ \label{eq79b}&&& 0 \leq E_{i} \leq E^b_{i} + \eta_{i} P_{B} h_{i} \tau_{0},\hspace{0.5 cm} i=1, \cdots,K. \end{align} Recall that $R_{i} = \tau_{i} \log_{2} \left(1 + \alpha_{i} \dfrac{E_{i}}{\tau_{i}}\right)$ is a monotonically increasing function in $(E_{i}, \tau_{i})$, $i=1,\cdots,K$. Therefore, when $E_{max} \geq \sum_{j = 1}^{K}{\left(E^b_{j} + \eta_{j} P_{B} h_{j} \tau_{0}\right)}$, $\textbf{P1}^{\dagger}$ admits the trivial solution $E_{i}^{*} = E^b_{i} + \eta_{i} P_{B} h_{i} \tau_{0}$, $i=1, \cdots,K$. On the other hand, when $E_{max} < \sum_{j = 1}^{K}{\left(E^b_{j} + \eta_{j} P_{B} h_{j} \tau_{0}\right)}$, the optimal solution of $\textbf{P1}^{\dagger}$ can be characterized as follows. First, the constraint in (\ref{eq79a}) should hold with equality at optimality (otherwise, the objective function can be further increased by increasing some $E_{i}$'s). Based on Theorem \ref{th:1}, $\textbf{P1}^{\dagger}$ is a convex optimization problem and its Lagrangian is given by \begin{equation} \label{eq80} \mathcal{L}\left(\mathbf{E},\lambda\right) = R_{sum}\left(\mathbf{E}\right) + \lambda \left(\sum_{i=1}^{K}{E_{i}} - E_{max}\right) , \end{equation} where $\lambda$ is the Lagrangian dual variable associated with the total allowable consumed energy per slot constraint (\ref{eq79a}).
Strong duality holds for $\textbf{P1}^{\dagger}$; therefore, the KKT conditions are necessary and sufficient for the global optimality of $\textbf{P1}^{\dagger}$, which are given by \begin{equation} \label{eq81} \dfrac{\partial}{\partial E_{i}^{*}} \mathcal{L}\left(\mathbf{E}^{*},\lambda^{\ast}\right) = \dfrac{\alpha_{i}}{\ln(2)\left(1 + \dfrac{\alpha_{i} E_{i}^{*}}{\tau_{i}}\right)} + \lambda^{*}= 0,\; i=1, \cdots,K, \end{equation} \begin{equation} \label{eq82} \sum_{i=1}^{K}{E_{i}^{*}} = E_{max}, \end{equation} where $\mathbf{E}^{*}$ and $\lambda^{\ast}$ denote, respectively, the optimal primal and dual solutions of $\textbf{P1}^{\dagger}$. Therefore, from (\ref{eq81}), we have \begin{equation} \label{eq83} E_{i}^{*} = - \dfrac{\tau_{i}}{\alpha_{i}}\left(\dfrac{\alpha_{i}}{\lambda^{*} \ln(2)} + 1\right),\; i=1, \cdots,K. \end{equation} Taking into account the constraints in (\ref{eq79b}), the optimal energy allocations are given by \begin{equation}\label{eq85} E_{i}^{*} = \text{min}\left[\left(- \dfrac{\tau_{i}}{\alpha_{i}}\left(\dfrac{\alpha_{i}}{\lambda^{*} \ln(2)} + 1\right)\right)^{+},\;E^{b}_{i} + \eta_{i} P_{B} h_{i} \tau_{0} \right], \end{equation} where $i=1,\cdots,K$ and $\lambda^{*}$ satisfies the equality constraint $\sum_{i=1}^{K}{E_{i}^{*}} = E_{max}$. This establishes the proof.
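The capped allocation in (\ref{eq85}) has a water-filling structure, and $\lambda^{*}$ can be found by one-dimensional bisection, since each clipped $E_{i}(\lambda)$ is nondecreasing in $\lambda$ on $(-\infty,0)$. A minimal numerical sketch (variable and function names are illustrative, not from the paper):

```python
import math

def alloc(lam, tau, alpha, cap):
    """Candidate allocation E_i(lambda) from (eq. 85), clipped to [0, cap_i]."""
    E = []
    for t, a, c in zip(tau, alpha, cap):
        e = -(t / a) * (a / (lam * math.log(2)) + 1.0)  # unclipped stationary point
        E.append(min(max(e, 0.0), c))
    return E

def optimal_energy(tau, alpha, cap, E_max, iters=200):
    """Bisect on the dual variable lambda* < 0 until sum_i E_i = E_max."""
    if sum(cap) <= E_max:            # trivial case: all energy caps are affordable
        return list(cap)
    lo, hi = -1e12, -1e-12           # sum(alloc(lam)) is nondecreasing in lam on (-inf, 0)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if sum(alloc(mid, tau, alpha, cap)) > E_max:
            hi = mid                 # too much energy allocated: decrease lambda
        else:
            lo = mid
    return alloc(0.5 * (lo + hi), tau, alpha, cap)
```

For unclipped coordinates this reproduces the equal-marginal-rate condition behind (\ref{eq83}); clipped coordinates sit at $0$ or at their harvested-energy cap.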
\section*{Appendix D} \textbf{P4} is a convex optimization problem and its Lagrangian is given by \begin{dmath} \label{eq34} \mathcal{L}\left(\pmb{\tau^{\prime \prime}},\bar{E},\lambda,\mu\right) = R_{sum}\left(\pmb{\tau^{\prime \prime}},\bar{E}\right) - \mu \left(\tau_{0}+\sum_{i=1}^{M}{\tau_{1,i}}+\sum_{j=1}^{N}{\tau_{2,j}} - 1\right) - \lambda \left(a\tau_{0} + N \bar{E} - E_{max}\right), \end{dmath} where $\mu$ and $\lambda$ are the Lagrangian dual variables associated with the slot duration and the total allowable consumed energy per slot constraints, respectively, and $R_{sum}\left(\pmb{\tau^{\prime \prime}},\bar{E}\right) = \sum_{i=1}^{M}{R_{1,i}\left(\tau_{0},\tau_{1,i}\right)} + \sum_{j=1}^{N}{R_{2,j}\left(\bar{E},\tau_{2,j}\right)}$. Hence, the dual function can be expressed as \begin{equation} \label{eq35} G\left(\lambda,\mu\right) = \underset{\pmb{\tau^{\prime \prime}},\bar{E}\in \mathcal{S}}{\max } \; \mathcal{L}\left(\pmb{\tau^{\prime \prime}},\bar{E},\lambda,\mu\right), \end{equation} where $\mathcal{S}$ is the feasible set specified by $\pmb{\tau^{\prime \prime}} \succeq \mathbf{0}$ and $\bar{E} \geq 0$. It can be easily shown that there exists a $(\pmb{\tau^{\prime \prime}},\bar{E})$ that strictly satisfies all constraints of \textbf{P4}. 
Hence, according to Slater's condition \cite{35}, strong duality holds for this problem; therefore, the KKT conditions are necessary and sufficient for the global optimality of \textbf{P4}, which are given by \begin{dmath} \label{eq36} \tau_{0}^{\ast}+\sum_{i=1}^{M}{\tau_{1,i}^{\ast}}+\sum_{j=1}^{N}{\tau_{2,j}^{\ast}} \leq 1, \end{dmath} \begin{equation} \label{eq37} a\tau_{0}^{\ast} + N \bar{E}^{\ast} \leq E_{max}, \end{equation} \begin{equation} \label{eq38} \mu^{\ast} \left(\tau_{0}^{\ast}+\sum_{i=1}^{M}{\tau_{1,i}^{\ast}}+\sum_{j=1}^{N}{\tau_{2,j}^{\ast}} - 1\right) = 0, \end{equation} \begin{equation} \label{eq39} \lambda^{\ast} \left(a\tau_{0}^{\ast} + N \bar{E}^{\ast} - E_{max}\right) = 0, \end{equation} \begin{equation} \label{eq40} \dfrac{\partial}{\partial \tau_{0}}R_{sum}\left(\pmb{\tau^{\prime \prime \ast}},\bar{E}^{\ast}\right) - \left(a \lambda^{\ast} + \mu^{\ast}\right) = 0, \end{equation} \begin{equation} \label{eq41} \dfrac{\partial}{\partial \tau_{1,i}}R_{sum}\left(\pmb{\tau^{\prime \prime \ast}},\bar{E}^{\ast}\right) - \mu^{\ast} = 0, \; i=1, \cdots,M, \end{equation} \begin{equation} \label{eq42} \dfrac{\partial}{\partial \tau_{2,j}}R_{sum}\left(\pmb{\tau^{\prime \prime \ast}},\bar{E}^{\ast}\right) - \mu^{\ast} = 0,\; j=1, \cdots,N, \end{equation} \begin{equation} \label{eq43} \dfrac{\partial}{\partial \bar{E}^{\ast}}R_{sum}\left(\pmb{\tau^{\prime \prime \ast}},\bar{E}^{\ast}\right) - N \lambda^{\ast} = 0, \end{equation} where $\left(\pmb{\tau^{\prime \prime \ast}},\bar{E}^{\ast}\right)$ and $\left(\lambda^{\ast},\mu^{\ast}\right)$ denote, respectively, the optimal primal and dual solutions of \textbf{P4}. Since $R_{sum}\left(\pmb{\tau^{\prime \prime}},\bar{E}\right)$ is monotonically increasing in $\left(\pmb{\tau^{\prime \prime}},\bar{E}\right)$, both $\tau_{0}^{\ast}+\sum_{i=1}^{M}{\tau_{1,i}^{\ast}}+\sum_{j=1}^{N}{\tau_{2,j}^{\ast}} = 1$ and $a\tau_{0}^{\ast} + N \bar{E}^{\ast} = E_{max}$ must hold.
From (\ref{eq40}) - (\ref{eq43}), we have \begin{equation} \label{eq44} \sum_{i=1}^{M}{\dfrac{\gamma_{i}}{1 + \gamma_{i} \dfrac{\tau_{0}^{\ast}}{\tau_{1,i}^{\ast}}}} = \left(a \lambda^{\ast} + \mu^{\ast}\right) \ln(2), \end{equation} \begin{equation} \label{eq45} \ln\left(1 + \gamma_{i} \dfrac{\tau_{0}^{\ast}}{\tau_{1,i}^{\ast}}\right) - \dfrac{\gamma_{i}\dfrac{\tau_{0}^{\ast}}{\tau_{1,i}^{\ast}}}{1+\gamma_{i}\dfrac{\tau_{0}^{\ast}}{\tau_{1,i}^{\ast}}} = \mu^{\ast} \ln(2),\; i=1, \cdots,M, \end{equation} \begin{equation} \label{eq46} \ln\left(1+\dfrac{\bar{E}^{\ast} \theta_{j}}{\tau_{2,j}^{\ast}}\right) - \dfrac{\dfrac{\bar{E}^{\ast} \theta_{j}}{\tau_{2,j}^{\ast}}}{1 + \dfrac{\bar{E}^{\ast} \theta_{j}}{\tau_{2,j}^{\ast}}} = \mu^{\ast} \ln(2), \; j=1, \cdots, N, \end{equation} \begin{equation} \label{eq47} \sum_{j=1}^{N}{\dfrac{\theta_{j}}{1 + \theta_{j} \dfrac{\bar{E}^{\ast}}{\tau_{2,j}^{\ast}}}} = N\lambda^{\ast}\ln(2). \end{equation} Therefore, from (\ref{eq45}) and (\ref{eq46}), we have \begin{equation} \label{eq48} \dfrac{\gamma_{1} \tau_{0}^{\ast}}{\tau_{1,1}^{\ast}} = \dfrac{\gamma_{2} \tau_{0}^{\ast}}{\tau_{1,2}^{\ast}} = \cdots = \dfrac{\gamma_{M} \tau_{0}^{\ast}}{\tau_{1,M}^{\ast}} = \dfrac{\bar{E}^{\ast}\theta_{1}}{\tau_{2,1}^{\ast}} = \dfrac{\bar{E}^{\ast}\theta_{2}}{\tau_{2,2}^{\ast}} = \cdots = \dfrac{\bar{E}^{\ast}\theta_{N}}{\tau_{2,N}^{\ast}} = x_{1}^{\ast} - 1.
\end{equation} From $\tau_{0}^{\ast}+\sum_{i=1}^{M}{\tau_{1,i}^{\ast}}+\sum_{j=1}^{N}{\tau_{2,j}^{\ast}} = 1$ and (\ref{eq48}), $\tau_{1,i}^{\ast}$ and $\tau_{2,j}^{\ast}$ can be expressed, respectively, by \begin{equation} \label{eq49} \tau_{1,i}^{\ast} = \dfrac{\gamma_{i} \left(N\left(x_{1}^{\ast}- 1\right) - E_{max} A_{2} \right) }{\left(x_{1}^{\ast} - 1\right) \left(N\left(x_{1}^{\ast} - 1 + A_{1}\right) - a A_{2} \right)}, \; i=1, \cdots, M, \end{equation} \begin{equation} \label{eq50} \tau_{2,j}^{\ast} = \dfrac{\theta_{j} \left(E_{max}\left(x_{1}^{\ast} - 1 + A_{1} \right) - a \left(x_{1}^{\ast} - 1\right)\right) }{\left(x_{1}^{\ast} - 1\right)\left(N\left(x_{1}^{\ast} - 1 + A_{1}\right) - a A_{2} \right)}, \; j=1, \cdots, N, \end{equation} where $A_{1} = \sum_{i=1}^{M}{\gamma_{i}}$ and $A_{2} = \sum_{j=1}^{N}{\theta_{j}}$. From (\ref{eq44}) and (\ref{eq47}), it follows that \begin{equation}\label{eq51} \lambda^{\ast} = \dfrac{A_{2}}{ N x_{1}^{\ast} \ln(2)}, \end{equation} \begin{equation}\label{eq52} \mu^{\ast} = \dfrac{A_{1} - \dfrac{a}{N}A_{2}}{x_{1}^{\ast} \ln(2)}. \end{equation} By substituting $\mu^{\ast}$ into (\ref{eq45}), we have \begin{equation} \label{eq53} x_{1}\ln(x_{1}) - x_{1} + 1 = A_{1} - \dfrac{a}{N} A_{2}. \end{equation} From (\ref{eq49}) and (\ref{eq50}), it is clear that $x_{1}^{\ast} > 1$ if $A_{1} > 0$, $A_{2} > 0$ and $0< \tau_{0}^{\ast} < 1$. According to \cite[Lemma 3.2]{21}, there exists a unique solution $x_{1}^{\ast} > 1$ for (\ref{eq53}) if $A_{1} \geq \dfrac{a}{N} A_{2}$; otherwise, the total slot time and the total allowable consumed energy per slot will be assigned to the Type II nodes for uplink information transmissions.
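The scalar equation (\ref{eq53}) has the form $x\ln(x) - x + 1 = A$ with $A = A_{1} - \frac{a}{N}A_{2}$; the left-hand side vanishes at $x = 1$ and is strictly increasing for $x > 1$ (its derivative is $\ln x$), so for $A > 0$ the root $x_{1}^{\ast} > 1$ can be computed by bisection. A minimal sketch (function name illustrative, not from the paper):

```python
import math

def solve_x1(rhs, tol=1e-12):
    """Unique root x > 1 of x*ln(x) - x + 1 = rhs, assuming rhs > 0.
    The left-hand side is 0 at x = 1 and strictly increasing for x > 1."""
    if rhs <= 0:
        raise ValueError("requires A1 - (a/N)*A2 > 0")
    f = lambda x: x * math.log(x) - x + 1.0
    hi = 2.0
    while f(hi) < rhs:       # grow the bracket until f(hi) >= rhs
        hi *= 2.0
    lo = 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With $x_{1}^{\ast}$ in hand, the remaining allocations follow in closed form from the expressions above.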
Thus from (\ref{eq48})-(\ref{eq53}), the optimal time and energy allocations are given by \begin{equation} \label{eq54} \tau_{0}^{\ast} = \dfrac{N\left(x_{1}^{\ast} - 1 \right)- E_{max} A_{2} }{N\left(x_{1}^{\ast} - 1 + A_{1}\right) - a A_{2}}, \end{equation} \begin{equation} \label{eq55} \tau_{1,i}^{\ast} = \dfrac{\gamma_{i} \left(x_{1}^{\ast} - E_{max} A_{2} - 1\right) }{\left(x_{1}^{\ast} - 1\right) \left(x_{1}^{\ast} + A_{1} - a A_{2} -1\right)}, \; i=1, \cdots,M, \end{equation} \begin{equation} \label{eq56} \tau_{2,j}^{\ast} = \dfrac{\theta_{j} \left(E_{max}\left(x_{1}^{\ast} + A_{1} - 1\right) - a \left(x_{1}^{\ast} - 1\right)\right) }{K\left(x_{1}^{\ast} - 1\right)\left(x_{1}^{\ast} + A_{1} - a A_{2} -1\right)}, \; j=1, \cdots,N. \end{equation} \begin{equation} \label{eq57} \bar{E}^{\ast} = \dfrac{E_{max}\left(x_{1}^{\ast} - 1 + A_{1} \right) - a \left(x_{1}^{\ast} - 1\right) }{N\left(x_{1}^{\ast} - 1 + A_{1}\right) - a A_{2}}. \end{equation} From (\ref{eq54}) - (\ref{eq57}) and taking into account that $[\tau_{0}^{*}, \tau_{1,1}^{*}, \cdots,\tau_{1,M}^{*}, \tau_{2,1}^{*}, \cdots,\tau_{2,N}^{*}, \bar{E}^{*}] \succeq \mathbf{0}$ , we must have $\dfrac{a(x_{1}^{*} - 1)}{A_{1} + x_{1}^{*} - 1} \leq E_{max} \leq \dfrac{N}{A_{2}}(x_{1}^{*} - 1)$. If $E_{max} > \dfrac{N}{A_{2}}(x_{1}^{*} - 1)$, then we have $[\tau_{0}^{*}, \tau_{1,1}^{*}, \cdots,\tau_{1,M}^{*}] \prec \mathbf{0}$. Hence, the total slot time and the total allowable consumed energy per slot will be assigned to the Type II nodes for uplink information transmissions. Therefore, from (\ref{eq38}) and (\ref{eq46}), the optimal time and energy allocations are given by (\ref{eq16})-(\ref{eq19}). On the other hand, if $E_{max} < \dfrac{a(x_{1}^{*} - 1)}{A_{1} + x_{1}^{*} - 1}$, then we have $[\tau_{2,1}^{*}, \cdots,\tau_{2,N}^{*}, \bar{E}^{*}] \prec \mathbf{0}$. 
Hence, the total slot time and the total allowable consumed energy per slot will be assigned to the Type I nodes for uplink information transmissions. Therefore, from (\ref{eq38}), (\ref{eq44}) and (\ref{eq45}), the optimal time and energy allocations are given by (\ref{eq16})-(\ref{eq19}). This establishes the proof.
\section*{Introduction and background} Gromov-Witten theory has provided important tools to study interesting loci in the moduli space of curves since the early 1990s. Here we use the Gromov-Witten theory of smooth projective homogeneous spaces $X$ to produce a seemingly rich source of basepoint free classes of arbitrary codimension on the moduli space of stable $n$-pointed rational curves. Basepoint free divisors on a projective variety like $\ovop{M}_{0,n}$ are important as they give rise to morphisms to other projective varieties; basepoint free cycles of higher codimension reflect other aspects of the geometry of the space. The basic construction of Gromov-Witten classes is the following: Consider a locus $L$ of points $(C,\vec{p})\in \ovop{M}_{0,n}$, so that there is a stable map $f$ of some particular degree $\beta$, from a pre-stable curve $(\widetilde{C},\vec{p})$ to a variety $X$, so that $\widetilde{C}$ maps to $C$ and the images of the marked points $p_i$ lie on some fixed Schubert subvarieties $W_i\subset X$, in general position \cite{KM,FP}\footnote{All definitions and requirements are explained in Section \ref{oneone}.}. If $X$ is a homogeneous variety on which a group $G$ acts transitively, and the expected dimension of such maps is $-c$, we will find an effective cycle of codimension $c$ on $\ovop{M}_{0,n}$. Moving the $W_i$ by the group $G$, using Kleiman's transversality theorem, one can show that the associated linear system does not have a base locus. We work with cycles up to rational equivalence, and as we show in Proposition \ref{GWStrong}, the Gromov-Witten loci we consider satisfy a more robust and functorial basepoint free condition, closed under intersection products, which we call {\em{rationally strongly basepoint free}}, after Fulger and Lehmann \cite{FL} (see Definition \ref{SBPFDef}). We often call these strongly base point free.
In Lemma \ref{SBPFprops}, we verify (as with numerical equivalence on smooth varieties as in \cite{FL}) that in this context, pushforwards of strongly base point free classes along flat maps are strongly basepoint free. Since forgetful maps $\ovop{M}_{0,n}\to \ovop{M}_{0,m}$ with $m<n$ are flat, one obtains base point free divisor classes on $\ovop{M}_{0,n}$ by pushing forward strongly base point free classes of higher codimension (like the GW classes). Said otherwise, higher codimension classes are useful even if one is only interested in divisors. To identify cycle classes, one may intersect with a dual basis. For instance, a divisor class on $\ovop{M}_{0,n}$ is computed by intersecting with boundary curves. Explicit expressions for such an intersection are given in Propositions \ref{DivisorIntersection} and \ref{HigherCodimensionIntersection}. Proposition \ref{Recon} simplifies the formula in Proposition \ref{DivisorIntersection} considerably in case the rational cohomology of $X$ is generated by divisors, as for $X=G/B$ with $B$ a Borel subgroup. In practice, to compute the divisor classes, one needs (1) the (small) quantum cohomology rings of the homogeneous spaces $X$, and (2) four point (big) quantum cohomology numbers (where the underlying pointed curve is not held fixed in the enumerative problem). As explained in Proposition \ref{Recon}, it follows from \cite{KM} that the second condition can be reduced recursively to the first, if the rational cohomology of $X$ is generated by divisors, as for $X=G/B$ with $B$ a Borel subgroup. In Proposition \ref{complet}, we show that when $X=\Bbb{P}^r$, Gromov-Witten divisors are numerically equivalent to so-called conformal blocks divisors for type A at level one \cite{Fakh} (described in Section \ref{CBDivisors}). In this case, classes are indexed by parameters $a_1,\dots,a_n \in \{0,\dots,r\}$, such that $\sum_i a_i\equiv 0 \pmod{r+1}$.
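For orientation, the congruence $\sum_i a_i\equiv 0 \pmod{r+1}$ can be recovered from the dimension count recalled in Section \ref{oneone}, together with the standard fact $c_1(T_{\Bbb{P}^r})=(r+1)H$ (a sanity check, not a new claim):

```latex
% For X = \Bbb{P}^r, g = 0, and a degree-d curve class \beta, so that
% c_1(T_X)\cdot\beta = d(r+1), the divisor (c = 1) case of the cycle condition reads
\sum_{i=1}^{n} a_i \;=\; c + c_1(T_X)\cdot\beta + \dim X \;=\; 1 + d(r+1) + r \;=\; (d+1)(r+1),
% which is exactly the constraint \sum_i a_i \equiv 0 \pmod{r+1} above.
```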
Higher codimension GW cycles for $X=\Bbb{P}^r$, especially the basepoint free divisor classes they push forward to under forgetful maps, seem to be new (see Remark \ref{Interesting}). To illustrate, in Section \ref{PushforwardExamples} we give two examples of higher codimension cycles that push forward to divisors. We also consider examples given by smooth projective quadric hypersurfaces $X=Q_r$ of both even and odd dimension $r$. In this case, one obtains a divisor if $\sum_{i}a_i\equiv 1\pmod{r}$. When $r$ is odd, the $a_i \in\{0,1,\dots,r\}$ index Schubert classes $H_i$ of codimension $i$ generated by the hyperplane $H_1$. When $r$ is even, there are two Schubert classes in the middle dimension. The cycle classes lie in cones generated by nef cycles (those classes that nonnegatively intersect ones of complementary codimension). We see in fact that many divisors formed by quadrics are on extremal faces of the nef cone, as they contract boundary curves depending on the Schubert cycles chosen (see Propositions \ref{Contract1} and \ref{Contract2}). We give two examples of extremal rays of the nef cone generated by classes from odd quadrics, and using even quadrics, we give two examples of divisors that lie on extremal faces. One of the even quadric examples lies on a two-dimensional extremal face of the nef cone not known to be spanned by conformal blocks divisors. Projective space and quadrics are just the beginning: In principle, any loci can be studied with available methods. In Section \ref{Questions} we list two questions about these and related classes. \begin{remark} Loci of enumerative significance inside $(G/B)^n$ were used in recent work of the first author and J. Kiers \cites{B,BK} to determine the extremal rays of the $\Bbb{Q}$-cone of $G$-invariant effective divisors on $(G/B)^n$, see \cite[Theorem 1.6]{BK}.
These loci bear a resemblance to the Gromov-Witten loci considered in this paper, except that here we vary the marked curve and keep the point in $(G/B)^n$ fixed: In \cite{BK} one considers loci of points $\vec{{g}}\in(G/B)^n$ such that there exist points of $G/P$ which satisfy enumerative constraints given by $\vec{{g}}$. A point in $G/P$ can be viewed as a degree zero map from a fixed $n$-marked genus zero curve to $G/P$. Maps of non-zero degrees are considered in the multiplicative/quantum generalizations of this problem. The Gromov-Witten loci are basepoint free, whereas in \cite{B,BK}, the loci obtained (under some enumerative assumptions) are strongly rigid \cite[Theorem 1.6, (b)]{BK}. It is perhaps fruitful to look at ``universal" GW enumerative loci in $\ovop{M}_{0,n}\times (G/B)^n$, but we have not pursued this here. \end{remark} \subsection{Methods for obtaining basepoint free cycles on $\ovop{M}_{0,n}$} An effective cycle $\alpha$ of codimension $k$ is basepoint free if the base locus of $\alpha$ is empty: \begin{definition}\label{bpfDef}A cycle $\alpha\in A^k(Z)$ is basepoint free if for any point $z\in Z$, there is an effective cycle $\beta$ of codimension $k$ on $Z$, linearly equivalent to $\alpha$, such that $z$ is not in the support of $\beta$. \end{definition} A divisor $D$ is basepoint free if and only if the rational map $\phi_D$ defined by $D$ is a morphism. Naturally, as basepoint free divisors on $\ovop{M}_{0,n}$ correspond to morphisms from $\ovop{M}_{0,n}$ to projective varieties, there has been interest in determining such divisors. A number of morphisms from the moduli space of curves to other projective varieties have been found and studied in the literature, many giving alternative compactifications. These new moduli spaces include, for example, cyclic covers \cite{FedCyclic}, and GIT quotients \cites{GiansiracusaSimpson, AS, G, GG, GJM, gjms} generalizing Kapranov's compactifications of $\op{M}_{0,n}$ \cites{KapVer,KapChow}.
The latter give a different (but overlapping) set of modular interpretations from those described in the work of Smyth \cite{Smyth}, in that they parametrize embedded curves, as opposed to abstract ones (cf. \cite[p.245]{GJM}). Other morphisms have been found as well. The following other methods are known for obtaining basepoint free cycle classes on $\ovop{M}_{0,n}$: \begin{enumerate} \item[(1)] First Chern classes of conformal block bundles, computed in genus zero in \cite{Fakh}; these are associated to irreducible representations $\lambda_1,\dots,\lambda_n$ at a level $\ell$ of a simple Lie group $G$. Also Schur polynomials in the Chern classes of conformal block bundles (see Remark \ref{CBPush} and Lemma \ref{SBPFprops}; also see \cite[Example 12.1.17]{Fulton}). \item[(2)] Gromov-Witten classes $I^{c,X}_{\beta,\vec{\alpha}}$ with $X$ homogeneous (see Prop \ref{GWStrong}). \item[(3)] Algebraic operations in (1) and (2): intersection products, pushforwards under point-dropping maps $\ovop{M}_{0,n}\to \ovop{M}_{0,m}$, with $m<n$, and iterations of these (see Prop \ref{GWStrong} and Lemma \ref{SBPFprops}). \end{enumerate} The only verified identity above is between the GW divisors with $c=1$ for $X=\Bbb{P}^r$ and the level one conformal block divisors for $A_r$ (Proposition \ref{complet}). We have seen possible connections, at least on the level of parameters, between GW divisors for Grassmannian varieties and for higher level type A conformal blocks. \begin{remark} GW divisor classes with $c=1$ for $\mathbb{P}^{r}=Gr(1,r+1)$, a homogeneous space for $\mathfrak{sl}_{r+1}$, coincide with conformal blocks divisors for $\mathfrak{sl}_{r+1}$ at level one, and there may be a more general connection between GW divisors for Grassmannians $Gr(\ell,r+1)$ and conformal blocks divisors for $\mathfrak{sl}_{r+1}$ at higher levels $\ell$.
The known level one identity is different from the type of pairing in Witten's theorem, where small quantum numbers for $\mathbb{P}^{r}=\op{Gr}(1,r+1) =\op{Gr}(r,r+1)$ are paired with ranks of CB divisors for $\mathfrak{sl}_{r}$ (and not $\mathfrak{sl}_{r+1}$). On the other hand, basepoint free divisors produced by conformal blocks, and by GW theory, are, generally speaking, parameterized by different data: Conformal blocks by representations of a semisimple group at a level, and GW cycles by Weyl group data for a homogeneous space. We would be surprised if the two were always equal. \end{remark} \subsection{Gromov-Witten theory preliminaries}\label{oneone} For nonnegative integers $(g,n)$ such that $3g-3+n\ge 0$, there is an irreducible, projective variety $\ovop{M}_{g,n}$ whose points are in one-to-one correspondence with isomorphism classes of curves with at worst nodal singularities and only finitely many automorphisms. The stack $\overline{\mathcal{M}}_{g,n}$ reflects the geometry of $\overline{\operatorname{M}}_{g,n}$, while being in certain ways easier to study. Since $\ovop{M}_{0,n}$ is a fine moduli space, these two points of view are equivalent in this case. For most of the paper, we consider the case when $g=0$, where the stack is represented by the smooth projective variety $\overline{\op{M}}_{0,n}$. The set of all stable maps of genus $g$ and degree $\beta \in H_2(X)$ with $n$ marked points to a normal variety $X$ forms a (Deligne-Mumford) moduli stack $\ovmc{M}_{g,n}(X,\beta)$. Stable maps are tuples $((C,\vec{p}), f)$, where $(C,\vec{p})$ is a pre-stable curve and $f$ is a stable map from $(C,\vec{p})$ to $X$. A pre-stable curve $(C,\vec{p})$ is a connected and reduced curve $C$ of genus $g$, with at worst nodal singularities, and $\vec{p}$ are smooth points on $C$.
A stable map is any morphism from a pre-stable curve to $X$ such that there are only finitely many automorphisms of the map. To construct Gromov-Witten invariants, one uses the $n$ evaluation maps $\op{ev}_i: \ovmc{M}_{g,n}(X,\beta)\to X$ and the contraction map $\eta:\ovmc{M}_{g,n}(X,\beta)\to \ovmc{M}_{g,n}$. The virtual fundamental class $[\ovmc{M}_{g,n}(X,\beta)]\in A_{\nu,\Bbb{Q}}(\ovmc{M}_{g,n}(X,\beta))$ is constructed in \cite{BF}. The virtual dimension is \begin{equation}\label{expected} \nu=(3g-3+n) + c_1(T_X)\cdot\beta + (1-g)\dim X. \end{equation} Given $\alpha_1,\dots,\alpha_n\in A_{\Bbb{Q}}^*(X)$, with $\alpha_i\in A^{|\alpha_i|}(X)$ the class of $W_i$, the Gromov-Witten class $$I^X_{g,n,\beta}(\alpha_1\tensor\alpha_2\tensor\dots\tensor\alpha_n)\in A_{e,\Bbb{Q}}( \ovmc{M}_{g,n})$$ is obtained as the push forward, by the contraction $\eta:\ovmc{M}_{g,n}(X,\beta)\to \ovmc{M}_{g,n}$, of the cap product \begin{equation}\label{class} \prod_{i=1}^n\op{ev}_i^*(\alpha_i)\cap [\ovmc{M}_{g,n}(X,\beta)]\in A_{e,\Bbb{Q}}(\ovmc{M}_{g,n}(X,\beta))=A^c_{\Bbb{Q}}(\ovmc{M}_{g,n}(X,\beta)) \end{equation} where $$e=\nu-\sum|\alpha_i|=(3g-3+n) + c_1(T_X)\cdot\beta + (1-g)\dim X -\sum|\alpha_i|.$$ It is a cycle of codimension \begin{equation}\label{codim} c=\dim \ovmc{M}_{g,n}-e=\sum|\alpha_i|-c_1(T_X) \cdot \beta- (1-g)\dim X. \end{equation} \begin{definition}\label{DivisorCondition} We say that a triple $(X,\beta, \vec{\alpha})$ satisfies the {\bf{codimension $c$ cycle condition}} if \begin{equation}\label{BPFClass} \sum_i |\alpha_i|=c+c_1(T_X) \cdot \beta+ (1-g)\dim X.
\end{equation} \end{definition} \begin{defi} For arbitrary homogeneous $\alpha_1,\dots,\alpha_n$, define the GW-cycles $I^{c,X}_{\beta,\vec{\alpha}}\in A^c_{\Bbb{Q}}(\ovmc{M}_{g,n})$, on $\ovmc{M}_{g,n}$ as follows: $$I^{0,X}_{\beta,\vec{\alpha}}= \left\{ \begin{matrix} d & \text{if} \ (X,\beta,\vec{\alpha}) \ \text{satisfies the codimension $c=0$ cycle condition, where} \\ & I^X_{0,n,\beta}(\alpha_1\tensor\alpha_2\tensor\dots\tensor\alpha_n)=d [\ovmc{M}_{g,n}]\in A^0_{\Bbb{Q}}(\ovmc{M}_{g,n});\\ 0 & \text{otherwise}. \end{matrix} \right.$$ $$I^{c,X}_{\beta,\vec{\alpha}}= \left\{ \begin{matrix} I^X_{0,n,\beta}(\alpha_1\tensor\alpha_2\tensor \dots\tensor\alpha_n) & \text{if} \ (X,\beta,\vec{\alpha}) \ \text{satisfies the codimension $c>0$ cycle condition}; \\ 0 & \text{otherwise}. \end{matrix} \right.$$ \end{defi} \begin{remark} Localization techniques are used to compute these invariants in many cases, especially for homogeneous $X$ \cite{GP}. \end{remark} \section{Rational strongly base point freeness and GW-cycles}\label{FuLe} Here we define the notion of rationally strongly basepoint free cycles, which is inspired by the one given in \cite{FL} for strongly basepoint free cycles. In Lemma \ref{SBPFprops} we list a number of properties satisfied by such strongly basepoint free cycles. In Proposition \ref{GWStrong}, we show that Gromov-Witten cycles $I^{c,X}_{\beta,\vec{\alpha}}$ with $X$ homogeneous are rationally strongly base point free. In Remark \ref{CBPush}, we point out that Schur polynomials in the Chern classes of $\mathbb{V}(\mathfrak{g},\vec{\lambda},\ell)$ are strongly base point free on $\ovop{M}_{0,n}$. Recall that forgetful maps $\ovop{M}_{0,n}\to \ovop{M}_{0,m}$ with $m<n$ are flat. Lemma \ref{SBPFprops}, together with Proposition \ref{GWStrong} and Remark \ref{CBPush}, is therefore a source of basepoint free cycles on the moduli spaces $\ovop{M}_{0,n}$.
In particular, one obtains base point free classes on $\ovop{M}_{0,n}$ by pushing forward strongly base point free classes of higher codimension on suitable $\ovop{M}_{0,n'}$ with $n'>n$. \subsection{Rational strongly basepoint free cycles} \begin{defi}\label{SBPFDef} An effective integral Chow cycle $\alpha\in A^k(X)$ of codimension $k$ on an equidimensional, possibly singular, reducible, and/or disconnected projective variety $X$ is said to be {\em{rationally strongly basepoint free}} if there is a flat morphism $s:U\to X$ from an equidimensional quasi-projective scheme $U$ of finite type and a proper morphism $p:U\to W$ of relative dimension $\dim X-k$, where $W$ is an irreducible quasi-projective variety, isomorphic to an open subset of $\Bbb{A}^m$ for a suitable $m$, such that each component of $U$ surjects onto $W$, and $\alpha= (s|F_p)_*[F_p]$, where $F_p$ is a general fiber of $p$. \end{defi} \begin{defi} Denote the semigroup of rationally strongly basepoint free classes of codimension $k$ on a (possibly singular) projective variety $X$ by $\op{SBPF}^k(X)\subseteq A^k(X)$.\end{defi} For rationally strongly basepoint free cycles, unlike for the strongly basepoint free cycles of \cite{FL}, we are working with rational equivalence, rather than numerical equivalence. Moreover, for $\op{SBPF}^k(X)\subseteq A^k(X)$, we do not form the closure of the cones generated by such classes. We have included the condition that $W$ is an open subset of $\Bbb{A}^m$ since we are interested in rational equivalence. Moreover, one can drop the condition that each component of $U$ surjects onto $W$: since $W$ is required to be quasi-projective, we may replace it by an open subset, and $U$ by the inverse image of this open set. Note that if $F_{p_i}$, $i=1,2$, are two fibers, then the classes $(s|F_{p_i})_*[F_{p_i}]$ coincide in $A^k(X)$. Indeed, suppose $U$ sits inside a projective space $\Bbb{P}\times W$ over $W$, and $\overline{W}$ is a projective space containing $W$ as an open subset.
Form the closure $\overline{U}$ of $U$ in the projective variety $\Bbb{P}\times \overline{W}\times X$. We have maps $\overline{U}\to X$ (which may not be flat) and $\overline{U}\to \overline{W}$. Over $W\subseteq\overline{W}$, $U$ and $\overline{U}$ coincide. Therefore $F_{p_i}$ are also fibers of $\overline{U}\to \overline{W}$ and are hence rationally equivalent. Now $\overline{U}\to X$ is proper and hence the pushforwards of the fibers agree in Chow groups. \begin{lemma}\label{SBPFprops}Rationally strongly basepoint free classes satisfy the following properties: \begin{enumerate} \item[(a)] A rationally strongly basepoint free class $\alpha\in \op{SBPF}^k(Z)$ is basepoint free in the following stronger sense: Given any irreducible subvariety $V\subset Z$ (for example a point), there is an effective cycle of class $\alpha$ which intersects $V$ in no more than the expected dimension (if the intersection is non-empty). \item[(b)] If $Z$ is a smooth projective variety and $\alpha\in \op{SBPF}^k(Z)$ and $\beta\in \op{SBPF}^{k'}(Z)$, then their intersection product $\alpha\cdot\beta\in \op{SBPF}^{k+k'}(Z)$. \item[(c)] Let $\pi:X\to Y$ be a flat morphism of relative dimension $d$ and $\alpha\in \op{SBPF}^k(X)$, then $\pi_*\alpha\in \op{SBPF}^{k-d}(Y)$. \item[(d)] If $X,Y$ are projective varieties, with $Y$ smooth, and $\pi:X\to Y$ is a morphism, then $\pi^*\op{SBPF}^k(Y)\subseteq \op{SBPF}^k(X)$. \item[(e)] The cycle class of a Schubert variety on a $G/P$ is rationally strongly base point free. Therefore all effective cycles on a homogeneous space are rationally strongly base point free. \item[(f)] Let $\Bbb{V}$ be a globally generated vector bundle of rank $n$ on a smooth projective variety $X$. The Schur polynomial $s_{\lambda}=\det(c_{{\lambda_i}+j-i})_{1\leq i,j\leq n}$ in the Chern classes $c_i=c_i(\Bbb{V})$ of $\Bbb{V}$ lies in $\op{SBPF}^{|\lambda|}(X)$.
Here $|\lambda|=\sum_i \lambda_i$ is the size of the partition $\lambda=(\lambda_1\geq \dots\geq \lambda_n\geq 0)$. See \cite[Def 3.2]{FL} and the proof of \cite[Lemma 5.7]{FL}. \item[(g)] Base point free divisors on a smooth variety are rationally strongly base point free. \end{enumerate} \end{lemma} \begin{proof}For (a), $\dim (V\cap s(F_p))\leq \dim (s^{-1}(V)\cap F_p)$, which in turn is the generic dimension of fibers of $s^{-1}(V)\to W$, which is $\dim U -\dim X +\dim V-\dim W=\dim V-k$. Part (b) follows from \cite[Corollary 5.6]{FL}. The $W$ for the intersection cycle is the product of the $W$ for $\alpha$ and $\beta$ and is hence rational. Part (c) follows from the same proof as \cite[Lemma 5.3]{FL} (here the $W$ is unchanged for the pushforward): In particular, it is not necessary to assume that $X$ or $Y$ are smooth. For (d) see \cite[Lemma 5.4]{FL} (particularly the first paragraph of the proof there, the $W$ is unchanged here). In particular, one does not need smoothness of $X$. Property (e) follows by taking $W=G/B$ (which is rational), $U$ the universal Schubert variety in $G/B\times G/P$, and $X=G/P$. Statement $(f)$ was proved for strongly basepoint free cycles on smooth varieties (see \cite[Def 3.2]{FL} and the proof of \cite[Lemma 5.7]{FL}); it is true for singular projective varieties $X$ as well (using properties (d) and (e) with $Y$ a Grassmannian). For property (g), note that any base point free divisor on a smooth variety is the pull back, from a projective space $\Bbb{P}^n$, of an effective divisor by a morphism, and hence properties (d) and (e) apply. \end{proof} \subsection{GW classes are rationally strongly basepoint free}\label{GWStrongSection} For the rest of the paper we assume $g=0$, that the variety $X=G/P$ is homogeneous, and that the $\alpha_i$ are cycle classes of Schubert varieties.
By \cite{FP}, the coarse moduli space $\ovop{M}_{0,n}(X,\beta)$ is equidimensional of the expected dimension \eqref{expected} (with $g=0$), and we may work with the fundamental class of the coarse moduli space $\ovop{M}_{0,n}(X,\beta)$ instead of the virtual fundamental class. The classes $I^{c,X}_{\beta, \vec{\alpha}}$ are therefore integral Chow cycles. \begin{proposition}\label{GWStrong} Assume $X=G/P$, and let $(X, \beta, \vec{\alpha})$ satisfy the codimension $c$ cycle condition. Then the Gromov-Witten cycle $I^{c,X}_{\beta, \vec{\alpha}}$ is rationally strongly basepoint free on $\ovop{M}_{0,n}$, i.e., $I^{c,X}_{\beta, \vec{\alpha}}\in \op{SBPF}^c(\ovop{M}_{0,n})$. \end{proposition} To prove Proposition \ref{GWStrong}, we refer to the following. \begin{lemma}\label{below}Let $\eta:\ovop{M}_{0,n}(X,\beta)\to \ovop{M}_{0,n}$, and $x \in \ovop{M}_{0,n}$. \begin{enumerate} \item Each component of $\eta^{-1}(x)$ has dimension equal to $\dim \ovop{M}_{0,n}(X,\beta)-\dim \ovop{M}_{0,n}$; \item The map $\eta$ is flat. \end{enumerate} \end{lemma} \begin{proof}(of Lemma \ref{below}) {\em{Part} (1):} This is of course well known, and follows from \cite{KM} and \cite{FP}. For a fixed nodal curve $C$ of arithmetic genus $0$, the space of maps $C\to X$ has dimension $\dim X + c_1(T_X)\cdot\beta= \dim \ovop{M}_{0,n}(X,\beta)-\dim \ovop{M}_{0,n}$ \cite[Section 5.2]{FP}. We therefore have to account for the collapsing operation in which $C$ has a component which is mapped onto $X$ with positive degree, and has only two special points (the point in $\ovop{M}_{0,n}$ collapses this component). Such maps are subject to a non-trivial equivalence: The extra component has a positive dimensional space of automorphisms fixing the marked points, and hence brings down the dimension of the space of maps by at least one. {\em{Part} (2):} Locally $\ovop{M}_{0,n}(X,\beta)$ is the quotient of a smooth variety $Y$ by a finite group $G$.
The composite map $Y\to Y/G\subseteq\ovop{M}_{0,n}(X,\beta)\leto{\eta}\ovop{M}_{0,n}$ is flat since $Y$ and $\ovop{M}_{0,n}$ are smooth and all fibers have the expected dimension by part (1). Now the coordinate rings of $Y/G$ are direct summands of the coordinate rings of $Y$, and hence are flat over $\ovop{M}_{0,n}$ (see \cite[Remark 2.6.8]{KV}). \end{proof} \begin{proof} (of Proposition \ref{GWStrong}) One has the evaluation map $\op{ev}: \ovop{M}_{0,n}(X,\beta)\to X^n$ and the flat map $\eta: \ovop{M}_{0,n}(X,\beta)\to \ovop{M}_{0,n}$. We claim that the pull back under $\op{ev}$ of $\alpha_1\tensor\alpha_2\tensor\dots\tensor\alpha_n$ is strongly base point free. This claim implies Proposition \ref{GWStrong}, since the Gromov-Witten cycle $I^{c,X}_{\beta, \vec{\alpha}}$ is the pushforward $\eta_*(\op{ev}^* ( \alpha_1\tensor\alpha_2\tensor\dots\tensor\alpha_n))$ and $\eta$ is flat (and using Property (c) in Section \ref{FuLe}). Every effective cycle on a projective homogeneous space is strongly basepoint free, see Lemma \ref{SBPFprops} (e). Therefore $\alpha_1\tensor\alpha_2\tensor\dots\tensor\alpha_n$ is a strongly basepoint free cycle on $X^n$. The claim now follows from property (d) of Lemma \ref{SBPFprops}: If $X,Y$ are projective varieties, with $Y$ smooth, and $\pi:X\to Y$ is a morphism, then $\pi^*\op{SBPF}^k(Y)\subseteq \op{SBPF}^k(X)$. \end{proof} \begin{remark} It is easy to see directly that $I^{c,X}_{\beta, \vec{\alpha}}$ is basepoint free on $\ovop{M}_{0,n}$. Let $P$ be a point of $\ovop{M}_{0,n}$, and let $Z\subset X^n$ be the product of Schubert varieties $X_i$ with cycle classes $\alpha_i$. Note that $G^n$ acts transitively on $X^n$.
By Kleiman's Bertini theorem \cite{Kleiman}, for general $\vec{h}=(h_1,\dots,h_n)\in G^n$, one has that $\op{ev}^{-1}(\vec{h}Z)$ has the expected codimension inside $\ovop{M}_{0,n}(X,\beta)$, and meets the fiber $\eta^{-1}(P)$ (which is equidimensional) in the expected dimension, which is easily computed to be $-c<0$; that is, $\op{ev}^{-1}(\vec{h}Z)$ misses the fiber entirely. The cap product \eqref{class} is represented by the effective cycle $\op{ev}^{-1}(\vec{h}Z)$, and the basepoint freeness follows. \end{remark} \subsection{Chern classes of conformal blocks on $\ovop{M}_{0,n}$ are rationally strongly basepoint free}\label{CB}\label{CBDivisors} Conformal blocks bundles refer to the vector bundles of coinvariants $\mathbb{V}(\mathfrak{g},\vec{\lambda},\ell)$ defined on $\ovmc{M}_{g,n}$, where $(\mathfrak{g},\vec{\lambda},\ell)$ is a compatible triple consisting of a simple Lie algebra $\mathfrak{g}$, a positive integer $\ell$, and an $n$-tuple $\vec{\lambda}=(\lambda_1,\ldots,\lambda_n)$ of dominant weights for $\mathfrak{g}$ at level $\ell$. One can find a construction of these bundles in \cite{Fakh} (they were originally constructed in \cite{TUY}), as well as a proof of global generation in case $g=0$, and many relevant examples and results, including formulas for the Chern classes in genus zero. Formulas for the first Chern classes were given in the cases of genus zero, and genus one with one marked point, in \cite{Fakh}. Together with factorization formulas, these determine the first Chern class in any genus (F-curves span the second homology). Formulas for the total Chern character were given in \cite{MOPPZ} in arbitrary genus; the bundles are referred to as Verlinde bundles there. \begin{remark}\label{CBPush} Schur polynomials in the Chern classes of $\mathbb{V}(\mathfrak{g},\vec{\lambda},\ell)$ on $\ovop{M}_{0,n}$ are strongly base point free. Note that these Schur classes include Chern classes of $\Bbb{V}$.
Indeed, the vector bundles $\mathbb{V}(\mathfrak{g},\vec{\lambda},\ell)$ defined on $\ovop{M}_{0,n}$ are globally generated in case $g=0$, and parts (e), (f) of Lemma \ref{SBPFprops} therefore apply. \end{remark} \section{GW cycles} In Proposition \ref{DivisorIntersection} we give a formula for the intersection of a GW cycle of codimension one with F-Curves, described below in Def \ref{FCurve}. These curves can be used to compute the class of a divisor (see Section \ref{NA}). Ingredients for the proof of Proposition \ref{DivisorIntersection} will be defined in Section \ref{FactProp}. The proof is given in Section \ref{DivIntProof}. Prop \ref{DivisorIntersection} is generalized in Proposition \ref{HigherCodimensionIntersection}, which gives an explicit formula for the intersection of a GW locus $I^{c,X}_{\beta,\vec{\alpha}}$ of arbitrary codimension $c$ with a boundary cycle of complementary codimension; such boundary cycles, like $\op{F}$-curves, are products of moduli spaces. The proof of Proposition \ref{HigherCodimensionIntersection} is analogous to that of Proposition \ref{DivisorIntersection}; we state them separately for clarity, and because we focus on divisors. We show in Section \ref{Reconstruction} how it is sometimes possible to simplify the formulas by reducing four-point classes to three-point classes. \subsection{Computing classes of GW cycles by intersecting with boundary classes}\label{divisorClasses} \subsubsection{Intersecting GW divisors with boundary curves} \begin{definition}\label{FCurve} If $N_1\cup \cdots \cup N_4$ is a partition of $[n]=\{1,\ldots,n\}$ consisting of four nonempty subsets, then given four pointed curves $(\mathbb{P}^1, \{p_i\}_{i \in N_j}\cup \{P_j\})\in \ovop{M}_{0,|N_j|+1}$, we can define a map $$\ovop{M}_{0,4} \longrightarrow \ovop{M}_{0,n}, \ \ (C_0, \{Q_1,\ldots,Q_4\}) \mapsto (C,\vec{p}),$$ where $C$ is a union of $C_0$ and the four copies of pointed $\mathbb{P}^1$ glued by attaching the points $\{P_j\}_{j=1}^4$ to the four marked points $\{Q_j\}_{j=1}^4$.
The $\op{F}$-Curve $F_{N_1,\cdots,N_4}$ is the numerical equivalence class of the image of this map. \end{definition} \begin{proposition}\label{DivisorIntersection}Let $\op{F}_{N_1,\ldots,N_4}$ be an $\op{F}$-Curve on $\ovop{M}_{0,n}$, let $X$ be a smooth projective homogeneous variety and suppose $\vec{\alpha}$ satisfies the codimension one cycle condition. Then $$I^{1,X}_{\beta,\vec{\alpha}} \cdot \op{F}_{N_1,\ldots,N_4} = \sum_{\substack{\vec{\omega}=(\omega_1,\ldots,\omega_4)\\ \in (W/W_P)^4}} \ \sum_{\substack{\beta_1,\ldots,\beta_4\geq 0\\ \beta_1+\cdots+\beta_4\leq \beta}} I^{1,X}_{\beta-\sum_{j=1}^4\beta_j, \vec{\omega}} \prod_{j=1}^4 \ I^{0,X}_{\beta_j, \alpha(N_j)\cup \omega_j'} .$$ \end{proposition} We note the similarity of the expression in the statement of Prop \ref{DivisorIntersection} with \cite[Prop 2.7]{Fakh}, which gives the intersection of conformal blocks divisors $c_1(\mathbb{V}(\mathfrak{g},\vec{\lambda},\ell))$ and F-Curves. These are equal in the case $X=\mathbb{P}^r$, $\mathfrak{g}=\mathfrak{sl}_{r+1}$, and $\ell=1$ (see Prop \ref{complet}). \subsubsection{The nonadjacent basis}\label{NA} To compute classes of GW divisors in examples, we will use what is called the nonadjacent basis, which we next describe. Let $G_n$ be a cyclic graph with $n$ vertices labeled $S=\{1, 2, \ldots , n\}$. A subset of vertices $T\subset S$ is called adjacent if $t(T)$, the number of connected components of the subgraph generated by $T$, is $1$. Since $G_n$ is cyclic, if $t(T) = k$, then $t(T^c) = k$. By \cite[Proposition 1.7]{Carr}, the set $B=\{\delta_{T}: \ t(T) \ge 2\}$ forms a basis of $\op{Pic}(\ovop{M}_{0,n})_{\mathbb{Q}}$. The dual of a basis element $\delta_{T} \in B$ is an $\op{F}$-curve if and only if $t(T)=2$, and for $t(T)>2$, dual elements are alternating sums of $\op{F}$-curves. In \cite{MoonSwin} an algorithm is given for finding a dual element. For the purposes of computing examples we give the dual basis for $n=5$ and $n=6$ below.
On $\ovop{M}_{0,5}$, one nonadjacent basis is given by $B=\{\delta_{13}, \delta_{14}, \delta_{24}, \delta_{25}, \delta_{35}\}$, and the dual basis to $B$ consists of $\op{F}$-curves $\{\op{F}_{1,2,3,45}, \op{F}_{1,4,5,23}, \op{F}_{2,3,4,15}, \op{F}_{1,2,5,34}, \op{F}_{3,4,5,12}\}$. For $\op{Pic}(\ovop{M}_{0,6})$ one nonadjacent basis is $$\{\delta_{13},\delta_{14},\delta_{15},\delta_{24},\delta_{25},\delta_{26},\delta_{35},\delta_{36},\delta_{46}, \delta_{124},\delta_{125},\delta_{134},\delta_{135},\delta_{136},\delta_{145},\delta_{146}\}.$$ Classes of divisors can be computed by intersecting with curves in the dual basis: \begin{multline} \{F_{1,2,3,456},F_{1,4 ,23 ,56},F_{1, 5,6 ,234}, F_{2, 3, 4, 156}, F_{2, 5, 16, 34},F_{1, 2, 6, 345},\\ F_{3, 4, 5, 126}, F_{3, 6, 12, 45}, F_{4, 5, 6, 123}, F_{3, 4, 12, 56}, F_{5, 6, 12, 34}, F_{1, 2, 34, 56}, \\ (F_{5, 6, 13, 24} + F_{1, 2, 3, 456}+F_{2, 3, 4, 156} - F_{2, 3, 16, 45}), F_{2, 3, 16, 45}, F_{1, 6, 23, 45}, F_{4, 5, 16, 23}\}. \end{multline} \subsubsection{Intersecting higher codimension GW cycles with boundary classes} For $k=n-3-c$, the locus $$\delta^k(\ovop{M}_{0,n})=\{(C,\vec{p}) \in \ovop{M}_{0,n} \ | \ C \mbox{ has at least } k \mbox{ nodes} \}$$ is effective and has dimension $c$. We will next give a formula for the intersection of its irreducible components with $I^{c,X}_{\beta,\vec{\alpha}}$ in case $(X,\beta, \vec{\alpha})$ satisfies the codimension $c$ cycle condition. For the formula, we set a small amount of notation. Irreducible components of $\delta^k(\ovop{M}_{0,n})$ are determined by the dual graph of the curves parametrized. Such a graph is a tree with $k$ edges, joining $k+1$ vertices, decorated by $n$ half-edges, so that each vertex is incident to at least $3$ edges and half-edges combined. To simplify the discussion, we label the vertices $\vec{v}=\{v_1,\ldots,v_{k+1}\}$, and edges $\vec{e}=\{e_{ij}\}_{1\le i<j\le k+1}$, where we take $e_{ij}$ to be zero unless $v_i$ and $v_j$ are connected by an edge.
Half-edges are labeled $\vec{h}=\{h_j\}_{j=1}^n$, and we label the component $\delta^k(\Gamma_{\vec{v},\vec{e},\vec{h}})$. In the formula given in Proposition \ref{HigherCodimensionIntersection}, given a vertex $v_i$, by $\alpha(v_i)$ we mean the set of $\alpha_j \in A^*(X)$ associated to the set of half edges $h_j$ attached to the vertex $v_i$. For each vertex $v_i$ we'll also consider new classes $\gamma_{ia} \in A^*(X)$, associated to the nonzero edges $e_{ia}$ for $i+1 \le a \le k+1$, and classes $\gamma_{ai}^* \in A^*(X)$, dual to $\gamma_{ai} \in A^*(X)$, associated to each nonzero edge $e_{ai}$ with $1\le a \le i-1$. If the edge $e_{ij}$ is zero (so vertices $v_i$ and $v_j$ are not connected in the dual graph), we still write down a class $\gamma_{ij}$, but it simply does not appear in the formula; alternatively, one can imagine that there is an edge and, by propagation of vacua, take the corresponding class to be the fundamental class. \begin{proposition}\label{HigherCodimensionIntersection}With notation as above, one has $$I^{c,X}_{\beta,\vec{\alpha}} \cdot \delta^k(\Gamma_{\vec{v},\vec{e},\vec{h}}) = \sum_{\substack{0\le \beta_i \le \beta \\ \sum_{i=1}^{k+1}\beta_i = \beta}} \ \sum_{\substack{\gamma_{ij}\in W/W_P \\ e_{ij}\neq 0}} \ \prod_{i=1}^{k+1} I^{c_i,X}_{\beta_i, \alpha(v_i)\cup \{\gamma^*_{ai}\}_{a=1}^{i-1} \cup \{\gamma_{ia}\}_{a=i+1}^{k+1}}.$$ \end{proposition} While we mainly focus on examples of divisors here, there are a number of reasons to compute classes of higher codimension GW cycles. For instance, while one usually obtains base point free classes by pullback along a morphism, one of the main themes of this paper is that the pushforward of higher codimension cycles produces new basepoint free divisor classes. Moreover, it is not clear, even in the simplest case $X=\Bbb{P}^r$, what divisors we get by pushforward from higher codimension ($c>1$) cycles.
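For $X=\Bbb{P}^2$, a flavor of the numbers that enter such pushforwards is given by Kontsevich's recursion for the number $N_d$ of rational plane curves of degree $d$ through $3d-1$ general points, which follows from WDVV (see \cite{KM}). A minimal computational sketch (the function name is ours):

```python
from math import comb

def kontsevich_counts(dmax):
    """N_d = number of rational plane curves of degree d through 3d - 1
    general points in P^2, via Kontsevich's WDVV recursion."""
    N = {1: 1}  # one line through two points
    for d in range(2, dmax + 1):
        N[d] = sum(
            N[a] * N[d - a] * a * a * (d - a)
            * ((d - a) * comb(3 * d - 4, 3 * a - 2)
               - a * comb(3 * d - 4, 3 * a - 1))
            for a in range(1, d)
        )
    return N

# N_1 = 1, N_2 = 1, N_3 = 12, N_4 = 620
```
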
These classes for $r=2$ will incorporate the Kontsevich counts of rational curves in $\Bbb{P}^2$ passing through a fixed number of points in general position: To the best of our knowledge, these Kontsevich counts are not known to be related to representation theoretic or conformal blocks ranks, and therefore any relation to Chern classes of conformal blocks is new to us. \subsection{Ingredients for the proofs of Propositions \ref{DivisorIntersection} and \ref{HigherCodimensionIntersection}} \subsubsection{Factorization, Propagation of Vacua}\label{FactProp} In the proofs of Propositions \ref{DivisorIntersection} and \ref{HigherCodimensionIntersection} we use two properties of GW classes which we call Factorization and Propagation of Vacua, for their similarity to properties of the same name that hold for vector bundles of coinvariants and conformal blocks. Here $I^{0,X}_{\beta, \vec{\alpha}}$ plays the role of the rank of the vector bundle of co-invariants, and the $I^{1,X}_{\beta, \vec{\alpha}}$ correspond to the first Chern classes of the bundles. To state the {factorization formula} \cite[Section 2.2.6]{KM}, we write down the cohomology class of the diagonal for $X=G/P$: the Schubert varieties $X_w$ in $X=G/P$ are parameterized by $W/W_P$. For $w\in W/W_P$, let $w'$ be the unique element so that $[X_w]\cdot [X_{w'}]=[pt]\in H^*(X)$. Then the cohomology class of the diagonal $\Delta\subset X\times X$ is \begin{equation}\label{diagonal} [\Delta]=\sum_{w\in W/W_P} [X_w]\tensor [X_{w'}]\in A^{\dim X}(X\times X). \end{equation} Let $\gamma: \ovop{M}_{0,n_1+1} \times \ovop{M}_{0,n_2+1} \to \ovop{M}_{0,n_1+n_2}$ be the clutching morphism, where one attaches pointed curves by gluing them together along the last marked point for each factor. Let $\pi_i:\ovop{M}_{0,n_1+1} \times \ovop{M}_{0,n_2+1} \longrightarrow \ovop{M}_{0,n_i+1}$ be the projection maps.
If $(X,\beta, \{\alpha_1,\ldots,\alpha_{n_1+n_2}\})$ satisfies the codimension $1$ cycle condition, then the {\bf{factorization formula}} states that $\gamma^* I^{1,X}_{\beta, \{\alpha_1,\dots,\alpha_{n_1+n_2}\}}$ decomposes as a sum of divisor classes pulled back from the $\ovop{M}_{0,n_i+1}$ along $\pi_i$. The class pulled back from $\ovop{M}_{0,n_2+1}$ equals $$\sum_{\beta_1 +\beta_2=\beta, w\in W/W_P}I^{0,X}_{\beta_1, \{\alpha_1,\dots,\alpha_{n_1},[X_{w}]\}}\pi_2^* I^{1,X}_{\beta_2, \{\alpha_{n_1+1},\dots,\alpha_{n},[X_{w'}]\}}.$$ Note that if $c_{\beta_1}$ and $c_{\beta_2}$ are the corresponding co-dimensions in \eqref{codim} then $c_{\beta_1}+c_{\beta_2} =c_{\beta}$, since the co-dimensions of $X_w$ and $X_{w'}$ add up to $\dim X$. If $(X,\beta, \{\alpha_1,\ldots,\alpha_{n_1+n_2}\})$ satisfies the codimension $0$ cycle condition, then $I^{0,X}_{\beta, \{\alpha_{1},\dots,\alpha_{n_1+n_2}\}}$ breaks up as a sum $$\sum_{\beta_1 +\beta_2=\beta, w\in W/W_P}I^{0,X}_{\beta_1, \{\alpha_1,\dots,\alpha_{n_1},[X_{w}]\}} \ I^{0,X}_{\beta_2, \{\alpha_{n_1+1},\dots,\alpha_{n},[X_{w'}]\}}.$$ These can be generalized to analogous factorization formulas for $I^{c,X}_{\beta, \vec{\alpha}}$ in case $(X,\beta, \vec{\alpha})$ satisfies the codimension $c$ cycle condition. The GW classes also satisfy a formula \cite[Section 2.2.3]{KM}, analogous to what is called {\bf{Propagation of Vacua}} for vector bundles of conformal blocks. Namely, let $T_0\in A^0(X)$ be the fundamental class of the space. If $(X,\beta, \vec{\alpha})$ satisfies the codimension $c$ cycle condition, then $$I^{c,X}_{\beta, \{\alpha_{1},\dots,\alpha_{n},[T_0]\}}=\pi_{n+1}^* I^{c,X}_{\beta, \{\alpha_{1},\dots,\alpha_{n}\}},$$ where $\pi_{n+1}:\ovop{M}_{0,n+1}\to \ovop{M}_{0,n}$ is the projection map. \subsubsection{Small quantum cohomology}\label{small} Assume $X$ is a homogeneous space as before. Let $T_1,\dots,T_p$ be a basis of $A^1(X)$.
Let $\Bbb{Z}[q]=\Bbb{Z}[q_1,\dots,q_p]$ where $q_1,\dots,q_p$ are formal variables. For $\beta\in H_2(X)$, let $q^{\beta}=q_1^{\beta\cdot T_1}q_2^{\beta\cdot T_2}\dots q_p^{\beta\cdot T_p}$, and set $QH^*(X)= H^*(X)\tensor \Bbb{Z}[q]$. Define a small quantum $\Bbb{Z}[q]$-algebra structure $\star$ on $QH^*(X)$ by $$\alpha_1\star \alpha_2= \sum_{\gamma,\beta}q^{\beta}\langle \alpha_1,\alpha_2,\gamma\rangle_{\beta}\, \gamma',$$ where $\alpha_1,\alpha_2\in H^*(X)$, $\beta$ runs through $H_2(X)$, and $\gamma$ runs through all Schubert cycle classes. \subsubsection{Proof of Proposition \ref{DivisorIntersection}}\label{DivIntProof} Let $\op{F}_{N_1,\ldots,N_4}$ be an $\op{F}$-Curve, and let $I^{1,X}_{\beta,\vec{\alpha}}$ be a GW divisor on $\ovop{M}_{0,n}$. Without loss of generality we can rename the $\alpha_i$ so that $\{\alpha_i : i\in N_j\}=\{\alpha^j_1,\ldots,\alpha^j_{n_j}\}$, where $n_j=|N_j|$. There is a surjective map from a product of $\ovop{M}_{0,4}$ and the four spaces $\ovop{M}_{0,|N_j|+1}$ onto $F_{N_1,N_2,N_3,N_4}$. To compute the class of $I^{1,X}_{\beta,\vec{\alpha}}$, one pulls the divisor back to the product of the moduli spaces. By the factorization formula, one gets the asserted formula. \begin{remark}The proof of Proposition \ref{HigherCodimensionIntersection} is analogous to the proof of Proposition \ref{DivisorIntersection}. \end{remark} \subsection{Divisor intersection simplifications}\label{Reconstruction} In order to find classes of GW divisors, one needs to know how to find: $$I^{0,X}_{\beta, \{\alpha_{1},\dots,\alpha_{n}\}}\in A^0(\ovop{M}_{0,n})=\Bbb{Z} \mbox{ and } \ I^{1,X}_{\beta, \{\alpha_{1},\dots,\alpha_{4}\}}\in \op{Pic}(\ovop{M}_{0,4})=\Bbb{Z}.$$ Often these quantities can be simplified computationally.
For example: \begin{enumerate} \item $I^{0,X}_{\beta, \{\alpha_{1},\dots,\alpha_{n}\}}$ always reduces to $3$-point GW numbers, which are the coefficients of $q^{\beta}[pt]$ in the small quantum product $$[X_{w_1}]\star [X_{w_2}]\star\dots \star [X_{w_n}].$$ \item If one of the four classes, say $\alpha_4$ after relabeling, has codimension one, then by \cite[Prop III, p 35]{FP}, \begin{equation}\label{Divisor} I^{1,X}_{\beta, \{\alpha_{1},\dots,\alpha_{4}\}}=(\alpha_4\cdot \beta) \ I^{0,X}_{\beta, \{\alpha_{1},\alpha_2,\alpha_{3}\}}\in \Bbb{Z}=\op{Pic}(\ovop{M}_{0,4}). \end{equation} \item If $\beta=0$, then $I^{1,X}_{\beta, \{\alpha_{1},\ldots,\alpha_{n}\}}=0$ and $I^{0,X}_{\beta, \{\alpha_{1},\ldots,\alpha_{n}\}}$ coincides with the multiplicity of the class of a point in the product, in cohomology $H^*(X)$, of $\alpha_1,\dots,\alpha_n$. \end{enumerate} \bigskip As we next explain, another simplification of the four-point numbers can often be made in terms of small quantum cohomology numbers and identities pulled back from $\ovop{M}_{0,4}$. \subsubsection{} The formulas to be described in this section are from \cite[3.2.3, Step 2]{KM}. We extend the definition of $I^{c,X}_{\beta, \vec{\alpha}}$ to allow for arbitrary $\alpha_i\in QH^*(X)=H^*(X)\tensor\Bbb{Z}[q]$ (see Section \ref{small}) by $\Bbb{Z}$-linearity (and not $\Bbb{Z}[q]$ linearity!) in $\alpha_i$, and by setting $$I^{c,X}_{\beta, \{q^{m_1}\alpha_1, \ldots , q^{m_n}\alpha_n\}}=I^{c,X}_{\beta-\sum_i m_i, \{\alpha_1,\ldots,\alpha_n\}}.$$ Recall that the degree $|q^{\beta}\alpha|$ is $\beta\cdot c_1(T_X) + |\alpha|$ for homogeneous $\alpha\in H^*(X)$.
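For $X=\Bbb{P}^r$, the small quantum ring has the presentation $\Bbb{Z}[q][H]/(H^{r+1}-q)$, so items (1) and (2) above become mechanical. The following sketch (the helper names are ours, assuming this presentation of $QH^*(\Bbb{P}^r)$) computes $I^{0,\Bbb{P}^r}$ as the coefficient of $q^{\beta}[pt]$ and applies \eqref{Divisor}:

```python
def star_power(exponents, r):
    """Small quantum product H^{a_1} * ... * H^{a_n} in
    QH*(P^r) = Z[q][H]/(H^(r+1) - q); returns (m, s) with product q^m H^s."""
    return divmod(sum(exponents), r + 1)

def I0(r, beta, exponents):
    """Item (1): coefficient of q^beta [pt] = q^beta H^r in the star product."""
    return 1 if star_power(exponents, r) == (beta, r) else 0

def I1_divisor(r, beta, exponents):
    """Item (2) when the last class is the hyperplane class H."""
    assert exponents[-1] == 1
    return beta * I0(r, beta, exponents[:-1])
```

For instance, on $\Bbb{P}^2$ one finds $I^{0,\Bbb{P}^2}_{1,\{H_2,H_2,H_1\}}=1$ (the line through two general points meets a general line once), and hence $I^{1,\Bbb{P}^2}_{1,\{H_2,H_2,H_1,H_1\}}=1$.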
\begin{proposition}\label{Recon}For homogeneous $\alpha_i,\alpha_j,\alpha_k,\alpha_{\ell},\alpha_m \in QH^{*}(X)$ such that $$\sum_{x} |\alpha_x| = 1 +\beta \cdot c_1 (T_X) +\dim X,$$ \begin{multline} I^{1,X}_{\beta, \{\alpha_k,\alpha_{\ell},\alpha_{m},\alpha_{i}\star \alpha_j\}} = I^{1,X}_{\beta, \{\alpha_j,\alpha_{\ell},\alpha_{m},\alpha_i \star \alpha_k\}} +I^{1,X}_{\beta, \{\alpha_i,\alpha_k,\alpha_m,\alpha_j \star \alpha_{\ell}\}}-I^{1,X}_{\beta, \{\alpha_i,\alpha_j,\alpha_m,\alpha_{k}\star \alpha_{\ell}\}} \\ =I^{1,X}_{\beta,\{\alpha_j,\alpha_{k},\alpha_{m},\alpha_i \star \alpha_{\ell}\}} +I^{1,X}_{\beta,\{\alpha_i,\alpha_{\ell},\alpha_m,\alpha_j \star \alpha_k\}}-I^{1,X}_{\beta, \{\alpha_i,\alpha_j,\alpha_m,\alpha_{k}\star \alpha_{\ell}\}}. \end{multline} \end{proposition} \begin{proof} It is easy to check that we may assume $\alpha_i,\alpha_j,\alpha_k,\alpha_{\ell},\alpha_m \in H^{*}(X)$, by writing $\alpha_i=q^{\beta_i}\alpha'_i$ etc. We work with the contraction morphism $\rho: \ovop{M}_{0,5}(X,\beta)\to \ovop{M}_{0,4}$.
On $\ovop{M}_{0,4}\cong \mathbb{P}^1$, one has the divisor class identities $$\delta_{ij,k \ell}=\delta_{ik,j\ell} = \delta_{i\ell, jk}.$$ When pulled back along $\rho$, these give the identities \begin{multline}\label{RelA} \sum_{S}I_{3,\beta_1}^{X}(\alpha_i,\alpha_j,\gamma) \ I^{X}_{4,\beta-\beta_1}(\alpha_k,\alpha_{\ell},\alpha_{m},\gamma') +\sum_{S}I_{4,\beta_1}^{X}(\alpha_i,\alpha_j,\alpha_m,\gamma) \ I^{X}_{3,\beta-\beta_1}(\alpha_k,\alpha_{\ell},\gamma')\\ =\sum_{S}I_{3,\beta_1}^{X}(\alpha_i,\alpha_k,\gamma) \ I^{X}_{4,\beta-\beta_1}(\alpha_j,\alpha_{\ell},\alpha_{m},\gamma') +\sum_{S}I_{4,\beta_1}^{X}(\alpha_i,\alpha_k,\alpha_m,\gamma) \ I^{X}_{3,\beta-\beta_1}(\alpha_j,\alpha_{\ell},\gamma')\\ =\sum_{S}I_{3,\beta_1}^{X}(\alpha_i,\alpha_{\ell},\gamma) \ I^{X}_{4,\beta-\beta_1}(\alpha_j,\alpha_{k},\alpha_{m},\gamma') +\sum_{S}I_{4,\beta_1}^{X}(\alpha_i,\alpha_{\ell},\alpha_m,\gamma) \ I^{X}_{3,\beta-\beta_1}(\alpha_j,\alpha_{k},\gamma'), \end{multline} where $S=\{\gamma, \beta_1 \ | \ [\Delta]=\sum \gamma \otimes \gamma' \ \}$. Using that $$\alpha_x \star \alpha_y = \sum_{\beta_1,\gamma} q^{\beta_1} \langle \alpha_x, \alpha_y, \gamma \rangle_{\beta_1} \ \gamma',$$ one has $q^{\beta_1}I^X_{4,\beta-\beta_1}(\alpha_a,\alpha_b,\alpha_c,\alpha_d)=I^X_{4,\beta}(\alpha_a,\alpha_b,\alpha_c,q^{\beta_1}\alpha_d)$. We may therefore rewrite Equation \eqref{RelA} as \begin{multline}\label{RelB} I^{X}_{4,\beta}(\alpha_k,\alpha_{\ell},\alpha_{m},\alpha_{i}\star \alpha_j) +I_{4,\beta}^{X}(\alpha_i,\alpha_j,\alpha_m,\alpha_{k}\star \alpha_{\ell}) \\ = I^{X}_{4,\beta}(\alpha_j,\alpha_{\ell},\alpha_{m},\alpha_i \star \alpha_k) +I_{4,\beta}^{X}(\alpha_i,\alpha_k,\alpha_m,\alpha_j \star \alpha_{\ell}) \\ = I^{X}_{4,\beta}(\alpha_j,\alpha_{k},\alpha_{m},\alpha_i \star \alpha_{\ell}) +I_{4,\beta}^{X}(\alpha_i,\alpha_{\ell},\alpha_m,\alpha_j \star \alpha_k).
\end{multline} \end{proof} \subsubsection{Application of Proposition \ref{Recon}} We write a simpler version of Proposition \ref{Recon}, which when used judiciously can simplify $4$-point numbers to sums of $3$-point numbers. \begin{proposition}\label{Recursive} For $\alpha_i,\alpha_j,\alpha_k,\alpha_{\ell},\alpha_m \in QH^{*}(X)$, suppose that $\alpha_{\ell}=H$ is the class of a hyperplane, and $\alpha_m=H^{\star (t-1)}$, so that $\alpha_{\ell}\star \alpha_m = H \star H^{\star (t-1)}=H^{\star t}$. Then one can rewrite $I^{1,X}_{\beta, \{\alpha_i,\alpha_j,\alpha_k,H^{ \star t}\}}$ as: \begin{equation}\label{formule} I^{1,X}_{\beta, \{\alpha_i\star H,\alpha_k,\alpha_j, H^{ \star(t-1)} \}} + I^{1,X}_{\beta, \{\alpha_i,H,\alpha_k, H^{ \star (t-1)}\star \alpha_j \}} -I^{1,X}_{\beta, \{\alpha_i\star\alpha_j,\alpha_k, H,H^{ \star(t-1)} \}} \end{equation} \end{proposition} \begin{remark} If $\alpha_2,\alpha_3,\alpha_4\in H^*(X)$ (and not in $QH^*(X)$), by \cite[Prop III, p 35]{FP}, $$I^{1,X}_{\beta, \{H_1,\alpha_2,\alpha_{3},\alpha_4\}}=(H_1\cdot \beta) I^{0,X}_{\beta, \{\alpha_2,\alpha_3,\alpha_4\}}$$ and $$I^{1,X}_{\beta, \{H_1,q^{\beta_2}\alpha_2,q^{\beta_3}\alpha_{3},q^{\beta_4}\alpha_4\}}=(H_1\cdot \beta') I^{0,X}_{\beta', \{\alpha_2,\alpha_3,\alpha_4\}},$$ where $\beta'=\beta-\beta_2-\beta_3-\beta_4$. Therefore in Equation \eqref{formule}, the second and third terms can be computed using small quantum cohomology. The first term has $H^{\star (t-1)}$ in the last coordinate, so the exponent of $H$ has dropped, and we can iterate the procedure to reduce to $t=1$. \end{remark} For an example of a calculation done using Proposition \ref{Recursive}, see Section \ref{Ex3}. \section{Projective space Examples} The simplest case to consider is $X=\Bbb{P}^r$, and Proposition \ref{complet} links divisor classes $I^{1,\mathbb{P}^r}_{\vec{m}, d}$ on $\ovop{M}_{0,n}$ to conformal blocks divisors for type A at level 1.
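For instance, Proposition \ref{complet} below specializes, in the case $r=1$, $n=2(d+1)$, and $\vec{m}=(1,\ldots,1)$, to an identification with the $\op{S}_n$-invariant level one $\mathfrak{sl}_2$ conformal blocks divisor:

```latex
I^{1,\mathbb{P}^1}_{(1,\ldots,1),\, d}
  \ \equiv\
c_1\big(\mathbb{V}(\mathfrak{sl}_{2}, (\omega_1,\ldots,\omega_1), 1)\big),
\qquad \textstyle\sum_{i=1}^n m_i = n = 2(d+1) = (r+1)(d+1).
```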
\begin{proposition}\label{complet}Suppose we are given a pair $(r, \vec{m})$ such that $\sum_{i=1}^nm_i=(r+1)(d+1)$. Then $$I^{1,\mathbb{P}^r}_{\vec{m}, d} \equiv c_1(\mathbb{V}(\mathfrak{sl}_{r+1}, \{\omega_{m_1},\ldots, \omega_{m_n}\},1)).$$ \end{proposition} \begin{proof}Conformal blocks divisors $c_1(\mathbb{V}(\mathfrak{g},\vec{\lambda},\ell))$ are described briefly in Section \ref{CBDivisors}. Here we are concerned with the special case when $\mathfrak{g} =\mathfrak{sl}_{r+1}$, and $\ell=1$. In this case, for $\vec{\lambda}=(\lambda_1,\ldots,\lambda_n)$, the $\lambda_i$ correspond to Young diagrams with rows $\ell=1 \ge \lambda_i^1 \ge \cdots \ge \lambda_i^r$, and the compatibility requirement is that $\sum_{i=1}^n|\lambda_i|=(r+1)(d+1)$, where $|\lambda_i|=\sum_{j=1}^r\lambda_i^j$. It is enough to show that each divisor intersects any $\op{F}$-curve in the same degree. By Proposition \ref{DivisorIntersection}, this amounts to proving, for any partition $N_1 \cup \cdots \cup N_4$ of $[n]$ into nonempty subsets (writing $\vec{m}(N_j)=\{m_i : i \in N_j\}$) and any $\vec{a}=(a_1,a_2,a_3,a_4)$ with $$\sum_{i \in N_j}m_i+a'_j=(r+1)(d_j+1), \mbox{ and } \sum_{i}a_i=(r+1)(d-\sum_{i}d_i+1),$$ that $I^{0,\mathbb{P}^r}_{d_j, \vec{m}(N_j) \cup a'_j}$ is proportional to $\op{rank}(\mathbb{V}(\mathfrak{sl}_{r+1}, \{\omega_{m_i}: i \in N_j\} \cup \omega_{a'_j},1))$, and that $$I^{1,\mathbb{P}^r}_{d-\sum_i d_i, \vec{a}} \equiv c_1(\mathbb{V}(\mathfrak{sl}_{r+1}, \{\omega_{a_1},\ldots, \omega_{a_4}\},1)).$$ In \cite{Fakh}, Fakhruddin proved that the level one bundles in type A always have rank one.
So it is enough to check: \begin{enumerate} \item Four point classes are the same: $$I^{1,\mathbb{P}^r}_{\beta, \vec{a}} \equiv c_1(\mathbb{V}(\mathfrak{sl}_{r+1}, \{\omega_{a_1},\ldots, \omega_{a_4}\},1)), \mbox{ where } \sum_i a_i =(r+1)(\beta+1), \mbox{ and}$$ \item Coefficients are the same: $$I^{0,\mathbb{P}^r}_{d_j, \vec{m}(N_j) \cup a'_j} = \op{Rank}\mathbb{V}(\mathfrak{sl}_{r+1},\{\omega_{m_i}: i \in N_j \} \cup \omega_{a'_j},1)=1.$$ \end{enumerate} \bigskip To see that four point classes are the same: If one of the $a_i=0$ then the class is pulled back from $\ovop{M}_{0,3}$, and hence zero. The conformal blocks divisor is also trivial in this case. If $\beta=0$, then divisors from both theories are zero. Clearly $a_1+a_2+a_3+a_4 \leq 4r$ and hence $\beta \in \{0,1,2\}$. We show now that if $\beta=2$, the GW divisor is zero (the same is true of the conformal blocks divisor \cite[Lemma 5.1]{Fakh}). Clearly in this case $r\geq 3$ (otherwise we would need $4$ classes in $\Bbb{P}^2$ with codimensions summing to $9$). We want to count maps $f:(\Bbb{P}^1,p_1, p_2,p_3,p_4)\to\Bbb{P}^r$ such that the $p_i$ go into specified Schubert cells (generic translates of standard cells). The image of the conic lies in a plane in $\Bbb{P}^r$. The space of such planes $\op{Gr}(3,r+1)$ is of dimension $3(r-2)$. The conditions imposed on this plane are at least $\sum (a_i-2)=3(r+1)-8> 3(r-2)$, hence there are no such planes (if $a_i=3$, then we want the $\Bbb{C}^3\subset \Bbb{C}^{r+1}$ arising from the plane to meet a codimension $3$ linear subspace of $\Bbb{C}^{r+1}$ non-trivially, which imposes one condition on the plane. Similarly if $a_i> 3$, the number of conditions imposed is $(a_i-2)$). Finally, if $\beta=1$, we may assume all $a_i>1$, because if $a_i=1$, the GW divisor is of degree $1$, as is the conformal blocks divisor, by \cite[Lemma 5.1]{Fakh}. To count lines (maps of degree $1$), we can work in $\op{Gr}(2,r+1)$.
We therefore need to intersect the Schubert varieties corresponding to partitions $(a_1-1,0), (a_2-1,0), (a_3-1,0), (a_4-1,0)$ when $a_1+a_2+a_3+a_4-4=2(r+1)-4= 2(r-1)$ in the Grassmannian $\op{Gr}(2,r+1)$. Assuming $a_1\leq a_2\leq a_3\leq a_4$, we want the answer to be $a_1$ if $a_2+a_3\geq a_1+a_4$, and $r+1-a_4$ otherwise, as is the case for the corresponding conformal blocks divisor \cite[Lemma 5.1]{Fakh}. Let $\lambda_i=a_i-1$. We do the computation in the representation ring of $\mathfrak{sl}_2$. The tensor product of $V(\lambda_1)$ and $V(\lambda_4)$ is a multiplicity free string of representations $V(\lambda_4-\lambda_1), V(\lambda_4-\lambda_1+2),\dots, V(\lambda_1+\lambda_4)$. Since $\lambda_4-\lambda_1\geq \lambda_3-\lambda_2$, the desired intersection number is $1+1/2(\lambda_4+\lambda_1-(\lambda_4-\lambda_1))=a_1$ if $\lambda_4+\lambda_1\leq \lambda_2+\lambda_3$, and equal to $1+1/2(\lambda_2+\lambda_3-(\lambda_4-\lambda_1))= 1+ (r-1-\lambda_4)= r+1-a_4$ otherwise. Since the $\vec{m}(N_j) \cup a'_j$ satisfy the $c=0$ cycle condition, $I^{0,\mathbb{P}^r}_{d_j, \vec{m}(N_j) \cup a'_j}$ can be computed with small quantum cohomology numbers for $\Bbb{P}^r$ and is easily seen to be $1$. The ranks of the conformal blocks divisors in type A at level one are one \cite{Fakh}. The proof of Proposition \ref{complet} is now complete. \end{proof} In \cite{G} it was shown for $\op{S}_n$-invariant divisors $c_1(\mathbb{V}(\mathfrak{sl}_{r+1},\vec{\lambda},1))$ (and later for general divisors in \cite{GG}), in case $\sum_{i=1}^n|\lambda_i|=(r+1)(d+1)$, that the divisors $c_1(\mathbb{V}(\mathfrak{sl}_{r+1},\vec{\lambda},1))$ give maps to moduli spaces that generically parametrize configurations of weighted points that lie on a Veronese curve of degree $d$ in $\mathbb{P}^d$. The statement of the result in Proposition \ref{complet} is a priori different, as it refers generically to maps of $\mathbb{P}^1$ to $\mathbb{P}^{r}$.
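The representation-ring count above is easy to mechanize. A sketch (the helper names are ours), computing the overlap of the two Clebsch--Gordan strings, i.e., the dimension of the space of $\mathfrak{sl}_2$-invariants in $V(\lambda_1)\otimes\cdots\otimes V(\lambda_4)$:

```python
def cg_string(l1, l2):
    """Highest weights in the multiplicity free decomposition
    V(l1) (x) V(l2) = V(|l1-l2|) + V(|l1-l2|+2) + ... + V(l1+l2) for sl_2."""
    return set(range(abs(l1 - l2), l1 + l2 + 1, 2))

def four_point_line_count(a, r):
    """Count common constituents of V(l1) (x) V(l4) and V(l2) (x) V(l3),
    with l_i = a_i - 1, as in the computation for Gr(2, r+1) above."""
    l = [x - 1 for x in a]
    assert sum(l) == 2 * (r - 1)
    return len(cg_string(l[0], l[3]) & cg_string(l[1], l[2]))
```

With $a_1\le a_2\le a_3\le a_4$ this returns $a_1$ when $a_2+a_3\ge a_1+a_4$, and $r+1-a_4$ otherwise, as claimed.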
\subsection{Examples of higher codimension cycles that pushforward to divisors}\label{PushforwardExamples} Suppose $\sum m_i\equiv c-1 \pmod{r+1}$. Then $I^{c,\mathbb{P}^r}_{\vec{m}, d}$ is a codimension $c$ rationally strongly basepoint free cycle on $\ovop{M}_{0,n}$ with $(d+1)(r+1)+ c-1=\sum m_i$. Consider the forgetful map $\pi:\ovop{M}_{0,n}\to \ovop{M}_{0,n-c+1}$ dropping the last $c-1$ marked points. We wish to determine/study $\pi_* I^{c,\mathbb{P}^r}_{\vec{m}, d}$, a basepoint free divisor on $\ovop{M}_{0,n-c+1}$. We consider here two explicit examples in case $c=2$. If $m_n=1$, then this push forward coincides with $I^{1,\mathbb{P}^r}_{\vec{m}',d}$ on $\ovop{M}_{0,n-1}$ where $\vec{m}'=(m_1,\dots,m_{n-1})$. We therefore have selected examples for which $m_n>1$. \subsubsection{Pushforward of $Z=I^{2,\mathbb{P}^3}_{\{H_1,H_2^6\}, 2}$ from $\ovop{M}_{0,7}$ to a basepoint free divisor on $\ovop{M}_{0,6}$} Let $\pi: \ovop{M}_{0,7}\to \ovop{M}_{0,6}$ be the morphism which drops the $7$th marked point. To determine the class of $\pi_*(I^{2,\mathbb{P}^3}_{\{H_1,H_2^6\}, 2})$ on $\ovop{M}_{0,6}$, we intersect $I^{2,\mathbb{P}^3}_{\{H_1,H_2^6\}, 2}$ with the pullback $\pi^*(F)$, where $F$ runs over the set of curves dual to the nonadjacent basis. This is the dual basis given by the boundary curves: \begin{multline} \{F_{1,2,3,456},F_{1,4 ,23 ,56},F_{1, 5,6 ,234}, F_{2, 3, 4, 156}, F_{2, 5, 16, 34}, F_{1, 2, 6, 345},\\ F_{3, 4, 5, 126}, F_{3, 6, 12, 45}, F_{4, 5, 6, 123}, F_{3, 4, 12, 56}, F_{5, 6, 12, 34}, F_{1, 2, 34, 56}, \\ (F_{5, 6, 13, 24} + F_{1, 2, 3, 456}+F_{2, 3, 4, 156} - F_{2, 3, 16, 45}), F_{2, 3, 16, 45}, F_{1, 6, 23, 45}, F_{4, 5, 16, 23}\}.
\end{multline} Because of the symmetry of the Schubert classes used to define $I^{2,\mathbb{P}^3}_{\{H_1,H_2^6\}, 2}$, we can relabel these curves depending on where the first point appears: $$\{A,B, A, C, D, A, C, D, C, D, D, B, (A+C), D, B, D\},$$ where \begin{multicols}{2} \begin{itemize} \item $A=Z\cdot \pi^*(F_{\{p_1\},\{x\},\{x\},\{x, x, x\}})$; \item $B=Z\cdot \pi^*(F_{\{p_1\},\{x\},\{x, x\},\{x, x\}})$; \item $C=Z\cdot \pi^*(F_{\{x\},\{x\},\{x\},\{p_1,x,x\}})$; and \item $D=Z\cdot \pi^*(F_{\{x\}, \{x\}, \{p_1,x\}, \{x,x\}})$. \end{itemize} \end{multicols} For each curve $F$ above, the surface $\pi^*(F)$ has three components, and we calculate the intersection of $Z$ with each of them. We'll check below that $Z \cdot \pi^*(F_{\{p_1\},\{x\},\{x\},\{x,x,x\}})=Z\cdot (Z^A_1+Z^A_2+Z^A_3) = 2+0+2.$ Here $Z^A_1\cong \ovop{M}_{0,5}\times \ovop{M}_{0,3}\times \ovop{M}_{0,3}$, with the $7$th point on the first component, $Z^A_2 \cong \ovop{M}_{0,4}\times \ovop{M}_{0,4}\times \ovop{M}_{0,3}$, with the $7$th point on the second component, and $Z^A_3\cong \ovop{M}_{0,4}\times \ovop{M}_{0,3}\times \ovop{M}_{0,4}$, with the $7$th point on the third component. To compute the intersection, using factorization, we determine what Schubert classes can be used as attaching data. In this example, on the factor isomorphic to $\ovop{M}_{0,5}$, we are given Schubert classes $\{H_1, H_2, H_2, H_2\}$ at four points, and we let $\alpha_1$ be the class at the $5$th {\em{attaching}} point. On the second factor isomorphic to $\ovop{M}_{0,3}$, we have one given class $H_2$ and two attaching classes $\alpha_1^*$ and $\alpha_2$, and on the third factor isomorphic to $\ovop{M}_{0,3}$ there are two given classes, $\{H_2,H_2\}$, and the attaching class $\alpha_2^*$. Since the total degree in $Z$ is $2$, when restricted to the three factors, $Z$ will decompose as a product of $GW$ classes one of which has degree $0$. If the degree zero class is on the factor isomorphic to $\ovop{M}_{0,5}$, the result will give a zero intersection.
There is one choice for a nonzero intersection given by letting $$\alpha_1=H_2, \alpha_1^*=H_1, \alpha_2=H_0, \alpha_2^*=H_3, \mbox{ which gives } Z \cdot Z^A_1 = 2.$$ There is no choice giving a nonzero intersection of $Z$ with $Z^A_2$. Using an argument similar to the first, one can check that the intersection of $Z$ with $Z^A_3$ is $2$. We shall see that $Z\cdot \pi^* B=Z \cdot (Z^B_1+Z^B_2+Z^B_3)=0$. Here again on $Z^B_i$ the 7th point is on the $i$-th component, but this time there are two attaching points on the first factor in each case. For instance, $Z^B_1\cong \ovop{M}_{0,5}\times \ovop{M}_{0,3}\times \ovop{M}_{0,3}$, with the $7$th point on the first component; we are given Schubert classes $\{H_1, H_2, H_2\}$ at three points, and we let $\alpha_1$ and $\alpha_2$ be the classes at the $4$th and $5$th points, which are the points attaching this factor to the other two. The other two factors constrain $\alpha_1=\alpha_2=H_0$, which forces the degree of $Z$ on this surface to be zero. This same phenomenon happens at at least one of the attaching points for each of the cases $Z^B_2$ and $Z^B_3$, and the total degree is $0$. One has that $C=6$, and that $D=0+0+2=2$. $$[A,B, A, C, D, A, C, D, C, D, D, B, (A+C), D, B, D]= 2[2,0, 2, 3, 1, 2, 3, 1, 2, 1, 1, 0, 5, 1, 0, 1].$$ One can check (using a program such as LRS \cite{LRS}, as we did) that this intersects $15$ F-Curves on $\ovop{M}_{0,6}$ in degree zero. The face determined by these is $4$-dimensional, and $\pi_*Z$ can be expressed as a combination of the following extremal rays that span this face: \begin{multicols}{2} \begin{enumerate} \item $[1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 2, 0, 0, 0]$; \item $[0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0]$; \item $[0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]$; \item $[1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 2, 0, 0, 1]$.
\end{enumerate} \end{multicols} The first extremal ray listed is the same as the first ray $R_1$ on Swinarski's list, where he classifies the $\op{S}_6$-equivalence classes of the $3,190$ extremal rays of the nef cone for $\ovop{M}_{0,6}$. The second and third divisors listed generate rays in the class for $R_6$ on his list, and the last represents the class of $R_5$. So while these 4 rays span a 4-dimensional extremal face, they are of three different types on Swinarski's list. \subsubsection{Pushforward of $Z=I^{2,\mathbb{P}^3}_{\{H_1^4,H_3^3\}, 2}$ from $\ovop{M}_{0,7}$ to $\ovop{M}_{0,6}$} Let $\pi: \ovop{M}_{0,7}\to \ovop{M}_{0,6}$ be the morphism which drops the $7$th marked point. To determine the class of $\pi_*(Z)$ on $\ovop{M}_{0,6}$, we intersect $Z$ with the pullback $\pi^*(F)$, where $F$ runs over the set of curves dual to the nonadjacent basis, and because of the symmetry of the Schubert classes used to define $Z$, we only need to keep track of where the 5th and 6th points are.
The class is given by \begin{multline}[A,B,C, A, D, E,E, D, C,B, F, B, (F + 2A- G), G,D, D]=[1,1,2,1, 1, 2,2, 1, 2,1, 1, 1, 3,0,1, 1]\\ =[0,1,1,1, 0, 1,1, 1, 0,1, 0, 0, 1,0,1, 0]+[1,0,1,0, 1, 1,1, 0, 2,0, 1, 1, 2,0,0, 1]=R_5+R_{16}, \end{multline} where for $x \in \{p_1,p_2,p_3,p_4\}$ and $y \in \{p_5,p_6\}$ one has \begin{multicols}{2} \begin{itemize} \item $A=Z\cdot \pi^*F_{\{x\},\{x\},\{x\},\{x,y,y\}}=1$; \item $B=Z\cdot \pi^*F_{\{x\},\{x\} ,\{x,x\},\{y,y\}}=1$; \item $C=Z\cdot \pi^*F_{\{x\}, \{y\},\{y\} ,\{x,x,x\}}=2$; \item $D=Z\cdot \pi^*F_{\{x\}, \{y\}, \{x,y\}, \{x,x\}}=1$; \item $E=Z\cdot \pi^*F_{\{x\},\{x\},\{y\}, \{x,x,y\}}=2$; \item $F=Z\cdot \pi^*F_{\{y\}, \{y\},\{x,x\},\{x,x\}}=1$; \item $G=Z\cdot \pi^*F_{\{x\},\{x\},\{x,y\},\{x,y\}}=0$; \end{itemize} \end{multicols} and by $R_5$ and $R_{16}$ we mean the 5th and 16th rays on the list in Swinarski's enumeration of equivalence classes of extremal rays of the nef cone of $\ovop{M}_{0,6}$ in \cite{Swin}. The divisor $\pi_*Z$ contracts 12 F-Curves on $\ovop{M}_{0,6}$, and there are $23$ extremal rays of the nef cone $\op{Nef}(\ovop{M}_{0,6})$ that also contract those F-Curves. In particular, $\pi_*Z$ lies on the face spanned by the 23 extremal rays. Using LRS \cite{LRS}, one can check that as a cone, this face is $7$-dimensional. In particular, a generic element on this face would be described by an effective combination of 7 divisors. But as one can see above, this divisor is a combination of two extremal rays. \begin{remark}\label{Interesting} While $R_5$ (as well as $R_1$ and $R_6$ from the previous example) is known to be spanned by conformal blocks divisors, there is no known conformal blocks divisor that spans $R_{16}$.
In other words, while Proposition \ref{complet} links conformal blocks divisors to GW divisor classes for $X=\mathbb{P}^r$, there does not appear to be a link between conformal blocks divisors and the divisors one obtains by pushing forward GW classes of higher codimension for $X=\mathbb{P}^r$. \end{remark} \subsubsection{$D^2$} Suppose $\sum m_i=(d+1)(r+1)$. Then $D=I^{1,\mathbb{P}^r}_{\vec{m}, d}$ is a rationally strongly basepoint free divisor, and $D^2$ is a rationally strongly basepoint free codimension two cycle on $\ovop{M}_{0,n}$. Hence by Lemma \ref{SBPFprops}, the pushforward $\pi_*(D^2)$ is a basepoint free divisor class on $\ovop{M}_{0,n-1}$, where $\pi:\ovop{M}_{0,n}\to \ovop{M}_{0,n-1}$ is any of the $n$ point dropping maps. Now the boundary cycles span $A^1(\ovop{M}_{0,n})$ (by \cite{KeelThesis}), and one can use the intersection formulas there to compute $D^2$; the pushforward of $D^2$ can then be computed by the formulas in \cite{AC}. \section{Quadrics and GW divisors} Let $X=Q_r$ be a smooth projective quadric of even dimension $r=2m \ge 4$ or of odd dimension $r=2m+1 \ge 1$, given by a nondegenerate quadratic form on a vector space $V$ of dimension $r+2$ over a field $F$, so $X \subset \mathbb{P}(V)$. Let $H=H_1\in A^1(X)$ be the pullback of the hyperplane class in $A^1(\mathbb{P}(V))$. We let $H_i=H_1^i$ (the $i$-fold cup product in ordinary cohomology) for $i\in[1,r]$ and $H_0=1$. The degree of the canonical bundle of a smooth projective quadric $Q_r$ is $-r$. So for $(X,d,\vec{\alpha})$ to satisfy the codimension $c$ cycle condition we must have that $$\sum_{i}|\alpha_i| = c+ r(d+1).$$ In case $r=2m+1$ is odd: \begin{itemize} \item Let $W$ be a maximal totally isotropic subspace, so $\mathbb{P}(W) \subset X$ with $\dim\mathbb{P}(W)=m$. For any integer $i \in [0,m]$, let $L_i\in A_i(X)$ be the class of an $i$-dimensional subspace of $\mathbb{P}(W)$. Then the total Chow ring of $X$ is free with basis $\{H_i, L_i \ | \ i \in [0,m]\}$.
Note that $H_{m+i}=2L_{m+1-i}$ for $i\in[1,m+1]$, and $H\cdot L_i=L_{i-1}$ for any $i \in [1,m]$. \item As a basis of the rational cohomology we take $$1=H_0, H_1, \ldots, H_r.$$ \end{itemize} In case $r=2m$ is even: \begin{itemize} \item The space of maximal isotropic subspaces of $V$ has two components. Let $W_1, W_2$ be representatives in each component. Then $\mathbb{P}(W_a) \subset X$ for $a=1,2$; let $\xi_1,\xi_2\in A_m(X)$ be their cycle classes. For $i\in[0,m-1]$, let $L_i\in A_i(X)$ be the cycle class of an $i$-dimensional subspace of $\Bbb{P}(W_1)$ (note that we get the same cycle class if $W_1$ is replaced by $W_2$ here). The total Chow ring of $X$ is free with basis $H_0=1$, $H_1,\ldots,H_{m-1}$, $\xi_1$, $\xi_2$, $L_{m-1},\ldots,L_0$. We also have $H\cdot L_i=L_{i-1}$ for any $i \in [1,m-1]$, $H_m=\xi_1+\xi_2$ and $H\cdot\xi_a= L_{m-1}$ for $a=1,2$, so that $H_{m+1}=2L_{m-1}$. \item For even dimensional quadrics, as a basis of the rational cohomology we take $$1=H_0, H_1, \ldots, H_{m-1}, \xi_1, \xi_2, H_{m+1},\ldots, H_r.$$ \end{itemize} In both cases, for $X=Q_r$ (even or odd), $H^{\star j}=H_j$ if $j<r$. If $j=r$, then $H^{\star j}$ equals $H_j$ plus a multiple of $q$ times the identity in cohomology. But a four point number in which one of the terms equals the identity in cohomology is zero. Therefore we may always write $(H_j,y_1,y_2,y_3)_{\beta}=(H^{\star j},y_1,y_2,y_3)_{\beta}$ and apply Proposition \ref{Recursive} to simplify intersection formulas. The cohomology of an even quadric is generated by the hyperplane class except in the middle dimension. But we cannot have a $4$ point number with all four terms in the middle dimension, since the codimensions need to add up to $1$ mod $r$. Therefore $4$ point numbers for even quadrics are computable with these methods. For proofs of these statements and more on quadric hypersurfaces see \cite[Part 3]{EKM}. To compute classes, we determine certain facts about the quantum cohomology of $X=Q_r$.
\begin{lemma}\label{basic} $I^{0,Q_r}_{1, \{H_1,H_r, H_{r-1} \}}=4$. \end{lemma} \begin{lemma}\label{mult} $$H_i \star H_j = \left\{ \begin{matrix} H_{i+j} & \text{if } i+j < r \\ H_r+2qH_0 & \text{if } i+j = r \\ 4qH_{\ell} & \text{if } i+j = r + \ell, \text{ with } i<r \text{ and } j<r\\ 2qH_i & \text{if } i<r, \text{ and } j=r \\ 4q^2 H_0 & \text{if } i= j=r \end{matrix} \right.$$ \end{lemma} \begin{remark}The formulas in Lemmas \ref{basic} and \ref{mult} hold for $r$ both even and odd. Formulas specific to the even case are given in Section \ref{EvenQuadrics}. \end{remark} \begin{proof}(of Lemma \ref{basic}) We need to count lines in the quadric $Q$ which pass through a point $P$ and meet a fixed line $L$ in the quadric. Clearly four times this count gives us the desired answer, since $H_r$ and $H_{r-1}$ are twice the classes of a point and a line, respectively. Consider the projective space spanned by the point $P$ and the fixed line $L$, a $\Bbb{P}^2$. The quadric, restricted to this $\Bbb{P}^2$, splits as a product $Q=LL'$ since it contains $L$. We may assume that $L'$ passes through the point $P$ (as $P\not\in L$), and hence $L'$ is the unique line we are looking for (it certainly meets $L$). \end{proof} \begin{proof} (of Lemma \ref{mult}) For odd quadrics, one can show that $H^{\star i}=H_i$ if $i\leq r-1$, and $H^{\star r}=H\star H_{r-1} =H_r + 2q\cdot 1$, since the dual of $1$ is $\frac{1}{2}H_r$. Also $H^{\star (r+1)}= 2qH + H\star H_r=4qH$, since the dual of $H$ is $\frac{1}{2}H_{r-1}$. For even quadrics, we use the fact that, under the action of the orthogonal group, the space of $(m+1)$-dimensional isotropic subspaces of $\Bbb{C}^{2m+2}$ has two components. The dimension of the intersection of two subspaces in the same connected component is constant modulo two. Since the three point number $\langle H,H_r, H_{r-1}\rangle_1$ is equal to $4$, the multiplication rules for $H_i\star H_j$ are the same.
\end{proof} \subsection{Examples for odd quadrics} In Section \ref{Ex3} we illustrate the use of Proposition \ref{Recursive}, which simplifies 4-point numbers to sums of 3-point numbers. In Section \ref{Extremality} we show how, by using the formulas for the intersections of divisors and curves, one can find GW divisors that are extremal in the nef cone by selecting parameters that guarantee the divisor contracts boundary curves. In Section \ref{ExtremalExamples} we calculate examples of extremal rays of the nef cone $\op{Nef}^1(\ovop{M}_{0,6})$ that come from GW divisors from odd quadrics. \subsubsection{Examples of use of Proposition \ref{Recursive} for odd quadrics}\label{Ex3} Let $X=Q_3$; we evaluate $$M=I^{1,X}_{2, \{L_0,L_0,L_1,L_1\}}=1$$ (as reported in \cite{FP}, page 44, $N_{2,2}=1$). Note that $L_0=\frac{1}{2}H_3= \frac{1}{2}H^{\star 3}-q\cdot 1$, $L_1=\frac{1}{2}H^{\star 2}=\frac{1}{2}H_2$. Therefore, $M=\frac{1}{16}I^{1,X}_{2, \{H_3,H_3,H_2,H_2\}}$. We will show that $I^{1,X}_{2, \{H_3,H_3,H_2,H_2\}}=16$. Using Proposition \ref{Recursive}, and writing $H_2=H\star H$, we get $I^{1,X}_{2, \{H_3,H_3,H_2,H_2\}}=A+B-C$, where \begin{itemize} \item $A=I^{1,X}_{2, \{H_3\star H,H_3,H_2,H\}}=2I^{1,X}_{1, \{H,H_3,H_2,H\}}=2I^{0,X}_{1, \{H_3,H_2,H\}}$. Now $I^{0,X}_{1, \{H_3,H_2,H\}}$ equals the coefficient of $qT_0$ (where $T_0$ denotes the class of a point, so $H_3=2T_0$) in $H_3\star H_2\star H=2qH_2\star H=2q (H_3+2qH_0)=2q(2T_0)+4q^2H_0$. Therefore $A=8$. \item $B=I^{1,X}_{2, \{H_3, H,H_2,H_3\star H\}}= I^{1,X}_{2, \{H_3, H,H_2,2q H\}}=2I^{1,X}_{1, \{H_3, H,H_2,H\}}=2I^{0,X}_{1, \{H_3,H_2,H\}}$. Now $H_3\star H_2\star H=2qH_2\star H=2q (H_3+2qH_0)=2q(2T_0+2qH_0)$. Therefore $B=8$. \item $C=I^{1,X}_{2, \{H_3\star H_3,H_2,H,H\}}=0$. \end{itemize} \subsubsection{Extremality results}\label{Extremality} As the following results show, it is straightforward to design divisors $I^{1,Q_r}_{d, \vec{a}}$, for $Q_r\subset \mathbb{P}^{r+1}$ an odd quadric, that lie on extremal faces of the nef cone.
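The three-point reductions in Section \ref{Ex3} rely only on the multiplication table of Lemma \ref{mult}, and can be checked mechanically. The following Python sketch is our own illustration (the dictionary encoding and helper names are ours, and are not part of any toolchain used for the computations in this paper); it encodes the table for an odd quadric $Q_r$ and reproduces the product $H_3\star H_2\star H=2qH_3+4q^2H_0$ on $Q_3$ used above.

```python
# A minimal sketch (our own encoding) of the quantum multiplication
# table of Lemma mult for an odd quadric Q_r.  A cohomology class is
# represented as a dict {(i, p): c}, meaning  c * q^p * H_i.

def star_basis(i, j, r):
    """H_i star H_j in the small quantum ring of Q_r, per Lemma mult."""
    if j < i:
        i, j = j, i                     # symmetrize so that i <= j
    if i == 0:                          # H_0 is the identity
        return {(j, 0): 1}
    if i + j < r:
        return {(i + j, 0): 1}          # H_{i+j}
    if i + j == r:
        return {(r, 0): 1, (0, 1): 2}   # H_r + 2q H_0
    if i == r and j == r:
        return {(0, 2): 4}              # 4 q^2 H_0
    if j == r:                          # i < r, j = r
        return {(i, 1): 2}              # 2q H_i
    return {(i + j - r, 1): 4}          # i, j < r, i+j = r+l: 4q H_l

def star(x, y, r):
    """Bilinear extension of star_basis to q-polynomial classes."""
    out = {}
    for (i, p), a in x.items():
        for (j, q_pow), b in y.items():
            for (k, s), c in star_basis(i, j, r).items():
                key = (k, p + q_pow + s)
                out[key] = out.get(key, 0) + a * b * c
    return {k: v for k, v in out.items() if v}

H = lambda i: {(i, 0): 1}

# On Q_3:  H_3 * H_2 * H = 2q H_3 + 4 q^2 H_0.
print(star(star(H(3), H(2), 3), H(1), 3))   # {(3, 1): 2, (0, 2): 4}
```

One can likewise confirm $H^{\star 3}=H_3+2q\cdot 1$ on $Q_3$ by starring $H$ with itself three times.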
\begin{proposition}\label{Contract1} Let $Q_r\subset \mathbb{P}^{r+1}$ be an odd quadric, and $I^{1,Q_r}_{d, \vec{a}}$ a GW-divisor. If there is an index $i \in[n]$ such that $a_i=r$ and a subset $J\subset [n]\setminus \{i\}$ such that for all $j\in J$, $1\le a_j \le r$, and $\sum_{j\in J}a_j=r$, then $I^{1,Q_r}_{d, \vec{a}}$ contracts any $F$-curve of the form $F_{I, A, B,C}$, for $I=J\cup \{i\}$, and hence lies on a face of the nef cone. \end{proposition} \begin{proof} Let $r=2m-1$, and suppose first that there are two indices $i$ and $j \in [n]$ with $a_i=a_j =r$. Then the divisor $I^{1,Q_r}_{d, \vec{a}}$ will kill any $F$-Curve of the form $F_{I, A,B,C}$ where $I=\{i,j\}$, since $H_r\star H_r=4qH_0$, and $4qI^{1,Q_r}_{d, \{H_0,\alpha_1,\alpha_2,\alpha_3\}}=0$ for all possible $\alpha_1,\alpha_2,\alpha_3$ under consideration. More generally, if there is an index $i \in [n]$ such that $a_i=r$, and $J\subset [n]\setminus \{i\}$ such that for all $j\in J$, $1\le a_j \le r$, and $\sum_{j\in J}a_j=r$, then the class at the $i$-th point is $H_r$ and the star product of the classes indexed by $J$ is $H_r$, so the star product of the classes indexed by $I=J\cup \{i\}$ is $H_r\star H_r=4qH_0$, and the result follows. \end{proof} \begin{proposition}\label{Contract2} Let $Q_r\subset \mathbb{P}^{r+1}$ be an odd quadric, and $I^{1,Q_r}_{d, \vec{a}}$ a GW-divisor with $d\le 4$. \begin{enumerate} \item[$d=1$] If there are indices $a_1$ and $a_2$ such that $a_1+a_2 >r$, then $I^{1,Q_r}_{1, \vec{a}}$ will kill any $F$-Curve of the form $F_{A,B,C,D}$ where $\{a_1,a_2\} \subset A$. \item[$d=2$] If there are indices $a_1$, $a_2$, $b_1$, and $b_2$ such that $a_1+a_2 >r$ and $b_1+b_2 >r$, then $I^{1,Q_r}_{2, \vec{a}}$ will kill any $F$-Curve of the form $F_{A,B,C,D}$ where $\{a_1,a_2\} \subset A$ and $\{b_1,b_2\}\subset B$.
\item[$d=3$] If there are indices $a_1$, $a_2$, $b_1$, $b_2$, $c_1$, and $c_2$ such that $a_1+a_2 >r$, $b_1+b_2 >r$, and $c_1+c_2 >r$, then $I^{1,Q_r}_{3, \vec{a}}$ will kill any $F$-Curve of the form $F_{A,B,C,D}$ where $\{a_1,a_2\} \subset A$, $\{b_1,b_2\}\subset B$, and $\{c_1,c_2\}\subset C$. \item[$d=4$] If there are indices $a_1$, $a_2$, $b_1$, $b_2$, $c_1$, $c_2$, $d_1$, and $d_2$ such that $a_1+a_2 >r$, $b_1+b_2 >r$, $c_1+c_2 >r$, and $d_1+d_2 >r$, then $I^{1,Q_r}_{4, \vec{a}}$ will kill any $F$-Curve of the form $F_{A,B,C,D}$ where $\{a_1,a_2\} \subset A$, $\{b_1,b_2\}\subset B$, $\{c_1,c_2\}\subset C$, and $\{d_1,d_2\}\subset D$. \end{enumerate} \end{proposition} \begin{proof} Intersections on the legs bring the degree down by one, leaving the spine at degree zero. \end{proof} \subsubsection{Examples of classes from odd quadrics}\label{ExtremalExamples} Here we consider two GW divisors in $\op{Pic}(\ovop{M}_{0,6})$: \begin{enumerate} \item $I^{1,Q_r}_{4, \{1,r,r,r,r,r\}}=16 \ R_{1}$; \item $I^{1,Q_r}_{4, \{i,j,r,r,r,r\}}= 8 R_{10}$, where $1< i\le j$, and $i+j=r+1$. \end{enumerate} To compute the class of $I^{1,Q_r}_{4, \{1,r,r,r,r,r\}}$, we intersect with dual curves, and we'll see that $$\frac{1}{16}G^4_{1,r,r,r,r,r}=\delta_{13}+\delta_{15}+\delta_{24}+\delta_{26}+\delta_{35}+\delta_{46}+2\delta_{135}=R_{1},$$ where $R_{1}$ is the first ray on the list of extremal rays of $\op{Nef}(\ovop{M}_{0,6})$ listed in \cite{Swin}. \bigskip Let $4 I^{1,Q_r}_{2, \{H_1, H_r, H_r, H_r\}} =\alpha$. We have that $I^{0,Q_r}_{d, \{H_a,H_b,H_c\}}$ is $\frac{1}{2}$ times the coefficient of $q^{d}H_{r-c}$ in $H_a\star H_b$, since the cohomology class of the diagonal has the term $\frac{1}{2}H_c\tensor H_{r-c}$.
Using Lemmas \ref{basic} and \ref{mult}, $$4 I^{1,Q_r}_{2, \{H_1, H_r, H_r, H_r\}} =4 \cdot 2 \ I^{0,Q_r}_{2, \{H_r, H_r, H_r\}}=4 \cdot 2 \cdot 2=16.$$ In the following (and in the other examples here), to save space, once we put the classes into the placeholders for points in the F-Curves, we mean that we are intersecting the divisor with the F-Curves. \begin{multicols}{2} \begin{itemize} \item[(13)] $D\cdot F_{1,2,3,456} = F_{1,r,r, r^3} =16$; \item[(14)] $D\cdot F_{1,4 ,23 ,56}= F_{1,r ,r^2 ,r^2}=0$; \item[(15)] $D\cdot F_{1, 5,6 ,234} =F_{1, r,r ,r^3} =16$; \item[(24)] $D\cdot F_{2, 3, 4, 156} = F_{r, r, r, \{1r^2\}} =16$; \item[(25)] $D\cdot F_{2, 5, 16, 34}= F_{r, r, \{1r\}, r^2}= 0 $; \item[(26)] $D \cdot F_{1, 2, 6, 345} =F_{1, r, r, r^3} =16$; \item[(35)] $D \cdot F_{3, 4, 5, 126} = F_{r, r, r, \{1r^2\}} =16$; \item[(36)] $D \cdot F_{3, 6, 12, 45} = F_{r, r, \{1r\}, r^2} =0 $; \item[(46)] $D \cdot F_{4, 5, 6, 123}= F_{r, r, r, \{1r^2\}}=16$; \item[(124)] $D \cdot F_{3, 4, 12, 56} = F_{r, r, \{1r\}, r^2} = 0 $; \item[(125)] $D \cdot F_{5, 6, 12, 34} = F_{r, r, \{1r\}, r^2} =0 $; \item[(134)] $D \cdot F_{1, 2, 34, 56} = F_{1,r ,r^2 ,r^2}= 0$; \item[(136)] $D \cdot F_{2, 3, 16, 45} = F_{r, r, \{1r\}, r^2} = 0 $; \item[(145)] $D \cdot F_{1, 6, 23, 45} = F_{1,r ,r^2 ,r^2}=0$; \item[(146)] $D \cdot F_{4, 5, 16, 23} = F_{r, r, \{1r\}, r^2} = 0 $; \end{itemize} \end{multicols} For the coefficient of $\delta_{135}$: \begin{multline} D \cdot (F_{5, 6, 13, 24} + F_{1, 2, 3, 456}+F_{2, 3, 4, 156} - F_{2, 3, 16, 45}) \\ =F_{r, r, \{1r\}, r^2} + F_{1,r,r, r^3} + F_{r, r, r, \{1r^2\}}-F_{r, r, \{1r\}, r^2} =2(16). \end{multline} The class of $I^{1,Q_r}_{4, \{1,r,r,r,r,r\}}$ is $\op{S}_6$-invariant, even though the choice of Schubert classes defining it is not. There are relations coming from the fact that $\ovop{M}_{0,4}\cong \mathbb{P}^1$, giving the extra symmetry in the class.
This class was identified in \cite{Swin} to be spanned by an $\mathfrak{sl}_2$-conformal blocks divisor at level one. By scaling identities for level one bundles in type A (which have rank one), we know that $R_1$ is proportional to $$c_1(\mathbb{V}(\mathfrak{sl}_{2\ell},\{\omega_{\ell}^6\},1))=\frac{1}{\ell} c_1(\mathbb{V}(\mathfrak{sl}_{2},\{\omega_1^6\},1)) =c_1(\mathbb{V}(\mathfrak{sl}_{2},\{(\ell\omega_1)^6\},\ell)), \ \ \ell \ge 1.$$ There is a curious and certainly tenuous potential relationship between odd quadric GW divisors and conformal blocks divisors for $\mathfrak{sl}_2$: The automorphism group of an odd quadric $Q_{2m+1}$ is $SO_{2(m+1)+1}(\mathbb{C})$. The Langlands dual group to $SO_{2(m+1)+1}(\mathbb{C})$ is $Sp_{2(m+1)}(\mathbb{C})$, which has associated Lie algebra $\mathfrak{s}\mathfrak{p}_{2(m+1)}$. Fakhruddin proved in \cite{Fakh} that the level one type $C_{\ell}$ bundles with four points are the same as the level $\ell$ bundles for $\mathfrak{sl}_2$. So perhaps there is a general identity between the GW divisors for odd quadrics and the CB divisors for $\mathfrak{sl}_2$ at level one. \subsubsection{$G^4_{i,j,r,r,r,r}$, where $1< i\le j$, and $i+j=r+1$} To compute the class, set $4 \ I^{1,Q_r}_{2, \{H_i,H_j,H_r,H_r\}}=\alpha$ and $4 \ I^{0,Q_r}_{2, \{H_r,H_r,H_r\}}=\beta$. Below in Lemma \ref{alphabeta}, we see that $\alpha=\beta=8$, so that $$\frac{1}{8}G^4_{i,j,r,r,r,r}=[1,0,1,1,0,1,1,0,1,0,0,0,3,0,0,1] =R_{10},$$ where $R_{10}$ is the $10$-th ray\footnote{We know $\op{R}_{10}= \rho c_1(\mathbb{V}(\mathfrak{sl}_{r+1},\{\omega_1^2, \omega_i, \omega_{r+1-i}, \omega_r^2\},1))$, for $1<i\le \frac{(r+1)}{2}$, and some positive rational $\rho$.} on the list of extremal rays of the nef cone of $\ovop{M}_{0,6}$ in \cite{Swin}.
\begin{multicols}{2} \begin{enumerate} \item[(13)] $D\cdot F_{1,2,3,456} = F_{i,j,r,r^3} = \alpha$; \item[(14)] $D\cdot F_{1,4 ,23 ,56}= F_{i,r ,\{jr\} ,r^2}=0$; \item[(15)] $D\cdot F_{1, 5,6 ,234} = F_{i, r,r ,\{jr^2\}} =\alpha$; \item[(24)] $D\cdot F_{2, 3, 4, 156} = F_{j, r, r, \{ir^2\}} =\alpha$; \item[(25)] $D\cdot F_{2, 5, 16, 34}= F_{j, r, \{ir\}, r^2}=0$; \item[(26)] $D \cdot F_{1, 2, 6, 345} = F_{i, j, r, r^3}=\alpha $; \item[(35)] $D \cdot F_{3, 4, 5, 126} = F_{r, r, r, \{ijr\}} =\beta$; \item[(36)] $D \cdot F_{3, 6, 12, 45} = F_{r, r, \{ij\}, r^2} =0$; \item[(46)] $D \cdot F_{4, 5, 6, 123}= F_{r, r, r, \{ijr\}}=\beta$; \item[(124)] $D \cdot F_{3, 4, 12, 56} = F_{r, r, \{ij\}, r^2} =0$; \item[(125)] $D \cdot F_{5, 6, 12, 34} = F_{r, r, \{ij\}, r^2} =0$; \item[(134)] $D \cdot F_{1, 2, 34, 56} = F_{i, j, r^2, r^2} =0$; \item[(136)] $D \cdot F_{2, 3, 16, 45} = F_{j, r, \{ir\}, r^2} = 0$; \item[(145)] $D \cdot F_{1, 6, 23, 45} = F_{i, r, \{jr\}, r^2} = 0$; \item[(146)]$D \cdot F_{4, 5, 16, 23} = F_{r, r, \{ir\}, \{jr\}} =\alpha$. \end{enumerate} \end{multicols} and the coefficient of $\delta_{135}$ is given by \begin{multline} D \cdot (F_{5, 6, 13, 24} + F_{1, 2, 3, 456}+F_{2, 3, 4, 156} - F_{2, 3, 16, 45}) \\ =(F_{r, r, \{ir\}, \{jr\}} + F_{i, j, r, r^3}+F_{j, r, r, \{ir^2\}} - F_{j, r, \{ir\}, r^2}) \\ =4(H_r,H_r,H_i,H_j)_2+4I^{1,Q_r}_{2, \{H_i,H_j,H_r,H_r\}}+4(H_j,H_r,H_r,H_i)_2-8(H_j,H_r,H_i,H_0)_1=3\alpha. \end{multline} \begin{lemma}\label{alphabeta} $\alpha=\beta=8$. \end{lemma} \begin{proof} To get $\beta$, we compute $I^{0,Q_r}_{2, \{H_r,H_r,H_r\}}=\frac{1}{2}$ times the coefficient of $q^{2}H_{0}$ in $H_r\star H_r=4q^2H_0$. So $I^{0,Q_r}_{2, \{H_r,H_r,H_r\}}=2$, and $\beta=8$. To get $\alpha$, we compute $I^{1,Q_r}_{2, \{H_i,H_j,H_r,H_r\}}$ using Proposition \ref{Recursive}.
We write $I^{1,Q_r}_{2, \{H_i,H_j,H_r,H_r\}}$ as \begin{multline} =2\left(I^{0,Q_r}_{2, \{H_r,H_i,H_r \star H_j\}}- I^{0,Q_r}_{2, \{H_i,H_{j-1}, H_r \star H_r\}} \right)+I^{1,Q_r}_{2, \{H_r,H_i,H_{j-1},H_r\star H_1\}}\\ =2 \cdot 2 I^{0,Q_r}_{1, \{H_r,H_i,H_j\}}-0+2I^{1,Q_r}_{1, \{H_r,H_i,H_{j-1}, H_1\}}\\ =2 \cdot 2 I^{0,Q_r}_{1, \{H_r,H_i,H_j\}}+2I^{0,Q_r}_{1, \{H_r,H_i,H_{j-1}\}}. \end{multline} The first summand is zero since $I^{0,Q_r}_{1, \{H_r,H_i,H_j\}}$ is $\frac{1}{2}$ the coefficient of $q^1H_0$ in $H_i\star H_j=4qH_1$. The second summand is 2 since $I^{0,Q_r}_{1, \{H_r,H_i,H_{j-1}\}}$ is $\frac{1}{2}$ the coefficient of $q^1H_0$ in $H_i\star H_{j-1}=H_r+2qH_0$. \end{proof} Note that this divisor satisfies Patterns I and II from \cite{Swin}. Patterns I and II describe intersection behavior shared by divisors lying on the remaining extremal rays that Swinarski could not identify as being spanned by conformal blocks divisors for $\mathfrak{sl}_2$. In \cite{Swin}, the ray $R_{10}$ was identified as being spanned by a level 2 CB divisor for $\mathfrak{sl}_6$. \subsection{Examples for even quadrics}\label{EvenQuadrics} Because of the cohomology classes in the middle dimension, the classes for the even quadrics $X=Q_{2m}$ can be different, depending on whether $m$ is even or odd. Moreover, when $m=2$ and $m=3$, differences in the symmetry cause the classes to behave differently than in the general case. To compute classes, the following facts are used. \begin{lemma} \begin{enumerate} \item $H\star \xi_1 =H\star\xi_2 = \frac{1}{2}H_{m+1}$ (for degree reasons there are no $q$ terms). \item $H_m\star H_m=H_{2m} +2q\cdot H_0$. \item If $m$ is odd, then $\langle \xi_1,\xi_2, [pt]\rangle_1=0$, and hence $\langle \xi_1,\xi_1, [pt]\rangle_1=1$. Therefore $\xi_1\star \xi_2=[pt]$ and $\xi_1\star\xi_1=\xi_2\star\xi_2=q\cdot 1$. \item If $m$ is even, then $\langle \xi_1,\xi_1, [pt]\rangle_1=0$, and hence $\langle \xi_1,\xi_2, [pt]\rangle_1=1$.
Therefore $\xi_1\star \xi_2=q\cdot 1$ and $\xi_1\star\xi_1=\xi_2\star\xi_2=[pt]$. \end{enumerate} \end{lemma} \begin{proof} We need to compute $\xi_1\star\xi_2$ and $\xi_1\star\xi_1= \xi_2\star \xi_2.$ But $H_m\star H_m=(\xi_1+\xi_2)\star (\xi_1+\xi_2) = 2\xi_1\star\xi_2 + (\xi_1\star \xi_1 +\xi_2\star\xi_2)$. Therefore the $q\cdot 1$ terms in $\xi_1\star\xi_2$ and $\xi_1\star\xi_1$ add to $1$, so one of them is one and the other $0$. In the second case (the first is similar), pick linear spaces $M$ and $M'$ in the quadric $Q_r$ in general position and with cohomology class $\xi_1$. We get linear spaces $M,M'\subseteq \Bbb{C}^{2m+2}$ of dimension $m+1$ each. The dimension of the intersection of $M$ and $M'$ is congruent modulo two to $m+1$, an odd number. Therefore we may assume $M\cap M'$ is one dimensional. Now we want to count lines $L$ in the quadric through $M$, $M'$ and a general point $A$ in $Q_r$. Consider the span of $A$ and $M$, giving us a $P=\Bbb{P}^{m+1}$ in $\Bbb{P}^{2m+1}$. The quadric restricted to $P$ equals $MT$, where $T$ is a hyperplane in $P$ which contains $A$. Now $M'\cap P$ is entirely contained in $M$, and we may assume that it does not intersect $T\cap M$. The line $L$ has to stay in $T$, and pass through $M'\cap P$, which does not intersect $T$. This is not possible. \end{proof} \subsubsection{A GW divisor on an extremal face of the nef cone spanned by conformal blocks divisors} To compute the class of $I^{1,Q_6}_{2, \{H_1,H_6,\xi_1,\xi_1,\xi_1,\xi_1\}}$ in the nonadjacent basis, we intersect with the dual curves to see that \begin{multline} I^{1,Q_6}_{2, \{H_1,H_6,\xi_1,\xi_1,\xi_1,\xi_1\}}=[2, 0, 2, 2 , 0, 2, 4, 0, 4, 0, 0, 0, 6, 0, 0, 3]\\ =[1, 0 ,1 , 1 , 0 , 1 , 1 , 0 , 1 , 0 , 0 , 0 , 2 , 0, 0, 0] + [1 ,0 , 1 , 1 , 0 , 1 , 1 , 0 , 1 , 0 , 0 , 0 , 2 , 0 , 0 , 1] \\ + [0 , 0 , 0 , 0 , 0 , 0 , 2 , 0 , 2 , 0 , 0 , 0 , 2 , 0 , 0 , 2 ] =R_{1} + R_{5} +2 R_{6}.
\end{multline} When intersecting $D$ with the dual F-Curves, we get: \begin{multicols}{3} \begin{itemize} \item $F_{H_1,H_6,\xi_1,\{\xi_1,\xi_1,\xi_1\}} = 2$; \item $F_{H_1,\xi_1 ,\{H_6,\xi_1\} ,\{\xi_1,\xi_1\}}= 0 $; \item $F_{H_1, \xi_1,\xi_1 ,\{H_6,\xi_1,\xi_1\}} = 2$; \item $F_{H_6, \xi_1, \xi_1, \{H_1,\xi_1,\xi_1\}} =2$; \item $F_{H_6, \xi_1, \{H_1,\xi_1\}, \{\xi_1,\xi_1\}}= 0$; \item $F_{H_1, H_6, \xi_1, \{\xi_1,\xi_1,\xi_1\}} = 2$; \item $F_{\xi_1, \xi_1, \xi_1, \{H_1,H_6,\xi_1\}} =4$; \item $F_{\xi_1, \xi_1, \{H_1,H_6\}, \{\xi_1,\xi_1\}} = 0$; \item $F_{\xi_1, \xi_1, \xi_1, \{H_1,H_6,\xi_1\}}=4$; \item $F_{\xi_1, \xi_1, \{H_1,H_6\}, \{\xi_1,\xi_1\}} = 0$; \item $F_{\xi_1, \xi_1, \{H_1,H_6\}, \{\xi_1,\xi_1\}} = 0 $; \item $F_{H_1, H_6, \{\xi_1,\xi_1\}, \{\xi_1,\xi_1\}} = 0 $; \item $F_{H_6, \xi_1, \{H_1,\xi_1\}, \{\xi_1,\xi_1\}} = 0$; \item $F_{H_1, \xi_1, \{H_6,\xi_1\}, \{\xi_1,\xi_1\}} = 0 $; \item $F_{\xi_1, \xi_1, \{H_1,\xi_1\}, \{H_6,\xi_1\}} =3$; \end{itemize} \end{multicols} For the coefficient of $\delta_{135}$, we have \begin{multline}(F_{\xi_1, \xi_1, \{H_1,\xi_1\}, \{H_6,\xi_1\}} + F_{H_1, H_6, \xi_1, \{\xi_1,\xi_1,\xi_1\}}+F_{H_6, \xi_1, \xi_1, \{H_1,\xi_1,\xi_1\}} - F_{H_6, \xi_1,\{H_1,\xi_1\},\{ \xi_1,\xi_1\}})\\ =\frac{1}{2}I^{1,Q_6}_{1,\{H_3, H_4,\xi_1,\xi_1\}}+2+2-0=6. \end{multline} We note that $R_2$, $R_5$, and $R_6$ can all be expressed in terms of conformal blocks divisors. Namely, for positive rational numbers $\rho_2$, $\rho_5$, and $\rho_6$, \begin{itemize} \item[$\op{R}_2$] $=\rho_2 \ c_1(\mathbb{V}(\mathfrak{sl}_{r+1},\{\omega_1^2, \omega_i, \omega_{r+1-i}, \omega_{r}^2\},2))$, with $1 \le i \le \frac{(r+1)}{2}$; \item[$\op{R}_5$] $=\rho_5 \ c_1(\mathbb{V}(\mathfrak{sl}_{r+1}, \{\omega_i^3, \omega_{r+1-i}^3 \},1))$, with $r \ge 2$, and $i< \frac{r+1}{2}$; \item[$\op{R}_6$] $=\rho_6 \ c_1(\mathbb{V}(\mathfrak{sl}_{r+1},\{ \ell \omega_1, m\omega_1, \ell \omega_r, m \omega_r, 0, 0\},\ell))$, with $r\ge 1$.
\end{itemize} In particular, this means that $I^{1,Q_6}_{2, \{H_1,H_6,\xi_1,\xi_1,\xi_1,\xi_1\}}$ is on an extremal face of the nef cone spanned by conformal blocks divisors. \subsubsection{A GW class on an extremal face of the nef cone}\label{NEWfinally} One can show that in the standard nonadjacent basis, for $X=Q_{4}$, \begin{multline} I^{1,X}_{2, \{H_1,\xi_1,\xi_1,\xi_1,\xi_2,H_4\}}= [0, 1, 1, 0, 2, 0, 2, 0, 2, 1, 2, 1, 2, 0, 0, 2]\\ = [0 , 1, 0 , 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1 ]+[0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0 , 1 , 0, 0, 1] =R_{20}+R_{3}. \end{multline} The ray $R_{3}$ is known to be spanned by a conformal blocks divisor\footnote{$\op{R}_3 = \rho \ c_1(\mathbb{V}(\mathfrak{sl}_{r+1},\{ \omega_1^3, \ell\omega_1,\omega_{r-2}, \ell \omega_r\},\ell))$, where $r\ge 3$, $\ell \ge 1$, for some positive rational $\rho$.}, but $R_{20}$ is not known to be so. In particular, we know of no description of any element on the interior of the extremal face spanned by $R_{3}$ and $R_{20}$ as being given by conformal blocks divisors. \section{Two related questions}\label{Questions} \subsection{Cycles from Gromov-Witten theory of Blow-ups}\label{QuestionsGWBlowUps} Let $X$ be a convex variety (e.g., a homogeneous projective variety) of dimension $m$, and $\pi:\widetilde{X}\to X$ the blow up of $X$ at a point $P\in X$. There is a natural inclusion $\pi^*:A^*(X)\to A^*(\widetilde{X})$ via pull-back of cycles. Note that $\pi_*\circ \pi^*$ is the identity on $A^*(X)$ (here $\pi_*:A^*(\widetilde{X})\to A^*(X)$ is the natural pushforward map on cycles).
It follows from \cite[Lemma 2.2]{Gath} that if $\vec{\alpha}$ is an $n$-tuple of effective cycles in $A^*(X)$ and $\beta\in A_1(X)$ such that the codimension $c$ cycle condition is satisfied by the triple $(X,\beta,\vec{\alpha})$, then \begin{multline}\label{gathmann} I^{c,X}_{\beta,\{\alpha_1,\ldots,\alpha_n\}} = I^X_{0,n,\beta}(\alpha_1\tensor\alpha_2\tensor\dots\tensor\alpha_n)\\ =I^{\widetilde{X}}_{0,n,\pi^*\beta}(\pi^*\alpha_1\tensor \pi^*\alpha_2\tensor\dots\tensor \pi^*\alpha_n)=I^{c,\widetilde{X}}_{\pi^*\beta,\{\pi^*\alpha_1,\ldots,\pi^*\alpha_n\}}. \end{multline} Now let $X=\Bbb{P}^r$. The cohomology of $X$ is generated by cycle classes of linear subspaces $L_d\subset \Bbb{P}^r$ of codimension $d$; all such linear spaces have the same cycle class, denoted $[L_d]$. Now choose one such linear subspace $L_d\subset \Bbb{P}^r$ of codimension $d$ which passes through $P$. Then $\pi^*[L_d]=L_d'+T_d$, where $L_d'\subset\widetilde{X}$ is the strict transform of $L_d$, and $T_d$ is the class of a linear subspace of dimension $\dim(L_d)=r-d$ in the exceptional divisor on $\widetilde{X}$ (one can identify the exceptional divisor with $\Bbb{P}^{r-1}$). Therefore, if the $\alpha_i$ are cycle classes of positive-dimensional subspaces $L_{a_i}$ in $\Bbb{P}^r$ of codimension $a_i$ (so no point classes), then one can rewrite Equation \eqref{gathmann} as follows, \begin{multline}\label{qtty} I^{c,X}_{\beta,\{\alpha_1,\ldots,\alpha_n\}} = I^X_{0,n,\beta}(\otimes_{i=1}^n\alpha_i)=I^{\widetilde{X}}_{0,n,\pi^*\beta}(\otimes_{i=1}^n\pi^*\alpha_i)\\ =I^{\widetilde{X}}_{0,n,\pi^*\beta}(\otimes_{i=1}^n (L'_{a_i}+T_{a_i})) = \sum_{S \subset \{1,\ldots,n\}} I^{\widetilde{X}}_{0,n,\pi^*\beta}\bigl(\left(\otimes_{i\in S} L_{a_i}' \right) \otimes \left( \otimes_{i\in S^c} T_{a_i}\right)\bigr). \end{multline} We have therefore decomposed the Gromov-Witten classes into a sum of (possibly non-effective) cycles on $\ovop{M}_{0,n}$ by expanding the above quantity \eqref{qtty}.
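The indexing of the summands in \eqref{qtty} can be sketched as a purely formal bookkeeping exercise. The following Python sketch is our own illustration (the string labels stand in for the classes $L'_{a_i}$ and $T_{a_i}$ and carry no geometric content): it lists one summand per subset $S\subseteq\{1,\ldots,n\}$, with the strict-transform label at the indices in $S$ and the exceptional-class label at the complement.

```python
# Formal bookkeeping (our own sketch) for the summands of Equation (qtty):
# one term per subset S of {1,...,n}, with the strict-transform label L'
# at the indices in S and the exceptional-class label T at the complement.
from itertools import chain, combinations

def qtty_summands(n):
    indices = range(1, n + 1)
    subsets = chain.from_iterable(
        combinations(indices, k) for k in range(n + 1))
    for S in map(set, subsets):
        yield [("L'" if i in S else "T") + "_a%d" % i for i in indices]

terms = list(qtty_summands(3))
print(len(terms))   # 8 = 2^3 summands
print(terms[0])     # ['T_a1', 'T_a2', 'T_a3']  (S empty)
```

For $n$ marked points the expansion has $2^n$ summands; the term with $S=\{1,\ldots,n\}$ recovers the product of the strict transforms.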
\subsection{Fedorchuk's divisors} For $\sum_i a_i =(r+1)(d+1)$, recall that we have shown \begin{equation}\label{Fdiv} I^{1,\mathbb{P}^r}_{d, \vec{a}} \equiv c_1(\mathbb{V}(\operatorname{sl}_{r+1}, \{\omega_{a_1},\ldots, \omega_{a_n}\},1)). \end{equation} Assuming that none of the $a_i$ are zero, we consider the divisor $$\Bbb{D}'=2c_1(\mathbb{V}(\operatorname{sl}_{r+1}, \{\omega_{a_1},\ldots, \omega_{a_n}\},1))-\sum_{r+1\mid \sum_{i\in I} a_i}\Delta_{I,J},$$ which Fedorchuk \cite[Equation (7.0.17)]{Fed} has proved is nef and an effective sum of boundary classes. However, $\Bbb{D}'$ is not known to be semi-ample (i.e., it is not known that some multiple is basepoint free). Using the scaling identity \cite[Proposition 1.3]{GG} $$2c_1(\mathbb{V}(\operatorname{sl}_{r+1}, \{\omega_{a_1},\ldots, \omega_{a_n}\},1))=c_1(\mathbb{V}(\operatorname{sl}_{2r+2}, \{\omega_{2a_1},\ldots, \omega_{2a_n}\},1)),$$ we can rewrite the expression involving Fedorchuk's divisor (with $\vec{m}=2\vec{a}$) as $$I^{1,\mathbb{P}^{2r+1}}_{d, \vec{m}}= \Bbb{D}'+ \sum_{r+1\mid \sum_{i\in S} a_i}\Delta_{S,S^c}.$$ Can $\Bbb{D}'$ be characterized by the Gromov-Witten theory of blow-ups? For example, is $\Bbb{D}'$ equivalent to some combination of terms in the following natural decomposition (using \eqref{qtty} for $X=\Bbb{P}^{2r+1}$)? \begin{equation}\label{Fed} I^{1,\mathbb{P}^{2r+1}}_{d, \vec{m}}=\sum_{S \subsetneq \{1,\ldots,n\}} I^{\widetilde{X}}_{0,n,\pi^*\beta} \bigl(\left(\otimes_{i\in S} L_{m_i}' \right)\otimes \left(\otimes_{i\in S^c} T_{m_i}\right)\bigr). \end{equation} \subsection{Divisors from the Gromov-Witten theory of pairs}\label{QuestionsGWPairs} Consider the case of a projective space $X=\Bbb{P}^r$ and a hyperplane $H$ in $X$. Let $s>1$ and $\alpha=(\alpha_1,\dots,\alpha_s)$ be an $s$-tuple of positive integers such that $\sum_{i=1}^s \alpha_i=d$.
Define the space $\overline{M}_{0,n,s}(X, d\mid \alpha)=\overline{M}_{0,n,s}(H/X, d\mid \alpha)$ to be the closure in $\overline{M}_{0,n+s}(X,d)$ of the set of {irreducible} stable maps $(C,x_1,\dots,x_n,y_1,\dots,y_s,f)$ of degree $d$ to $X$ with $f(C)\not\subset H$ such that the divisor $f^*H$ on $C\cong \Bbb{P}^1$ is equal to $\sum_i \alpha_i y_i$ (equality of cycles, not just linear equivalence). This implies $f(y_i)\in H$ (since the $\alpha_i$ are assumed to be positive). Vakil \cite{V} has shown that each irreducible component of $\overline{M}_{0,n,s}(X, d\mid \alpha)$ has the expected dimension, which is equal to $\dim \overline{M}_{0,n}(X,d)-\sum_{i=1}^s (\alpha_i-1).$ Let $\gamma_1,\dots,\gamma_n\in A^*(X)$ and $\mu_1,\dots,\mu_s\in A^*(H)$, and set $\sum_j\operatorname{codim}\gamma_j+\sum_i \operatorname{codim}\mu_i=\tau$. Then one can form the cycle $$(\operatorname{ev}^*_{x_1}\gamma_1\dots \operatorname{ev}^*_{x_n}\gamma_n)\cdot (\operatorname{ev}^*_{y_1}\mu_1\dots \operatorname{ev}^*_{y_s}\mu_s)\cap [\overline{M}_{0,n,s}(X,d\mid \alpha)]\in A_*(\overline{M}_{0,n,s}(X,d\mid \alpha)),$$ which has homological degree $\dim \overline{M}_{0,n,s}(X,d\mid \alpha)-\tau$, whose pushforward to $\overline{M}_{0,n+s}$ has the same degree, and which is a class of codimension $c$ if $\dim \overline{M}_{0,n,s}(X,d\mid \alpha)-\tau=\dim \overline{M}_{0,n+s}-c$, which simplifies to $$d(r+1)+r+c= \sum_i\alpha_i +\sum_j \operatorname{codim}\gamma_j +\sum_i \operatorname{codim}\mu_i.$$ Let $I^{c,H/X}_{d,\alpha}(\gamma_1\tensor\dots\tensor \gamma_n\mid\mu_1\tensor\dots\tensor \mu_s)\in A^c (\overline{M}_{0,n+s})$ denote the push-forward cycle. It is easy to see that this class is effective (by Kleiman's theorem). However, it is not clear that $I^{c,H/X}_{d,\alpha}(\gamma_1\tensor\dots\tensor \gamma_n\mid\mu_1\tensor\dots\tensor \mu_s)$ is basepoint free. To prove this using our methods, one would need to know the dimension of the fibers of $\overline{M}_{0,n,s}(H/X, d\mid \alpha)\to\overline{M}_{0,n+s}$, or to show that this map is flat.
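As a sanity check on the codimension condition, the following small instance (our own computation, using only the formulas above) produces a divisor class.

```latex
% Illustration (ours): $X=\Bbb{P}^2$, $H$ a line, $d=2$, $s=2$, $\alpha=(1,1)$, $n=3$.
% The condition $d(r+1)+r+c=\sum_i\alpha_i+\tau$ reads $6+2+c=2+\tau$, i.e.\ $\tau=6+c$.
% Taking $\gamma_1,\gamma_2,\gamma_3$ to be point classes ($\operatorname{codim}=2$ each),
% $\mu_1$ a point of $H$ ($\operatorname{codim}_H=1$) and $\mu_2=[H]$ gives $\tau=7$, so $c=1$:
\[
I^{1,H/X}_{2,(1,1)}(\gamma_1\tensor\gamma_2\tensor\gamma_3\mid\mu_1\tensor\mu_2)
  \;\in\; A^1(\overline{M}_{0,5}),
\]
% an effective divisor class on the two-dimensional space $\overline{M}_{0,5}$,
% coming from conics through three general points meeting $H$ at a fixed point.
```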
\subsubsection*{Acknowledgements} We thank Han-Bom Moon for his remarks on a draft of this paper. Gibney is supported by NSF. \begin{bibdiv} \begin{biblist} \bib{ags}{article}{ author={Alexeev, V.}, author={Gibney, A.}, author={Swinarski, D.}, title={Higher-level $\germ{sl}_2$ conformal blocks divisors on $\overline M_{0,n}$}, journal={Proc. Edinb. Math. Soc. (2)}, volume={57}, date={2014}, number={1}, pages={7--30}, } \bib{AS}{article}{ author={Alexeev, V.}, author={Swinarski, D.}, title={Nef divisors on $\overline M_{0,n}$ from GIT}, language={English, with English and Russian summaries}, conference={ title={Geometry and arithmetic}, }, book={ series={EMS Ser. Congr. Rep.}, publisher={Eur. Math. Soc., Z\"urich}, }, date={2012}, pages={1--21}, } \bib{AC}{article}{ author={Arbarello, E.}, author={Cornalba, M.}, title={Combinatorial and algebro-geometric cohomology classes on the moduli spaces of curves}, journal={J. Algebraic Geom.}, volume={5}, date={1996}, number={4}, pages={705--749}, } \bib{LRS}{article}{ author={Avis, D.}, title={\texttt{\upshape lrslib}: a self-contained ANSI C implementation of the reverse search algorithm for vertex enumeration/convex hull problems}, date={2018}, note={Version 6/2}, } \bib{BF}{article} { AUTHOR = {Behrend, K.}, AUTHOR = {Fantechi, B.}, TITLE = {The intrinsic normal cone}, JOURNAL = {Invent. Math.}, FJOURNAL = {Inventiones Mathematicae}, VOLUME = {128}, YEAR = {1997}, NUMBER = {1}, PAGES = {45--88}, } \bib{B}{article}{ author={Belkale, P.}, title={Extremal rays in the Hermitian eigenvalue problem}, note ={arXiv:1705.10580, Math.
Ann., to appear}, year = {2017}, } \bib{BK}{article}{ author={Belkale, P.}, author={Kiers, J.}, title={Extremal rays in the Hermitian eigenvalue problem for arbitrary types}, note ={arXiv:1803.03350}, year = {2018}, } \bib{Carr}{article}{ author={Carr, S.}, title={A polygonal presentation of $\op{Pic}(\ovmc{M}_{0,n})$}, note ={arXiv:0911.2649 [math.AG]}, year = {2009}, } \bib{CK}{book} { AUTHOR = {Cox, D.A.}, AUTHOR = {Katz, S.}, TITLE = {Mirror symmetry and algebraic geometry}, SERIES = {Mathematical Surveys and Monographs}, VOLUME = {68}, PUBLISHER = {American Mathematical Society, Providence, RI}, YEAR = {1999}, PAGES = {xxii+469}, } \bib{EKM}{book}{ author={Elman, R.}, author={Karpenko, N.}, author={Merkurjev, A.}, title={The algebraic and geometric theory of quadratic forms}, series={AMS Colloquium Publications}, volume={56}, publisher={AMS, Providence, RI}, date={2008}, pages={viii+435}, } \bib{Fakh}{article}{ author={Fakhruddin, N.}, title={Chern classes of conformal blocks}, conference={ title={Compact moduli spaces and vector bundles}, }, book={ series={Contemp. Math.}, volume={564}, publisher={Amer. Math. Soc.}, place={Providence, RI}, }, date={2012}, pages={145--176}, } \bib{Fed}{article}{ author={Fedorchuk, M.}, title={Semiampleness criteria for divisors on $\ovmc{M}_{0,n}$}, note ={arXiv:1407.7839}, year = {2014}, } \bib{FedCyclic}{article}{ author={Fedorchuk, M.}, title={Cyclic Covering Morphisms on $\bar {M}_{0,n}$}, date={2011}, eprint={http://arxiv.org/abs/1105.0655}, } \bib{FL}{article}{ AUTHOR = {Fulger, M.}, AUTHOR = {Lehmann, B.}, TITLE = {Positive cones of dual cycle classes}, JOURNAL = {Alg. Geom.}, FJOURNAL = {Algebraic Geometry}, VOLUME = {4}, YEAR = {2017}, NUMBER = {1}, PAGES = {1--28}, ISSN = {2214-2584}, } \bib{FP}{incollection} { AUTHOR = {Fulton, W.}, AUTHOR = {Pandharipande, R.}, TITLE = {Notes on stable maps and quantum cohomology}, BOOKTITLE = {Algebraic geometry---{S}anta {C}ruz 1995}, SERIES = {Proc. Sympos.
Pure Math.}, VOLUME = {62}, PAGES = {45--96}, PUBLISHER = {Amer. Math. Soc., Providence, RI}, YEAR = {1997}, } \bib{Fulton}{book} { AUTHOR = {Fulton, W.}, TITLE = {Intersection theory}, SERIES = {Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]}, VOLUME = {2}, EDITION = {Second}, PUBLISHER = {Springer-Verlag, Berlin}, YEAR = {1998}, PAGES = {xiv+470}, } \bib{Gath}{article} { AUTHOR = {Gathmann, A.}, TITLE = {Gromov-{W}itten invariants of blow-ups}, JOURNAL = {J. Alg. Geom.}, FJOURNAL = {Journal of Algebraic Geometry}, VOLUME = {10}, YEAR = {2001}, NUMBER = {3}, PAGES = {399--432}, } \bib{GiansiracusaSimpson}{article}{ author={Giansiracusa, N.}, author={Simpson, M.}, title={GIT compactifications of $\scr M_{0,n}$ from conics}, journal={Int. Math. Res. Not. IMRN}, date={2011}, number={14}, pages={3315--3334}, } \bib{G}{article} { AUTHOR = {Giansiracusa, N.}, TITLE = {Conformal blocks and rational normal curves}, JOURNAL = {J. Alg. Geom.}, FJOURNAL = {Journal of Algebraic Geometry}, VOLUME = {22}, YEAR = {2013}, NUMBER = {4}, PAGES = {773--793}, } \bib{GG}{article} { AUTHOR = {Giansiracusa, N.}, AUTHOR = {Gibney, A.}, TITLE = {The cone of type {$A$}, level 1, conformal blocks divisors}, JOURNAL = {Adv. Math.}, FJOURNAL = {Advances in Mathematics}, VOLUME = {231}, YEAR = {2012}, NUMBER = {2}, PAGES = {798--814}, } \bib{GJM}{article}{ author={Giansiracusa, N.}, author={Jensen, D.}, author={Moon, H-B.}, title={GIT compactifications of $M_{0,n}$ and flips}, journal={Adv.
Math.}, volume={248}, date={2013}, pages={242--278}, } \bib{gjms}{article}{ author={Gibney, A.}, author={Jensen, D.}, author={Moon, H-B.}, author={Swinarski, D.}, title={Veronese quotient models of $\overline{\rm M}_{0,n}$ and conformal blocks}, journal={Michigan Math. J.}, volume={62}, date={2013}, number={4}, pages={721--751}, } \bib{GP}{article} { AUTHOR = {Graber, T.}, AUTHOR = {Pandharipande, R.}, TITLE = {Localization of virtual classes}, JOURNAL = {Invent. Math.}, FJOURNAL = {Inventiones Mathematicae}, VOLUME = {135}, YEAR = {1999}, NUMBER = {2}, PAGES = {487--518}, } \bib{KapVer}{article}{ author={Kapranov, M. M.}, title={Veronese curves and Grothendieck-Knudsen moduli space $\overline M_{0,n}$}, journal={J. Algebraic Geom.}, volume={2}, date={1993}, number={2}, pages={239--262}, } \bib{KapChow}{article}{ author={Kapranov, M. M.}, title={Chow quotients of Grassmannians. I}, conference={ title={I. M. Gel\cprime fand Seminar}, }, book={ series={Adv. Soviet Math.}, volume={16}, publisher={Amer. Math. Soc., Providence, RI}, }, date={1993}, pages={29--110}, } \bib{KeelThesis}{article}{ author={Keel, S.}, title={Intersection theory of moduli space of stable $n$-pointed curves of genus zero}, journal={Trans. Amer. Math. Soc.}, volume={330}, date={1992}, number={2}, pages={545--574}, } \bib{Kleiman}{article} { AUTHOR = {Kleiman, S. L.}, TITLE = {The transversality of a general translate}, JOURNAL = {Comp. Math.}, FJOURNAL = {Compositio Mathematica}, VOLUME = {28}, YEAR = {1974}, PAGES = {287--297}, } \bib{KV}{book} { AUTHOR = {Kock, J.}, AUTHOR = {Vainsencher, I.}, TITLE = {A f\'ormula de {K}ontsevich para curvas racionais planas}, SERIES = {22$^{\rm o}$ Col\'oquio Brasileiro de Matem\'atica.}, PUBLISHER = {Inst de Mat.
Pura e Aplicada (IMPA), Rio de Janeiro}, YEAR = {1999}, PAGES = {xiv+113}, } \bib{KM}{article} { AUTHOR = {Kontsevich, M.}, AUTHOR = {Manin, Yu.}, TITLE = {Gromov-{W}itten classes, quantum cohomology, and enumerative geometry}, JOURNAL = {Comm. Math. Phys.}, FJOURNAL = {Communications in Mathematical Physics}, VOLUME = {164}, YEAR = {1994}, NUMBER = {3}, PAGES = {525--562}, } \bib{MOPPZ}{article} { AUTHOR = {Marian, A.}, AUTHOR = {Oprea, D.}, AUTHOR = {Pandharipande, R.}, AUTHOR = {Pixton, A.}, AUTHOR = {Zvonkine, D.}, TITLE = {The {C}hern character of the {V}erlinde bundle over {$\overline{\mathcal{M}}_{g,n}$}}, JOURNAL = {J. Reine Angew. Math.}, FJOURNAL = {Journal f\"ur die Reine und Angewandte Mathematik. [Crelle's Journal]}, VOLUME = {732}, YEAR = {2017}, PAGES = {147--163}, } \bib{MoonSwin}{article}{ author={Moon, H-B.}, author={Swinarski, D.}, title={Effective curves on $\overline{\rm M}_{0,n}$ from group actions}, journal={Manuscripta Math.}, volume={147}, date={2015}, number={1-2}, pages={239--268}, } \bib{Swin}{article}{ author={Swinarski, D.}, title={$\op{sl}_2$ conformal block divisors and the nef cone of $\ovmc{M}_{0,n}$}, note ={arXiv:1107.5331}, year = {2011}, } \bib{Smyth}{article}{ author={Smyth, D. I.}, title={Towards a classification of modular compactifications of $\scr{M}_{g,n}$}, journal={Invent. Math.}, volume={192}, date={2013}, number={2}, pages={459--503}, } \bib{TUY}{article}{ author={Tsuchiya, A.}, author={Ueno, K.}, author={Yamada, Y.}, title={Conformal field theory on universal family of stable curves with gauge symmetries}, conference={ title={Integrable systems in quantum field theory and statistical mechanics}, }, book={ series={Adv. Stud. Pure Math.}, volume={19}, publisher={Academic Press, Boston, MA}, }, date={1989}, pages={459--566}, } \bib{V}{article} { AUTHOR = {Vakil, R.}, TITLE = {The enumerative geometry of rational and elliptic curves in projective space}, JOURNAL = {J. Reine Angew.
Math.}, FJOURNAL = {Journal f\"ur die Reine und Angewandte Mathematik. [Crelle's Journal]}, VOLUME = {529}, YEAR = {2000}, PAGES = {101--153}, } \end{biblist} \end{bibdiv} \end{document}
\subsection{Proving Semantic Equivalence of Optimized Formulas (\Cref{thm:sem-equiv-opt})} \label{sec:proof-opt-equiv} \vspace{-1mm} In this section we present proofs ascertaining the correctness of our enforcers. We prove \Cref{thm:strong-enf} by proving that the enforcers synthesised by our synthesis function are \emph{sound} and \emph{transparent}; we prove these two criteria in \Cref{sec:proof-soundness,sec:proof-transparency}. Finally, we prove that our synthesised enforcers also abide by \emph{non-violating trace transparency} in \Cref{sec:proof-trace-transparency}. To facilitate our proofs we also use an alternative satisfaction semantics for \SHML, as explained below. \paragraph*{Alternative \SHML Semantics} An alternative semantics for \SHML was presented by Aceto \etal in \cite{Aceto1999TestingHML,Aceto2007Book} in terms of a \emph{satisfaction relation}, \hSat. When restricted to \SHML, \hSat is the \emph{largest relation} \R satisfying the implications defined in \Cref{fig:uhml-sat}. \begin{figure}[h] \begin{displaymath} \begin{array}{r@{\;\,}c@{\;\,}ll} (\pV,\hTru)&\in&\R & \imp \textsl{ true } \\[.5mm] (\pV,\hFls)&\in&\R & \imp \textsl{ false } \\[.5mm] (\pV,\hBigAnd{i\in\IndSet}\hV_i)&\in&\R & \imp (\pV,\hV_i)\in\R \textsl{ for all } i{\,\in\,}\IndSet \\[.5mm] (\pV,\hNec{\actS}\hV)&\in&\R &\imp \forall\acta,\pVV\cdot(\pV\wtraS{\acta}\pVV \textsl{ and } \mtchS{\actS}{\acta}=\sV)\,\imp\, (\pVV,\hV\sV)\in\R \\[.5mm] (\pV,\hMaxXF)&\in&\R & \imp (\pV,\hV\Sub{\hMaxXF}{\hVarX})\in\R \\[.5mm] \end{array} \end{displaymath} \caption{A satisfaction relation for \SHML formulas} \label{fig:uhml-sat} \end{figure} The satisfaction relation states that truth, \hTru, is \emph{always satisfied}, while falsehood, \hFls, can \emph{never be satisfied}.
Conjunctions, $\hBigAnd{i\in\IndSet}\hV_i$, are satisfied when \emph{all branches} are satisfied (\ie $\pV\hSat\hV_i$ for all $i{\,\in\,}\IndSet$), while necessities, $\hNec{\actS}\hV$, are satisfied by a process \pV when \emph{all derivatives} \pVV that are reachable over an action \acta with $\mtchS{\actS}{\acta}\!=\!\sV$ (possibly none) also satisfy $\hV\sV$, \ie $\pVV{\,\hSat\,}\hV\sV$. Finally, a process \pV satisfies a maximal fixpoint \hMaxXF when it is also able to satisfy an \emph{unfolded version} of \hV, \ie $\pV\hSat\hV\sub{\hMaxX{\hV}}{\hVarX}$. The satisfaction semantics, $\pV\hSat\hV$, agrees with the denotational semantics of the \SHML subset of \recHML, \hSemS{\hV}, presented in \Cref{fig:recHML}, so that $\pV\hSat\hV$ can be used in lieu of $\pV\in\hSemS{\hV}$ (see \cite{Aceto1999TestingHML,Aceto2007Book} for more detail). \input{appendix/monitor-soundness-proof.tex} \input{appendix/monitor-transparency-proof.tex} \input{appendix/monitor-trace-transparency-proof.tex} \subsection{Proving Soundness} \label{sec:proof-soundness} $$ \forall\pV{\,\in\,}\Sys,\hV\in\SHMLnf\;\cdot\;\hV{\in}\Sat \imp \eI{\eSem{\hV}}{\pV}{\,\hSat\,}\hV $$ To prove this lemma we must show that the relation \R (below) is a \emph{satisfaction relation} (\hSat) as defined by the rules in \Cref{fig:uhml-sat}. $$ \R\;\defeq\;\setdef{(\eI{\eSem{\hV}}{\pV},\hV)}{\hV{\in}\Sat} $$ \\[-10mm] \setcounter{equation}{0} \begin{proof} We prove this claim by case analysis on the structure of $\hV$. \begin{case}[\hV=\hVarX] Does not apply since $\hVarX$ is an open formula and thus $\hVarX\notin\Sat$. \end{case} \begin{case}[\hV=\hFls] Does not apply since $\hFls\notin\Sat$. \end{case} \begin{case}[\hV=\hTru] Holds trivially since \emph{every process} satisfies \hTru, which confirms that $(\eI{\eSem{\hTru}}{\pV},\hTru)\in\R$ according to the definition of \R.
\end{case} \begin{case}[\hV=\hMaxXF \text{ and } \hVarX{\in}\fv{\hV}] We assume that \begin{gather} \hMaxXF\in\Sat \label{proof:str-soundness-max-1} \end{gather} To prove that \R is a satisfaction relation, we show that if $(\eI{\eSem{\hMaxXF}}{\pV},\hMaxXF){\in}\R$, then from the recursive unfolding $\hVMaxXFSub$, we can also synthesise an enforcer $\eSem{\hVMaxXFSub}$ such that $(\eI{\eSem{\hVMaxXFSub}}{\pV},\hVMaxXFSub)\in\R$ as well. Hence, by \eqref{proof:str-soundness-max-1} and the definition of \Sat we know that $\exists\pV'\cdot\pV'{\hSatS}\hMaxXF$, and so by the definition of \hSatS we can deduce that $\exists\pV'\cdot\pV'{\hSatS}\hVMaxXFSub$, from which we conclude \begin{gather} \hVMaxXFSub\in\Sat \label{proof:str-soundness-max-2} \end{gather} Finally, from \eqref{proof:str-soundness-max-2} and the definition of \R we conclude that \begin{gather*} (\eI{\eSem{\hVMaxXFSub}}{\pV},\hVMaxXFSub)\in\R \end{gather*} as required, and so we are done. \end{case} { \newcommand{\formulaVar}[3]{\displaystyle\hBigAndD{#1{\,\in\,}#2\!\!\!\!\!\!\!\!}\ensuremath{\hNec{\actSN{\pate_{#1}}{\bV_{#1}}}{#3}}} \newcommand{\formulaJ}{\ensuremath{\formulaVar{j}{\IndSet'}{\hFls}}} \newcommand{\formulaK}{\ensuremath{\formulaVar{k}{\IndSet''}{\hV_k}}} \renewcommand{\hMaxXF}{\ensuremath{\formulaVar{h}{\IndSet}{\hV_h}}} \newcommand{\formulaJK}{\ensuremath{\hAnd{\formulaJ\;}{\;\formulaK}}} \newcommand{\branchID}{\prf{\actSID{\pate_i}{\bV_i}}{\eSem{\hV_i}}} \newcommand{\branchSUP}{\prf{\actSTD{\pate_i}{\bV_i}}{\mV}} \begin{case}[\hV=\hMaxXF \text{ and } \bigdistinct{h\in\IndSet}\actSN{\pate_h}{\bV_h}] In this case we segment the set of indices \IndSet into $\IndSet'$ and $\IndSet''$, where $\IndSet'$ contains the indices (if any) of the branches whose continuation formula $\hV_i$ is a
falsehood \hFls, while $\IndSet''$ contains the rest, and so we will be writing $\ensuremath{\hAnd{\formulaJ\;}{\;\formulaK}}$ in lieu of $\hMaxXF$. We thus assume that \begin{gather} \ensuremath{\hAnd{\formulaJ\;}{\;\formulaK}}\in\Sat \label{proof:str-soundness-nec-1} \end{gather} From \eqref{proof:str-soundness-nec-1} and the definition of $\eSem{-}$ we have that \begin{gather} \eSem{\ensuremath{\hAnd{\formulaJ\;}{\;\formulaK}}} = \rec{\rVV}{\Big(\ch{\chBig{j\in\IndSet'}\prf{\actSTD{\pate_j}{\bV_j}}{\rVV}}{\chBig{k\in\IndSet''}\prf{\actSID{\pate_k}{\bV_k}}{\eSem{\hV_k}} }\Big)}=\mV \label{proof:str-soundness-nec-2} \end{gather} By unfolding the recursive construct in \eqref{proof:str-soundness-nec-2} we have that \begin{gather} \eSem{\ensuremath{\hAnd{\formulaJ\;}{\;\formulaK}}} = \Big(\ch{\chBig{j\in\IndSet'}\prf{\actSTD{\pate_j}{\bV_j}}{\mV}}{\chBig{k\in\IndSet''}\prf{\actSID{\pate_k}{\bV_k}}{\eSem{\hV_k}} }\Big) \label{proof:str-soundness-nec-2.5} \end{gather} In order to prove that \R is a satisfaction relation, for this case we must show that every individual branch in \eqref{proof:str-soundness-nec-2.5} is in \R as well. In order to show this we proceed by case analysis and show that the different types of branches that are synthesisable are also in \R. Hence, for all $i\in\IndSet$, we consider the following cases: \medskip \begin{enumerate}[(i)] \item when {$\eSem{\hNec{\actSN{\pate_i}{\bV_i}}{\hFls}}{\;=\;}\branchSUP$}: In order to prove that this branch is in \R it suffices to show that for all \acta and \pVV, when $\eI{\prf{\actSTD{\pate_i}{\bV_i}}{\mV}}{\pV}\wtraS{\acta}\pVV$ such that $\mtchS{\actSN{\pate_i}{\bV_i}}{\acta}{\,=\,}\sV$ then $(\pVV,\hFls){\,\in\,}\R$.
\smallskip This case holds trivially since by rules \rtit{iTrn} and \rtit{eTrn} we know that whenever \pV produces an action \acta such that symbolic action \actSN{\pate_i}{\bV_i} is satisfied, \ie $\mtchS{\actSN{\pate_i}{\bV_i}}{\acta}{\,=\,}\sV$, the produced action \acta gets internally transformed into a silent (\actt) action, meaning that $\eI{\branchSUP}{\pV}\nwtraS{\acta}$, and so the modal necessities leading to a falsehood (\eg in this case \hNec{\actSN{\pate_i}{\bV_i}}{\hFls}) are \emph{never violated} by the monitored system. \medskip % \item when {$\eSem{\hNec{\actSN{\pate_i}{\bV_i}}{\hV_i}}{\;=\;}\branchID$}: Once again, in order to prove that this branch is in \R, we must show that for all \acta and \pVV, when $\eI{\branchID}{\pV}\wtraS{\acta}\pVV$ such that $\mtchS{\actSN{\pate_i}{\bV_i}}{\acta}{\,=\,}\sV$ then $(\pVV,\hV_i){\,\in\,}\R$. \smallskip In order to show this we assume that \begin{gather} \mtchS{\actSN{\pate_i}{\bV_i}}{\acta}=\sV \label{proof:str-soundness-nec-3} \\ \eI{\branchID}{\pV}\wtraS{\acta}\pVV \label{proof:str-soundness-nec-4} \end{gather} By the definition of $\wtraS{\acta}$ we know that the weak transition in \eqref{proof:str-soundness-nec-4} is composed of zero or more \actt-transitions followed by the \acta-transition, as shown below \begin{gather} \eI{\branchID}{\pV}\wtraS{\actt}\pVV'\traS{\acta}\pVV \label{proof:str-soundness-nec-4.1} \end{gather} By the rules in our model we can infer that the \actt-transitions performed in \eqref{proof:str-soundness-nec-4.1} (if any) are only possible via multiple applications of rule \rtit{iAsy}, which allows us to deduce \begin{gather} \pV\wtraS{\actt}\pV'' \label{proof:str-soundness-nec-4.2}\\ (\pVV'=\eI{\branchID}{\pV''})\traS{\acta}\pVV \label{proof:str-soundness-nec-4.3} \end{gather} Since we do not make any assumptions about the resultant enforced system \pVV, we must first infer its form so as to be able to deduce whether $(\pVV,\hV_i\sV)\in\R$ or not.
Since the reduction in \eqref{proof:str-soundness-nec-4.3} can be the result of two instrumentation rules, namely \rtit{iTer} and \rtit{iTrn}, we consider both cases separately. \begin{itemize} \item \lipicsHeader{\rtit{iTer}:} If we assume that \eqref{proof:str-soundness-nec-4.3} is the result of rule \rtit{iTer}, then by this rule we have that $\eI{\branchID}{\pV''}\ntraS{\acta}$, which means that $\mtchS{\actSN{\pate_i}{\bV_i}}{\acta}=\sVundef$, which contradicts assumption \eqref{proof:str-soundness-nec-3}; hence this case does not apply. \item \lipicsHeader{\rtit{iTrn}:} By assuming that \eqref{proof:str-soundness-nec-4.3} is the result of rule \rtit{iTrn}, we know that \begin{gather} \pV''\traS{\acta}\pV' \label{proof:str-soundness-nec-5} \\ \pVV = \eI{\eSem{\hV_i\sV}}{\pV'} \label{proof:str-soundness-nec-6} \\ \branchID\traS{\ioact{\acta}{\acta}}\eV' \label{proof:str-soundness-nec-7} \end{gather} Hence, from \eqref{proof:str-soundness-nec-6} we know that to prove this case we must show that $(\eI{\eSem{\hV_i\sV}}{\pV'},\hV_i\sV){\,\in\,}\R$. We thus refer to our initial assumption \eqref{proof:str-soundness-nec-1}, from which, by the definition of \Sat, we know that there exists some process \pVVV such that $\pVVV\hSatS\hMaxXF$. By the definition of \hSatS we thus know that \begin{gather} \exists\pVVV{\,\in\,}\Sys,\forall h{\,\in\,}\IndSet,\pVVV'{\,\in\,}\Sys\cdot \textsl{if } \pVVV\wtraS{\acta}\pVVV' \textsl{ and } \mtchS{\actSN{\pate_h}{\bV_h}}{\acta}{=}\sV \textsl{ then } \pVVV'\hSatS\hV_h\sV \label{proof:str-soundness-nec-9} \end{gather} From \eqref{proof:str-soundness-nec-4.2} and \eqref{proof:str-soundness-nec-5} we know that $\pV\wtraS{\acta}\pV'$, and so, with the knowledge of \eqref{proof:str-soundness-nec-3}, from \eqref{proof:str-soundness-nec-9} we can infer that $\pV'\hSatS\hV_i\sV$, meaning that $\hV_i\sV\in\Sat$.
By the definition of \R, this result allows us to conclude that \begin{gather} (\eI{\eSem{\hV_i\sV}}{\pV'},\hV_i\sV)\in\R \label{proof:str-soundness-nec-10} \end{gather} as required. Hence, from assumptions \eqref{proof:str-soundness-nec-3}, \eqref{proof:str-soundness-nec-4} and deduction \eqref{proof:str-soundness-nec-10} we can infer that for $i\in\IndSet$ we have \begin{gather*} (\eI{\branchID}{\pV},\hNec{\actSN{\pate_i}{\bV_i}}{\hV_i})\in\R \end{gather*} as required, and we are done. \end{itemize} \end{enumerate} \end{case} }\vspace{-7mm} \end{proof} \smallskip \subsection{Non-Violating Trace Transparency} \label{sec:proof-trace-transparency} \begin{rtp} \begin{enumerate}[\qquad(a)] \item $\forall\pV{\,\in\,}\Sys, \hV{\,\in\,}\SHMLnf \cdot \nvsat{\pV}{\tr}{\hV} \,\text{ and }\, \pV\wtraS{\tr}\pV'\; \imply \; \eI{\eSem{\hV}{}}{\pV}\wtraS{\tr}\eI{\eV'}{\pV'}$ \item $\forall\pV{\,\in\,}\Sys, \hV{\,\in\,}\SHMLnf \cdot \nvsat{\pV}{\tr}{\hV} \,\text{ and }\, \eI{\eSem{\hV}{}}{\pV}\wtraS{\tr}\eI{\eV'}{\pV'} \; \imply \; \pV\wtraS{\tr}\pV'$ \end{enumerate}\medskip The proofs for (a) and (b) rely on a number of auxiliary lemmas: \Cref{lemma:nvtt-1,lemma:nvtt-2} are required for proving (a), while \Cref{lemma:nvtt-1,lemma:nvtt-3,lemma:nvtt-4} are necessary for proving (b). Before introducing these lemmas, in \Cref{fig:after-defs} we introduce the function $\afterFS{::}(\SHMLnf\times\Act)\mapsto\SHMLnf$, which describes how an \SHMLnf formula evolves after analysing an action \actu.
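Although it is not part of the formal development, the behaviour of \afterFS can also be prototyped executably. In the sketch below, the tuple encoding of \SHMLnf formulas and the purely syntactic pattern match are our own simplifying assumptions (the actual \mtchS function returns a substitution \sV; here the substitution is trivial).

```python
# Toy prototype (ours) of the formula-transformation function "after".
# Closed formulas: ("tt",) truth, ("ff",) falsehood,
#   ("and", [(pattern, continuation), ...]) a conjunction of necessities,
#   ("max", "X", body) a greatest fixpoint; recursion variables are bare strings.
# Pattern matching is simplified to syntactic equality on action names.

def unfold(phi):
    """Unfold max X.body into body[max X.body / X]."""
    _, x, body = phi

    def substitute(f):
        if f == x:                                   # the bound variable itself
            return phi
        if isinstance(f, tuple) and f[0] == "and":
            return ("and", [(p, substitute(g)) for p, g in f[1]])
        if isinstance(f, tuple) and f[0] == "max":   # stop at a rebinding of x
            return f if f[1] == x else ("max", f[1], substitute(f[2]))
        return f                                     # tt, ff, other variables

    return substitute(body)

def after(phi, act):
    if act == "tau":                 # silent actions leave the formula unchanged
        return phi
    if phi in (("tt",), ("ff",)):    # tt and ff are invariant under any action
        return phi
    if phi[0] == "max":              # unfold fixpoints before analysing the action
        return after(unfold(phi), act)
    for pattern, continuation in phi[1]:
        if pattern == act:           # a matching necessity: continue with its body
            return continuation
    return ("tt",)                   # no branch matches: trivially satisfied
```

For instance, on a conjunction $\hNec{\acts{a}}\hFls\hAnd\hNec{\acts{b}}\hTru$ the action $a$ yields the falsehood branch, an unmatched action yields truth, and a recursive formula is first unfolded, mirroring the clauses of the definition.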
\begin{figure}[t] \begin{align*} \afterF{\hV}{\actt} &\defeq \hV \\ \afterF{\hV}{\acta} &\defeq \begin{xbrace}{c@{\qquad}l} \hV & \text{if }\hV{\,\in\,}\Set{\hTru,\hFls}\\ \afterF{\hV\sub{\hMaxXF}{\hVarX}}{\acta} & \text{if }\hV{\,=\,}\hMaxXF\\ \hV_j\sV & \text{if }\hV{\,=\,}\hBigAnd{i\in\IndSet}\hNec{\actS_i}\hV_i\;\text{ and }\;\exists j\in\IndSet\cdot\mtchS{\actS_j}{\acta}{=}\sV \\ \hTru &\text{if }\hV{\,=\,}\hBigAnd{i\in\IndSet}\hNec{\actS_i}\hV_i\;\text{ and }\;\forall i\in\IndSet\cdot\mtchS{\actS_i}{\acta}{=}\sVundef \end{xbrace} \end{align*} \caption{Defining function \afterFS.} \label{fig:after-defs} \end{figure} \begin{lemma} \label[lemma]{lemma:nvtt-1} $$\pV\wtraS{\acta}\pV' \text{ and } \nvsat{\pV}{\acta\tr}{\hV} \imp \nvsat{\pV'}{\tr}{\afterF{\hV}{\acta}}$$ This lemma states that if process \pV does not violate \hV \wrt trace $\acta\tr$, then the process $\pV'$ resulting from performing action \acta, together with the trace suffix \tr, should also not violate the \SHMLnf formula obtained after \hV analyses action \acta, \ie \afterF{\hV}{\acta}. \qed \end{lemma} \begin{lemma} \label[lemma]{lemma:nvtt-2} $$\nvsat{\pV}{\acta\tr}{\hV} \text{ and } \pV\wtraS{\acta}\pV' \imp \eI{\eSem{\hV}}{\pV}\wtraS{\acta}\eI{\eSem{\afterF{\hV}{\acta}}}{\pV'} $$ This lemma states that if process \pV does not violate \hV \wrt trace $\acta\tr$, \ie $\nvsat{\pV}{\acta\tr}{\hV}$, and is capable of performing \acta, \ie $\pV\wtraS{\acta}\pV'$, then the enforced process $\eI{\eSem{\hV}}{\pV}$ should still be able to perform action \acta and reduce into $\eI{\eSem{\afterF{\hV}{\acta}}}{\pV'}$.
\qed \end{lemma} \begin{lemma} \label[lemma]{lemma:nvtt-3} $$\nvsat{\pV}{\tr}{\hV} \text{ and } \eI{\eSem{\hV}}{\pV}\traS{\actt}\eI{\eV'}{\pV'} \imp \pV\traS{\actt}\pV' \text{ and } \eV'=\eSem{\hV} \text{ and } \nvsat{\pV'}{\tr}{\hV} $$ With this lemma we can deduce that if process \pV does not violate \hV \wrt any trace $\tr$, \ie $\nvsat{\pV}{\tr}{\hV}$, and when instrumented with monitor \eSem{\hV} it is capable of performing a silent \actt action, \ie $\eI{\eSem{\hV}}{\pV}\traS{\actt}\eI{\eV'}{\pV'}$, then $\eV'$ should still be equal to \eSem{\hV} and the unmonitored process $\pV$ should also be able to perform the same silent action and reduce into $\pV'$, such that this process also does not violate \hV \wrt the same trace \tr. \qed \end{lemma} \begin{lemma} \label[lemma]{lemma:nvtt-4} $$\eI{\eSem{\hV}}{\pV}\traS{\acta}\eI{\eV'}{\pV'} \imp \pV\traS{\acta}\pV' \text{ and } \eV'=\eSem{\afterF{\hV}{\acta}} $$ This lemma is similar to \Cref{lemma:nvtt-3}, but applies to visible actions. \qed \end{lemma} We first prove our main result, \ie implications (a) and (b) of Non-Violating Trace Transparency, by assuming that these auxiliary lemmas hold; we then prove them afterwards. \end{rtp} \setcounter{equation}{0} \begin{proof}[Proof for (a)] By induction on the length of trace $\tr$. \begin{case}[\tr=\varepsilon] We assume that $\nvsat{\pV}{\varepsilon}{\hV}$ and that \begin{gather} \pV\wtraS{\varepsilon}\pV' \label{proof:ntt-bc-1} \end{gather} From the definition of $\wtraS{\varepsilon}$ and \eqref{proof:ntt-bc-1} we know that $\pV\traS{\actt}^{\!\ast}\pV'$, and hence by zero or more applications of \rtit{iAsy} we infer that $\eI{\eSem{\hV}}{\pV}\traS{\actt}^{\!\ast}\eI{\eSem{\hV}}{\pV'}$, and so by the definition of \wtraS{\tr} we conclude that $$\eI{\eSem{\hV}}{\pV}\wtraS{\varepsilon}\eI{\eSem{\hV}}{\pV'}$$ as required.
\end{case} \begin{case}[\forall\trr\cdot \tr=\acta\trr] We start by assuming that \begin{gather} \pV\wtraS{\acta\trr}\pV' \label{proof:ntt-ic-1} \\ \nvsat{\pV}{\acta\trr}{\hV} \label{proof:ntt-ic-2} \end{gather} By \eqref{proof:ntt-ic-1} and the definition of \wtraS{\tr}, we have that \begin{gather} \pV\wtraS{\acta}\pV'' \label{proof:ntt-ic-3}\\ \pV''\wtraS{\trr}\pV' \label{proof:ntt-ic-4} \end{gather} and by \eqref{proof:ntt-ic-2}, \eqref{proof:ntt-ic-3} and \Cref{lemma:nvtt-1} we know that \begin{gather} \nvsat{\pV''}{\trr}{\afterF{\hV}{\acta}} \label{proof:ntt-ic-6} \end{gather} With the knowledge of \eqref{proof:ntt-ic-4} and \eqref{proof:ntt-ic-6} we can now apply the \emph{inductive hypothesis} and infer that \begin{gather} \eI{\eSem{\afterF{\hV}{\acta}}}{\pV''}\wtraS{\trr}\eI{\eV'}{\pV'}. \label{proof:ntt-ic-9} \end{gather} Following this, by \eqref{proof:ntt-ic-2}, \eqref{proof:ntt-ic-3} and \Cref{lemma:nvtt-2} we have that \begin{gather} \eI{\eSem{\hV}}{\pV}\wtraS{\acta}\eI{\eSem{\afterF{\hV}{\acta}}}{\pV''} \label{proof:ntt-ic-8} \end{gather} Finally, by joining together \eqref{proof:ntt-ic-8} and \eqref{proof:ntt-ic-9} with the definition of \wtraS{\tr} we can conclude that \begin{gather*} \eI{\eSem{\hV}}{\pV}\wtraS{\acta\trr}\eI{\eV'}{\pV'} \end{gather*} as required, and so we are done. \vspace{-5mm} \end{case} \end{proof}\pagebreak \setcounter{equation}{0} \begin{proof}[Proof for (b)] By induction on the length of trace $\tr$.
\begin{case}[\tr=\varepsilon] We assume that \begin{gather} \nvsat{\pV}{\varepsilon}{\hV} \label{proof:ntt-b-bc-2} \\ \eI{\eSem{\hV}}{\pV}\wtraS{\varepsilon}\eI{\eV'}{\pV'} \label{proof:ntt-b-bc-1} \end{gather} From \eqref{proof:ntt-b-bc-1} and the definition of $\wtraS{\varepsilon}$ we know that \begin{gather} \eI{\eSem{\hV}}{\pV}\traS{\actt}^{\ast}\eI{\eV'}{\pV'} \label{proof:ntt-b-bc-3} \end{gather} We now consider two cases for \eqref{proof:ntt-b-bc-3}, namely, when $\traS{\actt}^{0}$ and $\traS{\actt}\cdot\wtraS{\varepsilon}$. \begin{itemize} \item when $\traS{\actt}^{0}$: Since no transitions have been applied, from \eqref{proof:ntt-b-bc-3} we know that $\eV'=\eSem{\hV}$ and $\pV'=\pV$ and so by the definition of $\wtraS{\varepsilon}$ we can immediately conclude that $\pV\wtraS{\varepsilon}\pV'$ as required. \item when $\traS{\actt}\cdot\wtraS{\varepsilon}$: From \eqref{proof:ntt-b-bc-3} we can now deduce that \begin{gather} \eI{\eSem{\hV}}{\pV}\traS{\actt}\eI{\eV''}{\pV''} \label{proof:ntt-b-bc-4} \\ \eI{\eV''}{\pV''}\wtraS{\varepsilon}\eI{\eV'}{\pV'} \label{proof:ntt-b-bc-5} \end{gather} and so by \eqref{proof:ntt-b-bc-2}, \eqref{proof:ntt-b-bc-4} and \Cref{lemma:nvtt-3} we can infer that \begin{gather} \pV\traS{\actt}\pV'' \label{proof:ntt-b-bc-6} \\ \eV''=\eSem{\hV} \label{proof:ntt-b-bc-7} \\ \nvsat{\pV''}{\varepsilon}{\hV} \label{proof:ntt-b-bc-8} \end{gather} Hence, by \eqref{proof:ntt-b-bc-5}, \eqref{proof:ntt-b-bc-7}, \eqref{proof:ntt-b-bc-8} and the inductive hypothesis we conclude that \begin{gather} \pV''\wtraS{\varepsilon}\pV' \label{proof:ntt-b-bc-9} \end{gather} and so we can conclude by \eqref{proof:ntt-b-bc-6} and \eqref{proof:ntt-b-bc-9} that \begin{gather*} \pV\wtraS{\varepsilon}\pV' \end{gather*} as required. 
\end{itemize} \end{case} \begin{case}[\forall\trr\cdot \tr=\acta\trr] We first assume that \begin{gather} \eI{\eSem{\hV}}{\pV}\wtraS{\acta\trr}\eI{\eV'}{\pV'} \label{proof:ntt-b-ic-1} \\ \forall\trr\cdot\nvsat{\pV}{\acta\trr}{\hV} \label{proof:ntt-b-ic-2} \end{gather} By \eqref{proof:ntt-b-ic-1} and the definition of \wtraS{\tr}, we have that \begin{gather} \eI{\eSem{\hV}}{\pV}\wtraS{\acta}\eI{\eV''}{\pV''} \label{proof:ntt-b-ic-3}\\ \eI{\eV''}{\pV''}\wtraS{\trr}\eI{\eV'}{\pV'} \label{proof:ntt-b-ic-4} \end{gather} and by \eqref{proof:ntt-b-ic-3} and the definition of \wtraS{\acta} we have that \begin{gather} \eI{\eSem{\hV}}{\pV}\traS{\actt}^{\!\ast}\eI{\eV'''}{\pV'''} \label{proof:ntt-b-ic-5}\\ \eI{\eV'''}{\pV'''}\traS{\acta}\eI{\eV''}{\pV''} \label{proof:ntt-b-ic-6} \end{gather} This information allows us to apply \Cref{lemma:nvtt-3} repeatedly to \eqref{proof:ntt-b-ic-2} and \eqref{proof:ntt-b-ic-5} and infer that \begin{gather} \pV\traS{\actt}^{\ast}\pV''' \label{proof:ntt-b-ic-7} \\ \forall\trr\cdot\nvsat{\pV'''}{\acta\trr}{\hV} \label{proof:ntt-b-ic-8} \\ \eV'''=\eSem{\hV} \label{proof:ntt-b-ic-9} \end{gather} and by \eqref{proof:ntt-b-ic-6}, \eqref{proof:ntt-b-ic-9} and \Cref{lemma:nvtt-4} we have that \begin{gather} \pV'''\traS{\acta}\pV'' \label{proof:ntt-b-ic-10} \\ \eV''=\eSem{\afterF{\hV}{\acta}} \label{proof:ntt-b-ic-11} \end{gather} By \eqref{proof:ntt-b-ic-8}, \eqref{proof:ntt-b-ic-10} and \Cref{lemma:nvtt-1} we know that \begin{gather} \nvsat{\pV''}{\trr}{\afterF{\hV}{\acta}} \label{proof:ntt-b-ic-12} \end{gather} With the knowledge of \eqref{proof:ntt-b-ic-4}, \eqref{proof:ntt-b-ic-11} and \eqref{proof:ntt-b-ic-12} we can now apply the \emph{inductive hypothesis} and infer that \begin{gather} \pV''\wtraS{\trr}\pV'.
\label{proof:ntt-b-ic-13} \end{gather} Finally, by joining together \eqref{proof:ntt-b-ic-7}, \eqref{proof:ntt-b-ic-10} and \eqref{proof:ntt-b-ic-13} with the definition of \wtraS{\tr} we can conclude that \begin{gather*} \pV\wtraS{\acta\trr}\pV' \end{gather*} as required, and so we are done. \vspace{-5mm} \end{case} \end{proof}\smallskip \paragraph*{Proving \Cref{lemma:nvtt-1}} \begin{rtp} $$\pV\wtraS{\acta}\pV' \text{ and } \nvsat{\pV}{\acta\tr}{\hV} \imp \nvsat{\pV'}{\tr}{\afterF{\hV}{\acta}}$$ To simplify the proof, we instead prove the contrapositive, \ie $$\pV\wtraS{\acta}\pV' \text{ and } \vsat{\pV'}{\tr}{\afterF{\hV}{\acta}} \imp \vsat{\pV}{\acta\tr}{\hV}$$ \vspace{-5mm} \end{rtp} \setcounter{equation}{0} \begin{proof} The proof proceeds by rule induction on \afterF{\hV}{\acta}. \begin{case}[\afterF{\hTru}{\acta}] We assume that $\pV\wtraS{\acta}\pV'$ and also that $\vsat{\pV'}{\tr}{\afterF{\hTru}{\acta}}$. This case, however, does not apply since by definition $\afterF{\hTru}{\acta}=\hTru$ which contradicts the assumption that system $\pV'$ and trace \tr violate formula $\afterF{\hTru}{\acta}=\hTru$. \end{case} \begin{case}[\afterF{\hFls}{\acta}] This case holds \emph{trivially} since by the definition of \vsatL, we know that \hFls is violated regardless of the process or trace, such that we can immediately conclude that \begin{gather*} \vsat{\pV}{\acta\tr}{\hFls} \end{gather*} as required. 
\end{case} \begin{case}[\afterF{\hMaxXF}{\acta}] We start this case by assuming that \begin{gather} \pV\wtraS{\acta}\pV' \label{proof:nvtt-1-max-1}\\ \vsat{\pV'}{\tr}{\afterF{\hMaxXF}{\acta}} \label{proof:nvtt-1-max-2} \end{gather} Since by definition $\afterF{\hMaxXF}{\acta}=\afterF{\hVMaxXFSub}{\acta}$, by \eqref{proof:nvtt-1-max-1}, \eqref{proof:nvtt-1-max-2} and the \emph{inductive hypothesis} we infer that $\vsat{\pV}{\acta\tr}{\hV\sub{\hMaxXF}{\hVarX}}$, from which by the definition of \vsatL, we can conclude \begin{gather*} \vsat{\pV}{\acta\tr}{\hMaxXF} \end{gather*} as required. \end{case} \begin{case}[\afterF{\hBigAndD{i\in\IndSet}\hNec{\actS_i}\hV_i}{\acta} \text{ when }\exists j{\in}\IndSet\cdot\mtchS{\actS_j}{\acta}{=}\sV] We now assume that \begin{gather} \pV\wtraS{\acta}\pV' \label{proof:nvtt-1-and-a-1}\\ \vsat{\pV'}{\tr}{\afterF{\hBigAndD{i\in\IndSet}\hNec{\actS_i}\hV_i}{\acta}} \label{proof:nvtt-1-and-a-2}\\ \exists j{\in}\IndSet\cdot\mtchS{\actS_j}{\acta}{=}\sV \label{proof:nvtt-1-and-a-3} \end{gather} By \eqref{proof:nvtt-1-and-a-2}, \eqref{proof:nvtt-1-and-a-3} and the definition of \afterFS we deduce that \vsat{\pV'}{\tr}{\hV_j\sV} and subsequently by \eqref{proof:nvtt-1-and-a-1} and the definition of \vsatL we infer that $\vsat{\pV}{\acta\tr}{\hNec{\actS_j}\hV_j}$ upon which by \eqref{proof:nvtt-1-and-a-3} and the definition of \vsatL, we can finally conclude that \begin{gather*} \vsat{\pV}{\acta\tr}{\hBigAndD{i\in\IndSet}\hNec{\actS_i}\hV_i} \end{gather*} as required. \end{case} \begin{case}[\afterF{\hBigAndD{i\in\IndSet}\hNec{\actS_i}\hV_i}{\acta} \text{ when }\forall i{\in}\IndSet\cdot\mtchS{\actS_i}{\acta}{=}\sVundef] Initially we assume that $\pV\wtraS{\acta}\pV'$ and that \vsat{\pV'}{\tr}{\afterF{\hBigAndD{i\in\IndSet}\hNec{\actS_i}\hV_i}{\acta}}. 
This case, however, does not apply as when $\forall i{\in}\IndSet\cdot\mtchS{\actS_i}{\acta}{=}\sVundef$, then by definition, $\afterF{\hBigAndD{i\in\IndSet}\hNec{\actS_i}\hV_i}{\acta}=\hTru$ which leads to a contradiction since \vsat{\pV'}{\tr}{(\afterF{\hBigAndD{i\in\IndSet}\hNec{\actS_i}\hV_i}{\acta}=\hTru)} is a false assumption by the definition of \vsatL. \end{case} \vspace{-5mm} \end{proof} \setcounter{equation}{0} \paragraph*{Proving \Cref{lemma:nvtt-2}} \begin{rtp} \begin{align*} & \quad \nvsat{\pV}{\acta\tr}{\hV} \text{ and } \pV\wtraS{\acta}\pV' \imp \eI{\eSem{\hV}}{\pV}\wtraS{\acta}\eI{\eSem{ \afterF{\hV}{\acta}}}{\pV'} \\ \equiv & \;\; \exists \hV' \cdot \nvsat{\pV}{\acta\tr}{\hV} \text{ and } \pV\wtraS{\acta}\pV' \text{ and } \afterF{\hV}{\acta}{=}\hV' \imp \eI{\eSem{\hV}}{\pV}\wtraS{\acta}\eI{\eSem{\hV'}}{\pV'} \end{align*}\vspace{-3mm} \end{rtp} \begin{proof} The proof proceeds by rule induction on \afterF{\hV}{\acta}. \begin{case}[\afterF{\hTru}{\acta}] Initially we assume that: $\afterF{\hTru}{\acta}=\hTru$, $\nvsat{\pV}{\tr}{\hTru}$ and that $\pV\wtraS{\acta}\pV'$ from which we can deduce that \begin{gather} \pV\wreduc\pV'' \label{proof:nvtt-2-tt-2a} \\ \pV''\traS{\acta}\pV' \label{proof:nvtt-2-tt-2b} \end{gather} By applying multiple applications of rule \rtit{iAsy} on \eqref{proof:nvtt-2-tt-2a} we have that \begin{gather} \eI{\eSem{\hTru}}{\pV}\wreduc\eI{\eSem{\hTru}}{\pV''} \label{proof:nvtt-2-tt-3} \end{gather} Since $\eSem{\hTru}=\eIden$, by rule \rtit{eId} we have that \begin{gather} \eSem{\hTru}\traS{\ioact{\acta}{\acta}}\eSem{\hTru} \label{proof:nvtt-2-tt-5} \end{gather} and hence by \eqref{proof:nvtt-2-tt-2b}, \eqref{proof:nvtt-2-tt-5} and rule \rtit{iTrn} we know that $\eI{\eSem{\hTru}}{\pV''}\traS{\acta}\eI{\eSem{\hTru}}{\pV'}$, and so by \eqref{proof:nvtt-2-tt-3} and transitivity we conclude that \begin{gather*} \eI{\eSem{\hTru}}{\pV}\wtraS{\acta}\eI{\eSem{\hTru}}{\pV'} \end{gather*} as required. 
\end{case}
\begin{case}[\afterF{\hFls}{\acta}]
Since we assume that $\afterF{\hFls}{\acta}=\hFls$, $\pV\wtraS{\acta}\pV'$, and that $\nvsat{\pV}{\acta\tr}{\hFls}$, this case does not apply: the last assumption does not hold because the definition of $\vsatL$ states that \hFls is \emph{always violated}.
\end{case}
\begin{case}[\afterF{\hMaxXF}{\acta}]
We start by assuming that
\begin{gather}
\afterF{\hMaxXF}{\acta}=\afterF{\hV\sub{\hMaxXF}{\hVarX}}{\acta} \label{proof:nvtt-2-max-1}\\
\pV\wtraS{\acta}\pV' \label{proof:nvtt-2-max-2}\\
\nvsat{\pV}{\acta\tr}{\hMaxXF} \label{proof:nvtt-2-max-3}
\end{gather}
From assumption \eqref{proof:nvtt-2-max-1} and by the definition of \afterFS we can deduce that
\begin{gather}
\exists\hV'\cdot\afterF{\hV\sub{\hMaxXF}{\hVarX}}{\acta}=\hV' \label{proof:nvtt-2-max-4}
\end{gather}
and by applying the definition of $\vsatL$ on assumption \eqref{proof:nvtt-2-max-3}, we infer that
\begin{gather}
\nvsat{\pV}{\acta\tr}{\hV\sub{\hMaxXF}{\hVarX}} \label{proof:nvtt-2-max-5}
\end{gather}
By \eqref{proof:nvtt-2-max-2}, \eqref{proof:nvtt-2-max-4} and \eqref{proof:nvtt-2-max-5} we can now apply the \emph{inductive hypothesis} and conclude that
\begin{gather}
\eI{\eSem{\hV\sub{\hMaxXF}{\hVarX}}}{\pV} \wtraS{\acta} \eI{\eSem{\hV'}}{\pV'} \label{proof:nvtt-2-max-6}
\end{gather}
By \eqref{proof:nvtt-2-max-6} and the definition of \eSem{-}, we know
\begin{gather}
\eI{\eSem{\hV}\sub{\rec{\rV}{\eSem{\hV}}}{\rV}}{\pV} \wtraS{\acta} \eI{\eSem{\hV'}}{\pV'} \label{proof:nvtt-2-max-7}
\end{gather}
By \eqref{proof:nvtt-2-max-7} and \rtit{eRec}, we know
\begin{gather}
\eI{\rec{\rV}{\eSem{\hV}}}{\pV} \wtraS{\acta} \eI{\eSem{\hV'}}{\pV'} \label{proof:nvtt-2-max-8}
\end{gather}
By \eqref{proof:nvtt-2-max-8} and the definition of \eSem{-}, we know
\begin{gather*}
\eI{\eSem{\hMaxXF}}{\pV} \wtraS{\acta} \eI{\eSem{\hV'}}{\pV'}
\end{gather*}
as required.
\end{case}
\begin{case}[\afterF{\hBigAndD{i\in\IndSet}\hNec{\actSN{\pate_i}{\bV_i}}\hV_i}{\acta} \text{ when }\exists j{\in}\IndSet\cdot\mtchS{\actS_j}{\acta}{=}\sV]
We now assume that
\begin{gather}
\afterF{\hBigAndD{i\in\IndSet}\hNec{\actSN{\pate_i}{\bV_i}}\hV_i}{\acta}=\hV_j\sV \label{proof:nvtt-2-and-a-1}
\end{gather}
because
\begin{gather}
\exists j{\in}\IndSet\cdot\mtchS{\actSN{\pate_j}{\bV_j}}{\acta}{=}\sV \label{proof:nvtt-2-and-a-2}
\end{gather}
and
\begin{gather}
\pV\wtraS{\acta}\pV' \label{proof:nvtt-2-and-a-3} \\
\nvsat{\pV}{\acta\tr}{\hBigAndD{i\in\IndSet}\hNec{\actSN{\pate_i}{\bV_i}}\hV_i} \label{proof:nvtt-2-and-a-4}
\end{gather}
Since from \eqref{proof:nvtt-2-and-a-4} we know that process \pV does not violate any of the conjunction branches, and since from \eqref{proof:nvtt-2-and-a-2} we know that system action \acta matches branch $j$, by the definition of $\vsatL$ we can deduce that $\hV_j\neq\hFls$ (otherwise we would contradict \eqref{proof:nvtt-2-and-a-4}).
This means that by rule \rtit{eTrn} we know that the enforcer will not modify the system action \acta, and so we know that
\begin{gather}
\exists j{\in}\IndSet\cdot \prf{\actSTN{\pate_j}{\bV_j}{\bnd{\pate_j}}}{\eSem{\hV_j}} \traS{\ioact{\acta}{\acta}} \eSem{\hV_j\sV} \label{proof:nvtt-2-and-a-6}
\end{gather}
By \eqref{proof:nvtt-2-and-a-6} and \rtit{eSel} we know
\begin{gather}
\chBigI\prf{\actSTN{\pate_i}{\bV_i}{\pate'_i}}{\eSem{\hV_i}} \traS{\ioact{\acta}{\acta}} \eSem{\hV_j\sV} \quad (\text{where } \pate'_i{\in}\set{\bnd{\pate_i},\actt}) \label{proof:nvtt-2-and-a-7}
\end{gather}
By \eqref{proof:nvtt-2-and-a-7} and \rtit{eRec} we know
\begin{gather}
\rec{\rVV}{\chBigI\prf{\actSTN{\pate_i}{\bV_i}{\pate'_i}}{\eSem{\hV_i}}} \traS{\ioact{\acta}{\acta}} \eSem{\hV_j\sV} \quad (\text{where } \pate'_i{\in}\set{\bnd{\pate_i},\actt}) \label{proof:nvtt-2-and-a-8}
\end{gather}
By \eqref{proof:nvtt-2-and-a-8} and the definition of \eSem{-} we know
\begin{gather}
\eSem{\hBigAndD{i\in\IndSet}\hNec{\actSN{\pate_i}{\bV_i}}\hV_i} \traS{\ioact{\acta}{\acta}} \eSem{\hV_j\sV} \label{proof:nvtt-2-and-a-9}
\end{gather}
From \eqref{proof:nvtt-2-and-a-3} and the definition of \wtraS{\acta}, we know that $\pV\wreduc\pV''\traS{\acta}\pV'$, which means that by multiple applications of rule \rtit{iAsy} we know that for every enforcer \eV, $\eI{\eV}{\pV}\wreduc\eI{\eV}{\pV''}$, and subsequently by \eqref{proof:nvtt-2-and-a-9} and rule \rtit{iTrn} we infer that
\begin{gather*}
\eI{\eSem{\hBigAndD{i\in\IndSet}\hNec{\actSN{\pate_i}{\bV_i}}\hV_i}}{\pV} \wreduc \eI{\eSem{\hBigAndD{i\in\IndSet}\hNec{\actSN{\pate_i}{\bV_i}}\hV_i}}{\pV''} \traS{\acta} \eI{\eSem{\hV_j\sV}}{\pV'}
\end{gather*}
as required.
\end{case}
\begin{case}[\afterF{\hBigAndD{i\in\IndSet}\hNec{\actS_i}\hV_i}{\acta} \text{ when }\forall i{\in}\IndSet\cdot\mtchS{\actS_i}{\acta}{=}\sVundef]
We start by assuming that
\begin{gather}
\afterF{\hBigAndD{i\in\IndSet}\hNec{\actS_i}\hV_i}{\acta}=\hTru \label{proof:nvtt-2-and-b-1}
\end{gather}
because
\begin{gather}
\forall i{\in}\IndSet\cdot\mtchS{\actS_i}{\acta}{=}\sVundef \label{proof:nvtt-2-and-b-2}
\end{gather}
and
\begin{gather}
\pV\wtraS{\acta}\pV' \label{proof:nvtt-2-and-b-3} \\
\nvsat{\pV}{\acta\tr}{\hBigAndD{i\in\IndSet}\hNec{\actSN{\pate_i}{\bV_i}}\hV_i} \label{proof:nvtt-2-and-b-4}
\end{gather}
By \eqref{proof:nvtt-2-and-b-2} and rule \rtit{eTrn} (a transformation prefix can only fire on actions that match its symbolic action) we know
\begin{gather}
\forall i{\in}\IndSet\cdot \prf{\actSTN{\pate_i}{\bV_i}{\pate'_i}}{\eSem{\hV_i}} \ntraS{\acta} \quad (\text{where } \pate'_i{\in}\set{\bnd{\pate_i},\actt}) \label{proof:nvtt-2-and-b-5}
\end{gather}
By \eqref{proof:nvtt-2-and-b-5} and \rtit{eSel} we know
\begin{gather}
\chBigI\prf{\actSTN{\pate_i}{\bV_i}{\pate'_i}}{\eSem{\hV_i}} \ntraS{\acta} \quad (\text{where } \pate'_i{\in}\set{\bnd{\pate_i},\actt}) \label{proof:nvtt-2-and-b-6}
\end{gather}
By \eqref{proof:nvtt-2-and-b-6} and \rtit{eRec} we know
\begin{gather}
\rec{\rVV}{\chBigI\prf{\actSTN{\pate_i}{\bV_i}{\pate'_i}}{\eSem{\hV_i}}} \ntraS{\acta} \quad (\text{where } \pate'_i{\in}\set{\bnd{\pate_i},\actt}) \label{proof:nvtt-2-and-b-7}
\end{gather}
By \eqref{proof:nvtt-2-and-b-7} and the definition of \eSem{-} we know
\begin{gather}
\eSem{\hBigAndD{i\in\IndSet}\hNec{\actSN{\pate_i}{\bV_i}}\hV_i} \ntraS{\acta} \label{proof:nvtt-2-and-b-8}
\end{gather}
From \eqref{proof:nvtt-2-and-b-3} and the definition of \wtraS{\acta}, we know that $\pV\wreduc\pV''\traS{\acta}\pV'$, which means that by multiple applications of rule \rtit{iAsy} we know that for every enforcer \eV, $\eI{\eV}{\pV}\wreduc\eI{\eV}{\pV''}$, and subsequently by \eqref{proof:nvtt-2-and-b-8} and rule \rtit{iTer} we infer that
\begin{gather}
\eI{\eSem{\hBigAndD{i\in\IndSet}\hNec{\actSN{\pate_i}{\bV_i}}\hV_i}}{\pV} \wreduc \eI{\eSem{\hBigAndD{i\in\IndSet}\hNec{\actSN{\pate_i}{\bV_i}}\hV_i}}{\pV''} \traS{\acta} \eI{\eIden}{\pV'} \label{proof:nvtt-2-and-b-9} \end{gather} Finally by \eqref{proof:nvtt-2-and-b-9} and the definitions of \eSem{-} and \wtraS{\acta} we conclude that \begin{gather*} \eI{\eSem{\hBigAndD{i\in\IndSet}\hNec{\actSN{\pate_i}{\bV_i}}\hV_i}}{\pV} \wtraS{\acta} \eI{\eSem{\hTru}}{\pV'} \end{gather*} as required and so we are done. \end{case} \vspace{-5mm} \end{proof} \setcounter{equation}{0} \paragraph*{Proving \Cref{lemma:nvtt-3}} \begin{rtp} \begin{align*} \nvsat{\pV}{\tr}{\hV} \text{ and } \eI{\eSem{\hV}}{\pV}\traS{\actt}\eI{\eV'}{\pV'} \imp \pV\traS{\actt}\pV' \text{ and } \eV'=\eSem{\hV} \text{ and } \nvsat{\pV'}{\tr}{\hV} \end{align*}\vspace{-3mm} \end{rtp} \begin{proof} The proof proceeds by rule induction on $\eI{\eSem{\hV}}{\pV}\traS{\actt}\eI{\eV'}{\pV'}$. \begin{Cases}[\text{\rtit{iTer} and \rtit{iIns}}] These cases do not apply as \rtit{iTer} only transitions over visible actions \acta, while \rtit{iIns} cannot be applied as \eSem{\hV} does not synthesise insertion monitors. \end{Cases} \begin{case}[\rtit{iAsy}] We assume that \begin{gather} \forall \tr\cdot\nvsat{\pV}{\tr}{\hV} \label{proof-3-nvtt-1}\\ \eI{\eSem{\hV}}{\pV}\traS{\actt}\eI{\eV'}{\pV'} \label{proof-3-nvtt-2} \end{gather} because \begin{gather} \pV\traS{\actt}\pV' \label{proof-3-nvtt-3} \\ \eV'=\eSem{\hV} \label{proof-3-nvtt-4} \end{gather} Since the violation semantics are agnostic of \actt-actions, from \eqref{proof-3-nvtt-1} and \eqref{proof-3-nvtt-3} we can deduce that \begin{gather} \forall \tr\cdot\nvsat{\pV'}{\tr}{\hV} \label{proof-3-nvtt-5} \end{gather} and so we are done by \eqref{proof-3-nvtt-3}, \eqref{proof-3-nvtt-4} and \eqref{proof-3-nvtt-5}. 
\end{case}
\begin{case}[\rtit{iTrn}]
We assume that
\begin{gather}
\forall \tr\cdot\nvsat{\pV}{\tr}{\hV} \label{proof-3-nvtt-itrn-1}\\
\eI{\eSem{\hV}}{\pV}\traS{\actt}\eI{\eV'}{\pV'} \label{proof-3-nvtt-itrn-2}
\end{gather}
because
\begin{gather}
\pV\traS{\acta}\pV' \label{proof-3-nvtt-itrn-3} \\
\eSem{\hV}\traS{\ioact{\acta}{\actt}}\eV' \label{proof-3-nvtt-itrn-4}
\end{gather}
By the rules in our model we know that the suppressing transition in \eqref{proof-3-nvtt-itrn-4} can only take place if the monitor is capable of performing the suppression transformation, \ie it has the form $\rec{\rVV}{\ch{\prf{\actSTD{\pate_j}{\bV_j}}{\rVV}}{ \chBig{i\in\IndSet\setminus\set{j}}{\begin{xbrace}{cl} \prf{\actSTD{\pate_i}{\bV_i}}{\rVV} & \text{ if }\hV_i=\hFls\\ \prf{\actSN{\pate_i}{\bV_i}}{\eSem{\hV_i}} & \text{ otherwise} \end{xbrace} }}}$ where $\exists j\cdot\mtchS{\actSN{\pate_j}{\bV_j}}{\acta}=\sV$. By the definition of \eSem{\hV} this monitor can only be synthesised if \hV has the form $\hAnd{\hNec{\actSN{\pate_j}{\bV_j}}\hFls}{ \hBigAndU{i\in\IndSet\setminus\set{j}}\hNec{\actSN{\pate_i}{\bV_i}}\hV_i }$. This means that every \acta-prefixed trace would violate \hV: since \acta satisfies the conjunct necessity $\hNec{\actSN{\pate_j}{\bV_j}}\hFls$, every suffix \trr of the trace violates \hFls, and so we have that $\forall\trr\cdot\vsat{\pV}{\acta\trr}{\hV}$. This contradicts assumption \eqref{proof-3-nvtt-itrn-1} and hence this case does not apply. \vspace{-3mm}
\end{case}
\end{proof}
\setcounter{equation}{0}
\paragraph*{Proving \Cref{lemma:nvtt-4}}
\begin{rtp}
\begin{align*}
\eI{\eSem{\hV}}{\pV}\traS{\acta}\eI{\eV'}{\pV'} \imp \pV\wtraS{\acta}\pV' \text{ and } \eV'=\eSem{\afterF{\hV}{\acta}}
\end{align*}\vspace{-3mm}
\end{rtp}
\begin{proof}
The proof proceeds by rule induction on $\eI{\eSem{\hV}}{\pV}\traS{\acta}\eI{\eV'}{\pV'}$.
\begin{Cases}[\text{\rtit{iAsy} and \rtit{iIns}}]
These cases do not apply since \rtit{iAsy} transitions over \actt actions only, while \rtit{iIns} cannot ever be applied since \eSem{\hV} does not synthesise insertion monitors.
\end{Cases}
\begin{case}[\rtit{iTer}]
We assume that $\eI{\eSem{\hV}}{\pV}\traS{\acta}\eI{\eIden}{\pV'}$ because
\begin{gather}
\pV\tra{\acta}\pV' \label{proof-4-nvtt-iter-2} \\
\eSem{\hV}\ntra{\acta}\,\land\,\eSem{\hV}\ntra{\actdot} \label{proof-4-nvtt-iter-3}
\end{gather}
From the definition of \eSem{\hV} and the rules in our model we know that \eqref{proof-4-nvtt-iter-3} is only possible when $\hV=\hBigAndU{i\in\IndSet}{\hNec{\actSN{\pate_i}{\bV_i}}\hV_i}$ and $\forall i\cdot\mtchS{\actSN{\pate_i}{\bV_i}}{\acta}=\sVundef$, as this would be synthesised into an enforcer of the form $\eV=\rec{\rVV}{\chBigI\prf{\actSTN{\pate_i}{\bV_i}{\pate_i'}}{\eV'}}$ where every branch is unable to match \acta. Knowing that \hV can only have this form, by the definition of \afterFS we deduce that
\begin{gather}
\afterF{\hV}{\acta}=\hTru \label{proof-4-nvtt-iter-4}
\end{gather}
Since by the definition of \eSem{-} we know that $\eIden=\eSem{\hTru}$, by \eqref{proof-4-nvtt-iter-4} we can conclude that
\begin{gather}
\eIden=\eSem{\afterF{\hV}{\acta}} \label{proof-4-nvtt-iter-5}
\end{gather}
and hence this case is done by \eqref{proof-4-nvtt-iter-2} and \eqref{proof-4-nvtt-iter-5}.
\end{case} \begin{case}[\rtit{iTrn}] We assume that $\eI{\eSem{\hV}}{\pV}\traS{\acta}\eI{\eV'}{\pV'}$ because \begin{gather} \pV\tra{\actb}\pV' \label{proof-4-nvtt-itrn-2} \\ \eSem{\hV}\tra{\ioact{\actb}{\acta}}\eV' \label{proof-4-nvtt-itrn-3} \end{gather} From the definition of \eSem{\hV} we can infer that our synthesis cannot generate action replacing monitors and hence we can deduce that \begin{gather} \acta=\actb \label{proof-4-nvtt-itrn-4} \end{gather} From the definition of \eSem{\hV} and the rules in our model we can also deduce that when $\acta=\actb$ (as confirmed by \eqref{proof-4-nvtt-itrn-4}), \eqref{proof-4-nvtt-itrn-3} occurs only when $\hV=\hBigAndU{i\in\IndSet}{\hNec{\actSN{\pate_i}{\bV_i}}\hV_i}$ and $\exists j\cdot\mtchS{\actSN{\pate_j}{\bV_j}}{\acta}=\sV$ as this would be synthesised into an enforcer of the form \begin{gather} \eSem{\hV}=\rec{\rVV}{\ch{\prf{\actSN{\pate_j}{\bV_j}}{\eSem{\hV_j}}}{ \chBig{i\in\IndSet\setminus\set{j}}\begin{xbrace}{rl} \prf{\actSTD{\pate_i}{\bV_i}}{\rVV} & \;\;(\text{if }\hV_i=\hFls)\\ \prf{\actSN{\pate_i}{\bV_i}}{\eSem{\hV_i}} & \;\;(\text{otherwise}) \end{xbrace} }} \label{proof-4-nvtt-itrn-4.5} \end{gather} where only the branch with index $j$ can match \acta. Knowing that \hV can only have this form, by the definition of \afterFS we can deduce that \begin{gather} \afterF{\hV}{\acta}=\hV_j\sV \label{proof-4-nvtt-itrn-5} \end{gather} Hence, by applying rules \rtit{eRec}, \rtit{eSel} and \rtit{eTrn} on \eqref{proof-4-nvtt-itrn-3}, with the knowledge of \eqref{proof-4-nvtt-itrn-4.5} we know that $\eV'=\eSem{\hV_j\sV}$ and hence by \eqref{proof-4-nvtt-itrn-5} we can conclude that \begin{gather} \eV'=\eSem{\afterF{\hV}{\acta}} \label{proof-4-nvtt-itrn-6} \end{gather} and so we are done by \eqref{proof-4-nvtt-itrn-2}, \eqref{proof-4-nvtt-itrn-4} and \eqref{proof-4-nvtt-itrn-6}. 
\end{case}
\end{proof}
\subsection{Proving Transparency}
\label{sec:proof-transparency}
\begin{rtp}
$$\forall\pV{\,\in\,}\Sys,\hV{\,\in\,}\SHMLnf\cdot \pV{\,\hSatS\,}\hV \; \imp \; \pV\bisim\eI{\eSem{\hV}}{\pV} $$
To prove this lemma we show that relation \R (below) is a \emph{strong bisimulation relation}.
$$ \R\;\defeq\;\setdef{(\pV,\eI{\eSem{\hV}}{\pV})}{\pV{\,\hSatS\,}\hV} $$
\noindent Hence we must show that \R satisfies the following transfer properties for each $(\pV,\eI{\eSem{\hV}}{\pV}){\,\in\,}\R$:
\begin{enumerate}[\quad(a)]
\item if $\pV\traS{\actu}\pV'$ then $\eI{\eSem{\hV}}{\pV}\traS{\actu}S'$ and $(\pV',S')\in\R$
\item if $\eI{\eSem{\hV}}{\pV}\traS{\actu}S'$ then $\pV\traS{\actu}\pV'$ and $(\pV',S')\in\R$
\end{enumerate}
\noindent We prove $(a)$ and $(b)$ separately, assuming in both cases that $\pV{\,\hSatS\,}\hV$ as defined by relation \R. We conduct these proofs under the assumption that all our formulas are \emph{guarded}, \ie every occurrence of a logical variable \hVarX is always preceded by a modal necessity. It is well known that every \mucalc formula (a reformulation of \recHML) can be converted into a semantically equivalent guarded formula of the same logic (see \cite{Banieqbal1898,Walukiewicz2000}). This allows us to conduct the proofs for both $(a)$ and $(b)$ by mathematical induction on the number of maximal fixed-point declarations that occur at the \emph{topmost level}, as defined by the rules in \Cref{fig:top-level-max}. $\\[-5mm]\;$
\end{rtp}
\begin{figure}
$$ \begin{array}{rcl} \lenMax{\hTru} \; = \; \lenMax{\hFls} \; = \; \lenMax{\hVarX} \; = \; \lenMax{\hBigAndD{i\in\IndSet}\hNec{\actS_i}\hV_i } &=& 0 \\ \lenMax{\hMaxXF} &=& \lenMax{\hV}+1 \end{array} $$
\caption{The number of top-level maximal fixed points.}
\label{fig:top-level-max}
\end{figure}
\setcounter{equation}{0}
\begin{proof}[Proof for (a)]
We proceed by mathematical induction on $\lenMax{\hV}$.
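As a brief illustration of this measure (the particular instance chosen here is ours), consider a formula $\hMaxXF$ whose body is a conjunction of guarded necessities, \ie $\hV=\hBigAndD{i\in\IndSet}\hNec{\actS_i}\hV_i$. By the rules in \Cref{fig:top-level-max} we have
\begin{gather*}
\lenMax{\hMaxXF} \;=\; \lenMax{\hV}+1 \;=\; 0+1 \;=\; 1
\end{gather*}
irrespective of how many fixed points occur within the continuations $\hV_i$, since these are guarded by the modal necessities and thus do not count towards the topmost level. In particular, unfolding a guarded fixed point strictly decreases this measure, which is what the inductive cases below rely on.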
\begin{Cases}[\lenMax{\hFls}=\lenMax{\hVarX}=0]
Neither case applies since $\nexists \pV\cdot \pV\hSatS\hFls$ and, similarly, since $\hVarX$ is an open formula, $\nexists \pV\cdot \pV\hSatS\hVarX$.
\end{Cases}
\begin{case}[\lenMax{\hTru}=0]
We now assume that
\begin{gather}
\pV\hSatS\hTru \label{proof:trans-tt-1}\\
\pV\traS{\actu}\pV' \label{proof:trans-tt-2}
\end{gather}
Since $\actu\in\set{\actt,\acta}$, we must consider both cases.
\begin{itemize}
\item \lipicsHeader{\actu=\actt:} Since \actu=\actt, we can apply rule \rtit{iAsy} on \eqref{proof:trans-tt-2} and get
\begin{gather}
\eI{\eSem{\hTru}}{\pV}\traS{\actt}\eI{\eSem{\hTru}}{\pV'} \label{proof:trans-tt-3}
\end{gather}
as required. Also, since every process satisfies \hTru, we know that $\pV'\hSatS\hTru$, so that by the definition of \R we conclude
\begin{gather}
(\pV',\eI{\eSem{\hTru}}{\pV'})\in\R \label{proof:trans-tt-4}
\end{gather}
as required. This subcase is therefore done by \eqref{proof:trans-tt-3} and \eqref{proof:trans-tt-4}.
\item \lipicsHeader{\actu=\acta:} Since by rule \rtit{eId} we know that $\eIden\traS{\ioact{\acta}{\acta}}\eIden$, and since \actu=\acta, we can apply rule \rtit{iTrn} on \eqref{proof:trans-tt-2} and deduce
\begin{gather}
\eI{\eIden}{\pV}\traS{\acta}\eI{\eIden}{\pV'} \label{proof:trans-tt-5}
\end{gather}
Since \eSem{\hTru}=\eIden, we can refine \eqref{proof:trans-tt-5} as
\begin{gather}
\eI{\eSem{\hTru}}{\pV}\traS{\acta}\eI{\eSem{\hTru}}{\pV'} \label{proof:trans-tt-6}
\end{gather}
as required. Once again, since $\pV'\hSatS\hTru$, we can deduce
\begin{gather}
(\pV',\eI{\eSem{\hTru}}{\pV'})\in\R \label{proof:trans-tt-7}
\end{gather}
as required. This subcase is done by \eqref{proof:trans-tt-6} and \eqref{proof:trans-tt-7}.
\end{itemize} \end{case} {\newcommand{\hMaxXF}{\hBigAnd{i\in\IndSet}\hNec{\actSN{\pate_i}{\bV_i}}\hV_i} \newcommand{\rec{\rV}{\eSem{\hV}}}{\Big(\rec{\rVV}{\chBigI\begin{xbrace}{lc} \prf{\actSTD{\pate_i}{\bV_i}}{\rVV} &\quad (\text{if }\hV_i=\hFls) \\ \prf{\actSID{\pate_i}{\bV_i}}{\eSem{\hV_i}} &\quad (\text{otherwise}) \end{xbrace}}\Big)} \begin{case}[\lenMax{\hMaxXF}=0] Assume that \begin{gather} \pV\hSatS\hMaxXF \label{proof:trans-nec-1}\\ \pV\traS{\actu}\pV' \label{proof:trans-nec-2} \end{gather} Since $\actu\in\set{\actt,\acta}$, we must consider both cases. \begin{itemize} \item \lipicsHeader{\actu=\actt:} Since \actu=\actt, we can apply rule \rtit{iAsy} on \eqref{proof:trans-nec-2} and obtain \begin{gather} \eI{\eSem{\hMaxXF}}{\pV}\traS{\actt}\eI{\eSem{\hMaxXF}}{\pV'} \label{proof:trans-nec-3} \end{gather} as required. Since \actu=\actt, and since we know that \SHML is \actt-closed (see Proposition 3.8 in \cite{Aceto1999TestingHML}), from \eqref{proof:trans-nec-1} and \eqref{proof:trans-nec-2}, we can deduce that $\pV'\hSatS\hMaxXF$, so that by the definition of \R we conclude \begin{gather} (\pV',\eI{\eSem{\hMaxXF}}{\pV'})\in\R \label{proof:trans-nec-4} \end{gather} as required. This subcase is therefore done by \eqref{proof:trans-nec-3} and \eqref{proof:trans-nec-4}. \item \lipicsHeader{\actu=\acta:} Since $\actu=\acta$, from \eqref{proof:trans-nec-2} we know that \begin{gather} \pV\traS{\acta}\pV' \label{proof:trans-nec-5} \end{gather} Since the branches in our conjunction are all prefixed by disjoint symbolic actions, \ie $\bigdistinct{i\in\IndSet}\actSN{\pate_i}{\bV_i}$, we know that \emph{at most one} of the branches can match an action \acta. 
Hence, we consider two cases, namely:
\begin{itemize}
\item \lipicsHeader{No matching branches (\ie $ \forall i\in\IndSet\cdot\mtch{\actSN{\pate_i}{\bV_i}}{\acta}=\sVundef$):} Since $\eSem{\hMaxXF}=\rec{\rV}{\eSem{\hV}}$, and since none of the guarding symbolic transformations in the synthesised selection can match action \acta, we conclude that
\begin{gather}
\eSem{\hMaxXF}\ntraS{\acta} \label{proof:trans-nec-6}
\end{gather}
Since $\eSem{\hTru}=\eIden$, by \eqref{proof:trans-nec-5}, \eqref{proof:trans-nec-6} and rule \rtit{iTer} we thus know
\begin{gather}
\eI{\eSem{\hMaxXF}}{\pV}\traS{\acta}\eI{\eSem{\hTru}}{\pV'} \label{proof:trans-nec-7}
\end{gather}
as required. Also, since any process satisfies \hTru, we know that $\pV'\hSatS\hTru$, and so by the definition of \R we conclude that
\begin{gather}
(\pV',\eI{\eSem{\hTru}}{\pV'})\in\R \label{proof:trans-nec-8}
\end{gather}
as required. This subcase is therefore done by \eqref{proof:trans-nec-7} and \eqref{proof:trans-nec-8}.
\medskip
\item \lipicsHeader{One matching branch (\ie $ \exists j\in\IndSet\cdot\mtch{\actSN{\pate_j}{\bV_j}}{\acta}=\sV$):} From \eqref{proof:trans-nec-1} and by the definition of \hSatS we know that, for every index $i\in\IndSet$ and process $\pV''\in\Sys$, $(\pV\wtraS{\acta}\pV'' \text{ and } \mtchS{\actSN{\pate_i}{\bV_i}}{\acta}{=}\sV)$ \textsl{imply} $\pV''{\hSatS}\hV_i\sV$, and so, since $\exists j\in\IndSet\cdot\mtchS{\actSN{\pate_j}{\bV_j}}{\acta}{=}\sV$, and from \eqref{proof:trans-nec-5} we can deduce that
\begin{gather}
\pV'\hSatS\hV_j\sV \label{proof:trans-nec-9}
\end{gather}
Also, since $\mtchS{\actSN{\pate_j}{\bV_j}}{\acta}{=}\sV$, by rule \rtit{eTrn} we know that
\begin{gather}
\forall\eV_j,\pate'\cdot\eTrns{\pate_j}{\bV_j}{\pate'}{\eV_j} \traS{\ioact{\acta}{\pate'\sV}} \eV_j\sV \label{proof:trans-nec-10}
\end{gather}
By applying rules \rtit{eSel}, \rtit{eRec} on \eqref{proof:trans-nec-10} and then \eqref{proof:trans-nec-2} and \rtit{iTrn} we get
\begin{gather}
\forall\eV_j\cdot
\eI{\Big(\rec{\rVV}{\ch{(\chBig{k\in\IndSet\setminus\set{j}\hspace{-3mm} }\eTrns{\pate_k}{\bV_k}{\pate'_k}{\eV_k})}{(\eTrns{\pate_j}{\bV_j}{\pate'}{\eV_j})}}\Big)}{\pV}
\traS{\pate'\sV} \eI{\eV_j\sV}{\pV'} \label{proof:trans-nec-11}
\end{gather}
From \eqref{proof:trans-nec-11} and the definition of \eSem{-} we can infer that $\eV_j=\rVV$ and $\pate'=\actt$ when $\eV_j$ is derived from $\hV_j=\hFls$, or $\eV_j=\eSem{\hV_j}$ and $\pate'=\bnd{\pate_j}$ otherwise. By \eqref{proof:trans-nec-9} we can deduce that the former is \emph{false}, because $\hV_j=\hFls$ would contradict \eqref{proof:trans-nec-9}, and hence only the latter applies. So, since $\eSem{\hV_j\sV}=\eV_j\sV$ and $\pate_j\sV=\acta$ we have that
\begin{gather}
\forall\eV_j\cdot \eI{\Big(\rec{\rVV}{\ch{(\chBig{k\in\IndSet\setminus\set{j}\hspace{-3mm} }\eTrns{\pate_k}{\bV_k}{\pate'_k}{\eV_k})}{(\eTrns{\pate_j}{\bV_j}{\bnd{\pate_j}}{\eV_j})}}\Big)}{\pV}
\traS{\acta} \eI{\eSem{\hV_j\sV}}{\pV'} \label{proof:trans-nec-12}
\end{gather}
By \eqref{proof:trans-nec-12} and the definition of \eSem{-} we can thus conclude that
\begin{gather}
\eI{\eSem{\hMaxXF}}{\pV}\traS{\acta}\eI{\eSem{\hV_j\sV}}{\pV'} \label{proof:trans-nec-13}
\end{gather}
as required, and by \eqref{proof:trans-nec-9} and the definition of \R we conclude that
\begin{gather}
(\pV',\eI{\eSem{\hV_j\sV}}{\pV'})\in\R \label{proof:trans-nec-14}
\end{gather}
as required. Hence, this subcase is done by \eqref{proof:trans-nec-13} and \eqref{proof:trans-nec-14}.
\end{itemize}
\end{itemize}
\end{case}
}
{\newcommand{\hMaxXF}{\hMaxXF} \newcommand{\rec{\rV}{\eSem{\hV}}}{\rec{\rV}{\eSem{\hV}}}
\begin{case}[\lenMax{\hMaxXF}=l+1]
We start by assuming that
\begin{gather}
\pV\hSatS\hMaxXF \label{proof:trans-max-1}\\
\pV\traS{\actu}\pV' \label{proof:trans-max-2}
\end{gather}
Since $\actu\in\set{\actt,\acta}$, we must consider both cases.
\begin{itemize}
\item \lipicsHeader{\actu=\actt:} Since \actu=\actt, we can apply rule \rtit{iAsy} on \eqref{proof:trans-max-2} and deduce that
\begin{gather}
\eI{\eSem{\hMaxXF}}{\pV}\traS{\actt}\eI{\eSem{\hMaxXF}}{\pV'} \label{proof:trans-maxa-3}
\end{gather}
as required. Also, since \SHML is \actt-closed (see Proposition 3.8 in \cite{Aceto1999TestingHML}), by \eqref{proof:trans-max-1} and \eqref{proof:trans-max-2} we know that $\pV'\hSatS\hMaxXF$ as well. Hence, by the definition of \R we conclude
\begin{gather}
(\pV',\eI{\eSem{\hMaxXF}}{\pV'})\in\R \label{proof:trans-maxa-4}
\end{gather}
and so we are done by \eqref{proof:trans-maxa-3} and \eqref{proof:trans-maxa-4}.
\item \lipicsHeader{\actu=\acta:} Since $\actu=\acta$, from \eqref{proof:trans-max-2} we know that
\begin{gather}
\pV\traS{\acta}\pV' \label{proof:trans-maxb-3}
\end{gather}
and by \eqref{proof:trans-max-1} and the definition of \hSatS we know
\begin{gather}
\pV\hSatS\hVMaxXFSub \label{proof:trans-maxb-4}
\end{gather}
Since we assume that logical variables (\eg \hVarX) are \emph{guarded}, by the definition of \lenMax{\hV} we know that whenever a maximal fixed point \hMaxXF is unfolded into \hVMaxXFSub, the number of top-level maximal fixed points decreases by 1, and so, since $\lenMax{\hMaxXF}=l+1$, we infer that
\begin{gather}
\lenMax{\hVMaxXFSub}=l \label{proof:trans-maxb-5}
\end{gather}
Hence, by \eqref{proof:trans-maxb-3}, \eqref{proof:trans-maxb-4}, \eqref{proof:trans-maxb-5} and the inductive hypothesis we can deduce that
\begin{gather}
\exists\pVV'\cdot \eI{\eSem{\hVMaxXFSub}}{\pV} \traS{\acta} \pVV' \label{proof:trans-maxb-6} \\
(\pV',\pVV')\in\R \label{proof:trans-maxb-7}
\end{gather}
By applying the definition of \eSem{-} on \eqref{proof:trans-maxb-6}, followed by rule \rtit{iTrn} we get
\begin{gather}
\exists\pVV'\cdot \eSem{\hV}\Sub{\rec{\rV}{\eSem{\hV}}}{\rV} \traS{\ioact{\acta}{\acta}}\eV \qquad \text{ where }\pVV'=\eI{\eV}{\pV'} \label{proof:trans-maxb-8}
\end{gather}
By applying
rule \rtit{eRec} on \eqref{proof:trans-maxb-8}, followed by \eqref{proof:trans-maxb-3} and \rtit{iTrn} we get
\begin{gather}
\exists\pVV'\cdot \eI{\rec{\rV}{\eSem{\hV}}}{\pV} \traS{\acta}\pVV' \label{proof:trans-maxb-9}
\end{gather}
and so, we can apply \eSem{-} on \eqref{proof:trans-maxb-9} and obtain
\begin{gather}
\exists\pVV'\cdot \eI{\eSem{\hMaxXF}}{\pV} \traS{\acta}\pVV' \label{proof:trans-maxb-10}
\end{gather}
as required. We are therefore done by \eqref{proof:trans-maxb-7} and \eqref{proof:trans-maxb-10}.
\end{itemize}
\end{case}
}
\end{proof}
\begin{proof}[Proof for (b)]
The proof proceeds by mathematical induction on $\lenMax{\hV}$.
\begin{Cases}[\lenMax{\hFls}=\lenMax{\hVarX}=0]
Neither case applies since $\nexists \pV\cdot \pV\hSatS\hFls$ and, similarly, since $\hVarX$ is an open formula, $\nexists \pV\cdot \pV\hSatS\hVarX$.
\end{Cases}
\begin{case}[\lenMax{\hTru}=0]
Assume that
\begin{gather}
\pV\hSatS\hTru \label{proof:trans-b-tt-1}\\
\eI{\eSem{\hTru}}{\pV}\traS{\actu}\pVV' \label{proof:trans-b-tt-2}
\end{gather}
Since $\actu\in\set{\actt,\acta}$, we must consider both cases.
\begin{itemize}
\item \lipicsHeader{\actu=\actt:} Since \actu=\actt, the transition in \eqref{proof:trans-b-tt-2} can be performed either via \rtit{iTrn} or \rtit{iAsy}. We must therefore consider both cases.
\begin{itemize}
\item \lipicsHeader{\rtit{iAsy}:} From rule \rtit{iAsy} and \eqref{proof:trans-b-tt-2} we thus know that $\pVV'=\eI{\eV}{\pV'}$ and that $\eV=\eSem{\hTru}$ since this remains unaffected by the transition, so that $\pV\traS{\actt}\pV'$ as required. Also, since every process satisfies \hTru, we know that $\pV'\hSatS\hTru$ as well, and so we are done since by the definition of \R we know that $(\pV',\eI{\eSem{\hTru}}{\pV'})\in\R$.
\item \lipicsHeader{\rtit{iTrn}:} From rule \rtit{iTrn} and \eqref{proof:trans-b-tt-2} we know that $\pVV'=\eI{\eV}{\pV'}$, $\pV\traS{\acta}\pV'$ and that
\begin{gather}
\eSem{\hTru}\traS{\ioact{\acta}{\actt}}\eV \label{proof:trans-b-tt-3}
\end{gather}
Since $\eSem{\hTru}=\eIden$, by rule \rtit{eId} we know that \eqref{proof:trans-b-tt-3} is \emph{false} and hence this case does not apply.
\end{itemize}
\item \lipicsHeader{\actu=\acta:} Since \actu=\acta, the transition in \eqref{proof:trans-b-tt-2} can be performed either via \rtit{iTrn} or \rtit{iTer}. We consider both cases.
\begin{itemize}
\item \lipicsHeader{\rtit{iTer}:} This case does not apply since by applying \rtit{iTer} on \eqref{proof:trans-b-tt-2} we know that $\eSem{\hTru}\ntraS{\acta}$, which is \emph{false} since $\eSem{\hTru}=\eIden$ and rule \rtit{eId} states that for all \acta, $\eIden\traS{\ioact{\acta}{\acta}}\eIden$, thus leading to a contradiction.
\item \lipicsHeader{\rtit{iTrn}:} By applying rule \rtit{iTrn} on \eqref{proof:trans-b-tt-2} we know that $\pVV'=\eI{\eV}{\pV'}$ such that
\begin{gather}
\pV\traS{\acta}\pV' \label{proof:trans-b-tt-4} \\
\eSem{\hTru}\traS{\ioact{\acta}{\acta}}\eV \label{proof:trans-b-tt-5}
\end{gather}
Since $\eSem{\hTru}=\eIden$, by applying rule \rtit{eId} to \eqref{proof:trans-b-tt-5} we know that $\eV=\eIden=\eSem{\hTru}$, meaning that $\pVV'=\eI{\eSem{\hTru}}{\pV'}$. Hence, since every process satisfies \hTru we know that $\pV'\hSatS\hTru$, so that by the definition of \R we conclude
\begin{gather}
(\pV',\eI{\eSem{\hTru}}{\pV'})\in\R \label{proof:trans-b-tt-6}
\end{gather}
Hence, we are done by \eqref{proof:trans-b-tt-4} and \eqref{proof:trans-b-tt-6}.
\end{itemize} \end{itemize} \end{case} {\newcommand{\hMaxXF}{\hBigAnd{i\in\IndSet}\hNec{\actSN{\pate_i}{\bV_i}}\hV_i} \newcommand{\rec{\rV}{\eSem{\hV}}}{\Big(\rec{\rVV}{\chBigI\begin{xbrace}{lc} \prf{\actSTD{\pate_i}{\bV_i}}{\rVV} &\quad (\text{if }\hV_i=\hFls) \\ \prf{\actSID{\pate_i}{\bV_i}}{\eSem{\hV_i}} &\quad (\text{otherwise}) \end{xbrace}}\Big)} \newcommand{\Big(\chBigI\begin{xbrace}{lc} \prf{\actSTD{\pate_i}{\bV_i}}{\rVV} &\quad (\text{if }\hV_i=\hFls) \\ \prf{\actSID{\pate_i}{\bV_i}}{\eSem{\hV_i}} &\quad (\text{otherwise}) \end{xbrace}\Big)\sub{\eV}{\rVV}}{\Big(\chBigI\begin{xbrace}{lc} \prf{\actSTD{\pate_i}{\bV_i}}{\rVV} &\quad (\text{if }\hV_i=\hFls) \\ \prf{\actSID{\pate_i}{\bV_i}}{\eSem{\hV_i}} &\quad (\text{otherwise}) \end{xbrace}\Big)\sub{\eV}{\rVV}} \begin{case}[\lenMax{\hMaxXF}=0] We assume that \begin{gather} \pV\hSatS\hMaxXF \label{proof:trans-nec-b-1}\\ \eI{\eSem{\hMaxXF}}{\pV}\traS{\actu}\pVV' \label{proof:trans-nec-b-2} \end{gather} Since $\actu\in\set{\actt,\acta}$, we must consider both cases. \begin{itemize} \item \lipicsHeader{\actu=\actt:} Since \actu=\actt, from \eqref{proof:trans-nec-b-2} we know that \begin{gather} \eI{\eSem{\hMaxXF}}{\pV}\traS{\actt}\pVV' \label{proof:trans-nec-b-3} \end{gather} The \actt-transition in \eqref{proof:trans-nec-b-3} can be performed either via \rtit{iTrn} or \rtit{iAsy}; we thus consider both cases. \begin{itemize} \item \lipicsHeader{\rtit{iAsy}:} As we assume that the reduction in \eqref{proof:trans-nec-b-3} is the result of rule \rtit{iAsy}, we know that $\pVV'=\eI{\eSem{\hMaxXF}}{\pV'}$ such that \begin{gather} \pV\traS{\actt}\pV' \label{proof:trans-nec-b-4} \end{gather} as required. 
Also, since \SHML is \actt-closed (see Proposition 3.8 in \cite{Aceto1999TestingHML}), by \eqref{proof:trans-nec-b-1} and \eqref{proof:trans-nec-b-4} we can deduce that $\pV'\hSatS\hMaxXF$ as well, so that by the definition of \R we conclude that \begin{gather} (\pV',\eI{\eSem{\hMaxXF}}{\pV'})\in\R \label{proof:trans-nec-b-5} \end{gather} and so we are done by \eqref{proof:trans-nec-b-4} and \eqref{proof:trans-nec-b-5}. \item \lipicsHeader{\rtit{iTrn}:} By assuming that reduction \eqref{proof:trans-nec-b-3} results from \rtit{iTrn}, we know that $\pVV'=\eI{\eV'}{\pV'}$ such that \begin{gather} \pV\traS{\acta}\pV' \label{proof:trans-nec-b-5.1} \\ \eSem{\hMaxXF}\traS{\ioact{\acta}{\actt}}\eV' \label{proof:trans-nec-b-6} \end{gather} By \eqref{proof:trans-nec-b-6} and the definition of \eSem{-} we know that \begin{gather} (\eV=\rec{\rV}{\eSem{\hV}}) \traS{\ioact{\acta}{\actt}} \eV' \label{proof:trans-nec-b-7} \end{gather} By applying rule \rtit{eRec} on \eqref{proof:trans-nec-b-7} we know \begin{gather} \Big(\chBigI\begin{xbrace}{lc} \prf{\actSTD{\pate_i}{\bV_i}}{\rVV} &\quad (\text{if }\hV_i=\hFls) \\ \prf{\actSID{\pate_i}{\bV_i}}{\eSem{\hV_i}} &\quad (\text{otherwise}) \end{xbrace}\Big)\sub{\eV}{\rVV} \traS{\ioact{\acta}{\actt}} \eV' \label{proof:trans-nec-b-8} \end{gather} From \eqref{proof:trans-nec-b-8} we know that the input action \acta is suppressed into a \actt, which is only possible when \acta matches a branch of the form \prf{\actSTD{\pate_j}{\bV_j}}{\rVV} for some $j\in\IndSet$, and so we know that \begin{gather} \exists j\in\IndSet\cdot\mtchS{\actSN{\pate_j}{\bV_j}}{\acta}=\sV \label{proof:trans-nec-b-8.1} \end{gather} By the definition of \eSem{-} we however know that this matching branch was derived from a conjunct subformula of the form $\hNec{\actSN{\pate_j}{\bV_j}}\hFls$, and so we know that \begin{gather} \pVV'=\eI{\eSem{\hFls}}{\pV'} \label{proof:trans-nec-b-9} \end{gather} According to the definition of \R, for the pair $(\pV',\pVV')$ to be in \R we must now show that $\pV'\hSatS\hFls$, which is obviously \emph{false} and hence contradicts assumption \eqref{proof:trans-nec-b-1}. Precisely, this contradiction occurs since, by the definition of \hSatS, when $\pV\hSatS\hMaxXF$, the facts $\pV\wtraS{\acta}\pV'$ (confirmed by \eqref{proof:trans-nec-b-5.1}) and $\exists j\in\IndSet\cdot\mtchS{\actSN{\pate_j}{\bV_j}}{\acta}{=}\sV$ (confirmed by \eqref{proof:trans-nec-b-8.1}) imply $\pV'\hSatS\hV_j\sV$, which is impossible since in this case $\hV_j\sV{=}\hFls$. Hence, this subcase does not apply. \end{itemize} \item \lipicsHeader{\actu=\acta:} Since \actu=\acta, by \eqref{proof:trans-nec-b-2} and the definition of \eSem{-} we know that \begin{gather} \eI{\rec{\rV}{\eSem{\hV}}}{\pV}\traS{\acta}\pVV' \label{proof:trans-nec-b-10} \end{gather} Since the transition in \eqref{proof:trans-nec-b-10} can be performed via \rtit{iTer} or \rtit{iTrn}, we consider both possibilities. \begin{itemize} \item \lipicsHeader{\rtit{iTer}:} As we assume that \eqref{proof:trans-nec-b-10} results from rule \rtit{iTer}, we know that \begin{gather} \pV\traS{\acta}\pV' \label{proof:trans-nec-b-11} \end{gather} as required, and that $\pVV'=\eI{\eIden}{\pV'}=\eI{\eSem{\hTru}}{\pV'}$ since $\eSem{\hTru}=\eIden$. Consequently, as every process satisfies \hTru, we know that $\pV'\hSatS\hTru$ and so by the definition of \R we can conclude that \begin{gather} (\pV',\eI{\eSem{\hTru}}{\pV'})\in\R \label{proof:trans-nec-b-12} \end{gather} and so we are done by \eqref{proof:trans-nec-b-11} and \eqref{proof:trans-nec-b-12}.
\item \lipicsHeader{\rtit{iTrn}:} By assuming that \eqref{proof:trans-nec-b-10} is obtained from rule \rtit{iTrn} we know that $\pVV'=\eI{\eV'}{\pV'}$ such that \begin{gather} \pV\traS{\acta}\pV' \label{proof:trans-nec-b-13} \end{gather} as required, and that \begin{gather} \rec{\rV}{\eSem{\hV}}\traS{\ioact{\acta}{\acta}}\eV' \label{proof:trans-nec-b-14} \end{gather} By applying rules \rtit{eRec} and \rtit{eSel} on \eqref{proof:trans-nec-b-14} we know \begin{gather} \exists j\in\IndSet\cdot \eTrns{\pate_j}{\bV_j}{\pate'}{\eV} \traS{\ioact{\acta}{\acta}}\eV' \label{proof:trans-nec-b-15} \end{gather} Since the transition in \eqref{proof:trans-nec-b-15} does not modify the given action \acta, we can infer that $\pate'=\bnd{\pate_j}$ and that $\eV=\eSem{\hV_j}$ where $\hV_j\neq\hFls$, so that when we apply rule \rtit{eTrn} to \eqref{proof:trans-nec-b-15} we can deduce that \begin{gather} \eV'=\eSem{\hV_j\sV} \quad (\text{and hence } \pVV'=\eI{\eSem{\hV_j\sV}}{\pV'}) \label{proof:trans-nec-b-16}\\ \mtchS{\actSN{\pate_j}{\bV_j}}{\acta}=\sV \label{proof:trans-nec-b-17} \end{gather} By applying the definition of \hSatS on \eqref{proof:trans-nec-b-1} we know that \begin{gather} \forall i\in\IndSet,\pV''\cdot \text{if } \pV\wtraS{\acta}\pV'' \text{ and } \mtchS{\actSN{\pate_i}{\bV_i}}{\acta}=\sV \text{ then } \pV''\hSatS\hV_i\sV \label{proof:trans-nec-b-18} \end{gather} Hence, from \eqref{proof:trans-nec-b-13}, \eqref{proof:trans-nec-b-17} and \eqref{proof:trans-nec-b-18} we can deduce that $\pV'\hSatS\hV_j\sV$, and so by the definition of \R we can conclude that \begin{gather} (\pV',\eI{\eSem{\hV_j\sV}}{\pV'})\in\R \label{proof:trans-nec-b-19} \end{gather} and so we are done by \eqref{proof:trans-nec-b-13} and \eqref{proof:trans-nec-b-19}.
\end{itemize} \end{itemize} \end{case} } {\newcommand{\hMaxXF}{\hMaxXF} \newcommand{\rec{\rV}{\eSem{\hV}}}{\rec{\rV}{\eSem{\hV}}} \begin{case}[\lenMax{\hMaxXF}=l+1] Assume that \begin{gather} \pV\hSatS\hMaxXF \label{proof:trans-max-b-1}\\ \eI{\eSem{\hMaxXF}}{\pV}\traS{\actu}\pVV' \label{proof:trans-max-b-2} \end{gather} Since the reduction in \eqref{proof:trans-max-b-2} can be performed as a result of rules \rtit{iAsy}, \rtit{iTer} and \rtit{iTrn}, we consider each case separately. \begin{itemize} \item \lipicsHeader{\rtit{iAsy}:} From rule \rtit{iAsy} and \eqref{proof:trans-max-b-2} we get that $\actu=\actt$ and that \begin{gather} \pV\traS{\actt}\pV' \label{proof:trans-max-b-4} \end{gather} as required, and that $\pVV'=\eI{\eSem{\hMaxXF}}{\pV'}$. Hence, since \SHML is \actt-closed (as advocated by Proposition 3.8 in \cite{Aceto1999TestingHML}) by \eqref{proof:trans-max-b-1} and \eqref{proof:trans-max-b-4} we deduce that $\pV'\hSatS\hMaxXF$ as well, and so by the definition of \R we conclude \begin{gather} (\pV',\eI{\eSem{\hMaxXF}}{\pV'})\in\R \label{proof:trans-max-b-5} \end{gather} as required. We are therefore done by \eqref{proof:trans-max-b-4} and \eqref{proof:trans-max-b-5}. \item \lipicsHeader{\rtit{iTer}:} If we assume that \eqref{proof:trans-max-b-2} results from rule \rtit{iTer}, we get that $\actu=\acta$ and that \begin{gather} \pV\traS{\acta}\pV' \label{proof:trans-max-b-6} \end{gather} as required, and that $\pVV'=\eI{\eIden}{\pV'}=\eI{\eSem{\hTru}}{\pV'}$, since $\eSem{\hTru}=\eIden$. Hence, since \hTru is \emph{always satisfied}, we know that $\pV'\hSatS\hTru$ and so by the definition of \R we can conclude \begin{gather} (\pV',\eI{\eSem{\hTru}}{\pV'})\in\R \label{proof:trans-max-b-7} \end{gather} Hence, we are done by \eqref{proof:trans-max-b-6} and \eqref{proof:trans-max-b-7}. 
%
\item \lipicsHeader{\rtit{iTrn}:} By assuming that the reduction in \eqref{proof:trans-max-b-2} was performed via rule \rtit{iTrn} and by the definition of \eSem{-}, we know that \begin{gather} \pV\traS{\acta}\pV' \label{proof:trans-max-b-8}\\ \rec{\rV}{\eSem{\hV}}\traS{\ioact{\acta}{\actu}}\eV \qquad (\text{where }\pVV'=\eI{\eV}{\pV'}) \label{proof:trans-max-b-9} \end{gather} By applying rule \rtit{eRec} to \eqref{proof:trans-max-b-9}, along with the definition of \eSem{-}, we can deduce that $\eSem{\hVMaxXFSub}\traS{\ioact{\acta}{\actu}}\eV$, so that by \eqref{proof:trans-max-b-8} and \rtit{iTrn} we have that \begin{gather} \eI{\eSem{\hVMaxXFSub}}{\pV}\traS{\actu}\pVV' \label{proof:trans-max-b-10} \end{gather} By \eqref{proof:trans-max-b-1} and the definition of \hSatS we know that \begin{gather} \pV\hSatS\hVMaxXFSub \label{proof:trans-max-b-11} \end{gather} Since we assume that logical variables (\eg \hVarX) are \emph{guarded}, by the definition of \lenMax{\hV} we know that whenever a maximal fixed point \hMaxXF gets unfolded into \hVMaxXFSub, the number of top-level maximal fixed points decreases by 1, and so, since $\lenMax{\hMaxXF}=l+1$, we infer that \begin{gather} \lenMax{\hVMaxXFSub}=l \label{proof:trans-max-b-12} \end{gather} Hence, by \eqref{proof:trans-max-b-10}, \eqref{proof:trans-max-b-11}, \eqref{proof:trans-max-b-12} and the inductive hypothesis we conclude that $\pV\traS{\actu}\pV'$ and $(\pV',\pVV')\in\R$ as required, and so we are done.
\end{itemize} \end{case} } \vspace{-5mm} \end{proof} \medskip \section{Introduction} \label{sec:intro} \input{sections/intro.tex} \section{Preliminaries} \label{sec:prelim} \input{sections/logic.tex} \section{An Operational Model for Enforcement} \label{sec:enf-model} \input{sections/model.tex} \section{Enforceability} \label{sec:enforceability} \input{sections/enforceability.tex} \section{Synthesising Suppression Enforcers} \label{sec:synthesis} \input{sections/synthesis.tex} \section{Alternative Transparency Enforcement} \label{sec:strong-enforceability} \input{sections/strong-enf.tex} \section{Conclusion} \label{sec:conc} \input{sections/conc.tex} \subsubsection{Ensuring Deterministic Behaviour of the Synthesized Enforcers}
\section{Introduction} \label{sec:intro} Everyday processes occur in such a way as to suggest an obvious intuitive difference between the past and the future. One of the great mysteries of physics and, in particular, the metaphysics of time is to explain the existence of this time-asymmetry despite the symmetry of the known microscopic theories of physics under an appropriate time reversal operation. Ludwig Boltzmann provided a proposal for such an explanation that seems to work for everyday processes.\footnote{See \cite{sep-statphys-Boltzmann}.} This proposal placed the burden of explanation not on the nature of the fundamental laws but on the nature of the initial state. The time-symmetry of the laws is then broken by the asymmetrical restriction to possible models that have highly atypical initial (but not final) states. In this way, Boltzmann attempted to explain why one might readily expect a cup of coffee to fall and shatter onto the ground but would not expect a mess of coffee and shards of cup to reassemble themselves. Because the cup of coffee is a highly unusual state in the space of possible ways that the constituents of the cup and coffee could be arranged, it is more typical to see the pieces scatter haphazardly than to see them reassemble as a cup of coffee. While this kind of explanation works reasonably well for simple thermodynamic systems, complications arise when attempting to apply this strategy to the universe as a whole. Evidence from modern cosmology that the earliest known states of the universe appear to have extremely low entropy seems to have improved the situation. Positing an unimaginably atypical past state for the entire universe, a so-called \emph{Past Hypothesis (PH)} \citep{albert2009time}, might then be used to iteratively provide an explanation for why nested subsystems of the universe --- such as a coffee cup in a room in a city on a planet, and so on --- should individually be expected to start off in atypical states.
Early versions of the PH date back to Boltzmann himself \citeyearpar{boltzmann2012suicide}, and comprehensive improvements making use of modern lessons from cosmology have been advanced most notably by Roger Penrose \citeyearpar{Penrose:1979WCH,penrose1994second}, Joel Lebowitz \citeyearpar{lebowitz1993boltzmann}, Shelly Goldstein \citeyearpar{goldstein2001boltzmann,goldstein2004boltzmann} and Huw Price \citeyearpar{price1997book,price2002boltzmann,price2004origins}. A well-known formulation has been advocated in \citet{albert2009time}, where the phrase `Past Hypothesis' was coined after an initial proposal by Richard Feynman \citeyearpar[p.116]{feynman2017character}. The status of the PH remains controversial: it is not difficult to find both glowing appraisals and scathing criticism. Barry Loewer rates the problem of time-asymmetry as ``among the most important questions in the metaphysics of science'' \citep{loewer2012two} and the PH as ``the most promising approach to reductive accounts of time's arrows''. Huw Price rates the discovery of the low entropy past ``one of the most important [achievements] in the entire history of physics'' \citeyearpar{price2004origins}. Despite these grand claims, criticism abounds. John Earman \citeyearpar{earman2006past} puts it bluntly: \begin{quote} This dogma, I contend, is ill-motivated and ill-defined, and its implementation consists mainly in furious hand waving and wishful thinking. In short, it is (to borrow a phrase from Pauli) not even false. \end{quote} \cite{Wald:2012zf} deliver a scathing critique of the basic technical premises of the idea, identifying ``a number of serious difficulties in'' attempting to formulate concrete implementations of the proposal. The purpose of this paper is to assess and extend existing criticism and introduce a particularly troubling dilemma in order to argue that the PH faces disturbing new difficulties.
First, we will provide a comprehensive analysis of existing criticism of the PH for the purpose of assessing its status. Three broad categories of criticism are identified and listed at the beginning of \S\ref{sec:deconstructing_the_argument}. These categories provide a formal scheme for describing and evaluating different criticisms of the PH that have been advanced in the literature. To add precision to this process, we will start in \S\ref{sec:the_past_hypothesis} by giving a modern presentation of the arguments motivating the PH and identify a list of important conditions (in \S\ref{sub:key_assumptions_of_the_past_hypothesis}) that underlie these arguments. We will then analyze several examples of criticism, taken as exemplars, in each category by identifying the specific conditions that each criticism puts into question. While this list of criticisms is not meant to be exhaustive and no single form of criticism should be seen as providing grounds to reject the entire proposal, when taken together these objections are sufficient to raise serious concerns regarding the PH. The resulting analysis already paints a rather grim picture for the prospects of formulating a PH in an unambiguous way using sound mathematical and physical principles. One common response to such objections is that they amount merely to an unreasonable insistence on technical rigour given the immense mathematical difficulties associated with defining measures in general relativity. In response to such objections, we show in \S\ref{sec:symmetries_and_measure_ambiguities} that the PH encounters a troubling dilemma that persists even if all such technical concerns are removed. This dilemma is an uncomfortable choice between a loss of explanatory power --- the \emph{first horn} (see \S\ref{sub:the_origin_of_measure_ambiguities_in_cosmology}) --- and the breaking of a gauge symmetry --- the \emph{second horn} (see \S\ref{sub:symmetry_and_ambiguity}).
To establish this dilemma, we begin by using the analysis of \S\ref{sec:the_past_hypothesis} and \ref{sec:deconstructing_the_argument} to describe the first horn. In \S\ref{sec:the_past_hypothesis} we show that it is essential to the arguments of the PH to provide a justification for the measure used in the required typicality argument. Then in \S\ref{sec:deconstructing_the_argument} and \S\ref{sub:the_origin_of_measure_ambiguities_in_cosmology} we argue that the existence of a unique time-independent measure on the cosmological state space is essential to the explanatory claims of the PH. In \S\ref{sub:dynamical_similarity_in_the_universe} we show that the unique time-independent measure is not invariant under a particular cosmological symmetry called \emph{dynamical similarity}. Using this, we establish the second horn of the dilemma in \S\ref{sub:symmetry_and_ambiguity} by arguing that a failure of the measure to be invariant under this symmetry introduces a distinction without difference by over-counting empirically indistinguishable states. This leads to the following dilemma: either reject a time-independent measure and undermine the explanatory basis for the PH (horn 1) or introduce a distinction without difference by breaking dynamical similarity (horn 2). \section{The Past Hypothesis} \label{sec:the_past_hypothesis} In this section we will first provide a modern outline of Boltzmann-style explanations of time-asymmetry, \S\ref{sec:preliminaries}, and then use this framework to illustrate the basic logic of the Past Hypothesis, \S\ref{sub:the_past_hypothesis}. We compile a list (\S\ref{sub:key_assumptions_of_the_past_hypothesis}) of conditions necessary for the arguments of the PH collected from \S\ref{sub:the_past_hypothesis}. 
\subsection{Boltzmannian explanations of time-asymmetry} \label{sec:preliminaries} In the Boltzmannian reasoning, the ultimate goal is to explain, within a given system, the time-asymmetry of some macroscopic processes in terms of the fundamentally time-symmetric microscopic processes that underlie it. The main formal ingredients of this procedure therefore involve a specification of the macro- and micro-states of the system, a particular reductive map between them, and a way to describe their behavior. This is usually achieved in the context of the Hamiltonian formalism. In this formalism, the micro-states of the systems in question are given in terms of representations of the configurations of the microscopic constituents of the system and their states of motion. These are expressed as generalized position and momentum variables, formally represented by a symplectic manifold, $\Gamma$, that specifies the \emph{phase space} of the system. A phase space of this kind has a number of interesting mathematical properties. Of central importance is the existence of a privileged measure, called the \emph{Liouville measure} $\mu_L(\Sigma)$, that can be used to assign weights to arbitrary regions $\Sigma \subseteq \Gamma$. The Liouville measure is singled out by its rather remarkable symmetry properties that will be discussed in detail below. Concretely, the Liouville measure is the integral of the $n^\text{th}$ power of the symplectic form, where $n$ is half the dimension of $\Gamma$. In \emph{Darboux} coordinates $(q_i,p_i)$ where $\pb {q_i}{p_j} = \delta_{ij}$, we have $\mu_L(\Sigma) = \int_\Sigma \prod_{i = 1}^n \de p_i\, \de q_i $ (i.e., $\mu_L$ is the Lebesgue measure on $\Gamma$ in these coordinates). For systems with infinitely many degrees of freedom or where the range of positions and momenta is infinite, there may be mathematical difficulties in precisely formulating this measure.
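To make the Liouville weight concrete, consider a standard textbook example (included purely for illustration, and not drawn from the cosmological setting discussed later): a one-dimensional harmonic oscillator with Hamiltonian $H = p^2/2m + m\omega^2 q^2/2$. The region $\Sigma_E$ enclosed by the energy surface $H = E$ is an ellipse in the $(q,p)$ plane with semi-axes $\sqrt{2E/(m\omega^2)}$ and $\sqrt{2mE}$, so its Liouville weight is simply its area,
\begin{equation}
	\mu_L(\Sigma_E) = \int_{\Sigma_E} \de p\, \de q = \pi \sqrt{2mE}\,\sqrt{\frac{2E}{m\omega^2}} = \frac{2\pi E}{\omega}\,,
\end{equation}
a finite quantity precisely because the positions and momenta accessible below energy $E$ are bounded. The divergences alluded to above arise when no such bound is available.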
The first set of relevant conditions for applying the Boltzmannian logic is therefore that there exists some way of writing a mathematically precise (Condition-\ref{assumption:rigor}) and empirically unambiguous (Condition-\ref{assumption:uniqueness}) measure $\mu$ on $\Gamma$. (Note that this does not necessarily have to be the Liouville measure.) With a suitable measure in hand one can assign weights to arbitrary regions in phase space. These weights can be taken to define different notions of \emph{typicality} for these regions. For example, one can say that a particular region $A$ is \emph{typical} on phase space if its weight as determined by $\mu$ is sufficiently large with respect to the weight of phase space itself: \begin{equation} \frac{\mu(\Gamma) - \mu(A)}{\mu(\Gamma)} \ll 1\,. \end{equation} In general, a set $S$ is said to be typical with respect to some property $P$ and measure $\mu$ if its weight according to $\mu$ is large as compared with all other sets that possess the property $P$ \citep{frigg2009typicality}. Clearly, any notion of typicality requires some interpretation for the weights provided by $\mu$ in order to have any meaning. For the purposes of Boltzmann's argument, we will see below that it will be necessary to interpret the weight $\mu(\Sigma)$ as the relative likelihood of finding the system in a particular region $\Sigma$ (as opposed to somewhere else in $\Gamma$) at any given time. We identify this as an additional requirement (Condition-\ref{assumption:Boltzmann 2}) of the formalism. The next formal step is to define the macro-states of a system. Physically these correspond to macroscopic states of the system such as temperature, volume, pressure, etc. Formally they are represented by some macro-state space $M$ which must have a (much) smaller dimension than $\Gamma$. 
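The strength of such typicality claims in systems with many constituents can be illustrated with an elementary example (ours, and not part of the Boltzmannian literature reviewed here): take $N$ independent two-state constituents with the counting measure on the $2^N$ micro-states, and let $P$ be the property that the fraction $n_1/N$ of constituents in the first state lies within $\epsilon$ of $1/2$. A Hoeffding-type estimate gives
\begin{equation}
	\frac{\#\set{\text{micro-states with } |n_1/N - 1/2| > \epsilon}}{2^N} \le 2 e^{-2N\epsilon^2}\,,
\end{equation}
so for $N \sim 10^{23}$ and any macroscopically resolvable $\epsilon$, the `balanced' macro-state is typical in exactly the sense just defined, to a fantastically good approximation.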
Because Boltzmann was usually considering closed systems where the total energy $E$ is preserved, it is customary to consider states restricted to constant energy surfaces $\Gamma_E = \Gamma|_{E = \text{constant}}$ (i.e., the micro-canonical ensemble). In general many microscopic states will be indistinguishable from each other at the macroscopic level. This indistinguishability is modeled as a projection from $\Gamma_E$ to $M$. The micro-states identified under this projection define a partitioning of $\Gamma_E$ into the partitions $\Gamma_m$, where $m \in M$ ranges over all macro-states in $M$. These partitions represent equivalence classes of macroscopically indistinguishable micro-states. In order for these to be meaningful physically, there must exist some epistemologically motivated coarse-graining procedure that realizes this projection. For example, if the macroscopic variable in question is the temperature, then the temperature must be a well-defined quantity. We identify this requirement with a further condition (Condition-\ref{assumption:epistomology}). With these ingredients in hand it is now possible to define the \emph{Boltzmann entropy} (from now on called the `entropy' unless otherwise stated) of a particular macro-state $m$ as the logarithm of the Liouville weight of the partition $\Gamma_m$:\footnote{$k_B$ fixes the units of $S_\text{B}$.} \begin{equation}\label{eq:entropy def} S_\text{B} = k_\text{B} \log[ \mu_L(\Gamma_{m}) ]\,. \end{equation} The last formal ingredient describes the behavior of the system. Consider representing a single history of the system by a curve $\gamma$ in $\Gamma$ as in Fig~\ref{fig:phase space}. The dynamics of an entire region can then be understood in terms of a collection of curves or a \emph{flow} where each point in the region is mapped to another neighboring point on $\Gamma$. 
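As a check on definition \eqref{eq:entropy def}, consider the textbook free expansion of an ideal gas (a standard computation, independent of the cosmological application below): for $N$ ideal-gas particles whose accessible spatial volume is doubled, say by removing a partition, the momentum integrals in $\mu_L$ are unchanged while each of the $N$ configuration integrals doubles, so the weight of the new macro-state $\Gamma_{m'}$ exceeds that of the old macro-state $\Gamma_m$ by a factor of $2^N$ and
\begin{equation}
	\Delta S_\text{B} = k_\text{B} \log\!\left[\frac{\mu_L(\Gamma_{m'})}{\mu_L(\Gamma_{m})}\right] = N k_\text{B} \log 2\,,
\end{equation}
in agreement with the familiar thermodynamic entropy of free expansion.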
For systems where the energy is conserved, this flow can be expressed mathematically in terms of a single phase space function, $H$, called the \emph{Hamiltonian} of the system. A theorem of primary importance due to \cite{liouville1838note} shows that the flow generated by \emph{any} choice of Hamiltonian function is guaranteed to preserve the Liouville measure. An immediate corollary of this theorem is that, up to a constant, the Liouville measure is the unique (smooth) measure preserved by any choice of Hamiltonian.\footnote{Proof: Formally, Liouville's theorem implies $\mathcal L_{\chi_H} \mu = 0\,,\forall H: \Gamma \mapsto \mathbbm R$ where $\mu = \omega^n$ and $\omega$ is the symplectic 2-form on $\Gamma$ and the vector field $\chi_H$ is determined via $\de H = \iota_{\chi_H} \omega$. Writing an arbitrary smooth volume-form as $v = f \mu$, where $f$ is some arbitrary smooth positive function $f:\Gamma \mapsto \mathbbm R^+$, then Liouville's theorem and the condition $\mathcal L_{\chi_H} v = 0$ immediately lead to $f =$ constant.} It is this property that mathematically privileges the Liouville measure. Liouville's theorem is therefore doubly important for Boltzmann's reasoning. It provides at the same time a possible justification, via uniqueness, for the choice of typicality measure $\mu$ and a consistency requirement, via the invariance property under evolution, for being able to use the same measure at different times. The latter point arises as a consequence of a stronger requirement, which we identify as Condition-\ref{assumption:invariance}, that the typicality measure be invariant under all gauge symmetries of the system (in this case time-translational and, crucially, time-reversal invariance). In this context and for the remainder of the paper, we will understand a `gauge-symmetry' to be a transformation of the representations of a system that relates physically indistinguishable states.
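The invariance at the heart of Liouville's theorem can also be verified by a routine coordinate computation (included here only as an illustration): in Darboux coordinates the Hamiltonian vector field is $\chi_H = \sum_i \left[(\partial H/\partial p_i)\,\partial_{q_i} - (\partial H/\partial q_i)\,\partial_{p_i}\right]$, and its divergence vanishes identically,
\begin{equation}
	\nabla\cdot\chi_H = \sum_{i=1}^{n} \left[ \frac{\partial}{\partial q_i}\frac{\partial H}{\partial p_i} - \frac{\partial}{\partial p_i}\frac{\partial H}{\partial q_i} \right] = 0\,,
\end{equation}
by the equality of mixed partial derivatives. The flow generated by $\chi_H$ therefore preserves the volume $\prod_i \de p_i\, \de q_i$ for \emph{any} smooth choice of $H$.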
\begin{figure} \begin{center} \includegraphics[width=0.618\textwidth]{Figure_1.png} \caption{\label{fig:phase space} A small, atypical initial state will typically spend most of its future in a large equilibrium state $\Gamma_\text{eq}$.} \end{center} \end{figure} We are now equipped to give a modern synthesis of Boltzmann's reasoning. First, one must show that for the system in question there exists an exceptionally large macro-state $\Gamma_\text{eq}$ that takes up most of the phase space volume of the system. We take this to represent a further requirement that $\Gamma_\text{eq}$ be a typical state in $\Gamma_E$ (Condition-\ref{assumption:Boltzmann 1}). The relevance of Condition-\ref{assumption:Boltzmann 1} can be seen from the interpretation given to the weights of $\mu$ under Condition-\ref{assumption:Boltzmann 2}. If $\mu(\Gamma_\text{eq})$ gives the relative likelihood of finding the system in $\Gamma_\text{eq}$ then for all practical purposes $\Gamma_\text{eq}$ is a steady or \emph{equilibrium} state of the system because the system will almost always be found there. More significantly, if an equilibrium state exists, then a system that starts in a small macro-state will typically spend most of its future time in $\Gamma_\text{eq}$. The basic picture is depicted in Fig~\ref{fig:phase space}. This picture is plausible because the counting implied by the required interpretation of $\mu$ immediately suggests that a system starting outside of $\Gamma_\text{eq}$ has little option but to quickly wander into $\Gamma_\text{eq}$, where it will remain for a very long time. But now there is a puzzle. Applying the same reasoning backwards in time suggests that a state finding itself in a small macro-state will also typically have spent most of its \emph{past} in equilibrium. Because this apparently violates our knowledge that the past entropy of the universe was low, we are faced with the so-called \emph{second problem} of Boltzmann (see \cite{brown2001origins}).
To solve this problem, one can posit an extremely \emph{atypical} condition on the earliest relevant state of the system. Under this condition, the system will typically approach the equilibrium state in the future. Note the temporal significance of the measure (Condition-\ref{assumption:Boltzmann 2}) and its central role in grounding the explanation of time asymmetry. \subsection{The Past Hypothesis} \label{sub:the_past_hypothesis} The main idea behind the PH is to invoke the Boltzmann-style reasoning of the previous section to explain time asymmetry in the universe. The system in question is then taken to be the entire universe and the PH itself translates into a special condition on the earliest relevant state of the universe. All of the mathematical quantities discussed above --- phase spaces, measures, macro-states, and so on --- are then taken to represent aspects of the universe as a whole. The proposed explanation is given in terms of a typicality argument: universes that obey the appropriate PH, it is claimed, will typically evolve towards an equilibrium state in the future. Time-asymmetry arises by asymmetrically applying the special condition to past, rather than future, states. That the Boltzmann reasoning, whose empirical success is traditionally realised in closed sub-systems of the universe, can provide explanatory leverage when applied to the universe \emph{as a whole} is then taken as a further condition (Condition-\ref{assumption:typicality}) for the PH. Empirical support for the extreme atypicality of the initial state of our universe is taken to be implied by abundant cosmological evidence for a low-entropy early universe (e.g., the near-thermality of the Cosmic Microwave Background (CMB) power spectrum). We take this to be a final condition (Condition-\ref{assumption:observations}) for the viability of the PH.
\subsection{Requirements of the Past Hypothesis} \label{sub:key_assumptions_of_the_past_hypothesis} We will now state all conditions identified in \S\ref{sec:preliminaries} (this list of conditions is \emph{not} intended to be sufficient for the PH). \begin{enumerate}[(A)] \item There exists a measure, $\mu_\text{universe}$, on the phase space of the universe, $\Gamma_\text{universe}$, that is simultaneously:\label{assumption:measure} \begin{enumerate}[(\ref*{assumption:measure}1)] \item mathematically precise, \label{assumption:rigor} \item empirically unambiguous, and \label{assumption:uniqueness} \item invariant under all gauge symmetries. \label{assumption:invariance} \end{enumerate} \item It is justifiable to interpret the weights given by the chosen measure in terms of the relative likelihood of the system being in a given region at a given time.\label{assumption:Boltzmann 2} \item There is an epistemologically meaningful and mathematically well-defined projection from the microscopic phase space of the universe, $\Gamma_\text{universe}$, to a macroscopic phase space, $M_\text{universe}$.\label{assumption:epistomology} \item There exists a unique and exceptionally large state, defined to be the \emph{equilibrium state} $\Gamma_\text{eq}$, that is a typical macro-state on the phase space of the universe at any given energy $E$; i.e., $$\frac{\mu_\text{universe}[\Gamma_\text{E,universe}] - \mu_\text{universe}[\Gamma_\text{eq}]}{\mu_\text{universe}[\Gamma_\text{E,universe}]} \ll 1\,.$$\label{assumption:Boltzmann 1} \item Typicality arguments have explanatory power when applied to the universe.\label{assumption:typicality} \item There is cosmological evidence for the PH being true.\label{assumption:observations} \end{enumerate} \section{Criticisms of the PH} \label{sec:deconstructing_the_argument} In this section we will set the stage for the arguments motivating the considerations of \S\ref{sec:symmetries_and_measure_ambiguities}. 
We identify and describe three categories of criticisms of the PH: \begin{enumerate}[(I)] \item \emph{Mathematical precision}. These criticisms question whether the formal quantities necessary for stating the PH can be given precise, unambiguous mathematical definitions.\label{worry: ambiguity} \item \emph{Dynamical considerations}. These criticisms grant \eqref{worry: ambiguity} but question whether the resulting formal quantities have the physical characteristics required for a Boltzmannian explanation --- especially when gravitational interactions are taken into account.\label{worry:intuition} \item \emph{Justification and explanation}. These criticisms grant both \eqref{worry: ambiguity} and \eqref{worry:intuition} but question the explanatory power and physical justification of the typicality arguments used when applied to the universe as a whole.\label{worry:typicality} \end{enumerate} Division of criticism into the above categories emphasizes the reliance of the latter forms of criticism on being able to provide adequate responses to the former. If, for example, one cannot meet the standards of Category-\ref{worry: ambiguity}, then the framework must be rejected and the considerations of Categories \ref{worry:intuition} and \ref{worry:typicality} become irrelevant. We will see below that there are already significant worries raised at the level of Categories \ref{worry: ambiguity} and \ref{worry:intuition}, even though much of the philosophical literature is focused on evaluating criticism falling into Category-\ref{worry:typicality}. We now discuss several examples of criticism, taken to be exemplars, to illustrate each of the above categories. This analysis will help illustrate the importance of the distinct properties of the Liouville measure that provide the basis for the dilemma presented in \S\ref{sub:symmetry_and_ambiguity}.
\subsection{Category I: mathematical precision} \label{sub:case_c_measure_ambiguities} In this section we will primarily be concerned with issues arising from Condition-\ref{assumption:measure} due to infinite phase spaces. Such phase spaces entail serious mathematical problems for measure-theoretic approaches to explanation. These problems stem from two distinct sources. The first arises because measures evaluated on an infinite interval can only be defined according to a limiting procedure that typically leads to physically significant regularization ambiguities. These problems are compounded in field theories because of a second source of ambiguity due to the phase space itself being infinite dimensional. In this case, it is a theorem that no Borel measure exists \citep{Curiel:2015oea}, so the system must be truncated to a finite-dimensional phase space in order to accommodate any measure at all. Ambiguities of these two kinds lead to a tension between mathematical precision (Condition-\ref{assumption:rigor}) and empirical uniqueness (Condition-\ref{assumption:uniqueness}). To make matters worse, the purely mathematical problem of defining any measure on the phase space of general relativity that is invariant under all space-time symmetries is far from being solved. This open technical problem is in fact one of the main formal obstructions to obtaining a canonical formulation of quantum gravity. With this in mind, it is advisable to explore various approximations to general relativity that render the computation of measures more tractable. But even in this simplified setting, one encounters immediate and troubling difficulties that are emblematic of the more general case.
Pioneering work in \cite{gibbons1987natural}, elaborated on by several authors in both the physics \citep{Hawking:1987bi,Hollands:2002xi,Corichi:2010zp,Ashtekar:2011rm,Wald:2012zf} and philosophy literature \citep{earman2006past,frigg2009typicality,Curiel:2015oea}, shows that the natural measure on homogeneous and isotropic cosmologies has infinite phase space volume. In the references listed, different schemes are provided for handling these divergences, and these schemes introduce ambiguities. A particular illustration of this will be outlined in detail in \S\ref{sub:dynamical_similarity_in_the_universe}. To resolve these mathematical ambiguities (of the first kind discussed above), new inputs, which are often physical in nature, must be introduced. It is thus paramount that the extra inputs needed to resolve these ambiguities neither conflict with other symmetry principles, in accordance with Condition-\ref{assumption:invariance}, nor implicitly assume what one is trying to explain: i.e., the time-asymmetry of local thermodynamic processes. Otherwise, the explanatory power of the PH is undermined. To illustrate the extent to which these ambiguities are problematic, consider the concrete results of different authors with different intuitions computing the relative likelihood of cosmic inflation. Advocates of inflation \citep{Kofman:2002cj,Carroll:2010aj} proposed a measure according to which the probability of inflation was found to be infinitesimally close to 1. Inflation skeptics \citep{Turok:2006pa} proposed an alternative measure according to which the probability of inflation was found to be 1 part in $10^{85}$! This enormous discrepancy reflects the extent to which individual beliefs can affect cosmologists' determinations of the appropriate physical principles used to justify their measure, and the difficulty of resolving the tension between Condition-\ref{assumption:rigor} and Condition-\ref{assumption:uniqueness}.
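The regularization ambiguities of the first kind can be illustrated with a toy example (the example is ours and purely illustrative, not drawn from the cosmology literature): a ``uniform'' measure on an unbounded interval is only defined via a cutoff, and the probability it assigns to a fixed region depends on the coordinate in which the cutoff is imposed.

```python
# Toy illustration (hypothetical): a "flat" measure on an unbounded interval
# must be defined by a limiting cutoff procedure, and the limit depends on
# the coordinate used to regulate the divergence.

def prob_flat_in_v(cutoff):
    """P(v <= 1) under a measure flat in v, regulated by v <= cutoff."""
    return 1.0 / cutoff

def prob_flat_in_u(cutoff):
    """P(v <= 1) under a measure flat in u = 1/v, regulated by u <= cutoff.
    The region v <= 1 corresponds to u >= 1, which has weight cutoff - 1."""
    return (cutoff - 1.0) / cutoff

# As the cutoff is removed, the two regularizations give opposite verdicts:
for cutoff in (1e2, 1e4, 1e6):
    print(cutoff, prob_flat_in_v(cutoff), prob_flat_in_u(cutoff))
# the first probability tends to 0, the second to 1
```

The same set of states is judged overwhelmingly unlikely or overwhelmingly likely depending on an apparently innocuous choice of regulating coordinate; this is the kind of physically significant ambiguity at issue in the inflation debate above.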
Any conclusions drawn on the basis of a typicality argument must be assessed in light of such remarkable disagreement between cosmologists. Ambiguities of this kind are not improved when more realistic models including cosmological inhomogeneities are considered. Any preliminary hopes, such as those alluded to in \cite{callender2010past}, that adding an infinite number of degrees of freedom would help resolve these ambiguities can be seen to be in vain once explicit models are considered. This has been done, for example, in \cite{Wald:2012zf}, where it was found that the additional degrees of freedom introduce corresponding regularization ambiguities of the second kind discussed above. It is therefore necessary to introduce new physical principles in order to resolve these ambiguities. Given the daunting nature of a full general relativistic treatment, these considerations raise serious doubts regarding the possibility of attributing any meaningful notion of typicality to the universe. \subsection{Category II: dynamical considerations} \label{sub:case_b_gravitational_considerations} In this section we will consider the unique properties of gravitational dynamics that complicate our entropic intuitions for the universe, assuming that a well-defined truncation of the phase space exists on which a Liouville measure can be defined. Consider the equilibrium state of a free gas. It is smooth, homogeneous and nothing like the current state of the universe, which is characteristically clumpy and uneven. Those clumps comprise, amongst other things, star systems --- one of which supports the far-from-equilibrium biological system we find ourselves in. On the other hand, analysis of CMB temperature fluctuations reveals only a small $10^{-5}$ deviation from homogeneity. How can these observations be made compatible with a low-entropy past state?
The standard response to this is that the gravitational contribution to the entropy should dominate at late times because of the unusual thermodynamic character of the gravitational interactions. This contribution is so great that it more than compensates for the apparent decrease in entropy associated with the clumping of matter. Intuition for this comes from entropic considerations in Newtonian $N$-body self-gravitating systems, which have been used to model, for example, the dynamics of dust and stars in galaxies and galaxy clusters. But even in this simplified and well-tested setting there are difficulties that are emblematic of the considerations of \S\ref{sub:case_c_measure_ambiguities}. Because Liouville volume is a volume on phase space, the steep $1/r$ gravitational potential and the large momenta it can generate flip expectations for what constitutes a high and a low entropy state. The steep gravitational potential well taps a vast reservoir of entropy allowing for the kind of sizable low entropy fluctuations we see in biological systems on Earth. These features as well as the difficulties they entail are reviewed nicely in \cite{Padmanabhan:2008overview,PADMANABHAN:1990book}, which give detailed proofs of many of the results referenced below. This flipping of expectations is argued to occur not only for $N$-body systems, but also in a full-fledged general relativistic treatment of entropy. Thus, advocates of the PH (for example \cite{goldstein2004boltzmann,albert2009time}) emphasize the $N$-body intuition pump as providing an explanation for why the early homogeneous state of the CMB should be thought of as having low entropy and the current clumped state, which contains steadily accumulating stable records, as having high entropy. Moreover, this intuition was a primary motivation for early attempts at formulating an explicit PH such as Penrose's \emph{Weyl Curvature Hypothesis} \citeyearpar{Penrose:1979WCH}.
The $N$-body intuition pump, however, also raises potential concerns. Firstly, if we follow the past state far enough into the early universe, a full general relativistic treatment becomes unavoidable. But as we have already seen in \S\ref{sub:case_c_measure_ambiguities}, such a treatment suffers from troubling ambiguities and it is not clear that the simple Newtonian intuition will remain valid. Another significant worry is the definition of equilibrium itself. The notion of equilibrium in gravitational systems is complicated by two sources of divergence (for details see \cite{Padmanabhan:2008overview}): i) the infinite forces particles exert upon each other when they collide, and ii) the infinite distances particles can attain when ejected from a system. To cure these divergences, it is necessary to render the entropy finite by imposing additional constraints. This involves closing the system at some maximum size, so that particles are not allowed to escape, and forbidding two particles from being able to collide. This requires extra assumptions that must be grounded in physically acceptable principles. It is therefore paramount that these physical idealizations be well-motivated. But the fact that these idealizations break down under certain conditions implies difficulties in defining stable equilibrium for the system. Indeed, $N$-body systems are known to have only local --- but no global --- entropy maxima \citep{Padmanabhan:2008overview}. Thus, gravitating systems do not have genuine equilibrium states, and Condition-\ref{assumption:Boltzmann 1} cannot be strictly satisfied. In the absence of an equilibrium state, thermodynamic quantities such as macro-states and their entropy cannot be defined and Condition-\ref{assumption:epistomology} is strictly violated. While this is not problematic for local meta-stable systems like a galaxy, it can certainly be problematic for globally defined systems like the entire universe.
Moreover, even when local equilibria exist, there is still no guarantee that gravitational dynamics will actually steer the system towards these local equilibria in order to satisfy Condition-\ref{assumption:Boltzmann 2}. The crucial role of dynamics in the Boltzmannian argument has been emphasized in \cite{frigg2009typicality} and \cite{brown2001origins}. \subsection{Category III: justification and explanation} \label{sub:case_a_typicality} This section will firstly be concerned with the essential need to satisfy Condition-\ref{assumption:Boltzmann 2} by finding a valid justification for using Liouville volume as a typicality measure, assuming all concerns of Categories~\ref{worry: ambiguity} and \ref{worry:intuition} have been resolved. In conventional statistical mechanical systems, this justification proceeds along two traditional routes. The first and oldest route relies on a theorem by \cite{birkhoff1931proof} that states that for ergodic systems the average time spent in a particular phase space region becomes roughly proportional to its Liouville volume if the timescales in question are much longer than the Poincar\'e recurrence time. Unfortunately, for almost all systems --- and certainly for the universe --- the Poincar\'e recurrence time is significantly longer than the estimated time since the Big Bang. The second route, usually favored for its practicality, is to argue that the system undergoes a process called \emph{mixing}. Roughly speaking, a system is mixing when its long-run evolution spreads any initial measure approximately homogeneously over the accessible phase space, so that it approaches the Liouville measure. Many systems exhibit this property, and the relevant mixing timescales can be computed explicitly. Unfortunately, \cite{Wald:2012zf} argue that the observed expansion of the universe is too rapid to allow the large scale structures of the universe to interact often enough for mixing to occur on these scales.
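The sense of mixing at issue can be illustrated with a standard toy dynamical system (our choice of map, ensemble size, and region is purely illustrative, not taken from the sources cited): Arnold's cat map on the 2-torus is mixing, so an initially concentrated bundle of states spreads until the fraction of states in any fixed region approaches that region's Liouville (area) fraction.

```python
import numpy as np

rng = np.random.default_rng(0)
# An ensemble of states initially concentrated in a tiny corner of the torus.
pts = rng.random((20000, 2)) * 0.05

def cat_map(p):
    """One step of Arnold's cat map, a textbook mixing map of the unit torus."""
    x, y = p[:, 0], p[:, 1]
    return np.stack([(2 * x + y) % 1.0, (x + y) % 1.0], axis=1)

def frac_in_region(p):
    """Fraction of the ensemble inside a region of Liouville (area) measure 0.25."""
    return np.mean((p[:, 0] < 0.5) & (p[:, 1] < 0.5))

frac_before = frac_in_region(pts)   # 1.0: every state starts inside the region
for _ in range(20):
    pts = cat_map(pts)
frac_after = frac_in_region(pts)    # approaches the region's area fraction, 0.25
```

After a modest number of iterations the ensemble weight of any region tracks its Liouville volume, which is what licenses treating Liouville volume as a typicality measure for mixing systems; the argument in the text is precisely that the universe's expansion appears to preclude this mechanism at cosmological scales.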
This suggests that it is unreasonable to expect the universe as a whole to undergo mixing. It would seem that, in terms of conventional justification schemes for the Liouville measure, Condition-\ref{assumption:Boltzmann 2} cannot be made compatible with the observational requirements of Condition-\ref{assumption:observations}. It is possible to look for justification schemes satisfying Condition-\ref{assumption:Boltzmann 2} that do not originate from conventional statistical mechanical considerations. One proposal made by Penrose \citeyearpar{Penrose:1979WCH,penrose1994second} and later advocated (either implicitly or explicitly) by \cite{goldstein2001boltzmann}, \cite{lebowitz1993boltzmann}, and \cite{albert2009time} is a version of the Principle of Insufficient Reason (PIR) as formalized by Laplace. In Penrose's version, a blind Creator must choose initial conditions for the universe among the space of all possibilities. Being indifferent as to which conditions to choose, the Creator assigns equal likelihood to each possibility according to the Liouville measure. Given the failure of standard justification schemes, \cite{Wald:2012zf} point to Penrose's proposal as the only available alternative. Unfortunately, the PIR has a troubled history in the philosophy of science and suffers from several well-known difficulties. At least four prominent criticisms are identified in \cite{uffink1995entropyconsistency}. While some of these are addressed implicitly throughout this text, one line of criticism dating back to Bernoulli is noteworthy because it also directly puts into question the validity of Condition-\ref{assumption:epistomology}. In this line of criticism one derives paradoxes that originate in an incompatibility between the measures obtained when applying the PIR to different choices of partition for the micro-states of a system.
These paradoxes occur when the partitions correspond to \emph{disjunct coarse-grainings} or \emph{refinements} of each other \citep{norton2008ignorance}. There is nothing in the PIR that tells us which partitioning of the micro-states is the ``correct'' one, precisely because this would require some non-trivial knowledge about how these partitions may have been gerrymandered. Without direct knowledge of the ``correct'' partitioning of micro-states, the PIR loses all explanatory power. The only remaining justification for the Liouville measure is a uniqueness argument from time-symmetry. If one requires a time-symmetric measure, then the uniqueness of the Liouville measure under the requirement of being preserved by arbitrary Hamiltonian evolution does single it out. However, as we will see in \S\ref{sec:symmetries_and_measure_ambiguities}, very general symmetry considerations will put into doubt any motivation for using the Liouville measure to establish a notion of typicality for models of the universe. We end this section by mentioning a prominent dialectic between Price \citeyearpar{price2002boltzmann,price2004origins} and Callender \citeyearpar{callender2004measures,callender2004origins} on the explanatory power of the PH that questions the validity of Condition-\ref{assumption:typicality}. In this dialectic Price argues that the PH itself should require explanation on pain of applying a ``temporal double standard'' to a past state when an atypical future state would plainly require explanation. Callender responds by stating that contingencies rarely (or never) require explanation, and an initial condition such as a PH is a contingency of this kind.
\section{A Dilemma for the Past Hypothesis} \label{sec:symmetries_and_measure_ambiguities} \subsection{Preliminaries: dynamical similarity as a gauge symmetry of the universe} \label{sub:dynamical_similarity_in_the_universe} Before establishing the horns of the dilemma, it will be convenient to state some results that will be central to the analysis. We will need to give the definition of a particular symmetry of the universe and list some of its core properties. The symmetry that will be central to our argument is called \emph{dynamical similarity}. The three aspects of dynamical similarity that will be needed for our analysis are: first, that dynamical similarity is a gauge symmetry of any general relativistic formulation of the laws of the universe; second, that the Liouville measure is not invariant under dynamical similarity; and third, that in known theories of the universe dynamically similar measures are badly time-asymmetric. To illustrate our first point, we must show that dynamical similarity relates empirically indistinguishable descriptions of a general relativistic system. We will do this first by making a general argument and then by showing that this general argument is consistent with the treatment of particular cosmological theories. We begin by giving a definition of dynamical similarity.\footnote{ For an excellent account of dynamical similarity and its role in defining measures in cosmology see \cite{Sloan:2018lim}. } Consider any system whose dynamical possibilities are specified by Hamilton's principle. For such systems, an action functional $S[\gamma]$ is given such that the Dynamically Possible Models (DPMs), $\gamma_\text{DPM}$, of the system are stationary points of $S$: \begin{equation}\label{eq:stationarity cond} \delta S[\gamma]|_{\gamma_\text{DPM}} = 0 \,.
\end{equation} Then any transformation on the state space of such a system that rescales the action functional, \begin{equation}\label{eq:dyn sym def} S \to c S\,, \end{equation} is defined to be a \emph{dynamical similarity}. For any system of this kind, a dynamical similarity will map a DPM to another DPM, and is therefore a symmetry. This follows straightforwardly from the fact that the stationarity condition \eqref{eq:stationarity cond} is invariant under \eqref{eq:dyn sym def}. Dynamical similarities are therefore symmetries of any general relativistic description of the universe because general relativity can be formulated in terms of Hamilton's principle. This notion of symmetry, namely a transformation that maps DPMs to DPMs, is not yet enough for our argument. We will further need to show that dynamically similar DPMs are empirically indistinguishable. To see that this is true, observe that the constant in the transformation \eqref{eq:dyn sym def} can always be set to 1 by a suitable choice of units for the action. Since the unit of action is the unit of angular momentum, we find that dynamical similarities map DPMs to DPMs with different choices of units of angular momentum. Only if these choices can be compared with an external reference scale for angular momentum can the DPMs in question be empirically distinguished. If instead the units of angular momentum are referenced from within the system, then an arbitrary choice of units can have no empirical consequences. Because we are interested in a general relativistic description of the entire universe, there can be no external reference unit to distinguish between dynamically similar descriptions of the system. Thus, dynamical similarities are symmetries of a general relativistic description of the universe that relate empirically indistinguishable models; i.e., they are gauge symmetries. This point is well-appreciated by cosmologists. 
In writing down the equations of cosmological systems, one starts with a general relativistic formulation and then imposes spatial homogeneity and isotropy. The simplest models of inflation can thus be described by a single geometric variable $v(t)$, representing the volume of a co-moving patch of the universe, and a single massive scalar field $\phi(t)$. The Hamiltonian for this system can be written as: \begin{equation} \mathbb{H} = \left[ - H^2 + \frac{\pi_\phi^2}{v^2} + \tilde m^2 \phi^2 \right]\,, \end{equation} where $H$ is the \emph{Hubble} red-shift parameter, conjugate to $v$, $\pi_\phi$ is the momentum of the scalar field, and $\tilde m$ is a dimensionless mass.\footnote{ To obtain this expression we have absorbed all units of angular momentum into the variables $v$, $\pi_\phi$ and $t$. Thus, $H$ and $\phi$ are dimensionless. We have also used a time parameter $t = v \tau$, where $\tau$ is the proper time along a homogeneous slice.} This theory inherits a dynamical similarity from its underlying general relativistic description. If we recall that $S = \int \mathrm{d} t\, \left( \dot v H + \dot \phi \pi_\phi - \mathbb{H} \right)$, then the transformation \begin{align}\label{eq:dyn sim explicit} v &\to c v & \phi &\to \phi \\ H &\to H & \pi_\phi &\to c \pi_\phi \notag\,, \end{align} is a dynamical similarity provided we also take $t \to ct$. The physical significance of the dynamical similarity \eqref{eq:dyn sim explicit} is straightforward to understand. It represents the freedom to arbitrarily choose the initial volume of a fixed fiducial cell while keeping the red-shift fixed. In cosmology, dynamical similarity therefore reflects the well-known property that the scale factor is an unobservable degree of freedom even though its momentum, the Hubble parameter, is observable. This achieves our first objective. Our second objective is to show that the Liouville measure is not invariant under dynamical similarity.
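The claimed scaling behaviour can be checked symbolically (a minimal sketch using \texttt{sympy}; the variable names are ours): the Hamiltonian is invariant under \eqref{eq:dyn sim explicit}, while the symplectic potential $\theta = H\,\mathrm{d}v + \pi_\phi\,\mathrm{d}\phi$ picks up an overall factor of $c$.

```python
import sympy as sp

v, H, phi, pi_phi, m, c = sp.symbols('v H phi pi_phi m c', positive=True)

# Hamiltonian of the homogeneous, isotropic model (dimensionless variables).
Ham = -H**2 + pi_phi**2 / v**2 + m**2 * phi**2

# Dynamical similarity: v -> c v, pi_phi -> c pi_phi (H and phi unchanged).
scaling = {v: c * v, pi_phi: c * pi_phi}
Ham_scaled = Ham.subs(scaling)

# The symplectic potential theta = H dv + pi_phi dphi is represented by its
# coefficients (H, pi_phi); under the map, dv -> c dv while dphi is unchanged.
theta_coeffs = sp.Matrix([H, pi_phi])                 # coefficients of (dv, dphi)
theta_coeffs_scaled = sp.Matrix([H * c, c * pi_phi])  # after the transformation

ham_invariant = sp.simplify(Ham_scaled - Ham) == 0
theta_rescales = sp.simplify(theta_coeffs_scaled - c * theta_coeffs) == sp.zeros(2, 1)
```

Since the action is built from $\theta$ and the (invariant) Hamiltonian multiplied by $\mathrm{d}t$, each term acquires a single factor of $c$ once $t \to ct$, so $S \to cS$, confirming that \eqref{eq:dyn sim explicit} is a dynamical similarity.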
This together with the previous result will be essential for establishing the second horn of the dilemma: the breaking of gauge invariance by the Liouville measure. This can be achieved by exploiting the mismatch between the transformation properties of the volume $v$ and its conjugate momentum $H$. The Liouville measure is a homogeneous measure on phase space. This means that it gives the same weight to a configuration variable as it does to the corresponding momentum. It is thus impossible for any measure of this kind to be invariant under a symmetry that acts in an unbalanced way on the phase space variables. We can illustrate this explicitly for the cosmological theory given above. A set of canonically conjugate variables for this theory is $\{v, H, \phi, \pi_\phi \}$, and therefore the Liouville measure is \begin{equation} \mu_L(R) = \int_R \mathrm{d} v\, \mathrm{d} H\, \mathrm{d} \phi \, \mathrm{d} \pi_\phi\,. \end{equation} This measure is explicitly not invariant under the symmetry \eqref{eq:dyn sim explicit}. While illustrative and physically relevant, the non-invariance of the Liouville measure in this example is not just a special feature of this particular cosmological theory, but a general property of the Liouville measure. In order for a dynamical similarity to rescale the action as in \eqref{eq:dyn sym def}, it must rescale the symplectic potential $\theta = p\, \mathrm{d} q$ as $\theta \to c\, \theta$. But since the Liouville measure is just a power of the exterior derivative of the symplectic potential, $\mu_L(R) = \int_R \left( \mathrm{d} \theta \right)^n$, the Liouville measure itself will necessarily rescale under a dynamical similarity. Thus, the Liouville measure in general cannot be invariant under dynamical similarity. The last objective of this section is to show that the lack of invariance of the Liouville measure results in a significant numerical time-asymmetry in its projection onto the dynamically similar state space relevant to cosmological theories.
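Concretely, the failure of invariance in the cosmological example can be verified by computing the Jacobian determinant of the transformation \eqref{eq:dyn sim explicit} on the phase space variables (a small \texttt{sympy} sketch, illustrative only):

```python
import sympy as sp

v, H, phi, pi_phi, c = sp.symbols('v H phi pi_phi c', positive=True)

old_vars = sp.Matrix([v, H, phi, pi_phi])
# Dynamical similarity: v -> c v, pi_phi -> c pi_phi, with H and phi fixed.
new_vars = sp.Matrix([c * v, H, phi, c * pi_phi])

# The Liouville volume element transforms by the Jacobian determinant.
jacobian_det = sp.simplify(new_vars.jacobian(old_vars).det())
print(jacobian_det)  # c**2
```

The Liouville weight of a region is thus multiplied by $c^2 \neq 1$, so regions related by the symmetry, though empirically indistinguishable, receive different weights.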
This result will be useful in strengthening the case for the loss of explanatory power that leads to the first horn of the dilemma (see \S\ref{sub:the_origin_of_measure_ambiguities_in_cosmology} for details). To achieve the last objective, we will recall the results of well-known derivations.\footnote{For a summary of the results used here see \cite{Wald:2012zf}.} The measure that is relevant to our considerations is a measure not on the space of states but on the space of models. This can be obtained by projecting the Liouville measure onto some initial data surface on phase space. Because the Liouville measure is time-independent, the choice of initial data surface is arbitrary. For the cosmological theory presented in this section, a convenient choice of initial data surface that is also empirically meaningful is a surface of constant red-shift: $H = H^\star$. This choice leads to the Gibbons--Hawking--Stewart measure \citep{gibbons1987natural} \begin{equation}\label{eq:GHS} \mu_\text{GHS}(r) = \int_r \sqrt{ (H^\star)^2 - \tilde m^2 \phi^2 }\, \mathrm{d} v\, \mathrm{d} \phi\,, \end{equation} where $r$ is a region on the surface $H = H^\star$ that is compact in $\phi$ but not in $v$. This measure is not regarded as physical, in part because of its non-compact domain in $v$ but, more importantly, because the value of $v$ depends on the arbitrary choice of an initial fiducial cell. More recently, \cite{Sloan:2019wrz} has established a direct link between this arbitrariness and dynamical similarity.\footnote{ The connection was first noticed in the context of Loop Quantum Cosmology by \cite{Corichi:2010zp} and \cite{Ashtekar:2011rm}. } To obtain a physically significant measure, \cite{Hawking:1987bi} defined a regularization procedure that takes advantage of the homogeneity of \eqref{eq:GHS} in $v$ to integrate over all possible values of $v$.
The resulting measure \begin{equation}\label{eq:prob inflation} \text{Prob}(r_\phi) = \lim_{v_\text{max} \to \infty} \frac{ \int_0^{v_\text{max}} \mathrm{d} v }{ \int_0^{v_\text{max}} \mathrm{d} v }\; \frac{ \int_{r_\phi} \mathrm{d} \phi \sqrt{ (H^\star)^2 - \tilde m^2 \phi^2 }}{ \int_{r_{\phi_\text{max}}} \mathrm{d} \phi \sqrt{ (H^\star)^2 - \tilde m^2 \phi^2 }} \end{equation} is finite: the divergent integrals over $v$ cancel between numerator and denominator, and the result depends only on the ratio of the integrals over the region $r_\phi$, which can be used to define inflation, and the finite region $r_{\phi_{\text{max}}}$, which is given in terms of the dynamical constraints of the theory. From the perspective of dynamical similarity, the integration over $v$ is motivated by requiring that the physical measure be invariant under symmetries that relate physically indistinguishable models. The integral over $v$ is an integration over the action of the dynamical similarity \eqref{eq:dyn sim explicit}. The physical measure \eqref{eq:prob inflation} is therefore invariant under \eqref{eq:dyn sim explicit} while the unphysical measure \eqref{eq:GHS} is not. The integration over $v$, however, creates a new problem. The physical measure \eqref{eq:prob inflation} depends explicitly on the choice of initial data surface as determined by the choice of initial red-shift factor $H^\star$. This dependence on $H^\star$ is significant. As was shown explicitly in \cite{Wald:2012zf}, the different choices of $H^\star$ used by inflation skeptics \citep{Turok:2006pa} compared with inflation advocates \citep{Kofman:2002cj,Carroll:2010aj} lead to a colossal 85-order-of-magnitude difference between the estimates of the likelihood of inflation. Because a choice of $H^\star$ corresponds to a choice of initial time, this huge numerical imbalance leads to a significant temporal asymmetry: choosing a more recent value of $H^\star$ gives a dramatically smaller value for the weight of the same region $r_\phi$.
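The $\phi$-dependent part of \eqref{eq:prob inflation} is straightforward to evaluate numerically. The following sketch uses purely illustrative values of $H^\star$ and $\tilde m$, and an example sub-region of $\phi$ with no cosmological significance:

```python
import numpy as np

def ghs_phi_weight(phi_lo, phi_hi, h_star, m, n=200000):
    """Midpoint-rule integral of sqrt(h_star^2 - m^2 phi^2) over [phi_lo, phi_hi],
    the phi-dependent factor of the regulated measure."""
    edges = np.linspace(phi_lo, phi_hi, n + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    vals = np.sqrt(np.clip(h_star**2 - m**2 * mids**2, 0.0, None))
    return float(np.sum(vals) * (edges[1] - edges[0]))

h_star, m = 1.0, 0.2           # illustrative parameter values
phi_max = h_star / m           # the integrand vanishes beyond |phi| = h_star / m

total = ghs_phi_weight(-phi_max, phi_max, h_star, m)    # analytically pi*h_star^2/(2m)
frac = ghs_phi_weight(2.0, phi_max, h_star, m) / total  # weight of an example region
```

The total weight matches the closed form $\pi (H^\star)^2 / (2\tilde m)$. The point of \eqref{eq:prob inflation} is that such finite ratios are all that survives the regularization, yet they inherit an explicit dependence on $H^\star$.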
This result is not just a special feature of the particular cosmological theory developed in this section. The Liouville measure is the unique time-independent measure on phase space. But, as we have shown, the Liouville measure is in general not invariant under dynamical similarity. There is therefore no (smooth) time-independent measure invariant under dynamical similarity. This means, in general, that a dynamically similar measure on the space of models will necessarily depend on the choice of initial data surface (e.g., it will depend on $H^\star$). Moreover, the temporal asymmetry introduced by this is significant. For the theory introduced in this section, it leads to an 85-order-of-magnitude difference between different choices of $H^\star$. There are good reasons to believe that this numerical imbalance will persist in any general relativistic description of the universe. The interpretation of dynamical similarity in terms of an arbitrary choice of volume will persist in general relativity. In this context, the red-shift factor $H$ is still the variable conjugate to $v$. The temporal asymmetry will then always depend on the initial choice of $H^\star$, which varies enormously, and monotonically, between now and the empirically accessible past. The huge monotonic variation of the Hubble parameter over the known history of the universe therefore introduces a significant time asymmetry into the definition of a dynamically similar measure. \subsection{The first horn: loss of explanatory power} \label{sub:the_origin_of_measure_ambiguities_in_cosmology} The analysis of \S\ref{sec:deconstructing_the_argument} has established that there are many concerns regarding the justification of the choice of typicality measure used to formulate a PH.
In \S\ref{sub:case_b_gravitational_considerations} it was argued that self-gravitating systems have unusual thermodynamic properties, and in \S\ref{sub:case_a_typicality} these arguments were combined with known facts about the universe to suggest that conventional statistical mechanical justifications fail when applied to the universe. Justifications that rely on indifference principles were also criticized on epistemological grounds. The analysis of \S\ref{sec:deconstructing_the_argument} therefore leads to the conclusion that the only tenable justification for choosing the Liouville measure is an argument from time-independence. The Liouville measure is indeed singled out as the unique measure on phase space that is preserved by an arbitrary choice of dynamics. At first sight this uniqueness appears to be particularly convenient because a time-independent measure is very natural in the context of a PH. But time-independence in the measure is more than a question of convenience in the context of a PH. In fact, it is an essential ingredient for the PH independent of any other justificatory considerations. Following \cite{price2002boltzmann}, the logic of the PH presented in \S\ref{sec:preliminaries} constitutes a contrastive explanation of the form: if A then B rather than C. The explanans A --- i.e., the PH itself --- is taken to explain the explanandum B --- i.e., the fact that typical processes are seen to overwhelmingly occur in a time-asymmetric way. The outcome C is then a typical member of a contrast class of outcomes that would be likely if not for A. The explanatory power of A comes from increasing the likelihood of B relative to C. In the case of a PH, the contrast class is the set of worlds where typical processes overwhelmingly occur in a time-symmetric way. According to this logic, in order for the PH to be a good explanation of time-asymmetry, it must be the only significant source of time-asymmetry.
Clearly this is consistent with the apparent time-symmetry of the form of the fundamental laws. This consistency, however, is not sufficient. When a time-asymmetric measure is introduced into the formalism, the time-asymmetry of the measure could itself provide an explanation for the time-asymmetry of typical processes. This is especially true if the time-asymmetry of the measure introduces a significant numerical temporal gradient, as was shown in the previous section for the case of cosmological models. Moreover, the time-dependence of the measure introduces an ambiguity in terms of which instant should be used in order to obtain a measure on the space of models. Such an ambiguity can only be resolved by supplementing the PH with some additional principle --- thus undermining much of its explanatory appeal. It is therefore essential to the logic of the PH that the measure employed be time-independent, and especially important that the measure not be badly time-asymmetric. Otherwise we would have no reason to believe that processes would not occur in a time-asymmetric way even if the PH were not true. Note that these considerations hold regardless of any other justificatory considerations regarding the measure. This establishes the first horn of the dilemma. \subsection{The second horn: violation of a gauge symmetry} \label{sub:symmetry_and_ambiguity} In the preliminary \S\ref{sub:dynamical_similarity_in_the_universe} we saw that the projection of the Liouville measure onto the space of models, while time-independent, is nevertheless considered by cosmologists to be unphysical. Contrastingly, the measure that is considered by cosmologists to be physical was found to be invariant under dynamical similarity. We will now argue that this result is to be expected in any general relativistic description of the universe.
To do this, we will show that a measure that is not invariant under symmetries that relate physically indistinguishable descriptions of a system (Condition-\ref{assumption:invariance}) introduces two distinct problems: first, it introduces a distinction without difference and, second, it runs against standard practice in particle and statistical physics. Consider a region $R$ that lives in the domain $\mathcal D(\mu)$ of some measure $\mu$ and a transformation $T: \mathcal D(\mu) \to \mathcal D(\mu)$ that maps this domain onto itself. Our assumptions demand that $T$ map states of a system to empirically indistinguishable states. The set of states in the region $R$ is therefore empirically indistinguishable from the set of states in the transformed region $R' = T(R)$. In general, the non-invariance of $\mu$ under $T$ implies that the weight of the transformed region need not equal the weight of the original: $\mu(R) \neq \mu(R')$. But if this is true then the weights $\mu(R)$ and $\mu(R')$ provide a distinction at the representational level between the regions $R$ and $R'$. Given our original assumptions, this distinction cannot represent any empirical difference. In this sense, the measure $\mu$ introduces a representational distinction that cannot be captured by the empirical properties of the world. It is therefore \emph{not} a valid measure for describing empirical phenomena. This argument is reinforced by standard practice in particle and statistical physics, which requires that physical measures be invariant under all the gauge symmetries of a system. In the standard model of particle physics the gauge-invariance of the path-integral measure is a central foundational principle of the theory.
More generally, the Faddeev--Popov determinant, which enforces the gauge-invariance of the path-integral measure, is considered a necessary ingredient in gauge theory (see \cite[Chap 15]{Weinberg:1996kr} for an overview and defence of this standard practice). Similarly, in statistical physics, \cite{jaynes1973well} has argued influentially that measures should be invariant under transformations that relate indistinguishable states of a system. We therefore conclude that there are strong epistemological and methodological motivations for requiring Condition-\ref{assumption:invariance}. We are now in a position to state the second horn of our dilemma. As we have shown in the previous section, dynamical similarity is a symmetry that maps states of any general relativistic description of the universe to indistinguishable states. Given the argument above, any measure not invariant under such a symmetry must violate a gauge symmetry and introduce a distinction without difference. Therefore, a measure on the state space of a general relativistic description of the universe that is not dynamically similar will run into the symmetry-violating horn. But as was shown in \S\ref{sub:dynamical_similarity_in_the_universe}, the Liouville measure is not dynamically similar. It follows that use of the Liouville measure violates a gauge symmetry of the theory. This is the second horn. We now recall the first horn of the dilemma. The formulation of the PH must make use of the unique time-independent Liouville measure in order to retain its explanatory power. But the Liouville measure is not dynamically similar, and therefore introduces a distinction without difference. An advocate of the PH must therefore face the dilemma stated in the introduction: either lose explanatory power or introduce a distinction without difference.
\section{Discussion and Conclusions} \label{sec:prospectus} We have seen that Boltzmann-style explanations of time-asymmetry that make use of a PH depend upon a series of very restrictive conditions. Our analysis in \S\ref{sec:deconstructing_the_argument} has uncovered several good reasons to question whether these conditions can ever be simultaneously satisfied. Broadly speaking, we found that the nature of the phase space, dynamics and symmetries of general relativity provides reasons for pessimism regarding the prospects for providing and justifying a satisfactory notion of typicality for models of the universe. A common response to critiques of this kind is to observe that strict insistence on mathematical rigour has often been unreasonable in the development of theoretical physics. Controversy over difficult technical problems such as defining a measure on the solution space of general relativity should not, it is argued, halt progress altogether. It should still be reasonable to advance conjectures regarding the plausible features of measures that may one day become available. While such a strategy --- effective or not --- is available in response to much of the analysis of \S\ref{sec:deconstructing_the_argument}, it is no longer available in response to the dilemma of \S\ref{sec:symmetries_and_measure_ambiguities}. This is because the dilemma is the result of a simple symmetry argument applied to a very general way of formulating the laws of the universe. To reject dynamical similarity is to reject a description of the physics of the universe in terms of Hamilton's principle. To reject the uniqueness arguments for the time-symmetry of Liouville's measure is to reject a description of the universe in terms of a phase space. To not require the gauge-invariance of the measure is to introduce a distinction without difference and to reject standard practice in particle and statistical physics. None of these escape routes is particularly appealing.
Even if one grants all the technical assumptions required by the PH, the dilemma persists. On the other hand, a rejection of the PH as an explanation for time-asymmetry avoids the dilemma completely. But how then is one to explain the time-asymmetry of macroscopic processes given the apparent time-symmetry of the fundamental laws? In other words, how is one to solve the original problem of the arrow of time? One possibility would be to embrace the necessary time-dependence of the measure implied by dynamical similarity. While the equations of motion of general relativity, and in particular the cosmological models discussed in \S\ref{sub:dynamical_similarity_in_the_universe}, are formally invariant under time-reversal, they also contain redundancy under dynamical similarity. A time-asymmetric measure invariant under dynamical similarity can be constructed for a very general class of systems \citep{Sloan:2018lim} in a way that mirrors the derivation of the physical measure \eqref{eq:prob inflation}. The resulting time-asymmetry of the measure can be shown to result from the non-conservative, time-irreversible structure of the reduced Hamiltonian for the system. Perhaps then the apparent time-symmetry of general relativity is simply an artefact of a representational redundancy? But if time-asymmetry really is built into the character of the empirically relevant formulation of the law, then this could provide a new basis for an explanation of the arrow of time. Such a strategy would parallel and further develop the approach suggested in \cite{Barbour:2014bga}, which also makes use of dynamical similarity. An important aspect of this approach is an account of the low-entropy past state as a generic, rather than highly atypical, feature of the theory. Such a scenario would therefore not require any PH.
What remains is to extend a program of this kind to general relativity and to show that the time-asymmetry of the reduced system is indeed sufficient for explaining the observed time-asymmetry of macroscopic processes. This possibility opens up new and exciting directions for future investigations. \section*{Acknowledgements} I would like to thank Karim Th\'ebault for an enormous amount of encouragement, feedback, and helpful discussions. My thinking about the arrow of time has been heavily influenced by conversations with David Sloan, Tim Koslowski, Flavio Mercati and Julian Barbour. I'm also grateful to Roman Frigg, Fred Muller, Guido Bacciagaluppi, and audiences in Utrecht and Groningen for many useful discussions and feedback. Finally I'd like to thank Erik Curiel for valuable comments on an early version of the draft as well as Jan--Willem Romeijn and Simon Friederich for guidance, suggestions and mentorship. My work is supported by a Young Academy Groningen Scholarship. \bibliographystyle{chicago}
\section{Introduction} Recent observational data provide strong support for the existence of extragalactic magnetic fields, in the range of $\mcO (10^{-14}$-$10^{-20})\,{\rm G}$ on Mpc scales~\cite{Neronov:1900zz,Tavecchio:2010mk,Dermer:2010mm,Huan:2011kp,Dolag:2010ni,Essey:2010nd,Taylor:2011bn,Vovk:2011aa,Takahashi:2011ac,Finke:2013bua}\,. The generation of magnetic fields in high-redshift galaxies, in clusters, and even in empty intergalactic regions is still an unresolved problem in cosmology. No promising astrophysical process is known that could generate a sufficient amount of magnetic field on such large scales. As for inflationary magnetogenesis, though various mechanisms have been proposed, several difficulties, such as the strong coupling problem, the backreaction problem, and the curvature perturbation problem in some specific models, prevent the successful production of magnetic fields~\cite{Demozzi:2009fu, Barnaby:2012xt, Fujita:2013qxa}. In fact, both upper and lower limits on the inflationary energy scale can be derived from these problems in model-independent ways, and the limits are considerably severe if the extragalactic magnetic fields are stronger than $10^{-16}$G at present~\cite{Fujita:2012rb,Suyama:2012wh,Fujita:2014sna}. It is thus known to be very difficult to generate magnetic fields in the context of inflationary magnetogenesis in a flat Friedmann-Lema\^itre-Robertson-Walker (FLRW) universe. The superadiabatic growth of magnetic fields in an open FLRW universe has been discussed in the literature~\cite{Barrow:2011ic,Barrow:2012ty}. The authors of those works assumed the existence of supercurvature modes of the magnetic field, which describe fluctuations with wavelengths exceeding the spatial curvature scale. If a supercurvature mode exists, it decays more slowly than $1/a^2$\,, where $a$ is the conventional scale factor of an FLRW universe, and can easily survive the inflationary era.
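The $1/a^2$ dilution mentioned above can be quantified with a back-of-the-envelope sketch (ours; the e-fold number $N$ and the slower decay exponent $s=1$ are illustrative assumptions, not values from the paper):

```python
import math

# For B proportional to a^{-s}, N e-folds of inflation (a grows by e^N)
# suppress the amplitude by e^{-s N}.  An adiabatic mode has s = 2; a
# hypothetical supercurvature mode decaying more slowly has s < 2.
def suppression(N, s):
    """Amplitude suppression factor for B ~ a^{-s} after N e-folds."""
    return math.exp(-s * N)

N = 60  # a typical minimal number of inflationary e-folds (assumption)
adiabatic = suppression(N, 2.0)  # ~ 8e-53: essentially erased
slower = suppression(N, 1.0)     # ~ 9e-27: vastly less suppressed
```

The many orders of magnitude between the two cases are why the existence (or not) of supercurvature modes matters so much for inflationary magnetogenesis.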
Hence a relatively large amount of magnetic field on supercurvature scales would remain at late times. However, the existence of supercurvature modes of magnetic fields is non-trivial and should be critically studied. Adamek {\it et al.}~\cite{Adamek:2011hi} recently pointed out that the equations of motion of a U(1) gauge field with unbroken conformal and gauge symmetries can be rewritten in the form that is identical to those of massive scalar fields for which there is no supercurvature mode~\footnote{Rigorously speaking, the proof of the absence of supercurvature modes requires knowledge of not only the equation of motion but also a Klein-Gordon norm and proper boundary conditions. The present paper fills those gaps for the analysis of the massless vector field, although our main focus will be on a massive vector field.}. The purpose of the present paper is to investigate whether supercurvature modes exist for a massive vector field, in both scalar and vector sectors of the physical spectrum. To be specific, we consider a U(1) gauge field with both gauge and conformal symmetries spontaneously broken through the Higgs mechanism. As for the background geometry, we consider a de Sitter spacetime in the open chart. This is relevant to the one-bubble open inflation scenario, which naturally predicts a universe with negative spatial curvature. While recent observational data show that the universe is almost exactly flat to an accuracy of about $1\%$\,, $|1-\Omega_0 |\leq 10^{-2}$~\cite{Ade:2013zuv}\,, the open inflation scenario is attracting renewed interest in the context of the string landscape scenario~\cite{Susskind:2003kw,Freivogel:2004rd}. There are a huge number of metastable de Sitter vacua, and the tunneling transition generally occurs through the nucleation of a true-vacuum bubble in the false-vacuum background.
Because of the symmetry of the instanton solution, a bubble formed by the Coleman-De Luccia (CDL) instanton~\cite{Coleman:1977py,Coleman:1980aw} looks like an infinite open universe from the viewpoint of an observer inside. If the universe experiences a sufficiently long period of inflation after the bubble nucleation, it becomes almost exactly flat and subsequently evolves as a slightly open FLRW universe. This leads to a natural realization of one-bubble open inflation (see e.g. \cite{Sasaki:1994yt,Yamamoto:1996qq,Tanaka:1997kq,Garriga:1998he,Garriga:1997wz}) and can be tested against observations~\cite{Yamauchi:2011qq,Sugimura:2012kr,Sugimura:2013cra}. This paper is organized as follows. We first illustrate the background spacetime in section \ref{sec:Background}. In section \ref{sec: KG norm}\,, we expand the U(1) gauge field by harmonic functions and write down the reduced action for the even and odd modes of the U(1) gauge field. In order to investigate the existence/absence of supercurvature modes, we show the quantization conditions for the even and odd modes on a Cauchy surface. With the obtained normalization conditions, we then analyze in section \ref{sec: Mode functions} whether supercurvature modes that are normalizable on the Cauchy surface exist. In section \ref{sec: massless limit}\,, as a consistency check, we explicitly calculate the Wightman function in the decoupling limit by using the (subcurvature) mode functions derived in section \ref{sec: KG norm}. It is shown that the correct expression for the Euclidean Wightman function is recovered in the decoupling limit without need for any supercurvature modes. Finally, section \ref{sec:summary} is devoted to a summary and discussions. \section{Background} \label{sec:Background} In this paper, we consider a U(1) gauge field with both gauge and conformal symmetries spontaneously broken through the Higgs mechanism in an open de Sitter geometry, i.e. a de Sitter spacetime in the open chart.
Before showing relevant forms of the background metric, we illustrate that this setup is appropriate to investigate the existence/absence of supercurvature modes of a massive vector field in an open inflationary universe. Let us begin with a system which consists of multiple scalar fields and a U(1) gauge field minimally coupled to Einstein gravity. We investigate the evolution of mode functions of the U(1) gauge field in the one-bubble open inflationary scenario and particularly focus on whether supercurvature modes are generated. To be specific, we introduce a real scalar field $\sigma$ that governs the quantum tunneling from a false vacuum to a true vacuum and realizes inflation after the quantum tunneling, and a complex scalar field $\Phi$ that plays a major role in the coupling to the U(1) gauge field $A_\mu$. Our action is given by \al{ S=S_{\rm tun}+S_{\rm AH} \,, } where \al{ &S_{\rm tun}=\int\dd^4 x\sqrt{-g} \Biggl[ \frac{M_{\rm pl}^2}{2}R -\frac{1}{2}g^{\mu\nu}\pd_\mu\sigma\pd_\nu\sigma -V_{\rm tun}(\sigma ) \Biggr] \,,\\ &S_{\rm AH} =\int\dd^4 x\sqrt{-g} \Biggl[ -\frac{1}{4}F_{\mu\nu}F^{\mu\nu} -g^{\mu\nu}\DD_\mu\Phi\overline{\DD_\nu \Phi} -V_{\Phi}(|\Phi |) \Biggr] \,.\label{eq: first action} } Here the potential $V_{\rm tun}(\sigma)$ is assumed to take a form that realizes the false vacuum decay, $\DD_\mu =\pd_\mu - ieA_\mu$ is the gauge-covariant derivative, $F_{\mu\nu} = \partial_{\mu} A_\nu-\partial_\nu A_\mu$ is the field strength of the gauge field, and an overbar denotes the complex conjugate. Since the potential term of $\Phi$ depends only on its absolute value, this action has the local U(1) symmetry: \al{ \Phi\rightarrow \Phi\, e^{i\alpha (x)} \,,\ \ \ A_\mu\rightarrow A_\mu -\frac{1}{e}\pd_\mu\alpha (x) \,.\label{eq:local U(1) symmetry} } However, if $\Phi$ acquires a non-zero vacuum expectation value, $\ave{\Phi} \neq 0$, then the local U(1) symmetry is spontaneously broken.
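For concreteness, the mass generation invoked here can be checked in two lines (standard Higgs-mechanism algebra written in the notation of this paper, with $\Phi =\varphi\, e^{i\Theta}$, $\varphi$ constant, and $\mcA_\mu\equiv A_\mu -\pd_\mu\Theta /e$):

```latex
\al{
\DD_\mu\Phi
&=\left(\pd_\mu -ieA_\mu\right)\varphi\, e^{i\Theta}
=-ie\,\varphi\, e^{i\Theta}\left( A_\mu -\frac{1}{e}\pd_\mu\Theta\right)
=-ie\,\varphi\, e^{i\Theta}\mcA_\mu \,,\\
g^{\mu\nu}\DD_\mu\Phi\,\overline{\DD_\nu\Phi}
&=e^2\varphi^2\, g^{\mu\nu}\mcA_\mu\mcA_\nu \,,
}
```

so the phase $\Theta$ is absorbed into the gauge-invariant combination $\mcA_\mu$ and the gauge field acquires the effective mass $m_A=e\varphi$.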
In that case, the phase degree of freedom is absorbed into the vector field and the gauge field becomes massive, as is well known from the Higgs mechanism. In this paper, we consider a simple open inflation model in which the bubble nucleation can be well described by the single-field CDL instanton~\cite{Coleman:1977py,Coleman:1980aw} on the exact de Sitter spacetime with the Hubble parameter $H$\,. Hence we assume that the tunneling transition can be described by a Euclidean $O(4)$-symmetric bounce solution on a Euclidean de Sitter geometry. \bc \begin{figure}[tbp] \bc \includegraphics[width=80mm]{Penrose_diagram.eps} \caption{ Penrose diagram of the bubble-nucleating universe. } \label{fig:Penrose_diagram} \ec \end{figure} \ec The Euclidean geometry can then be well described by the Euclidean de Sitter metric: \al{ \dd s^2=a_\rmE^2 (\eta_\rmE )\Bigl[\dd\eta_\rmE^2 +\dd r_\rmE^2 +\sin^2 r_\rmE\,\omega_{ab}\dd\theta^a\dd\theta^b\Bigr] \,,\label{eq:metric in E} } where $-\infty \leq\eta_\rmE\leq +\infty$\,, $0\leq r_\rmE\leq\pi$\,, $a_\rmE (\eta_\rmE )=1/(H\cosh\eta_\rmE )$\,, and $\omega_{ab}={\rm diag}(1,\sin^2\theta )$ denotes the metric on the unit $2$-sphere. The background geometry in the Lorentzian regime is obtained by analytic continuation of the bounce solution. The coordinates in the Lorentzian regime are \al{ &\eta_\rmE =\eta =-\eta_\rmR -\frac{\pi}{2}i=\eta_\rmL +\frac{\pi}{2}i \,,\label{eq:eta relation}\\ &r_\rmE =-ir +\frac{\pi}{2}=-ir_\rmR =-ir_\rmL \,,\label{eq:r relation}\\ &a_\rmE =a =ia_\rmR =ia_\rmL \,. } Each set of these coordinates covers one of three distinct parts of the Lorentzian de Sitter spacetime, called regions-R, L, and C. Hereafter, we suppress the subscript C because we mainly work in the region-C. The Penrose diagram for this open FLRW universe is presented in Fig.~\ref{fig:Penrose_diagram}. As seen in Fig.~\ref{fig:Penrose_diagram}, the surfaces which respect the maximal symmetry in the regions-R and L, i.e.
$\eta_{\rm R,L} =$const hypersurfaces are not Cauchy surfaces of the whole spacetime, and hence they are not appropriate to normalize mode functions (see e.g. \cite{Sasaki:1994yt,Yamamoto:1996qq}). In the region-C, however, $r =$const hypersurfaces behave as Cauchy surfaces. Therefore, we need to find the reduced action and properly construct the Klein-Gordon (KG) norm on a Cauchy surface in the region-C. The analytic continuation of eq.~\eqref{eq:metric in E} to the region-C is given by \al{ \dd s^2 =a^2 (\eta )\bar g_{\mu\nu}\dd x^\mu\dd x^\nu =a^2 (\eta ) \Bigl[ \dd\eta^2 -\dd r^2 +\cosh^2 r\,\omega_{ab}\dd\theta^a\dd\theta^b \Bigr] \,,\label{eq:metric in region-C} } where $a(\eta )=1/(H\cosh\eta )$\,. Note that in the region-C, $\eta={\rm const.}$ hypersurfaces are no longer spacelike, and $r$ instead of $\eta$ plays the role of a time coordinate there. \section{Reduced action and Klein-Gordon norm} \label{sec: KG norm} To describe the Euclidean vacuum state, we need a complete set of mode functions, which should be properly normalized on a Cauchy surface. In order to determine whether supercurvature modes of the U(1) gauge field exist or not, we thus have to construct the KG norm on a Cauchy surface and to check if the modes can be properly normalized. In this section, we discuss the quantization of the U(1) gauge field in the open chart of the de Sitter spacetime and derive the KG norm on a Cauchy surface. Since we are interested only in the gauge field, we hereafter neglect the quantum fluctuations of $\varphi\equiv |\Phi|$ and treat it as a non-vanishing constant by assuming that the mass squared $V_{\Phi}''(|\Phi|)$ around the potential minimum is large enough. Hence, it is convenient to decompose the complex scalar field $\Phi$ into its absolute value and phase as $\Phi (x)=\varphi\, e^{i \Theta (x)}$ with $\varphi ={\rm const}$\,.
Based on the gauge transformation property, eq.~\eqref{eq:local U(1) symmetry}, one can construct a gauge-invariant variable. In the case of a nonvanishing coupling constant, $e\neq 0$, one possible choice of such a variable is \al{ \mcA_\mu\equiv A_\mu -\frac{1}{e}\pd_\mu\Theta \,. \label{eq:def-mcAmu} } This is chosen as an appropriate variable in the unitary gauge: $\Theta =0$\,. With these assumptions, the relevant part of the action \eqref{eq: first action} can be written in terms of the gauge-invariant variable: \al{ S_{\rm eff} =-\int\dd^4 x\sqrt{-\bar g} \biggl[ \frac{1}{4}\bar g^{\mu\alpha}\bar g^{\nu\beta} \left(\pd_\mu\mcA_\nu -\pd_\nu\mcA_\mu\right)\left(\pd_\alpha\mcA_\beta -\pd_\beta\mcA_\alpha\right) +a^2m_A^2\,\bar g^{\mu\nu}\mcA_\mu\mcA_\nu \biggr] \,,\label{eq:relevant action} } where $\bar g_{\mu\nu}$ is the conformally transformed metric defined in eq.~\eqref{eq:metric in region-C} and we have introduced $m_A=e\varphi$ to denote the effective mass of the gauge field. As we mentioned above, we need to work in the region-C, where the background configuration is spatially inhomogeneous. Hence we should expand perturbations in a way that respects the symmetry of the $2$-sphere rather than that of the $3$-hyperboloid on which harmonics of various types are defined (see Appendix \ref{sec:Harmonics in open universe}). To rewrite the action \eqref{eq:relevant action} in terms of the $(1+1+2)$ decomposition, let us decompose $A_\mu$ into the variables $\{A_\eta\,,A_r\,,A_a\}$ with $a=\theta\,,\phi$. Note that $A_\eta$ and $A_r$ behave as the even parity modes with respect to the two-dimensional rotation.
Since $A_a$ behaves as a $2$-vector, we can further decompose it into the even and odd parity parts as \al{ A_a =A^{({\rm e})}_{:a}+\epsilon_a{}^bA_{:b}^{({\rm o})} \,,\label{eq:even odd decomposition} } where we have introduced the colon ( $:$ ) to denote the covariant derivative with respect to the unit $2$-sphere metric $\omega_{ab}$\,, and $\epsilon^a{}_b$ is the unit anti-symmetric tensor on the unit $2$-sphere, which are defined in eqs.~\eqref{eq:intrinsic covariant derivative}, \eqref{eq:antisymmetric tensor}, respectively. Before constructing the reduced action, let us consider the boundary conditions at $\eta\rightarrow\pm\infty$ for the gauge-invariant variables. Since the boundary of the open slice of the de Sitter spacetime, that is $\eta\rightarrow\pm\infty$\,, is regular, we can impose the condition that scalar gauge-invariant quantities such as $F_{\mu\nu}F^{\mu\nu}$ be all regular at $\eta\rightarrow \pm\infty$\,. In the case of a nonvanishing coupling constant, we can construct another gauge-invariant quantity $\mcA_\mu$ as defined in (\ref{eq:def-mcAmu}). We can then impose the condition that the tetrad components of the gauge-invariant vector, $\mcA_\mu e^\mu_{(\alpha )}=(\mcA_\eta /a\,,\mcA_r/a\,,\mcA_a/a)$\,, be regular at $\eta\rightarrow\pm\infty$\,. Here, $\{e^\mu_{(\alpha )}\}$ is a tetrad basis of the de Sitter spacetime. In consequence, $\mcA_\eta$\,,$\mcA_r$\,, and $\mcA_a$ have to decay as fast as (or faster than) $e^{-|\eta |}$ at $\eta\rightarrow\pm\infty$\,. Hereafter, using the U(1) gauge degree of freedom, we adopt the unitary gauge: $\Theta =0$\,. The gauge-invariant vector $\mcA_\mu$ then reduces to the original gauge field $A_\mu$.
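The quoted decay rate follows directly from the asymptotics of the scale factor; explicitly (a one-line check using $a(\eta )=1/(H\cosh\eta )$):

```latex
\al{
a(\eta )=\frac{1}{H\cosh\eta}\sim\frac{2}{H}\, e^{-|\eta |}
\quad (\eta\rightarrow\pm\infty )
\quad\Longrightarrow\quad
\frac{\mcA_\mu}{a}\ \text{regular}
\ \Longleftrightarrow\
\mcA_\mu =\mcO\bigl( e^{-|\eta |}\bigr) \,.
}
```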
We now expand the perturbations in terms of the spherical harmonics as \al{ &A_\eta (\eta ,r,\Omega ) =\sum_{\ell m}A_\eta^{\ell m}(\eta ,r)Y_{\ell m}(\Omega ) \,,\ \ A_r (\eta ,r,\Omega ) =\sum_{\ell m}A_r^{\ell m}(\eta ,r)Y_{\ell m}(\Omega ) \,,\label{eq:spherical expansion 1}\\ &A^{(\lambda )}(\eta ,r,\Omega ) =\sum_{\ell m}A^{(\lambda )\ell m}(\eta ,r)Y_{\ell m}(\Omega ) \, \label{eq:spherical expansion 2} } where $\lambda ={\rm e}$ and ${\rm o}$\,. It is straightforward to express the action in terms of the coefficients of the spherical harmonic expansion. We then find that the action can be decomposed into the even and odd parity parts as \al{ S_{\rm eff}=S^{({\rm e})}+S^{({\rm o})} \,,\label{eq:reduced action} } where \al{ &S^{({\rm e})} =\frac{1}{2}\sum_{\ell m}\int\dd r\dd\eta \biggl\{ \cosh^2 r\left(\pd_r A_\eta^{\ell m} -\pd_\eta A_r^{\ell m}\right)^2 +a^2m_A^2\cosh^2 r \biggl[ \left( A_r^{\ell m}\right)^2 -\left( A_\eta^{\ell m}\right)^2 \biggr] \notag\\ &\quad +\ell (\ell +1) \biggl[ \left(\pd_r A^{({\rm e})\ell m} -A_r^{\ell m}\right)^2 -\left(\pd_\eta A^{({\rm e})\ell m}-A_\eta^{\ell m}\right)^2 -a^2m_A^2\left( A^{({\rm e})\ell m}\right)^2 \biggr] \biggr\} \,,\label{eq:even mode action} } is the action for the even parity modes, and \al{ &S^{({\rm o})} =\frac{1}{2}\sum_{\ell m} \ell (\ell +1)\int\dd r\dd\eta \biggl\{ \left(\pd_r A^{({\rm o})\ell m}\right)^2 -\left(\pd_\eta A^{({\rm o})\ell m}\right)^2 -\left(\frac{\ell (\ell +1)}{\cosh^2 r}+a^2m_A^2\right)\left( A^{({\rm o})\ell m}\right)^2 \biggr\} \,,\label{eq:odd mode action} } is the action for the odd parity modes. Since the U(1) gauge field contains an auxiliary variable, or non-dynamical degree of freedom, we need to remove it to find the appropriate KG norm that contains only the physical degrees of freedom. In the region-C, $A_r$ rather than $A_\eta$ behaves as the auxiliary variable which does not have the time kinetic term. 
Varying the action with respect to $A_r$\,, we have the constraint equation, which is given by \al{ \hat\mcO A_r^{\ell m} \equiv \biggl[-\pd_\eta^2 +m_A^2a^2+\frac{\ell (\ell +1)}{\cosh^2 r}\biggr] A_r^{\ell m} =\frac{\ell (\ell +1)}{\cosh^2 r}\pd_r A^{({\rm e})\ell m}-\pd_\eta\pd_r A_\eta^{\ell m} \,.\label{eq:constraint eq} } The even parity modes contain two degrees of freedom. One of them is the $3$-dimensional scalar mode, and the other is the $3$-dimensional even parity vector mode. In order to decompose the action properly we introduce the even parity vector-type variable by \al{ V^{({\rm e})\ell m} \equiv A^{({\rm e})\ell m}+\hat\mcK^{-1}\pd_\eta A_\eta^{\ell m} \,,\label{eq:V^e def} } where $\hat\mcK$ is a derivative operator given by \al{ \hat\mcK =-\pd_\eta^2 +m_A^2a^2 \,. } We then switch $\{A_\eta^{\ell m} ,A^{({\rm e})\ell m}\}$ to $\{A_\eta^{\ell m}, V^{({\rm e})\ell m}\}$ as dynamical degrees of freedom. In terms of the new set of variables, namely $\{A_\eta^{\ell m},V^{({\rm e})\ell m}\}$\,, the constraint equation \eqref{eq:constraint eq} can be rewritten as \al{ A_r^{\ell m} =\hat\mcO^{-1}\frac{\ell (\ell +1)}{\cosh^2 r}\pd_r V^{({\rm e})\ell m} -\hat\mcK^{-1}\pd_\eta\pd_r A_\eta^{\ell m} \,.\label{eq:constraint eq 2} } Note that we need to specify a boundary condition to properly define $\hat\mcK^{-1}\pd_\eta A_\eta^{\ell m}$ in eq.~\eqref{eq:V^e def} since it contains an inverse operator. Different boundary conditions would lead to different prescriptions for the decomposition of $A^{({\rm e})\ell m}$. Note that the boundary condition for $\hat\mcK^{-1}\pd_\eta A_\eta^{\ell m}$ must be consistent with the boundary condition for the source $\pd_\eta A_\eta^{\ell m}$ but otherwise can be specified arbitrarily for our convenience. We have already imposed the boundary condition that $A_\eta^{\ell m}\,,A_r^{\ell m}\,, A^{({\rm e})\ell m}$ decay as fast as (or faster than) $e^{-|\eta|}$ at $\eta\rightarrow\pm\infty$\,. 
In particular, this boundary condition for $A_\eta^{\ell m}$ makes it possible for us to impose the boundary condition that $\hat\mcK^{-1}\pd_\eta A_\eta^{\ell m}$ also decay as fast as (or faster than) $e^{-|\eta |}$ at $\eta\rightarrow\pm\infty$\,. The boundary condition for $A^{({\rm e})\ell m}$ then implies that $V^{({\rm e})\ell m}$ also decays as fast as (or faster than) $e^{-|\eta |}$ at $\eta\rightarrow\pm\infty$\,. Substituting eqs.~\eqref{eq:V^e def} and \eqref{eq:constraint eq 2} into eq.~\eqref{eq:even mode action}\,, after a lengthy calculation, we obtain the reduced action only for the dynamical degrees of freedom. The resultant reduced action is given by \al{ S^{({\rm e})}=S^{({\rm e})}_{\rm s}+S^{({\rm e})}_{\rm v} \,,\label{eq:reduced action} } where \al{ S^{({\rm e})}_{\rm s} =&\frac{1}{2}\sum_{\ell m}\int\dd r\dd\eta \biggl\{ \cosh^2 r\left(\pd_r A_\eta^{\ell m}\right) \Bigl[ 1+\pd_\eta\hat\mcK^{-1}\pd_\eta\Bigr] \left(\pd_r A_\eta^{\ell m}\right) \notag\\ &\quad\quad -\ell (\ell +1)A_\eta^{\ell m}\Bigl[ 1+\pd_\eta\hat\mcK^{-1}\pd_\eta\Bigr] A_\eta^{\ell m} -m_A^2a^2\cosh^2 r\left( A_\eta^{\ell m}\right)^2 \biggr\} \,, } for the scalar mode, and \al{ S^{({\rm e})}_{\rm v} =&\frac{1}{2}\sum_{\ell m}\ell (\ell +1)\int\dd r\dd\eta \biggl\{ \left(\pd_r V^{({\rm e})\ell m}\right) \biggl[ 1-\hat\mcO^{-1}\frac{\ell (\ell +1)}{\cosh^2 r}\biggr] \left(\pd_r V^{({\rm e})\ell m}\right) -V^{({\rm e})\ell m}\hat\mcK\, V^{({\rm e})\ell m} \biggr\} \,, } for the vector mode. Here, we have used the boundary conditions for $A_\eta^{\ell m}$ and $V^{({\rm e})\ell m}$ to show that some boundary terms, such as those including the interaction between $A_\eta^{\ell m}$ and $V^{({\rm e})\ell m}$, vanish. Hence the scalar and vector modes are completely decoupled in the action for the even parity mode.
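The inverse operators $\hat\mcK^{-1}$ and $\hat\mcO^{-1}$ used above are well defined once the decaying boundary conditions are fixed. A finite-difference sketch (our illustration only; units $H=m_A=1$, a truncated $\eta$ grid, and zero boundary data standing in for the $e^{-|\eta |}$ decay are all assumptions) shows that a discretization of $\hat\mcK =-\pd_\eta^2+m_A^2a^2$ has a strictly positive spectrum and can therefore be inverted against a decaying source:

```python
import numpy as np

# Discretize K = -d^2/deta^2 + m_A^2 a^2 with a = 1/(H cosh eta), H = m_A = 1,
# on eta in [-L, L] with zero (decaying) boundary data.
L, n = 20.0, 800
eta, h = np.linspace(-L, L, n, retstep=True)
V = 1.0 / np.cosh(eta) ** 2          # the potential m_A^2 a^2 = sech^2(eta)

K = (np.diag(2.0 / h**2 + V)
     - np.diag(np.ones(n - 1) / h**2, 1)
     - np.diag(np.ones(n - 1) / h**2, -1))

# The potential is non-negative, so the spectrum of K is strictly positive:
# K is invertible, and u = K^{-1} source is unique once the boundary data
# are fixed.
smallest = np.linalg.eigvalsh(K).min()

source = np.exp(-eta ** 2)           # a smooth, rapidly decaying test source
u = np.linalg.solve(K, source)       # u = K^{-1} source
```

Positivity of the spectrum is what makes the decomposition of $A^{({\rm e})\ell m}$ into $\{A_\eta^{\ell m},V^{({\rm e})\ell m}\}$ unambiguous under these boundary conditions; it also anticipates the absence of negative-$p^2$ (supercurvature) eigenmodes found in the next section.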
We can now define the KG norm by using the reduced actions obtained above, following and extending \cite{Mukohyama:1999kj}.~\footnote{As we shall see below, an equivalent method to define the KG norm is provided by a general formula derived in Appendix \ref{sec:KG norm}. An advantage of this alternative method is that it can be applied without eliminating auxiliary fields in the action. The result is of course the same, as far as the boundary conditions specified above are imposed.} To quantize the system of the U(1) gauge field, we promote the physical degrees of freedom ${\bm A}$ to operators $\hat{\bm A}$\,, and expand $\hat{\bm A}$ by mode functions $\{{\bm A}_\mcN\,,\overline{{\bm A}_\mcN}\}$ as \al{ \hat{\bm A}(x) =\sum_\mcN\biggl[\hat a_\mcN{\bm A}_\mcN (x)+\hat a_\mcN^\dagger\overline{{\bm A}_\mcN (x)}\biggr] \,, } where $\hat a_\mcN$ and $\hat a_\mcN^\dagger$ are the annihilation and creation operators, respectively, that satisfy the commutation relation, $[\hat a_\mcN,\hat a_\mcM^\dagger ]=\delta_{\mcN\mcM}$\,. The quantum fluctuations of the field are described by the vacuum state $|0\rangle$ such that $\hat a_\mcN|0\rangle =0$ for any $\mcN$\,. We note that $\{{\bm A}_\mcN\,,\overline{{\bm A}_\mcN}\}$ should form a complete set of linearly independent solutions of the equation of motion. 
With these variables, we define the KG norms as \al{ &\left({\bm A}_\mcN ,{\bm A}_\mcM\right)_{\rm KG}^{({\rm s})} \notag\\ & =-i\cosh^2 r\int\dd\eta\dd\Omega \biggl\{ A_{\eta ,\mcN}\Bigl[ 1+\pd_\eta\hat\mcK^{-1}\pd_\eta\Bigr]\pd_r\overline{A_{\eta ,\mcM}} -\Bigl[ 1+\pd_\eta\hat\mcK^{-1}\pd_\eta\Bigr]\pd_r A_{\eta ,\mcN}\overline{A_{\eta ,\mcM}} \biggr\} \,,\label{eq:even scalar KG norm} } for the even parity scalar modes, \al{ &\left({\bm A}_\mcN ,{\bm A}_\mcM\right)_{\rm KG}^{({\rm v})} \notag\\ & =-i\ell (\ell +1)\int\dd\eta\dd\Omega \biggl\{ V^{({\rm e})}_\mcN\biggl[ 1-\hat\mcO^{-1}\frac{\ell (\ell +1)}{\cosh^2 r}\biggr]\pd_r\overline{ V^{({\rm e})}_\mcM} -\biggl[ 1-\hat\mcO^{-1}\frac{\ell (\ell +1)}{\cosh^2 r}\biggr]\pd_r V^{({\rm e})}_\mcN\overline{V^{({\rm e})}_\mcM} \biggr\} \,,\label{eq:even vector KG norm} } for the even parity vector modes, and \al{ &\left({\bm A}_\mcN,{\bm A}_\mcM\right)_{\rm KG}^{({\rm o})} =-i\ell (\ell +1)\int\dd\eta\dd\Omega \biggl\{ A_\mcN^{({\rm o})}\pd_r\overline{A_\mcM^{({\rm o})}} -\left(\pd_r A_\mcN^{({\rm o})}\right)\overline{A_\mcM^{({\rm o})}} \biggr\} \,,\label{eq:odd KG norm} } for the odd parity modes. With the KG norm defined above, all modes should be properly normalized on a Cauchy surface as \al{ \left({\bm A}_\mcN\,,{\bm A}_\mcM\right)_{\rm KG}^{(\lambda )} =\delta_{\mcN\mcM} \,,\label{eq:normalization} } with $\lambda ={\rm s}\,,{\rm v}\,,{\rm o}$\,. Once $A_\eta^{\ell m}$, $V^{({\rm e})\ell m}$ and $A^{({\rm o})\ell m}$ are properly evaluated by solving the equation of motion, we can calculate the KG norm through eqs.~\eqref{eq:even scalar KG norm}-\eqref{eq:normalization}. In some cases it is convenient to rewrite the reduced action and the KG norm in terms of auxiliary fields.
Introducing the new auxiliary fields, $\mcS_r^{\ell m}$ and $V_r^{\ell m}$\,, which obey the constraint equations: \al{ \hat\mcK\,\mcS_r^{\ell m}=-\pd_\eta\pd_r A_\eta^{\ell m} \,,\ \ \ \hat\mcO\, V_r^{\ell m}=\frac{\ell (\ell +1)}{\cosh^2 r}\pd_r V^{({\rm e})\ell m} \,,\label{eq:constraint eq 3} } $A_r^{\ell m}$ in eq.~\eqref{eq:constraint eq 2} can be reduced to \al{ A_r^{\ell m}=V_r^{\ell m}+\mcS_r^{\ell m} \,, } and we can use $\mcS_r^{\ell m}$ and $V_r^{\ell m}$ as the auxiliary fields for the scalar and vector modes rather than $A_r^{\ell m}$. With these variables, the reduced actions for the scalar- and vector-modes are rewritten as \al{ S^{({\rm e})}_{\rm sca} =&\frac{1}{2}\sum_{\ell m}\int\dd r\dd\eta \biggl\{ \cosh^2 r\left(\pd_r A_\eta^{\ell m}-\pd_\eta\mcS_r^{\ell m}\right)^2 +m_A^2a^2\cosh^2 r\left(\mcS_r^{\ell m}\right)^2 \notag\\ &\quad\quad -\ell (\ell +1)A_\eta^{\ell m}\Bigl[ 1+\pd_\eta\hat\mcK^{-1}\pd_\eta\Bigr] A_\eta^{\ell m} -m_A^2a^2\cosh^2 r\left( A_\eta^{\ell m}\right)^2 \biggr\} \,,\\ S^{({\rm e})}_{\rm vec} =&\frac{1}{2}\sum_{\ell m}\int\dd r\dd\eta \biggl\{ \ell (\ell +1)\left(\pd_r V^{({\rm e})\ell m}-V_r^{\ell m}\right)^2 \notag\\ &\quad\quad +\cosh^2 rV_r^{\ell m}\hat\mcK V_r^{\ell m} -\ell (\ell +1)V^{({\rm e})\ell m}\hat\mcK V^{({\rm e})\ell m} \biggr\} \,. 
} Following the same step as discussed in Appendix \ref{sec:KG norm}, we can define the KG norm in terms of the auxiliary fields as \al{ &\left({\bm A}_\mcN ,{\bm A}_\mcM\right)_{\rm KG}^{({\rm s})} =-i\cosh^2 r\int\dd\eta\dd\Omega \biggl\{ A_{\eta ,\mcN}\left(\pd_r\overline{A_{\eta ,\mcM}}-\pd_\eta\overline{\mcS_{r,\mcM}}\right) -\left(\pd_r A_{\eta ,\mcN}-\pd_\eta\mcS_{r,\mcN}\right)\overline{A_{\eta ,\mcM}} \biggr\} \,,\\ &\left({\bm A}_\mcN ,{\bm A}_\mcM\right)_{\rm KG}^{({\rm v})} =-i\ell (\ell +1)\int\dd\eta\dd\Omega \biggl\{ V^{({\rm e})}_\mcN\left(\pd_r\overline{ V^{({\rm e})}_\mcM}-\overline{V_{r,\mcM}}\right) -\left(\pd_r V^{({\rm e})}_\mcN -V_{r,\mcN}\right)\overline{V^{({\rm e})}_\mcM} \biggr\} \,, } for the even parity scalar and vector modes, respectively, where the auxiliary fields $\mcS_r^{\ell m}$ and $V_r^{\ell m}$ are determined by the constraint equation \eqref{eq:constraint eq 3}. It is easy to see that these expressions for the KG norm are equivalent to \eqref{eq:even scalar KG norm}-\eqref{eq:normalization}. \section{Mode functions} \label{sec: Mode functions} In this section we construct a complete set of mode functions. Since odd and even parity sectors are decoupled, we shall investigate each sector separately. \subsection{Odd parity modes} First we consider the odd parity sector. The odd parity sector contains one dynamical degree of freedom, which corresponds to the odd parity part of a $3$-dimensional transverse vector. Let us construct a set of positive frequency functions corresponding to the variable $A^{({\rm o})}$\,. 
Varying the action \eqref{eq:odd mode action} with respect to $A^{({\rm o})}$\,, we obtain \al{ \biggl[ \pd_r^2 -\pd_\eta^2 +\frac{\ell (\ell +1)}{\cosh^2 r}+m_A^2a^2 \biggr] A^{({\rm o})\ell m}=0 \,. } In order to solve this equation, we expand $A^{({\rm o})}$ as \al{ A^{({\rm o})\ell m}(\eta ,r) =\sum_p v^{({\rm o})}_p(\eta )\left(\frac{1}{\sqrt{\ell (\ell +1)}}\cosh rf^{p\ell}(r)\right) \,, } where the ``summation'' on the r.h.s. should be understood as the integral over continuum modes ($p^2>0$) plus the summation over discrete modes ($p^2<0$), if any. Here, we have fixed the coefficient in front of $f^{p\ell}(r)$ so that the expression inside the parentheses, when multiplied by $Y_{\ell m:b}\epsilon^b{}_a$\,, corresponds to the odd-mode vector-type harmonic function on a unit $3$-hyperboloid analytically continued to the region-C (see eqs.~\eqref{eq:Odd normalization} and \eqref{eq:r relation}). Appendix \ref{sec:Harmonics in open universe} summarizes the characteristics of the scalar- and vector-type harmonic functions in the open universe. The equation for $f^{p\ell}$ is given by \al{ \biggl[ -\frac{1}{\cosh^2 r}\frac{\dd}{\dd r}\left(\cosh^2 r\frac{\dd}{\dd r}\right) -\frac{\ell (\ell +1)}{\cosh^2 r} \biggr] f^{p\ell}(r)=(p^2+1)f^{p\ell}(r) \,.\label{eq:f eq} } Adopting the Euclidean vacuum state as a natural choice after quantum tunneling, we impose that the positive frequency functions are regular at $r=0$\, (see eq.~\eqref{eq:f_pl 1}).
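The eigenvalue in eq.~\eqref{eq:f eq} is $p^2+1$ rather than $p^2$ because of the factor $\cosh r$ in the expansion. As a short intermediate check, which we spell out here for completeness, rewriting eq.~\eqref{eq:f eq} as $\pd_r^2f^{p\ell}+2\tanh r\,\pd_rf^{p\ell}+\ell (\ell +1)f^{p\ell}/\cosh^2 r=-(p^2+1)f^{p\ell}$ gives

```latex
% separation of variables for the odd parity wave equation
\al{
\biggl[\pd_r^2+\frac{\ell (\ell +1)}{\cosh^2 r}\biggr]\left(\cosh r\,f^{p\ell}\right)
=\cosh r\left[ f^{p\ell}+2\tanh r\,\pd_r f^{p\ell}+\pd_r^2 f^{p\ell}
+\frac{\ell (\ell +1)}{\cosh^2 r}f^{p\ell}\right]
=-p^2\cosh r\,f^{p\ell}
\,,
}
```

so that the wave equation separates, mode by mode, into $\bigl[-\pd_\eta^2+m_A^2a^2\bigr]v^{({\rm o})}_p=p^2v^{({\rm o})}_p$\,, i.e. the $\cosh r$ prefactor shifts the eigenvalue from $p^2+1$ to $p^2$.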
We then have the explicit expression for $f^{p\ell}$ as \al{ f^{p\ell}(r) =\sqrt{\frac{\Gamma (ip+\ell +1)\Gamma (-ip+\ell +1)}{i\Gamma (ip)\Gamma (-ip)\cosh r}} P^{-\ell -\frac{1}{2}}_{ip-\frac{1}{2}}(i\sinh r) \,,\label{eq:f^pl} } where $P^\mu_\nu$ is the associated Legendre function of the first kind, and we have fixed the normalization constant so that the analytic continuation of $Y^{p\ell m}(r,\Omega )\equiv f^{p\ell}(r)Y_{\ell m}(\Omega )$ to the region-R or L behaves as a harmonic function properly normalized on a unit $3$-hyperboloid (see Appendix \ref{sec:Harmonics in open universe}). In this expression, $v^{({\rm o})}$ is an eigenfunction of the operator $\hat\mcK$ with the eigenvalue $p^2$, that is, $v^{({\rm o})}_p$ satisfies \al{ \hat\mcK v^{({\rm o})}_p =\biggl[-\frac{\dd^2}{\dd\eta^2}+m_A^2a^2\biggr] v^{({\rm o})}_p=p^2v^{({\rm o})}_p \,.\label{eq:v^o eq} } The boundary condition for $A^{({\rm o})\ell m}$ is that $v^{({\rm o})}_p$ should decay as fast as (or faster than) $e^{-|\eta |}$ at $\eta\rightarrow \pm\infty$. Since the effective potential $m_A^2a^2$ is clearly positive definite, this in particular implies that there is no solution with negative $p^2$. It is thus concluded that there is no supercurvature mode ($p^2<0$ mode) in the odd parity sector. To find the two independent solutions for $v^{({\rm o})}_p$ with $p^2> 0$, it is useful to introduce the two normalized orthogonal solutions $\varpi_{\pm ,p}$ which satisfy \al{ \hat\mcK\varpi_{\pm ,p} =\biggl[-\frac{\dd^2}{\dd\eta^2}+m_A^2a^2\biggr]\varpi_{\pm ,p}=p^2\varpi_{\pm ,p} \,.\label{eq:varpi eq} } Since $a=1/H\cosh\eta$\,, it is easy to solve this equation, and the general solution is \al{ \varpi_{\pm ,p} =C_{1,p}^{\pm}\,P_{\nu'}^{ip}(-\tanh\eta ) +C_{2,p}^{\pm}\,P_{\nu'}^{-ip}(-\tanh\eta ) \,,\label{eq:exact sol} } where $C_{1,p}^{\pm}$ and $C_{2,p}^{\pm}$ are constants, \al{ &\nu' =\sqrt{\frac{9}{4}-\frac{M_{\rm eff}^2}{H^2}}-\frac{1}{2} \,,\ \ M_{\rm eff}^2=m^2+2H^2 \,. 
\label{eq:nu'} } To construct the independent solutions, let us consider the scattering problem for $\varpi_{\pm ,p}$. Since the solutions asymptotically approach linear combinations of the plane waves $e^{\pm ip\eta}$ as $\eta\to\pm\infty$\,, the equation \eqref{eq:varpi eq} describes incident plane waves interacting with the effective potential $m_A^2a^2$ and producing reflected and transmitted waves to $\eta\to\pm\infty$ and $\eta\to\mp\infty$\,, respectively. We then take the two independent solutions having the following asymptotic behaviors: \al{ \varpi_{+,p} \rightarrow \Biggl\{ \begin{array}{ll} \rho_{+,p}e^{+ip\eta}+e^{-ip\eta} &:\ \eta\rightarrow +\infty\\ \sigma_{+,p}e^{-ip\eta} &:\ \eta\rightarrow -\infty\\ \end{array} \,,\label{eq:varpi+}\\ \varpi_{-,p} \rightarrow \Biggl\{ \begin{array}{ll} \rho_{-,p} e^{-ip\eta}+e^{+ip\eta} &:\ \eta\rightarrow -\infty\\ \sigma_{-,p}e^{+ip\eta} &:\ \eta\rightarrow +\infty\\ \end{array} \,.\label{eq:varpi-} } The reflection and transmission coefficients satisfy the following Wronskian relations~\cite{Garriga:1998he}: \al{ |\rho_{\pm ,p}|^2+|\sigma_{\pm ,p}|^2=1 \,,\ \ \sigma_{+,p}=\sigma_{-,p} \,,\ \ \sigma_{+,p}\overline{\rho_{-,p}}+\overline{\sigma_{-,p}}\rho_{+,p}=0 \,.\label{eq:Wronskian relations} } These solutions are shown to be orthogonal, \al{ \int^\infty_{-\infty}\dd\eta\,\varpi_{\sigma ,p}\overline{\varpi_{\sigma' ,p'}} =2\pi\delta_{\sigma\sigma'}\delta_{\rm D} (p-p') \,.\label{eq:orthognality condition} } Comparing the asymptotic behavior of the exact solution \eqref{eq:exact sol} with eqs.~\eqref{eq:varpi+}-\eqref{eq:Wronskian relations}\,, we find the corresponding coefficients as \al{ &C_{1,p}^+ =\frac{\Gamma (-ip-\nu' )\Gamma (1-ip+\nu' )}{\Gamma (-ip)} \,,\ \ C_{2,p}^+ =0 \,, \label{eq:C_2,p^+}\\ &C_{1,p}^- =\frac{\sin (\pi\nu' )}{\pi}\Gamma (1-ip)\Gamma (-ip-\nu' )\Gamma (1-ip+\nu' ) \,,\ \ C_{2,p}^- =\Gamma (1-ip) \,.\label{eq:C_2,p^-} } The two independent solutions for the odd parity modes are expressed in
terms of these solutions, namely $v^{({\rm o})}_{\pm,p}\propto\varpi_{\pm ,p}$\,. If $\eta$ were the time variable, then either one of $e^{\pm ip\eta}$ would be chosen by a boundary condition to specify a quantum state of the system (e.g. the Bunch-Davies vacuum). However, in the present situation, it is $r$ that plays the role of the time variable, and thus a quantum state of the system is chosen by imposing a boundary condition on the $r$-dependent part of the mode functions, as in eq.~\eqref{eq:f^pl}\,. Hence both $e^{\pm ip\eta}$ should be treated as independent mode functions and are needed for the construction of a complete set of mode functions. In order to quantize the perturbations, we introduce a variable defined by \al{ A^{({\rm o})}_{\sigma p\ell m}(\eta ,r,\Omega ) =N_p^{({\rm o})}\varpi_{\sigma ,p}(\eta )\left(\frac{1}{\sqrt{\ell (\ell +1)}}\cosh rf^{p\ell}(r)Y_{\ell m}(\Omega )\right) \,, \label{eq:odd-modefunction} } where $N_p^{({\rm o})}$ is a normalization constant. Recalling that we have already proven the absence of supercurvature modes in the odd parity sector (see eq.~\eqref{eq:v^o eq}) and substituting this into eq.~\eqref{eq:odd KG norm}\,, we require \al{ ({\bm A}_{\sigma p\ell m},{\bm A}_{\sigma' p'\ell' m'})^{({\rm o})}_{\rm KG} =&4p\sinh (\pi p)\left( N^{({\rm o})}_p\right)^2\delta_{\sigma\sigma'}\delta_{\rm D}(p-p')\delta_{\ell\ell'}\delta_{mm'} \notag\\ =&\delta_{\sigma\sigma'}\delta_{\rm D} (p-p')\delta_{\ell\ell'}\delta_{mm'} \,, } where we have used the orthogonality condition for $\varpi_{\sigma ,p}$ (see eq.~\eqref{eq:orthognality condition})\,. Hence the KG normalization condition implies that the normalization constant is given by \al{ N^{({\rm o})}_p=\frac{1}{2\sqrt{p\sinh (\pi p)}} \,.
\label{eq:odd-normalization} } In summary, we have shown that there is no supercurvature mode in the odd parity sector and that continuous odd parity modes (with $p^2>0$) are given by \eqref{eq:odd-modefunction} with \eqref{eq:odd-normalization}, \eqref{eq:exact sol}-\eqref{eq:nu'} and \eqref{eq:C_2,p^+}-\eqref{eq:C_2,p^-}. \subsection{Even parity modes} In this subsection, we construct a complete set of mode functions in the even parity sector. The equations for $A_\eta^{\ell m}$ and $V^{({\rm e})\ell m}$ are given by \al{ &\biggl[-\frac{1}{\cosh^2 r}\pd_r\left(\cosh^2 r\pd_r\right) -\frac{\ell (\ell +1)}{\cosh^2 r}\biggr] \left( 1+\pd_\eta\hat\mcK^{-1}\pd_\eta\right) A_\eta^{\ell m} =m_A^2a^2A_\eta^{\ell m} \,, } for the $3$-dimensional scalar modes, and \al{ \hat\mcK\, V^{({\rm e})\ell m} =-\pd_r\biggl[\left( 1-\hat\mcO^{-1}\frac{\ell (\ell +1)}{\cosh^2 r}\right)\pd_r V^{({\rm e})\ell m}\biggr] \,, } for the $3$-dimensional vector modes. Assuming $m_A^2\neq 0$\,, we then expand $A_\eta^{\ell m}$ and $V^{({\rm e})\ell m}$ as \al{ &A_\eta^{\ell m}(\eta ,r) =\sum_p\frac{\chi_p (\eta )}{a(\eta )}f^{p\ell}(r) \,,\\ &V^{({\rm e})\ell m}(\eta ,r) =\sum_p v_p^{({\rm e})}(\eta ) \biggl[ \frac{1}{\sqrt{\ell (\ell +1)}p} \frac{\dd}{\dd r}\left( \cosh r\,f^{p\ell}(r)\right) \biggr] \,, } where the coefficients have been fixed so that the analytic continuations of these functions to region-R or L correspond to the scalar- and vector-type harmonic functions defined in Appendix \ref{sec:Harmonics in open universe}. Thus, the equations for $\chi_p$ and $v^{({\rm e})}_p$ can be reduced to (noting that $a\,\dd^2 (1/a)/\dd\eta^2=1$ for $a=1/H\cosh\eta$\,) \al{ &\biggl[-\frac{\dd^2}{\dd\eta^2}+\left( m_A^2a^2+a\frac{\dd^2}{\dd\eta^2}\left(\frac{1}{a}\right) -1\right)\biggr]\chi_p = \biggl[-\frac{\dd^2}{\dd\eta^2}+m_A^2a^2\biggr]\chi_p =p^2\chi_p \,,\\ &\biggl[-\frac{\dd^2}{\dd\eta^2}+m_A^2a^2\biggr] v_p^{({\rm e})}=p^2v_p^{({\rm e})} \,.
} These equations for $\chi_p$ and $v^{({\rm e})}_p$ are exactly the same as the one for $\varpi_{\pm ,p}$ investigated in the previous subsection (see eqs.~\eqref{eq:varpi eq}-\eqref{eq:C_2,p^-}). The boundary conditions for $A_\eta^{\ell m}$ and $V^{({\rm e})\ell m}$ imply that $\chi_p$ and $v_p^{({\rm e})}$ decay as fast as (or faster than) $e^{-2|\eta |}$ and $e^{-|\eta |}$, respectively, at $\eta\rightarrow \pm\infty$. It is thus concluded that there is no supercurvature mode in the even parity sector for the same reason as in the odd parity sector, i.e. because of the positivity of the effective potential $m_A^2a^2$\,. It is also straightforward to repeat the same procedure as in the previous subsection to find a complete set of continuous ($p^2>0$) mode functions in the even parity sector. Mode functions are simply expressed in terms of $\varpi_{\pm ,p}$\,. To quantize the perturbations, we introduce the variables: \al{ &A_{\eta ,\sigma p\ell m}(\eta ,r,\Omega ) =N^{({\rm s})}_p\frac{\varpi_{\sigma ,p} (\eta )}{a(\eta )}f^{p\ell}(r)Y_{\ell m}(\Omega ) \,,\label{eq:A_eta variable}\\ &V^{({\rm e})}_{\sigma p\ell m}(\eta ,r,\Omega ) =N_p^{({\rm v})}\varpi_{\sigma ,p}(\eta ) \biggl[ \frac{1}{\sqrt{\ell (\ell +1)}p} \frac{\dd}{\dd r}\left( \cosh r\,f^{p\ell}(r)\right) Y_{\ell m}(\Omega ) \biggr] \,,\label{eq:V^e variable} } where $N^{({\rm s})}_p$ and $N^{({\rm v})}_p$ are normalization constants to be determined for the scalar and vector modes, respectively. We can determine the normalization constants by using the KG norms.
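The positivity argument invoked above for the absence of supercurvature modes can be made explicit; with $u_p$ a bookkeeping symbol for any of $v^{({\rm o})}_p$\,, $\chi_p$ or $v^{({\rm e})}_p$ obeying $[-\dd^2/\dd\eta^2+m_A^2a^2]u_p=p^2u_p$\,, multiplying the eigenvalue equation by $\overline{u_p}$ and integrating by parts, with the boundary terms vanishing thanks to the decay conditions at $\eta\rightarrow\pm\infty$\,, one obtains

```latex
% positivity of the effective potential excludes supercurvature modes
\al{
p^2\int^{\infty}_{-\infty}\dd\eta\,|u_p|^2
=\int^{\infty}_{-\infty}\dd\eta
\left[\left|\frac{\dd u_p}{\dd\eta}\right|^2+m_A^2a^2|u_p|^2\right]
\geq 0
\,,
}
```

so that any normalizable eigenmode necessarily has $p^2\geq 0$\,.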
Substituting eqs.~\eqref{eq:A_eta variable} and \eqref{eq:V^e variable} into the KG norm for the even parity modes defined in eqs.~\eqref{eq:even scalar KG norm} and \eqref{eq:even vector KG norm}, we have \al{ \left({\bm A}_{\sigma p\ell m},{\bm A}_{\sigma' p'\ell' m'}\right)^{({\rm s})}_{\rm KG} =&m_A^2\frac{4p\sinh (\pi p)}{p^2+1}\left( N_p^{({\rm s})}\right)^2 \delta_{\sigma\sigma'}\delta_{\rm D}(p-p')\delta_{\ell\ell'}\delta_{mm'} \,, } for the $3$-dimensional scalar mode, and \al{ \left({\bm A}_{\sigma p\ell m},{\bm A}_{\sigma' p'\ell' m'}\right)^{({\rm v})}_{\rm KG} =&4p\sinh (\pi p)\left( N_p^{({\rm v})}\right)^2 \delta_{\sigma\sigma'}\delta_{\rm D}(p-p')\delta_{\ell\ell'}\delta_{mm'} \,, } for the $3$-dimensional vector mode. When we require the normalization condition eq.~\eqref{eq:normalization}, the scalar and vector modes are normalized respectively as \al{ &N_p^{({\rm s})}=\frac{1}{2m_A}\sqrt{\frac{p^2+1}{p\sinh (\pi p)}} \,,\ \ \ N_p^{({\rm v})}=\frac{1}{2\sqrt{p\sinh (\pi p)}} \,.\label{eq:even parity mode normalization conditon} } In summary, we have found that there is no supercurvature mode in the even parity sector and that a complete set of even parity continuous mode functions is given by \eqref{eq:A_eta variable}, \eqref{eq:V^e variable} with \eqref{eq:even parity mode normalization conditon}. \section{Consistency of neutral case and decoupling limit} \label{sec: massless limit} In the previous section, we have shown that there is no supercurvature mode for a U(1) gauge field with both gauge and conformal symmetries spontaneously broken through the Higgs mechanism, for any value of the mass of the vector field. It is known that a scalar field $\phi$ with a sufficiently light effective mass, $0\leq m<\sqrt{2}H$, has a supercurvature mode~\cite{Sasaki:1994yt}, and the supercurvature mode survives the massless limit.
Furthermore, the existence of the supercurvature mode is essential for the recovery of the correct massless limit of the Wightman function. On the one hand, one can show that the Euclidean Wightman function in the limit $m\to 0$ contains a constant divergent contribution~\cite{Sasaki:1994yt}: \al{ \lim_{m\rightarrow 0}\ave{0|\phi (x)\phi (x')|0}=\frac{3H^4}{8\pi^2 m^2} + \mathcal{O}(m^0) \,.\label{scalar diverge} } On the other hand, the contribution of all subcurvature modes (with $p^2>0$) to the Wightman function remains finite in the massless limit. This means that the set of all subcurvature modes does not form a complete set of mode functions and that something is missing. It is the supercurvature mode that is missing here. The contribution from the supercurvature mode (with $p^2=-1$) correctly reproduces the divergent behavior of the Euclidean Wightman function shown in eq.~\eqref{scalar diverge}. From the above observation on the Wightman function of a scalar field, it is expected that the massless limit serves as a useful consistency check also for vector fields. We thus consider the massless limit of the massive vector field and see whether the correct behavior of the Wightman function can be reproduced by the contributions from subcurvature modes only, without need for any supercurvature modes. For the system of the U(1) gauge field considered in the present paper, the massless limit is provided by the decoupling limit, i.e. the $e\to 0$ limit. As we shall see in the following, the decoupling limit appears to be rather subtle. On one hand, we have shown that the massive vector field does not have a supercurvature mode for any non-zero value of $e$. On the other hand, for $e=0$, i.e. if the (would-be) Higgs field is neutral under the U(1), the system consisting of the U(1) gauge field and the phase of the complex (would-be) Higgs field is reduced to a massless vector field plus a massless scalar field.
Since a massless scalar field is known to have a supercurvature mode, there appears to be a discontinuity in the $e\to 0$ limit. We need to reconcile these two apparently contradictory results. In this section we first reconcile the apparent contradiction between the $e\to 0$ limit of the $e\ne 0$ theory and the $e=0$ theory (subsection \ref{sec:exact massless}). We then investigate the Wightman function in the decoupling ($e\to 0$) limit as a consistency check (subsection \ref{sec: massive gauge}). \subsection{Neutral ($e=0$) case} \label{sec:exact massless} Before considering the decoupling ($e\to 0$) limit, let us investigate the neutral ($e=0$) case. In this subsection, we focus only on the scalar sector since the apparent contradiction explained above is in this sector. One can begin with the action eq.~\eqref{eq:relevant action} with $e=0$ to derive the equation of motion of the scalar degree of freedom. Adopting the gauge condition $A_\eta =0$ for convenience, we obtain the action for the phase of the (would-be) Higgs field as \al{ &S^{({\rm e})} \supset\frac{1}{2}\varphi^2\sum_{\ell m}\int\dd r\dd\eta a^2\cosh^2 r \biggl\{ \left(\pd_r\Theta^{\ell m}\right)^2 -\left(\pd_\eta\Theta^{\ell m}\right)^2 -\frac{\ell (\ell +1)}{\cosh^2 r}\left( \Theta^{\ell m}\right)^2 \biggr\} \,, } where $\Theta^{\ell m}$ is the coefficient of the spherical harmonic expansion of the phase of the (would-be) Higgs field $\Theta$. Expanding $\Theta^{\ell m}$ in terms of $f^{p\ell}$ (see eqs.~\eqref{eq:f eq} and \eqref{eq:f^pl}) as $\Theta^{\ell m}(\eta ,r)=\sum_p\Theta_p (\eta )f^{p\ell}(r)$\,, we obtain the equation for $\Theta_p$ as \al{ \biggl[\frac{1}{a^2}\frac{\dd}{\dd\eta}\left( a^2\frac{\dd}{\dd\eta}\right) +(p^2+1)\biggr]\Theta_p =0 \,. } Since this is the same as the equation of motion for the massless scalar field, one might think that there should be a supercurvature mode at $p^2=-1$\,, according to \cite{Sasaki:1994yt}.
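Before addressing this point, note that for $p^2=-1$ the equation for $\Theta_p$ can be integrated directly (a short check which we spell out for completeness):

```latex
% the would-be supercurvature mode of the neutral phase
\al{
\frac{\dd}{\dd\eta}\left( a^2\frac{\dd\Theta_p}{\dd\eta}\right) =0
\ \ \Rightarrow\ \
\Theta_p=c_1+c_2\int^\eta\frac{\dd\eta'}{a^2(\eta' )}
=c_1+c_2H^2\left(\frac{\eta}{2}+\frac{\sinh (2\eta )}{4}\right)
\,,
}
```

where $c_1$ and $c_2$ are constants. The second solution grows as $e^{2|\eta |}$ and is excluded by the boundary conditions, so only the constant solution remains.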
However, the solution to this equation with $p^2=-1$ is trivial, namely $\Theta_p=\text{const}$\,, in the entire region-C and turns out to be a gauge degree of freedom. A key observation here is that in the neutral ($e=0$) case, the U(1) gauge symmetry manifests itself as a global shift symmetry: $\Theta (x)\rightarrow \Theta (x)+\lambda$, where $\lambda$ is a constant. Note that this shift symmetry must be respected by any interactions involving $\Theta$. In particular, observers or detectors interacting with $\Theta$ can probe derivatives $\partial_{\mu}\Theta$ but cannot probe the value of $\Theta$ itself even in principle. Hence, the constant solution with $p^2=-1$ is not within the physical spectrum of the theory. In other words, the $p^2=-1$ solution does not affect any correlation functions invariant under the global shift symmetry since $\Theta$ enters invariant quantities only through its derivatives. Therefore it is concluded that there is no supercurvature mode in the neutral ($e=0$) case. This reconciles the apparent contradiction between the $e=0$ theory and the $e\to 0$ limit of the $e\ne 0$ theory. \subsection{Decoupling ($e\to 0$) limit} \label{sec: massive gauge} Let us now explore the massless limit of the massive U(1) gauge field, i.e. the decoupling limit, $e\to 0$, of the theory with $e\ne 0$. In this subsection we compute the Wightman function of the U(1) gauge field in the decoupling limit and explicitly verify that the correct behavior of the Euclidean Wightman function is reproduced by the contributions from subcurvature modes only, without need for any supercurvature modes. The Wightman function for the massive U(1) gauge field in de Sitter spacetime has previously been studied in the literature~\cite{Frob:2013qsa,Allen:1985wd, Tsamis:2006gj, Youssef:2010dw}~\footnote{ In \cite{Frob:2013qsa}, the authors found that the Wightman function of a massive gauge field depends on how the gauge is fixed.
Our gauge choice corresponds to what they call the Proca theory, and in this gauge the massless limit of the Wightman function has a simple form. It seems that the behavior of the Wightman function of a massive gauge field in de Sitter spacetime is not yet fully understood~\cite{Allen:1985wd, Tsamis:2006gj, Youssef:2010dw}. }. According to \cite{Frob:2013qsa}, the Wightman function of the massive U(1) gauge field in the decoupling ($e\to 0$) limit can be written in terms of the scalar propagator as \al{ \lim_{m_A\rightarrow 0}\ave{0|A_\mu (x)A_{\mu'} (x')|0} &=\lim_{m_A\rightarrow 0}\frac{1}{m_A^2}\pd_\mu\pd_{\mu'}\Delta_{M^2} (Z(x,x'))+\mcO (m_A^0) \,,\label{eq:gauge Wightman functon} } where $Z$ denotes the de Sitter invariant distance between two points, $x$ and $x'$\,, in de Sitter spacetime, and $\Delta_{M^2}(Z)$ denotes the propagator of the scalar field with the mass $M=H\sqrt{9/4-(\nu'+3/2)^2}$, which is defined by \al{ \Delta_{M^2}(Z) =\frac{H^2}{(4\pi )^2}\frac{\Gamma (3+\nu' )\Gamma (-\nu' )}{\Gamma (2)} {}_2F_1\left( 3+\nu' ,-\nu' ;2;\frac{1+Z}{2}\right) \,, } where ${}_2F_1(a,b;c;z)$ is the hypergeometric function. It should be noted that the propagator of the massive scalar field in the massless (decoupling) limit is divergent as seen in eq.~\eqref{scalar diverge}. However, the divergent contribution is $Z$-independent and thus drops out when derivatives act on the propagator as in \eqref{eq:gauge Wightman functon}. We then take the massless limit and write down the explicit expression for the divergent contribution to the Wightman function for the U(1) gauge field as \al{ \lim_{m_A\rightarrow 0}\ave{0|A_\mu (x)A_{\mu'} (x')|0} =\frac{H^2}{(4\pi )^2m_A^2} \left[ \frac{Z-3}{(Z-1)^3}(\partial_\mu Z)(\partial_{\mu'}Z)-\frac{Z-2}{(Z-1)^2} (\partial_\mu \partial_{\mu'}Z) \right] \,.\label{vector diverge} } Hereafter we neglect higher-order contributions of order $\mcO (m_A^0)$ for simplicity.
In order to compare the leading-order Wightman function in the decoupling limit with our result derived in the present paper, we rewrite eq.~\eqref{vector diverge} in terms of the coordinates in the open chart of de Sitter spacetime, i.e. the coordinates in the region-J $(\eta_\rmJ ,r_\rmJ ,\Omega )$\,. The invariant distance $Z$ is then reduced to \al{ Z(x_\rmJ ,x_\rmJ' ) =\frac{\cosh \eta_\rmJ \cosh \eta_\rmJ'-\cosh\zeta}{\sinh\eta_\rmJ\sinh\eta_\rmJ'} \,,\label{eq:invariant distance in J} } where $\cosh\zeta \equiv \cosh r_\rmJ\cosh r_\rmJ' -\sinh r_\rmJ\sinh r_\rmJ' \cos\Xi$ with $\cos\Xi$ being the direction cosine between $\Omega$ and $\Omega'$. Substituting the invariant distance eq.~\eqref{eq:invariant distance in J} into eq.~\eqref{vector diverge}, we can easily rewrite each component of the Wightman function of the U(1) gauge field in terms of the coordinates in the region-J. We also calculate the Wightman function by using the explicit expressions for the mode functions derived in section \ref{sec: Mode functions}. As an example, let us focus on the $(\eta ,\eta' )$-component of the Wightman function of the U(1) gauge field. Taking the massless (decoupling) limit ($\nu'\rightarrow 0$) and the analytic continuation to the region-J, we can rewrite the two independent solutions for the $\eta$-component of the U(1) gauge field \eqref{eq:A_eta variable} as \al{ &A_{\eta ,+p\ell m}(\eta_\rmJ ,r_\rmJ ,\Omega ) =\frac{1}{2m_A}\sqrt{\frac{p^2+1}{p\sinh (\pi p)}}\frac{1}{a_\rmJ (\eta_\rmJ )}\,e^{ip\eta_\rmJ -\pi p/2}f^{p\ell}(r_\rmJ )Y_{\ell m}(\Omega ) \,,\\ &A_{\eta ,-p\ell m}(\eta_\rmJ ,r_\rmJ ,\Omega ) =\frac{1}{2m_A}\sqrt{\frac{p^2+1}{p\sinh (\pi p)}}\frac{\Gamma (1-ip)}{\Gamma (1+ip)} \frac{1}{a_\rmJ (\eta_\rmJ )}\,e^{-ip\eta_\rmJ +\pi p/2}f^{p\ell}(r_\rmJ )Y_{\ell m}(\Omega ) \,,\label{Apm} } where $a_\rmJ (\eta_\rmJ )=-1/H\sinh\eta_\rmJ$\,, and we have used the following relation: $P_0^{ip}(-\tanh\eta)=e^{ip\eta_R-\pi p/2}/\Gamma (1-ip)$\,.
We can then calculate the $(\eta ,\eta' )$-component of the Wightman function for the U(1) gauge field as \al{ &\lim_{m_A\rightarrow 0}\ave{0|A_\eta (x)A_{\eta'}(x')|0} =\lim_{m_A\rightarrow 0}\sum_{\sigma =\pm}\sum_{p\ell m} A_{\eta ,\sigma p\ell m}(\eta_\rmJ ,r_\rmJ ,\Omega ) \overline{A_{\eta ,\sigma p\ell m}(\eta_\rmJ' ,r_\rmJ' ,\Omega' )} \notag\\ &\quad =\frac{H^2\sinh\eta_\rmJ\sinh\eta_\rmJ'}{4\pi^2 m_A^2} \int_0^\infty\dd p\frac{(p^2+1)\sin (p\zeta )}{\sinh\zeta} \biggl\{ \frac{1}{1-e^{-2\pi p}}e^{-ip(\eta_\rmJ -\eta_\rmJ' )} +\frac{e^{-2\pi p}}{1-e^{-2\pi p}}e^{+ip(\eta_\rmJ -\eta_\rmJ' )} \biggr\} \notag\\ &\quad =\frac{H^2}{8\pi^2 m_A^2} \sinh\eta_\rmJ\sinh\eta_\rmJ' \frac{2+\cosh^2\zeta -3\cosh\zeta\cosh (\eta_\rmJ -\eta_\rmJ' )}{(\cosh\zeta -\cosh (\eta_\rmJ -\eta_\rmJ' ))^3} \,,\label{eq:eta eta Wightman function} } where we have used the completeness relation for the scalar harmonics, $Y^{p\ell m}(r_\rmJ ,\Omega )\equiv f^{p\ell}(r_\rmJ )Y_{\ell m}(\Omega )$\,, which is given by \al{ &\sum_{\ell m}Y^{p\ell m}(r_\rmJ ,\Omega )\overline{Y^{p\ell m}(r_\rmJ' ,\Omega' )} =\frac{p\sin (p\zeta )}{2\pi^2\sinh\zeta} \,. } One can easily compare the resultant Wightman function eq.~\eqref{eq:eta eta Wightman function} with the one obtained by substituting eq.~\eqref{eq:invariant distance in J} into \eqref{vector diverge} and find that these leading-order expressions exactly coincide. This confirms that the $(\eta ,\eta' )$-component of the Wightman function of the U(1) gauge field in the decoupling limit is correctly reproduced by the contribution from subcurvature modes only, without need for supercurvature modes. Following the same steps as for the $(\eta ,\eta')$-component, we can verify the consistency between eq.~\eqref{vector diverge} and our results for the other components. \section{Summary} \label{sec:summary} In this paper, we have investigated the Euclidean vacuum mode functions of a massive vector field in the spatially open chart of de Sitter spacetime.
In order to clarify whether supercurvature modes exist, we have studied the U(1) gauge field with gauge and conformal symmetries spontaneously broken through the Higgs mechanism. We have found that there is no supercurvature mode in either the even or the odd parity sector. This implies that it is difficult to generate a sufficient amount of magnetic fields on large scales through superadiabatic growth within the one-bubble open inflation scenario even if the Higgs mechanism spontaneously breaks gauge and conformal invariances. Utilizing the obtained mode functions, we have explicitly computed the Wightman function of the U(1) gauge field in terms of the coordinates in the open chart of de Sitter spacetime, and have compared it with that obtained by other methods. It was found that the leading-order Wightman function in the decoupling ($e\to 0$) limit is correctly reproduced by the sum of the products of the subcurvature modes without need for introducing supercurvature modes. As a consequence, we have verified that the supercurvature mode is not needed as a part of a complete set of mode functions of the U(1) gauge field in the decoupling limit. An interesting observation made in subsection \ref{sec:exact massless} is that the existence/absence of supercurvature modes can be closely related to symmetries of the theory. While a massive scalar field with a sufficiently light mass has a supercurvature mode~\cite{Sasaki:1994yt} that survives the massless limit, a theory of a scalar field with shift symmetry does not allow a physical supercurvature mode. This is because the would-be supercurvature mode does not show up in any correlation functions invariant under the shift symmetry. Furthermore, a vector field with a U(1) gauge symmetry does not have a supercurvature mode even when the vector field is given a mass by the Higgs mechanism and thus absorbs a light scalar degree of freedom (the phase of the complex Higgs field).
It may be interesting to investigate supercurvature modes of the vector field when we take metric perturbations into account, since gravity possesses diffeomorphism symmetry, although in the present paper we take account of the effect of gravity only through a curved background. The evaluation of the metric perturbations is beyond the scope of the present paper and we hope to come back to this issue in a future publication. In this paper, we have made several simplifying assumptions: (i) the universe during the inflationary era after quantum tunneling is assumed to be well approximated by an exact de Sitter spacetime in the open chart; (ii) the breaking of the gauge and conformal symmetries and the mass of the vector field are assumed to originate from the standard Higgs mechanism; (iii) the mass squared $V''_{\Phi}(|\Phi|)$ around the minimum of the Higgs potential is assumed to be large enough so that the mass of the vector field can be considered constant during inflation. Relaxing some of these assumptions would in principle affect details of our results, although the generic features that we have found are expected to remain the same. Furthermore, we have neglected the interactions between the tunneling field and the other fields such as the Higgs field. If we take into account such interactions, then spatially localized, bubble-shaped features may appear~\cite{Sugimura:2012kr}. We hope to come back to these issues in the near future. \acknowledgments We would like to thank M.~Sasaki and T.~Tanaka for useful discussions. D.Y. and T.F. are supported by Grant-in-Aid for JSPS Fellows Nos.~259800 and 248160. S.M. is supported by Grant-in-Aid for Scientific Research 24540256 and 21111006. This work was supported by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan.
\section{Introduction} \label{I} Given the importance of transportation networks in different types of natural and man-made structures, the relevance of their stability against disturbances, be they individual failures or complete breakdowns, is obvious. In turn, one may single out two main ingredients which determine this stability: ({\em i}) dynamical features of transport processes that take place on such networks, i.e. their fluctuating {\em load}, and ({\em ii}) structural features of the networks themselves, i.e. their {\em topology} \cite{complex_networks}. Whereas a comprehensive treatment of transportation network stability has to deal with both of these factors, the complexity of the problem often calls for a separate account and analysis of each factor. Moreover, network structure stability has recently become the subject of a separate field of research within complex network science, where the {\em attack vulnerability} of a complex network is treated by means of a combination of tools of random graph theory \cite{complex_networks} and those of percolation theory \cite{percolation} and statistical physics \cite{network_attacks}. The very notion of attack vulnerability of a complex network originates from earlier studies of computer networks and reflects the decrease of network performance caused by the removal or dysfunction of either their nodes or links (or both) \cite{Albert00,Tu00}. The study of network vulnerability against failure or attack has conceptually much in common with studies of percolation and has gained a lot from concepts and insights of percolation theory. However, standard percolation theory \cite{percolation} deals with homogeneous lattices, whereas the non-homogeneity of complex networks gives rise to a variety of phenomena which are particular to these structures.
To give an example, the empirical analysis of numerous scale-free real-world networks (the www and the internet \cite{Albert00,Tu00}, metabolic \cite{Jeong00}, food web \cite{Sole01}, protein \cite{Jeong01} networks) has revealed that these networks display an unexpectedly high degree of robustness under random failure. However, if the scenario is changed towards ``targeted'' attacks, the same networks may appear to be especially vulnerable \cite{Cohen00,Callaway00}. It is the non-homogeneity of networks that allows one to choose different attack scenarios, i.e. to remove network links or nodes not at random, but following specific sequences prepared according to characteristics determining their `importance'. For vertex-targeted attacks, the sequence may be ordered by decreasing vertex degree \cite{Barabasi99,Broder00} or betweenness centrality \cite{Holme02} for the unperturbed network, and the attack successively removes vertices according to this original sequence. One may further extend the above scenarios by recalculating the characteristics of the remaining vertices after each removal step and reordering the lists \cite{Albert00}. Previous analysis has shown that attacks according to recalculated lists often turn out to be more effective \cite{Girvan02,Holme02}. So far, the prevailing analytic results on complex network stability have been obtained for idealized models of infinite networks. In particular, important insight into network structure stability may be gained assuming that a complex network may perform its function as long as it possesses a giant connected component (GCC), i.e. a connected subnetwork which in the limit of an infinite network contains a finite fraction of the network. Under this assumption, the network robustness may be judged using the Molloy-Reed criterion, which has been formulated for essentially treelike networks with a given node degree distribution $P(k)$ but otherwise random linking between vertices.
The criterion for a GCC to be present in such networks is \cite{Cohen00,Callaway00,Molloy_Reed}: \begin{equation}\label{1} \langle k(k-2) \rangle \geq 0, \end{equation} where $\langle \dots \rangle$ means the ensemble average over networks with given $P(k)$. Defining the Molloy-Reed parameter as the ratio of the moments of the degree distribution \begin{equation}\label{2} \kappa^{(k)}= \langle k^2 \rangle/ \langle k \rangle, \end{equation} one may rewrite (\ref{1}) as: \begin{equation}\label{3} \kappa^{(k)}\geq 2. \end{equation} For an uncorrelated network the parameter $\kappa$ can be equally represented by the ratio of the mean number $z_2$ of second nearest neighbors to the mean number $z_1$ of nearest neighbors (the latter being by definition equal to the mean node degree $\langle k \rangle$): \begin{equation}\label{4} \kappa^{(z)}=z_2/z_1. \end{equation} In terms of $\kappa^{(z)}$, condition (\ref{3}) can be rewritten as: \begin{equation}\label{5} \kappa^{(z)}\geq 1. \end{equation} For obvious reasons, relations (\ref{3}), (\ref{5}) cannot be directly applied to real-world networks, which are usually correlated and of finite size. Therefore, an important issue which arises in the analysis of attack vulnerability of real-world networks is the choice of the observables which may be used to measure network stability. Since the GCC is well-defined only for an infinite network, often the size of the largest network component $S$ is used. Alternatively, one can estimate network stability from the average shortest path lengths or their inverse values \cite{Holme02,berche09}. Recently, a unique measure for robustness was introduced \cite{schneider11a,schneider11b} and has been used to devise a method to restructure a network and to make it more robust against a malicious attack.
Observing the normalised size $S(c)$ of the largest component as a function of the share $c$ of removed vertices or links, a measure of stability is provided by the area $A$ under the curve for the interval $c\in[0,1]$. We will normalise this value as \begin{equation}\label{5a} A = 100\int_0^1 S(c) {\rm d}c. \end{equation} Here, the size of the largest component is normalised such that $S(0)=1$. In this respect, the measure captures the network reaction over the whole attack sequence. The goal of this paper is to elaborate criteria which allow one to give {\em a priori} information on the attack stability of real-world correlated networks of finite size, and to check how these criteria correspond to the analytic results available for infinite uncorrelated networks. As a case study, we consider public transportation networks (PTN) of several major cities of the world. This paper continues studies initiated in \cite{berche09}, where we have considered PTN attack vulnerability. The results presented below complement Ref. \cite{berche09} by describing the effects of link-targeted attacks as well as by applying the above-mentioned measure for network robustness \cite{schneider11a,schneider11b} to evaluate attack efficiency. The remainder of the paper is organised as follows. In the next section we briefly describe our PTN database, the attack scenarios, and the observables used to describe different features of the PTNs considered here. Results for the transportation network stability against node-targeted and link-targeted attacks will be given in sections \ref{III} and \ref{IV}, respectively. In section \ref{V} we present some observed correlations between PTN characteristics measured prior to attack and the PTN stability during attacks following different scenarios. Discussion and outlook are presented in section \ref{VI}.
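In practice $S(c)$ is sampled at discrete removal steps, so the integral in Eq. (\ref{5a}) is evaluated numerically. A minimal sketch (trapezoidal rule over equally spaced samples of $S(c)$; the sample curve is illustrative):

```python
def robustness_A(s_values):
    """A = 100 * integral_0^1 S(c) dc, Eq. (5a), with S sampled at equal
    steps of c from 0 to 1 (trapezoidal rule)."""
    n = len(s_values) - 1   # number of intervals between samples
    h = 1.0 / n
    return 100.0 * h * (0.5 * s_values[0] + sum(s_values[1:-1]) + 0.5 * s_values[-1])

# A network whose largest component shrinks linearly, S(c) = 1 - c, gives A = 50.
s_linear = [1.0 - i / 10 for i in range(11)]
print(robustness_A(s_linear))
```

A perfectly robust (hypothetical) network with $S(c)=1$ throughout would give the maximal value $A=100$, which motivates the normalisation factor in Eq. (\ref{5a}).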
\section{Database and attack scenario description} \label{II} The systematic analysis of PTN using tools of complex network theory dates back to the early 2000s \cite{PTN1} and continues to this day \cite{berche09,Ferber09a,Ferber09b,Berche10,holovatch11,PTN2}. It has been revealed that these networks share common statistical properties: they appear to be strongly correlated small-world structures with high values of clustering coefficients and comparatively low mean shortest path values. The power-law node degree distributions observed for many PTN give strong evidence of correlations within these networks. \begin{table} \caption{Some characteristics of the PTNs analyzed in this study. Types of transport taken into account: {\underline B}us, {\underline E}lectric trolleybus, {\underline F}erry, {\underline S}ubway, {\underline T}ram, {\underline U}rban train; $N$: number of stations; $R$: number of routes. The following characteristics are given: $\langle k \rangle$ (mean node degree); $\ell^{\rm max}$, $\langle \ell \rangle$ (maximal and mean shortest path length); $C$ (ratio of the mean clustering coefficient to that of a classical random graph of equal size); $\kappa^{(z)}$, $\kappa^{(k)}$ (cf. Eqs. (\ref{4}), (\ref{2})); $\gamma$ (exponent of the power-law fit (\ref{6}); bracketed values indicate less reliable fits, see the text). More data is given in \cite{berche09,holovatch11}.
\label{tab1}}
\begin{center}
\tabcolsep1.2mm
{\small
\begin{tabular}{lrrrrrrrrrr}
\toprule
City & Type & $N$ & $R$ & $\langle k \rangle$ & $\ell^{\rm max}$ & $\langle \ell \rangle$ & $C$ & $\kappa^{(z)}$ & $\kappa^{(k)}$ & $\gamma$ \\
\colrule
Berlin & BSTU & 2992 & 211 & 2.58 & 68 & 18.5 & 52.8 & 1.96 & 3.16 & (4.30) \\
Dallas & B & 5366 & 117 & 2.18 & 156 & 52.0 & 55.0 & 1.28 & 2.35 & 5.49 \\
D\"usseldorf & BST & 1494 & 124 & 2.57 & 48 & 12.5 & 24.4 & 1.96 & 3.16 & 3.76 \\
Hamburg & BFSTU & 8084 & 708 & 2.65 & 156 & 39.7 & 254.7 & 1.85 & 3.26 & (4.74) \\
Hong Kong & B & 2024 & 321 & 3.59 & 60 & 11.0 & 60.3 & 3.24 & 5.34 & (2.99) \\
Istanbul & BST & 4043 & 414 & 2.30 & 131 & 29.7 & 41.0 & 1.54 & 2.69 & 4.04 \\
London & BST & 10937 & 922 & 2.60 & 107 & 26.5 & 320.6 & 1.87 & 3.22 & 4.48 \\
Moscow & BEST & 3569 & 679 & 3.32 & 27 & 7.0 & 127.4 & 6.25 & 7.91 & (3.22) \\
Paris & BS & 3728 & 251 & 3.73 & 28 & 6.4 & 78.5 & 5.32 & 6.93 & 2.62 \\
Rome & BT & 3961 & 681 & 2.95 & 87 & 26.4 & 163.4 & 2.02 & 3.67 & (3.95) \\
S\~ao Paulo & B & 7215 & 997 & 3.21 & 33 & 10.3 & 268.0 & 4.17 & 5.95 & 2.72 \\
Sydney & B & 1978 & 596 & 3.33 & 34 & 12.3 & 82.9 & 2.54 & 4.37 & (4.03) \\
Taipei & B & 5311 & 389 & 3.12 & 74 & 20.9 & 186.2 & 2.42 & 4.02 & (3.74) \\
\botrule
\end{tabular}
}
\end{center}
\end{table}
In this work we analyse a selection of PTNs drawing from a database compiled by the present authors earlier and described in Refs. \cite{berche09,Ferber09a,Ferber09b,Berche10,holovatch11}. The selection of these PTNs is motivated by the aim of collecting network samples from cities of different geographical, cultural, and economic backgrounds. Some characteristics of these networks are given in table \ref{tab1}. For each selected city the available information on all different types of public transportation is included. More data as well as details about the database are given in \cite{berche09,holovatch11}.
As one can see from the table, the typical number of routes is several hundred while the typical number of stops (i.e. network nodes) is several thousand, with a mean node degree of $\langle k \rangle \sim 3$. These network sizes should be contrasted with the comparatively low values of the mean and maximal shortest path lengths. As mentioned above, the node degree distribution $P(k)$ for some of the PTN has been observed \cite{berche09,holovatch11} to display a power-law decay \begin{equation}\label{6} P(k) \sim k^{-\gamma}, \end{equation} for large values of the node degree $k$. Results for the corresponding exponent values are given in the last column of table \ref{tab1}. If the distribution $P(k)$ is better fitted by an exponential decay, the exponent corresponding to a power-law fit is given in brackets (this is the case for seven out of thirteen listed PTNs). As a measure of local network correlation we give the mean clustering coefficient of each PTN normalised by the value $C_{ER}$ for an Erd\H{o}s-R\'enyi random graph with the same numbers of nodes $N$ and links $M$, $C_{ER}=2M/N^2$. Recall that the clustering coefficient $C(i)$ of a given node $(i)$ is the ratio of the number of links $E_i$ between the $k_i$ nearest neighbours of node $(i)$ and the maximal possible number of mutual links between these: \begin{equation} \label{7} C(i) =\frac{2E_i}{k_i(k_i-1)}. \end{equation} The values of $C$ quoted in table \ref{tab1} give convincing evidence for the presence of strong local correlations. In our earlier work on the PTN resilience to attacks of different types we introduced different scenarios to remove network nodes or links, to model random failure or attack. However, the focus of that work was primarily on node-targeted attacks.
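Eq. (\ref{7}) can be evaluated directly on an adjacency structure. A minimal sketch in plain Python (the toy adjacency list is illustrative, not PTN data):

```python
def clustering(adj, i):
    """C(i) = 2 E_i / (k_i (k_i - 1)), Eq. (7): the share of realised links
    among the k_i nearest neighbours of node i."""
    nbrs = adj[i]
    k = len(nbrs)
    if k < 2:
        return 0.0              # C(i) is conventionally zero for k_i < 2
    e = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
    return 2.0 * e / (k * (k - 1))

# Toy graph: triangle 0-1-2 with a pendant node 3 attached to node 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(clustering(adj, 0))  # one link among three neighbours: 2/(3*2) = 1/3
print(clustering(adj, 1))  # neighbours 0 and 2 are linked: 1.0
```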
In previous work \cite{berche09,Ferber09b,Berche10}, the present authors have shown that for PTNs the most effective attack scenarios correspond to removing nodes $(i)$ with either the highest degree $k_i$ or the highest betweenness centrality value ${\cal C}_B(i)$. For a given node $(i)$, the latter quantity is defined as: \begin{equation} \label{8} {\cal C}_B(i)= \sum_{j\neq i \neq k} \frac{\sigma_{jk}(i)}{\sigma_{jk}}, \end{equation} where $\sigma_{jk}$ is the number of shortest paths between nodes $j$ and $k$ and $\sigma_{jk}(i)$ is the number of these paths that go via node $(i)$. The results presented below show the outcome of node- and link-targeted attacks, where either nodes or links are removed following specific sequences corresponding to so-called scenarios. For node-targeted attacks we concentrate on five different scenarios, selecting the nodes: (i) at random; (ii) according to their initial degree (prior to the attack); (iii) according to their degree recalculated after nodes of higher degree have been removed; (iv) according to their initial betweenness centrality; and (v) according to their recalculated betweenness centrality. The same five scenarios are implemented for the link-targeted attacks. However, in this case one has to generalize the notions of node degree and betweenness centrality to links. We will define the degree $k^{(l)}$ of the link between nodes $i$ and $j$ with degrees $k_i$ and $k_j$ as: \begin{equation}\label{9} k^{(l)}_{ij} = k_i + k_j - 2. \end{equation} For the simple graph with two vertices and a single link, the link degree will be zero, $k^{(l)}=0$, while for any link in a connected graph with more than two vertices the link degree will be at least one, $k^{(l)} \geqslant 1$. The link betweenness centrality ${\cal C}_B^{(l)}(i)$ measures the importance of a link $i$ with respect to the connectivity between the nodes of the network.
The link betweenness centrality is defined as \begin{equation}\label{10} {\cal C}_{B}^{(l)}(i)= \sum_{s\neq t\in \cal{N}} \frac{\sigma_{st}(i)}{\sigma_{st}}, \end{equation} where $\sigma_{st}$ is the number of shortest paths between two nodes $s,t\in \cal{N}$ belonging to the network $\cal{N}$, and $\sigma_{st}(i)$ is the number of shortest paths between nodes $s$ and $t$ that go through the link $i$. The two subsequent sections demonstrate how PTNs react to attacks following the above-described scenarios when these attacks target PTN nodes (section \ref{III}) and links (section \ref{IV}). To quantify the outcome of these attacks we monitor the evolution of the normalised size $S(c)$ of the largest network component as a function of the share $c$ (with $0\leq c \leq 1$) of removed links or nodes: \begin{equation}\label{11} S(c)=N(c)/N(0), \end{equation} where $N(0)$ is the initial number of nodes of the largest connected component while $N(c)$ is the corresponding remaining number of nodes in that component after a share $c$ of nodes or links has been removed. Obviously, any network of non-zero size will have a largest connected component. \section{Node-targeted attacks} \label{III} The outcome of attacks targeting PTN nodes has been reported by the present authors in Refs. \cite{berche09,Ferber09b,Berche10}. In particular, results of sixteen different attack scenarios have been presented and the most effective ones were singled out. Here, we recall the results of the five scenarios laid out in the previous section. In particular, this will allow us to compare them with the corresponding link-targeted attack scenarios (section \ref{IV}) and, by analysing both, to elaborate criteria for network stability (section \ref{V}).
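The largest-component size entering Eq. (\ref{11}) is obtained by a standard component search. A minimal sketch (breadth-first search over an adjacency dict; the toy graph is illustrative):

```python
from collections import deque

def largest_component_size(adj):
    """Number of nodes in the largest connected component (breadth-first search)."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

# Two components: path 0-1-2 and edge 3-4; S(c) of Eq. (11) is N(c)/N(0).
adj = {0: {1}, 1: {0, 2}, 2: {1}, 3: {4}, 4: {3}}
print(largest_component_size(adj))  # 3
```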
\begin{figure}[th] \centerline{\includegraphics[width=5.5cm]{fig1a} \hspace{3em} \includegraphics[width=5.5cm]{fig1b}} \centerline{{\bf a} \hspace{21em} {\bf b}} \vspace*{8pt} \caption{Size of the largest cluster $S$ as a function of the fraction $c$ of removed nodes, normalized by its value at $c=0$. {\bf a}. Random node-targeted scenario. {\bf b}. Recalculated node-degree attack scenario. \label{fig1}} \end{figure} In figure \ref{fig1} we show the dependence of the normalized size $S(c)$ (\ref{11}) of the largest connected cluster as a function of the share $c$ of removed PTN nodes for two attack scenarios: in the first one, Fig. \ref{fig1} {\bf a}, the PTN nodes are removed at random; in the second one, Fig. \ref{fig1} {\bf b}, the nodes are removed according to a list of the nodes ordered by their node degree $k$, recalculated after each step comprising the removal of 1\% of the initial nodes. In the following, we will call this scenario the 'recalculated node degree scenario'. As noted, instead of recalculating the PTN characteristics after the removal of each individual node, the nodes are removed in groups of 1\% of the initial nodes and the PTN characteristics are recalculated after the removal of each such group. The random scenario of Fig. \ref{fig1} {\bf a} presents results of a single instance of an attack; we have verified, however, that due to the large size of the PTNs a certain 'self-averaging' effect takes place: averaging $S(c)$ over many instances of random attack sequences does not significantly modify the picture for $S(c)$ presented in Fig. \ref{fig1} {\bf a}. As one may infer from the figures, the individual PTNs may react to the attacks in very different ways, ranging from a gradual decrease of $S(c)$ as function of $c$ to sudden jumps at certain values of $c$. A further striking feature of the plots visualising these scenarios is the qualitative differences seen between individual PTNs as well as between different attack scenarios.
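The recalculated node degree scenario can be sketched as follows, assuming an undirected graph stored as an adjacency dict (for brevity, one node is removed per step instead of the 1\% batches used in the paper):

```python
from collections import deque

def lcc_size(adj):
    """Largest connected component size via breadth-first search."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def recalculated_degree_attack(adj):
    """Remove the current highest-degree node, one at a time;
    return the curve S = N(c)/N(0) after each removal."""
    adj = {u: set(vs) for u, vs in adj.items()}       # work on a copy
    n0 = lcc_size(adj)
    s_curve = [1.0]
    while len(adj) > 1:
        target = max(adj, key=lambda u: len(adj[u]))  # recalculated degree
        for v in adj.pop(target):
            adj[v].discard(target)
        s_curve.append(lcc_size(adj) / n0)
    return s_curve

# Star graph: removing the hub immediately shatters the network.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(recalculated_degree_attack(star))  # [1.0, 0.2, 0.2, 0.2, 0.2]
```

The star graph illustrates the sudden jumps visible in Fig. \ref{fig1} {\bf b}: a single hub removal can collapse $S(c)$ in one step.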
\begin{figure}[th] \centerline{\includegraphics[width=5.5cm]{fig2a} \hspace{3em} \includegraphics[width=5.5cm]{fig2b}} \centerline{{\bf a} \hspace{21em} {\bf b}} \vspace*{8pt} \caption{The normalised largest component size $S(c)$ of the PTN as a function of the fraction $c$ of removed nodes for different attack scenarios. Each curve corresponds to a different scenario defined by a corresponding sequence of nodes. RV: random vertex sequence; $k$ and $k^i$: sequences ordered by recalculated and initial degrees; ${\cal C}_{B}$ and ${\cal C}^i_{B}$: sequences ordered by recalculated and initial betweenness. {\bf a}. Five scenarios for the PTN of Dallas. {\bf b}. Five scenarios for the PTN of Paris. \label{fig2}} \end{figure} To further illustrate the reaction of a given PTN to attacks of different types, we present in Fig. \ref{fig2} the changes in the largest component size of the PTNs of Dallas (Fig. \ref{fig2} {\bf a}) and Paris (Fig. \ref{fig2} {\bf b}) for attacks of five different scenarios, as described in the previous section \cite{Ferber09a}. For the case of the Paris PTN we observe that for small values of the share $c$ of removed nodes ($c< 7\% $) these scenarios have a practically indistinguishable impact on $S(c)$, and $S(c)$ is a linear function of $c$. As $c$ increases, deviations from the linear behavior arise and the impact of different scenarios starts to vary. In particular, there appear differences between the roles played by the nodes with the highest value of $k$ and the highest betweenness centrality ${\cal C}_{B}$. Whereas the first quantity is a local one, i.e. it is calculated from properties of the immediate environment of each node, the second one is global. Moreover, the $k$-based strategy aims to remove a maximal number of edges whereas the ${\cal C}_{B}$-based strategy aims to cut as many shortest paths as possible.
In addition, there arise differences between the 'initial' and 'recalculated' scenarios, suggesting that the network structure changes as important nodes are removed. Similar behavior of $S(c)$ is observed for all PTNs included in this study, while the order of effectiveness of different attack scenarios may differ between PTNs. \section{Link-targeted attacks} \label{IV} \begin{figure}[th] \centerline{\includegraphics[width=5.5cm]{fig3a} \hspace{3em} \includegraphics[width=5.5cm]{fig3b}} \centerline{{\bf a} \hspace{21em} {\bf b}} \vspace*{8pt} \caption{The normalised size $S(c)$ of the largest cluster as a function of the share of removed links for the PTNs of 13 cities. {\bf a}. Random link-targeted scenario. {\bf b}. Recalculated link-degree attack scenario. \label{fig3}} \end{figure} \begin{figure}[th] \centerline{\includegraphics[width=5.5cm]{fig4a} \hspace{3em} \includegraphics[width=5.5cm]{fig4b}} \centerline{{\bf a} \hspace{21em} {\bf b}} \vspace*{8pt} \caption{Normalized size $S(c)$ of the largest component of the PTN as a function of the share $c$ of removed links for different attack scenarios. Each curve corresponds to a different scenario as indicated in the legend. Lists of removed links were prepared according to their degree $k^{(l)}$ and betweenness centrality ${\cal C}_{B}^{(l)}$. A superscript $i$ refers to lists prepared for the initial PTN before the attack; RL and RV denote the removal of a random link and of a random node, respectively. {\bf a}. For the PTN of Dallas. {\bf b}. For the PTN of Paris. \label{fig4}} \end{figure} A particular feature of link-targeted attacks is that when a link is removed, the neighbouring nodes survive. Therefore, during link-targeted attacks all nodes survive to the end of the attack, i.e. the number of nodes does not change, while the share of removed links increases. In Figs.
\ref{fig3} and \ref{fig4} we monitor the behaviour of the normalised size $S(c)$ of the largest connected component (\ref{11}), now as a function of the share of removed links following the corresponding link-attack scenarios. Besides removing links at random, we use sequences ordered according to link degree and link betweenness centrality, (\ref{9}), (\ref{10}), either calculated for the {\em initial} unperturbed PTN (we indicate the corresponding scenario by a superscript $i$, e.g. ${\cal C}_{B}^{i,(l)}$) or following lists recalculated for the remaining links after each step of removing 1\% of the initial set of links. In Fig. \ref{fig3} {\bf a} we show the change of the normalised size $S(c)$ of the largest cluster under random link-targeted attacks (RL). Comparing this behavior with that observed for the random node removal scenario (RV) (see Fig. \ref{fig1} {\bf a}), one can see that for most PTNs with strong resilience to random node-targeted attacks, random link removal is even less effective. On the other hand, for PTNs with weak resilience there seems to be no significant difference. Similarly to the random node attack (RV) scenario, the random link attacks (RL) lead to changes of the largest connected component $S$ that range from an abrupt breakdown (Dallas) to a slow smooth decrease (Paris). The decay is even slower than for random node removal: removing a link does not necessarily remove a node from the largest cluster, while removing a node from the largest connected component decreases it by at least one node. Typical results for PTNs under different types of link-targeted attacks as applied to the PTNs of Dallas and Paris are displayed in Fig. \ref{fig4}. We show how the normalised size $S(c)$ of the largest connected component of the Dallas ({\bf a}) and Paris ({\bf b}) PTN varies as a function of the share $c$ of removed links following the above described attack scenarios.
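The link degree of Eq. (\ref{9}) that drives the link-degree scenarios can be sketched as follows (plain Python; one link is removed per step instead of 1\% batches, and the toy graph is illustrative):

```python
def link_degree(adj, u, v):
    """Link degree k^(l)_uv = k_u + k_v - 2, Eq. (9)."""
    return len(adj[u]) + len(adj[v]) - 2

def remove_max_degree_link(adj):
    """Remove (and return) the link with the highest recalculated link degree;
    both end nodes stay in the network, only the link disappears."""
    links = [(u, v) for u in adj for v in adj[u] if u < v]
    u, v = max(links, key=lambda e: link_degree(adj, *e))
    adj[u].discard(v)
    adj[v].discard(u)
    return (u, v)

# Path 0-1-2-3: the middle link 1-2 has k^(l) = 2 + 2 - 2 = 2, the end links have 1.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(link_degree(path, 1, 2))       # 2
print(remove_max_degree_link(path))  # (1, 2); all four nodes survive
```

Note that the node set is unchanged after the removal, in line with the observation above that link-targeted attacks leave all nodes in place.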
As one can see, there is no significant difference between the effectiveness of most scenarios, including the random one, for the PTN of Dallas. The vulnerability behavior of the Dallas PTN under link-targeted attacks appears not to differ from the corresponding random vertex removal approach. For Paris the situation is quite different. The main observation is that the random vertex attack is initially more effective than any link-targeted attack, up to the breakdown and beyond, and only once nearly 50\% of the links have been removed does the recalculated link degree ($k^{(l)}$) targeted scenario start to be more harmful. Comparing different link-targeted scenarios one notices similar behavior among them; only the recalculated link degree curve initially decays more slowly, becoming more effective near the breakdown. In the following section we will compare the outcomes of node- and link-targeted attacks in more detail. The special behaviour of the recalculated link degree scenario may be explained as follows: in each removal step the links with the highest link degrees are removed. However, these may belong to a number of different nodes. The affected nodes therefore remain, albeit with lower degrees. After recalculation, the links on the nodes affected in the last step will have moved to lower places in the ordered list, such that other links will be affected. This continues until the degrees of all nodes have been reduced to three or less. At that point the removal of almost any link will cut down the connected component and a rapid breakdown of the largest component takes place. When the sequence for the removal of links is calculated using the initial degrees of the vertices, then obviously in the first 1\% step the set of all links connected to the highest degree vertices is removed, which is approximately equivalent to removing the corresponding 1\% of highest degree nodes.
Since no recalculation is involved, the second step will essentially cut the links off the next 1\% of highest degree nodes. Disregarding correlations between these operations, one may therefore expect that the initial link and node degree scenarios result in similar breakdown behaviour. \section{Robustness measures and correlations} \label{V} \begin{table} \caption{Robustness measure $A$, Eq. (\ref{5a}), for the PTNs of different cities as analyzed in this study. Columns 2-6 give the value of $A$ for node-targeted attacks, columns 7-11 give $A$ for link-targeted attacks. The results for $A$ for the following attack scenarios are reported -- RV: random node; $k$ : node with maximal recalculated degree; $k^i$ : node with maximal initial degree; ${\cal C}_{B}$ : node with maximal recalculated betweenness centrality; ${\cal C}_{B}^{i}$ : node with maximal initial betweenness centrality; RL : random link; $k^{(l)}$ : link with maximal recalculated degree; $k^{i,(l)}$ : link with maximal initial degree; ${\cal C}_{B}^{(l)}$ : link with maximal recalculated betweenness; ${\cal C}_{B}^{i,(l)}$ : link with maximal initial betweenness. \label{tab2}}
\begin{center}
\tabcolsep1.2mm
{\small
\begin{tabular}{lrrrrrrrrrr}
\toprule
\multicolumn{1}{c}{City}& \multicolumn{5}{c}{Node-targeted attacks}& \multicolumn{5}{c}{Link-targeted attacks}\\
 & RV & $k$ & $k^i$ & ${\cal C}_{B}$ & ${\cal C}_{B}^{i}$ & RL & $k^{(l)}$ & $k^{i,(l)}$ & ${\cal C}_{B}^{(l)}$ & ${\cal C}_{B}^{i,(l)}$ \\
\colrule
Berlin & 22.71 & 6.52 & 7.12 & 7.27 & 9.44 & 31.21 & 22.27 & 25.57 & 29.91 & 30.92\\
Dallas & 9.81 & 3.41 & 3.61 & 6.07 & 13.28 & 11.17 & 8.94 & 10.68 & 11.75 & 19.58\\
D\"usseldorf & 25.47 & 7.45 & 9.39 & 8.26 & 12.65 & 31.22 & 23.88 & 28.69 & 30.58 & 31.44\\
Hamburg & 15.82 & 6.34 & 6.99 & 6.53 & 12.19 & 20.74 & 22.49 & 24.02 & 20.22 & 20.47\\
Hong Kong & 31.57 & 9.99 & 9.78 & 6.1 & 15.0 & 47.55 & 41.41 & 40.17 & 47.08 & 34.13\\
Istanbul & 16.05 & 4.46 & 5.03 & 5.62 & 9.42 & 18.45 & 13.13 & 15.1 & 19.78 & 18.86\\
London & 29.31 & 5.45 & 6.28 & 8.71 & 14.17 & 27.45 & 20.95 & 22.85 & 27.2 & 27.33\\
Moscow & 34.61 & 8.02 & 8.37 & 7.82 & 11.63 & 51.18 & 38.99 & 41.96 & 50.68 & 41.58\\
Paris & 37.93 & 10.77 & 13.12 & 10.67 & 14.07 & 56.04 & 47.12 & 51.83 & 55.93 & 48.03\\
Rome & 22.26 & 6.61 & 7.68 & 7.05 & 14.81 & 32.52 & 29.2 & 27.8 & 33.99 & 30.13\\
S\~ao Paulo & 32.4 & 4.43 & 4.59 & 5.22 & 6.23 & 47.09 & 33.19 & 32.08 & 47.46 & 33.85\\
Sydney & 32.15 & 8.74 & 9.49 & 6.61 & 18.53 & 46.45 & 37.26 & 35.74 & 49.14 & 26.15\\
Taipei & 27.59 & 10.92 & 13.55 & 11.71 & 20.31 & 39.35 & 36.03 & 40.41 & 38.21 & 35.37\\
\botrule
\end{tabular}
}
\end{center}
\end{table}
As mentioned in the Introduction, different indicators may be used to evaluate network stability. Here, for this purpose we use a measure recently introduced in Refs. \cite{schneider11a,schneider11b}. In our case, this measure corresponds to the area below the curve describing the normalised size $S(c)$ as a function of the share $c$ of removed nodes or links, as defined by Eq. (\ref{5a}). As follows from the definition, the measure captures the effects on the network over the complete attack sequence. It is especially useful in the analysis of real-world networks, which are of finite size and usually are not characterized by a single well-defined concentration at which phenomena analogous to percolation (network clustering) occur. Instead, the value $A$ is an integral characteristic which is well-defined for a finite-size network and is, as we will see below, nicely suited to compare the robustness of different PTNs under attacks. In table \ref{tab2} we give the value of $A$ for the node- and link-targeted attacks (left and right parts of the table, respectively).
Columns marked as RV (RL) give $A$ for the attacks in which nodes (links) were chosen at random; these numbers can be compared with the outcome of attacks made according to the initially prepared sequences of nodes (links) ordered by decreasing degrees ($k^i$, $k^{i,(l)}$) and betweenness centralities (${\cal C}^i_B$, ${\cal C}_B^{i,(l)}$). For the recalculated scenarios these indicators were recalculated after each step of the attack, and the corresponding results are given in the columns marked as $k$, $k^{(l)}$ and ${\cal C}_B$, ${\cal C}_B^{(l)}$. With the data of table \ref{tab2} at hand, it is easy to compare the robustness of a given PTN to attacks of different scenarios as well as to compare the robustness of different PTNs. Assuming that the most stable PTNs are those characterized by larger values of $A$, one may conclude from the table that for the node-targeted attacks the most harmful appear to be attacks targeted either on the nodes of highest degree (PTN of Berlin, Dallas, D\"usseldorf, Hamburg, Istanbul, London, Rome, S\~ao Paulo, and Taipei) or on the nodes of highest betweenness centrality (Hong Kong, Moscow, Paris, Sydney). Another observation is that attacks performed according to the lists of nodes recalculated after each step of the attack scenario appear to be more effective than those performed according to the lists prepared prior to the attack. Moreover, this difference is much more pronounced for the highest betweenness centrality targeted nodes than for those with highest node degree. On the other hand, for link-targeted attacks the most effective appear to be the highest link degree targeted attacks according to the recalculated (PTN of Berlin, Dallas, D\"usseldorf, Istanbul, London, Moscow, Paris) or initial (Rome, S\~ao Paulo) lists of links.
Only for the PTNs of Hamburg, Hong Kong, Taipei, and Sydney does the highest betweenness centrality scenario appear to be the most effective; however, even in this case the difference between the scenarios is not very pronounced. This similarity in behaviour of the 'initial' and 'recalculated' scenarios seems to be an intrinsic feature of the link-targeted attacks. Moreover, as we noticed before, sometimes the 'initial' approach turns out to be more effective. It is interesting to mention that for three PTNs (Hamburg, Istanbul, Sydney) which are not very resilient against any kind of attack (though not for the PTN of Dallas, which is the least resilient), the most efficient scenario is the removal of links with the highest initial values of the betweenness centrality ${\cal C}_{B}^{i,(l)}$. It is worthwhile to note here that the order of the PTNs according to their vulnerability under link-targeted attacks is similar to that for the node-targeted scenarios, with just a few slight shifts. \begin{figure}[th] \centerline{\includegraphics[width=5.5cm]{fig5a} \hspace{3em} \includegraphics[width=5.5cm]{fig5b}} \centerline{{\bf a} \hspace{21em} {\bf b}} \centerline{\includegraphics[width=5.5cm]{fig5c} \hspace{3em} \includegraphics[width=5.5cm]{fig5d}} \centerline{{\bf c} \hspace{21em} {\bf d}} \centerline{\includegraphics[width=5.5cm]{fig5e} \hspace{3em} \includegraphics[width=5.5cm]{fig5f}} \centerline{{\bf e} \hspace{21em} {\bf f}} \vspace*{8pt} \caption{Attacks on nodes (left column) and on links (right column). Correlation between $A$ and $\kappa$ for the random ({\bf a, b}), recalculated node degree ({\bf c, d}) and recalculated betweenness ({\bf e, f}) scenarios. Results for $\kappa^{(z)}$ are shown by filled circles, results for $\kappa^{(k)}$ are shown by open circles. Solid lines show linear fits of the corresponding data points.
\label{fig5}} \end{figure} To further shed light on the correlation between the network characteristics {\em prior to the attack} and their stability {\em during the attack}, we check the correlation of $A$, Eq. (\ref{5a}), for all PTNs of our database under different attack scenarios (table \ref{tab2}) with the values of the Molloy-Reed parameters $\kappa^{(k)}$, Eq. (\ref{2}), and $\kappa^{(z)}$, Eq. (\ref{4}), of the unperturbed networks, as given in Table \ref{tab1}. The results are displayed in Fig. \ref{fig5}. There we show the value of $A$ correlated with the Molloy-Reed parameters $\kappa^{(z)}$ (filled circles) and $\kappa^{(k)}$ (open circles) of the same network for the node- and link-targeted attacks (left and right columns, respectively). One notices two different regimes in the relation between $A$ and $\kappa$ for the random and recalculated highest degree scenarios, both for node- and link-targeted attacks. First, $A$ rapidly increases with an increase of $\kappa$; then, in the second regime, when $\kappa$ exceeds a certain 'marginal' value, there is no pronounced correlation between $A$ and $\kappa$ any more, although a weak increase of $A$ with $\kappa$ is still observed. These two regimes are observed in both the $A(\kappa^{(z)})$ and $A(\kappa^{(k)})$ dependences; however, the behavior is more pronounced in the $A(\kappa^{(z)})$ plots (filled circles). We show the linear fits for both regimes by solid lines in the figures. The region of $\kappa$ where the first regime is observed is $1 \lesssim \kappa^{(z)} \lesssim 2$ ($2 \lesssim \kappa^{(k)} \lesssim 4$). Thus, if two PTNs have initial values of the corresponding Molloy-Reed parameters in this region, it is very probable that the PTN with the higher value of $\kappa$ will be substantially more stable than the PTN with the lower value of $\kappa$.
However, the PTNs with Molloy-Reed parameters $\kappa^{(z)} > 2$ ($\kappa^{(k)} > 4$), although in general being more stable than those with lower $\kappa$, do not differ substantially in their stability. A similar behaviour is observed for the link-targeted highest betweenness centrality attacks (Fig. \ref{fig5} {\bf f}) but it is less pronounced, and even less so for the node-targeted highest betweenness centrality attacks (Fig. \ref{fig5} {\bf e}), where almost no correlation between $A$ and $\kappa$ is observed. To understand the origin of the particular sensitivity of PTN stability for small values of $\kappa$, let us recall the results for uncorrelated networks (see formulas (\ref{3}), (\ref{5}) and references in the text): a GCC in an infinite network can exist only if $\kappa$ exceeds the marginal value $\kappa^{(z)} =1$ ($\kappa^{(k)} =2$). In the vicinity of this marginal value the network is especially sensitive to even slight changes. Obviously, the finiteness of the PTNs and the correlation effects present there lead to modifications of the criteria (\ref{3}), (\ref{5}); however, the general sensitivity of network stability to changes in $\kappa$ at small $\kappa$ remains. \begin{figure}[th] \centerline{\includegraphics[width=5.5cm]{fig6a} \hspace{3em} \includegraphics[width=5.5cm]{fig6b}} \centerline{{\bf a} \hspace{21em} {\bf b}} \centerline{\includegraphics[width=5.5cm]{fig6c} \hspace{3em} \includegraphics[width=5.5cm]{fig6d}} \centerline{{\bf c} \hspace{21em} {\bf d}} \centerline{\includegraphics[width=5.5cm]{fig6e} \hspace{3em} \includegraphics[width=5.5cm]{fig6f}} \centerline{{\bf e} \hspace{21em} {\bf f}} \vspace*{8pt} \caption{Results of node-targeted attacks (left column) and link-targeted attacks (right column). Correlation between $A$ and $\langle k \rangle$ for ({\bf a, b}) the random, ({\bf c, d}) the recalculated degree and ({\bf e, f}) the recalculated betweenness scenarios.
\label{fig6}} \end{figure} Another interesting observation is illustrated by Fig. \ref{fig6}. There, we show the correlation of $A$ with the mean node degree $\langle k \rangle$ for the random ({\bf a, b}), recalculated degree ({\bf c, d}) and recalculated betweenness ({\bf e, f}) scenarios. A generic feature of the $A(\langle k \rangle)$ plots is the linear increase of $A$ with increasing $\langle k \rangle$, which is observed for all values of $\langle k \rangle$ and for all three scenarios. A similar increase is observed both for the node- and link-targeted attacks, cf. Fig. \ref{fig6} ({\bf a, c}) and Fig. \ref{fig6} ({\bf b, d}); however, the linear approximation holds for the node-targeted attacks with less accuracy and is almost useless for the highest betweenness centrality plot, Fig. \ref{fig6} ({\bf e}). The corresponding fits are shown by solid lines in the figures. The plots of Fig. \ref{fig6} demonstrate the correlation of network stability with the initial 'density' of network constituents, nodes or links, without reference to the correlations in the PTN structure. This is different from the plots of Fig. \ref{fig5}, where the correlations were taken into account by analyzing the second moment of the node degree distribution $\langle k^2 \rangle$, which enters the Molloy-Reed parameter. Therefore, Fig. \ref{fig6} shows the correlation of the network stability measure $A$ with the mean node degree $\langle k\rangle$. In both cases, within the expected scatter of the data one observes clear evidence of an increase of $A$ with $\langle k \rangle$, i.e. networks with smaller mean node degree $\langle k \rangle$ break down at smaller values of $c$ and are thus more vulnerable to attacks. Again, this observation holds for the link-targeted attacks as well as for the node-targeted attacks of the random and recalculated highest degree scenarios.
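The linear fits quoted above are ordinary least-squares fits of $A$ against the chosen network characteristic. A minimal sketch in plain stdlib Python (the data points are illustrative, not the values of tables \ref{tab1} and \ref{tab2}):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    return a, my - a * mx

# Illustrative points lying exactly on A = 10*kappa + 5.
kappa = [1.0, 1.5, 2.0, 2.5]
A_vals = [10.0 * k + 5.0 for k in kappa]
print(linear_fit(kappa, A_vals))  # (10.0, 5.0)
```

For the two-regime plots of Fig. \ref{fig5}, such a fit would be applied separately to the points below and above the marginal value of $\kappa$.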
\begin{figure}[th] \centerline{\includegraphics[width=5.5cm]{fig7a} \hspace{3em} \includegraphics[width=5.5cm]{fig7b}} \centerline{{\bf a} \hspace{21em} {\bf b}} \vspace*{8pt} \caption{Results of node-targeted attacks. Correlation of $A$ with respect to $\gamma$ for ({\bf a}) the random and ({\bf b}) the recalculated node degree scenarios. Filled circles correspond to the PTNs with a more pronounced power-law decay of the node-degree distribution, open circles to the PTNs where the power-law decay is less pronounced (see Section \ref{II}). \label{fig7}} \end{figure} For the node-targeted attacks on scale-free networks it is also useful to check the correlation between the node degree distribution exponent $\gamma$, Eq. (\ref{6}), and the stability measure $A$. Analytic results for infinite scale-free networks as well as empirical observations for numerous real-world scale-free networks have confirmed a particular stability of scale-free networks: there is no percolation threshold for exponents $\gamma\leq 3$ \cite{Cohen00,Callaway00}. As we have observed in previous studies \cite{Ferber09a}, some of the PTNs under consideration are scale-free: their node-degree distributions have been fitted by a power-law decay (\ref{6}) with the exponents shown in Table \ref{tab1}. Others are characterized rather by an exponential decay, but up to a certain accuracy they can also be approximated by a power-law behavior (then, the corresponding exponent is shown in Table \ref{tab1} in brackets). In Figs. \ref{fig7}{\bf a} and \ref{fig7}{\bf b} we show the correlation between the fitted node-degree distribution exponent $\gamma$ and $A$ for the random and recalculated node degree scenarios. One observes a notable tendency for PTNs with smaller values of $\gamma$ to be more resilient, as indicated by larger values of $A$. This tendency holds even if we include the PTNs which are better described by an exponential decay of the node-degree distribution. 
\section{Conclusions and outlook} \label{VI} In this paper we have presented an empirical analysis of the reaction of PTNs of different cities of the world to random failure or directed attack scenarios. There may be numerous reasons for individual failure, ranging from a random accident to a targeted destruction; however, in accumulation these may lead to an emergent behavior as a result of which the PTN ceases to function. On the one hand, our analysis is motivated by practical interest in the stability of individual PTNs, thereby comparing the operating features of different PTNs. On the other hand, we were seeking to identify criteria which allow one to judge {\em a priori} on the attack stability of real-world correlated networks of finite size, checking how these criteria correspond to the analytic results available for infinite uncorrelated networks. To perform the present analysis we have used previously accumulated \cite{Ferber09a} data on PTNs of several major cities of the world (see Table \ref{tab1}) and simulated attacks of different scenarios targeted on the PTN nodes and links. To quantify the PTN stability to attacks of different scenarios we use a recently introduced \cite{schneider11a,schneider11b} numerical measure of network robustness. In our case, this measure is defined as the area below the curve described by the normalized size $S(c)$ of the largest connected component as a function of the share $c$ of removed nodes. In this respect, the measure captures the overall resilient behavior over the complete attack sequence. Table \ref{tab2} allows one to compare the robustness of a given PTN to attacks of different scenarios as well as the relative robustness of different PTNs. The comparison of PTN characteristics measured {\em prior} to the attack with the PTN robustness monitored {\em during} the attack allowed us to propose criteria for an a priori estimate of PTN robustness and stability with respect to an attack. 
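The robustness measure just described can be computed directly from an attack curve. Below is a minimal sketch (our own illustration, not the code of Schneider et al.) that integrates $S(c)$ over the full attack sequence with the trapezoid rule:

```python
import numpy as np

def robustness_A(S):
    """Robustness measure A: area below the S(c) curve, where S is the
    normalized size of the largest connected component after removing a
    share c of nodes, sampled on a uniform grid c = 0, 1/(n-1), ..., 1.
    """
    S = np.asarray(S, dtype=float)
    h = 1.0 / (len(S) - 1)  # uniform spacing in c
    return float(np.sum((S[1:] + S[:-1]) / 2.0) * h)

# A maximally robust network loses only the removed share itself,
# S(c) = 1 - c, which gives the largest possible value A = 0.5.
c = np.linspace(0.0, 1.0, 101)
S_ideal = 1.0 - c
print(robustness_A(S_ideal))
```

In practice $S(c)$ drops faster than $1-c$, so $A < 0.5$; larger $A$ means the network keeps a larger connected component for longer during the attack.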
This stability is indicated by high values of the Molloy-Reed parameters $\kappa^{(k)}$, Eq. (\ref{2}), and $\kappa^{(z)}$, Eq. (\ref{4}), as well as by a high value of the mean node degree $\langle k \rangle$ of the unperturbed network. Moreover, if the PTN node degree distribution manifests a power-law decay, we have observed a notable tendency for PTNs with smaller values of $\gamma$ to be more stable. \section*{Acknowledgments} BB, CvF, and YuH acknowledge partial support by the FP7 EU IRSES project N269139 `Dynamics and Cooperative Phenomena in Complex Physical and Biological Media'. This work was in part performed within the framework of the COST Action MP0801 `Physics of Competition and Conflicts'.
\section{Introduction} Since their discovery in the 1960s (Schmidt 1963), quasars have become important extra-galactic objects in astrophysics. They not only can be used to probe the physics of supermassive black holes and the accretion/jet process, but also are closely related to studies of galaxy evolution, the intergalactic medium, large scale structure and cosmology. More than 120,000 quasars have been discovered from the large optical sky surveys, such as the Two-Degree Fields survey (Boyle et al. 2000) and the Sloan Digital Sky Survey (SDSS, York et al. 2000; Schneider et al. 2010). The quasar candidates in these surveys were mainly selected by optical colors, namely that, due to their strong UV and optical emission, quasars at $\rm z<2.2$ and $\rm z>3.0$ can be distinguished from stellar objects in the color-color and color-magnitude diagrams based on optical photometry (Smith et al. 2005; Richards et al. 2002; Fan et al. 2000). However, in the redshift range $\rm 2.2<z<3.0$, the redshifted spectral energy distributions of quasars show optical colors similar to those of normal stars, and quasar selection using the optical color-color diagrams becomes very inefficient due to the serious contamination by stars (Fan 1999; Richards et al. 2002, 2006; Schneider et al. 2007). Because of the crucial importance of z$>$2.2 quasars in studying the Ly$\alpha$ forest and cosmic baryon acoustic oscillation (BAO) (White 2003; McDonald \& Eisenstein 2007) and in constructing an accurate luminosity function to study the quasar evolution in the mid-redshift universe (Wolf et al. 2003; Jiang et al. 2006), we have to explore other efficient ways to identify the missing $\rm 2.2<z<3.0$ quasars. In the last few years, two main approaches have been taken to separate quasars and stars rather than using optical color-color diagrams. The first approach is to use optical variability, as this is one of the well known quasar properties (Hook et al. 1994; Cristiani et al. 
1996; Giveon et al. 1999). Schmidt et al. (2010) have proposed a method to select quasar candidates by their intrinsic variability. They showed that the quasar structure functions, constructed from the light-curves of known quasars in SDSS Stripe 82 (hereafter S82; see also Sesar et al. 2007), can be modeled by a power-law function with amplitude $A$ and power-law index $\gamma$. Quasars can be separated from stars in the $\rm A-\gamma$ plane, which enables efficient selection of quasar candidates based on long-term single-band optical photometry (Schmidt et al. 2010). They also pointed out that in the redshift range $\rm 2.5<z<3.0$, variability can help to select quasars with a completeness of 90$\%$. MacLeod et al. (2011) also developed a method to use the damping timescale and asymptotic amplitude of variable sources in S82 to separate quasars from stars with an efficiency higher than 75\%. Butler \& Bloom (2011) recently presented a similar time-series study of quasars in S82, and proposed to use two statistics, a quasar-like variability metric and a non-quasar variability metric, to separate quasar candidates from stars. They claimed that with their method they can achieve nearly a factor of two increase in quasars at $\rm 2.5<z<3.0$. In addition, very recent results from the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS; Eisenstein et al. 2011) also confirmed the high success rate of spectroscopically identifying variability-selected quasars, which leads to a significantly higher z$>$2.2 quasar density in S82 than that based on optical colors only (Palanque-Delabrouille et al. 2011; Ross et al. 2011). The second approach to separate z$>$2.2 quasars from stars is to utilize their near-IR colors. As the continuum emission from stars usually decreases more rapidly from optical to near-IR wavelengths than that of quasars, the near-IR colors of stars differ from those of quasars. 
This leads to a method that uses the K-band excess to identify quasars at z$>$2.2 (e.g. Warren, Hewett \& Foltz 2000; Croom, Warren \& Glazebrook 2001; Sharp et al. 2002; Hewett et al. 2006; Chiu et al. 2007; Maddox et al. 2008; Smail et al. 2008; Wu \& Jia 2010). Using the photometric data in the $ugriz$ bands of SDSS DR7 (Abazajian et al. 2009) and the YJHK bands of the UKIRT InfraRed Deep Sky Surveys (UKIDSS\footnote{ The UKIDSS project is defined in Lawrence et al. (2007). UKIDSS uses the UKIRT Wide Field Camera (WFCAM; Casali et al. 2007) and a photometric system described in Hewett et al. (2006). The pipeline processing and science archive are described in Hambly et al. (2008).}) Large Area Survey (LAS) DR3, Wu \& Jia (2010) compiled a sample of 8498 SDSS-UKIDSS quasars and a sample of 8996 SDSS-UKIDSS stars. Based on these two samples they compared different optical/near-IR color-color diagrams and proposed an efficient empirical criterion for selecting z$<$4 quasars in the near-IR Y$-$K and optical g$-$z color-color diagram (i.e. $\rm Y-K>0.46(g-z)+0.53$, where all magnitudes are Vega magnitudes). With this criterion, they obtained a completeness of 98.6$\%$ in recovering z$<$4 quasars, with a misidentification rate of 2.3$\%$ (stars classified as quasars). A check with the FIRST (Becker, White \& Helfand 1995) radio-detected SDSS quasars, which are believed to be free of color selection bias, also proved that with this Y-K/g-z criterion they can achieve a completeness higher than 95$\%$ for these radio-detected quasars with z$<$3.5, which is difficult with the SDSS optical color selection criteria alone, where a dip around z$\sim$2.7 in the redshift distribution obviously exists (Richards et al. 2002, 2006; Schneider et al. 2007, 2010). 
Recently, Peth, Ross \& Schneider (2011) extended the study of Wu \& Jia (2010) to a much larger sample of 130000 SDSS-UKIDSS selected quasar candidates and re-examined the methods of separating stars and mid-redshift quasars with near-IR/optical colors. Using the Y-K/g-z selection criterion, Wu et al. (2010a,b) also successfully identified some $\rm 2.2<z<3.0$ quasars during the commissioning period of the Chinese GuoShouJing Telescope (LAMOST), which provides further support to the effectiveness of selecting mid-redshift quasars using optical/near-IR colors. Although both approaches mentioned above can be used to identify quasars at $\rm 2.2<z<3.0$, a more ideal approach is to combine the variability and optical/near-IR colors to achieve the maximum efficiency. In this paper, we present a case study by selecting a sample of variable, non-UV excess, SDSS-UKIDSS quasar candidates in S82 (Schmidt et al. 2010), and spectroscopically identifying 14 new quasars at z=2.36 to 2.88. We also apply this method to some new variable quasar candidates in S82 recently suggested by Butler \& Bloom (2011) and find that 188 SDSS-UKIDSS sources are probably new quasars with 2.2$<$z$<$3. We describe the sample selections and spectroscopic observations in Section 2, present more new 2.2$<$z$<$3 quasar candidates in S82 in Section 3 and discuss the results in Section 4. \section{Target selections and spectroscopic observations} Our purpose is to efficiently select $\rm 2.2<z<3.0$ quasars by combining variability and optical/near-IR colors, so we focus on the S82 region where both variability and SDSS-UKIDSS photometric data are available with high quality. A sample of 118 non-UV excess quasar candidates in S82 has been selected with the algorithm presented in Schmidt et al. (2010), which have UV-optical colors similar to those of stars (i.e. 
consistent with the observed colors of quasars at $\rm 2.2<z<3.0$) and optical variability properties consistent with the region defined by quasars on the $\rm A-\gamma$ plane. 70 of them have near-IR YJHK photometric data from the UKIDSS/LAS DR4\footnote{Available at http://surveys.roe.ac.uk/wsa/}. All photometric magnitudes are corrected for Galactic extinction using the map of Schlegel, Finkbeiner \& Davis (1998). We plot the 70 objects on the Y-K/g-z color-color diagram (see Fig. 1). They are clearly separated into two parts on this diagram. 54 sources match the selection criterion of $\rm Y-K>0.46(g-z)+0.82$ defined for z$<$4 quasars (here we convert the original criterion given in Wu \& Jia (2010) to a new one to keep the g and z magnitudes in the AB system and the Y and K magnitudes in the Vega system; see the dashed line in Fig. 1). Five sources are located slightly below but very close to the criterion. Therefore, we consider these 59 sources to be probable $\rm 2.2<z<3.0$ quasars. The photometric redshifts of these 59 sources are estimated to be from z=2.43 to 3.05 using their nine-band SDSS-UKIDSS photometric data with a program introduced in Wu \& Jia (2010). Indeed, 44 among them have been spectroscopically identified as quasars by SDSS previously. These 44 known quasars have spectroscopic redshifts from 0.59 to 3.29, and 35 of them are $\rm 2.2<z<3.0$ quasars. The spectroscopic redshifts of 40 of these 44 known quasars are consistent with their photometric redshifts within $\rm |\Delta z|\leq 0.3$. This confirms the high efficiency of selecting $\rm 2.2<z<3.0$ quasars by combining the variability and optical (g-z)/near-IR (Y-K) colors. Spectroscopic identification of the remaining 15 quasar candidates is needed. Apart from the above 59 quasar candidates, the other 11 objects are located well below the quasar selection criterion in Fig. 1, and their Y-K and g-z colors are indistinguishable from those of stars in the stellar locus (see Fig. 5 of Wu \& Jia (2010)). 
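The color cut used above is a simple linear inequality in the Y-K/g-z plane. A minimal sketch (our own illustration, with made-up example magnitudes) of applying it in the AB/Vega form adopted in this paper is:

```python
def is_quasar_candidate(g, z, Y, K, intercept=0.82):
    """Y-K / g-z color cut of Wu & Jia (2010) for z < 4 quasar candidates.

    g and z are SDSS AB magnitudes; Y and K are UKIDSS Vega magnitudes.
    intercept = 0.82 is the AB/Vega form used in this paper (0.53 when
    all magnitudes are on the Vega system).
    """
    return (Y - K) > 0.46 * (g - z) + intercept

# A blue object with a strong K-band excess passes the cut;
# a typical red star in the stellar locus does not.
print(is_quasar_candidate(g=19.0, z=18.8, Y=18.0, K=16.5))  # True
print(is_quasar_candidate(g=17.0, z=15.5, Y=15.0, K=14.8))  # False
```

The magnitudes here are illustrative only; in practice they would be the extinction-corrected SDSS and UKIDSS photometry described in the text.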
In addition, ten of them have very bright optical magnitudes (i.e. $i<16.5$) and are unlikely to be quasars at the expected redshifts ($\rm 2.2<z<3.0$). Indeed, four of them (SDSS J034751.14-001730.7, SDSS J035208.92+005919.6, SDSS J224630.25+010018.3 and SDSS J225342.13+011207.1) have already been cataloged as stars in the SIMBAD database\footnote{http://simbad.u-strasbg.fr/simbad/}. As mentioned above, spectroscopic identification is still required for the remaining 15 quasar candidates with $\rm 2.2<z<3.0$ in S82. All of them have $i$-band magnitudes brighter than 19.3. In this paper we present optical spectra for 14 of them\footnote{The only one left is SDSS J220808.97+002858.3 with a photometric redshift of 2.78. We are planning to observe it in the fall of 2011.}. Eight of them were observed using the Boller \& Chivens Spectrograph on the Bok 2.3m Telescope at Kitt Peak in November 2010. The observations cover a wavelength range of 3620--6900 $\rm \AA$ with a spectral resolution of 8.3 $\rm \AA$. The spectra of the other six objects were obtained with the Blue Channel Spectrograph on the MMT 6.5m Telescope at Mt. Hopkins in December 2010, with a wavelength coverage of 3600 $\rm \AA$ to 8000 $\rm \AA$ and a spectral resolution of 5.8 $\rm \AA$. We reduced the data with the IRAF package, and broad emission lines, such as Ly$\alpha$+$\rm N\,{\small V}$, $\rm Si\,{\small IV}$+$\rm O\,{\small IV}]$, and $\rm C\,{\small IV}$, have been clearly detected in the spectra of all 14 quasar candidates. We measure the redshifts of these 14 new quasars by fitting Gaussian line profiles to the Ly$\alpha$+$\rm N\,{\small V}$, $\rm Si\,{\small IV}$+$\rm O\,{\small IV}]$ $\rm \lambda1399$ and $\rm C\,{\small IV}$ $\rm \lambda1549$ emission lines. The details of the sources and observational results, including their names, coordinates, magnitudes, exposure times, photometric and spectroscopic redshifts, are summarized in Table 1. 
The spectra of these 14 new quasars with z=2.36 to 2.88, taken with Bok and MMT, are presented in Fig. 2 and Fig. 3, respectively. These observations clearly demonstrate the high efficiency of selecting $\rm 2.2<z<3.0$ quasars by combining variability and optical/near-IR colors. For the 11 sources located well below the quasar selection criterion of the Y-K/g-z color-color diagram, we also took the spectrum of one of them (SDSS J035658.21+003801.8, $i$=18.69) with the Bok 2.3m telescope in November 2010 and of two of them (SDSS J034950.99+010845.9, $i$=11.63 and SDSS J035816.05+002351.9, $i$=13.58) with the 2.16m telescope of Xinglong/NAOC in January 2011, and confirmed their nature as stars due to the lack of emission lines and the presence of Balmer absorption features. Together with the other four previously known stars, seven of these 11 sources located in the stellar locus have been identified as stars. Although four sources remain unidentified, they are obviously too bright ($i<14$) to be quasars. Therefore, we believe that all these 11 sources are stars with a certain level of optical variability. Combined with their optical/near-IR colors, we can easily separate them from quasars. \section{More $\rm 2.2<z<3.0$ quasar candidates in SDSS Stripe 82} In a recent paper, Butler \& Bloom (2011) presented a time-series study of quasars in S82 similar to those of Schmidt et al. (2010) and MacLeod et al. (2011). They proposed two different statistics, namely a quasar-like variability metric and a non-quasar variability metric, to separate quasar candidates from stars. They obtained 1875 new quasar candidates in S82 and claimed that with their method they can achieve nearly a factor of two increase in quasars at $\rm 2.5<z<3.0$. Here we cross-correlate their variable quasar candidates with the sources in the UKIDSS/LAS DR5 and obtain 643 new quasar candidates with SDSS-UKIDSS nine-band photometric data. In Fig. 
4 we plot these sources in the Y-K/g-z diagram, in comparison with the quasar selection criterion suggested by Wu \& Jia (2010). 597 of these 643 sources (a fraction of 93$\%$) meet the selection criterion, suggesting that most of them should be real quasars with z$<$4. This comparison also provides mutual support to the quasar selection methods based on variability and on optical/near-IR colors. To select $\rm 2.2<z<3.0$ quasars more reliably from these 597 quasar candidates, we used a program introduced in Wu \& Jia (2010) to estimate the photometric redshifts of the quasar candidates based on their SDSS-UKIDSS nine-band photometric data. In the upper panel of Fig. 5 we show the photometric redshift distribution of these 597 new quasar candidates. Although they are distributed over a broad redshift range from 0.1 to 3.8, a large fraction of them are obviously at $\rm 2.2<z<3.0$. Among these 597 quasar candidates, 244 sources have photometric redshifts larger than 2, and 188 of them are $\rm 2.2<z<3.0$ quasar candidates. Considering the fact that only 948 quasars at $\rm 2.2<z<3.0$ in S82 have been identified in the SDSS DR7 (Schneider et al. 2010), the fraction of $\rm 2.2<z<3.0$ quasar candidates in our SDSS-UKIDSS variable source sample is significantly higher. This is understandable because the SDSS quasar survey mainly focused on finding quasars with z$<$2.2 and z$>$3.5 (Richards et al. 2002). Many quasars with $\rm 2.2<z<3.0$ are therefore missing in the SDSS quasar survey but can be discovered by combining variability and optical/near-IR colors as we suggest in this paper. In the lower panel of Fig. 5 we show the distribution of the dereddened $i$-band magnitudes of the 597 quasar candidates, as well as that of the 188 quasar candidates at $\rm 2.2<z<3.0$. Clearly, the majority of them are located between $i=19.1$ and $i=20.5$. This is also the reason why they are missing in the SDSS survey, because most of the SDSS known quasars have $i<19.1$. 
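The photometric redshifts used here come from minimizing the difference between observed colors and a quasar color-redshift relation. A minimal sketch of such a chi-square estimate follows; the template colors below are purely illustrative, not the actual Wu \& Jia (2010) relation:

```python
import numpy as np

def photo_z(obs_colors, z_grid, template_colors, sigma=0.1):
    """Minimal chi-square photometric redshift estimate.

    template_colors[i] holds the model colors at redshift z_grid[i];
    the best redshift minimizes chi^2 = sum(((obs - model)/sigma)**2).
    """
    chi2 = ((template_colors - obs_colors) ** 2 / sigma ** 2).sum(axis=1)
    return z_grid[np.argmin(chi2)]

z_grid = np.linspace(0.0, 4.0, 401)
# A fake, smooth two-color color-redshift relation for illustration.
template = np.column_stack([0.3 * z_grid, 1.0 - 0.2 * z_grid])
obs = np.array([0.3 * 2.5, 1.0 - 0.2 * 2.5])  # colors of a z = 2.5 object
print(photo_z(obs, z_grid, template))  # 2.5
```

A real implementation would use the eight adjacent SDSS-UKIDSS colors and photometric errors per band, but the grid-search structure is the same.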
We expect that the ongoing BOSS survey in SDSS III, which aims to discover 150000 quasars with z$>$2.2 (Eisenstein et al. 2011; Ross et al. 2011), could confirm the quasar nature and redshifts of these candidates soon. In Table 2 we list the coordinates, photometric redshifts, and SDSS and UKIDSS magnitudes of the 188 quasar candidates at $\rm 2.2<z<3.0$. We also note that three bright sources ($i<19.3$) among them have been spectroscopically identified by us in Section 2. \section{Discussion} We have presented a case study to demonstrate that we can effectively select $\rm 2.2<z<3.0$ quasars by combining optical variability and optical/near-IR colors. Our successful spectroscopic identifications of 14 new quasars at z=2.36 to 2.88 with the Bok 2.3m telescope and the MMT 6.5m telescope provide further support to this combined approach, which can be used to select quasars with probably the highest efficiency (here we define the efficiency as the percentage of quasars identified from the spectroscopic targets, similar to the definition in SDSS-III (Ross et al. 2011)). We also compiled a catalog of 188 quasar candidates with photometric redshifts at $\rm 2.2<z<3.0$ from variable SDSS-UKIDSS sources in S82, and expect that the ongoing SDSS III spectroscopy will confirm their quasar nature and redshifts soon. We notice that although combining the optical/near-IR colors and time-series information can help increase the efficiency of identifying quasars, it may decrease the completeness if both the color and variability selection criteria are required. This can also be seen from Fig. 1 and Fig. 4, as some quasars selected by variability do not meet our color selection criterion. One possible way to avoid this is to decrease the threshold for each criterion. For example, relaxing our color criterion to $Y-K>0.46(g-z)+0.50$ would include all variability-selected quasars in Fig. 1 and 98.8\% of variability-selected quasar candidates in Fig. 
4, without much increasing the contamination from stars, as most of them are less variable and still far from our color selection criterion. However, how to best combine the optical/near-IR colors and time-series information to select quasars with both higher efficiency and higher completeness obviously needs more investigation in the future, based on complete samples of quasars and stars in certain sky areas. To use this combined approach, we need both optical variability measurements and optical/near-IR photometry over a large sky area. Especially for finding z$>2.2$ quasars, deeper imaging and multi-epoch photometry are necessary. However, so far both variability and optical/near-IR photometric observations have been realized only for a small part of the sky, such as S82. Because the typical variability timescales of quasars in the optical band are usually years, we need to measure the variability of sources at many epochs over at least several years in order to obtain better statistics for determining their variability features. That is why the variability studies related to quasars have so far been done only in some smaller sky areas, which significantly limits the efforts to discover quasars through variability. However, even if we may not have both time-series and color information for quasar candidate selection in most sky areas, utilizing both kinds of information as much as possible often allows us to recover the most quasars. Fortunately, there are also several ongoing and upcoming large projects with both photometric and variability information, especially the Panoramic Survey Telescope \& Rapid Response System (Pan-STARRS; Kaiser et al. 2002) and the Large Synoptic Survey Telescope (LSST; Ivezic et al. 2008). The multi-epoch, multi-band photometry covering a large part of the sky by these facilities will hopefully provide better opportunities to use variability to construct a much larger sample of quasars than currently available. 
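The structure-function fitting that underlies this kind of variability selection (Schmidt et al. 2010) reduces, in its simplest form, to a power-law fit $V(\Delta t)=A\,(\Delta t/1\,{\rm yr})^{\gamma}$ in log-log space. A minimal sketch with synthetic data (not S82 light-curves) is:

```python
import numpy as np

def fit_structure_function(dt, V):
    """Fit the power-law structure function V(dt) = A * (dt / 1 yr)^gamma
    by linear regression in log-log space (dt in years, V in magnitudes).

    Quasars and stars then separate in the resulting A-gamma plane.
    """
    slope, intercept = np.polyfit(np.log10(dt), np.log10(V), 1)
    return 10.0 ** intercept, slope  # amplitude A, index gamma

# Synthetic check: data generated with A = 0.2 mag, gamma = 0.5
# are recovered exactly.
dt = np.array([0.1, 0.3, 1.0, 3.0])
V = 0.2 * dt ** 0.5
A, gamma = fit_structure_function(dt, V)
print(round(A, 3), round(gamma, 3))  # 0.2 0.5
```

Real light-curves would require binning magnitude differences by time lag and accounting for photometric noise before the fit, which this sketch omits.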
On the other hand, several ongoing and upcoming optical and near-IR photometric sky surveys will also provide crucial help in extending the SDSS-UKIDSS optical/near-IR color selection of quasars to larger and deeper fields. In addition to SDSS III (Eisenstein et al. 2011), which has taken 2500 deg$^2$ of further imaging in the south Galactic cap, the SkyMapper (Keller et al. 2007) and Dark Energy Survey (DES; The Dark Energy Survey Collaboration 2005) will also provide multi-band optical photometry over 20000/5000 deg$^2$ of the southern sky, with magnitude limits of 22/24 mag in the $i$-band, respectively. The Visible and Infrared Survey Telescope for Astronomy (VISTA; Arnaboldi et al. 2007) will carry out its VISTA Hemisphere Survey (VHS) in the near-IR YJHK bands over 20000 deg$^2$ of the southern sky with a magnitude limit of K=20.0, which is about five magnitudes and two magnitudes deeper than the Two Micron All Sky Survey (2MASS; Skrutskie et al. 2006) and UKIDSS/LAS limits (Lawrence et al. 2007), respectively. Therefore, the optical and near-IR photometric data obtained with these ongoing and upcoming surveys will provide us with a large database for quasar selection. By combining optical variability and optical/near-IR colors, we expect that a much larger and more complete quasar sample can be efficiently constructed in the near future. Although by combining variability and optical/near-IR colors we can efficiently select quasar candidates and reliably estimate their photometric redshifts, spectroscopic identifications are still crucial to determine their quasar nature and redshifts. The ongoing BOSS project in SDSS III has identified 29000 quasars with $z>2.2$ and expects to obtain the spectra of 150000 quasars at $2.2<z<4$ (Eisenstein et al. 2011; Ross et al. 2011). We believe that many $\rm 2.2<z<3.0$ quasars, including the candidates we list in this paper, should be spectroscopically identified by BOSS. 
In addition, the Chinese GuoShouJing telescope (LAMOST; Su et al. 1998), a spectroscopic telescope with 4000 fibers currently in the commissioning phase, is also aiming at discovering 0.3 million quasars with magnitudes brighter than $i=20.5$ (Wu et al. 2010b). By combining variability and optical/near-IR colors, large input catalogs of reliable quasar candidates will be provided to these quasar surveys for future spectroscopic observations. Therefore, we expect that a much larger and more complete quasar sample covering a wider range of redshift will be constructed in the near future, which will play an important role in studying extra-galactic astrophysics, including the physics of accretion around supermassive black holes, galaxy evolution, the intergalactic medium, large scale structure and cosmology. \acknowledgments We thank Zhaoyu Chen and Wenwen Zuo for taking the spectra of two star candidates with the 2.16m telescope at Xinglong/NAOC. XBW is supported by the National Natural Science Foundation of China (11033001) and the National Key Basic Research Science Foundation of China (2007CB815405). He thanks the colleagues at Steward Observatory, University of Arizona for their hospitality during his visit there in the spring of 2011 as a senior visiting fellow supported by the Chinese Scholarship Council. RW acknowledges support from the National Radio Astronomy Observatory (NRAO) through the Jansky Fellowship program. NRAO is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. KBS is funded by and would like to thank the Marie Curie Initial Training Network ELIXIR, which is funded by the Seventh Framework Programme (FP7) of the European Commission. KBS is a member of the International Max Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg (IMPRS-HD), Germany. 
This work was partially supported by the Open Project Program of the Key Laboratory of Optical Astronomy, NAOC, CAS. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the US Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society and the Higher Education Funding Council for England. The SDSS web site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory and the University of Washington. {\it Facilities:} \facility{Sloan (SDSS)}, \facility{UKIDSS}, \facility{Bok}, \facility{MMT},\facility{2.16m/NAOC}
\section{\label{sec:Intro} INTRODUCTION} Network theory has proven a powerful framework for studying the effects of randomness and heterogeneity on the dynamics of interacting agents with non-trivial connectivity patterns \cite{Strog1}. One of the most important applications of this work is to the spread of infectious diseases among human populations, where the interaction structure is highly complex, showing salient features such as power-law degree distributions, small average path lengths, and modularity \cite{Newman3,Vespignani1}. Various models, primarily based on random graph configurations, have been proposed that incorporate these complex features while remaining theoretically tractable. Within the context of disease dynamics, graph nodes are generally taken to represent individuals, and edges to represent interactions between them, through which infection can spread. Both deterministic and stochastic infection dynamics have been studied on networks, as has bond percolation for the associated branching process \cite{Newman2, Robins1,Noel1,Noel2,Volz1}. How various thermodynamic quantities of interest -- such as the steady state incidence, the epidemic (percolation) threshold, and the distribution of small outbreak sizes -- depend upon network topology is of great interest. Often these approaches disregard the multiscale organization of many real systems, in which agents can be most naturally thought of as partitioned into densely connected communities with sparser coupling among neighboring communities. In some cases, it may be useful to conceptualize the topology as a network of networks, where agent-to-agent interactions and community-to-community interactions are both useful representations depending on the scale of resolution \cite{Vazquez1}. The latter has been successfully developed in ecology, with a network of interconnected populations referred to as a ``metapopulation'' \cite{Keeling1,Hanski1}. 
This framework is very useful in studying the large scale propagation of diseases where most infection transmission occurs in localized regions but infection can be transported on larger scales by the mobility of individuals traveling among population centers \cite{Belik1}. However, most metapopulation models assume that populations are fully mixed, with no inherent complexity in the connectivity between agents. Much less understood is how the multiscale structure of agent interactions affects the larger scale propagation of infectious processes through interconnected networks \cite{Dikison,Mendiola}. In this paper, we expand on a possible avenue for addressing this question using a multitype generalization of random graphs with simple, meta-level topology \cite{Vazquez1, Antoine1}, and construct a dynamical mean-field theory for the SIR infection model in multitype configuration model networks. Putting these together, we analyze the average infection dynamics and propagating front profile on a simple metapopulation composed of coupled population centers on a one-dimensional lattice and calculate the phenomenological transport properties of the system as functions of the underlying network's degree distributions. Our results are compared to stochastic simulations of the infection kinetics on various networks and found to be in good agreement in the thermodynamic limit. Broadly, we present this work as an illustration of how well-developed ideas from different areas of statistical physics and ecology can be naturally combined. 
\section{\label{sec:MultiTypeNet} MULTITYPE CONFIGURATION MODEL NETWORKS} In order to incorporate relevant node attribute information into our network models (generically applicable for such things as age, sex, ethnicity, and place of residence), we use a generalization of configuration model random graphs, wherein nodes are assigned a type from an arbitrary set of $M$ possible types and a degree to each type from an arbitrary joint degree distribution, $P_{i}(k_{1},k_{2},\ldots,k_{M})=P_{i}(\vec{k})$, with degree $k_j$ denoting the number of connections to nodes of type $j$ \cite{Strog1,Antoine1,Vazquez1}. Additionally, nodes of type $i$ occupy a fraction $w_{i}$ of the total network, where $\sum_{i}w_{i} =1$. Following the configuration model prescription, we consider graphs chosen uniformly at random from the ensemble of possible graphs with the prescribed degree distributions and the self-consistent edge constraint: $w_{i}\sum_{\vec{k}}k_{j}P_{i}({\vec k}) = w_{j}\sum_{{\vec{k'}}}k'_{i}P_{j}({\vec{k'}}), \forall (i, j)$ \cite{Strog1,Newman1,Antoine1}. From this formalism, a variety of quantities can be described compactly using generating functions \cite{Strog1,Wilf1}. The generating function for the probability that a randomly selected node of type $i$ has degree ${\vec{k}}$ is given by \begin{equation} \label{eq:GF} G_{i}({\vec{x}}) = \sum_{{\vec{k}}}P_{i}({\vec{k}})\prod_{l=1}^{M}x_{l}^{k_{l}} \, , \end{equation} \noindent written as a power series in $\vec{x}$, an auxiliary variable defined over the unit interval, with expansion coefficients equal to the respective probabilities. Moments of the degree distributions can be represented simply as derivatives of the corresponding generating function. For example, the average degree of a type $i$ node to a type $j$ node is \begin{equation} \label{eq:Avg} \sum_{{\vec{k}}}k_{j}P_{i}({\vec{k}}) = \partial_{x_{j}}G_{i}({\vec{x}})\vert_{\vec{1}} \equiv \left<k_{j}\right>_{i}.
\end{equation} Since node interactions occur along edges, an important quantity in network models is the excess degree: the number of neighbors a node has which can be reached by selecting a randomly chosen edge, not including the neighbor on the end of the selected edge. For a multitype configuration model network, the probability that a randomly chosen edge from a type $i$ node leads to a type $j$ node with degree $\vec{k}$ is proportional to $k_{i}P_{j}({\vec{k}})$, and thus the probability for the corresponding excess degree is generated by $\partial_{x_{i}}G_{j}({\vec{x}})/\partial_{x_{i}}G_{j}({\vec{x}})\vert_{\vec{1}}$ \cite{Antoine1}, with average degree to type $l$ nodes, \begin{equation} \label{eq:Exc} \left<k_{l}\right>_{i-j} = \frac{\partial_{x_{l}}\partial_{x_{i}}G_{j}({\vec{x}})\vert_{\vec{1}}}{\partial_{x_{i}}G_{j}({\vec{x}})\vert_{\vec{1}}}= \frac{\left<k_{l}k_{i}\right>_{j}}{\left<k_{i}\right>_{j}}-\delta_{il}. \end{equation} By construction, this framework lacks two-point correlations, in which the excess degree distributions depend on the degrees of both nodes sharing an edge \cite{Vespignani1}. \section{\label{sec:VolzMiller} VOLZ-MILLER MEAN-FIELD SIR IN MULTITYPE NETWORKS} In this report we consider simple dynamics for disease spread: the Susceptible-Infected-Recovered (SIR) model, wherein each individual is assigned a disease state, $Y\in \{S,I,R\}$, and may undergo reactions to other states depending on its state and the states of its neighbors. In this model, if a node of type $i$ is susceptible and has a single infected neighbor of type $j$, then it will change its state to infected with a constant probability per unit time $\beta_{ji}$. Likewise, an infected node of type $i$ will recover with a constant probability per unit time $\gamma_{i}$. Since the underlying dynamics is a continuous time Markov process, a complete analysis would describe the full probability distribution for all system trajectories.
However, for our purposes, it will be sufficient to focus on the behavior of extensive outbreaks (i.e., those which scale with the system size), the average dynamics of which can be derived in the limit where the number of nodes tends to infinity, by generalizing a mean-field technique for single-type networks, developed by Volz and Miller, to multitype networks. Below, we follow the basic structure of the derivations presented in \cite{Volz2, Miller1}. In the thermodynamic limit, configuration model random graphs are locally tree-like \cite{Newman1}, which by construction allows them to satisfy many of the generic criteria for the applicability of mean-field theory assumptions \cite{Gleeson1}. In our case, we assume that nodes are differentiated by their degree and disease state alone and that susceptible nodes feel a uniform force of infection along every edge, related to the average number of edges connecting susceptible and infected nodes at any given time in the network: a Curie-Weiss type approximation \cite{Goldenfeld}. Furthermore, from the perspective of susceptible nodes, all infection attempts along different edges can be treated as uncorrelated -- a consequence of the local tree-like property \cite{Newman1,Miller1, Robins1} -- and thus we assume that the states of neighbors of susceptible nodes are effectively independent. Let the probability that a node of type $j$ has not transmitted the infection to a node of type $i$ along a randomly chosen $i-j$ edge be $\theta_{ij}$. This quantity is interpretable as the complement of the average cumulative hazard function along such edges. Given $\theta_{ij}$, it follows that the fraction of susceptible nodes of type $i$ at time $t$ is \begin{align} \label{eq:SuscI} S_{i}(t)&= \sum_{{\vec{k}}}P_{i}({\vec{k}})\prod_{j=1}^{M}\theta_{ij}^{k_{j}}(t) = G_{i}(\theta_{i1}(t),\theta_{i2}(t),\ldots,\theta_{iM}(t)) \nonumber \\ &\equiv G_{i}(\vec \theta_{i}(t)).
\end{align} \noindent The fractions of infected and recovered nodes of type $i$ follow from probability conservation, $S_{i}+I_{i}+R_{i}=1$, and a constant recovery rate for infected nodes $\gamma_{i}$: \begin{align} \label{eq:IRequ} \frac{dI_{i}}{dt}&= - \frac{d\vec{\theta_{i}}}{dt} \cdot \vec{\nabla} G_{i}(\vec{x})\vert_{\vec \theta_{i}} - \gamma_{i}I_{i} \nonumber \\ \frac{dR_{i}}{dt}&= \gamma_{i}I_{i} \;, \end{align} \noindent with the total fraction of susceptible nodes \begin{equation} \label{eq:Susc} S=\sum_{i}w_{i}G_{i}(\vec \theta_{i})\equiv \vec{w} \cdot \vec{G}(\underline{\underline{\theta}}). \end{equation} The central probability and order parameter, $\theta_{ij}$, can be subdivided into three compartments depending on the disease state of the terminal node $j$, \begin{equation} \label{eq:ThetaFlux} \theta_{ij} = \theta_{ij}^{S} + \theta_{ij}^{I} + \theta_{ij}^{R}\;, \end{equation} \noindent and its dynamics determined by tracking the fluxes among these compartments. Since $\theta$ can only change when an infected node transmits the disease, the rate at which $\theta_{ij}$ changes is equal to the rate at which a corresponding neighbor infects, and therefore $d\theta_{ij}=-\beta_{ji}\theta_{ij}^{I}dt$. Similarly, since $\theta^{R}$ can only change if an infected node recovers, the rate at which $\theta_{ij}^{R}$ changes is equal to the rate at which a corresponding neighbor recovers, and thus $d\theta_{ij}^{R}=\gamma_{j}\theta_{ij}^{I}dt$. Lastly, the probability that a type $j$ neighbor of a type $i$ node has not transmitted and is susceptible, $\theta_{ij}^{S}$, is simply the probability that the corresponding neighbor is susceptible. 
Because this neighbor could not have been infected along any of its other edges and has excess degree distribution generated by $\partial_{x_{i}}G_{j}({\vec{x}})/\partial_{x_{i}}G_{j}({\vec{x}})\vert_{\vec{1}}$, it follows that $\theta_{ij}^{S}=\partial_{x_{i}}G_{j}({\vec{x}})\vert_{\vec{\theta_{j}}}/\partial_{x_{i}}G_{j}({\vec{x}})\vert_{\vec{1}}$. Combining the latter with the two flux relations and \eqref{eq:ThetaFlux}, together with the initial conditions $\theta_{ij}(0)=1$ and $\theta_{ij}^{R}(0)=0$, we find \begin{align} \label{eq:MFequ} &\frac{d\theta_{ij}}{dt} = \beta_{ji}\left( \frac{\partial_{x_{i}}G_{j}(\vec x)\vert_{\vec \theta_{j}}}{\partial_{x_{i}}G_{j}(\vec x)\vert_{\vec{1}}} -\theta_{ij}\right) + \gamma_{j}\left(1-\theta_{ij}\right). \nonumber \\ \end{align} \noindent These $M^{2}$ coupled, first-order ODEs, $\dot{\underline{\underline{\theta}}}=\underline{\underline{F}}(\underline{\underline{\theta}})$, define the full system's approximate mean dynamics, and form the basis of our subsequent analysis. For a more detailed derivation of the analogous results for the special case of a single-type network, see \cite{Volz2, Miller1}. The steady state is given by the fixed point of \eqref{eq:MFequ}, \begin{align} \label{eq:SteadyState} &\bar \theta_{ij} = (1-T_{ji}) +T_{ji}\frac{\partial_{x_{i}}G_{j}(\vec{x})\vert_{\vec{\bar{\theta_{j}}}}}{\partial_{x_{i}}G_{j}(\vec{x})\vert_{\vec{1}}} \; , \nonumber \\ \end{align} \noindent which upon substitution into \eqref{eq:SuscI}, gives the cumulative infection, $P=1-S$, at equilibrium (i.e., the final epidemic size), with $T_{ji}= \beta_{ji}/\left(\beta_{ji}+\gamma_{j}\right)$ the corresponding bond percolation probability, or transmissibility \cite{Antoine1}. This can have a non-trivial solution corresponding to the existence of extensive outbreaks, if the disease-free state, $\theta_{ij}=1 \;\; \forall (i, j)$, is unstable.
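As a concrete illustration of the fixed point \eqref{eq:SteadyState}, consider the single-type case, $M=1$, with a Poisson degree distribution, $G(x)=e^{c(x-1)}$, so that $\partial_{x}G(x)/\partial_{x}G(x)\vert_{1}=e^{c(x-1)}$. The following sketch (in Python; the parameter values are illustrative and not taken from the text) iterates the fixed-point relation and returns the final epidemic size $P=1-G(\bar{\theta})$:

```python
import math

def final_size_poisson(c, T, tol=1e-12, max_iter=10000):
    """Final epidemic size for a single-type Poisson network.

    Iterates the fixed point  theta = (1 - T) + T * G'(theta)/G'(1),
    where G(x) = exp(c*(x - 1)), so G'(theta)/G'(1) = exp(c*(theta - 1)).
    Starting below 1 selects the non-trivial root when it exists.
    """
    theta = 0.5  # start away from the disease-free root theta = 1
    for _ in range(max_iter):
        new = (1.0 - T) + T * math.exp(c * (theta - 1.0))
        if abs(new - theta) < tol:
            theta = new
            break
        theta = new
    P = 1.0 - math.exp(c * (theta - 1.0))  # cumulative infection 1 - G(theta)
    return theta, P

# Above threshold (T > T_c = 1/c for Poisson): extensive outbreak
theta, P = final_size_poisson(c=3.0, T=0.5)
print(theta, P)
```

Above threshold ($T>T_{c}=1/c$ for a Poisson network) the iteration converges to the non-trivial root; below threshold it returns the disease-free state, $\bar{\theta}=1$, with $P=0$.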
The threshold or phase transition, which signifies the region in parameter space that separates the epidemic and non-epidemic phases, can be obtained through a stability analysis of the disease-free state, where the eigenvalue of the Jacobian for \eqref{eq:MFequ} with the largest real part is real and vanishes when \begin{align} \label{eq:GenThresh} &\det(\underline{\underline{N}}-\underline{\underline{I}})=0, \;\;\; \text{with} \\ \nonumber &N_{(i,j)(k,l)}= T_{ji}\delta_{jk}\left<k_{l}\right>_{i-j} \end{align} \noindent an $M^{2}\times M^{2}$ matrix \cite{Meyer}. Similar results for the equilibrium properties are derivable from a multitype bond percolation approach \cite{Antoine1}. \section{\label{sec:MultiScaleFramework}FRAMEWORK FOR MULTISCALE NETWORKS} Of interest to us are systems where type structure adds an additional scale of relevant topology, and not just demographic complexity \cite{Vazquez1, Antoine1}. For instance, we can apply the multitype network formalism to a simple model for a metapopulation by affiliating population centers with node types and coupling among populations with edges connecting their constituent nodes. In this way, a complex topology can be encoded on a micro-scale with a macro-scale adjacency matrix, $\underline{\underline{A}}$, describing which populations are directly connected through node interactions \cite{Vazquez1}. We envisage example systems where $\underline{\underline{A}}$ describes the connectivity among urban centers, such as cities, towns, or villages, facilitated by roads or airlines. By conceptualizing the topology in this manner, we can study the phenomenology of infection propagation among population centers and describe how the propagation properties depend on the underlying connectivity patterns. A schematic is shown in Fig.\ref{fig:Schematic}-(a) for a simple system with the pertinent structure.
More broadly, we advance this approach as an avenue for combining the frameworks of network theory, metapopulations, and front propagation, which will be particularly useful if the interaction topology is coherent after some level of coarse graining. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.34]{1DLatticeSchm.pdf} \caption{ (a) A schematic of SIR dynamics on a metapopulation, where infection spreads along edges connecting nodes of various types at the finest scale (shown with integer population labels), and the macro-scale topology identifies which populations are directly connected through agent interactions. (b) A particular example of this framework, in which the macro-scale topology takes the form of a one-dimensional lattice. In Sec.\ref{sec:1DLatt}, we focus on a simple case with configuration model construction, where each site has an identical degree distribution, specifying the probability of having a given number of internal (0), right (+), and left (-) external edges (shown above with labels for site $i$).} \label{fig:Schematic} \end{center} \end{figure} \section{\label{sec:1DLatt} 1-D LATTICE METAPOPULATION DYNAMICS} To illustrate this approach, we consider a special case of the above where the macro-scale topology is an infinite one-dimensional lattice, $M\rightarrow \infty$, in which agents interact with other agents of the same type and agents of neighboring types, $A_{nj}=(\delta_{j,n}+\delta_{j,n+1}+\delta_{j,n-1})$ \cite{Belik1}. If infection is started at a single site (e.g., site $0$) in a fully susceptible system, a strict directionality applies: in order for site $i$ to be infected, sites $i-1, i-2,\ldots$ must be infected first. In such a case, we expect a well-defined infection front to propagate through the lattice.
In keeping with the above, we focus on an effective force of infection model among populations with static configuration networks having prescribed degree distributions -- a generalization of the paradigmatic spatial SIR model in one dimension, where the assumption of well-mixed populations is relaxed to include complexity in agent interactions \cite{Sazonov1, Keeling1}. A schematic is shown in Fig.\ref{fig:Schematic}-(b). Since each node has three edge types, the mean equations of motion describe a three-component field, $\vec \theta_{n}(t)\equiv\left(\theta_{nn},\theta_{nn+1},\theta_{nn-1}\right)\equiv\left(\theta^{0}_{n}(t),\theta^{+}_{n}(t),\theta^{-}_{n}(t)\right)$, where $(0)$, $(+)$, and $(-)$ denote internal, right-external, and left-external edges at the corresponding site. For simplicity, homogeneity is assumed, with $\beta$, $w$, $\gamma$, and $G$ all uniform -- reducing the field equations to \begin{align} \label{eq:1DLatt} &\frac{d\theta^{0}_{n}}{d\tau}= (1-\theta^{0}_{n}) +T\left( \frac{G_{0}(\vec \theta_{n})}{G_{0}(\vec1)} -1\right) \nonumber \\ &\frac{d\theta^{\pm}_{n}}{d\tau}= (1-\theta^{\pm}_{n}) +T\left( \frac{G_{\mp}(\vec \theta_{n\pm1})}{G_{\mp}(\vec1)} -1\right), \end{align} \noindent where the time, $\tau$, is measured in units of $1/(\beta +\gamma)$, and the subscript in $G$ denotes a partial derivative with respect to the corresponding variable. For edge number consistency, $G_{+}(\vec{1})=G_{-}(\vec{1})$, but in general we allow for other asymmetries in the degree distributions.
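The lattice equations \eqref{eq:1DLatt} are straightforward to integrate numerically. The sketch below (Python, forward Euler) assumes the simple-mixing form of Sec.\ref{sec:SimpleMixing} with a Poisson total-degree distribution, for which all three ratios $G_{0}/G_{0}(\vec1)$ and $G_{\mp}/G_{\mp}(\vec1)$ reduce to $e^{c(u_{n}-1)}$ with $u_{n}=f\theta^{0}_{n}+\frac{1-f}{2}(\theta^{+}_{n}+\theta^{-}_{n})$; all parameter values are illustrative:

```python
import math

# Forward-Euler integration of the mean-field lattice equations, Eq. (1DLatt),
# for the simple-mixing case: Poisson total degree c, internal-edge fraction f.
# Missing neighbors at the lattice boundaries are treated as fully susceptible
# (ratio = 1).  Parameters are illustrative, not those of the paper's figures.

def integrate_front(c=4.0, f=0.8, T=0.5, sites=60, dt=0.01, steps=3000):
    th0 = [1.0] * sites   # internal edges
    thp = [1.0] * sites   # right-external edges
    thm = [1.0] * sites   # left-external edges
    th0[0] = 0.9          # seed infection at the leftmost site

    def u(n):
        return f * th0[n] + 0.5 * (1.0 - f) * (thp[n] + thm[n])

    for _ in range(steps):
        g = [math.exp(c * (u(n) - 1.0)) for n in range(sites)]
        n0 = [th0[n] + dt * ((1 - th0[n]) + T * (g[n] - 1))
              for n in range(sites)]
        np_ = [thp[n] + dt * ((1 - thp[n]) +
                              T * ((g[n + 1] if n + 1 < sites else 1.0) - 1))
               for n in range(sites)]
        nm = [thm[n] + dt * ((1 - thm[n]) +
                             T * ((g[n - 1] if n > 0 else 1.0) - 1))
              for n in range(sites)]
        th0, thp, thm = n0, np_, nm

    # cumulative infection profile P_n = 1 - S_n = 1 - G(theta_n)
    return [1.0 - math.exp(c * (u(n) - 1.0)) for n in range(sites)]

P = integrate_front()
print(P[5], P[30], P[55])
```

Seeding the leftmost site and integrating produces a front that leaves the final epidemic size behind it and propagates into the susceptible region, consistent with the traveling-front picture developed below.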
\subsection{\label{sec:Dispersion} Dispersion Relation and Transport Coefficients} To understand the spatio-temporal dynamics of \eqref{eq:1DLatt}, we first quantify how perturbations away from the unstable state propagate by linearizing the dynamics around the disease-free equilibrium, $\vec \theta_{n}(t)=\vec{1}-\vec{\epsilon}_{n}(t)$, and decoupling the perturbations into basis modes using the Inverse Discrete Fourier Transform (IDFT), $\vec{\epsilon}_{n}(t)= \frac{1}{M}\sum^{M-1}_{\nu=0} \vec{\epsilon}_{\nu}(t)e^{i(2\pi\nu n/M)}$. The dispersion relation can be found by substituting the IDFT into \eqref{eq:1DLatt} and using the basis properties of orthogonality and completeness. In the limit $M\rightarrow \infty$, the site perturbations approach the integral, $\vec{\epsilon}_{n}(t)= \frac{1}{2\pi}\int_{0}^{2\pi} \vec{\epsilon}(k)e^{i\left(kn-\omega(k)t\right)}dk$. With this prescription, we find the dispersion relation takes the form of a cubic characteristic equation \begin{align} \label{eq:Dispersion} &\det\left(\underline{\underline{K_{e}}}(q)-\frac{1\!+\!s(q)}{T}\underline{\underline{I}}\right) = 0, \;\;\;\;\;\; \text{with} \\ &\underline{\underline{K_{e}}}(q)= \renewcommand{\arraystretch}{2.5} \setlength\arraycolsep{7.2pt} \begin{bmatrix*}[l] \large \frac{\left<k_{0}^2\right>}{\left<k_{0}\right>}-1 &\frac{\left<k_{0}k_{+}\right>}{\left<k_{0}\right>}&\frac{\left<k_{0}k_{-}\right>}{\left<k_{0}\right>} \\ \frac{\left<k_{-}k_{0}\right>}{\left<k_{-}\right>}e^{-q}&\frac{\left<k_{-}k_{+}\right>}{\left<k_{-}\right>}e^{-q} &\left(\frac{\left<k_{-}^2\right>}{\left<k_{-}\right>}\!-\!1\right)e^{-q}\\ \frac{\left<k_{+}k_{0}\right>}{\left<k_{+}\right>}e^{q} &\left(\frac{\left<k_{+}^2\right>}{\left<k_{+}\right>}\!-\!1\right)e^{q} &\frac{\left<k_{+}k_{-}\right>}{\left<k_{+}\right>}e^{q}\\ \end{bmatrix*} \nonumber \end{align} \noindent which for convenience is written in terms of $s$ and $q$, where $\omega=is$ and $k=iq$.
Interestingly, this method reveals a generalization of the average excess degree matrix, $\underline{\underline{K_{e}}}(0)$ -- whose elements are found by selecting a randomly chosen edge in a particular direction, and counting the average number of reachable neighbors of a particular type -- for the interconnected network system, $\underline{\underline{K_{e}}}(q)$, which incorporates the relative states of adjacent sites on the lattice for each mode $q$. We expect this operator to emerge in similar problems on interconnected networks. Combining the above with the behavior of infection near the phase transition, where there is no exponential growth in time and each site has the same field value (i.e., $s\rightarrow0$ and $q\rightarrow0$ in \eqref{eq:Dispersion}), we find a simple condition for the critical transmissibility $T_{c}$: \begin{align} \label{eq:1DThresh} &T_{c}=\frac{1}{\lambda_{m}^{k}(0)}\;, \end{align} \noindent where $\lambda_{m}^{k}(q)$ is the maximum eigenvalue of $\underline{\underline{K_{e}}}(q)$, with $\lambda_{m}^{k}(0)$ corresponding to $\underline{\underline{K_{e}}}(0)$. Because the addition of external edges increases the spreading capacity of the disease, the critical transmissibility in the coupled system is less than in the uncoupled case, implying that transport-mediated infections from neighboring sites can sustain epidemics even when individual populations on their own cannot \cite{Mendiola, Dikison}.
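As a numerical check of \eqref{eq:1DThresh}, the sketch below (Python; illustrative parameters) builds $\underline{\underline{K_{e}}}(0)$ assuming a Poisson total-degree distribution with the simple mixing structure of Sec.\ref{sec:SimpleMixing}, and extracts its Perron eigenvalue by power iteration, recovering $T_{c}=G'(1)/G''(1)$ independent of the mixing parameter $f$:

```python
def K_e0(c, f):
    """Average excess degree matrix K_e(0), Eq. (Dispersion) at q = 0, for a
    Poisson total-degree distribution with mean c: each edge is internal with
    probability f, external (left or right) with probability (1 - f)/2.
    Poisson thinning gives <k_a k_l> = p_a p_l c^2 + p_a c [a == l]."""
    p = {'0': f, '+': 0.5 * (1.0 - f), '-': 0.5 * (1.0 - f)}

    def entry(a, l):  # <k_a k_l>/<k_a> - delta_{al}
        m2 = p[a] * p[l] * c * c + (p[a] * c if a == l else 0.0)
        return m2 / (p[a] * c) - (1.0 if a == l else 0.0)

    # row order (0, -, +) and column order (0, +, -), as in Eq. (Dispersion)
    return [[entry(a, l) for l in ('0', '+', '-')] for a in ('0', '-', '+')]

def max_eig(M, iters=500):
    """Power iteration; for 0 < f < 1 the matrix is positive, so this
    converges to the Perron (spreading) eigenvalue."""
    v = [1.0, 1.0, 1.0]
    lam = 1.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

c = 4.0
for f in (0.2, 0.5, 0.9):
    lam = max_eig(K_e0(c, f))
    print(f, lam, 1.0 / lam)   # T_c = 1/lambda, independent of f
```

For Poisson degrees, $G'(1)=c$ and $G''(1)=c^{2}$, so the computed $T_{c}=1/c$ regardless of how edges are split between internal and external types.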
Also from the dispersion relation, we can find the asymptotic transport coefficients for rightward moving disturbances by making a standard saddle-point approximation of the perturbations' integral representation in Fourier space: expanding the integrand around its dominant contribution, $k^{*}$, in the co-moving frame, $\xi=n-v^{*}t$, \begin{align} &e^{i(kn-\omega(k)t)} \sim e^{ik\xi}\, e^{it\left(kv^{*}-\,\omega(k^{*})-\left.\frac{d\omega}{dk}\right\vert_{k^{*}}\left(k-k^{*}\right)\right)}\, e^{-\frac{it}{2}\left(k-k^{*}\right)^{2}\left.\frac{d^{2}\omega}{dk^{2}}\right\vert_{k^{*}}} \nonumber \label{eq:FourierArg} \end{align} \noindent and taking the infinite time limit while enforcing approximate constancy with no exponential growth and $\xi$ finite -- where $v^{*}$ is the asymptotic speed at which perturbations to the unstable state propagate \cite{Saarloos1}. This procedure uncovers a moving exponential pulse for the leading edge of the infection profile with a diffusive correction \cite{Saarloos1}: \begin{equation} 1-\theta \sim \dfrac{e^{-q^{*}\xi}e^{-\xi^{2}/(4D^{*}t)}}{\sqrt{D^{*}t}}\;, \label{eq:PertField} \end{equation} \noindent where $q^*$, $v^*$, and $D^*$ satisfy the saddle-point relations: \begin{equation} \label{eq:SaddlePoint} v^{*}= \left. \frac{ds}{dq} \right \vert_{q^{*}} =\frac{s(q^{*})}{q^{*}} = T \left. \frac{d\lambda^{k}_{m}}{dq} \right \vert_{q^{*}} = \frac{-1+T\lambda^{k}_{m}(q^{*})}{q^{*}} \end{equation} \begin{equation} \label{eq:Diff} \text{and} \;\;\;\; D^{*}= \left. \frac{1}{2}\frac{d^{2}s}{dq^{2}} \right \vert_{q^{*}} = \left. \frac{T}{2}\frac{d^{2}\lambda^{k}_{m}}{dq^{2}} \right \vert_{q^{*}} \;, \end{equation} \noindent giving a transcendental equation for $q^{*}$.
When the average excess degree matrix is irreducible (the domain of interest to us), the dominant growth exponent for each $q$ is real and corresponds to a unique, positive eigenvector \eqref{eq:Dispersion} \cite{Meyer}, and thus we expect the same selected velocity for all fields \cite{Theodorakis}. Approximately, the fields propagate in this regime with proportions $\vec{1}-\epsilon(t)\vec{Q}(q^{*},s^{*})$, where $\vec{Q}(q^{*},s^{*})$ is the corresponding mode of $\underline{\underline{K_{e}}}(q^{*})$ \eqref{eq:Dispersion}. If multiple solutions exist for $v^{*}$, the fastest solution is selected \cite{Saarloos1}. The characteristic wavelength, $1/q^{*}$, is related to the asymptotic size of the front's leading edge, and diverges near the phase transition. The diffusion coefficient, $D^{*}$, gives the effective widening of the mean-field pulse in the co-moving frame and is proportional to the largest finite-size correction to $v^{*}$ in the limit where the number of nodes at each site tends to infinity. In order to uncover the principal dependencies of the transport coefficients, we study \eqref{eq:Dispersion}-\eqref{eq:Diff} near the phase transition, where the power series expansion for the dispersion relation is a convenient representation; the latter is found by substituting $(s(q)+1)/T = a+bq+\frac{c}{2}q^{2}+\ldots$ into \eqref{eq:Dispersion}, and equating powers in $q$.
When $s^{*}$ and $q^{*}$ are small in the vicinity of $T_{c}$, the expansion can be truncated at low order, giving a Fisher-Kolmogorov-like dispersion relation with the approximate scaling \begin{equation*} \begin{aligned}[c] &s^{*}\sim\left(T\lambda^{k}_{m}(0)-1\right) \\ &q^{*}\sim\left(T\lambda^{k}_{m}(0)-1\right)^\frac{1}{2}{D^{*}}^{-\frac{1}{2}}\\ \end{aligned} \qquad \begin{aligned}[c] &v^{*}\sim\left(T\lambda^{k}_{m}(0)-1\right)^\frac{1}{2}{D^{*}}^{\frac{1}{2}}\\ &\frac{D^{*}}{T\lambda^{k}_{m}(0)}\sim \delta \end{aligned} \end{equation*} \begin{align} \label{eq:Scaling} &\delta=\frac{\frac{\left<k_{-}k_{0}\right>\left<k_{0}k_{+}\right>}{\left<k_{-}\right>\left<k_{0}\right>}+\frac{\left<k_{-}k_{+}\right>}{\left<k_{-}\right>} \left(\lambda^{k}_{m}(0)-\frac{\left<k_{0}^2\right>}{\left<k_{0}\right>}\!+\!1\right)}{\left(\lambda^{k}_{m}(0)-\lambda^{k}_{2}(0)\right) \left(\lambda^{k}_{m}(0)-\lambda^{k}_{3}(0)\right)}\;, \end{align} \noindent where $\lambda^{k}_{2}(0)$ and $\lambda^{k}_{3}(0)$ are the subdominant eigenvalues of $\underline{\underline{K_{e}}}(0)$. In this regime, we find an effective reaction-diffusion behavior with the generic dependence of the shape and speed of the propagating front's leading edge on the reproductive number $T\lambda^{k}_{m}(0)$ (a product of the spreading capacity along edges and the magnitude of topological fluctuations) and on the normalized diffusion coefficient $\delta$, which measures the relative strength of connection between lattice sites \eqref{eq:Scaling}. We see that the effective reaction rate is equal to the distance from the phase transition, $T\lambda^{k}_{m}(0)-1$, and that all coefficients grow from zero with this distance, except for $D^{*}$, which varies discontinuously through $T_{c}$.
Furthermore, the normalized diffusion coefficient increases from zero with $\left<k_{-}k_{0}\right>\left<k_{0}k_{+}\right>$ and $\left<k_{-}k_{+}\right>$ -- the correlation moments of the degree distribution which encode the propensity for transport from the $i\mp1$ site to the $i\pm1$ site (which cannot both be zero; otherwise, epidemics are locally confined) -- and with the viability of subdominant modes to support growth. In general, we find that as $\delta$ increases: $v^{*}$ and $D^{*}$ increase, $q^{*}$ decreases, and $s^{*}$ remains constant, implying faster transport and greater similarity among sites, as more edge-type pairs allow for traversing the lattice, but with little change in the growth exponent. The above demonstrates the typical trend for these models, that the front dynamics is strongly influenced by the joint degree distribution's second moments (i.e., the relevant excess degree properties are generally amplified by correlation among degree types and degree heterogeneity). For example, in analogy with the single network case, fast transport can be achieved with the presence of a few nodes with large internal and external degrees, or ``transport hubs", even if the average degrees in the network are small \cite{Newman3,Vespignani1}. \subsection{\label{sec:SimpleMixing} Simple Mixing Example} Additional understanding of the basic form of the transport coefficients is gained by looking at a special case of the micro-scale degree distribution, where the generating function takes the form $G\left(fx_{0}+\frac{1-f}{2}(x_{+}+ x_{-})\right)$, with total degree described by $G$, and a given edge connecting nodes of the same site with probability $f$, and nodes of left and right neighboring sites with equal probability $\left(1-f\right)/2$, where $1-f$ is an effective mixing parameter among populations.
With this prescription, the critical transmissibility is reduced to the inverse of the total-edge excess degree, $T_{c}=\frac{G'(1)}{G''(1)}$, and the normalized diffusion coefficient, to the fraction of external edges in each direction, $\delta=(1-f)/2$. Moreover, the dispersion relation takes the instructive form \begin{equation} \label{eq:SimpleDisp} s(q)= -1 + \frac{TG''(1)}{G'(1)}\left(f + (1-f)\cosh(q) \right) \;, \end{equation} \noindent where $s+1$ is given by the basic reproductive number multiplied by the average relative incidence, $e^{-\Delta x q}$, on the end of a randomly selected edge -- illustrating the intuitive generalization of the single network case, where different edge types are more and less likely to connect to infected nodes depending on their place in the lattice, and thus to contribute to local growth. Likewise, from \eqref{eq:SaddlePoint} and \eqref{eq:Diff}, we find the speed and diffusion coefficient, \begin{equation} \label{eq:SimpleSpeed} v^{*}= \frac{TG''(1)}{G'(1)}(1-f)\sinh(q^{*}) \end{equation} \noindent and \begin{equation} \label{eq:SimpleDiff} D^{*}= \frac{TG''(1)}{2G'(1)}(1-f)\cosh(q^{*}) \;, \end{equation} \noindent where $q^{*}$ satisfies \eqref{eq:SaddlePoint}, and $v^{*}$ is given by the basic reproductive number multiplied by the average product of relative position and incidence, $-\Delta x e^{-\Delta x q^{*}}$, on the end of a randomly selected edge. Fig. \ref{fig:Transport} shows the transport coefficients, \eqref{eq:SimpleDisp}, \eqref{eq:SimpleSpeed}, and \eqref{eq:SimpleDiff}, as functions of $T/T_{c}$ and $f$, with partial scaling collapse \eqref{eq:Scaling} for the corresponding class of network configurations.
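In this simple-mixing case, the transport coefficients follow from a single transcendental equation for $q^{*}$, which is easily solved numerically. A minimal sketch (Python; bisection on the saddle-point condition $q\,s'(q)=s(q)$, with illustrative values of $R=TG''(1)/G'(1)$ and $f$):

```python
import math

def transport_coeffs(R, f):
    """Solve the saddle-point condition v* = s'(q*) = s(q*)/q* for the
    simple-mixing dispersion s(q) = -1 + R*(f + (1-f)*cosh(q)), where
    R = T*G''(1)/G'(1) is the basic reproductive number (requires R > 1).
    Returns (q*, s*, v*, D*)."""
    a = R * (1.0 - f)

    def h(q):  # h(q*) = 0  <=>  q* s'(q*) = s(q*)
        return a * (q * math.sinh(q) - math.cosh(q)) - (R * f - 1.0)

    lo, hi = 1e-9, 1.0
    while h(hi) < 0.0:       # bracket the root (h is increasing in q)
        hi *= 2.0
    for _ in range(200):     # bisection
        mid = 0.5 * (lo + hi)
        if h(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    q = 0.5 * (lo + hi)
    s = -1.0 + R * (f + (1.0 - f) * math.cosh(q))
    v = a * math.sinh(q)           # speed, as in the sinh expression above
    D = 0.5 * a * math.cosh(q)     # diffusion, as in the cosh expression
    return q, s, v, D

q, s, v, D = transport_coeffs(R=2.0, f=0.8)
print(q, v, D)
```

The consistency check $v^{*}=s(q^{*})/q^{*}$ holds at the computed root, and increasing the mixing $1-f$ at fixed $R$ increases $v^{*}$ and $D^{*}$, as described above.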
The expected reaction-diffusion scaling can be observed near the critical point; far from the critical region, when $T \gg T_{c}$, $q^{*}$, $\frac{s^{*}G'(1)}{TG''(1)}$, $\frac{v^{*}G'(1)}{TG''(1)}$, and $\frac{D^{*}G'(1)}{TG''(1)}$ tend to limiting curves which depend only on $1-f$, suggesting the intuitive asymptotic proportionality to the reproductive number. \begin{figure}[t] \centerline{\includegraphics[scale=0.43]{TransCoeff2-eps-converted-to.pdf}} \caption{{{The scaled transport coefficients for a one-dimensional lattice of configuration model networks with arbitrary total-degree distribution and inter-population mixing parameter $1-f$, shown as functions of the latter (Sec.\ref{sec:SimpleMixing}): $q^{*}$ (a), $s^{*}$ (b), $v^{*}$ (c), and $D^{*}$ (d). The colored regions mark the range of each coefficient, which are bounded by the critical-region scaling $(T \gtrsim T_{c})$, and the limiting behavior $(T \gg T_{c})$, delineated by dashed and solid curves respectively; the former are straight lines, signifying agreement with the predicted scaling \eqref{eq:Scaling}. Each panel's arrow indicates the direction of increase in the distance from the phase transition, $T/{T_{c}}-1$. }}} \label{fig:Transport} \end{figure} \subsection{\label{sec:PulledFront} Pulled Front Classification} In order to connect the transport properties of the linear equations to the full nonlinear system, we refer to the classification of fronts propagating into unstable states, which in our system is the fully susceptible metapopulation lying ahead of the infection front. In general, there are two types of deterministic fronts, pulled and pushed: the former propagate with an asymptotic speed equal to the linear spreading speed, while the latter propagate faster than the linear spreading speed \cite{Saarloos1}.
Pushed fronts occur because nonlinearities in the equations of motion tend to increase the growth of perturbations on the unstable state, resulting in a nontrivial dependence of the speed on the front shape. However, in our system all nonlinearities are proportional to probability generating functions, $\sim G'(\theta)/G'(1)$ \eqref{eq:1DLatt}, which are monotonically increasing over the unit interval. Therefore, all nonlinearities tend to increase $\theta$, and consequently dampen the growth of infection -- a sufficient condition for pulled fronts \cite{Panja} -- and thus we anticipate fronts in this model to be pulled; this agrees with the intuition that epidemic propagation is governed by its behavior in a fully susceptible population. In practice, the classification has importance for control strategies in systems with similar structure, implying that to mitigate the spread of infection among populations, efforts should be focused on the leading edge of the front, and not on larger outbreaks occurring farther behind. \subsection{\label{sec:LocalDynamics} Relaxation Properties} In addition to quantifying the transport, the front speed can be used to extract information about the dynamics away from the unstable state. As shown above, the $\vec \theta$-field settles onto a solution with translational similarity, $\vec{\theta}_{n\pm x}(t)=\vec{\theta}_{n}(t\mp \frac{x}{v^{*}})$, after an initial transient period.
Behind the leading edge of the front, the behavior resembles a relaxation to the stable equilibrium \eqref{eq:SteadyState}, $\vec{\theta}_{n}(t)\approx \vec{\bar{\theta}}+\vec{\eta}(t-\frac{n}{v^{*}})\approx \vec{\bar{\theta}}+\vec{\eta}e^{-z^{*}(n-v^{*}t)}$, where the spatial rate, $|z^{*}|$, is the dominant eigenvalue of the nonlinear eigenvalue equation \begin{align} \label{eq:Relax} & \det\left(\underline{\underline{G'_{e}}}(\vec{\bar{\theta}},z^{*})-\frac{1\!+\!v^{*}z^{*}}{T}\underline{\underline{I}}\right)=0, \;\;\;\; \text{with} \\ & \underline{\underline{G'_{e}}}(\vec{\bar{\theta}},z) = \left. \renewcommand{\arraystretch}{2.4} \setlength\arraycolsep{7.0pt} \begin{bmatrix*}[l] \textstyle{\frac{G_{00}}{G_{0}(\vec1)}} &\frac{G_{0+}}{G_{0}(\vec1)}&\frac{G_{0-}}{G_{0}(\vec1)} \\ \frac{G_{-0}}{G_{-}(\vec1)}e^{-z}&\frac{G_{-+}}{G_{-}(\vec1)}e^{-z}&\frac{G_{--}}{G_{-}(\vec1)}e^{-z} \\ \frac{G_{+0}}{G_{+}(\vec1)}e^{z}&\frac{G_{++}}{G_{+}(\vec1)}e^{z}&\frac{G_{+-}}{G_{+}(\vec1)}e^{z} \\ \end{bmatrix*} \right \vert_{\mathlarger{\vec{\bar{\theta}}}} \;. \nonumber \end{align} \noindent The latter is the analogue of $\underline{\underline{K_{e}}}(q)$ at the stable state, which does not depend on the first two moments of the degree distribution directly, but on the generating function's properties near the equilibrium \eqref{eq:SteadyState}. In general, the two characteristic spatial rates for this system are not equal, $|z^{*}| \neq q^{*}$, and when their difference is large, it often signifies a significant separation in the time scales of growth, $1/s^{*}$, and relaxation $1/v^{*}|z^{*}|$. The latter provides an estimate for the amount of time a site is infectious, with $1/|z^{*}|$ yielding a related estimate for the width of the propagating front (i.e., the typical spatial extent of an outbreak at a given time). 
In particular, when the front speed is very fast and the degree distribution's second moments are large with the first moments $\mathcal{O}(1)$, we find that $|z^{*}|\ll q^{*}$, which suggests broad front profiles. In this case, propagation and relaxation can be thought of as approximately distinct processes. \begin{figure}[th] \includegraphics[scale=0.47]{Prof_SFvsER.pdf} \caption{{{(a) A comparison between the average cumulative infection profile for stochastic simulations of SIR dynamics on a one-dimensional lattice of scale-free (blue) and Poisson (green) networks and mean-field predictions \eqref{eq:1DLatt}. Various site sizes are shown with different symbols and color shades -- varying from light to dark for $10^{3}$ to $4\times10^{5}$ respectively. Front shapes for increasingly large sizes are found to converge to the respective mean-field front. The parameters for the underlying graphs were chosen to be: $K=100$, $\alpha=2$ and $p=0.3$ for the scale-free, and $C=2.90157$ and $p=0.3$ for the Poisson (Sec.\ref{sec:Compare}). A lattice size of 50 sites was used, which was large enough to ensure uniformity with the above graph parameters and reaction rates $\beta=\gamma=1$. The arrow indicates the propagation direction. (b) Stochastic front realizations conditioned on the middle lattice site having cumulative infection equal to half the equilibrium value \eqref{eq:SteadyState}. Averaging over realizations produced profiles like those in (a).}}} \label{fig:Profile} \end{figure} \section{\label{sec:Compare} COMPARISON WITH STOCHASTIC SIMULATIONS} The above predictions for the mean-field dynamics on the one-dimensional metapopulation were compared to stochastic simulations of SIR dynamics on random instances of multi-scale, metapopulation networks, using Gillespie's Direct Method \cite{Keeling1, Gillespie1, Gibson}. 
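For readers wishing to reproduce this pipeline, a minimal sketch of Gillespie's Direct Method for SIR on a static graph is given below (Python). It is a simplified illustration, not the implementation used for the figures: it assumes a generic adjacency list, uniform rates $\beta$ and $\gamma$, and rebuilds the $S$--$I$ edge list at each step for clarity rather than efficiency:

```python
import random

def gillespie_sir(adj, beta, gamma, seed_node=0, rng=None):
    """Minimal Gillespie Direct Method for SIR on a static graph.
    adj: adjacency list {node: [neighbors]}.  Infection fires at rate
    beta per S-I edge, recovery at rate gamma per infected node.
    Returns the set of ever-infected (ultimately recovered) nodes."""
    rng = rng or random.Random()
    state = {n: 'S' for n in adj}
    state[seed_node] = 'I'
    infected = {seed_node}
    t = 0.0
    while infected:
        # total rates: one infection channel per S-I edge, one recovery per I
        si_edges = [(i, j) for i in infected for j in adj[i] if state[j] == 'S']
        rate_inf = beta * len(si_edges)
        rate_rec = gamma * len(infected)
        total = rate_inf + rate_rec
        t += rng.expovariate(total)           # exponential waiting time
        if rng.random() < rate_inf / total:   # infection event
            _, j = rng.choice(si_edges)
            state[j] = 'I'
            infected.add(j)
        else:                                 # recovery event
            i = rng.choice(sorted(infected))
            state[i] = 'R'
            infected.remove(i)
    return {n for n in adj if state[n] == 'R'}
```

Since every infected node eventually recovers, the returned set is exactly the cumulative outbreak; running many realizations and conditioning as described below yields the averaged front profiles.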
The graphs were constructed using the multitype configuration model by first generating a degree sequence from the desired degree distribution and then connecting pairs of edge-``stubs'', selected uniformly at random \cite{Newman3, Antoine1,Newman1}. An outbreak was started by choosing one node from the first lattice site to be infected with all others susceptible. Only outbreaks which led to epidemics with $\mathcal{O}(N)$ cumulative infection were considered for comparison with mean-field predictions. In order to ignore fluctuations in the initial transients, time was zeroed after the first 100 reactions. \begin{figure}[t] \centerline{\includegraphics[scale=0.46]{FracSpeedDiff_SFvsER-eps-converted-to.pdf}} \caption{{{(a) Convergence of the average velocities, $v_{\bar{N}}$, to the mean-field predictions, $v^{*}$, for scale-free (blue) and Poisson (green) networks (Fig. \ref{fig:Profile}) as functions of the cumulative number of infected nodes at each site, shown with fits to the expected pulled front scaling, $v^{*}-v_{\bar{N}} \sim \frac{D^{*}q^{*}\pi^{2}}{\ln^{2}(\bar{N})}$ (Sec.\ref{sec:Compare}).}}} \label{fig:VelocityConvergence} \end{figure} We are interested in the average shape of the front that connects the fully susceptible unstable state lying ahead of the infectious wave and the fully recovered (equilibrium) state lying behind it. The average shape was computed by taking instantaneous ``snapshots'' of the profile for each stochastic realization, conditioned on the middle lattice site having cumulative infection equal to half the equilibrium value \eqref{eq:SteadyState}, and averaging the cumulative infection of the other sites over different realizations. In general, the ``snapshots'' did not occur at the same instant; however, shifting different front realizations in time so that they overlapped at a given point, and conditionally averaging over the shape, eliminated some of the effects of diffusive wandering.
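The stub-matching step of the construction described above can be sketched in a few lines. This is a minimal single-type version (the multitype bookkeeping of internal versus external edges is omitted; the function name is ours):

```python
import random

def configuration_model(degrees, seed=0):
    """Pair edge-stubs uniformly at random and return a multigraph edge list.

    Minimal single-type sketch of the configuration model: node n contributes
    degrees[n] stubs, the stub list is shuffled, and consecutive stubs are
    paired. Self-loops and multi-edges can occur, as in the standard model.
    """
    if sum(degrees) % 2 != 0:
        raise ValueError("degree sum must be even so every stub is paired")
    stubs = [node for node, k in enumerate(degrees) for _ in range(k)]
    rng = random.Random(seed)
    rng.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))
```

By construction, the realized degree sequence matches the prescribed one exactly, whatever random pairing is drawn.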
The measured fronts were compared to the mean-field profiles by integrating the lattice equations \eqref{eq:1DLatt}. A comparison is shown in Fig. \ref{fig:Profile} for two graphs with scale-free and Poisson degree distributions, with generating functions \begin{equation} G_{S.F.}(\vec{x})= \textrm{Li}_{\alpha}\left(e^{-1/K}x_{0}\left(1-p+px_{+}\right)\left(1-p+px_{-}\right)\right) \nonumber \\ \end{equation} \noindent and \begin{equation} G_{P}(\vec{x})= \exp \left(C\left(x_{0}\left(1-p+px_{+}\right)\left(1-p+px_{-}\right)-1\right) \right) \;, \nonumber \end{equation} \noindent where $\textrm{Li}_{\alpha}$ is the polylogarithm function with exponent $\alpha$ \cite{Newman3}. The parameters for the degree distributions were chosen such that each network had the same average degree and cloning parameter, $p$ (i.e., given a specified internal degree distribution, each of a node's internal edges is copied to form an external edge with probability $p$), but with different inherent levels of heterogeneity. We see in Fig. \ref{fig:Profile} that the epidemic front is broader in the scale-free network than in the Poisson. This difference comes from the much larger front speed of the former, which had average excess degrees an order of magnitude larger than the latter, \eqref{eq:Dispersion} and \eqref{eq:SaddlePoint}, and the relatively similar relaxation times \eqref{eq:Relax} for the two classes of networks (implying that the time scale over which a site is infectious in each network is roughly the same). In the more homogeneous Poisson networks, the front is narrower and propagates through the lattice on the same time scales as the local infection dynamics; whereas in the scale-free case, the leading edge of the front propagates quickly through the lattice, followed by a slower relaxation to the stable equilibrium state behind the front.
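That the two parameter choices yield the same average degree can be checked directly from the generating functions. Below is a small sketch with mpmath, normalizing $G_{S.F.}$ by its value at $\vec{1}$ (the polylogarithm form above is not normalized as written); the helper names are ours:

```python
import mpmath as mp

K, alpha = 100, 2     # scale-free parameters quoted in the text
C = 2.90157           # Poisson parameter quoted in the text

# Internal-degree marginal of G_SF: setting x_+ = x_- = 1 makes the
# (1 - p + p x) factors equal to 1, so p drops out of the marginal.
def g_sf(x0):
    return mp.polylog(alpha, mp.e ** (-1.0 / K) * x0)

# Mean internal degree <k> = G'(1)/G(1) for the normalized distribution.
mean_sf = mp.diff(g_sf, 1) / g_sf(1)

# For G_P = exp(C(x0 - 1)) at x_+ = x_- = 1, the mean internal degree is C.
mean_poisson = C

print(float(mean_sf), mean_poisson)
```

The two means agree to high accuracy, consistent with the statement that $C$ was tuned to match the scale-free network's average degree.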
This comparison shows that assumptions of homogeneity can drastically underestimate the speed and extent of fronts in systems with heterogeneous interactions. Additionally, the front speed $v$ was numerically estimated from the average time $\left<\tau_{prog}\right>$ required for the leading edge of the front to move forward by one lattice site (where the leading edge was defined as that site where the incidence first reached a set $\mathcal{O}(1)$ level), averaged over such levels; i.e., $1/v=\left<\tau_{prog}\right>$, once the initial spatial variation had decayed. Fig. \ref{fig:VelocityConvergence} shows the convergence of the measured speed from simulations to the mean-field prediction for each graph as a function of the steady state, cumulative infected population size at every lattice site, $\bar{N}=\bar{P}N$ (Sec.\ref{sec:VolzMiller}), with total size $N$. The lines represent fits to the expected scaling of the largest finite-size correction for pulled fronts, $v^{*}-v_{\bar{N}} \sim \frac{D^{*}q^{*}\pi^{2}}{\ln^{2}(\bar{N})}$, obtained from a general $1/\bar{N}$ cutoff in the mean-field equations \cite{Panja, Saarloos1, Brunet}; the fitted coefficients are found to be within an $\mathcal{O}(1)$ factor of the expected scaling. In general, higher order corrections in $\bar{N}$ must be calculated from an analysis of the full, stochastic system \cite{Panja}. The very slow convergence in $\bar{N}$ stems from the dependence of transport on the linearized equations, which apply where infection levels are infinitesimal and sensitivity to stochastic effects is highest. This can be seen in the fairly large finite-size corrections to the velocity, particularly for the scale-free network, leading to a narrower conditionally averaged front relative to the mean-field one, with fewer sites initiated at a given time (Fig. \ref{fig:Profile}).
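The one-parameter fit of the pulled-front correction can be sketched as follows. This uses synthetic data with a hypothetical coefficient rather than the simulation output of Fig. \ref{fig:VelocityConvergence}, purely to illustrate the fitting procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical asymptotic speed and correction coefficient (illustrative
# values, not fitted quantities from the paper); synthetic data stand in
# for the measured speeds.
v_star, a_true = 2.0, 1.5

def v_model(N, a):
    # leading pulled-front finite-size correction: v_N = v* - a / ln^2(N)
    return v_star - a / np.log(N) ** 2

N_sites = np.array([1e3, 1e4, 1e5, 4e5])   # site sizes as in Fig. (a)
v_measured = v_model(N_sites, a_true)      # stand-in for simulation output

(a_fit,), _ = curve_fit(v_model, N_sites, v_measured, p0=[1.0])
print(a_fit)
```

With noiseless synthetic input the fit recovers the coefficient exactly; with real simulation data one would compare `a_fit` against the predicted $D^{*}q^{*}\pi^{2}$.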
\begin{figure}[t] \centerline{\includegraphics[scale=0.46]{PT_Asymm1_SteadyStatePrev-eps-converted-to.pdf}} \caption{{{The average epidemic fraction at each site for a network with asymmetric generating function (Sec.\ref{sec:Compare}) and varying transmissibilities. The critical point at which the epidemic vanished agrees with the prediction, $T_{c}=0.115$ \eqref{eq:1DThresh}. Each site was occupied by 20,000 nodes on a lattice of 100 sites.}}} \label{fig:Threshold} \end{figure} Finally, the average epidemic profile and transmissibility threshold, \eqref{eq:SteadyState} and \eqref{eq:1DThresh}, were compared to simulations. Fig. \ref{fig:Threshold} plots those comparisons for a system with left-right asymmetric generating function \begin{equation} G_{Asym}(\vec{x})=\frac{1}{3}\left( 2x_{0}^{2}x_{+}^{2}x_{-}^{4} +x_{0}^{4}x_{+}^{6}x_{-}^{2}\right) \;, \nonumber \end{equation} \noindent on a lattice of 100 sites, with 20,000 nodes on each site. Both the epidemic size for various transmissibilities and the threshold were found to be in good agreement with mean-field predictions, though the finite-size effects became increasingly important as the critical region was approached, leading to significantly smaller outbreaks near the edges of the lattice. \section{\label{sec:Conclusion} CONCLUSION} In this paper we have generalized a mean-field theory for infection dynamics on multitype networks, and used such networks to model multiscale metapopulations. Together, this enabled us to explore how macro-scale disease propagation depends on micro-scale interaction structure. As a necessary first step in this direction, we applied the approach to a simple metapopulation model for a chain of coupled populations, and derived the transport properties for infection, including their scaling with the disease transmissibility and the statistical properties of the underlying network. 
We also found a threshold for the viability of epidemics, and calculated the relaxation properties of the propagating front. These were compared for different network models, with heterogeneous networks having considerably higher speeds and broader fronts than their homogeneous counterparts -- illustrating the importance of including complexity in the fine-scale topology in order to accurately capture transport phenomenology. Various extensions of the work presented -- both in terms of analyses carried out and systems studied -- could be considered. We have addressed here only the average dynamics of the one-dimensional, homogeneous system, without any description of finite-size fluctuations, or consideration of the dynamics in higher-dimensional generalizations. Greater complexity could be introduced through the spatio-temporal dependence of network parameters, and/or more general network configurations \cite{Karrer}. An interesting extension of the model discussed here would include dynamic contacts between nodes and explicit mobility, instead of the assumed time scale separation between topology and the overlying process \cite{Perra,Belik1, Barabasi}. However, the basic formalism presented here can enable one to study such factors and build more realistic models for infectious processes in multiscale problems. \section*{\label{sec:Ack}ACKNOWLEDGMENTS} This work was supported by the Science and Technology Directorate of the U.S. Department of Homeland Security via the interagency agreement no.\ HSHQDC-10-X-00138. We thank Drew Dolgert for his assistance with our implementation of stochastic simulations.
\section{Introduction} \label{sec:int} The renormalization group (RG) has been recognized as a powerful theoretical tool to study phase transitions and critical phenomena since the seminal work by Wilson {\it et al.} \cite{Wilson:1971bg, Wilson:1971dh, Wilson:1971dc, Wilson:1973jj}. A second-order phase transition is regarded as being located on the critical surface of a stable fixed point of the RG flow equations. Critical behaviors of the phase transition, such as universal critical exponents and their relations to the dimension and symmetries, are closely connected to relevant eigenperturbations in the vicinity of the fixed point; see, e.g., \cite{Ma:2020a} for more details. When the spatial dimension $d$ is smaller than and close to 4, that is, when the parameter $\varepsilon \equiv 4-d$ is a small positive quantity, the RG flows can be expanded in powers of $\varepsilon$ \cite{Wilson:1971dc, Wilson:1973jj}. In other words, one is able to use perturbation theory to compute, e.g., critical exponents order by order in $\varepsilon$. Nevertheless, the reliability of perturbation theory deteriorates as the expansion parameter $\varepsilon$ increases, e.g., when the dimension is $d=3$, or even approaches 2. In such cases, nonperturbative RG flows are indispensable. The functional renormalization group (fRG) provides us with a convenient framework to deal with nonperturbative RG flows \cite{Wetterich:1992yh}. In the fRG approach, nonperturbative physics is encoded in a self-consistent flow equation for the effective action or effective potential.
For more details about the method of fRG and its applications in studies of nonperturbative physics, such as the strongly correlated QCD, one is referred to, e.g., \cite{Berges:2000ew, Pawlowski:2005xe, Braun:2011pp, Dupuis:2020fhh, Fu:2022gou} for reviews and \cite{Braun:2014ata, Mitter:2014wpa, Rennecke:2015eba, Cyrol:2016tym, Cyrol:2017qkl, Cyrol:2017ewj, Fu:2019hdw, Braun:2020ada, Fu:2022uow} for recent progress in first-principle fRG calculations in QCD. Truncations for the RG flows in the fRG approach can usually be made in a systematic way, such as the derivative expansions (DE) \cite{Balog:2019rrg, DePolsi:2020pjk}, which provide us with a set of closed flow equations or fixed-point equations for the effective potential and other dressing functions. To solve, e.g., the fixed-point equation for the potential, one can either Taylor expand the potential around vanishing or finite field, provided the dimension $d$ is not too small \cite{Litim:2002cf}, or integrate the fixed-point equation starting from vanishing field \cite{Codello:2012sc}. It is found that direct integration of the fixed-point equation starting from vanishing field ends up in a singularity at a finite field \cite{Codello:2012sc}, which implies that a proper treatment at large field is necessary. Studies of the global solution to the fixed-point equation with the correct asymptotic behavior in the limit of large field in the fRG approach have made progress in recent years. A combined technique employing both small- and large-field expansions has been used to obtain the global potential of the fixed point \cite{Juttner:2017cpr}. Moreover, pseudospectral methods have also been employed to construct global solutions \cite{Borchardt:2015rxa} or to evolve the flow equation \cite{Chen:2021iuo}. Recently, discontinuous Galerkin methods have been developed to solve the flow equation for a global potential \cite{Grossi:2019urj, Grossi:2021ksl, Ihssen:2022xkr}.
In calculating the global potential of a fixed-point equation, the whole range of the potential's argument, i.e., the field, is usually segmented into several subranges, e.g., the region of large field and that of small field. Different methods are applied in different regimes, such as Taylor expansion in the region of small field and numerical calculations in that of large field \cite{Juttner:2017cpr}, or a standard Chebyshev series in small field and a rational Chebyshev series in large field \cite{Borchardt:2015rxa}. A global potential is finally obtained by connecting the solutions in the different regimes. In this work, we try to simplify this procedure: we directly integrate the fixed-point equation starting at a sufficiently large value of the field, where the asymptotic potential in the limit of large field is implemented as the initial condition. As a consequence, once the integration terminates at vanishing field, a global fixed-point potential of high numerical accuracy is obtained, which naturally incorporates the correct asymptotic behavior both in the limit of vanishing and of large field. Moreover, we discuss the Laurent expansion of the potential in the limit of large field for the general case in which the spatial dimension $d$ is a continuous variable in the range $2\leq d \leq 4$. This paper is organized as follows: In \sec{sec:flow} we give a brief introduction to the fRG approach with the flow equation of the effective potential and the fixed-point equation, as well as some notations used thereafter. The Taylor expansion of the potential around vanishing or finite field and the Laurent expansion in the limit of large field are discussed in \sec{sec:local-solu}. In \sec{sec:global-solu} we discuss how the fixed-point equation is integrated with the asymptotic potential in the limit of large field implemented. Moreover, eigenperturbations near the fixed point are also discussed.
In \sec{sec:num-resul} we present numerical results on fixed-point potentials and critical exponents. Finally, a summary with an outlook is given in \sec{sec:summary}. \section{Flow equation of the effective potential} \label{sec:flow} We begin with an RG scale $k$-dependent effective action for the $O(N)$ scalar theory, which reads \begin{align} \Gamma_{k}[\phi]=&\int \mathrm{d}^d x \left[\frac{1}{2}Z_{\phi,k}\left(\partial_{\mu} \phi_{a}\right)^{2}+V_{k}(\rho)\right]\,,\label{eq:action} \end{align} with $\mu=1,\,2,\cdots, d$ and $a=0,\,1,\,2,\cdots, N-1$, where $d$ is the spatial dimension and $N$ is the number of components of the scalar field. Summations over the subscripts $\mu$ and $a$ in \Eq{eq:action} are assumed. The effective potential $V_{k}(\rho)$ is $O(N)$ invariant with $\rho=\phi^2/2$ and $\phi^2=\phi_a \phi_a$. Note that in \Eq{eq:action} we have employed the local potential approximation (LPA) supplemented with a $k$-dependent wave function renormalization $Z_{\phi,k}$, which is usually called the $\mathrm{LPA}'$ truncation. The evolution of the effective action in \Eq{eq:action} with the RG scale is described by the Wetterich equation \cite{Wetterich:1992yh}, i.e., \begin{align} \partial_t \Gamma_{k}[\phi]&=\frac{1}{2}\mathrm{Tr}\Big \{\big(\partial_t R_k\big) G_{k}[\phi]\Big \}\,, \label{eq:WetterichEq} \end{align} with the RG time $t\equiv\ln (k/\Lambda)$, where $\Lambda$ is a reference scale, e.g., the UV cutoff or the initial evolution scale. The propagator in \Eq{eq:WetterichEq} reads \begin{align} G_{k}[\phi]&=\frac{1}{\Gamma^{(2)}_{k}[\phi]+R_k}\,, \label{eq:Gk} \end{align} with \begin{align} \Gamma^{(2)}_{k}[\phi]&\equiv \frac{\delta^2 \Gamma_{k}[\phi]}{\delta \phi^2}\,. \label{} \end{align} Here, the bilinear regulator $R_k$ in \Eq{eq:WetterichEq} and \Eq{eq:Gk} is devised to suppress quantum fluctuations of momenta smaller than the RG scale, while leaving others unaltered; see, e.g., \cite{Pawlowski:2005xe, Fu:2022gou} for more details.
In this work we adopt the flat regulator \cite{Litim:2000ci,Litim:2001up}, which reads \begin{align} R_{k}(q)&= Z_{\phi,k} q^2 r(q^2/k^2)\,,\label{eq:regulatorOpt} \end{align} with \begin{align} r(x)&= \Big(\frac{1}{x}-1\Big)\Theta(1-x)\,,\label{eq:regulatorOpt2} \end{align} where $\Theta(x)$ is the Heaviside step function. The flow equation of the effective potential in \Eq{eq:action} is readily obtained from the Wetterich equation in \Eq{eq:WetterichEq}, viz., \begin{align} \partial_t V_k(\rho)=& \mathscr{C} k^{d}\left[\frac{1}{1+\bar m_{\sigma,k}^2}+\frac{N-1}{1+\bar m_{\pi,k}^2}\right]\,, \label{eq:dtVk} \end{align} with the coefficient \begin{align} \mathscr{C}\equiv& \frac{1}{2}\frac{1}{(4\pi)^{d/2}}\frac{1}{\Gamma(d/2)}\left[(2-\eta)\frac{2}{d}+\eta\frac{2}{d+2}\right] \,, \label{} \end{align} where the masses of the longitudinal and transversal modes are denoted by \begin{align} \bar m_{\sigma,k}^2\equiv&\frac{1}{Z_{\phi,k} k^2}\Big(V'_k(\rho)+2 \rho V^{(2)}_k(\rho)\Big)\,,\\[2ex] \bar m_{\pi,k}^2\equiv&\frac{1}{Z_{\phi,k} k^2} V'_k(\rho)\,,\label{eq:masspi2} \end{align} respectively, and the anomalous dimension reads \begin{align} \eta=& -\frac{\partial_t Z_{\phi,k}}{Z_{\phi,k}}\,. \label{} \end{align} It is more convenient to work with renormalized and dimensionless variables, such that the explicit dependence on the RG scale $k$ is absorbed. To that end, one introduces \begin{align} \bar \rho=&k^{-(d-2)} Z_{\phi,k} \rho\,, \qquad u_k(\bar \rho)=k^{-d}V_k(\rho)\,.\label{} \end{align} Hence, the flow equation of the effective potential in \Eq{eq:dtVk} turns into a flow equation for $u_k(\bar \rho)$ at fixed $\bar \rho$, that is, \begin{align} \partial_t u(\bar\rho)=&-d\, u(\bar\rho)+(d-2+\eta)\,\bar\rho \,u'(\bar\rho) \nonumber\\[2ex] & +\mathscr{C} \left[\frac{1}{1+u'(\bar\rho)+2\bar\rho u^{(2)}(\bar\rho)}+\frac{N-1}{1+u'(\bar\rho)}\right]\,,\label{eq:dtu} \end{align} where the subscript $k$ of $u_k$ is not shown explicitly.
In the truncation of $\mathrm{LPA}'$, the anomalous dimension can be extracted from the momentum dependence of the two-point correlation function of the transversal $\pi$ mode or the longitudinal $\sigma$ mode. Both modes give the same anomalous dimension for the Gau\ss ian fixed point, while there is indeed a difference in the case of, e.g., the Wilson-Fisher (WF) fixed point \cite{Wilson:1971dc}. In this work we adopt the anomalous dimension of the $\pi$ mode, which reads \begin{align} \eta=&\frac{1}{2^{d-2}\pi^{d/2}\Gamma(1+\frac{d}{2})} \frac{\bar\rho_0 u^{(2)}(\bar\rho_0)^2}{\left[1+2\bar\rho_0 u^{(2)}(\bar\rho_0)\right]^2}\,,\label{} \end{align} where $\bar\rho_0$ stands for the location of the minimum of the potential $u(\bar \rho)$. The fixed-point equation for the effective potential is obtained by demanding $\partial_t u=0$, which immediately yields \begin{align} &(d-2+\eta)\,\bar\rho \,u'(\bar\rho)-d\, u(\bar\rho) \nonumber\\[2ex] & +\mathscr{C} \left[\frac{1}{1+u'(\bar\rho)+2\bar\rho u^{(2)}(\bar\rho)}+\frac{N-1}{1+u'(\bar\rho)}\right]=0\,.\label{eq:fixedPointEq} \end{align} \section{Local solutions of the fixed-point equation} \label{sec:local-solu} Prior to the discussion of global solutions to the fixed-point equation in \Eq{eq:fixedPointEq}, we would like to give a brief review of two approaches that provide us with local information on the solution for the effective potential, that is, the Taylor expansion at vanishing or finite field, and the Laurent expansion in terms of $1/\bar\rho$ in the limit of $\bar\rho \to \infty$.
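As an aside, the anomalous-dimension formula quoted above is straightforward to evaluate numerically once $\bar\rho_0$ and $u^{(2)}(\bar\rho_0)$ are known; a small helper (the function name and test values are ours):

```python
import math

def eta_pi(d, rho0, u2_rho0):
    """Evaluate the LPA' anomalous dimension of the pi mode, given the
    location rho0 of the potential minimum and the curvature u''(rho0)."""
    prefac = 1.0 / (2 ** (d - 2) * math.pi ** (d / 2) * math.gamma(1 + d / 2))
    return prefac * rho0 * u2_rho0 ** 2 / (1 + 2 * rho0 * u2_rho0) ** 2
```

For $d=3$ the prefactor reduces to $2/(3\pi^2)$, since $2\,\pi^{3/2}\,\Gamma(5/2)=\tfrac{3}{2}\pi^2$.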
\subsection{Taylor expansion at vanishing or finite field} \label{subsec:Taylor-expan} The most straightforward method to solve the flow equation of the effective potential in \Eq{eq:dtu} is to expand the potential around the vanishing field $\bar\rho=0$, to wit, \begin{align} u(\bar\rho)\simeq&\sum_{n=1}^{N_{\mathrm{tr}}}\frac{\lambda_n}{n!}\bar\rho^n\,,\label{eq:uTaylorZero} \end{align} where the field-independent term is ignored, and $N_{\mathrm{tr}}$ is the maximal order of the Taylor expansion used in a calculation. Inserting \Eq{eq:uTaylorZero} into \Eq{eq:dtu}, one is led to a set of flow equations for the expansion coefficients of the effective potential, i.e., \begin{align} \partial_t \lambda_n\equiv& \beta_n(\lambda_1, \lambda_2, \cdots , \lambda_{n+1})\,,\label{eq:dtlam} \end{align} where the $\beta$ function of order $n$ is a function of couplings up to order $n+1$. Evidently, the fixed points are determined by $\partial_t \lambda_n^*=0$, to wit, \begin{align} & \beta_n(\lambda_1^*, \lambda_2^*, \cdots , \lambda_{n+1}^*)=0\,,\label{eq:betaEq} \end{align} which constitute a closed set of $N_{\mathrm{tr}}$ equations. Critical behaviors of flows near a fixed point are readily analyzed by linearizing the flows in \Eq{eq:dtlam} near the fixed point with $\lambda_n \simeq \lambda_n^*+\delta\lambda_n$, where $\delta\lambda_n$ is a small quantity. Hence, one arrives at \begin{align} \partial_t (\delta\lambda_n)=&\sum_{n'=1}^{N_{\mathrm{tr}}} M_{n n'}\delta\lambda_{n'}\,,\label{} \end{align} with the stability matrix $M$ given by \begin{align} M_{n n'}=&\frac{\partial \beta_n}{\partial \lambda_{n'}}\bigg|_{\lambda=\lambda^*}\,,\label{} \end{align} whose eigenvalues provide us with the critical exponents pertinent to the fixed point; see, e.g., \cite{Litim:2002cf} for more detailed discussions.
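The fixed-point-plus-stability-matrix machinery can be illustrated with a deliberately simple one-coupling caricature. The $\beta$ function below is illustrative only (a Wilson-Fisher-like zero at $\lambda^{*}=\varepsilon/b$), not one of the $\beta_n$ derived from \Eq{eq:dtu}:

```python
from scipy.optimize import brentq

# one-coupling caricature of the system of beta functions:
#   beta(lam) = -eps*lam + b*lam**2, with a nontrivial zero at lam* = eps/b
eps, b = 0.5, 1.0
beta = lambda lam: -eps * lam + b * lam ** 2

lam_star = brentq(beta, 0.1, 2.0)      # locate the nontrivial fixed point

# 1x1 "stability matrix": M = d(beta)/d(lam) at lam*, by central difference
h = 1e-6
M = (beta(lam_star + h) - beta(lam_star - h)) / (2 * h)
print(lam_star, M)
```

Here $M=+\varepsilon>0$, i.e., this perturbation decays toward the infrared, so the direction is irrelevant; in a full truncation one would assemble the $N_{\mathrm{tr}}\times N_{\mathrm{tr}}$ Jacobian of the $\beta_n$ and diagonalize it.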
The Taylor expansion at vanishing field in \Eq{eq:uTaylorZero} is efficient and reliable to compute, e.g., critical exponents, when the spatial dimension is close to $d=4$, or at least not far from it. A naive power counting immediately yields the dimension of $\lambda_n$, i.e., $[\lambda_n]=2n-(n-1)d$, if the anomalous dimension is assumed to be vanishing for the moment. Irrelevance of the expansion coefficient $\lambda_n$ demands $[\lambda_n]<0$, which leaves us with \begin{align} n>&\frac{d}{d-2}\,.\label{} \end{align} Therefore, as the spatial dimension is approaching $d=2$ from above, the required expansion order $N_{\mathrm{tr}}$ in \Eq{eq:uTaylorZero} increases significantly, due to the rapidly increasing number of relevant parameters. Generically, the convergence of \Eq{eq:uTaylorZero} becomes more and more difficult as $d \to 2$. A natural extension of the Taylor expansion at vanishing field is to expand the effective potential at a finite field, which alleviates, to some degree, the aforementioned problem borne by the vanishing-field expansion. This finite-field expansion reads \begin{align} u(\bar\rho)\simeq&\sum_{n=1}^{N_{\mathrm{tr}}}\frac{\lambda_n}{n!}\big(\bar\rho-\kappa\big)^n\,,\label{eq:uTaylorFini} \end{align} where the expansion point $\kappa$ can be chosen to be $k$-independent or dependent. Moreover, it is convenient to choose $\kappa=\bar\rho_0$ being the minimum of the function $u(\bar\rho)$. Substituting \Eq{eq:uTaylorFini} into \Eq{eq:dtu} one is able to obtain fixed points and their relevant critical exponents by the method described above, which we do not elaborate on further.
\subsection{Laurent expansion in the limit of large field} \label{subsec:Laurent-expan} When the field in \Eq{eq:uTaylorFini} is very large, e.g., $\bar\rho/\bar\rho_0\gg 1$, with $\bar\rho_0$ being the minimum of $u(\bar\rho)$, the nonlinear terms in the square bracket in \Eq{eq:fixedPointEq} can be safely neglected, and then the asymptotic behavior of $u(\bar\rho)$ in the limit of large field is readily obtained, as follows \begin{align} u(\bar\rho) \sim & \,\gamma\, {\bar\rho}^{d/(d-2+\eta)}\,,\qquad \bar\rho \to \infty\,,\label{eq:uInfty} \end{align} with a constant $\gamma$. The missing subleading terms on the right side of \Eq{eq:uInfty} can be formulated with a Laurent expansion in terms of $1/\bar\rho$. Firstly, let us consider the case that the leading power $d/(d-2+\eta)$ is an integer \cite{Litim:2016hlb, Juttner:2017cpr}, and a more general case is discussed in \sec{subsubsec:Laurent-expan-general}. The expansion in the limit of large field reads \begin{align} u(\bar\rho)\simeq& \,\gamma\, {\bar\rho}^{d/(d-2+\eta)}\left[1+\sum_{n=1}^{N_{\mathrm{tr}}} \gamma_{n}\left(\frac{1}{\bar\rho}\right)^n\right]\,.\label{eq:uLaurent} \end{align} In the same way, the expansion coefficients $\gamma_{n}$ can be extracted by inserting \Eq{eq:uLaurent} into the fixed-point equation \labelcref{eq:fixedPointEq}. In the following, we present the first few nonvanishing coefficients for some values of $d$ with $\eta=0$. When $d=3$, one is left with \begin{align} \gamma_5=&\mathscr{C}\frac{5N-4}{75\gamma^2}\,,\quad \gamma_7=-\mathscr{C}\frac{25N-24}{1575\gamma^3}\,,\nonumber\\[2ex] \gamma_9=&\mathscr{C}\frac{125N-124}{30375\gamma^4}\,, \cdots \,. \label{eq:Laurentd3} \end{align} Here, the order of the first nonvanishing coefficient is $n=5$. 
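The first coefficient in \Eq{eq:Laurentd3} can be checked symbolically by inserting the truncated ansatz into the fixed-point equation with the ``$1+$'' of the propagators dropped, as appropriate at large field. A sketch with sympy for $d=3$, $\eta=0$ (symbol names are ours):

```python
import sympy as sp

rho, gamma, C, N = sp.symbols('rho gamma C N', positive=True)
g5 = sp.symbols('gamma_5', real=True)

# truncated large-field ansatz for d = 3, eta = 0:
#   u = gamma rho^3 (1 + gamma_5 / rho^5) = gamma rho^3 + gamma gamma_5 rho^-2
u = gamma * rho**3 + gamma * g5 * rho**-2
u1, u2 = sp.diff(u, rho), sp.diff(u, rho, 2)

# fixed-point equation, large-field limit: drop the "1+" in the denominators
expr = rho * u1 - 3 * u + C * (1 / (u1 + 2 * rho * u2) + (N - 1) / u1)

# the leading deviation is O(rho^-2); demand that its coefficient vanish
coeff = sp.limit(expr * rho**2, rho, sp.oo)
sol = sp.solve(sp.Eq(coeff, 0), g5)[0]
print(sol)   # equals C*(5*N - 4)/(75*gamma**2)
```

The solver reproduces $\gamma_5=\mathscr{C}(5N-4)/(75\gamma^2)$; the higher coefficients follow the same way from higher orders in $1/\bar\rho$.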
In the case of $d=2.5$, one arrives at \begin{align} \gamma_9=&2\mathscr{C}\frac{9N-8}{405\gamma^2}\,,\quad\gamma_{13}=-2\mathscr{C}\frac{81N-80}{26325\gamma^3}\,,\nonumber\\[2ex] \gamma_{17}=&2\mathscr{C}\frac{729N-728}{1549125\gamma^4}\,, \cdots \,.\label{} \end{align} For $d=2.1$, one has \begin{align} \gamma_{41}=&10\mathscr{C}\frac{41N-40}{35301\gamma^2}\,,\quad\gamma_{61}=-10\mathscr{C}\frac{1681N-1680}{45220581\gamma^3}\,,\nonumber\\[2ex] \gamma_{81}=&10\mathscr{C}\frac{68921N-68920}{51700467861\gamma^4}\,, \cdots \,.\label{} \end{align} Evidently, as the spatial dimension $d$ decreases towards $d=2$, the order of the first nonvanishing coefficient increases significantly. \subsubsection{Laurent expansion in the limit of large field for the case of $d/(d-2+\eta)$ being a rational fraction} \label{subsubsec:Laurent-expan-general} We have discussed above the Laurent expansion of the potential in the limit of large field, cf. \Eq{eq:uLaurent}, where the leading power $d/(d-2+\eta)$ is an integer. In this subsection, we discuss a more general case with $d/(d-2+\eta)$ being a rational fraction. Note that even if it is an irrational number, one can always approximate it with a rational one up to any desired accuracy. Let the leading power be \begin{align} \frac{d}{d-2+\eta}=&\frac{l}{m}\,,\label{eq:fraclm} \end{align} where the fraction on the right side is fully reduced, with $l$ and $m$ coprime integers. Then the Laurent expansion in \Eq{eq:uLaurent} can be modified as \begin{align} u(\bar\rho)\simeq& \,\gamma\, {\bar\rho}^{d/(d-2+\eta)}\left[1+\sum_{n=1}^{N_{\mathrm{tr}}} \gamma_{n}\left(\frac{1}{\bar\rho^{\frac{1}{m}}}\right)^n\right]\,.\label{eq:uLaurent2} \end{align} Here we show some examples with $\eta=0$ and the dimension in the vicinity of $d=3$. Firstly, considering the case of $d=2.9$, one has $d/(d-2+\eta)=29/9$.
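The reduced fraction $l/m$ in \Eq{eq:fraclm} is obtained exactly whenever $d$ is given as a rational number; a quick sketch with Python's exact rationals ($\eta=0$, helper name is ours):

```python
from fractions import Fraction

def leading_power(d, eta=Fraction(0)):
    """Reduced fraction l/m = d/(d-2+eta); d given as a string or Fraction."""
    d = Fraction(d)
    return d / (d - 2 + eta)

for d in ['2.9', '2.96', '3']:
    lm = leading_power(d)
    print(d, '->', lm)    # 29/9, 37/12, 3
```

The denominator $m$ then sets the step $1/m$ of the generalized Laurent expansion in \Eq{eq:uLaurent2}.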
In the same way, inserting \Eq{eq:uLaurent2} into the fixed-point equation \labelcref{eq:fixedPointEq}, one is able to obtain the first few nonvanishing coefficients, as follows \begin{align} \gamma_{49}=&90\mathscr{C}\frac{49N-40}{69629\gamma^2}\,,\quad \gamma_{69}=-270\mathscr{C}\frac{2401N-2320}{46442543\gamma^3}\,,\nonumber\\[2ex] \gamma_{89}=&7290\mathscr{C}\frac{117649N-116920}{255371390029\gamma^4}\,, \cdots \,. \label{} \end{align} Then the potential in \Eq{eq:uLaurent2} reads \begin{align} u(\bar\rho)\Big|_{d=2.9}\simeq& \,\gamma\, {\bar\rho}^{3.22}+\gamma \gamma_{49}{\bar\rho}^{-2.22}+\gamma \gamma_{69}{\bar\rho}^{-4.44}\nonumber\\[2ex] &+\gamma \gamma_{89}{\bar\rho}^{-6.67}+\cdots\,.\label{eq:ud2d9} \end{align} If the value of $d$ is increased up to $d=2.96$, the fraction in \Eq{eq:fraclm} reads $l/m=37/12$. It follows that \begin{align} \gamma_{62}=&75\mathscr{C}\frac{31N-25}{35557\gamma^2}\,,\quad \gamma_{87}=-600\mathscr{C}\frac{961N-925}{38152661\gamma^3}\,,\nonumber\\[2ex] \gamma_{112}=&1350\mathscr{C}\frac{29791N-29575}{10563024661\gamma^4}\,, \cdots \,, \label{} \end{align} which yields \begin{align} u(\bar\rho)\Big|_{d=2.96}\simeq& \,\gamma\, {\bar\rho}^{3.08}+\gamma \gamma_{62}{\bar\rho}^{-2.08}+\gamma \gamma_{87}{\bar\rho}^{-4.17}\nonumber\\[2ex] &+\gamma \gamma_{112}{\bar\rho}^{-6.25}+\cdots\,.\label{eq:ud2d96} \end{align} From \Eq{eq:Laurentd3}, one immediately finds for the Laurent expansion of the potential at $d=3$ \begin{align} u(\bar\rho)\Big|_{d=3}\simeq& \,\gamma\, {\bar\rho}^{3}+\gamma \gamma_{5}{\bar\rho}^{-2}+\gamma \gamma_{7}{\bar\rho}^{-4}+\gamma \gamma_{9}{\bar\rho}^{-6}+\cdots\,.\label{eq:ud3} \end{align} From Eqs. \labelcref{eq:ud2d9}, \labelcref{eq:ud2d96}, and \labelcref{eq:ud3}, one sees that as the dimension $d$ approaches $d=3$, the powers of $\bar \rho$ at different orders converge to their respective $d=3$ values.
\section{Global solutions of the fixed-point equation} \label{sec:global-solu} In this section we would like to solve the fixed-point equation in \Eq{eq:fixedPointEq} directly by using a numerical method. \Cref{eq:fixedPointEq} is a differential algebraic equation (DAE) of index 1 \cite{Campbell:2008}, which indicates that one further derivative of the DAE transforms it into a set of ordinary differential equations (ODEs); see, e.g., \cite{Grossi:2019urj} for relevant discussions. The resulting ODEs read \begin{subequations} \begin{align} u'(\bar\rho)=&u_1(\bar\rho)\,,\label{eq:dudrho1}\\[2ex] u_1'(\bar\rho)=&u_2(\bar\rho)\,,\label{eq:dudrho2}\\[2ex] u_2'(\bar\rho)=&\frac{1}{2\mathscr{C}\bar\rho}\Big(1+u_1(\bar\rho)+2\bar\rho u_2(\bar\rho)\Big)^2\bigg[(-2+\eta)u_1(\bar\rho)\nonumber\\[2ex] &+(d-2+\eta)\bar\rho u_2(\bar\rho)-\mathscr{C}(N-1)\frac{u_2(\bar\rho)}{\big(1+u_1(\bar\rho)\big)^2}\bigg]\nonumber\\[2ex] &-\frac{3}{2\bar\rho}u_2(\bar\rho)\,.\label{eq:dudrho3} \end{align} \label{eq:dudrhos} \end{subequations} Here, we have defined $u_1$ and $u_2$ in \labelcref{eq:dudrho1} and \labelcref{eq:dudrho2}, which correspond to the first and second derivatives of $u(\bar\rho)$, respectively. \Cref{eq:dudrho3} is obtained by differentiating \Eq{eq:fixedPointEq} with respect to $\bar\rho$. We integrate the differential equations in \labelcref{eq:dudrhos}, starting at a sufficiently large $\bar\rho=\bar\rho_{L}$ with initial values for $u(\bar\rho_{L})$, $u_1(\bar\rho_{L})$ and $u_2(\bar\rho_{L})$ obtained from the asymptotic expression of $u(\bar\rho)$ in the limit of large field as shown in \Eq{eq:uInfty}. The functions are integrated from $\bar\rho=\bar\rho_{L}$ towards $\bar\rho \to 0$. The parameter $\gamma$ in \Eq{eq:uInfty} is fine-tuned such that the potential $u(\bar\rho)$ and its derivatives of different orders $u^{(n)}(\bar\rho)$ are finite in the limit of $\bar\rho \to 0$.
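The downward integration just described can be set up with a standard ODE solver. The sketch below implements the system \labelcref{eq:dudrhos} for $d=3$, $N=1$ in LPA ($\eta=0$) and takes one short leg of the integration from large field; the fine-tuning of $\gamma$ (e.g., by bisection on the behavior near $\bar\rho\to 0$) is omitted, and the chosen $\gamma$ and $\bar\rho_L$ are trial values of ours:

```python
import numpy as np
from math import gamma as Gamma
from scipy.integrate import solve_ivp

d, N, eta = 3.0, 1.0, 0.0
# the threshold constant scr(C) of the flow equation, here with eta = 0
C = 0.5 / (4 * np.pi) ** (d / 2) / Gamma(d / 2) * ((2 - eta) * 2 / d + eta * 2 / (d + 2))

def rhs(rho, y):
    """Right-hand side of the ODE system: y = (u, u', u'')."""
    u, u1, u2 = y
    A = 1 + u1 + 2 * rho * u2          # sigma-mode propagator denominator
    bracket = ((-2 + eta) * u1 + (d - 2 + eta) * rho * u2
               - C * (N - 1) * u2 / (1 + u1) ** 2)
    u3 = A ** 2 * bracket / (2 * C * rho) - 1.5 * u2 / rho
    return [u1, u2, u3]

# initial condition at rho_L from the leading asymptotics u ~ gamma rho^p;
# gamma is an untuned trial value, not the fine-tuned fixed-point one
gam, rho_L = 0.01, 10.0
p = d / (d - 2 + eta)
y0 = [gam * rho_L ** p,
      gam * p * rho_L ** (p - 1),
      gam * p * (p - 1) * rho_L ** (p - 2)]

# one short downward leg; a stiff method is used since the spurious fast
# mode of the differentiated DAE decays rapidly in this direction
sol = solve_ivp(rhs, (rho_L, 9.0), y0, method='LSODA', rtol=1e-8, atol=1e-10)
print(sol.success, sol.y[:, -1])
```

In a full calculation one would repeat such integrations down to $\bar\rho\to 0$ for a sequence of trial $\gamma$, bisecting on the manner in which the solution diverges.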
Using this approach, one is able to find a solution to the fixed-point equation \labelcref{eq:fixedPointEq}, denoted by $u_{*}(\bar\rho)$ in what follows. \begin{figure*}[t] \includegraphics[width=0.45\textwidth]{d1u-N1-d3}\hspace{0.5cm} \includegraphics[width=0.45\textwidth]{d2u-N1-d3} \caption{Derivatives of the effective potential $u^\prime(\bar\rho)$ (left panel) and $u^{(2)}(\bar\rho)$ (right panel) at the Wilson-Fisher fixed point as functions of $\bar\rho$ for the $d=3$ dimensional $O(1)$ scalar theory, i.e., the Ising universality class, obtained in the truncation of LPA. The global potential is compared to those obtained from Taylor and Laurent expansions, and for the Taylor expansion both the vanishing and finite expansion points are employed. The zoomed-out view of $u^\prime(\bar\rho)$ is also shown in the inset of the left panel.}\label{fig:du-N1-d3} \end{figure*} \begin{figure}[t] \includegraphics[width=0.45\textwidth]{v-N1-d3} \caption{Eigenfunctions of the first few eigenvalues for the potential of the WF fixed point shown in \Fig{fig:du-N1-d3}, where we have used the normalization $v(0)=1$ and the calculation is done for the $3d$ $O(1)$ scalar theory in the LPA approximation.}\label{fig:v-N1-d3} \end{figure} \begin{figure*}[t] \includegraphics[width=0.45\textwidth]{nu}\hspace{0.5cm} \includegraphics[width=0.45\textwidth]{eta} \caption{Critical exponents $\nu$ (left panel) and $\eta$ (right panel) of the $O(N)$ universality class as functions of the spatial dimension $d$ with several different values of $N$ obtained in LPA and $\mathrm{LPA}'$, where $d$ is a continuous variable in the range $2\leq d \leq 4$.
The exact results for the $2d$ Ising model and the spherical model with $N \to \infty$ are also shown.}\label{fig:nu-eta} \end{figure*} \begin{figure}[t] \includegraphics[width=0.45\textwidth]{rho0} \caption{Location of the minimum of the potential $u(\bar\rho)$ at the Wilson-Fisher fixed point, $\bar \rho_0$, as a function of the spatial dimension $d$ with several different values of $N$ obtained in the truncations of LPA and $\mathrm{LPA}'$.}\label{fig:rho0} \end{figure} We proceed with the discussion of eigenperturbations near the fixed-point potential $u_{*}(\bar\rho)$ and the relevant eigenvalues, i.e., critical exponents \cite{Codello:2014yfa}. The effective potential near the fixed point can be written as \begin{align} u(\bar\rho)=&u_{*}(\bar\rho)+\epsilon\,\mathrm{e}^{-\omega t}v(\bar\rho)\,,\label{eq:eigenpertur} \end{align} where $\epsilon$ is a small parameter, $v(\bar\rho)$ and $\omega$ are the eigenfunction and the corresponding eigenvalue. One can see that only perturbations of eigenvalue $\omega>0$ are relevant. 
Inserting \Eq{eq:eigenpertur} into the flow equation of the effective potential in \Eq{eq:dtu} and expanding the equation in powers of the small parameter $\epsilon$, one immediately finds that the fixed-point equation in \Eq{eq:fixedPointEq} is reproduced at zeroth order, while the term linear in $\epsilon$ provides us with the homogeneous differential equation for the eigenfunction $v(\bar\rho)$ that we are looking for, i.e., \begin{align} \omega v(\bar\rho)=&d v(\bar\rho)-(d-2+\eta)\bar\rho v'(\bar\rho)\nonumber\\[2ex] &+\mathscr{C}\Bigg[\frac{v'(\bar\rho)+2\bar\rho v^{(2)}(\bar\rho)}{\big(1+u_*^{\prime}(\bar\rho)+2\bar\rho u_*^{(2)}(\bar\rho)\big)^2}+\frac{(N-1)v'(\bar\rho)}{\big(1+u_*^{\prime}(\bar\rho)\big)^2}\Bigg]\,.\label{eq:dvdrho} \end{align} The asymptotic behavior of $v(\bar\rho)$ in the limit $\bar\rho\to\infty$ is readily obtained from \Eq{eq:dvdrho} by ignoring the subleading terms in the square bracket, since they are suppressed by $u_*^{\prime}(\bar\rho)$ and $u_*^{(2)}(\bar\rho)$ in the denominators. Thus, one is led to \begin{align} v(\bar\rho) \simeq&\,c \,{\bar\rho}^{(d-\omega)/(d-2+\eta)}\,,\qquad \bar\rho \to \infty\,,\label{eq:vInfty} \end{align} with a normalization constant $c$. In the same way, as $\bar\rho \to 0$ one obtains from \Eq{eq:dvdrho} the following relation \begin{align} v'(0) =&\frac{1}{\mathscr{C} N}(\omega-d)\big(1+u_*^{\prime}(0)\big)^2v(0)\,.\label{eq:v0} \end{align} As with the equations in \labelcref{eq:dudrhos}, we integrate the differential equation for the eigenfunction $v(\bar\rho)$ in \Eq{eq:dvdrho}, starting at a sufficiently large value of $\bar\rho$, e.g., $\bar\rho=\bar\rho_{L}$, with initial values for $v(\bar\rho_{L})$ and $v'(\bar\rho_{L})$ obtained from \Eq{eq:vInfty}. The function $v(\bar\rho)$ is resolved as the differential equation is integrated from $\bar\rho=\bar\rho_{L}$ towards $\bar\rho \to 0$.
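Determining $\omega$ is a shooting problem: $\omega$ is varied until the integrated $v(\bar\rho)$ obeys the required boundary behavior at $\bar\rho\to 0$. The bisection logic behind such a search can be illustrated on a textbook eigenvalue problem, $v''(x)=-\omega\,v(x)$ with $v(0)=v(\pi)=0$, whose lowest eigenvalue is $\omega=1$; this toy problem only demonstrates the shooting principle, not the actual \Eq{eq:dvdrho}.

```python
import math

def shoot(omega, n=2000):
    """Integrate v'' = -omega*v on [0, pi] with v(0)=0, v'(0)=1 by RK4
    and return the miss distance v(pi); an eigenvalue gives v(pi)=0."""
    h = math.pi / n
    v, w = 0.0, 1.0          # w = v'

    def f(v_, w_):
        return w_, -omega * v_

    for _ in range(n):
        k1 = f(v, w)
        k2 = f(v + h / 2 * k1[0], w + h / 2 * k1[1])
        k3 = f(v + h / 2 * k2[0], w + h / 2 * k2[1])
        k4 = f(v + h * k3[0], w + h * k3[1])
        v += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        w += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return v

def find_eigenvalue(lo=0.5, hi=1.5, tol=1e-10):
    """Bisect on omega: the miss distance changes sign across the
    true eigenvalue, so the bracket [lo, hi] shrinks onto it."""
    f_lo = shoot(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = shoot(mid)
        if f_lo * f_mid <= 0.0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)
```

For \Eq{eq:dvdrho} the role of the miss distance is played by the violation of the regularity condition \Eq{eq:v0} at $\bar\rho \to 0$.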
We fine-tune the value of $\omega$ such that $v(\bar\rho)$ and $v'(\bar\rho)$ are finite in the limit $\bar\rho \to 0$, or more precisely, such that the relation in \Eq{eq:v0} is satisfied. One then obtains the eigenfunction $v(\bar\rho)$ and its associated eigenvalue $\omega$. \section{Numerical results} \label{sec:num-resul} \begin{table*}[t] \begin{center} \begin{tabular}{cc|cc|cc} \hline\hline & & & & & \\ & Method & \multicolumn{2}{c|}{$d=3$} & \multicolumn{2}{c}{$d=2$}\\[2ex] \hline & & $\nu$ & $\eta$ & $\nu$ & $\eta$ \\[1ex] \hline & & & & & \\[-2ex] $O(1)$ LPA (this work) & fRG global (direct) &0.6495619 &0 \\[1ex] $O(1)$ LPA$'$ (this work) & fRG global (direct) &0.6473203 &0.0442723 &1.3266022 &0.2335624 \\[1ex] $O(4)$ LPA (this work) & fRG global (direct) &0.8043477 &0 \\[1ex] $O(4)$ LPA$'$ (this work) & fRG global (direct) &0.7811038 &0.0373204 \\[1ex] $O(40)$ LPA (this work) & fRG global (direct) &0.9807813 &0 \\[1ex] $O(1)$ LPA \cite{Juttner:2017cpr} & fRG global (combination) &0.6495618 &0 \\[1ex] $O(4)$ LPA \cite{Juttner:2017cpr} & fRG global (combination) &0.8043477 &0 \\[1ex] $O(40)$ LPA \cite{Juttner:2017cpr} & fRG global (combination) &0.9807813 &0 \\[1ex] $O(1)$ LPA$'$ \cite{Borchardt:2015rxa}& fRG global (pseudospectral) &0.645995 &0.0442723 & & \\[1ex] $O(1)$ LPA$'$ \cite{Codello:2012sc, Codello:2012ec, Codello:2014yfa} & fRG iterative &0.65 &0.044 &1.33 &0.23 \\[1ex] $O(4)$ LPA$'$ \cite{Codello:2012sc, Codello:2012ec, Codello:2014yfa} & fRG iterative &0.78 &0.037 & & \\[1ex] $O(1)$ scalar theories \cite{Balog:2019rrg, DePolsi:2020pjk} & fRG DE $\mathcal{O}(\partial^6)$ & 0.63012(5) & 0.0361(3) &\\[1ex] $O(4)$ scalar theories \cite{DePolsi:2020pjk} & fRG DE $\mathcal{O}(\partial^4)$ & 0.7478(9) & 0.0360(12) &\\[1ex] $O(1)$ CFTs \cite{Kos:2014bka} & conformal bootstrap & 0.629971(4) & 0.0362978(20) &\\[1ex] $O(4)$ CFTs \cite{Kos:2015mba} & conformal bootstrap & 0.7472(87) & 0.0378(32) &\\[1ex] $O(4)$ spin model \cite{Kanaya:1994qe} &Monte Carlo & 0.7479(90) & 0.025(24) &\\[1ex] $O(1)$ (Ising) & exact & & &1 &1/4 \\[1ex] $O(N \to \infty)$ &exact &$1/(d-2)$ & 0 &$1/(d-2)$ &0 \\[1ex] \hline\hline \end{tabular} \caption{Critical exponents $\nu$ and $\eta$ of the $O(N)$ scalar theory in $d=3$ and $d=2$ spatial dimensions, obtained from the fRG with the truncations LPA and LPA$'$ for several different values of $N$. Note that the $O(1)$ symmetry corresponds to that of the Ising model. Our results are compared with those from other approaches, e.g., the global fixed points obtained with a combination of analytical and numerical techniques \cite{Juttner:2017cpr}, pseudospectral methods \cite{Borchardt:2015rxa}, the iterative method \cite{Codello:2012ec, Codello:2012sc, Codello:2014yfa}, derivative expansions \cite{Balog:2019rrg, DePolsi:2020pjk}, the conformal bootstrap for $3d$ conformal field theories (CFTs) \cite{Kos:2014bka, Kos:2015mba}, and Monte Carlo simulations \cite{Kanaya:1994qe}. Moreover, exact results for the $2d$ Ising model and the $O(N)$ symmetry with $N \to \infty$ are also presented.} \label{tab:exponent} \end{center}\vspace{-0.5cm} \end{table*} In this work we employ the fifth-order Radau IIA method (RadauIIA5) \cite{Hairer:1996,Hairer:1999}, implemented in a \textit{Julia} package \cite{Rackauckas:2017}, to solve the differential equations in \labelcref{eq:dudrhos} numerically. In \Fig{fig:du-N1-d3} we show the first and second derivatives of the effective potential, $u^\prime(\bar\rho)$ and $u^{(2)}(\bar\rho)$, at the Wilson-Fisher fixed point for the $O(1)$ scalar theory in $d=3$ spatial dimensions, obtained in the LPA. As discussed above, the global potential is obtained by integrating the differential equations in \Eq{eq:dudrhos} from a sufficiently large $\bar\rho=\bar\rho_{L}$ towards $\bar\rho \to 0$.
Usually the value of $\bar\rho_{L}$ should be chosen large enough that $\bar\rho_{L}\gg \bar\rho_0$, where $\bar\rho_0$ is the location of the minimum of the potential $u(\bar\rho)$, so that the obtained potential shows no dependence on the choice of $\bar\rho_{L}$. It is found that $\bar\rho_{L}=10$ meets these requirements in \Fig{fig:du-N1-d3}. The leading-order expansion coefficient of the potential at large field, i.e., $\gamma$ in \Eq{eq:uInfty} or \Eq{eq:uLaurent}, is fine-tuned to pin down the desired fixed-point solution, as discussed in \sec{sec:global-solu}. For the WF fixed point, we find $\gamma=28.060767758247646$, which is in good agreement with $\gamma=28.060767757778700$ obtained from pseudospectral methods \cite{Borchardt:2015rxa}, the two being identical in the first 10 significant digits. Moreover, the location of the minimum of the potential, i.e., the crossing point between the curve of $u^\prime$ and the horizontal dashed zero line shown in the left panel of \Fig{fig:du-N1-d3}, is found to be $\bar\rho_0=0.03064794240852456$, which is also consistent with $\bar\rho_0=0.03064794240869777$ from pseudospectral methods \cite{Borchardt:2015rxa}. In \Fig{fig:du-N1-d3} the global solution of the potential is also compared with local solutions obtained from the Taylor expansion with expansion points $\kappa=0$ and $\kappa=\bar\rho_0$, and from the Laurent expansion in the limit $\bar\rho \to \infty$. The maximal expansion order $N_{\mathrm{tr}}$ is 10 for the Taylor expansion with $\kappa=\bar\rho_0$ and 20 for the other two. One can see that the global $u^\prime$ coincides with that of the Taylor expansion in the regime of small $\bar\rho$, and a clear deviation sets in at about $\bar\rho\simeq 0.1$ for $\kappa=0$ and $\bar\rho\simeq 0.15$ for $\kappa=\bar\rho_0$. This also indicates that the Taylor expansion about a finite field value is superior to that about vanishing field, even though the expansion order $N_{\mathrm{tr}}$ of the former is smaller.
Conversely, the deviation between the global solution and the Laurent expansion occurs in the region of small field, say $\bar\rho\lesssim 0.07$, as shown in the left panel of \Fig{fig:du-N1-d3}, while the two are in good agreement at large field. Similar behavior is found for $u^{(2)}$, as shown in the right panel of \Fig{fig:du-N1-d3}. The first few eigenfunctions and their respective eigenvalues for the WF fixed-point potential of \Fig{fig:du-N1-d3} are presented in \Fig{fig:v-N1-d3}. Here, the different values of the eigenvalue $\omega$ in \Eq{eq:eigenpertur} are denoted by $\omega_n$, with $n$ standing for the order of the eigenfunction. Since the WF fixed point is a stable fixed point, there is only one positive eigenvalue, $\omega_0=1.5395$; that is, only one eigenperturbation mode is relevant at the WF fixed point. This eigenvalue yields the critical exponent $\nu=1/\omega_0=0.64956$ for the $3d$ Ising model in the LPA. In \Tab{tab:exponent} our calculated critical exponents $\nu$ and $\eta$ in $d=3$ and $d=2$ spatial dimensions for several values of $N$ of the $O(N)$ universality class are shown and compared with relevant results from other approaches in the literature. For instance, the values of $\nu$ in LPA with $d=3$ obtained in this work are in good agreement with those obtained in the same truncation in \cite{Juttner:2017cpr}, being identical in the first 6--7 significant digits. Note that rather than solving the fixed-point equation directly, a quite different approach, namely a combination of analytical and numerical techniques, is used to determine global fixed points in \cite{Juttner:2017cpr}.
The critical exponents for the $2d$ Ising model are also calculated in $\mathrm{LPA}'$, with $\nu=1.327$ and $\eta=0.2336$, which are consistent with the results obtained from the iterative method \cite{Codello:2012ec, Codello:2012sc, Codello:2014yfa} and comparable to the exact results $\nu=1$ and $\eta=0.25$. In order to make a comparison with the exact results $\nu=1/(d-2)$ and $\eta=0$ in the limit $N \to \infty$, i.e., the critical exponents of the spherical model, we have performed a computation with $N=40$ and $d=3$ in LPA and found $\nu=0.9808$, which is very close to the exact result. In \Fig{fig:nu-eta} the dependence of the critical exponents $\nu$ and $\eta$ of the $O(N)$ universality class on the spatial dimension $d$ and the number of field components $N$ is investigated. Here the dimension $d$ is no longer constrained to be an integer, but rather is a continuous variable in the range $2\leq d \leq 4$. Both the LPA and $\mathrm{LPA}'$ truncations are used. The exact results for the $2d$ Ising model and the limit $N \to \infty$ are presented for comparison. One can see that as $d \to 4$, the Wilson-Fisher fixed point coincides with the Gau\ss ian fixed point, and all the curves converge to the trivial results $\nu=1/2$ and $\eta=0$. As $d$ decreases from 4 to 2, $\nu$ increases for each value of $N$, and the increase is larger for larger $N$; this behavior saturates rapidly towards $\nu=1/(d-2)$ in the limit $N \to \infty$. We find $\nu=1.327$ and $\eta=0.2336$ for $d=2$ and $N=1$ in $\mathrm{LPA}'$. The calculations for $N \geq 2$ in $d=2$ are quite difficult. This can also be inferred from the asymptotic behavior of the potential in the large-field limit in \Eq{eq:uInfty}, where the power $d/(d-2+\eta)$ diverges for $d=2$ and $\eta=0$.
In fact, this behavior is closely related to the Mermin-Wagner-Hohenberg theorem \cite{Mermin:1966fe, Hohenberg:1967zz, Coleman:1973ci}, which states that continuous symmetries, e.g., $O(N)$ with $N \geq 2$, cannot be broken spontaneously in $d=2$ dimensions; this implies $\nu \to \infty$ for $N \geq 2$ in $d=2$. Moreover, it is found that the anomalous dimension $\eta$ increases monotonically as $d$ decreases from 4 to 2 for $N=1$, while it is non-monotonic for $N\geq 2$. In \Fig{fig:rho0} we show a non-universal quantity, the location of the minimum of the potential $u(\bar\rho)$ at the Wilson-Fisher fixed point, $\bar \rho_0$, and its dependence on the dimension $d$ for several different values of $N$. One can see that $\bar \rho_0$ also increases rapidly for $N \geq 2$ as $d$ approaches 2. \section{Summary and outlook} \label{sec:summary} In this work the fixed-point equation for the nonperturbative effective potential in the fRG approach is integrated from large to vanishing field, with the asymptotic potential in the large-field limit implemented as the initial condition. This approach provides us with a global fixed-point potential of high numerical accuracy, which captures both the asymptotic behavior in the limit of vanishing field and, more importantly, that in the limit of large field. The obtained global potential is in good agreement with the results from the Taylor and Laurent expansions in the regimes where they are applicable, i.e., small field for the former and large field for the latter, respectively. Furthermore, the Laurent expansion of the potential in the large-field limit is obtained for the general case in which the spatial dimension $d$ is a continuous variable in the range $2\leq d \leq 4$. By virtue of the method of eigenperturbations, we also compute the eigenfunctions and eigenvalues of perturbations near the Wilson-Fisher fixed point with high numerical accuracy.
Consequently, critical exponents for different values of the spatial dimension $d$ and the number of field components $N$ of the $O(N)$ universality class are obtained. Our calculated critical exponents are in good agreement with the relevant results in the literature obtained within the same truncation, and are also comparable to the exact results for the $2d$ Ising model and the spherical model with $N \to \infty$. Furthermore, it would be desirable to apply the approach used in this work to other physical problems of interest, such as the dynamical critical exponent \cite{Tan:2021zid}, the Yang--Lee edge singularity \cite{Stephanov:2006dn, Mukherjee:2019eou, Connelly:2020gwa, Rennecke:2022ohx, Ihssen:2022xjv}, multi-critical fixed points \cite{Yabunaka:2017uox}, etc. \begin{acknowledgments} We thank Jan M. Pawlowski and Nicolas Wink for discussions. This work is supported by the National Natural Science Foundation of China under Grant No.~12175030. \end{acknowledgments}
\section{Introduction} This paper concerns the task of graph exploration by a finite automaton guided by a graph labeling scheme. A finite automaton $\mathcal{R}$, called a robot, must be able to visit all the nodes of any unknown anonymous undirected graph $G = (V,E)$. The robot has no a priori information about the topology of $G$ or its size. While visiting a node, the robot can distinguish between the edges incident on this node. At each node $v$ the incident edges are ordered and labeled by consecutive integers $0, \ldots, d-1$ called port numbers, where $d = \mathtt{deg}(v)$ is the degree of $v$. We refer to this port ordering as a local orientation. We use \emph{Mealy automata} to model the robot. The robot has a transition function $f$ and a finite number of states. If the automaton in state $s$ enters a node of degree $d$ through port $i$, it switches to state $s'$ and exits the node through port $i'$, that is, $f(s,i,d)=(s',i')$. Graph exploration by mobile agents (robots) has recently received much attention, and different graph exploration scenarios have been investigated. In the case of tree exploration, Diks et al.~\cite{DIKS04} showed that exploration of $n$-node trees in which the robot must stop once exploration is completed requires a robot with memory size $\Omega(\log\log\log n)$ bits, and that $\Omega(\log n)$ bits are necessary for exploration with return. Moreover, they constructed an algorithm of exploration with return for all trees of size at most $n$ using $O(\log^2 n)$ bits of memory. In the work of Amb\"{u}hl et al.~\cite{Gas07}, the memory was lowered to $O(\log n)$ bits for exploration with return. Flocchini et al.~\cite{FIP10} later showed that a team of $\Omega(n)$ asynchronous oblivious robots is necessary for most $n$-node trees, and that it is possible to explore the tree with $O(\log n/\log\log n)$ robots only if the maximum degree of the tree is 3.
The memory size of the robot is widely adopted as the measure of efficiency~\cite{Ko80,FI04,FIR05,FIP05,Re05}. Fraigniaud et al.~\cite{FIP05} proved that a robot needs $\Theta(D\log \Delta)$ bits of memory to explore all graphs of diameter $D$ and maximum degree $\Delta$. By the result of Reingold~\cite{Re05}, a robot equipped with $O(\log n)$ bits of memory is able to explore all $n$-node graphs in the perpetual exploration model, where return to the starting node is not required. The lower bound of $\Omega(\log n)$ memory bits was proved by Rollik~\cite{Ko80}. In the scenario adopted in~\cite{ben02,FI04,FIR05}, the robot is provided with a \emph{pebble} that can be dropped on a node and used to identify the node later. The authors of~\cite{ben02} showed that a robot can explore the graph with only one pebble if it knows an upper bound on the number of nodes, and that otherwise $\Theta(\log\log n)$ pebbles are necessary and sufficient. Flocchini et~al. \cite{FMS09} studied a dynamic scenario where the exploration is performed on a class of highly dynamic graphs. Recently, much research has focused on the exploration of anonymous graphs guided by labelings of the graph nodes~\cite{KKP05,FIP08,Ilcinkas06,Ilcinkas08,GKM08,KM09,CDG09}. Periodic graph exploration requires the automaton to visit every node of an undirected graph infinitely many times in a periodic manner. Ilcinkas~\cite{Ilcinkas06} considered minimizing the length of the exploration period by an appropriate assignment of local port numbers. G\c{a}sieniec et al.~\cite{GKM08} improved the upper bound on the exploration period $\pi$ from $4n - 2$ to $3.75n - 2$ in an $n$-node graph, providing the agent with constant memory. For an oblivious agent, \cite{DJS05} achieved a period of $10n$. Recently, Czyzowicz et al.~\cite{CDG09} showed a period of length at most $4 \frac 13 n$ for oblivious agents and a period of length at most $3.5n$ for agents with constant memory.
Kosowski et al.~\cite{KM09} provided a new port labeling which leads to shorter exploration cycles, improving the bound to $\pi\leq 4 n - 2$ for oblivious agents. Cohen et al.~\cite{Ilcinkas08} introduced exploration labeling schemes. Such a scheme consists of an algorithm $\mathcal{L}$ and a robot $\mathcal{R}$ such that, given any simple graph $G$ with any port numbering, the algorithm $\mathcal{L}$ labels the nodes of $G$, and $\mathcal{R}$ explores $G$ with the help of the labeling produced by $\mathcal{L}$. It is shown that using only 2-bit (actually, 3-valued) labels, a robot with constant memory is able to explore all graphs, and the exploration is completed in time $O(m)$ in any $m$-edge simple graph. The authors also presented a 1-bit labeling scheme (two kinds of labels, namely \textbf{black} and \textbf{white}) for bounded-degree graphs and an exploration algorithm for the colored graph. The robot uses $O(\log\Delta)$ bits of memory to explore all simple graphs of maximum degree $\Delta$, and stops once the exploration is completed. The completion time of the exploration is $O(\Delta^{O(1)}m)$. \subsection{Our Results} We consider the problem of adjustable-label-guided graph exploration. Since maintaining different labels may have different costs, it is necessary to limit the number of some labels. For example, in a 1-bit labeling scheme, if we use a lit lamp to represent `1' and a turned-off lamp to represent `0', the number of lit lamps (label `1') may be limited to reduce the cost. For a 1-bit labeling scheme on an $n$-node graph $G$ where the number of nodes labeled black is $b$, we define the \emph{\textbf{$N$-ratio}} as the ratio of the number of nodes to the number of nodes colored black, that is, $n/b$. Given a rational number $\rho$, we can design a 1-bit labeling scheme on $G$ such that the $N$-ratio is not less than $\rho$. The 1-bit labeling scheme in~\cite{Ilcinkas08} does not guarantee an arbitrary $N$-ratio and works specifically on simple graphs, i.e., undirected graphs without loops or multiple edges. That scheme relies on counting the neighbors of a node, which is impossible in a non-simple graph with multi-edges and loops: using only the port numbering does not allow a robot to know whether two neighbors of a node are the same. We present 1-bit labeling schemes that can adjust the $N$-ratio and can work on non-simple graphs. We first investigate a family of $N$-ratio tunable labeling schemes where the $N$-ratio can be changed, but not in a precise way. We classify the nodes of $G$ by their distances from a specific node $r$, designated the \textbf{\emph{root}}. Each class of nodes in this classification is called a \textbf{\emph{layer}}.
In this family of labeling schemes, all nodes in the same layer are labeled identically. We call $\rho'=l/bl$ the \emph{\textbf{$L$-ratio}} of the labeling scheme, where $l$ is the number of layers and $bl$ is the number of black layers. We introduce the $L$-ratio tunable labeling schemes, enabling a robot to explore all graphs of maximum degree $\Delta$. Starting from any node, the robot returns to the root once the exploration is completed. We also design a procedure by which a robot can label the graph itself, though an extra label is needed to indicate that a node has not yet been labeled. Based on the $L$-ratio tunable labeling schemes, we introduce the $N$-ratio adjustable labeling schemes. Precisely, given an expected $N$-ratio $2\leq \rho\leq (D+1)/4$, we derive a series of labelings from an $L$-ratio tunable labeling. Throughout the paper, we use $\rho'$ to denote the $L$-ratio and $\rho$ to denote the expected $N$-ratio. We prove that a labeling scheme with $N$-ratio not less than $\rho$ can be found among these labelings. The exploration is completed in time $O(n\Delta^{\frac{16\rho+7}{3}}/\rho+\Delta^{\frac{40\rho+10}{3}})$; the robot needs $O(\rho\log \Delta)$ bits of memory. Table~\ref{table} compares our approach with the work of Cohen et al.~\cite{Ilcinkas08}. In the case of $\rho=2$, our approach extends the 1-bit labeling scheme in~\cite{Ilcinkas08} from simple graphs to non-simple graphs. The exploration algorithms are different, but their space and time complexities are similar for simple graphs. When working on a simple graph labeled by the 1-bit labeling scheme in~\cite{Ilcinkas08}, our exploration algorithm runs in time $O(\Delta^{10}n)$, as in~\cite{Ilcinkas08}. Both approaches derive a spanning tree from the graph by means of the labeling. In~\cite{Ilcinkas08} the tree contains all nodes; in our approach the tree contains only black nodes, and its edges are paths of the graph. To find a path of length $l$, the robot performs at most $\Delta^{2l+2}$ traversals.
Moreover, we use a new method to identify the root and its neighbors in non-simple graphs. When $\rho$ comes close to the diameter, the amount of memory used by the robot is not far from that of the situation where all nodes are white (that is, there is no labeling): it is known that $\Omega(D \log \Delta)$~\cite{FIP05} bits of memory are necessary without pre-labeling of the graph, which matches our bound in this regime. \begin{table*}[t] \caption{Comparison of the labeling schemes in~\cite{Ilcinkas08} and ours. The first two rows are from~\cite{Ilcinkas08}.} \begin{center} \small \begin{tabular}{|c|c|c|c|c|} \hline Label size & Robot's memory & Time & Ratio & Works on\\ (\#bits) & (\#bits) & (\#edge-traversals) & (\#black nodes) & \\ \hline $2$ & $O(1)$ & $O(m)$ & $-$ & simple graphs\\ \hline $1$ & $O(\log \Delta)$ & $O(\Delta^{O(1)}m)$&no guarantee\footnotemark[3] & simple graphs\\ \hline 1&$O(\rho\log \Delta)$ & $O(n\Delta^{16\rho/3}/\rho+\Delta^{40\rho/3+1})$ & $ \leq n/\rho$ & non-simple graphs\\ \hline \end{tabular} \end{center} \label{table} \end{table*} \footnotetext[3]{The number of black nodes in~\cite{Ilcinkas08} can vary (without any control) from $\Theta(1)$ to $\Theta(n)$, depending on the cases.} \section{$L$-ratio Tunable 1-Bit Labeling Schemes for Bounded Degree Graphs \label{S2}} In this section, we describe an $L$-ratio tunable exploration labeling scheme using 1-bit labels. Let $G$ be an $n$-node graph of degree bounded by $\Delta$. It is possible to color the nodes of $G$ with two colors, namely black and white, while keeping the $L$-ratio of the labeling tunable. There exists a robot that can explore the graph $G$ with the aid of the labeling, starting from any node and terminating after identifying that the entire graph has been traversed. \subsection{Notions} Let $v$ and $u$ be nodes connected by an edge $e$. Denote by $port(e,u)$ the port number of the port of $u$ on which $e$ is incident.
A path $P$ in a non-simple graph is defined as a series of edges $e_0,e_1,\ldots,e_k$ such that, for a series of nodes $n_0,n_1, \ldots,n_{k+1}$, edge $e_i$ connects $n_i$ and $n_{i+1}$ $(0\leq i\leq k)$. The string $p_0p_1\ldots p_{2k+1}$, where $p_i=port(e_{\lfloor i/2\rfloor},n_{\lceil i/2\rceil})$ $(0\leq i\leq 2k+1)$, is called the \emph{label} of $P$. We denote by $P^{-1}$ the reversal of path $P$. We say that a path $P$ is greater than a path $P'$ if the label of $P$ is lexicographically greater than the label of $P'$. The distance between two nodes $u$, $v$ is the number of edges in the shortest path from $u$ to $v$, denoted by $d(u,v)$. Let $L_i$ denote the set of nodes that are at distance $i$ from $r$, with $L_0=\{r\}$. For layers $L$ and $L'$, we let $d(r,L)$ denote the distance between any node in layer $L$ and $r$, and $d(L,L')$ denote $|d(r,L)-d(r,L')|$. \subsection{Labeling Schemes} The following is a class of $L$-ratio tunable 1-bit labeling schemes. \noindent\emph{\textbf{Labeling $\mathcal{AL}$.}} Pick an arbitrary node $r\in V$ and designate it the \textit{root} of $\mathcal{AL}$. Label $r$ black. Select two distinct non-negative integers $d_1$, $d_2$ satisfying $d_1\geq 2$ and $\lfloor d_2/2\rfloor\geq d_1$. Define four classes of nodes $A,B,C$, and $D$ as follows: \begin{center} \begin{flushleft} $C=\{v\in V\mid d(r,v) \ \mathtt{ mod }\ (d_1+d_2+2)=0\}$,\\ $D=\{v\in V\mid d(r,v) \ \mathtt{ mod }\ (d_1+d_2+2)=1\}$, \\ $A=\{v\in V\mid d(r,v) \ \mathtt{ mod }\ (d_1+d_2+2)=d_2+1\}$,\\ $B=\{v\in V\mid d(r,v) \ \mathtt{ mod }\ (d_1+d_2+2)=d_1+d_2+1\}$. \end{flushleft} \end{center} Label all the nodes in classes $A,B,C$, and $D$ black, and label all remaining nodes white. The $\mathcal{AL}$ labeling is denoted by $\langle r,d_1,d_2 \rangle$.\vspace{4pt} An example of an $\mathcal{AL}$ labeling scheme is shown in Figure~\ref{scheme}. A layer is called a white (black) layer if all nodes in this layer are white (black).
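Since the labeling $\mathcal{AL}$ depends only on the BFS distances from the root, it is straightforward to compute offline. The sketch below (in Python, with a hypothetical adjacency-list encoding; the function name is ours) colors the connected component of $r$; multi-edges and loops are harmless here, because only distances matter:

```python
from collections import deque

def al_labeling(adj, r, d1, d2):
    """Color the nodes of a graph according to the AL labeling <r, d1, d2>.
    adj: dict node -> list of neighbor nodes (multi-edges/loops allowed).
    Returns dict node -> 'black' or 'white' for the component of r."""
    assert d1 >= 2 and d2 // 2 >= d1
    # BFS distances from the root r
    dist = {r: 0}
    q = deque([r])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    period = d1 + d2 + 2
    # residues of the black classes C, D, A, B, respectively
    black_residues = {0, 1, d2 + 1, d1 + d2 + 1}
    return {v: ('black' if dist[v] % period in black_residues else 'white')
            for v in dist}
```

For instance, on a 12-node path rooted at one end with $d_1=2$, $d_2=4$ (period 8), the black layers are those at distances $0,1,5,7,8,9$.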
Denote the black layers by $\mathit{BL}_0, \mathit{BL}_1,\ldots, \mathit{BL}_{\mathit{D_B}}$, where $\mathit{BL}_0=L_0$, $\mathit{D_B}+1$ is the number of black layers, and $d(r,\mathit{BL}_i)<d(r,\mathit{BL}_j)$ if $i<j$. For $X\in\{A,B,C,D\}$, layer $\mathit{BL}_i$ is said to be an $X$-layer if $\mathit{BL}_i\subset X$. Two black layers are said to be adjacent if one is $\mathit{BL}_i$ and the other is $\mathit{BL}_{i+1}$. The black nodes whose neighbors are all black are called \textbf{B}-nodes. \begin{figure}[htbp] \centering \begin{minipage}[t]{0.52\linewidth} \centering \includegraphics[width=2.9in]{scheme.eps} \caption{An $\mathcal{AL}$ labeling scheme. Each line represents a layer. Black lines represent black layers, and white lines represent white layers.\label{scheme}} \end{minipage} \hspace{13mm} \begin{minipage}[t]{0.35\linewidth} \centering \includegraphics[width=1.5in]{observations.eps} \caption{$R_W(v)=1$, $R_W(v')=2$. By Property 3, nodes $u$ and $u'$ can be distinguished by $R_W(v)$ and $R_W(v')$.\label{claim}} \end{minipage} \end{figure} The $L$-ratio of the labeling can be altered by adjusting $d_1$ and $d_2$, but it cannot be adjusted precisely enough to guarantee that the $L$-ratio is not less than a given rational value. We assume that $D\geq d_1+d_2+1$, that is, there are at least four black layers. Then the upper bound on the $L$-ratio is $(D+1)/4$. The minimal $L$-ratio is attained by an $\mathcal{AL}$ labeling with $d_1=2$, $d_2=4$, and $D=9$, in which case there are six black layers in the labeled graph; hence the $L$-ratio satisfies $\rho'\geq 5/3$. For $\mathcal{AL}$ labeling schemes, we will prove the following in the remainder of Section~\ref{S2}. \begin{Theorem} Let $G$ be an $n$-node graph of degree bounded by an integer $\Delta$, and let $G$ be labeled by an $\mathcal{AL}$ labeling scheme. There exists a robot that can explore the graph $G$, starting from any given node and terminating at $r$ after identifying that the entire graph has been traversed.
The robot has $O(\rho'\log\Delta)$ bits of memory, and the total number of edge traversals by the robot is $O(\Delta^{12\rho'-9}n)+ o(\rho'\Delta n)$, where $\rho'$ is the $L$-ratio of the labeling. \label{main} \end{Theorem} For a black node $u$, we identify two subsets of nodes that can be reached by a path from $u$. For $u\in \mathit{BL}_{i}$ $(0< i\leq \mathit{D_B})$, $pred(u)$ is the set of nodes in $\mathit{BL}_{i-1}$ such that for any $x\in pred(u)$, $d(u,x)=d(\mathit{BL}_i,\mathit{BL}_{i-1})$. For $u\in \mathit{BL}_{i}$ $(0\leq i< \mathit{D_B})$, $succ(u)$ is the set of nodes in $\mathit{BL}_{i+1}$ such that for any $x\in succ(u)$, $d(u,x)=d(\mathit{BL}_{i+1},\mathit{BL}_{i})$. For root $r$, we set $pred(r) = \varnothing$, and we have $succ(r) = \mathit{BL_1}$. For $u\in \mathit{BL}_{\mathit{D_B}}$, $succ(u)=\varnothing$. In the following, we derive an \textit{implicit spanning tree} of black nodes rooted at $r$ from an $\mathcal{AL}$ labeling scheme. For $u\in \mathit{BL}_{i}$ $(0\leq i< \mathit{D_B})$, denote by $succ\_path(u)$ the set of paths of length $d(\mathit{BL}_i,\mathit{BL}_{i+1})$ whose starting node is $u$ and ending node is in $\mathit{BL}_{i+1}$. For $u\in \mathit{BL}_{\mathit{\mathit{D_B}}}$, $succ\_path(u)=\varnothing$. For $u\in \mathit{BL}_{i}$ $(0< i\leq \mathit{\mathit{D_B}})$, denote by $\mathit{pred\_path}(u)$ the set of paths of length $d(\mathit{BL}_i,\mathit{BL}_{i-1})$ whose starting node is $u$ and ending node is in $\mathit{BL}_{i-1}$. The path in $\mathit{pred\_path}(u)$ with the lexicographically smallest label is called the \emph{parent path} of $u$, denoted by $\mathit{par\_path}(u)$. We set $\mathit{pred\_path}(r)=\varnothing$. The ending node of $\mathit{par\_path}(u)$ is called the \emph{parent} of $u$, denoted by $\mathit{parent}(u)$. The set of nodes whose parent is $u$ is denoted by $child(u)$. We have $child(u)\subseteq succ(u)$ and $\mathit{parent}(u)\in pred(u)$. 
The reversal paths of the parent paths of the nodes in $child(u)$ are called \emph{child paths} of $u$. All black nodes, their parent paths, and their child paths form an implicit spanning tree. \subsection{Properties of $\mathcal{AL}$ Labeling Schemes} In this section we describe three properties of $\mathcal{AL}$ labeling schemes. These properties are the basis of the exploration algorithm. Since for any node $u$ there is a shortest path from $u$ to $r$, we have the following property. \newtheorem{Property}{Property} \begin{Property} Let $u\neq r$ be a node, and let $L_i$ be a black layer such that $i<d(r,u)$. There exists at least one node $x\in L_i$ such that $d(x,u)=d(r,u)-i$. \label{back} \end{Property} A useful corollary of Property~\ref{back} is that any class $D$ node has a $\textbf{B}$-node neighbor. Assume that the nearest black nodes to some node $v$ are at distance $\ell$. Then the \emph{white-radius} of $v$ is $\ell-1$, denoted by $R_W(v)$. Property~\ref{maxR} gives the upper bound on the white radius of white nodes between two adjacent black layers. Figure~\ref{claim} gives an example. \begin{Property} Let $u$ be a white node, and let $d(r,\mathit{BL}_i)<d(r,u)< d(r,\mathit{BL}_{i+1})$. We have $R_W(u)\leq d(\mathit{BL}_i,\mathit{BL}_{i+1})-2$. \label{maxR} \end{Property} Let $P$ be a path from $u$ to $v$ of length $\ell$ where only $u$ and $v$ are allowed to be black. Path $P$ is called a \emph{white-path} from $u$, or, more precisely, an \emph{$\ell$-white-path}. Let $u\in \mathit{BL}_i$ $(i\neq 0)$, and let $\ell=d(\mathit{BL}_i,\mathit{BL}_{i-1})$. According to Property~\ref{back}, there is at least one $\ell$-white-path from $u$ to a node in $\mathit{BL}_{i-1}$. The maximal white radius of nodes in this path is $\lfloor \ell/2\rfloor-1$, which leads to the following property. \begin{Property} Let $u\in \mathit{BL}_i$ $(i\neq 0)$, and let $\ell=d(\mathit{BL}_i,\mathit{BL}_{i-1})$.
There exists a white path from $u$ that reaches a white node whose white radius is not less than $\lfloor \ell/2\rfloor-1$. \label{maxR+} \end{Property} These properties are used in our exploration algorithm. For example, we can distinguish between a class $A$ node and a class $B$ node by applying these properties. For $u\in A$, there exists a white node $x$ that can be reached by a white path from $u$ such that $R_W(x)=\lfloor d_2/2\rfloor-1$. But for a class $B$ node $u$, the maximal white radius of white nodes that can be reached by a white path from $u$ is not greater than $d_1-2$. Since $d_1\leq \lfloor d_2/2\rfloor$ (see the definition of the $\mathcal{AL}$ labeling), $d_1-2$ is less than $\lfloor d_2/2\rfloor-1$. Figure~\ref{claim} gives an illustration. \subsection{The Local Search Procedure\label{ls}} The following local search procedure can be used to visit all nodes at distance not greater than a given radius from a node.\vspace{6pt} \noindent \textbf{Procedure} $\mathit{LocalSearch}(u,\ell,inport)$ \noindent\textbf{Input}: $u$ is the starting node, $\ell$ is the radius, and $inport$ is the port from which $\mathcal{R}$ enters $u$. \begin{algorithmic}[1] \STATE \textbf{if} \ \ {$\ell=0$} \ \ \textbf{then} $report(u)$\footnote[4]{When the robot reports a node, it does not exit from the procedure nor make any movement.} \STATE \textbf{else} \STATE \ \ \ \textbf{for}{ $outport$ from 0 to $\mathtt{deg}(u)-1$ and $outport\neq inport$ } \textbf{do} \STATE \ \ \ \ \ \ $v\leftarrow$ the neighbor of $u$ which $outport$ leads to \STATE \ \ \ \ \ \ $\mathcal{R}$ moves to $v$ \STATE \ \ \ \ \ \ $inport'\leftarrow$ the port from which $\mathcal{R}$ enters $v$ \STATE \ \ \ \ \ \ $LocalSearch(v,\ell-1,inport')$ \STATE \ \ \ \ \ \ $\mathcal{R}$ moves back to $u$ \STATE \textbf{return} \end{algorithmic} By the call $\mathit{LocalSearch}(u,\ell,-1)$, the robot visits all nodes at distance at most $\ell$ from $u$.
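As a concrete illustration (not part of the paper's model), the recursion above can be sketched in Python, with the robot's port interface modeled by indexed adjacency lists; the names $\mathit{local\_search}$, $\mathit{graph}$, and $\mathit{reported}$ are ours.

```python
# Minimal sketch of LocalSearch over an adjacency-list graph.
# Ports are modeled as indices into each node's neighbor list;
# `reported` collects the nodes reported when the radius reaches 0.
# All names here are illustrative, not from the paper.

def local_search(graph, u, radius, inport, reported):
    if radius == 0:
        reported.append(u)      # report(u): no movement, no early exit
        return
    for outport, v in enumerate(graph[u]):
        if outport == inport:   # do not leave through the entry port
            continue
        # the robot moves to v; the port it arrives by is the index
        # of u in v's neighbor list
        inport_v = graph[v].index(u)
        local_search(graph, v, radius - 1, inport_v, reported)
        # the robot moves back to u

# Usage: a path 0-1-2-3; search radius 2 from node 1, entered by no port.
g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
out = []
local_search(g, 1, 2, -1, out)
# out == [3]: only node 3 lies at depth 2 on a non-backtracking walk
```

As in the text, in a graph with cycles the same node may be reported more than once, since distinct non-backtracking walks of the given length may reach it.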
In the local search from $u$ within radius $\ell$, there are at most $\mathit{LS}(\ell)=2\Delta\sum_{i=0}^{\ell-1}(\Delta-1)^i=O(\Delta^{\ell})$ edge traversals, and at most $\Delta(\Delta-1)^{\ell-1}$ nodes are reported. Note that an edge may be visited more than once, and a node could be reported more than once. The robot is in node $u$ when the procedure terminates. We summarize the results on the $\mathit{LocalSearch}$ procedure in the following lemma. \begin{lemma} In the local search from node $u$ within radius $\ell$, a robot with $O(\ell\log\Delta)$ bits of memory visits all nodes at distance not greater than $\ell$ from $u$ without visiting any other node. There are at most $O(\Delta^{\ell})$ edge traversals, and at most $\Delta(\Delta-1)^{\ell-1}$ nodes are reported. The robot is in node $u$ when the local search terminates. \label{LSP} \end{lemma} We can revise the procedure to explore only the paths that are greater than a given path $P$ from $u$ as follows. The robot first moves to the end of $P$ via $P$, restores the context of the procedure for $P$ in its memory, and then resumes the procedure. \subsection{Exploration Guided by Labeling\label{exp}} The overall exploration performed by the robot is a depth-first search (DFS) of the implicit spanning tree. All nodes will be visited in the DFS. The robot maintains a state $s\in\{\mathtt{up}, \mathtt{down}\}$. Initially, $\mathcal{R}$ is at the root $r$ of an $\mathcal{AL}$ labeling and leaves $r$ by the port numbered 0 in state $\mathtt{down}$. Assume that $\mathcal{R}$ enters a black node $u$ via a path $P$ that belongs to the implicit spanning tree. If $\mathcal{R}$ is in state $\mathtt{down}$, it searches for the minimal child path of $u$. If $\mathcal{R}$ is in state $\mathtt{up}$, it moves down to the starting node of $P$ and searches for the minimal child path of $u$ that is greater than $P^{-1}$.
In both cases, if $\mathcal{R}$ does not find the desired path, $\mathcal{R}$ moves to $\mathit{parent}(u)$ via the parent path of $u$ and switches its state to $\mathtt{up}$; otherwise $\mathcal{R}$ moves to the end node of the path found and switches its state to $\mathtt{down}$. The correctness of these procedures will be proved later. To know whether a path belongs to the spanning tree, we use the following procedures. \begin{enumerate} \item $\mathit{Get\_Par\_Path}(u)$ identifies the parent path of $u\notin\{r\}\cup \mathit{BL}_1$ and $\mathit{parent}(u)$. If $v=\mathit{parent}(u)$ is found, the procedure returns $v$, and $\mathcal{R}$ has moved to $v$ and recorded the parent path of $u$ in its memory; otherwise the procedure returns ``false". \item $\mathit{Next\_Child\_Path}(u,P)$ identifies the minimal child path from $u\neq r$ that is greater than $P$, where $P$ is a child path of $u$ or $\varnothing$\footnotemark[5]. When such a child path, say $P'$, is found, the procedure returns the end of $P'$, and $\mathcal{R}$ has moved to the end of $P'$. If no path is found, the robot goes back to $u$, and the procedure returns ``false". \footnotetext[5]{$P$ can be replaced by the label of $P$ as the initial node of $P$ is also input.} \end{enumerate} All these procedures use a revised local search procedure, namely the \emph{white local search}. Given a radius $d$, a node $u$, and a path $P$ from $u$\footnotemark[5], the white local search procedure enumerates all the $d$-white-paths from $u$ that are greater than $P$. It returns ``true" if such a path exists and ``false" otherwise. In both cases, the robot is in $u$ when the procedure terminates. This procedure is derived from $\mathit{LocalSearch}$, and the following line should be inserted into $\mathit{LocalSearch}$ between line 2 and line 3.
\begin{algorithmic}[ ] \STATE \textbf{if} {$u$ is black and $\ell\neq$ the initial radius of the local search} \textbf{then}\ \textbf{return} \end{algorithmic} This procedure has the same properties as stated in Lemma~\ref{LSP}. The term ``local search" refers to the white local search procedure in the remainder of the paper. \subsubsection{Procedures $\mathit{Get\_Par\_Path}$ and $\mathit{Next\_Child\_Path}$\label{Proc}} We first present procedures that will be used many times in the exploration procedures. \textbf{Procedure $\mathit{Is\_B}$} The $\mathit{Is\_B}$ procedure takes as input a black node $x$ that belongs to class $B$, $C$, or $D$ and returns ``$B$" iff $x$ is in class $B$. The robot first checks whether $x$ is a \textbf{B}-node. If it is, $\mathit{Is\_B}(x)$ returns ``\textbf{B}-node". If not, the robot performs a local search from $x$ within radius $d_1$ (denote this local search by $\mathit{LS}_1$). Once a black node $y$ that has no \textbf{B}-node neighbor is reported, $\mathit{Is\_B}(x)$ returns ``$B$". If no such black node $y$ is reported or no node is reported, $\mathit{Is\_B}(x)$ returns ``$D$". In any case, $\mathcal{R}$ is in node $x$ when the procedure returns. \textbf{Procedure $\mathit{C\_or\_D}$} The $\mathit{C\_or\_D}$ procedure takes as input a black node $x$ ($x\neq r$) that belongs to class $C$ or $D$ and returns the class in which $x$ is. If $x$ is not a \textbf{B}-node, $\mathit{C\_or\_D}(x)$ returns ``$D$". Otherwise the robot performs a local search from $x$ within radius $1$ (denoted by $\mathit{LS}_1$). For each black neighbor $y$ of $x$ reported, perform $\mathit{Is\_B}(y)$. Once $\mathit{Is\_B}(y)$ returns ``$B$", $\mathit{C\_or\_D}(x)$ returns $``C"$. If for every $y$, $\mathit{Is\_B}(y)$ does not return ``$B$", then $\mathit{C\_or\_D}(x)$ returns $``D"$. In any case, $\mathcal{R}$ is in node $x$ when the procedure returns. \textbf{Procedure $\mathit{A\_or\_B}$} The $\mathit{A\_or\_B}$ procedure takes as input a black node $x$.
If $x$ belongs to class $A$ or $B$, $\mathit{A\_or\_B}(x)$ returns the class to which $x$ belongs. The robot performs a local search (denoted by $\mathit{LS}_1$) within radius $d_1$ from $x$. $\mathit{A\_or\_B}(x)$ returns ``$A$" if in this local search a white node is reported whose white radius is $d_1-1$ and returns ``$B$" otherwise. \if 0 If $x$ is in class $D$ and $x$ has children in the spanning tree, $\mathit{A\_or\_B}(x)$ will return ``$A$".\fi In any case, $\mathcal{R}$ is in node $x$ when the procedure returns. \vspace{6pt} Now we present procedures $\mathit{Get\_Par\_Path}$ and $\mathit{Next\_Child\_Path}$. \textbf{Procedure $\mathit{Get\_Par\_Path}(u)$} Assume that $\mathcal{R}$ starts from a node $u\in \mathit{BL}_i$ $(i\geq 2)$. $\mathcal{R}$ aims at identifying the parent path of $u$ and moving to $\mathit{parent}(u)$. According to the class that $u$ belongs to, we consider four cases. In the following, ``$X\rightarrow Y$" means that $\mathcal{R}$ is in an $X$-layer node $u$ and tries to move to $\mathit{parent}(u)$ in the adjacent $Y$-layer. In each case, the robot first calls procedure $\mathit{Path\_Enumeration}$ (PE for short) and then calls procedure $\mathit{Node\_Checking}$ (NC for short) for each path enumerated by PE. The functions of these two procedures are: (1) PE: enumerating (reporting) a set of white paths that contains $\mathit{pred\_path}(u)$, together with their ends. (2)~NC: checking whether a path enumerated by PE is in $\mathit{pred\_path}(u)$. Since the local search enumerates paths in lexicographic order, in the following cases, the node in $pred(u)$ first found by NC is $\mathit{parent}(u)$, and the path recorded by the robot is the parent path of $u$. Figure~\ref{B2A} gives an illustration. \begin{figure*}[ht] \hbox to\textwidth{\hfil\includegraphics[width=\textwidth]{B2A_large.eps}\hfil} \caption{ Four cases in $\mathit{Get\_Par\_Path}$. The automaton starts from node $u$. It can reach nodes $x$ and $x'$ by PE.
Node $x$ is in the $pred$ set of node $u$, while $x'$ is not. \label{B2A}} \hbox to\textwidth{\hfil\includegraphics[width=\textwidth]{A2B_large.eps}\hfil} \caption{Four cases in $\mathit{Next\_Child\_Path}$. The automaton starts from node $u$. It can reach nodes $x$ and $x'$ by PE. Node $x$ is in the $succ$ set of node $u$, while $x'$ is not. \label{A2xB}} \end{figure*} \noindent \textbf{ Case(1)} $C\rightarrow B$ \noindent PE: Perform a local search from $u$ within radius 1 ($\mathit{LS}_1$). \noindent NC: For each black node $x$ reported by PE, call $\mathit{Is\_B}(x)$. Once $\mathit{Is\_B}(x)$ returns ``$B$", we return $x$. \noindent \textbf{ Case(2)} $B\rightarrow A$ \noindent PE: Perform a local search from $u$ within radius $d_1$ ($\mathit{LS}_1$). \noindent NC: For each black node $x$ reported by PE, call $\mathit{A\_or\_B}(x)$. Once $\mathit{A\_or\_B}(x)$ returns ``$A$", we return $x$. \noindent \textbf{ Case(3)} $A\rightarrow D$ \noindent PE: (i) Perform a local search from $u$ within radius $d_1$ ($\mathit{LS}_1$). (ii) From each white node $v$ reported, perform a local search within radius $d_1-1$ ($\mathit{LS}_2$). (iii) If no node visited in $\mathit{LS}_2$ is black, perform a local search within radius $d_2-d_1$ from $v$ ($\mathit{LS}_3$). \noindent NC: For each black node $x$ reported by PE, if $x$ has a \textbf{B}-node neighbor, we return $x$. \noindent \textbf{ Case(4)} $D\rightarrow C$ \noindent PE: Perform a local search from $u$ within radius 1 ($\mathit{LS}_1$). \noindent NC: For each black node $x$ reported by PE, call $\mathit{C\_or\_D}(x)$. Once $\mathit{C\_or\_D}(x)$ returns ``$C$", we return $x$. In the above cases, if $x$ is returned by NC, then $x$ is $\mathit{parent}(u)$, the path recorded in $\mathcal{R}$ is the parent path of $u$, and the robot has moved to $\mathit{parent}(u)$; otherwise we go back to PE to enumerate another $x$ for NC.
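The four cases share one control structure, which can be sketched in Python as follows (a schematic only: $\mathit{pe}$ and $\mathit{nc}$ are hypothetical stand-ins for the case-specific local searches and class tests such as $\mathit{Is\_B}$, $\mathit{A\_or\_B}$, and $\mathit{C\_or\_D}$).

```python
# Schematic of the shared PE/NC loop of Get_Par_Path.  PE yields
# candidate (path, end_node) pairs in increasing lexicographic order;
# NC accepts the first end node lying in pred(u).  Both callables are
# illustrative stand-ins, not the paper's literal procedures.

def get_par_path(u, pe, nc):
    for path, x in pe(u):   # PE: enumerate candidate white paths
        if nc(x):           # NC: is the end node x in pred(u)?
            return path, x  # parent path of u and parent(u)
    return None             # fails only for u in BL_1 or u = r

# Toy usage: candidates already in lexicographic order; NC accepts
# nodes tagged as lying in the previous black layer.
candidates = [((0, 1), "x1"), ((0, 2), "x2"), ((1, 0), "x3")]
in_pred = {"x2", "x3"}
result = get_par_path("u", lambda u: iter(candidates), lambda x: x in in_pred)
# result is ((0, 2), "x2"): the first candidate accepted by NC
```

Because the candidates arrive in increasing lexicographic order, the first accepted candidate is exactly the parent path, matching the argument in the text.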
\vspace{4pt} \noindent\textit{Identification of the Root and $\mathit{BL}_1$ Nodes.\label{IDr}} In $D\rightarrow C$, we distinguish a $C$-layer node from a $D$-layer node by checking whether the node has a neighbor in a $B$-layer. Since the nodes in $\mathit{BL}_1\cup\{r\}$ have no ancestor in any $B$-layer, for $u\in \mathit{BL}_1\cup\{r\}$, $D\rightarrow C$ in $\mathit{Get\_Par\_Path}(u)$ will fail to find the parent of $u$. For any other node, $\mathit{Get\_Par\_Path}$ will succeed in finding its parent. Thus if $\mathit{Get\_Par\_Path}(u)$ fails, then $u$ is in $\mathit{BL}_1\cup\{r\}$. The next problem is how the robot identifies the root. The solution is that when leaving the root, the robot memorizes, at the nodes it arrives at, the ports by which it should return to the root. We revise $\mathit{Get\_Par\_Path}(u)$ as follows. If $D\rightarrow C$ fails to find $\mathit{parent}(u)$, we have $u\in \mathit{BL}_1\cup\{r\}$; $\mathcal{R}$ goes to $r$ through the port it memorized, and the procedure returns $r$. \vspace{6pt} \textbf{Procedure $\mathit{Next\_Child\_Path}(u,P)$} For a node $u\in \mathit{BL}_i$ $(i\geq 0)$ and a path $P$ from $u$, the procedure identifies the minimal child path of $u$ greater than $P$. The robot calls the $\mathit{Enumerating}$ procedure to enumerate some paths from $u$ greater than $P$ and calls the $\mathit{Identifying}$ procedure to check whether an enumerated path is a child path. If such a path is found, we return its end node; otherwise we return ``false", and the robot backtracks to $u$; that is, $P$ is the maximal child path of $u$. \if 0 For a node $u\in \mathit{BL}_i$ $(i\geq 0)$, and a path $P$ from $u$, the procedure identifies the minimal child path of $u$ that is greater than $P$. The robot first calls procedure $\mathit{Enumerating}$, then calls procedure $\mathit{Identifying}$ for each path enumerated by $\mathit{Enumerating}$.
The functions of the two procedures are: (1) $\mathit{Enumerating}$: (i) PE: Using a local search to enumerate all $d$-white-paths from $u$ that are greater than $P$, where $d=d(\mathit{BL}_{i+1},\mathit{BL}_{i})$. (ii) NC: Checking whether a node enumerated by PE is in $succ(u)$. (2) $\mathit{Identifying}$: Check whether the path leading to a node is a child path. If so, we return the node. If all children of $u$ are visited or $u$ has no child in the spanning tree, we return ``false", and the robot backtracks to $u$. \fi \vspace{4pt} \noindent$\mathit{Enumerating}$. The procedure contains two parts: (i) PE: Use a local search to enumerate all $d$-white-paths $P'$ from $u$ that are greater than $P$, where $d=d(\mathit{BL}_{i},\mathit{BL}_{i+1})$. If $P'$ does not exist, return ``false". (ii) NC: Check whether the end node of $P'$ is in $succ(u)$; if so, return this node. We consider the following cases. Figure~\ref{A2xB} gives an illustration. \noindent\textbf{ Case(1)}. $C\rightarrow D$ \noindent PE: Perform a local search from $u$ within radius 1 starting from $P$ ($\mathit{LS}_1$). \noindent NC: For each black node $x$ reported by PE, call $\mathit{Is\_B}(x)$. If ``$D$" is returned, we return $x$. If ``\textbf{B}-node" is returned, we call $\mathit{C\_or\_D}(x)$; if ``$D$" is returned, we return $x$. \noindent\textbf{ Case(2)}. $D \rightarrow A$ \noindent PE: Perform a local search from $u$ within radius $d_2$ starting from $P$ ($\mathit{LS}_1$). \noindent NC: For each black node $x$ reported by PE, if $x$ has no \textbf{B}-node neighbor, we return $x$. \noindent\textbf{ Case(3)}. $A\rightarrow B$ \noindent PE: Perform a local search from $u$ within radius $d_1$ starting from $P$ ($\mathit{LS}_1$). \noindent NC: For each black node $x$ reported by PE, call $\mathit{A\_or\_B}(x)$. Once ``$B$" is returned, we return $x$. \noindent\textbf{ Case(4)}. $B\rightarrow C$ \noindent PE: Perform a local search from $u$ within radius 1 starting from $P$ ($\mathit{LS}_1$).
\noindent NC: The black nodes without any white neighbor reported by PE are in $succ(u)$ ($\mathit{LS}_2$). We return the first such node. In the above cases, if $x$ is returned by NC, then $x$ is in $succ(u)$; otherwise we check another $x$ reported by PE. \vspace{4pt} \noindent$\mathit{Identifying}$. When $\mathit{Enumerating}$ has found a shortest path $P'$ from $u$ to a node $x$ in $succ(u)$, $\mathcal{R}$ has moved to $x$ and recorded $P'$ in its memory. $\mathcal{R}$ then checks whether this path is a child path of $u$. If it is, the parent path of $x$ should be $P'^{-1}$. We use \textbf{$\mathit{Check\_Par\_Path}(x,P'^{-1})$} to verify this. The $\mathit{Check\_Par\_Path}$ procedure is similar to the $\mathit{Get\_Par\_Path}$ procedure except that the former's PE part is performed in decreasing lexicographic order. If $\mathit{Check\_Par\_Path}(x,P'^{-1})$ finds a path in $\mathit{pred\_path}(x)$ smaller than $P'^{-1}$, then $P'^{-1}$ is not the parent path of $x$, and $\mathit{Check\_Par\_Path}$ returns ``false"; otherwise $P'^{-1}$ is the parent path of $x$, and $\mathit{Check\_Par\_Path}$ returns ``true". In both cases, $\mathcal{R}$ is in node $x$ when $\mathit{Check\_Par\_Path}$ terminates. If $P'$ is not a child path of $u$, we go back to $\mathit{Enumerating}$ to enumerate another path. If a child path of $u$ is found, then $\mathit{Next\_Child\_Path}(u,P)$ returns the end node of that child path; otherwise it returns ``false". \vspace{6pt} \textbf{Exploration from an Arbitrary Node} When starting from an arbitrary node $x$, the robot should first find the root. If $x$ is a white node, the robot performs a normal local search within radius $d_2-1$ from $x$ and stops when reaching a black node $u$ ($u$ is not a \textbf{B}-node). If $x$ is a \textbf{B}-node, the robot performs a normal local search from $x$ within radius $2$ and stops when reaching a black node that is not a \textbf{B}-node. A \textbf{B}-node is either in class $C$ or in class $D$.
For a \textbf{B}-node in class $C$, a non-\textbf{B}-node black node will be reached by a local search within radius 1; for a \textbf{B}-node in class $D$, such a node will be reached by a local search within radius 2. Therefore, in all cases the robot can reach a black node $u$ that is not a \textbf{B}-node. The robot then identifies the class in which $u$ is. If $u$ has a \textbf{B}-node neighbor, the robot performs $\mathit{Is\_B}(u)$. If ``$B$" is returned, then $u\in B$. If ``$D$" is returned, then $u\in D$. (Note that $\mathit{Is\_B}(u)$ cannot answer ``\textbf{B}-node".) If $u$ does not have a \textbf{B}-node neighbor, the robot calls $\mathit{A\_or\_B}(u)$. We have $u\in A$ if ``$A$" is returned and $u\in B$ if ``$B$" is returned. After learning the class of the starting node, the robot calls procedure $\mathit{Get\_Par\_Path}$ repeatedly to find the root. But our exploration cannot identify $r$ without memorizing the port leading back to $r$. Fortunately, by the discussion in the previous section, the robot knows whether it is in a $\mathit{BL}_1$ node. Let $r'$ be the first $\mathit{BL}_1$ node found by the robot. Then $r$ is one of the \textbf{B}-node neighbors of $r'$. We use every \textbf{B}-node neighbor of $r'$ as a candidate root and perform an exploration from each of them. The robot should memorize the port by which it will return to $r'$. In particular, the exploration rooted at $r$ itself will be performed, and it visits all the nodes in $G$. The number of edge traversals in this case is at most $\Delta$ times as large as that in the exploration from $r$. \subsection{Correctness of the Exploration} \begin{lemma} For a black node $x$ that belongs to class $B$, $C$, or $D$, $\mathit{Is\_B}(x)$ returns ``$B$" iff $x$ is in class $B$ and returns ``$D$" iff $x$ is in $D$ and not a \textbf{B}-node. The robot is in node $x$ when $\mathit{Is\_B}(x)$ exits. In the call $\mathit{Is\_B}(x)$, the robot needs $O(d_1\log\Delta)$ bits of memory, and the total number of edge traversals of the robot is $O(\Delta^{d_1+2})$.
\label{isb} \end{lemma} \begin{proof} If $x\in B$, $\mathit{LS}_1$ will report at least one node in class $A$ according to Property~\ref{back}. Since any node in class $A$ has no \textbf{B}-node neighbor and has a white neighbor, $\mathit{LS}_1$ will report at least one such node. Therefore, if $x$ is a node in class $B$, $\mathit{Is\_B}(x)$ returns ``$B$". If $x$ is a \textbf{B}-node, $\mathit{Is\_B}(x)$ returns ``\textbf{B}-node". It can be easily verified that such an $x$ is in class $C$ or in class $D$. Let $x$ be a class $D$ node that is not a \textbf{B}-node. Then either $\mathit{LS}_1$ does not report any node or every node reported by $\mathit{LS}_1$ belongs to class $D$. Since $d_2> d_1$, $\mathit{LS}_1$ will not reach any class $A$ node. By Property~\ref{back}, any node in class $D$ has at least one neighbor in class $C$ (which is a \textbf{B}-node), so any black node reported by $\mathit{LS}_1$ has a \textbf{B}-node neighbor, and thus $\mathit{Is\_B}(x)$ returns ``$D$". Therefore, for a node $x$ that belongs to class $B$, $C$, or $D$, $\mathit{Is\_B}(x)$ returns ``$B$" iff $x$ is in class $B$ and ``$D$" iff $x$ is in $D$ and not a \textbf{B}-node. By definition, $\mathcal{R}$ is in node $x$ when $\mathit{Is\_B}(x)$ exits. The number of edge traversals of $\mathit{Is\_B}(x)$ is not greater than that of a local search from $x$ within radius $d_1+2$. By Lemma~\ref{LSP}, there are at most $O(\Delta^{d_1+2})$ edge traversals in the call $\mathit{Is\_B}(x)$, and the memory space of $\mathcal{R}$ is $O(d_1\log\Delta)$ bits. \end{proof} \begin{lemma} For a black node $x\neq r$ that belongs to class $C$ or $D$, $\mathit{C\_or\_D}(x)$ returns ``$C$" iff $x\in C$ and ``$D$" iff $x\in D$. The robot is in node $x$ when $\mathit{C\_or\_D}$ exits. In the call $\mathit{C\_or\_D}(x)$, the robot needs $O(d_1\log\Delta)$ bits of memory, and the total number of edge traversals of the robot is $O(\Delta^{d_1+3})$.
\label{cord} \end{lemma} \begin{proof} If $x\in C\setminus\{r\}$, then $x$ has a neighbor $y$ in class $B$, and thus $\mathit{Is\_B}(y)$ returns ``$B$" by Lemma~\ref{isb}. Therefore, $\mathit{C\_or\_D}(x)$ returns ``$C$". If $x\in D$, then all neighbors of $x$ belong to class $C$ or $D$. Thus, for any neighbor $y$ of $x$, $\mathit{Is\_B}(y)$ does not return ``$B$" by Lemma~\ref{isb}. Therefore, $\mathit{C\_or\_D}(x)$ returns ``$D$". By definition, $\mathcal{R}$ is in node $x$ when $\mathit{C\_or\_D}(x)$ exits. The number of edge traversals of $\mathit{C\_or\_D}(x)$ is not greater than that of a local search from $x$ within radius $d_1+3$. By Lemma~\ref{LSP}, there are at most $O(\Delta^{d_1+3})$ edge traversals in the call $\mathit{C\_or\_D}(x)$, and the memory space of $\mathcal{R}$ is $O(d_1\log\Delta)$ bits. \end{proof} \begin{lemma} For a black node $x$ that belongs to class $A$ or $B$, $\mathit{A\_or\_B}(x)$ returns ``$B$" if $x$ is in class $B$, and returns ``$A$" otherwise. The robot is in $x$ when $\mathit{A\_or\_B}$ exits. In the call $\mathit{A\_or\_B}(x)$, the robot needs $O(d_1\log\Delta)$ bits of memory, and the total number of edge traversals of the robot is $O(\Delta^{2d_1-1})$. \if 0 If $x\in D$ and $child(x)\neq\varnothing$, then $\mathit{A\_or\_B}(x)$ returns ``$A$".\fi \label{aorb} \end{lemma} \begin{proof} According to Property~\ref{maxR+}, for $x\in A$, there exists a white node whose white radius is not less than $\lfloor d_2/2\rfloor-1$ that can be reached by a white path from $x$. According to Property~\ref{maxR}, for $x\in B$, the white radius of any node that has a white path to $x$ is not greater than $d_1-2$. By $\mathcal{AL}$, we have $\lfloor d_2/2\rfloor\geq d_1$. Therefore, we know whether $x$ is in class $A$ or in class $B$ by checking the maximal white radius of nodes that have a white path to $x$.
Thus, if $x\in A$, $\mathit{LS}_1$ will find a node with white radius $d_1-1$, and $\mathit{A\_or\_B}(x)$ returns ``$A$"; if $x\in B$, no such node will be found, and $\mathit{A\_or\_B}(x)$ returns ``$B$". \if 0 For $x\in D$ and $child(x)\neq\varnothing$, there is a $d_2$-white-path from $x$ to a child of $x$, then there is a white node in this path whose white radius is $d_1-1$. $\mathit{A\_or\_B}(x)$ returns ``$A$".\fi By definition, $\mathcal{R}$ is in node $x$ when $\mathit{A\_or\_B}(x)$ exits. The number of edge traversals in the call $\mathit{A\_or\_B}(x)$ is not greater than that of the local search from $x$ within radius $2d_1-1$. By Lemma~\ref{LSP}, there are at most $O(\Delta^{2d_1-1})$ edge traversals in the call $\mathit{A\_or\_B}(x)$, and the memory space of $\mathcal{R}$ is $O(d_1\log\Delta)$ bits. \end{proof} \begin{lemma} For a black node $u$, let $\mathcal{R}$ know to which class $u$ belongs. For $u\notin \mathit{BL}_1\cup\{r\}$, when $\mathit{Get\_Par\_Path}(u)$ exits, $\mathit{parent}(u)$ is returned, and $\mathcal{R}$ is in $\mathit{parent}(u)$ and has recorded the parent path of $u$ in its memory. For $u\in \mathit{BL}_1$, $\mathit{Get\_Par\_Path}(u)$ can identify that $u$ is in $\mathit{BL}_1$ and makes $\mathcal{R}$ return to $r$. In a call to $\mathit{Get\_Par\_Path}$, there are at most $O(\Delta^{d_2+2})$ edge traversals, and the robot needs $O(d_2\log\Delta)$ bits of memory space. \label{GetPar} \end{lemma} \begin{proof} Let $\mathcal{R}$ start at a node $u\in \mathit{BL}_i$, $i\geq 2$, when $\mathit{Get\_Par\_Path}(u)$ is called. We check separately the four cases in the procedure. In each case, two parts are to be proved: (1) PE can enumerate all paths in $\mathit{pred\_path}(u)$ and their ends; (2) NC can identify whether the end nodes reported by PE are in $pred(u)$. \textbf{ Case(1)}. $u\in C$ ($C\rightarrow B$) In this case, $pred(u)$ is a subset of the neighbors of $u$, since $d(\mathit{BL}_{i},\mathit{BL}_{i-1})=1$.
Therefore, all paths in $\mathit{pred\_path}(u)$ can be enumerated by PE, as can the nodes in $pred(u)$. Any neighbor $x$ of $u$ belongs to class $B$ ($\mathit{BL}_{i-1}$), class $C$ ($\mathit{BL}_{i}$), or class $D$ ($\mathit{BL}_{i+1}$). By Lemma~\ref{isb}, $\mathit{Is\_B}(x)$ returns ``$B$" if and only if $x$ is in class $B$, i.e., in $pred(u)$. \textbf{ Case(2)}. $u\in B$ ($B\rightarrow A$) In this case, $d(\mathit{BL}_{i},\mathit{BL}_{i-1})=d_1$. By the local search from $u$ within radius $d_1$ (PE), all paths in $\mathit{pred\_path}(u)$ and all nodes in $pred(u)$ can be reported. The reported black nodes belong to class $A$ ($\mathit{BL}_{i-1}$) or $B$ ($\mathit{BL}_{i}$); among them only the nodes in class $A$ are in $pred(u)$. According to Lemma~\ref{aorb}, $\mathit{A\_or\_B}(x)$ returns ``$A$" if and only if $x$ is in class $A$. Thus by calling $\mathit{A\_or\_B}$ the nodes in $pred(u)$ can be identified. \textbf{ Case(3)}. $u\in A$ ($A\rightarrow D$) In this case, $d(\mathit{BL}_{i},\mathit{BL}_{i-1})=d_2$. By $\mathit{LS}_1$ and $\mathit{LS}_2$, all white nodes at distance $d_1$ from both $u$ and $\mathit{BL}_i$ can be reported. As $d(\mathit{BL}_{i},\mathit{BL}_{i+1})=d_1$, these white nodes are between $\mathit{BL}_{i}$ and $\mathit{BL}_{i-1}$. The paths in $\mathit{pred\_path}(u)$ containing such a white node can be enumerated by $\mathit{LS}_3$. Since every path in $\mathit{pred\_path}(u)$ contains such a white node, PE can enumerate all paths in $\mathit{pred\_path}(u)$ and their ends. \if 0 By $\mathit{LS}_1$, the white node $v$ in the parent path of $u$ that is at distance $d_1$ from $u$ will be reported. As the white radius of $v$ is $d_1-1$, in $\mathit{LS}_2$, no black node will be visited. In $\mathit{LS}_3$, the parent path of $u$ and $\mathit{parent}(u)$ will be reported. \fi A black node $x$ reported by PE belongs to class $D$ or $A$.
From the observations on $\mathcal{AL}$, any node in class $D$ has at least one \textbf{B}-node neighbor, and any node in class $A$ has no \textbf{B}-node neighbor. So if $x$ has a \textbf{B}-node neighbor, then $x$ belongs to class $D$. Therefore the nodes in $pred(u)$ can be identified. \textbf{ Case(4)}. $u\in D$ ($D\rightarrow C$) In this case, $pred(u)$ is a subset of the neighbors of $u$. PE can enumerate all paths in $\mathit{pred\_path}(u)$ and all nodes in $pred(u)$. For any neighbor $x$ of $u$, $x\in C$ or $x\in D$. By Lemma~\ref{cord}, $\mathit{C\_or\_D}(x)$ can determine whether $x$ is in class $C$, which means $x\in pred(u)$. \vspace{6pt} All the local searches in this procedure are performed in increasing lexicographic order. According to $\mathcal{AL}$, in the above cases, the node in $pred(u)$ found first is $\mathit{parent}(u)$, and the path stored in the memory of $\mathcal{R}$ is the parent path of $u$. Since $\mathit{parent}(u)$ exists, $\mathit{Get\_Par\_Path}(u)$ returns $\mathit{parent}(u)$. For $u\in \mathit{BL}_1\cup \{r\}$, let $\mathcal{R}$ treat $u$ as a class $D$ node. We can verify that $\mathit{Is\_B}(u)$ returns ``$D$", and $D\rightarrow C$ will fail to find the parent path of $u$. By the above discussion, for nodes $x\notin \mathit{BL}_1\cup \{r\}$, $\mathit{Get\_Par\_Path}(x)$ returns $\mathit{parent}(x)$. Thus $\mathit{Get\_Par\_Path}(u)$ identifies that $u$ is in $\mathit{BL}_1\cup \{r\}$. $\mathcal{R}$ then moves to $r$ from $u$ by the memorized port. The worst-case number of edge traversals occurs in Case $A\rightarrow D$. By Lemma~\ref{LSP}, this number is not greater than $\mathit{LS}(d_1)+O(\Delta^{d_1})\bigl(\mathit{LS}(d_1-1)+\mathit{LS}(d_2-d_1+2)\bigr)=O(\Delta^{d_2+2})$. As for the memory of $\mathcal{R}$, in the worst case ($A\rightarrow D$), the robot records a path of length $d_2+2$ and maintains a constant number of variables; therefore the space is $O(d_2\log\Delta)$ bits.
\end{proof} \begin{lemma} Let $u\notin \mathit{BL}_1\cup\{r\} $ be a black node, and let $P$ be a white path from $u$. Let the robot know to which class $u$ belongs. $\mathit{Check\_Par\_Path}(u,P)$ returns ``true" if $P$ is the parent path of $u$ and returns ``false" otherwise. For $u\in \mathit{BL}_1\cup \{r\}$, let $\mathcal{R}$ treat $u$ as a class $D$ node. $\mathit{Check\_Par\_Path}(u,P)$ returns ``true" for any path $P$ from $u$ to $r$ containing one edge. When the procedure exits, $\mathcal{R}$ is in node $u$. There are at most $O(\Delta^{d_2+2})$ edge traversals in a call to $\mathit{Check\_Par\_Path}$, and the memory space of $\mathcal{R}$ is $O(d_2\log\Delta)$ bits. \label{ChkSon} \end{lemma} \begin{proof} Procedure $\mathit{Check\_Par\_Path}$ is similar to $\mathit{Get\_Par\_Path}$ except that the PE part of $\mathit{Check\_Par\_Path}$ is performed in decreasing lexicographic order. By Lemma~\ref{GetPar}, for $u\notin \mathit{BL}_1\cup\{r\} $, provided that the robot knows which class $u$ is in, $\mathit{Check\_Par\_Path}(u,P)$ will find a path in $\mathit{pred\_path}(u)$ that is lexicographically smaller than $P$ if such a path exists. Whenever a path in $\mathit{pred\_path}(u)$ is found by $\mathit{Check\_Par\_Path}(u,P)$, $P$ is not the parent path of $u$ according to the definition of the parent path. $\mathcal{R}$ returns to $u$ via the recorded path. If no such path is found, $P$ is the minimal path in $\mathit{pred\_path}(u)$, i.e., the parent path of $u$. $\mathcal{R}$ returns to $u$ by Lemma~\ref{LSP}. Therefore $\mathit{Check\_Par\_Path}(u,P)$ can tell whether $P$ is the parent path of $u$. For any $x\in \mathit{BL}_1\cup \{r\}$, $\mathit{Is\_B}(x)$ returns ``$D$", and thus $\mathit{C\_or\_D}(x)$ returns ``$D$". Therefore, $\mathit{Check\_Par\_Path}(u,P)$ returns ``true" for any path $P$ from $u$ to $r$ of length 1. The time and space complexities are similar to those of $\mathit{Get\_Par\_Path}$.
\end{proof} \begin{lemma} Let $u\neq r$ be a black node, and let $P$ be a white path from $u$. Let $P'$ be the minimal child path of $u$ greater than $P$ if this path exists, and let $\mathcal{R}$ know to which class $u$ belongs. Procedure $\mathit{Next\_Child\_Path}(u,P)$ returns the end of $P'$ if $P'$ exists, and $\mathcal{R}$ is in the end node of $P'$ when the procedure exits. If $P'$ does not exist, then $\mathit{Next\_Child\_Path}$ returns ``false" and $\mathcal{R}$ moves to $u$. There are at most $O(\Delta^{2d_2+2})$ edge traversals in $\mathit{Next\_Child\_Path}$, and the memory space of $\mathcal{R}$ is $O(d_2\log\Delta)$ bits.\label{GetChd} \end{lemma} \begin{proof} Let $\mathcal{R}$ start from node $u\in \mathit{BL}_i$ ($i\geq 1$). We first discuss four cases in the $\mathit{Enumerating}$ procedure. In each case, two parts are to be proved: (1) PE can enumerate all paths in $succ\_path(u)$ that are greater than $P$; (2) NC can identify whether the end nodes of the paths reported by PE are in $succ(u)$. \textbf{ Case(1)}. $u\in C$ ($C\rightarrow D$). In this case, $succ(u)$ is a subset of the neighbors of $u$, since $d(\mathit{BL}_{i},\mathit{BL}_{i+1})=1$. Therefore, all paths in $succ\_path(u)$ that are greater than $P$ can be enumerated by PE. The neighbors of $u$ belong to class $B$ ($\mathit{BL}_{i-1}$), class $C$ ($\mathit{BL}_{i}$), or class $D$ ($\mathit{BL}_{i+1}$). Let $x$ be in $succ(u)$. By Lemma~\ref{isb}, $\mathit{Is\_B}(x)$ returns ``$D$" if and only if $x$ is not a \textbf{B}-node. By Lemma~\ref{cord}, $\mathit{C\_or\_D}(x)$ returns ``$D$" if and only if $x$ is a \textbf{B}-node. Thus NC can identify whether $x$ is in $succ(u)$. \textbf{ Case(2)}. $u\in D$ ($D\rightarrow A$). In this case, since $d(\mathit{BL}_i,\mathit{BL}_{i+1})=d_2$, the local search from $u$ within radius $d_2$ can report all paths in $succ\_path(u)$ that are greater than $P$, together with their end nodes. Any black node reported by PE belongs to either class $A$ or class $D$, and only the nodes in class $A$ belong to $succ(u)$. From the observations on $\mathcal{AL}$ (i.e., any node in class $D$ has at least one \textbf{B}-node neighbor, but any node in class $A$ has none), $D\rightarrow A$ identifies whether a reported node is in $succ(u)$. \textbf{ Case(3)}. $u\in A$ ($A\rightarrow B$). Since $d(\mathit{BL}_i,\mathit{BL}_{i+1})=d_1$, the local search from $u$ within radius $d_1$ can report all paths in $succ\_path(u)$ that are greater than $P$, together with their end nodes. All the nodes in $succ(u)$ can be reported by $\mathit{LS}_1$. Among the reported black nodes, only those in class $B$ are in $succ(u)$. According to Lemma~\ref{aorb}, $x$ is in class $B$ if and only if $\mathit{A\_or\_B}(x)$ returns ``$B$". Thus, by calling $\mathit{A\_or\_B}$, the nodes in $succ(u)$ can be identified. \textbf{ Case(4)}. $u\in B$ ($B\rightarrow C$). Since $succ(u)$ consists of neighbors of $u$, $\mathit{LS}_1$ can report all paths in $succ\_path(u)$ greater than $P$, together with their end nodes. Any class $C$ node is a black node without white neighbors. Thus NC can identify whether the end nodes of the paths reported by PE are in $succ(u)$. In all of the above cases, if a node reported by PE is identified by NC as a node in $succ(u)$, then the path $P'$ reported by PE is in $succ\_path(u)$. Now we consider the $\mathit{Identifying}$ procedure. Let $x$ be the node returned by $\mathit{Enumerating}$.
According to Lemma~\ref{ChkSon}, $\mathit{Check\_Par\_Path}(x,P'^{-1})$ can tell whether $P'^{-1}$ is the parent path of $x$, and if so, $\mathcal{R}$ returns to $x$. Therefore, the minimal child path of $u$ that is greater than $P$ will be identified if it exists. If it does not exist, no path reported by PE passes $\mathit{Identifying}$; $\mathcal{R}$ returns to $u$ in the end, and the procedure returns ``false". Denote by $T_{X\rightarrow Y}$ the number of edge traversals of each case in $\mathit{Next\_Child\_Path}$ and $\mathit{Get\_Par\_Path}$. By Lemma~\ref{LSP}, Case $D\rightarrow A$ has the maximal number of edge traversals, which is $T_{D\rightarrow A}\leq \mathit{LS}(d_2+2)+O(\Delta^{d_2})T_{A\rightarrow D}=O(\Delta^{2d_2+2})$. As for the memory of $\mathcal{R}$, in the worst case ($D\rightarrow A$) the robot records two paths of lengths $d_2$ and $d_2+2$ and maintains a constant number of variables; thus the space is $O(d_2\log\Delta)$ bits. \end{proof} We now consider the case where $r$ is an input of $\mathit{Next\_Child\_Path}$. \begin{lemma} Let $P$ be a path from $u=r$ containing only one edge $e$ ($e$ may be a self-loop). Let $port(e,r)\neq \mathtt{deg}(r)-1$, and let $\mathcal{R}$ know that $u$ is a class $C$ node. $\mathit{Next\_Child\_Path}(u,P)$ identifies the path $P'$, containing only one edge $e'$ from $r$ such that $port(e',r)=port(e,r)+1$, as the minimal child path of $r$ that is greater than $P$.\label{DFSroot} \end{lemma} \begin{proof} We can verify that for any $x\in \mathit{BL}_1\cup \{r\}$, $\mathit{Is\_B}(x)$ returns ``$D$". Thus the following two statements hold. (1) In the $\mathit{Enumerating}$ part of $\mathit{Next\_Child\_Path}(r,P)$, the end node $u$ of $P'$ will be returned. (2) For any path $P''$ from $u$ to $r$ that contains one edge, $\mathit{Check\_Par\_Path}(u,P'')$ returns ``true". Thus the lemma is proved.
\end{proof} The overall exploration performed by our algorithm consists of DFSs of the subtrees of the implicit spanning tree rooted at the $\mathit{BL}_1$ nodes, along with an exploration of $\mathit{BL}_1\cup\{r\}$. The robot starts from $r$, explores each node in $\mathit{BL}_1$, and then explores the subtree rooted at each such node. By Lemmas~\ref{GetPar},~\ref{ChkSon},~\ref{GetChd}, and~\ref{DFSroot}, starting from any $x\in \mathit{BL}_1$, the robot can conduct a DFS of the subtree rooted at $x$. Since the robot can identify nodes in $\mathit{BL}_1$, it can identify whether the DFS of a subtree is finished. If there are multi-edges between $r$ and $x$, the subtree of $x$ will be explored more than once. If $r$ has self-loops, $r$ will be identified as a $D$-layer node but without any child in the spanning tree. In the DFSs, all the black nodes will be visited. For any white node $y$, let $\mathit{BL}_i$ be the black layer such that $d(r,\mathit{BL}_i)<d(r,y)$ and $\ell=d(y,\mathit{BL}_i)$ is minimal. By Property~\ref{back}, there exists $u\in \mathit{BL}_i$ such that there is an $\ell$-white-path from $u$ to $y$. Thus, the PE procedure of $\mathit{Next\_Child\_Path}(u)$ in a DFS will visit $y$. Therefore, all white nodes will also be visited by the DFSs, and thus all the nodes in $G$ will be visited. The robot stops once the exploration is completed, i.e., once it returns to $r$ via the largest port at $r$. \subsection{Bound on the Number of Edge Traversals} By Lemmas~\ref{GetPar} and~\ref{GetChd}, the maximal number of edge traversals of one call to an exploration procedure is $O(\Delta^{2d_2+2})$. In the DFS, when the robot moves from a node $u$ to $\mathit{parent}(u)$ through the parent path $P$ of $u$ in state $\mathtt{up}$, the robot has to move back to $u$ to search for the minimal child path greater than $P^{-1}$. The total number of edge traversals of these moves back is not greater than $bd_2$, where $b=o(n)$ is the number of black nodes.
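The traversal described above is an ordinary DFS driven by the two primitives. The following is a hedged Python sketch, not the paper's actual pseudocode: \texttt{next\_child\_path} and \texttt{get\_par\_path} are abstract callbacks standing in for the procedures of Lemmas~\ref{GetChd} and~\ref{GetPar}, with child paths abstracted to the children they reach.

```python
def dfs_subtree(root, next_child_path, get_par_path):
    """Sketch of the DFS over one subtree of the implicit spanning tree.
    next_child_path(u, last) models Next_Child_Path: it returns the child
    of u reached via the minimal child path greater than the one leading
    to `last` (last=None means no child tried yet), or None if none exists.
    get_par_path(u) models Get_Par_Path and returns parent(u)."""
    visited = [root]
    u, last = root, None
    while True:
        child = next_child_path(u, last)
        if child is not None:            # state `down`: descend to the child
            visited.append(child)
            u, last = child, None
        elif u == root:                  # the DFS of this subtree is finished
            return visited
        else:                            # state `up`: backtrack, then resume
            u, last = get_par_path(u), u
```

On a tree the loop visits the nodes in preorder, mirroring how the robot alternates between $\mathtt{down}$ moves and $\mathtt{up}$ moves followed by a search for the next child path.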
By Lemma~\ref{DFSroot}, the edges from $r$ are all identified as child paths in the DFS. If there are $q$ edges between $r$ and a $\mathit{BL}_1$ node $x$, the subtree rooted at $x$ will be traversed $q$ times. Denote by $T_{all}$ the total number of edge traversals by the robot. We have $T_{all}\leq \Delta(O(\Delta^{2d_2+2})+d_2)o(n)=O(\Delta^{2d_2+3}n)+o(d_2\Delta n)$. For simple graphs, the repetitive traversals can be avoided. When using our algorithm to explore a simple graph labeled by the 1-bit labeling scheme of~\cite{Ilcinkas08} ($\langle r,2,4\rangle$), the total number of edge traversals by the robot is $O(\Delta^{10}n)$, which is similar to that in~\cite{Ilcinkas08}. Consider an $\mathcal{AL}$ labeling $\langle r,d_1,d_2 \rangle$ on $G$ with $L$-ratio $\rho'$. If there are six black layers and $D=d_1+d_2+3$, the labeling has the minimal $L$-ratio $\frac{d_1+d_2+4}{6}$; hence $\rho'\geq\frac{d_1+d_2+4}{6}$, and so $d_1+d_2\leq 6\rho'-4$. For $d_1\geq 2$, we have $2d_2+3 \leq 2(d_1+d_2)-1\leq 12\rho'-9$. Thus our exploration algorithm completes in time $O(\Delta^{12\rho'-9}n)+ o(\rho'\Delta n)$. Since no more than a constant number of paths need to be stored at the same time and the length of such a path is not greater than $d_2$, $O(d_2\log \Delta)=O(\rho'\log \Delta)$ bits of memory suffice for the robot to explore the graph. \section{Exploration While Labeling\label{labeling}} We present an algorithm allowing the robot to label the graph according to an $\mathcal{AL}$ labeling. As in~\cite{Ilcinkas08}, we assume that, before labeling, the graph nodes carry an initial color named ``blank" that the robot can identify. The labeling algorithm takes as input an $\mathcal{AL}$ labeling $L=\langle r,d_1,d_2 \rangle$ and labels the black layers in order. Denote by $G_i$ the subgraph of graph $G$ induced by all nodes at distance at most $d(r,\mathit{BL}_i)$ from the root.
In phase $i$ ($i\geq 2$) of the algorithm, the robot starts from the root, traverses all nodes in $G_i$, and colors the nodes in $\mathit{BL}_i$ black and the nodes in the layers between $\mathit{BL}_{i-1}$ and $\mathit{BL}_{i}$ white. At the end of phase $i$, the robot has colored $G_i$ according to $L$ and returned to the root. The labeling algorithm labels each node only once. In phase $i$, we call $\mathit{BL}_{i-1}$ the \emph{border layer}, nodes in the border layer \emph{border nodes}, and the set of nodes in the layers between $\mathit{BL}_{i-1}$ and $\mathit{BL}_{i}$ the \emph{working interval}. In this section, we always use $\mathit{BL}_{i-1}$ to denote the border layer. Initially, the robot labels the root (phase 0) and its neighbors (phase 1) black and then returns to the root. In phase $i$ ($i\geq 2$), if the border layer belongs to class $A$ or $D$, the labeling procedure has two stages: \vspace{4pt} \noindent(1) The robot colors all nodes in $\mathit{BL}_i$ black and returns to $r$. \noindent(2) The robot colors all nodes in the working interval white and returns to $r$. \vspace{4pt} \noindent If the border layer belongs to class $B$ or $C$, there is only stage 1. We use $X.x$ to denote the stage of the labeling algorithm in which the border layer belongs to class $X$ and the stage is $x$. A 3-bit variable $\mathtt{stage}$ is used to store the stage, initialized to $D.1$ in phase $2$. The labeling algorithm includes two procedures: (1) the exploration procedure and (2) the labeling procedure. The exploration procedure is a revision of the exploration procedure in Section~\ref{exp}. In a stage of phase $i$, the robot identifies some border nodes by the exploration procedure and calls the labeling procedure from each of these nodes to label the blank nodes. After calling the labeling procedure from a node, the robot sets its state to $\mathtt{up}$ and moves up to the parent of the node.
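The stage structure of a phase can be summarized in a few lines (a minimal Python sketch under the rules just stated; the function name is ours, not the paper's):

```python
def phase_stages(border_class):
    """Stages run in one phase, by the class of the border layer:
    classes A and D need two stages (first blacken BL_i, then whiten the
    working interval), while for B and C the successor layer is adjacent,
    so stage 1 alone suffices."""
    two_stage = border_class in ("A", "D")
    return [border_class + ".1"] + ([border_class + ".2"] if two_stage else [])
```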
When the robot returns to $r$ from the largest port, variable $\mathtt{stage}$ changes according to the following diagram. \begin{figure*}[ht] \hbox to\textwidth{\hfil\includegraphics[width=0.5\textwidth]{state-trans-2.eps}\hfil} \label{A2B} \end{figure*} \subsection{Labeling the Nodes} The robot uses the ${Label\_Succ}$ procedure to color nodes. In stage $*.1$, for a node $u$ in the border layer, ${Label\_Succ}$ colors all nodes in $succ(u)$ black. In stages $A.2$ and $D.2$, procedure ${Label\_Succ}$ colors all nodes in the working interval white. ${Label\_Succ}$ accepts a parameter $u$, a node in the border layer. The details of ${Label\_Succ}$ are given in the following, where the six stages are handled in five cases. (1) \textbf{$\mathtt{stage}=B.1$ or $C.1$}. The robot labels all blank neighbors of $u$ black. (2) \textbf{$\mathtt{stage}=D.1$}. The robot performs a local search from $u$ within radius $d_2$. For each reported blank node $x$, it performs a local search from $x$ within radius $d_2-1$. If all black nodes visited in this local search have no \textbf{B}-node neighbor, then the robot colors $x$ black. (3) \textbf{$\mathtt{stage}=D.2$}. The robot performs a local search from $u$ within radius $d_2$ and colors every visited blank node white. (4) \textbf{$\mathtt{stage}=A.1$}. The robot performs a local search from $u$ within radius $d_1$. For every blank node $x$ reported, it performs a local search from $x$ within radius $d_1-1$. If no black node visited in this local search has a white neighbor, then the robot colors $x$ black. (5) \textbf{$\mathtt{stage}=A.2$}. The robot performs a local search from $u$ within radius $d_1$ and colors every visited blank node white. \subsection{Revising the Exploration Procedure} We revise the exploration procedure in Section~\ref{exp} to explore the colored subgraph and color the uncolored subgraph.
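The five cases above amount to a dispatch on $\mathtt{stage}$. Below is a hedged Python sketch: the search and predicate callbacks (\texttt{blank\_nodes\_within}, \texttt{black\_nodes\_within}, and the two neighbor tests) are our stand-ins for the local searches and neighbor tests, not the paper's actual routines.

```python
def label_succ(stage, u, blank_nodes_within, black_nodes_within,
               has_bnode_neighbor, has_white_neighbor, d1, d2, color):
    """Sketch of Label_Succ.  blank_nodes_within(v, r) and
    black_nodes_within(v, r) model the local searches within radius r;
    the two predicates model the B-node-neighbor and white-neighbor tests."""
    if stage in ("B.1", "C.1"):                 # case (1): blanken neighbors
        for x in blank_nodes_within(u, 1):
            color(x, "black")
    elif stage == "D.1":                        # case (2)
        for x in blank_nodes_within(u, d2):
            if not any(has_bnode_neighbor(y)
                       for y in black_nodes_within(x, d2 - 1)):
                color(x, "black")
    elif stage == "D.2":                        # case (3)
        for x in blank_nodes_within(u, d2):
            color(x, "white")
    elif stage == "A.1":                        # case (4)
        for x in blank_nodes_within(u, d1):
            if not any(has_white_neighbor(y)
                       for y in black_nodes_within(x, d1 - 1)):
                color(x, "black")
    elif stage == "A.2":                        # case (5)
        for x in blank_nodes_within(u, d1):
            color(x, "white")
```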
In a local search, when we say that the robot \emph{ignores} a node, we mean that as soon as the robot moves into the node, it leaves the node by the port through which it entered, not visiting any neighbor of the node, and continues the local search. The revisions are given as follows. The revised $\mathit{Get\_Par\_Path}$ procedure ignores all blank nodes it visits. The revised $\mathit{Next\_Child\_Path}$ procedure ignores all blank nodes it visits, except when $\mathit{Next\_Child\_Path}(u,P)$ visits a blank node in case $X\rightarrow Y$ and $\mathtt{stage}=X.*$. In this case, the robot returns to $u$ and calls ${Label\_Succ}(u)$. Table~\ref{table2} gives the operation that the robot performs in each case of the $\mathit{Next\_Child\_Path}$ procedure when visiting a blank node in the different stages. When ${Label\_Succ}(u)$ terminates, the robot is in node $u$; it then backtracks to $\mathit{parent}(u)$ with state $\mathtt{up}$ and continues the exploration. \begin{table*}[!] \caption{In each case, for each stage, the robot performs an operation when visiting a blank node. ``-" means that the robot will not visit a blank node in this combination of case and stage. ``$\heartsuit$" denotes the operation of returning to $u$ and calling ${Label\_Succ}(u)$. ``$\diamondsuit$" denotes the operation of ignoring the blank node.}
\begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Case $\setminus$ $\mathtt{stage}$ & D.1 & D.2 & A.1 & A.2 & B.1 & C.1 \\ \hline $D\rightarrow A$ & $\heartsuit$ & $\heartsuit$ &$\diamondsuit$ &$\diamondsuit$ & - & - \\ \hline $A\rightarrow B$ & - & $\diamondsuit$ &$\heartsuit$ &$\heartsuit$ & $\diamondsuit$ & - \\ \hline $B\rightarrow C$ & $\diamondsuit$ & $\diamondsuit$ &- &$\diamondsuit$ & $\heartsuit$ & $\diamondsuit$ \\ \hline $C\rightarrow D$ & $\diamondsuit$ & $\diamondsuit$ & - & - & $\diamondsuit$ & $\heartsuit$ \\ \hline \end{tabular} \end{center} \label{table2} \end{table*} \subsection{Correctness} For a black node $u\in \mathit{BL}_k$, let $rd=d(\mathit{BL}_k,\mathit{BL}_{k+1})$, and denote by ${wdisc}(u)$ the set of nodes that the robot visits in a white local search within radius $rd$ from $u$. It is easy to verify that the whole graph $G$ is colored according to a labeling scheme $L$ if, for every black node $u$, all the nodes in ${wdisc}(u)$ are colored according to $L$. A black node $u\in L_k$ $(k\geq 0)$ that has no neighbor in $L_{k+1}$ is called a \emph{leaf node}. We prove the correctness of the ${Label\_Succ}$ procedure in the following. \begin{lemma} Let $u\in \mathit{BL}_{i-1}$, and let $G_{i-1}$ $(i\geq 2)$ be colored according to an $\mathcal{AL}$ labeling $L$. If $\mathtt{stage}=B.1$, $C.1$, $A.1$, or $D.1$ and some nodes in $\mathit{BL}_i$ are colored black, ${Label\_Succ}(u)$ colors all blank nodes in ${succ}(u)$ black, not coloring any other nodes. If $\mathtt{stage}=A.2$ or $D.2$ and $\mathit{BL}_i$ is colored according to $L$ and some nodes in the working interval are colored white, ${Label\_Succ}(u)$ colors all blank nodes in ${wdisc}(u)$ white, not coloring any other nodes.\label{Label_n} \end{lemma} \begin{proof} Let the border layer belong to class $C$ or $B$, and let $\mathtt{stage}=C.1$ or $B.1$ accordingly. Since $G_{i-1}$ and part of $\mathit{BL}_i$ are colored according to $L$, all blank neighbors of $u$ are in $succ(u)$.
${Label\_Succ}(u)$ only labels the blank neighbors of $u$ black. Therefore, ${Label\_Succ}(u)$ colors all blank nodes in ${succ}(u)$ black, not coloring any other nodes. Let $u\in D$, and let $\mathtt{stage}=D.1$. Let $x$ be a blank node reported by the local search from $u$ within radius $d_2$. In this case, $G_{i-1}$ and part of $\mathit{BL}_i$ are colored according to $L$, and all nodes in the working interval are blank. If $x\in succ(u)$, the nodes at distance not greater than $d_2-1$ from $x$ are either blank nodes or black nodes in $\mathit{BL}_i$; otherwise, there is at least one node in $\mathit{BL}_{i-1}$ at distance less than $d_2-1$ from $x$ by Property~\ref{back}. Layer $\mathit{BL}_{i-1}$ is a $D$-layer in which every node has a \textbf{B}-node neighbor, while each node in $\mathit{BL}_i$ has no \textbf{B}-node neighbor. Therefore, ${Label\_Succ}(u)$ can determine whether $x$ is in $succ(u)$. So ${Label\_Succ}(u)$ colors all blank nodes in ${succ}(u)$ black, not coloring any other nodes. Let $u\in A$, and let $\mathtt{stage}=A.1$. Let $x$ be a blank node visited by the local search from $u$ within radius $d_1$. If $x\in succ(u)$, all nodes at distance not greater than $d_1-1$ from $x$ are either blank nodes or black nodes in $\mathit{BL}_i$; otherwise, some of these nodes may belong to $\mathit{BL}_{i-1}$. Since $G_{i-1}$ has been colored according to $L$ and $\mathit{BL}_{i-1}$ is an $A$-layer, every node in $\mathit{BL}_{i-1}$ has a white neighbor. Since the nodes in the working interval are blank in stage $A.1$, no node in $\mathit{BL}_i$ has a white neighbor. By this observation, ${Label\_Succ}(u)$ can determine whether $x$ is in $succ(u)$. So ${Label\_Succ}(u)$ colors all blank nodes in ${succ}(u)$ black, not coloring any other nodes. If $G_{i-1}$ and $\mathit{BL}_i$ are colored according to $L$ and $\mathtt{stage}=D.2$ or $A.2$, by definition, $\mathit{Label\_Succ}(u)$ colors all blank nodes in ${wdisc}(u)$ white, not coloring any other nodes.
\end{proof} Now we prove the correctness of the labeling algorithm. \begin{theorem} By the end of the execution of the labeling algorithm taking as input an $\mathcal{AL}$ labeling $L=\langle r,d_1,d_2 \rangle$, the graph is fully colored according to $L$, and the robot has explored the entire graph, terminating at the root. \label{labelalg} \end{theorem} \begin{proof} For each $i\geq 0$, we say that $\mathit{Property}(i)$ holds at the end of phase $i$ if \vspace{4pt} \noindent (1) The robot colors all nodes of $G_i$ according to $L$ and returns to the root. \noindent (2) Only nodes of $G_i$ are colored. \vspace{4pt} We now prove that, at the end of phase $i$, $\mathit{Property}(i)$ holds. Initially, $\mathit{Property}(1)$ holds at the end of phase 1. For $i\geq 2$, assume that at the end of phase $i-1$, $\mathit{Property}(i-1)$ holds. We prove that $\mathit{Property}(i)$ holds at the end of phase $i$. By the induction hypothesis, during phase $i$, all nodes of $G_{i-1}$ are colored according to the labeling $L$, and all other nodes are blank. We first prove that in $X\rightarrow Y$ of $\mathit{Next\_Child\_Path}$ from $u$, if a blank node is visited and $\mathtt{stage}=X.*$, then $u$ is a border node. By definition, for $v\in \mathit{BL}_s$, $\mathit{Next\_Child\_Path}$ from $v$ will not visit any node in $L_t$ such that $t>d(r,\mathit{BL}_{s+2})$. For a class $X$ node $v\in \mathit{BL}_s$ $(s<i-1)$, we have $s\leq (i-1)-4$, since class $X$ black layers recur with period four. Since all blank nodes are in layers after $\mathit{BL}_{i-1}$, $\mathit{Next\_Child\_Path}$ from $v$ will not visit any blank node. Therefore, according to Table~\ref{table2}, $u$ is a border node if ${Label\_Succ}(u)$ is called. The robot returns to $\mathit{parent}(u)$ with state $\mathtt{up}$ when ${Label\_Succ}(u)$ terminates. Suppose that for $u\in \mathit{BL}_{i-1}$, in a call to $\mathit{Next\_Child\_Path}$ from $u$, the robot does not visit any blank node and arrives at a child of $u$, say $v$.
By definition, all neighbors of $v$ will be visited in $\mathit{Next\_Child\_Path}$ from $u$. Therefore, node $v$ has no neighbor in any layer after $\mathit{BL}_{i}$, and $v$ is a leaf node. In the subsequent exploration from $v$, blank nodes will be ignored (see Table~\ref{table2}). $\mathit{Next\_Child\_Path}(v,\varnothing)$ returns ``false", and $\mathcal{R}$ will return from $v$ to $u$ in state $\mathtt{up}$. If no call to $\mathit{Next\_Child\_Path}$ from $u$ finds a blank node, then all children of $u$ are leaf nodes, which implies that ${wdisc}(u)$ has been colored according to $L$, and $\mathcal{R}$ returns to $\mathit{parent}(u)$ in state $\mathtt{up}$ without visiting any node beyond $G_i$. By the above argument, for $u\in \mathit{BL}_{i-1}$, no matter whether ${Label\_Succ}(u)$ is called, the robot will return to $\mathit{parent}(u)$ in state $\mathtt{up}$. For $u\notin \mathit{BL}_{i-1}$, the blank nodes will be ignored in explorations from $u$ (see Table~\ref{table2}). Since $G_{i-1}$ is colored correctly, by Theorem~\ref{main}, in phase $i$, all border nodes are visited, and finally the robot returns to the root. At the end of phase $i$, for every $u\in \mathit{BL}_{i-1}$, either ${Label\_Succ}(u)$ has been called or ${wdisc}(u)$ has been colored according to $L$. By Lemma~\ref{Label_n}, for every border node $u$, the nodes in ${wdisc}(u)$ are colored correctly. Therefore, all the nodes in $G_i$ are colored correctly. Since $\bigcup_{u\in \mathit{BL}_{i-1}}\mathit{wdisc}(u)\subseteq G_i$, only nodes of $G_i$ are colored. In summary, for each $i\geq 0$, $\mathit{Property}(i)$ holds at the end of phase $i$. It follows that after $\lceil (D+1)/(d_1+d_2+2) \rceil$ phases, the robot has fully colored and explored the entire graph. In the end, one last phase is performed, in which the robot finds that the exploration and the coloring are completed.
\end{proof} \section{Labeling Schemes Enabling Adjusting the Ratio of Black Nodes \label{S4}} Based on $\mathcal{AL}$ labeling schemes, we introduce labeling schemes that allow the $N$-ratio to be adjusted. We will prove the following in the remainder of Section~\ref{S4}. \begin{theorem} There exists a robot with the property that for any $n$-node graph $G$ of degree bounded by an integer $\Delta$, it is possible to color the nodes of $G$ with two colors (black and white) such that the $N$-ratio is not less than a given rational number $\rho\in (2,(D+1)/4]$. Using the labeling, the robot can explore the graph $G$, starting from a node $r$ and terminating at $r$ after identifying that the entire graph has been traversed. The robot has $O(\rho\log\Delta)$ bits of memory, and the total number of edge traversals by the robot is $O(n\Delta^{\frac{16\rho+7}{3}}/\rho+\Delta^{\frac{40\rho+10}{3}})$. \label{main2} \end{theorem} In the remainder of the paper, the word ``\emph{ratio}" refers to the ``\emph{$N$-ratio}" if not otherwise mentioned. \subsection{From $L$-ratio Tunable to $N$-ratio Adjustable} We generalize the $\mathcal{AL}$ labeling to the \emph{periodic layer oriented labeling} (PL in short). A PL labeling of a graph is specified by a root node and the sets of layers colored black and white. A PL labeling colors the graph in a periodic manner, that is, $L_i$ and $L_{i+p}$ are colored with the same color, where $p$ is the period. We can represent a PL labeling by a triple $\langle r, p,BL\rangle$, where $r$ is the root, $0<p\leq D+1$ is an integer denoting the period, and $BL$ is a set of integers in $[0,p-1]$ denoting the black layers within a period. The set of black layers of the labeling $\langle r, p,BL\rangle$ is $\{L_i\mid (i \ \mathtt{mod}\ p)\in BL, 0\leq i\leq D\}$. We call the interval $[ip,(i+1)p-1]$, $i\geq 0$, the $i$th \textit{unit} of the labeling. For example, the labeling in Figure~\ref{scheme} can be denoted by $\langle r, 11,\{0,1,7,10\}\rangle$.
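The layer set of a PL labeling is pure modular arithmetic; a minimal Python sketch (the function name and the diameter value used in the example are ours):

```python
def black_layers(p, BL, D):
    """Black layers of the PL labeling <r, p, BL> on a graph of
    diameter D: layer L_i is black iff (i mod p) is in BL."""
    return [i for i in range(D + 1) if i % p in BL]
```

For the example $\langle r, 11,\{0,1,7,10\}\rangle$ on a (hypothetical) graph of diameter 12, this yields the black layers $0,1,7,10,11,12$, the last two being the start of the second period.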
Let $S_1=\langle r, p, BL\rangle$ and $S_2=\langle r, p, BL'\rangle$ be two PL labeling schemes with the same root and period. The \textit{union} of $S_1$ and $S_2$ is denoted by $S_1\cup S_2=\langle r, p, BL\cup BL'\rangle$. Denote by $N(L_i)$ the number of nodes in layer $L_i$ of a labeling scheme $P$, by $BN(P)$ the number of black nodes in $P$, and by $\rho(P)=n/BN(P)$ the $N$-ratio of the labeling scheme $P$. We relax some restrictions of the $\mathcal{AL}$ labeling and define the following. \noindent\emph{\textbf{Labeling $\mathcal{MP}$.}} $\mathcal{MP}=\langle r,p,BL\rangle$ is a PL labeling, where $BL=\{P_B,P_C,P_D,P_A\}$ satisfies the following properties. \begin{itemize} \item $(P_C-P_B)\ \ \mathtt{ mod }\ p=1$, \item $(P_D-P_C)\ \ \mathtt{ mod }\ p=1$, \item $(P_B-P_A)\ \ \mathtt{ mod }\ p=d_{AB}$, $d_{AB}\geq 2$, \item $(P_A-P_D)\ \ \mathtt{ mod }\ p=d_{DA}$, \item $\lfloor d_{DA}/2\rfloor\geq d_{AB}$. \end{itemize} We call $\mathcal{MP}$ labelings \textit{elementary labelings}; any $\mathcal{AL}$ labeling is an elementary labeling. In an elementary labeling, $p=d_{AB}+d_{DA}+2$. For convenience, we use the quadruple $\langle r,P_A,d_{AB},d_{DA}\rangle$ to denote an elementary labeling; e.g., the $\mathcal{AL}$ labeling can be denoted by $\langle r, d_{DA}+1,d_{AB},d_{DA}\rangle$, and the labeling in Figure~\ref{scheme} can be denoted by $\langle r, 7,3,6\rangle$. In this section, all labelings are elementary labelings or combinations of elementary labelings. As for $\mathcal{AL}$ labelings, we partition the black nodes in an elementary labeling into the following four sets: \begin{center} \begin{flushleft} $C=\{v\in V\mid d(r,v)\ \mathtt{ mod }\ p=P_C\}$,\\ $D=\{v\in V\mid d(r,v)\ \mathtt{ mod }\ p=P_D\}$, \\ $A=\{v\in V\mid d(r,v)\ \mathtt{ mod }\ p=P_A\}$,\\ $B=\{v\in V\mid d(r,v)\ \mathtt{ mod }\ p=P_B\}$. \end{flushleft} \end{center} An $\mathcal{AL}$ labeling scheme cannot guarantee that its $N$-ratio is not less than its $L$-ratio.
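The quadruple notation and the class partition translate directly into arithmetic on $d(r,v)\ \mathtt{mod}\ p$; a hedged Python sketch (the function name is ours):

```python
def node_class(dist, P_A, d_AB, d_DA):
    """Class of a node at distance `dist` from r in the elementary
    labeling <r, P_A, d_AB, d_DA>; returns None for a white node.
    The positions follow the MP properties: P_B lies d_AB after P_A,
    and C, D follow B at distance 1 each, all mod p = d_AB + d_DA + 2."""
    p = d_AB + d_DA + 2
    P_B = (P_A + d_AB) % p
    P_C = (P_B + 1) % p
    P_D = (P_C + 1) % p
    return {P_A: "A", P_B: "B", P_C: "C", P_D: "D"}.get(dist % p)
```

For $\langle r, 7,3,6\rangle$ this recovers the black positions $0$ ($C$), $1$ ($D$), $7$ ($A$), and $10$ ($B$) within a period of 11, matching the triple $\langle r, 11,\{0,1,7,10\}\rangle$.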
The following lemma implies a method to close the gap. \begin{lemma} Given a rational number $1\leq \rho\leq D+1$, let $\rho=m/t$, where $m>0$ and $t>0$ are integers. Let $PS$ be a set of labeling schemes of $G$ that have the same root, with $|PS|=m$. If $\sum_{P\in PS}BN(P)=tn$, then there exists $P\in PS$ such that $\rho(P)\geq \rho$.\label{Ratio_main} \end{lemma} \begin{proof} From $\sum_{P\in PS}BN(P)=tn$, we have \begin{equation} \frac{\sum_{P\in PS}BN(P)}{n}=\sum_{P\in PS}\frac{BN(P)}{n}=\sum_{P\in PS}\frac{1}{\rho(P)}=t. \end{equation} By the pigeonhole principle, since $|PS|=m$, there exists $P\in PS$ such that $1/\rho(P)\leq t/m$. Therefore, there exists $P\in PS$ such that $\rho(P)\geq m/t=\rho$. \end{proof} If we find a set of labelings that satisfies Lemma~\ref{Ratio_main}, then we can find a labeling whose $N$-ratio is not less than a given rational number. To generate such labelings, we introduce the circular shifts of a labeling. For a labeling $P=\langle r, p,BL\rangle$ and an integer $0\leq l\leq D$, denote $P^l=\langle r, p,BL^l\rangle$, where $BL^l=\bigl\{i\mid \bigl((i-l) \ \mathtt{mod}\ p\bigr)\in BL, 0\leq i< p \bigr\}$, called a \textit{circular shift} of $P$. We give some $N$-ratio adjustable labeling schemes as follows. Let $\rho\in[4, (D+1)/4]$ be an integer, and let $S=\langle r,4\rho, BL\rangle$, where $\mathit{BL}=\{0,5,6,7\}$. We have that $\bigcup_{i=0}^{\rho-1}\mathit{BL}^{4i}= [0,4\rho-1]$, and for any $0\leq i<j\leq\rho-1$, $\mathit{BL}^{4i}\cap \mathit{BL}^{4j}=\varnothing$. Let $H=\{S^0,S^4,\ldots,S^{4(\rho-1)}\}$. We have $\sum_{i=0}^{\rho-1}\mathit{BN}(S^{4i})=n$. By Lemma~\ref{Ratio_main}, there exists $S^{4j}\in H$ such that $\rho(S^{4j})\geq \rho$. This method does not work for $\rho=3$. We give a solution in Figure~\ref{rho}, where we use six different labeling schemes $T_1,\ldots,T_6$ with period 12, and each layer is colored black by exactly two of them. Thus $\sum_{i=1}^{6}\mathit{BN}(T_i)=2n$.
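The circular-shift construction and the pigeonhole selection of Lemma~\ref{Ratio_main} can be checked mechanically. Below is a hedged Python sketch (helper names are ours); it assumes the per-layer node counts are known, which the robot of course does not have, so this is a verification aid, not part of the algorithm.

```python
def shift(BL, l, p):
    """Circular shift BL^l = { i in [0, p) : (i - l) mod p in BL }."""
    return {i for i in range(p) if (i - l) % p in BL}

def best_shift(layer_sizes, shifts, p):
    """Pick the shifted black set with the largest N-ratio n/BN.  If the
    shifts' black positions partition [0, p-1], Lemma Ratio_main
    guarantees the best ratio is at least the number of shifts."""
    n = sum(layer_sizes)
    def bn(bl):
        return sum(sz for i, sz in enumerate(layer_sizes) if i % p in bl)
    return max(shifts, key=lambda bl: n / bn(bl))
```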
By Lemma~\ref{Ratio_main}, there is $T_i\in\{T_1,\ldots,T_6\}$ such that $\rho(T_i)\geq 3$. \begin{figure}[htbp] \centering \begin{minipage}[t]{0.5\linewidth} \begin{center} \includegraphics[width=\textwidth]{rho_new.eps} \end{center} \caption{Examples of adjustable labeling schemes. One unit of each labeling scheme is drawn. A dot represents a layer, and a line represents three adjacent layers. Layers colored similarly belong to the same labeling scheme.\label{rho}} \end{minipage} \hspace{10mm} \begin{minipage}[t]{0.40\linewidth} \begin{center} \includegraphics[width=\textwidth]{r_rho.eps} \end{center} \caption{Above is a unit of a labeling scheme with a rational $\rho$, where $m=7$ and $t=3$. Below is the root unit of $P$. The intervals in a unit are in dotted boxes.\label{r_rho}} \end{minipage} \end{figure} \subsection{$N$-ratio Adjustable Labeling Schemes} In this subsection, we introduce a general method, based on Lemma~\ref{Ratio_main}, to construct $N$-ratio adjustable labelings. We only discuss the cases where $\rho\geq 2$. For $1<\rho< 2$, we can compute $\rho''=\rho/(\rho-1)\geq 2$; a labeling with $N$-ratio $\rho$ can be derived from a labeling with $N$-ratio $\rho''$ by reversing the color of each node. Given a rational number $\rho\geq 2$, let $\rho=m/t$, where $m$ and $t$ are relatively prime. The idea is to find a labeling scheme $P=\langle r, 4m, BL\rangle$ with $|BL|=4t$ and $\rho(P)=\rho$. Assuming $D+1\geq 4m$, we demonstrate that such a $P$ exists. If $m$ is so large that $D+1<4m$, we have to find relatively prime $m'$ and $t'$ such that $\rho<m'/t'$ and $D+1\geq 4m'$, and then find a labeling scheme $P=\langle r, 4m', BL\rangle$ with $|BL|=4t'$ and $\rho(P)=m'/t'>\rho$. Let the length of the unit be $p=4m$, and let $x=4m\ \mathtt{mod} \ t$. Partition each unit into $t$ disjoint \textit{interval}s. The first $x$ intervals are of length $\lceil p/t\rceil$; the others are of length $\lfloor p/t\rfloor$.
Let $d_{AB}=\lfloor(\lfloor p/t\rfloor-2)/3\rfloor$, $d_{DA}=\lfloor p/t\rfloor-2-d_{AB}$, and $d_{DA}'=\lceil p/t\rceil-2-d_{AB}$. We have $\lfloor d_{DA}/2\rfloor\geq d_{AB}$ and $\lfloor d'_{DA}/2\rfloor\geq d_{AB}$. The following $t$ elementary labelings with the same root and period can be derived. \begin{displaymath} S_i = \left\{ \begin{array}{ll} \langle r,\lceil 4m/t\rceil i,d_{AB},4m-d_{AB}-2 \rangle, &\quad\quad\textrm{$0\leq i\leq x-1$},\\ \langle r,\lceil 4m/t\rceil x+\lfloor 4m/t\rfloor(i-x),d_{AB},4m-d_{AB}-2 \rangle, &\quad\quad\textrm{$x\leq i\leq t-1$.} \end{array} \right. \end{displaymath} Let $P=\bigcup^{t-1}_{i=0}S_i=\langle r, 4m, BL\rangle$. We have $|BL|=4t$. In $P$, we classify the nodes into four classes $A$, $B$, $C$, and $D$; class $X\in\{A,B,C,D\}$ of $P$ is the union of class $X$ of all the $S_i$ ($0\leq i\leq t-1$). In total, there are $4m$ circular shifts of $P$. For every layer $L_k$, there are $4t$ circular shifts of $P$ in which $L_k$ is labeled black. Therefore, $\sum_{k=0}^{4m-1}BN(P^k)=4tn$. By Lemma~\ref{Ratio_main}, there exists a circular shift of $P$, say $P^*$, such that $\rho(P^*) \geq m/t$. An example of $P$ is shown in Figure~\ref{r_rho}. \subsubsection{Transformation of $P^*$} We call the labelings where $r$ is in class~$C$ the \textit{{$R^C$ labelings}}. All $\mathcal{AL}$ labelings are $R^C$ labelings. Let $P^*$ be the labeling such that $\rho(P^*)\geq \rho$. Note that $P^*$ is not necessarily an $R^C$ labeling. In this section, we give a method to transform a $P^*$ that is not an $R^C$ labeling into an $R^C$ labeling; the exploration algorithm for $\mathcal{AL}$ labelings can then be used after minor revisions. The transformation of $P^*$ is as follows. Label $r$ and its neighbors black. Let the first $A$-layer in $P^*$ be $L_l$. If $\lfloor (l-1)/2\rfloor\geq d_\mathit{AB}$, we label the layers between $L_1$ and $L_l$ white.
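The interval partition of a unit and the derived parameters can be computed directly; a hedged Python sketch (helper names are ours):

```python
def unit_intervals(m, t):
    """Partition one unit [0, 4m-1] into t intervals: the first
    x = 4m mod t intervals have length ceil(4m/t), the rest have
    length floor(4m/t)."""
    p = 4 * m
    x = p % t
    return [p // t + 1] * x + [p // t] * (t - x)

def interval_params(m, t):
    """d_AB, d_DA, d_DA' as defined in the text, derived from the two
    possible interval lengths floor(p/t) and ceil(p/t)."""
    lo = (4 * m) // t
    hi = -(-(4 * m) // t)                 # ceil(4m/t)
    d_ab = (lo - 2) // 3
    return d_ab, lo - 2 - d_ab, hi - 2 - d_ab
```

For $m=7$, $t=3$ (the example of Figure~\ref{r_rho}), the interval lengths are $10,9,9$ and $(d_{AB},d_{DA},d_{DA}')=(2,5,6)$, which indeed satisfy $\lfloor d_{DA}/2\rfloor\geq d_{AB}$ and $\lfloor d'_{DA}/2\rfloor\geq d_{AB}$.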
Otherwise, we label the layers between $L_1$ and the second $A$-layer white if this layer exists. If there is only one $A$-layer, then we label the layers after $L_1$ white. Denote the resulting labeling by $\hat{P^*}$. We redefine the units of $\hat{P^*}$ as follows: the interval $[0,d_r]$ is the $0$th unit, called the \textit{root unit}, where $d_r$ is either the distance between the root and the first $A$-layer if $A$-layers exist or the diameter of $G$ if no $A$-layer exists; the interval $[(i-1)p+d_r,ip+d_r-1]$ $(i\geq 1)$ is the $i$th \textit{unit}. We have $d_r\geq d_{DA}$. It is possible that $\mathit{BN}(\hat{P^*})> \mathit{BN}(P^*)$, and therefore $\rho(\hat{P^*})< \rho$. To make sure that $\rho(\hat{P^*})\geq \rho$, we modify the transformation as follows. The root is chosen as a node with the minimal number of neighbors, say $\Delta'$. Label $L_0$ and $L_1$ black. If there exists an $A$-layer in $P^*$, say $L_k$, such that there is only one $C$-layer before $L_k$, we label the layers between $L_1$ and $L_k$ white. If no such $A$-layer exists, the diameter $D$ of $G$ is so small that we label the layers after $L_1$ white. We prove that $\rho(\hat{P^*})\geq \rho$ as follows. Suppose that $L_k$ exists. Let $nb_1$ be the total number of black nodes in layers before $L_k$ in $P^*$, and let $nb_2=N(L_1)+1=\Delta'+1$ be that in $\hat{P^*}$, which is the number of neighbors of $r$ plus one. We have $\mathit{BN}(P^*)-\mathit{BN}(\hat{P^*})= nb_1-nb_2$. In $P^*$, before layer $L_k$, there are a $C$-layer and a $D$-layer; thus if the root is in a $C$-layer in $P^*$, then $P^*$ and $\hat{P^*}$ are similar; otherwise there are three adjacent $B$, $C$, and $D$ layers before $L_k$. Because $\Delta'$ is the minimal number of neighbors of a node in the graph and all the neighbors of nodes in the middle $C$-layer are contained in the three adjacent black layers, the number of black nodes in these three layers is not less than $\Delta'+1=nb_2$. Therefore, $nb_1-nb_2\geq 0$.
Thus $\mathit{BN}(P^*)-\mathit{BN}(\hat{P^*})\geq 0$. We have $\rho(\hat{P^*})\geq \rho$. Suppose that $L_k$ does not exist. Since $\rho\leq (D+1)/4$, there are at least four black layers in the first unit of $P^*$. In this case, we have $D\leq p+d_2-1$, and in $P^*$ there is only one $A$-layer, and there are only one $B$-layer and one $C$-layer after this $A$-layer. When there are three adjacent $B$, $C$, and $D$ layers after the $A$-layer, based on the above discussion, we have $\rho(\hat{P^*})\geq \rho$. When there are no three adjacent $B$, $C$, and $D$ layers after the $A$-layer, the last two layers are a $B$-layer followed by a $C$-layer. Since all the neighbors of the nodes in the last $C$-layer are contained in the last two black layers, the number of black nodes in the last two black layers is not less than $\Delta'+1=nb_2$. Therefore, $nb_1-nb_2\geq 0$, and $\mathit{BN}(P^*)-\mathit{BN}(\hat{P^*})\geq 0$. As a result, we have $\rho(\hat{P^*})\geq \rho$. \subsubsection{Exploration Algorithm} We revise the graph exploration algorithm in Section~\ref{S2} to explore a graph labeled by $\hat{P^*}$ as follows. First, the memory of $\mathcal{R}$ increases to $O(d_r\log \Delta)$ bits. Second, we add a 1-bit flag $fr$: if $\mathcal{R}$ is in the root unit, $fr=1$; otherwise $fr=0$. Third, in the following cases, $\mathcal{R}$ first determines whether the distance between a $D$-layer and the adjacent $A$-layer (we call this distance ``$d_2$'' of the current interval) is $\mathit{d_{DA}}$, $\mathit{d_{DA}}+1$, or $d_r$, as follows. (1) $D\rightarrow A$ of $\mathit{Next\_Child\_Path}$. Assume that $\mathcal{R}$ is currently in a $D$-layer node $u$. If $fr=1$, we set $d_2=d_r$ and execute the procedure; if $D\rightarrow A$ succeeds, we set $fr=0$. Now suppose that $fr=0$. We first determine whether $u$ is a leaf node; if not, we determine the distance between the $D$-layer and the adjacent $A$-layer. Then we backtrack from $u$ or call $D\rightarrow A$ with the correct $d_2$.
The distinguishing procedure is as follows. Perform a local search from $u$ within radius $d_{DA}$ ($\mathit{LS}_1$). If a black node in $A$ is visited, then $d_2=d_{DA}$. If no class-$A$ node is visited, then perform a local search from $u$ within radius $d_{DA}+1$ ($\mathit{LS}_2$). If a black node $v$ in $A$ is visited, then perform a local search from $v$ within radius $\lceil d_{DA}/2\rceil$\footnote[6]{If $d_{DA}$ is even, $\lceil d_{DA}/2\rceil=d_{DA}/2$. If $d_{DA}$ is odd, $\lceil d_{DA}/2\rceil=\lfloor d_{DA}/2\rfloor+1$.}. For each white node $x$ reported, check whether $R_W(x)=\lfloor d_{DA}/2\rfloor-1$, and if so, perform a local search within radius $\lfloor d_{DA}/2\rfloor$ from $x$. If a node with a class-$B$ neighbor is reported, we have $d_2=d_{DA}$ and $u$ is a leaf node; otherwise $d_2=d_{DA}+1$. If no class-$A$ node is found in $\mathit{LS}_1$ or $\mathit{LS}_2$, then $u$ is a leaf node. (2) $A\rightarrow D$ of $\mathit{Get\_Par\_Path}$. Assume that $\mathcal{R}$ is currently in an $A$-layer node $u$. We first set $d_2=d_{DA}$ and call the procedure. If $A\rightarrow D$ fails to find the parent of $u$, we set $d_2=d_{DA}+1$ and redo $A\rightarrow D$. If it fails again, we set $d_2=d_r$ and $fr=1$ and redo the procedure. Now we consider the space and time complexity of the exploration algorithm. For $d_{DA}=\lfloor p/t\rfloor-2-d_{AB}$, we have $d_{DA}=\lfloor 4\rho\rfloor-2-\lfloor(\lfloor 4\rho\rfloor-2)/3\rfloor\leq\frac{8\rho-4}{3}$. If $L_0$ is a $D$-layer in $P^*$, then $\hat{P^*}$ has the maximal $d_r$. In this case, $d_r=d_{DA}+1+\lceil p/t\rceil\leq\frac{20\rho-1}{3}$. Thus, the memory of $\mathcal{R}$ is still $O(\rho\log\Delta )$. Since $d_r\geq d_{DA}$, the number of edge traversals in exploring the root unit is increased compared with $\mathcal{AL}$ labelings. The increased number of traversals is $O(\Delta^2\Delta^{2d_{r}+2})=O(\Delta^{\frac{40\rho+10}{3}})$.
The total number of edge traversals is $O(n\Delta^{2(d_{DA}+1)+3}/\rho+\Delta^{2d_{r}+3})=O(n\Delta^{\frac{16\rho+7}{3}}/\rho+\Delta^{\frac{40\rho+10}{3}})$. \subsection{Labeling Algorithm} We use the algorithm in Section~\ref{labeling} with minor revisions to label a graph according to $\hat{P^*}$. The parameters of $\hat{P^*}$, including $r$, $d_{AB}$, $d_{DA}$, and $d_r$, are determined by system designers. The robot takes these parameters as input and labels the graph. The revisions of the exploration procedures are as follows. When $\mathcal{R}$ explores from a $D$-layer node or an $A$-layer node, the robot has to know whether the distance from the $D$-layer to the adjacent $A$-layer (denoted by $d_2$) is $d_{DA}$ or $d_{DA}+1$. We define a variable $c$ of $\lg t$ bits to indicate that $\mathcal{R}$ is in the $c$th interval of a unit. Let there be $j$ intervals before the first $A$-layer of $\hat{P^*}$ in the first unit of ${P^*}$. According to the definition of $\hat{P^*}$, $d_2=d_{DA}+1$ if $(c+j+t) \ \mathtt{mod}\ t< 4m \ \mathtt{mod}\ t$, and $d_2=d_{DA}$ otherwise. In this description, all arithmetic operations are modulo $t$. Initially, variable $c$ is set to $t-1$. $c$ increases by 1 after $\mathcal{R}$ traverses from a $D$-layer node down to an $A$-layer node and decreases by 1 after $\mathcal{R}$ traverses from an $A$-layer node up to a $D$-layer node. When starting from a class-$A$ node or a class-$D$ node, the robot knows the exact $d_2$ of the current interval from $c$ and $fr$. Hence the original exploration procedures in Subsection~\ref{exp} can be used to explore the graph once $c$ and $fr$ are introduced. Procedure $\mathit{Label\_Succ}$ does not need revision, since the robot knows $d_2$ of the current interval. Then, using the revised exploration algorithm in this section, the labeling algorithm in Section~\ref{labeling} can label the graph according to $\hat{P^*}$.
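The rule by which the robot resolves $d_2$ can be sketched as follows (an illustrative snippet; the function name and the toy parameter values are ours, and the root-unit case with flag $fr$ is the one described in the exploration algorithm above):

```python
def d2_of_interval(c, j, m, t, d_DA, fr, d_r):
    """Distance from a D-layer to the adjacent A-layer in the c-th interval.

    Sketch of the rule in the text: inside the root unit (fr = 1) the
    distance is d_r; otherwise it is d_DA + 1 whenever
    (c + j + t) mod t < 4m mod t, and d_DA in the remaining intervals,
    where j counts the intervals before the first A-layer.
    """
    if fr:
        return d_r
    if (c + j + t) % t < (4 * m) % t:
        return d_DA + 1
    return d_DA
```

For instance, with $m=7$, $t=3$, and $j=0$, only every third interval ($c \equiv 0 \pmod 3$) has the longer distance $d_{DA}+1$, matching $4m \bmod t = 1$ long interval per unit.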
\section{Future Work} Further interesting questions include whether there exist labeling schemes that are not spanning tree based, and whether there exists a labeling algorithm for an $\mathcal{AL}$ labeling that uses only two colors. The parameters of the $N$-ratio adjustable labeling scheme, e.g., the root, are determined by system designers. A question is whether there exists a finite state automaton that takes a valid $N$-ratio as input and labels the graph accordingly. \section*{Acknowledgement} The authors thank Leszek G\c{a}sieniec and the anonymous referees for their constructive suggestions.
\section{Introduction} \label{s1} \setcounter{footnote}{0} In this note, we review the basic ideas presented in our recent works \cite{Sumino:2008hu,Sumino:2008hy}, which analyze possible connections between the charged lepton spectrum and family gauge symmetries; in particular, full advantage of Koide's mass formula has been taken to study this connection. Koide's mass formula \cite{Koide:1982wm}, found by Koide in 1982, is an empirical relation among the charged lepton masses, which holds with striking precision. The formula can be described in the following way: Consider two vectors $(1,1,1)$ and $(\sqrt{m_e},\sqrt{m_\mu},\sqrt{m_\tau})$ in a 3-dimensional space; then, the angle between these two vectors is equal to $45^\circ$ \cite{Foot:1994yn}, see Fig.~\ref{fig1}\\ \begin{figure}[h]\centering \psfrag{111}{$(1,1,1)$} \psfrag{45}{$\theta=45^\circ$} \psfrag{rootm}{\small $(\sqrt{m_e},\sqrt{m_\mu},\sqrt{m_\tau})$} \includegraphics[width=6cm]{fig1-ppt.eps} \caption{\small Geometrical interpretation of Koide's mass formula, eq.~(\ref{KoideMF}). \label{fig1}} \end{figure}\\ Equivalently, the formula is expressed as \begin{eqnarray} \frac{\sqrt{m_e}+\sqrt{m_\mu}+\sqrt{m_\tau}}{ \sqrt{3\,(m_e+m_\mu+m_\tau)}} =\cos 45^\circ =\frac{1}{\sqrt{2}} \, . \label{KoideMF} \end{eqnarray} Present experimental values of the on-shell (pole) masses of the charged leptons read \cite{Amsler:2008zz} \begin{eqnarray} && m_e=0.510998910\pm 0.000000013~{\rm MeV} \, , \\ && m_\mu=105.658367\pm 0.000004~{\rm MeV} \, , \\ && m_\tau = 1776.84 \pm 0.17~{\rm MeV} \, . \end{eqnarray} It may be noteworthy that the accuracy of the tau mass measurement (which limits the experimental accuracy of Koide's formula) has continued to improve over the last few years. Using these values, one finds that \begin{eqnarray} \sqrt{2}\, \left[ \frac{\sqrt{m_e}+\sqrt{m_\mu}+\sqrt{m_\tau}}{ \sqrt{3\,(m_e+m_\mu+m_\tau)}} \right] =1.000005 \pm 0.000007 \, .
\end{eqnarray} Thus, Koide's formula is valid within the current experimental accuracy of $7\times 10^{-6}\,$! We emphasize that it is the pole masses that satisfy Koide's formula with this precision. Given the remarkable accuracy with which Koide's mass formula holds, many speculations have been raised as to the existence of some physical origin behind this mass formula \cite{Koide:1983qe,Foot:1994yn, Koide:1995xk,Koide:2005nv,Li:2006et,Xing:2006vk,Ma:2006ht,Rosen:2007rt}. Despite these attempts, so far no realistic model or mechanism has been found which predicts Koide's mass formula within the required accuracy. The most serious problem one faces when speculating on the physics underlying Koide's formula is caused by the QED radiative correction \cite{Xing:2006vk}. One expects that some physics at a short-distance scale beyond our current reach determines the spectrum of the Standard-Model (SM) fermions. Then it seems more natural that the relation (\ref{KoideMF}) is satisfied by the running masses $\bar{m}_i(\mu)$ (or the corresponding Yukawa couplings $\bar{y}_i(\mu)$) renormalized at a high energy scale $\mu\gg M_W$ than by the pole masses. If this is the case, however, the QED radiative correction violates the relation between the pole masses. \begin{figure}[h]\centering \psfrag{gamma}{$\gamma$} \psfrag{l}{$\ell$} \includegraphics[width=6cm]{QED1Lradcorr.eps} \caption{\small QED 1-loop diagram contributing to the pole mass. \label{QED1Lradcorr}} \end{figure}\\ In fact, the 1-loop QED radiative correction is given by \begin{eqnarray} m^{\rm pole}_i = \left[ 1+\frac{\alpha}{\pi}\left\{ \frac{3}{4}\log\left( \frac{\mu^2}{\bar{m}_i(\mu)^2} \right) +1 \right\} \right]\, \bar{m}_i(\mu) \, . \label{QED1Lcorr} \end{eqnarray} Here, $\bar{m}(\mu)$ and $m^{\rm pole}$ denote the running mass defined in the modified--minimal--subtraction scheme ($\overline{\rm MS}$ scheme) and the pole mass, respectively; $\mu$ represents the renormalization scale.
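As a quick numerical cross-check (a short script; the value of $\alpha$ and the scale $\mu$ are our illustrative choices), one can reproduce the quoted ratio and estimate the size of the shift induced by the log-dependent term of eq.~(\ref{QED1Lcorr}):

```python
import math

# Pole masses in MeV, as quoted above.
m_e, m_mu, m_tau = 0.510998910, 105.658367, 1776.84

def koide_ratio(masses):
    """sqrt(2)*(sum_i sqrt(m_i))/sqrt(3*sum_i m_i); equals 1 iff Koide's relation holds."""
    s = sum(math.sqrt(m) for m in masses)
    return math.sqrt(2) * s / math.sqrt(3 * sum(masses))

alpha = 1 / 137.036  # illustrative value of the QED coupling

def one_loop_corrected(m, mu=1.0e4):
    """Apply the 1-loop QED factor of eq. (QED1Lcorr); mu in MeV.

    The choice of mu is ours; the induced shift of the Koide ratio is
    mu-independent, since the mu-dependent piece is common to all m_i.
    """
    return m * (1 + (alpha / math.pi) * (0.75 * math.log(mu**2 / m**2) + 1))

# koide_ratio((m_e, m_mu, m_tau)) differs from 1 by about 5e-6; after the
# log-dependent correction the deviation grows to roughly 1e-3, i.e. the
# ~0.1% violation dominated by the m_i * log(m_i^2) term.
```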
Suppose $\bar{m}_i(\mu)$ satisfy the relation (\ref{KoideMF}) at a high energy scale $\mu\gg M_W$. Then $m_i^{\rm pole}$ do not satisfy the same relation \cite{Li:2006et,Xing:2006vk}: Eq.~(\ref{KoideMF}) is corrected by approximately 0.1\%, which is 120 times larger than the present experimental error. Note that this correction originates only from the term $-3\alpha/(4\pi) \times\bar{m}_i \, \log(\bar{m}_i^2)$ of eq.~(\ref{QED1Lcorr}), since the other terms, which are of the form ${\rm const.}\times\bar{m}_i$, do not affect Koide's formula. This is because the latter corrections change only the length of the vector $(\sqrt{m_e},\sqrt{m_\mu},\sqrt{m_\tau})$ but not its direction. As a result, the QED correction to Koide's mass formula turns out to be independent of the UV scale $\mu$. The $\bar{m}_i \log(\bar{m}_i^2)$ correction results from the fact that $\bar{m}_i$ plays the role of an infrared (IR) cut--off in the loop integral. Hence, the QED correction to Koide's formula stems from this IR region. The 1--loop weak correction is of the form ${\rm const.}\times\bar{m}_i$ in the leading order of the $\bar{m}_i^2/M_W^2$ expansion; the leading non--trivial correction is ${\cal O}(G_F \bar{m}_i^3/\pi)$, whose effect is smaller than the current experimental accuracy. Other radiative corrections within the SM (due to Higgs and would-be Goldstone bosons) are also negligible. Among various existing models which attempt to explain the origin of Koide's mass formula, we find a class of models particularly attractive \cite{Koide:1989jq,Koide:1995pb}.
These are the models which predict the mass matrix of the charged leptons to be proportional to the square of the vacuum expectation value (VEV) of a 9--component scalar field (we denote it as $\Phi$) written in a 3--by--3 matrix form: \begin{eqnarray} {\cal M}_\ell \propto \langle \Phi \rangle \langle \Phi \rangle ~~~\mbox{with}~~~ \langle \Phi \rangle = \left(\begin{array}{ccc} v_1(\mu)&0&0\\ 0&v_2(\mu)&0\\ 0&0&v_3(\mu) \end{array}\right) \, . \label{MasPhisq} \end{eqnarray} Thus, $(\sqrt{m_e},\sqrt{m_\mu},\sqrt{m_\tau})$ is proportional to the diagonal elements $(v_1,v_2,v_3)$ of $\langle \Phi \rangle$ in the basis where it is diagonal. The above form of the lepton mass matrix may originate from an effective higher-dimensional operator \begin{eqnarray} {\cal O}=\frac{\kappa(\mu)}{\Lambda^2}\, \bar{\psi}_{Li}\, \Phi_{ik}\, \Phi_{kj}\, \varphi \, e_{Rj} \, . \label{higherdimopO} \end{eqnarray} Here, $\psi_{Li}=(\nu_{Li},e_{Li})^T$ denotes the left--handed lepton $SU(2)_L$ doublet of the $i$--th generation; $e_{Rj}$ denotes the right-handed charged lepton of the $j$--th generation; $\varphi$ denotes the Higgs doublet field; $\Phi$ is a 9--component scalar field and is a singlet under the SM gauge group. We suppressed all the indices except for the generation (family) indices $i,j,k=1,2,3$. (Summation over repeated indices is understood throughout the paper.) The dimensionless Wilson coefficient of this operator is denoted as $\kappa(\mu)$. Once $\Phi$ acquires a VEV, the operator $\cal O$ will effectively reduce to the Yukawa interactions of the SM; after the Higgs field also acquires a VEV, $\langle\varphi\rangle=(0,v_{\rm ew}/\sqrt{2})^T$ with $v_{\rm ew}\approx 250$~GeV, the operator will induce the charged--lepton mass matrix of the form eq.~(\ref{MasPhisq}) at tree level.
We assume that the dimension-4 Yukawa interactions $y_{ij}\,\bar{\psi}_{Li}\varphi e_{Rj}$ are forbidden by some mechanism; this will be imposed by symmetry in our scenario, to be discussed through Secs.~3--5. As an example of an underlying theory that leads to the higher-dimensional operator $\cal O$, we may consider the see-saw mechanism, as depicted in Fig.~\ref{see-saw} \cite{Koide:1989jq,Koide:1995xk}. In this case, the operator $\cal O$ is induced after integrating out the heavy fermions $H$ and $H'$. \begin{figure}[h]\centering \includegraphics[width=6cm]{see-saw.eps} \caption{\small Diagram showing generation of the higher-dimensional operator ${\cal O}=\frac{\kappa(\mu)}{\Lambda^2}\, \bar{\psi}_{Li}\, \Phi_{ik}\, \Phi_{kj}\, \varphi \, e_{Rj} $ through the see-saw mechanism. $H$ and $H'$ denote heavy fermions to be integrated out. \label{see-saw}} \end{figure}\\ On the other hand, the VEV $\langle \Phi \rangle$ is determined by minimizing the potential of scalar fields in each model. By deliberately choosing a specific form of the potential, the VEV is made to satisfy the relation \begin{eqnarray} \frac{v_1(\mu)+v_2(\mu)+v_3(\mu)} {\sqrt{3\,[v_1(\mu)^2+v_2(\mu)^2+v_3(\mu)^2]}} =\frac{1}{\sqrt{2}} \label{relvi} \end{eqnarray} in the basis where it is diagonal. Hence, the origin of Koide's formula is attributed to the specific form of the potential which realizes this relation in the vacuum configuration. Up to now, however, no existing model is complete with respect to symmetry. Namely, every model requires, without justification, either the absence or strong suppression of some of the terms in the potential that are allowed by the symmetry of that model. In our study, we adopt a similar scenario for generating the charged lepton spectrum. We introduce a higher-dimensional operator similar to ${\cal O}$ of eq.~(\ref{higherdimopO}) within an effective field theory (EFT) valid below some cut-off scale $\Lambda(\gg M_W)$.
We analyze a potential of scalar fields within this EFT and compute the spectrum of the charged leptons. We compute various radiative corrections and other types of corrections within this EFT. In the next section (Sec.~2), we explain the philosophy of our analysis using EFT and argue for its validity and usefulness. In Sec.~3, we explain the mechanism by which the QED correction is canceled by the radiative correction induced by a family gauge symmetry. In Sec.~4, we present a potential model within EFT which leads to Koide's mass formula and a realistic charged lepton spectrum. Summary and discussion are given in Sec.~5. \section{Why EFT? Virtue and assumptions} \setcounter{footnote}{0} Let us explain the philosophy of our analysis using EFT. Conventionally, a more standard approach for explaining Koide's mass formula has been to construct models within renormalizable theories. In comparison, it is certainly a retreat to make an analysis within EFT. Nevertheless, the long history since the discovery of Koide's formula shows that it is quite difficult to construct a viable renormalizable model for explaining Koide's relation. It is likely that we are missing some essential hints to achieve this goal, if the relation is not a sheer coincidence. The point we want to make through our study is that within EFT, an explanation of Koide's formula is possible while largely avoiding fine tuning of parameters. Consistency conditions (with respect to symmetries of the theory) can be satisfied relatively easily in EFT, or in other words, they can be replaced by reasonable boundary conditions of EFT at the cut-off scale $\Lambda$ without conflicting with the symmetry requirements of the theory. (See Sec.~4.) Even under these less restrictive theoretical constraints, we may learn some important hints concerning the relation between the lepton spectrum and family symmetries.
These are the role of a specific family gauge symmetry in canceling the QED correction, the role of family symmetry in stabilizing Koide's mass relation, and the role of family symmetry in realizing a realistic charged lepton spectrum consistently with experimental values. These properties do not come about separately but are closely tied with each other. These features do not seem to depend on details of the more fundamental theory above the cut-off scale $\Lambda$ but rather on some general aspects of family symmetries and their breaking patterns. Thus, we consider that our approach based on EFT would be useful even in the case in which physics above the scale $\Lambda$ is obscure and may involve some totally unexpected ingredients, as was the case with chiral perturbation theory before the discovery of QCD. Before discussing radiative corrections within EFT, one may worry about the effects of higher-dimensional operators suppressed by higher powers of $1/\Lambda$. Indeed, using the values of the tau mass and the electroweak symmetry breaking scale $v_{\rm ew}$, one readily finds that $v_3/\Lambda \hbox{ \raise3pt\hbox to 0pt{$>$}\raise-3pt\hbox{$\sim$} } 0.1$. Hence, naive dimensional analysis indicates that there would be corrections to Koide's formula of order 10\% even at tree level. We now argue that this is not necessarily the case within the scenario under consideration. Let us divide the corrections into two parts. These are (i) $1/\Lambda^n$ corrections to the operator $\cal O$ of eq.~(\ref{higherdimopO}) (those operators which reduce to the SM Yukawa interactions after $\Phi$ is replaced by its VEV), and (ii) $1/\Lambda^n$ corrections to the relation (\ref{relvi}) satisfied by the VEV of $\Phi$. Concerning the corrections (i), we may consider the following example.
Suppose that the operator $\cal O$ is induced from the interactions \begin{eqnarray} {\cal L}=y_1 \,\bar{\psi}_{Li}\Phi_{ij} H_{Rj} + M\, \bar{H}_{Ri} H_{Li} + y_2\, \bar{H}_{Li} \Phi_{ij} H'_{Rj} + M' \bar{H}'_{Ri} H'_{Li} + y_3\, \bar{H}'_{Li} \varphi e_{Ri} + ({\rm h.c.}) \end{eqnarray} through the diagram shown in Fig.~\ref{see-saw}, after the fermions $H_{L,R}$ and $H'_{L,R}$ have been integrated out. Fermions $H_{L,R}$ and $H'_{L,R}$ are assigned to appropriate representations of the SM gauge group such that the above interactions become gauge singlet. For instance, in the case that $v_3/M' \hbox{ \raise3pt\hbox to 0pt{$>$}\raise-3pt\hbox{$\sim$} } 3$, $y_1,y_2,y_3\approx 1$ and $v_{\rm ew}/M'<3\times 10^{-3}$, one finds, by computing the mass eigenvalues,\footnote{ Since the values of $m_\tau$ and $v_{\rm ew}$ are known, once we choose the values of $v_3/M'(\hbox{ \raise3pt\hbox to 0pt{$>$}\raise-3pt\hbox{$\sim$} } 3)$ and $y_1,y_2,y_3(\approx 1)$, the value of $v_3/M(\hbox{ \raise3pt\hbox to 0pt{$<$}\raise-3pt\hbox{$\sim$} } 0.03)$ will be fixed. Then the mass eigenvalues corresponding to the SM charged leptons can be computed in a series expansion in the small parameters $v_{\rm ew}/M'$, $v_i/M$ and $v_i^2/(MM')=\sqrt{2}m_i/v_{\rm ew}$. } that the largest correction to the lepton spectrum eq.~(\ref{MasPhisq}) arises from the operator $\displaystyle -\frac{y_1^3y_2^3y_3}{2M^3M'^3}\, \bar{\psi}_L \Phi^6 \varphi e_R $; its contribution to the tau mass is $\delta m_\tau/m_\tau = (m_\tau/v_{\rm ew})^2 \approx 5\times 10^{-5}$. This translates to a correction to Koide's relation of $3\times 10^{-6}$, since there is an additional suppression factor due to the fact that $m_e,m_\mu\ll m_\tau$.\footnote{ Note that, in the limit $m_e,m_\mu \to 0$, the direction of $(\sqrt{m_e},\sqrt{m_\mu},\sqrt{m_\tau})$ becomes unaffected by a correction to $m_\tau$.
} Thus, this is an example of an underlying mechanism that generates the operator $\cal O$ without generating higher-dimensional operators conflicting with the current experimental bound.\footnote{ Since in this example $M'$ is not large, it cannot be regarded as a ``see-saw mechanism''. Nevertheless, we may still construct an EFT in which the fermions $H_{L,R}$ and $H'_{L,R}$ have been integrated out. } If we introduce even more (non--SM) fermions to generate the leading--order operator $\cal O$, we can always find a pattern of the spectrum of these fermions for which higher--dimensional operators are sufficiently suppressed, since the number of adjustable parameters increases. In general, the sizes of higher-dimensional operators depend heavily on the underlying dynamics above the cut-off scale. (See \cite{Ma:2006ht} for another example of an underlying mechanism.) Let us restrict ourselves within EFT. If we introduce only the operator $\cal O$, by definition this is the only contribution to the charged lepton spectrum at tree level. Whether loop diagrams induce higher-dimensional operators which violate Koide's relation is an important question, and a detailed analysis is necessary. This is the subject of the present study, where the result depends on how Koide's formula is satisfied and how the charged lepton spectrum is determined, even within EFT. The conclusion is as follows. Within the model to be discussed in Secs.~3--5, the class of 1-loop diagrams shown in Fig.~\ref{1LoopAnalyInEFT} does not generate operators that sizably violate Koide's relation; see Sec.~5. (There is another type of 1-loop diagram that possibly cancels the QED correction; see Sec.~3.) We do not find any loop-induced higher-dimensional operators which violate Koide's relation in conflict with the current experimental bound.
\begin{figure}[t]\centering \psfrag{Phi}{\hspace{0mm}$\Phi$} \psfrag{psiL}{\hspace{0mm}$\psi_L$} \psfrag{eR}{\hspace{0mm}$e_R$} \includegraphics[width=13cm]{1LoopAnalyInEFT.eps} \caption{\small EFT 1-loop diagrams which generate higher-dimensional operators contributing to the charged lepton spectrum. The dashed line represents $\Phi$; $\otimes$ represents the higher-dimensional operator which generates the charged lepton masses at tree level [corresponding to ${\cal O}$ of eq.~(\ref{higherdimopO})]. \label{1LoopAnalyInEFT}} \end{figure} Concerning the corrections (ii), we will introduce specific family gauge symmetries and their breaking patterns such that, first, the relation (\ref{relvi}) is satisfied at tree level and, second, the corrections (ii) are suppressed. Since the above example of an underlying mechanism that suppresses higher--dimensional operators is simple and fairly general, and since suppression of induced $1/\Lambda^n$ corrections within EFT provides a non--trivial cross check of theoretical consistency, we believe that our approach based on EFT has a certain justification and would be useful as a basis for considering more fundamental models. \section{Cancellation of QED corrections} \setcounter{footnote}{0} In this section, we consider a $U(3)$ family gauge symmetry and examine the radiative correction to Koide's formula induced by the family gauge interaction. We denote the generators of the fundamental representation of $U(3)$ by $T^\alpha$ ($0\leq\alpha\leq 8$), which satisfy \begin{eqnarray} {\rm tr}\left(T^\alpha T^\beta\right)=\frac{1}{2}\, \delta^{\alpha\beta} ~~~;~~~ T^\alpha = {T^\alpha}^\dagger \, . \label{U3generators} \end{eqnarray} $T^0=\frac{1}{\sqrt{6}}{\bf 1}$ is the generator of $U(1)$, while $T^a$ ($1\leq a \leq 8$) are the generators of $SU(3)$. This fixes the normalization of the $U(1)$ charge.
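The normalization condition eq.~(\ref{U3generators}) is easy to verify numerically; the sketch below builds $T^0,\ldots,T^8$ assuming the standard Gell-Mann basis for the $SU(3)$ part ($T^a=\lambda^a/2$), which is our illustrative choice:

```python
import numpy as np

# Gell-Mann matrices lambda^1..lambda^8 in the standard convention.
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1], lam[1][1, 0] = -1j, 1j
lam[2][0, 0], lam[2][1, 1] = 1, -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2], lam[4][2, 0] = -1j, 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2], lam[6][2, 1] = -1j, 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)

# T^0 = 1/sqrt(6) * identity, T^a = lambda^a / 2 for a = 1..8.
T = [np.eye(3) / np.sqrt(6)] + [l / 2 for l in lam]

# Gram matrix of traces: should equal delta^{alpha beta} / 2,
# and every generator should be Hermitian.
gram = np.array([[np.trace(A @ B).real for B in T] for A in T])
```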
We assign $\psi_L$ to the representation $({\bf 3},1)$, where $\bf 3$ stands for the $SU(3)$ representation and 1 for the $U(1)$ charge, while $e_R$ is assigned to $(\bar{\bf 3},-1)$. Under $U(3)$, the 9--component field $\Phi$ transforms as three $({\bf 3},1)$'s. $\varphi$ is singlet under $U(3)$. Explicitly the transformations of these fields are given by \begin{eqnarray} \psi_L \to U \, \psi_L \, , ~~~ e_R \to U^* \, e_R \, , ~~~ \Phi \to U \, \Phi \, , ~~~ \varphi \to \varphi \label{U3transf} \end{eqnarray} with $U = \exp \left(i\theta^\alpha T^\alpha\right)$. We consider \begin{eqnarray} {\cal O}_1= \frac{\kappa_1(\mu)}{\Lambda^2}\, \bar{\psi}_{L}\, \Phi\, \Phi^T\, \varphi \, e_{R} \, \label{exampleO1} \end{eqnarray} as the higher-dimensional operator which generates the charged lepton spectrum at tree level [corresponding to $\cal O$ of eq.~(\ref{higherdimopO})]. It is invariant under the above $U(3)$ gauge symmetry, whereas $\cal O$ is not, since we assigned $\psi_L$ and $e_R$ to mutually conjugate representations. In fact, ${\cal O}_1$ is invariant under a larger symmetry $U(3)\times O(3)$, under which $\Phi$ transforms as $\Phi\to U\Phi O^T$ ($O\,O^T = {\bf 1}$). In this section we ignore the $O(3)$ symmetry and focus on the $U(3)$ gauge symmetry.\footnote{ For definiteness, one may assume that the $O(3)$ symmetry is gauged and spontaneously broken at a high energy scale before the breakdown of the $U(3)$ symmetry. } When $\Phi$ acquires a VEV\footnote{We assume that $\Phi$ can be brought to a diagonal form by $U(3)\times O(3)$ transformation.} \begin{eqnarray} \langle \Phi(\mu) \rangle = \left(\begin{array}{ccc} v_1(\mu)&0&0\\ 0&v_2(\mu)&0\\ 0&0&v_3(\mu) \end{array}\right) , \label{VEVPhiatmu} \end{eqnarray} and if all $v_i$ are different, $U(3)$ symmetry is completely broken by $\langle \Phi \rangle$, and the spectrum of the $U(3)$ gauge bosons is determined by $v_i$. 
With all this setup, we may compute the radiative corrections to the pole masses induced by the family gauge interactions; see Fig.~\ref{RadcorrByFamilyGB}. \begin{figure}[t]\centering \includegraphics[width=7cm]{RadcorrByFamilyGB.eps} \caption{\small Diagram for the 1-loop correction to the charged lepton pole mass induced by exchange of family gauge bosons. \label{RadcorrByFamilyGB}} \end{figure} It turns out that the radiative corrections to the pole masses have the same form as the QED corrections eq.~(\ref{QED1Lcorr}) with opposite sign: \begin{eqnarray} \delta m^{\rm pole}_i = -\frac{3\,\alpha_F}{8\,\pi}\left[ \log\left( \frac{\mu^2}{v_i(\mu)^2} \right) + c \right] \, {m}_i(\mu) \, , \label{alphaFcorr} ~~~~~~~ {m}_i(\mu) = \frac{\kappa_1(\mu)\,v_{\rm ew}}{\sqrt{2}\Lambda^2} \, v_i(\mu)^2 \, . \label{mmu} \end{eqnarray} Here, $c$ is a constant independent of $i$. $\alpha_F = g_F^2/(4\pi)$ denotes the coupling constant of the $U(3)$ gauge symmetry, where we assume that the couplings of $U(1)$ and $SU(3)$ are common.\footnote{ One may worry about the validity of the assumption of the universality of the $U(1)$ and $SU(3)$ couplings, since the two couplings are renormalized differently in general. The universality can be ensured approximately if these two symmetry groups are embedded into a simple group down to a scale close to the relevant scale. There is more than one way to achieve this. A simplest way would be to embed $SU(3)\times U(1)$ into $SU(4)$ \cite{Sumino:2008hu}. } The Wilson coefficient $\kappa_1(\mu)$ is defined in the $\overline{\rm MS}$ scheme. $v_i(\mu)$ are defined as follows: The VEV of $\Phi$ at renormalization scale $\mu$, $\langle \Phi(\mu)\rangle$ given by eq.~(\ref{VEVPhiatmu}), is determined by minimizing the 1--loop effective potential in Landau gauge (the explicit form of the effective potential will be discussed in the next section); $\Phi$ is renormalized in the $\overline{\rm MS}$ scheme.
We ignored terms suppressed by ${m}_i^2/v_j^2(\ll 1)$ in the above expression. Some important features are as follows. \begin{enumerate} \item The sign is opposite to that of the QED correction eq.~(\ref{QED1Lcorr}), which results from the fact that $\psi_L$ and $e_R$ have the same QED charges but mutually conjugate (opposite) $U(3)$ charges. \item Suppose the relation (\ref{relvi}) is satisfied at tree level, such that Koide's formula is satisfied. Then there is no ${\cal O}(\alpha_F)$ correction to this relation. (Recall that the correction to the 1-loop effective potential in Landau gauge \`{a} la Coleman-Weinberg is ${\cal O}(\alpha_F^2)$.) \item The characteristic form of the radiative corrections eq.~(\ref{alphaFcorr}) is determined by the fact that ${\cal O}_1$ is multiplicatively renormalized, and also by the symmetry breaking pattern $U(3)\to U(2)\to U(1)\to \mbox{nothing}$. \item In the case that $\alpha=\frac{1}{4}\alpha_F$, the radiative corrections by family gauge interactions and the QED corrections to Koide's mass formula cancel for arbitrary $\mu$. \end{enumerate} Let us add some explanation on feature 3. \begin{figure}[t]\centering \includegraphics[width=14cm]{DoubleLineDiagram1.eps} \vspace{-2mm}\\ (a)\vspace{2mm}\\ \includegraphics[width=13cm]{DoubleLineDiagram2.eps} \vspace{-2mm}\\ (b) \caption{\small 1--loop diagrams contributing to $\delta m_i^{\rm pole}$, (a) in the case that $\psi_L$ and $e_R$ are in the same representation of $SU(3)$ or $O(3)$, i.e.\ $\psi_L:({\bf 3},Q)$ and $e_R:({\bf 3},Q')$, and (b) in the case that $\psi_L$ and $e_R$ are in the conjugate representations of $U(3)$, i.e.\ $\psi_L:({\bf 3},1)$ and $e_R:(\bar{\bf 3},-1)$. The right--hand sides show the flows of family charge. In (a), the closed loop of the family charge flow represents ${\rm tr}\bigl(\Phi\Phi^{(\dagger)}\bigr)$.
In (b), the charge flow is connected in one line, which has the same form as the tree diagram, showing multiplicative renormalization of the operator ${\cal O}_1$. \label{DoubleLineDiagram}} \end{figure} The operator ${\cal O}_1$ is the only dimension-6 operator invariant under $U(3)\times O(3)$, so it should be renormalized multiplicatively. In this regard, a pedagogical comparison is shown in Figs.~\ref{DoubleLineDiagram}(a) and (b). Had we chosen the same representation for $\psi_L$ and $e_R$ under a family symmetry such as $SU(3)$ or $O(3)$, the dimension-4 operator $\bar{\psi}_{Li} \varphi e_{Ri}$ would be allowed by symmetry \cite{Antusch:2007re}. In fact, the 1--loop diagram shown in Fig.~\ref{DoubleLineDiagram}(a) induces an effective operator \begin{eqnarray} {\cal O} '\sim \frac{\alpha_F}{\pi}\times{\kappa}\, \bar{\psi}_{Li} \, \varphi \, e_{Ri} \times \frac{\langle\Phi\rangle_{jk} \langle\Phi^{(\dagger)}\rangle_{kj} }{\Lambda^2} \, , \end{eqnarray} hence corrections universal to all the charged--lepton masses, $(\delta m_e, \delta m_\mu, \delta m_\tau) \propto (1,1,1)$, are induced. This correction changes the direction of $(\sqrt{m_e},\sqrt{m_\mu},\sqrt{m_\tau})$ and violates Koide's formula rather strongly; moreover, the correction depends on the cut-off $\Lambda$ of the loop integral. In order for the correction to Koide's formula to cancel the QED correction, a naive estimate shows that $\alpha_F/\pi$ should be of order $10^{-5}$, provided that $\Lambda$ is not too large. By contrast, in the case that $\psi_L$ and $e_R$ are assigned to the conjugate representations of $U(3)$, the charge flow is connected in one line, so that it has the same charge flow structure as the tree graph. The form $\sim \log\mu \times m_i$ of the 1-loop correction can be understood in this way. The $v_i^2$ in the argument of the log in eq.~(\ref{alphaFcorr}) stem from the IR cut-off of the loop integral, namely from the masses of the family gauge bosons.
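The cancellation noted in feature 4 can be made explicit by isolating the $i$-dependent parts of the two corrections. Writing the QED correction in its standard 1-loop form (an assumption here, since eq.~(\ref{QED1Lcorr}) is not reproduced above) and using $m_i \propto v_i^2$ from eq.~(\ref{mmu}), \begin{eqnarray} \frac{\delta m^{\rm QED}_i}{m_i} = \frac{3\,\alpha}{4\,\pi}\left[ \log\left( \frac{\mu^2}{m_i^2} \right) + c' \right] \supset -\frac{3\,\alpha}{2\,\pi}\,\log m_i \, , ~~~~~~~ \frac{\delta m^{\rm pole}_i}{m_i} \supset +\frac{3\,\alpha_F}{8\,\pi}\,\log m_i \, . \end{eqnarray} The $i$-dependent logarithms cancel if and only if $\frac{3\,\alpha}{2\,\pi}=\frac{3\,\alpha_F}{8\,\pi}$, i.e.\ $\alpha=\frac{1}{4}\alpha_F$; the remaining $i$-independent pieces rescale all three masses by a common factor and do not change Koide's ratio.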
As $v_3>v_2>v_1>0$ are successively turned on, family symmetry breaks according to the pattern $U(3)\to U(2)\to U(1)\to \mbox{nothing}$. Gauge bosons corresponding to broken generators decouple at each stage, and their masses enter the argument of the log as the IR cut-off. The form of eq.~(\ref{alphaFcorr}) is essentially determined by this symmetry breaking pattern. (See \cite{Sumino:2008hy} for a more precise argument.) The same symmetry breaking pattern resides in the QED Lagrangian: as $m_\tau > m_\mu >m_e>0$ are successively turned on, chiral symmetry breaks according to $U(3)\to U(2)\to U(1)\to \mbox{nothing}$. Essentially this is the reason why the two radiative corrections have the same form. \begin{figure}[t]\centering \begin{tabular}{ccc} \includegraphics[width=7cm]{RunningCouplings1.eps}&~~~& \includegraphics[width=7cm]{RunningCouplings2.eps}\\ (a)&&(b) \end{tabular} \caption{\small Inverse of the three gauge couplings of the SM $\alpha_1^{-1},\alpha_2^{-1},\alpha_3^{-1}$ vs.\ $\log_{10}(\mu/{\rm GeV})$: (a) from the electroweak scale to the GUT scale, and (b) from the electroweak scale up to $10^3$~TeV. The green line corresponds to the relevant coupling $\alpha_W$. The shaded band shows the allowed variation range of the unification scale in order to meet the present experimental accuracy of Koide's formula. \label{RunningCouplings}} \end{figure} Now we speculate on a possible scenario in which the relation (feature 4) \begin{eqnarray} \alpha\approx\frac{1}{4}\alpha_F \label{relalphas} \end{eqnarray} may be satisfied. In fact, this relation should hold to within 1\% accuracy in order that Koide's relation be satisfied within the present experimental bound. As already discussed, the scale of $\alpha$ is determined by the charged lepton masses, while the scale of $\alpha_F$ is determined by the family gauge boson masses, which should be much higher than the electroweak scale.
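For orientation, one can estimate with one-loop SM running the scale at which $\frac{1}{4}\,\alpha_W(\mu)$ equals $\alpha(m_\tau)$. The sketch below is not taken from the text; the input values ($\alpha_2^{-1}(M_Z)$, $\alpha^{-1}(m_\tau)$, and the one-loop coefficient $b_2=19/6$) are assumed, PDG-like numbers:

```python
import math

# One-loop SM running of the SU(2)_L coupling (sketch; all inputs are
# assumed PDG-like values, not taken from the text):
#   alpha_2^{-1}(mu) = alpha_2^{-1}(M_Z) + b_2/(2 pi) * ln(mu/M_Z),  b_2 = 19/6
M_Z = 91.19              # GeV
alpha2_inv_MZ = 29.6     # assumed alpha_W^{-1} at M_Z
alpha_inv_mtau = 133.3   # assumed inverse QED coupling at m_tau
b2 = 19.0 / 6.0

# Solve (1/4) alpha_W(mu) = alpha(m_tau), i.e. alpha_2^{-1}(mu) = alpha^{-1}(m_tau)/4
target = alpha_inv_mtau / 4.0
log_ratio = (target - alpha2_inv_MZ) * 2.0 * math.pi / b2
mu = M_Z * math.exp(log_ratio)      # in GeV
print(f"1/4 alpha_W(mu) = alpha(m_tau) at mu ~ {mu / 1e3:.0f} TeV")
```

With these assumed inputs the matching scale comes out in the $10^2$--$10^3$~TeV window quoted in the text.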
Since the relevant scales of the two couplings are very different, we are unable to avoid assuming some accidental factor (or parameter tuning) to achieve this condition. Instead we seek indirect evidence which indicates that such an accident has occurred in Nature. The relation (\ref{relalphas}) shows that the value of $\alpha_F$ is close to that of the weak gauge coupling constant $\alpha_W$, since $\sin^2\theta_W(M_W)$ is close to $1/4$. In fact, within the SM, $\frac{1}{4}\,\alpha_W(\mu)$ approximates $\alpha(m_\tau)$ at the scale $\mu \sim 10^2$--$10^3$~TeV. Hence, if the electroweak $SU(2)_L$ gauge group and the $U(3)$ family gauge group are unified around this scale, naively we expect that $ \alpha \approx \frac{1}{4}\, \alpha_F $ is satisfied. Since $\alpha_W$ runs relatively slowly in the SM, even if the unification scale is varied within a factor of 3, Koide's mass formula is satisfied within the present experimental accuracy. This shows the level of parameter tuning required in this scenario. Figs.~\ref{RunningCouplings}(a) and (b) show the running of the inverse of the three gauge couplings of the SM. The former figure is the well-known running from the electroweak scale up to the GUT scale. The latter figure shows the same running up to $10^3$~TeV. The shaded band shows the allowed variation range (a factor of 3) of the unification scale, which is limited by the present experimental accuracy of Koide's formula. \section{Potential predicting Koide's formula and realistic lepton spectrum} \setcounter{footnote}{0} In this section we study how the relation (\ref{relvi}) can be satisfied by the VEV of $\Phi$. For this purpose, we introduce a model of the charged lepton sector based on $U(3)\times O(3)$ family gauge symmetry, within an EFT valid at scales below $\Lambda$. The choice of this family symmetry is motivated by the fact that this is the largest symmetry possessed by the operator ${\cal O}_1$ analyzed in the previous section.
In particular, in this section we focus on the potential of scalar fields. In the first place, we find that the $U(3)\times O(3)$ family symmetry is not sufficiently restrictive. Namely, since this symmetry does not constrain the potential of $\Phi$ sufficiently, one needs to tune the parameters of the potential in order to realize the relation (\ref{relvi}). We need some symmetry enhancement in order to realize this relation without fine tuning. \begin{figure}[t]\centering \psfrag{45}{\hspace{0mm}$45^\circ$} \psfrag{Phi0}{\hspace{0mm}$\Phi^0T^0$} \psfrag{Phia}{\hspace{0mm}$\Phi^aT^a$} \psfrag{Phimu}{\hspace{0mm}$\Phi^\alpha T^\alpha$} \includegraphics[width=6cm]{PhiIn9dimSp.eps} \caption{\small Geometrical interpretation of \protect{eq.~(\ref{KFrelation})}. \protect{Eq.~(\ref{U3generators})} defines the inner product in a 9--dimensional real vector space spanned by the basis $\{ T^\alpha \}$. Since $\Phi^0 T^0$, $\Phi^a T^a$ and $\Phi=\Phi^\alpha T^\alpha$ form an isosceles right triangle, the angle between $T^0$ and $\Phi$ is $45^\circ$. This is Koide's formula in the basis where $\Phi$ is diagonal. \label{GeometricInt}} \end{figure} In order to find an appropriate larger symmetry, we recall the conditions equivalent to the relation (\ref{relvi}). Let us express components of $\Phi$ using $T^\alpha$, introduced in eq.~(\ref{U3generators}), as the basis: \begin{eqnarray} \Phi = \Phi^\alpha \, T^\alpha \, . \end{eqnarray} In general $\Phi^\alpha$ take complex values. As shown by Koide \cite{Koide:1989jq}, if \begin{eqnarray} (\Phi^0)^2= \Phi^a\, \Phi^a ~~~~~;~~~~~ \Phi^\alpha \in {\bf R}\, \label{KFrelation} \end{eqnarray} are satisfied, the relation (\ref{relvi}) is satisfied by the eigenvalues of $\Phi$. There is a geometrical interpretation of these conditions in terms of a real vector in a 9-dimensional space; see Fig.~\ref{GeometricInt}. This picture indicates that a symmetry associated with this 9-dimensional space may be relevant. 
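As a quick numerical check of the conditions in eq.~(\ref{KFrelation}), one can verify Koide's formula and the equivalent $45^\circ$ condition directly with the measured masses. The sketch below uses assumed, PDG-like mass values (in MeV), which are not taken from the text:

```python
import math

# Assumed PDG-like charged lepton masses in MeV (illustrative inputs)
m_e, m_mu, m_tau = 0.5109989, 105.6583745, 1776.86
v = [math.sqrt(m) for m in (m_e, m_mu, m_tau)]   # v_i ~ sqrt(m_i)

# Koide's formula: (m_e + m_mu + m_tau) / (sum_i sqrt(m_i))^2 = 2/3
Q = (m_e + m_mu + m_tau) / sum(v) ** 2

# Equivalent 45-degree condition: the angle between (v_1, v_2, v_3)
# and (1, 1, 1) satisfies cos(theta) = 1/sqrt(2)
cos_theta = sum(v) / math.sqrt(3.0 * sum(vi * vi for vi in v))

print(f"Q = {Q:.6f}  (2/3 = {2/3:.6f})")
print(f"cos(theta) = {cos_theta:.6f}  (1/sqrt(2) = {1/math.sqrt(2):.6f})")
```

For these inputs both quantities agree with their Koide values to better than $10^{-4}$.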
Motivated by this picture, we adopt $SU(9)\times U(1)$ as an enhanced symmetry, where the $SU(9)\times U(1)$ transformation is given by $\Phi^\alpha \to U_9^{\alpha\beta}\,\Phi^\beta$ ($U_9$ is a 9-by-9 unitary matrix). Then we assume the following symmetry breaking scenario: Above the cut--off scale $\Lambda$ there is an $SU(9)\times U(1)$ gauge symmetry, and this symmetry is spontaneously broken to $U(3)\times O(3)$ below the cut--off scale; see Fig.~\ref{SymEnhanceSenario}. One can check that indeed $U(3)\times O(3)$ is a subgroup of $SU(9)\times U(1)$. \begin{figure}[t]\centering \includegraphics[width=8cm]{SymEnhanceSenario.eps} \caption{\small Symmetry breaking pattern assumed in our scenario. \label{SymEnhanceSenario}} \end{figure} In what follows, we do not discuss any specific model at scales above $\Lambda$ but only assume restoration of this larger symmetry. Within this scenario, we still need to introduce an additional scalar field $X$ in order to realize a desirable vacuum configuration. Under $SU(9)\times U(1)$, $X$ is in the representation $({\bf 45},Q_X)$ (the ${\bf 45}$ is the second--rank symmetric representation) and is unitary. It can be represented by a 9--by--9 unitary symmetric matrix: \begin{eqnarray} X^{\alpha\beta}=X^{\beta\alpha}, ~~~~~ X^{\alpha\gamma}\,{X^{\beta\gamma}}^* = \delta^{\alpha\beta} ~~~~~;~~~~~ X^{\alpha\beta}\to {U}_9^{\alpha\rho}\,X^{\rho\sigma}\,{U}_9^{\beta\sigma} \, . \label{transfX} \end{eqnarray} Note that the unitarity condition is compatible with the symmetry transformation. We have analyzed the potential of $\Phi$ and $X$ that is invariant under $U(3)\times O(3)$ and consistent with the assumed symmetry breaking pattern shown schematically in Fig.~\ref{SymEnhanceSenario}. The upshot is that, in a finite region of the parameter space of the potential, Koide's relation is satisfied by the eigenvalues of $\langle \Phi \rangle$.
Furthermore, the eigenvalues can be made consistent with the experimental values of the charged lepton masses without fine tuning of parameters. In the rest of this section we briefly sketch the argument. (See \cite{Sumino:2008hy} for details.) Operators in the potential which are invariant under $SU(9)\times U(1)$ read \begin{eqnarray} && \tilde{V}_\Phi = -m^2\,{\Phi^\alpha}^*\Phi^\alpha +\lambda\, ({\Phi^\alpha}^*\Phi^\alpha)^2 + \cdots\, , \\ && \tilde{V}_X = {\rm const.}\, , \\ && \tilde{V}_{\Phi X}= \varepsilon_K \, \bigl| \Phi^\beta \, {X^{\beta\gamma}}^* \, \Phi^\gamma \bigr|^{2} + \cdots \, , \end{eqnarray} where only some representative terms have been shown explicitly. In particular, we take into account operators with dimensions higher than 4, although they are not shown explicitly. That $\tilde{V}_X={\rm const.}$ follows from the fact that $X$ is unitary and has a non-vanishing $U(1)$ charge. Similarly operators invariant under $U(3)\times O(3)$ but non-invariant under $SU(9)\times U(1)$ read \begin{eqnarray} && V_\Phi = g_1\,{\rm tr}(\Phi^\dagger\,\Phi\,\Phi^\dagger\,\Phi) +g_2\, {\rm tr}(\Phi\,\Phi^T\,\Phi^*\,\Phi^\dagger) + \cdots \, , \\ && V_X = h_1\,{\rm tr}(T^\alpha\,T^\rho\,T^\beta\,T^\sigma)\, X^{\alpha\beta}\,{X^{\rho\sigma}}^* \, + \cdots\, , \\ && V_{\Phi X}= \cdots \, . \end{eqnarray} After examination, we find that, in a certain parameter region, the global minimum of $V_X$ is located at the configuration\footnote{ To be more accurate, one needs to impose either approximate or (spontaneously-broken) exact $CP$ symmetry in addition to $U(3)\times O(3)$ symmetry. 
} \begin{eqnarray} \langle X^{\alpha\beta}\rangle = {\rm diag.} (-1,\underbrace{+1,\cdots,+1}_{{8}}) =-2\,\delta^{\alpha 0}\delta^{\beta 0}+\delta^{\alpha\beta}\, , \end{eqnarray} and $\tilde{V}_{\Phi X}$ is minimized in the case \begin{eqnarray} \Phi^\beta \, {X^{\beta\gamma}}^* \, \Phi^\gamma=0\, , \end{eqnarray} as may be inferred from the term of $\tilde{V}_{\Phi X}$ shown explicitly. One observes that if the above two equations are combined, Koide's relation \begin{eqnarray} (\Phi^0)^2= \Phi^a\, \Phi^a \end{eqnarray} follows. (Reality of $\Phi$ can also be derived from $\tilde{V}_{\Phi}$ and $V_{\Phi X}$, but we skip explanation of this part.) Let us first give an argument which is not very solid but more illustrative. The above observation shows that if there is a hierarchy of parameters given by \begin{eqnarray} \tilde{m}^2,\lambda,\varepsilon_K, \tilde{h}_1 \gg g_1, g_2 \, , \label{HierarchyOfParam} \end{eqnarray} while ignoring all the other parameters, Koide's mass formula will be satisfied approximately. Here, $\tilde{m}^2$ and $\tilde{h}_1$ denote dimensionless couplings defined from $m^2$ and $h_1$, respectively, after rescaling them by an appropriate mass scale (e.g.\ $v_3$). Furthermore, we find that if there is an additional hierarchy given by \begin{eqnarray} \tilde{m}^2,\lambda,\varepsilon_K, \tilde{h}_1 \gg g_2\gg g_1 \, , \label{HierarchyOfParam2} \end{eqnarray} a realistic charged lepton spectrum $v_1:v_2:v_3 = \sqrt{m_e}:\sqrt{m_\mu}:\sqrt{m_\tau}$ can be explained without fine tuning. (This follows from an explicit computation, and we do not know any other simple explanation.) Naively one may expect that the parameters in the $SU(9)\times U(1)$ non-invariant operators $g_1,g_2,\tilde{h}_1$ are suppressed as compared to the parameters in the $SU(9)\times U(1)$ invariant operators $\tilde{m}^2,\lambda,\varepsilon_K$, since the former parameters are generated only through spontaneous symmetry breaking at the cut-off scale. 
But this is not necessarily the case for parameters such as $\tilde{h}_1$ that are originally dimensionful parameters. Let us speculate on a possible underlying mechanism that would lead to a hierarchy of potential parameters given by eq.~(\ref{HierarchyOfParam2}). Suppose that the symmetry breaking $SU(9)\times U(1)\to U(3)\times O(3)$ is induced by a condensate of a scalar field $T^{\alpha\beta}_{\rho\sigma}$, which is a 4th-rank tensor under $SU(9)$. Indeed if $\langle T^{\alpha\beta}_{\rho\sigma} \rangle \sim {\rm tr}(T^\alpha{T^\beta}^*{T^\rho}^*T^\sigma)$, this symmetry breaking takes place.\footnote{ By analyzing the potential of $T^{\alpha\beta}_{\rho\sigma}$ up to quartic terms, we have checked that in a certain parameter region of the potential, $\langle T^{\alpha\beta}_{\rho\sigma} \rangle \sim {\rm tr}(T^\alpha{T^\beta}^*{T^\rho}^*T^\sigma)$ becomes a local minimum of the potential. (We were unable to clarify whether this can be a global minimum, due to technical complexity.) } \begin{figure}[t]\centering \includegraphics[width=15cm]{SpeculationGenPot.eps} \caption{\small Speculation on underlying physics that may generate $SU(9)\times U(1)$ non-invariant operators. \label{SpeculationGenPot}} \end{figure} Through the first diagram shown in Fig.~\ref{SpeculationGenPot}, the operator $g_2\, {\rm tr}(\Phi\,\Phi^T\,\Phi^*\,\Phi^\dagger)$ may be induced; the double line denotes a heavy degree of freedom with an $SU(9)\times U(1)$-invariant mass scale $M$. Since $\langle T\rangle \sim {\cal O}(\Lambda)$, the coefficient $g_2\sim \Lambda/M$ would be a small parameter provided $M\gg \Lambda$. $g_1$ is even more suppressed, since the operator $g_1\,{\rm tr}(\Phi^\dagger\,\Phi\,\Phi^\dagger\,\Phi)$ cannot be generated by a single insertion of $\langle T\rangle$ at tree-level. Either two insertions of $\langle T\rangle$ or a loop correction is necessary, which leads to additional suppression factors. 
The second diagram in Fig.~\ref{SpeculationGenPot} would induce the operator $h_1\,{\rm tr}(T^\alpha\,T^\rho\,T^\beta\,T^\sigma)\, X^{\alpha\beta}\,{X^{\rho\sigma}}^*$ (together with other operators). Since there is no intermediate heavy degree of freedom, the induced coupling $h_1$, when normalized by $\Lambda$, would be of order 1. We turn to a more solid argument. It is legitimate to identify the above potential with the effective potential (including loop corrections and in Landau gauge). Recall that, according to the argument in Sec.~3, we can connect the pole masses of the charged leptons to the VEV of $\Phi$, which is determined from the effective potential renormalized at an arbitrary scale $\mu$.\footnote{ This is a consequence of the fact that $\Phi$ is renormalized multiplicatively, namely the counterterms for all $v_i$ are common. More precisely, physically it is adequate to choose the scale $\mu$ to be larger than the family gauge boson masses in the $\overline{\rm MS}$ scheme. } For our purpose, it is most convenient to set the scale to $\mu=\Lambda$, since at this scale radiative corrections within the EFT essentially vanish, and the parameters of the effective potential are set by the boundary (initial) conditions derived from the theory above the cut-off scale. The following conclusions have been drawn from a detailed analysis of the general potential of $\Phi$ and $X$. In the case that certain hierarchical relations among the parameters of the potential are satisfied, both Koide's formula and a realistic charged lepton spectrum follow, consistent with the present experimental values. These hierarchical conditions on the parameters are a generalization of eq.~(\ref{HierarchyOfParam2}). Typical sizes of the required hierarchies of the potential parameters are of order $10^{-3}$--$10^{-4}$. These hierarchical relations are consistent with the assumed symmetry and symmetry enhancement.
Namely, those parameters which need to be suppressed are associated with $SU(9)\times U(1)$ non-invariant operators. Their values at the boundary $\mu=\Lambda$ are determined by the dynamics above the cut-off scale. On the other hand, up to now there exists no model of the scales above the cut-off, $\mu>\Lambda$, which leads to these hierarchical relations among the potential parameters. (Nothing more than the speculation given in Fig.~\ref{SpeculationGenPot} exists.) Finally we comment on how a realistic charged lepton spectrum follows without fine tuning of parameters. Koide's formula imposes one relation among the three charged lepton masses. Hence, apart from an overall dimensionful scale of the three masses, there remains a one-parameter degree of freedom. Suppose this degree of freedom is fixed by minimizing the operator $V_{\Phi 3}=g_2\, {\rm tr}(\Phi\,\Phi^T\,\Phi^*\,\Phi^\dagger)$. Then, $m_e/m_\tau$ is predicted to be 15\% away from the experimental value, and $m_\mu/m_\tau$ is predicted to be 1.5\% away from the experimental value. In other words, the values are already close to the true values, and the orders of magnitude of the mass ratios are predicted correctly. When any other $SU(9)\times U(1)$ non-invariant operators, which modify the potential minimum, are turned on, as long as the contributions of these operators are suppressed compared to $V_{\Phi 3}$, the values of $m_e/m_\tau$ and $m_\mu/m_\tau$ do not change significantly. Since Koide's relation is protected by the large couplings (e.g.\ $\tilde{m}^2,\lambda,\varepsilon_K, \tilde{h}_1$), and since Koide's relation is satisfied experimentally, with some small values of the parameters, $m_e/m_\tau$ and $m_\mu/m_\tau$ can be made to coincide with the experimental values. This is not regarded as fine tuning. The overall scale of the lepton masses is determined by the parameters of $\tilde{V}_{\Phi}$, such as $m^2$ and $\lambda$, and by $\kappa_1$ of the operator ${\cal O}_1$.
Since $v_3/\Lambda\hbox{ \raise3pt\hbox to 0pt{$>$}\raise-3pt\hbox{$\sim$} } 0.1$ is not extremely small, there is no fine tuning problem within the EFT for predicting the overall scale. \section{Summary and discussion} \setcounter{footnote}{0} Let us summarize our study. We have analyzed the role of family gauge symmetries in relation to the charged lepton spectrum and Koide's mass formula. The analysis is performed within an EFT valid below a cut-off scale $\Lambda$ and within the known scenario in which the charged lepton mass matrix is proportional to $\langle \Phi \rangle^2$ at leading order. Before describing the analysis, we made an argument to justify the usefulness of an EFT approach and to discuss why the EFT does not immediately run into problems from higher-order corrections in $1/\Lambda$. In the first part of our analysis, we studied radiative corrections generated by the $U(3)$ family gauge interaction. $U(3)$ family gauge symmetry has a unique property with respect to the radiative correction to Koide's formula. In fact, if $\psi_L$ and $e_R$ are assigned to mutually conjugate representations, the $U(3)$ radiative correction has the same form as the QED correction with opposite sign. In particular, if $\alpha=\frac{1}{4}\alpha_F$, both corrections cancel. We discussed this possibility within a scenario in which $U(3)$ family symmetry is unified with $SU(2)_L$ symmetry at the $10^2$--$10^3$~TeV scale. Some key aspects which led to the non-trivial form of the radiative corrections are as follows. (1) Multiplicative renormalizability of the operator ${\cal O}_1= \frac{\kappa}{\Lambda^2}\, \bar{\psi}_{L}\, \Phi\, \Phi^T\, \varphi \, e_{R}$ ensures that only logarithmic corrections to the lepton mass matrix appear; furthermore, multiplicative renormalizability of $\langle \Phi \rangle$ ensures that the correction to Koide's formula is independent of the renormalization scale $\mu$ of the effective potential.
(2) The symmetry breaking pattern $U(3) \to U(2)\to U(1)\to \mbox{(nothing)}\,$ essentially dictates how the IR cut-off enters the logarithmic correction at each stage of the symmetry breaking; this symmetry breaking pattern happens to be common to the family gauge sector and the QED sector. (3) We assumed that $\langle \Phi \rangle$ can be brought to a diagonal form by a symmetry transformation, and we also assumed the tree-level Koide relation for the diagonal elements: $ \frac{v_1(\mu)+v_2(\mu)+v_3(\mu)} {\sqrt{3\,[v_1(\mu)^2+v_2(\mu)^2+v_3(\mu)^2]}} =\frac{1}{\sqrt{2}} $. In the latter part of the analysis, we have examined how this relation among $v_i$ may be realized. We proposed a potential model within an EFT with a family symmetry $U(3)\times O(3)$. Motivated by a geometrical interpretation of Koide's relation (cf.\ Fig.~\ref{GeometricInt}), we further imposed symmetry enhancement to $SU(9)\times U(1)$ above the cut-off scale. We have introduced another scalar field $X$ and examined the general potential of $\Phi$ and $X$. In this manner, we were able to find a potential minimum which leads to Koide's formula and a realistic charged lepton spectrum. The potential parameters need to satisfy certain hierarchical relations at the boundary $\mu=\Lambda$ of the EFT, which are consistent with the symmetry requirements. We have speculated on underlying physics which may lead to the hierarchical relations. \medbreak There are many unsolved questions in the present analysis. The list is as follows. \begin{itemize} \item Quarks and neutrinos are not included in the analysis. In relation to this, with the fermion content discussed in this analysis, anomalies induced by the family gauge interactions do not cancel. \item $O(3)$ gauge symmetry needs to be broken spontaneously above the $U(3)$ symmetry breaking scale, in order to suppress mixing of the gauge bosons of the two gauge groups. We have not implemented a mechanism to achieve this.
\item The VEV $\langle \Phi\rangle$, which explains the realistic charged lepton spectrum, cannot be diagonalized by a $U(3)\times O(3)$ symmetry transformation. In order to realize the scenario for the cancellation of the QED correction, we need, for instance, to introduce another field $\Sigma$ and its potential with $\Phi$ \cite{Sumino:2008hy} and replace the operator ${\cal O}_1$ as $ \frac{\kappa}{\Lambda^2}\, \bar{\psi}_{L}\, \Phi\, \Phi^T\, \varphi \, e_{R} \to \frac{\kappa}{\Lambda^3}\, \bar{\psi}_{L}\, \Phi\, \Sigma\,\Phi^T\, \varphi \, e_{R} $. \item In the first part of our analysis, we considered unification of the $U(3)$ family symmetry and the $SU(2)_L$ weak symmetry. In the latter part, we assumed an embedding of $U(3)\times O(3)$ into $SU(9)\times U(1)$. How to make both scenarios compatible has not been addressed. It would require a large symmetry group above the cut-off scale. \item Fine tuning is required to stabilize scales that are small compared to the cut-off scale of the EFT, $\Lambda\sim 10^3$~TeV. These small scales are the VEVs $\langle \varphi \rangle$, (physical scale\footnote{ Since we normalized $X$ to be dimensionless in eq.~(\ref{transfX}), the physical scale of $X$ is determined by the normalization of the kinetic term $f_X^2 \partial^\mu {X^{\alpha\beta}}^* \partial_\mu X^{\alpha\beta}$. In order that the spectrum of $U(3)$ gauge bosons be determined mostly by $\langle \Phi \rangle$, a hierarchy $f_X,\langle \Sigma \rangle \ll v_1$ is required. } of) $\langle X \rangle$ and $\langle \Sigma \rangle$. This fine tuning problem is similar to that of the SM. \item Models above the cut-off scale are completely missing. \end{itemize} There seem to exist solutions to each of these problems (except for the last two), if we extend our model in a sufficiently complicated manner. On the other hand, it seems very difficult to solve all of them in a simple and unified way. We have made a non-trivial consistency check of our present analysis.
Using the potential of $\Phi$ and $X$, we have identified the mass eigenstates of the scalar fields at the vacuum; then we have computed the 1-loop radiative correction to the operator ${\cal O}'_1=\frac{\kappa}{\Lambda^3}\, \bar{\psi}_{L}\, \Phi\, \Sigma\,\Phi^T\, \varphi \, e_{R} $ induced by these scalars. This corresponds to incorporating the class of diagrams shown in Fig.~\ref{1LoopAnalyInEFT}. The correction to Koide's formula turned out to be quite suppressed and does not conflict with the present experimental bound. \medbreak In view of the many problems listed above, it seems quite unlikely that our model, as a whole, correctly describes the true mechanisms of generation of the charged lepton spectrum. Nevertheless, we suspect that some of the mechanisms which we proposed may reflect the true aspects of Nature. We consider the following feature particularly non-trivial: Not only Koide's formula but also $m_e/m_\tau$ and $m_\mu/m_\tau$ can be explained without fine tuning. Since Koide's relation treats $m_e,m_\mu,m_\tau$ symmetrically, in many models a hierarchical structure of the spectrum is difficult to realize compatibly with Koide's relation, without fine tuning of parameters. As a final remark, we note that some phenomenological implications of the present scenario have been discussed in \cite{Sumino:2008hy}. \section*{Acknowledgements} The author thanks K.~Tobe for discussion. The author is also grateful to K.~Fujii for his hospitality at KEK while part of this work was completed and for listening to the argument patiently.
\section{Introduction} Dempster-Shafer evidence theory\cite{dempster1967upper,shafer1976mathematical} has attracted more and more attention in recent years. It can handle uncertain and incomplete information in many fields, such as target recognition, information fusion and decision making\cite{denoeux2008conjunctive,dubois1988representation,heyounewmethod2011,han2011weighted1,han2011weighted,zhang2000new,pan2001some,he2012new,deng2011risk,deng2011new,deng2011new1,deng2010target,suo2013computational,tan2012data,geng2013consensus,wei2013identifying,gao2013modified,kang2012evidential,chen2013fuzzy}. When the evidence is highly conflicting, Dempster's combination rule can generate counter-intuitive results, such as the typical conflicting example proposed by Zadeh\cite{zadeh1986simple}. In the last decade researchers have proposed many approaches to cope with this open issue, and certain progress has been made. The existing methods can mainly be classified into two categories. The first strategy regards Dempster's combination rule as incomplete and modifies the combination rule, as in Yager's method\cite{yager1987dempster}, Smets' method\cite{smets1994transferable,smets1990combination} and Lefevre's method\cite{lefevre2002belief}. The second strategy holds that Dempster's rule has a sound theoretical foundation and preprocesses the original evidence before combination, as in Haenni's method\cite{haenni2002alternatives}, Murphy's method\cite{murphy2000combining} and Deng's method\cite{deng2004efficient}. We believe that Dempster's rule is sound and has been widely applied in recent years. In this paper, the strategy of preprocessing the original evidence in highly conflicting situations is adopted. The method proposed by Deng\cite{deng2004efficient} in 2004, based on the evidence distance, can deal with conflicting evidence so that the correct sensor can be quickly recognized.
The evidence distance used in Deng's method reflects the difference between bodies of evidence only roughly and cannot reflect the degree of difference. In this paper, we propose a new method that takes a weighted average of the evidence, improving Deng's method\cite{deng2004efficient}. The new method takes both the Jousselme\cite{jousselme2001new} and the Hausdorff\cite{hausdorff1957set} evidence distances into account. Thus, the weights of the evidence are more appropriate. The remainder of this paper is organized as follows. Section 2 presents some preliminaries. The proposed method is presented in Section 3. Numerical examples and applications are used to demonstrate the validity of the proposed method in Section 4. A short conclusion is drawn in the last section. \section{Preliminaries} In this section, some concepts of Dempster-Shafer evidence theory\cite{dempster1967upper,shafer1976mathematical} are briefly recalled. For more information please consult Ref.\cite{he2010information}. Dempster-Shafer evidence theory was introduced by Dempster and then developed by Shafer. In Dempster-Shafer evidence theory, let $\Theta =\left\{ {\theta _1 ,\theta _2 , \cdots ,\theta _n} \right\}$ be the finite set of mutually exclusive and exhaustive elements, called the frame of discernment. The theory is concerned with the set of all subsets of $\Theta$, the power set, which contains $2^{\left| \Theta \right|}$ elements and is denoted as \[\Omega = \left\{ {\emptyset ,\left\{ {{\theta _1}} \right\},\left\{ {{\theta _2}} \right\},\left\{ {{\theta _3}} \right\}, \cdots ,\left\{ {{\theta _n}} \right\},\left\{ {{\theta _1},{\theta _2}} \right\}, \cdots ,\left\{ {{\theta _1},{\theta _2}, \cdots ,{\theta _n}} \right\}} \right\}\] The mass function of evidence assigns belief to each subset of $\Theta$; it is also called a basic probability assignment (BPA) and satisfies the following conditions \[m(\phi ) = 0,0 \le m(A) \le 1,\sum\limits_{A \subseteq \Theta } {m(A) = 1}. \] Here $\phi$ denotes the empty set and $A$ is any subset of $\Theta$.
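Dempster's combination rule, recalled just below, can be sketched concretely by representing a BPA as a mapping from focal elements to masses. Applied to one common statement of Zadeh's conflicting example from the introduction, the rule assigns all belief to the minority hypothesis, the well-known counter-intuitive result (the representation and the numerical values here are illustrative assumptions, not taken from the text):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BPAs (dict: frozenset -> mass) by Dempster's rule."""
    combined, k = {}, 0.0
    for (B, mB), (C, mC) in product(m1.items(), m2.items()):
        inter = B & C
        if inter:                      # mass assigned to non-empty intersections
            combined[inter] = combined.get(inter, 0.0) + mB * mC
        else:                          # conflicting mass, collected in k
            k += mB * mC
    # normalize by 1 - k (assumes k < 1, i.e. the conflict is not total)
    return {A: v / (1.0 - k) for A, v in combined.items()}, k

# Zadeh-style highly conflicting BPAs (illustrative values)
A, B, C = frozenset({'A'}), frozenset({'B'}), frozenset({'C'})
m1 = {A: 0.99, B: 0.01}
m2 = {C: 0.99, B: 0.01}
m, k = dempster_combine(m1, m2)
print(f"conflict k = {k:.4f}, combined m(B) = {m[B]:.4f}")
```

Although each source assigns only mass $0.01$ to $B$, the combined BPA gives $m(B)=1$ because all other intersections are empty; the conflict coefficient is $k=0.9999$.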
Dempster's combination rule\cite{dempster1967upper,shafer1976mathematical} is the first combination rule within the framework of evidence theory; it combines two BPAs $m_1$ and $m_2$ to yield a new BPA $m$. Dempster's combination rule is presented as follows \begin{equation} m(A) = \frac{1}{{1 - k}}\sum\limits_{B \cap C = A} {{m_1}(B){m_2}(C)} \end{equation} with \begin{equation} k = \sum\limits_{B \cap C = \emptyset } {m_1 (B)m_2 (C)} \end{equation} where $k$ is a normalization constant, namely the conflict coefficient of the BPAs. \section{New combination approach} The method proposed by Murphy\cite{murphy2000combining} assigns each BPA the same role, irrespective of the relationships among the BPAs. In Deng's weighted method\cite{deng2004efficient}, each BPA plays a different role, depending on the extent to which it is accredited in the system. In Deng's method, the similarity between two BPAs is determined by the Jousselme distance function\cite{jousselme2001new}. \subsection{Two existing evidence distances} The evidence distance proposed by Jousselme\cite{jousselme2001new} is presented as follows \begin{definition} Let $m_1$ and $m_2$ be two BPAs defined on the same frame of discernment $\Theta$, containing $N$ mutually exclusive and exhaustive hypotheses. The metric $d_{BPA}$ can be defined as follows \begin{equation} \label{dpa123} d_{BPA} (m_1 ,m_2 ) = \sqrt {\frac{1}{2}\left( {{m_1 } - {m_2 } } \right)^T \underline{\underline D} \left( { {m_1 } - {m_2 } } \right)} \end{equation} \end{definition} ${\underline{\underline D} } $ is a $2^N \times 2^N $ similarity matrix, which quantifies the similarity between the focal elements of $m_1$ and $m_2$, where \begin{equation}\label{DJ} \underline{\underline D} (A,B) = \frac{{|A \cap B|}}{{|A \cup B|}} \end{equation} $\left| {A \cup B} \right|$ is the cardinality of the union of $A$ and $B$, where $A$ and $B$ may belong to the same BPA or come from different BPAs.
$\left| {A \cap B} \right|$ indicates the degree of overlap between elements $A$ and $B$; when two elements have no object in common, they are highly conflicting. Another evidence distance, proposed by Sunberg\cite{sunberg2013belief}, is presented as follows \begin{definition} Let $m_1$ and $m_2$ be two BPAs defined on the same frame of discernment $\Theta$, containing $N $ mutually exclusive and exhaustive hypotheses. The distance between the two BPAs, referred to as $d_{Haus}$, is defined as follows \begin{equation} d_{Haus} (m_1 ,m_2 ) = \sqrt {\frac{1}{2}\left( {{m_1 } - \ {m_2 } } \right)^T {D_{H}} \left( {{m_1 } - {m_2 } } \right)} \end{equation} \end{definition} with \begin{equation}\label{Dijhaus} D_{H(i,j)}=S_{H}(A_{i},A_{j})=\frac{1}{1+CH(A_{i},A_{j})} \end{equation} where $H(A_{i},A_{j})$ is the Hausdorff distance\cite{hausdorff1957set} between focal elements $A_{i}$ and $A_{j}$. $A_{i}$ and $A_{j}$ may belong to the same BPA or come from different BPAs. The positive number $C$ is a user-defined tuning parameter; in this paper, $C$ is set to 1 for simplicity. The Hausdorff distance is defined according to \begin{equation} \label{DHaus} {H}(A_{i},A_{j})=\max\{\sup_{b \in A_{i}}\inf_{c \in A_{j}}d(b,c), \sup_{c \in A_{j}}\inf_{b \in A_{i}}d(b,c)\} \end{equation} where $d(b,c)$ is the distance between two elements of the sets and can be any valid metric on the measurement space\cite{hausdorff1957set}. When the elements are real numbers, the Hausdorff distance simplifies to\cite{hausdorff1957set,sunberg2013belief} \begin{equation}\label{DHaus1} H_{R}(A_{i},A_{j})=\max\{|\min(A_{i})-\min(A_{j})|, |\max(A_{i})-\max(A_{j})|\} \end{equation} The following example illustrates the difference between the Jousselme distance\cite{jousselme2001new} and the Hausdorff distance\cite{hausdorff1957set}. \begin{example} There are five ordered, mutually exclusive and exhaustive hypotheses: 1, 2, 3, 4 and 5, on the same frame of discernment $\Theta$.
\end{example} By (\ref{DJ}), the Jousselme similarity matrix $\underline{\underline D}$ between the elements of a BPA can be obtained as follows \[\underline{\underline D} = \left[ {\begin{array}{*{20}{c}} 1&0&0&0&0\\ 0&1&0&0&0\\ 0&0&1&0&0\\ 0&0&0&1&0\\ 0&0&0&0&1 \end{array}} \right]\] Utilizing the Hausdorff distance in (\ref{Dijhaus}), the Hausdorff similarity matrix $D_H$ between the elements of a BPA can be obtained as follows \[{D_H} = \left[ {\begin{array}{*{20}{c}} 1&{\frac{1}{2}}&{\frac{1}{3}}&{\frac{1}{4}}&{\frac{1}{5}}\\ {\frac{1}{2}}&1&{\frac{1}{2}}&{\frac{1}{3}}&{\frac{1}{4}}\\ {\frac{1}{3}}&{\frac{1}{2}}&1&{\frac{1}{2}}&{\frac{1}{3}}\\ {\frac{1}{4}}&{\frac{1}{3}}&{\frac{1}{2}}&1&{\frac{1}{2}}\\ {\frac{1}{5}}&{\frac{1}{4}}&{\frac{1}{3}}&{\frac{1}{2}}&1 \end{array}} \right]\] Clearly, the five elements have no object in common, so in the Jousselme matrix the similarity between any two distinct elements is the same value, zero. In this case, the Jousselme matrix cannot show the detailed proximity of the elements of an ordered system, whereas the Hausdorff matrix does capture the detailed similarity between ordered elements. \subsection{New combination approach} In this subsection, we propose an improved combination approach based on Deng's method\cite{deng2004efficient}. The new method takes advantage of the Hausdorff distance\cite{hausdorff1957set} to update the Jousselme distance\cite{jousselme2001new}. \begin{definition} Let $m_1$ and $m_2$ be two BPAs defined on the same frame of discernment $\Theta$, containing $N $ mutually exclusive and exhaustive hypotheses.
The distance between $m_1$ and $m_2$ can be defined as \begin{equation}\label{newmethod} d_{Com} (m_1 ,m_2) = \sqrt {\frac{1}{2}\left( {{m_1 } - \ {m_2 } } \right)^T {D_{Com}} \left( {{m_1 } - {m_2 } } \right)} \end{equation} \end{definition} with the element-wise product \begin{equation}\label{d} {D_{Com}}{(i,j)} = {{\underline{\underline D}}{(i,j)}} \cdot {D_H}{(i,j)}\end{equation} $D_{Com}$ is a $2^N \times 2^N $ similarity matrix whose entries measure the similarity between the focal elements of $m_1$ and $m_2$; $\underline{\underline D}{(i,j)}$ is the matrix in (\ref{DJ}) and ${D_H}{(i,j)}$ is the matrix in (\ref{Dijhaus}). Given $n$ BPAs in the system, we can calculate the distance between each pair of BPAs. The distance matrix is then \begin{equation}\label{dm} DIM = \left[ {\begin{array}{*{20}{c}} 0&{{d_{12}}}& \cdots &{{d_{1j}}}& \cdots &{{d_{1n}}}\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ {{d_{i1}}}&{{d_{i2}}}& \cdots &{{d_{ij}}}& \cdots &{{d_{in}}}\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ {{d_{n1}}}&{{d_{n2}}}& \cdots &{{d_{nj}}}& \cdots &0 \end{array}} \right] \end{equation} where the diagonal entries vanish because the distance of a BPA from itself is zero. \begin{definition} Let $Simi({m_i},{m_j})$ be the similarity value between BPAs $m_{i}$ and $m_{j}$; it can be defined as \begin{equation} Simi({m_i},{m_j}) = 1 - {d_{Com}}({m_i},{m_j}) \end{equation} \end{definition} Obviously, the larger the distance between two BPAs, the smaller their similarity, and vice versa.
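The combined metric can be computed directly by implementing (\ref{Dijhaus}), (\ref{newmethod}) and (\ref{d}) literally. The sketch below does so for the five ordered singleton elements of Example 1 (with $C=1$); the two BPAs $m_1$, $m_2$ at the end are hypothetical and only illustrate the evaluation of $d_{Com}$:

```python
import math

elements = [{1}, {2}, {3}, {4}, {5}]   # singleton focal elements of Example 1

def jaccard(A, B):                     # Jousselme similarity |A n B| / |A u B|
    return len(A & B) / len(A | B)

def hausdorff(A, B):                   # Hausdorff distance for sets of reals
    return max(abs(min(A) - min(B)), abs(max(A) - max(B)))

n = len(elements)
D_J = [[jaccard(a, b) for b in elements] for a in elements]
D_H = [[1.0 / (1.0 + hausdorff(a, b)) for b in elements] for a in elements]  # C = 1
D_Com = [[D_J[i][j] * D_H[i][j] for j in range(n)] for i in range(n)]

def d_com(m1, m2):
    """d_Com = sqrt(0.5 (m1 - m2)^T D_Com (m1 - m2)), with BPAs as mass vectors."""
    d = [x - y for x, y in zip(m1, m2)]
    q = sum(d[i] * D_Com[i][j] * d[j] for i in range(n) for j in range(n))
    return math.sqrt(0.5 * q)

m1 = [0.6, 0.4, 0.0, 0.0, 0.0]         # hypothetical BPAs over the five singletons
m2 = [0.0, 0.4, 0.0, 0.0, 0.6]
print(D_H[0][1], D_H[0][4])            # 0.5 0.2, matching the D_H matrix above
# Note: for purely singleton focal elements the off-diagonal Jousselme
# similarities vanish, so here D_Com keeps only the diagonal of D_H.
print(round(d_com(m1, m2), 4))         # 0.6
```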
The similarity function can be represented by a matrix as follows \begin{equation}\label{sim} SIM = \left[ {\begin{array}{*{20}{c}} 1&{Simi_{12}}& \cdots &{Simi_{1j}}& \cdots &{Simi_{1n}}\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ {Sim{i_{i1}}}&{Sim{i_{i2}}}& \cdots &{Sim{i_{ij}}}& \cdots &{Sim{i_{in}}}\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ {Sim{i_{n1}}}&{Sim{i_{n2}}}& \cdots &{Sim{i_{nj}}}& \cdots &1 \end{array}} \right] \end{equation} \begin{definition} Let $Supp(m_i)$ be the support degree of BPA $m_i$ in the system; it can be presented as follows \begin{equation}\label{supp} Supp({m_i}) = \sum\limits_{\scriptstyle j = 1\hfill \atop \scriptstyle j \ne i\hfill}^n {Simi({m_i},{m_j})} \end{equation} \end{definition} From (\ref{sim}) and (\ref{supp}), we can see that the support degree $Supp({m_i})$ is the sum of the similarities between $m_i$ and every other BPA. The larger the value of $Supp({m_i})$, the more important the evidence. Normalizing $Supp(m_i)$, the weight $W(m_i)$ of BPA $m_i$ is obtained as follows \begin{equation}\label{weight} W({m_i}) = \frac{{Supp({m_i})}}{{\sum\limits_{i = 1}^n {Supp({m_i})} }}\end{equation} It is obvious that \[\sum\limits_{i = 1}^n {W({m_i}) = 1} \] $W({m_i})$ indicates the importance and credibility of BPA $m_i$ among all the BPAs in the system, and it can be regarded as the weight of BPA $m_i$. After obtaining the weights, we average the BPAs with these weights and then apply Dempster's combination rule\cite{dempster1967upper,shafer1976mathematical} to the averaged BPA to yield the final BPA. The following example demonstrates the detailed procedure of the proposed method.
\begin{example} Suppose there are four BPAs $m_1$, $m_2$, $m_3$ and $m_4$ on the same frame of discernment $\Theta$: \[\begin{array}{l} {m_1}(R) = 0.3,{m_1}(S) = 0.5,{m_1}(T) = 0.2\\ {m_2}(R) = 0,{m_2}(S) = 0.5,{m_2}(T) = 0.5\\ {m_3}(R) = 0.6,{m_3}(S) = 0.2,{m_3}(T) = 0.2\\ {m_4}(R) = 0.9,{m_4}(S) = 0,{m_4}(T) = 0.1 \end{array}\] \end{example} By (\ref{newmethod})-(\ref{weight}), the weights of the four BPAs $m_1$, $m_2$, $m_3$ and $m_4$ are obtained as follows \[W({m_1}) = 0.2688,W({m_2}) = 0.2276,W({m_3}) = 0.2752,W({m_4}) = 0.2284.\] Therefore, the weighted-average BPA $m_{New}$ before combination is \[\begin{array}{l} {m_{New}}(R) = 0.3\times0.2688 + 0\times0.2276 + 0.6\times0.2752 + 0.9\times0.2284 = 0.4513\\ {m_{New}}(S) = 0.5\times0.2688 + 0.5\times0.2276 + 0.2\times0.2752 + 0\times0.2284 = 0.3033\\ {m_{New}}(T) = 0.2\times0.2688 + 0.5\times0.2276 + 0.2\times0.2752 + 0.1\times0.2284 = 0.2454 \end{array}\] Since there are four BPAs in this example, we apply Dempster's combination rule to combine $m_{New}$ with itself three times; the results are \[m(R) = 0.7744,m(S) = 0.1579,m(T) = 0.0677.\] \section{Numerical examples and Applications} It is known that Dempster-Shafer evidence theory\cite{dempster1967upper,shafer1976mathematical} needs less information than Bayesian probability to deal with uncertain information, and it is often regarded as an extension of Bayesian probability. The following example illustrates the effectiveness of the proposed method.
\begin{example} There are five mass functions on the same frame of discernment; the five BPAs are presented as follows\cite{deng2004efficient} \[\begin{array}{l} {m_1}:{m_1}(A) = 0.5,{m_1}(B) = 0.2,{m_1}(C) = 0.3\\ {m_2}:{m_2}(A) = 0,{m_2}(B) = 0.9,{m_2}(C) = 0.1\\ {m_3}:{m_3}(A) = 0.55,{m_3}(B) = 0.1,{m_3}(C) = 0.35\\ {m_4}:{m_4}(A) = 0.55,{m_4}(B) = 0.1,{m_4}(C) = 0.35\\ {m_5}:{m_5}(A) = 0.55,{m_5}(B) = 0.1,{m_5}(C) = 0.35 \end{array}\] \end{example} The results of different methods for combining the five BPAs are presented in Table~\ref{tabel1}. From Table~\ref{tabel1}, we can see that Dempster's combination rule\cite{dempster1967upper,shafer1976mathematical} cannot handle highly conflicting evidence: once an element is rejected by any BPA, no matter how strongly it is supported by the other BPAs, its probability always remains zero. Murphy's method\cite{murphy2000combining} assumes that every piece of evidence plays the same role in the system and takes little account of the relations among the pieces of evidence. Deng\cite{deng2004efficient} improved Murphy's work by using an evidence distance to weight each piece of evidence. The proposed method is based on Deng's method but utilizes the Hausdorff distance to update the distance matrix. Fig.~\ref{speed} indicates that the convergence speed of the proposed method is slower than that of Deng's method but faster than that of Murphy's method, owing to the additional distance update needed because some sensors may be ordered.
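The combination procedure can be checked numerically. The following sketch (illustrative, restricted to BPAs whose focal elements are all singletons) implements Dempster's rule, reproduces the $m_1,m_2$ column of Table~\ref{tabel1}, and reproduces the final result of the worked example of Section 3, taking the weights quoted there as given:

```python
def dempster(m1, m2):
    """Dempster's rule for BPAs over singleton focal elements only,
    where B n C = A reduces to B = C = A."""
    keys = set(m1) | set(m2)
    unnorm = {A: m1.get(A, 0.0) * m2.get(A, 0.0) for A in keys}
    k = 1.0 - sum(unnorm.values())           # conflict coefficient
    return {A: v / (1.0 - k) for A, v in unnorm.items()}

# Dempster's rule on m1, m2 of the five-BPA example:
m1 = {"A": 0.5, "B": 0.2, "C": 0.3}
m2 = {"A": 0.0, "B": 0.9, "C": 0.1}
print(dempster(m1, m2))  # A = 0, B ~ 0.8571, C ~ 0.1429 (Table 1, first column)

# Worked example of Section 3: weighted average with the weights computed
# there, followed by n - 1 = 3 Dempster self-combinations of the average.
bpas = [{"R": 0.3, "S": 0.5, "T": 0.2},
        {"R": 0.0, "S": 0.5, "T": 0.5},
        {"R": 0.6, "S": 0.2, "T": 0.2},
        {"R": 0.9, "S": 0.0, "T": 0.1}]
w = [0.2688, 0.2276, 0.2752, 0.2284]
m_new = {k: sum(wi * m[k] for wi, m in zip(w, bpas)) for k in ("R", "S", "T")}
result = m_new
for _ in range(len(bpas) - 1):
    result = dempster(result, m_new)
print({k: round(v, 4) for k, v in result.items()})
# close to m(R) = 0.7744, m(S) = 0.1579, m(T) = 0.0677 quoted in the text
```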
\begin{table}[htbp] \centering \caption{Different combination rules applied to highly conflicting evidence.} \begin{center} \begin{tabular}{cllll} \toprule \addtolength\doublerulesep{1pt} \addtolength{\tabcolsep}{1ex} &$m_1,m_2$&$m_1,m_2,m_3$&$m_1,m_2,m_3,m_4$&$m_1,m_2,m_3,m_4,m_5$\\\hline Dempster's&$m(A)=0$&$m(A)=0$&$m(A)=0$&$m(A)=0$\\ combination&$m(B)=0.8571$&$m(B)=0.6316$&$m(B)=0.3288$&$m(B)=0.1228$\\ rule\cite{dempster1967upper,shafer1976mathematical}&$m(C)=0.1429$&$m(C)=0.3684$&$m(C)=0.6712$&$m(C)=0.8772$\\ \quad&\quad&\quad&\quad&\quad\\ Murphy's &$m(A)=0.1543$&$m(A)=0.3500$&$m(A)=0.6027$&$m(A)=0.7958$\\ combination&$m(B)=0.7469$&$m(B)=0.5224$&$m(B)=0.2627$&$m(B)=0.0932$\\ rule\cite{murphy2000combining}&$m(C)=0.0988$&$m(C)=0.1276$&$m(C)=0.1346$&$m(C)=0.1110$\\ \quad&\quad&\quad&\quad&\quad\\ Deng's &$m(A)=0.1543$&$m(A)=0.5816$&$m(A)=0.8060$&$m(A)=0.8909$\\ combination&$m(B)=0.7469$&$m(B)=0.2439$&$m(B)=0.0482$&$m(B)=0.0086$\\ rule\cite{deng2004efficient}&$m(C)=0.0988$&$m(C)=0.1745$&$m(C)=0.1458$&$m(C)=0.1005$\\ \quad&\quad&\quad&\quad&\quad\\ New proposed &$m(A)=0.1543$&$m(A)=0.6355$&$m(A)=0.7605$&$m(A)=0.8761$\\ combination&$m(B)=0.7469$&$m(B)=0.2229$&$m(B)=0.0897$&$m(B)=0.0189$\\ rule&$m(C)=0.0988$&$m(C)=0.1415$&$m(C)=0.1468$&$m(C)=0.1050$\\ \bottomrule \end{tabular} \end{center} \label{tabel1} \end{table} \begin{figure}[!h] \begin{center} \psfig{file=speed.eps,scale=0.8} \caption{The convergence speeds of different approaches.} \label{speed} \end{center} \end{figure} \section{Conclusion} Dempster-Shafer evidence theory is a powerful tool for dealing with uncertain and imprecise information in a wide range of fields. However, the evidence collected may be multifarious, and some of it may be highly conflicting owing to various subjective or objective noise factors. The original Dempster combination rule cannot cope with such highly conflicting evidence. Several modified versions of Dempster's combination rule have been briefly introduced above, and each of them has some drawbacks.
The proposed method inherits the advantages of Deng's method. It applies the Hausdorff distance to update the Jousselme distance, thereby taking more distance information into account. Numerical examples demonstrate that the proposed method can effectively discern the correct target. \bibliographystyle{elsarticle-num}
\section{Tests of Lorentz violation with high-energy astrophysical neutrinos} Lorentz symmetry is a fundamental symmetry underlying both quantum field theory and general relativity. Nevertheless, violation of Lorentz symmetry, often known as Lorentz violation (LV), has been searched for since the iconic Michelson-Morley experiment~\cite{Michelson:1887zz}. LV has been shown to arise in beyond-the-Standard-Model (BSM) theories such as string theory~\cite{Kostelecky:1988zi}, non-commutative field theory~\cite{Carroll:2001ws}, loop quantum gravity~\cite{Gambini:1998it}, Ho\v{r}ava-Lifshitz gravity~\cite{Pospelov:2010mp}, etc. There are many experimental efforts to search for LV, but so far no significant evidence for LV has been found. Constraints obtained from different systems have been compiled in Ref.~\cite{Kostelecky:2008ts}. Since the expected effect of LV is small, experiments tend to use special systems to maximize their sensitivities, such as interferometers (optics~\cite{Kostelecky:2016kkn}, matter waves~\cite{Jaffe:2016fsh}, wave functions~\cite{Pruttivarasin:2014pja}, etc.). High-energy particles (at the LHC~\cite{Carle:2019ouy}, high-energy gamma rays~\cite{Amelino-Camelia:1997ieq}, ultra-high-energy cosmic rays, or UHECRs~\cite{Maccione:2009ju}, etc.) are used to search for signatures of higher-dimension LV operators~\cite{Kostelecky:2009zp,Kostelecky:2011gq,Kostelecky:2013rta}, whose mass dimensions are greater than four. Among the many experiments hunting for LV, the searches using high-energy astrophysical neutrinos are special for the following three reasons: \begin{enumerate} \item Astrophysical neutrino energies reach higher than those of any anthropogenic beam. \item Neutrinos travel very long distances, from source to detection, in a straight path. \item Quantum mixing can enhance the sensitivity. \end{enumerate} \noindent LV can be seen as a classical \ae ther field, a new background field permeating the vacuum.
The propagation of neutrinos may be affected by this field, which can cause a variety of effects, including spectrum distortion, modification of the group velocity, and anomalous neutrino oscillations, possibly with direction dependence. Astrophysical neutrinos propagate long distances without interactions, which makes them advantageous for searches for these exotic effects. Furthermore, the higher-dimension operators, the nonrenormalizable sector of effective field theory, have stronger energy dependence, and high-energy astrophysical neutrinos can be more sensitive to them; for example, the dimension-six operator is the lowest-order such interaction term with new physics. Lastly, these effects are likely to be very small, and kinematic tests may not be sensitive enough to find them. Neutrinos are natural interferometers, and using their quantum mixing we can reach the signal region of LV expected from quantum-gravity-motivated models. Fig.~\ref{fig:energy} shows the phase space of new physics one can explore with neutrinos~\cite{Arguelles:2019rbn}. Here, the horizontal axis is the neutrino energy and the vertical axis is the propagation distance of the neutrinos from source to detector. Large areas below 100~GeV and 100~km are explored by anthropogenically produced neutrinos (\textit{e.g.}, reactor, short- and long-baseline, or SBL and LBL, neutrino experiments) and low-energy astrophysical neutrinos (solar and supernova neutrinos). However, higher energies and longer baselines have not been explored. High-energy astrophysical neutrinos travel over 100~Mpc, and they can explore new physics that is extremely weakly coupled to neutrinos, such as LV. High-energy astrophysical neutrinos can also reach PeV energies and may enhance sensitivity to the new physics related to power-counting nonrenormalizable operators~\cite{Kostelecky:2011gq}.
\end{paracol} \nointerlineskip \begin{figure}[H] \widefigure \includegraphics[width=15 cm]{Definitions/fig_scales_energy_icrc.png} \caption{Neutrino sources are shown as a function of neutrino energy and distance traveled. Anthropogenically produced neutrinos, including reactor neutrinos and very-short-, short-, and long-baseline (VSBL, SBL, and LBL) neutrino beam experiments, can investigate new physics related to short travel distances. Low-energy astrophysical neutrinos can be used to study new physics related to longer travel distances. High-energy astrophysical neutrinos explore the highest energies and longest traveled distances, corresponding to the top right corner of this figure. Figure is adapted from Ref.~\cite{Arguelles:2019rbn}.\label{fig:energy}} \end{figure} \begin{paracol}{2} \switchcolumn \section{Tests of Lorentz violation with kinematic observables} High-energy particles, such as gamma rays~\cite{Amelino-Camelia:1997ieq,Gagnon:2004xh,Altschul:2006uw,Kaufhold:2007qd,Maccione:2007yc,Altschul:2007vc,Klinkhamer:2008ky,Altschul:2010nf,Diaz:2015hxa,Altschul:2016ces,Schreck:2017isa,Colladay:2017qfr} and UHECRs~\cite{Maccione:2009ju}, have been used to test LV. Similarly, neutrinos can be used to find exotic effects, if they exist, with two advantages. First, neutrinos are elementary particles, while UHECRs are composite --- in fact, with unknown composition. This makes LV constraints obtained with neutrinos easier to interpret within a theoretical framework. Second, high-energy gamma rays interact with the cosmic microwave background and thus travel shorter distances than high-energy astrophysical neutrinos. The effects of LV on high-energy neutrinos arise from their experiencing a non-trivial vacuum as they propagate from their sources to us. A field in vacuum, motivated by new physics, could permeate space-time and violate the large-scale isotropy of the Universe and hence produce LV (effectively similar to the classical \ae ther).
Under such conditions, neutrinos would emit particles in vacuum~\cite{Cohen:2011hx,Diaz:2013wia,Borriello:2013ala,Stecker:2014xja,Wang:2020tej}, and this energy loss would attenuate the highest-energy neutrinos as they travel long distances. Such a test has been performed using the high-energy astrophysical neutrino samples~\cite{IceCube:2018cha,IceCube:2020wum}. Multi-messenger astronomy allows us to study the difference in time-of-flight (ToF) between neutrinos and photons beyond the neutrino mass effect. The first such opportunity was supernova 1987A, whose data were used for tests of Lorentz invariance~\cite{Longo:1987gc,Krauss:1987me,Ellis:2011uk}. These tests become more interesting with high-energy astrophysical neutrinos because of the energy dependence of some models, and the first such opportunity was the blazar TXS0506+056, the first identified high-energy astrophysical neutrino source~\cite{IceCube:2018cha}. From the observed neutrino and photon arrival times, several limits on the deviation of the neutrino velocity from the speed of light were derived~\cite{Ellis:2018ogq,Laha:2018hsh,Wei:2018ajw}. Although TXS0506+056 has so far been the only detected source, searches for neutrino emission from transient events are continuously performed. These analyses usually assume that neutrinos travel at the speed of light; however, if the neutrino ToF is modified due to LV, one could find more coincidences with transient events by assuming LV. The couplings of neutrinos to LV background fields can be classified into two groups, CPT-odd and CPT-even. The CPT-odd LV operators change sign under a CPT transformation and hence effectively violate CPT symmetry: if the effective velocity of neutrinos is larger than in the Lorentz-invariant case, that of antineutrinos is lower, and vice versa.
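The magnitude of the ToF shifts targeted by such searches can be sketched with a back-of-the-envelope estimate. The sketch below assumes, purely for illustration, a linear energy dependence of the velocity shift, $\delta v/c \sim E/E_{QG}$ (roughly the dimension-five-like case), a quantum-gravity scale $E_{QG} = 10^{17}$~GeV, and a $\sim$Gpc baseline typical of extragalactic sources:

```python
# Order-of-magnitude sketch of an LV-induced time-of-flight shift.
# Assumes a linear velocity shift dv/c ~ E / E_QG (a model assumption,
# roughly the dimension-five case); all numbers are illustrative.
E_nu  = 1e5          # neutrino energy: 100 TeV, in GeV
E_QG  = 1e17         # illustrative quantum-gravity scale, in GeV
MPC_M = 3.086e22     # metres per megaparsec
C     = 2.998e8      # speed of light, m/s

L = 1000 * MPC_M            # ~1 Gpc baseline
dv_over_c = E_nu / E_QG     # fractional velocity shift, here 1e-12
dt = (L / C) * dv_over_c    # arrival-time shift in seconds
print(f"{dt:.2e} s")        # ~1e5 s, i.e. about a day
```

Shifts of this order (hours to days) are why LV-motivated searches scan wide time windows around transients rather than assuming exact coincidence.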
The IceCube collaboration has not identified any high-energy astrophysical neutrino point sources other than TXS0506+056 with high significance~\cite{IceCube:2019cia} (see~\cite{Stein:2020xhk} for a recent search for potential high-energy neutrino sources). In particular, no gamma-ray burst (GRB) has been identified as a high-energy astrophysical neutrino source~\cite{IceCube:2016ipa,IceCube:2017amx} under Standard Model assumptions. However, by assuming energy-dependent couplings with LV and sign changes (time delay or time advance), it is possible to find coincidences with GRBs for a new-physics scale of order $\sim 10^{17}~{\rm GeV}$~\cite{Zhang:2018otj} (see~\cite{Crivellin:2020oov} for the implications of this result in the charged-lepton sector). Although this is a very tantalizing result, it is challenging to verify experimentally, because the neutrino and antineutrino cross-section difference is less than 15\% at $\ge 200$~TeV~\cite{CooperSarkar:2011pa,Arguelles:2015wba,Bertone:2018dse} and charge separation is possible only in special reactions, such as resonant $W$-boson production~\cite{IceCube:2021rpz}. In the near future, data from IceCube and neutrino observatories currently under construction, such as KM3NeT~\cite{Adrian-Martinez:2016fdl} and GVD~\cite{Avrorin:2019vfc}, will provide increased sensitivity to Lorentz violation. Further along, a new generation of neutrino telescopes on ice (IceCube-Gen2~\cite{IceCube-Gen2:2020qha}), in water (P-ONE~\cite{Agostini:2020aar}), on mountains (Ashra NTA~\cite{Sasaki:2017zwd}, TAMBO~\cite{Romero-Wolf:2020pzh}, and GRAND~\cite{GRAND:2018iaj}), or in outer space (POEMMA~\cite{POEMMA:2020ykm}), among others, will be able to test this hypothesis further. \section{Tests of Lorentz violation with neutrino oscillations} Neutrinos are natural interferometers, able to measure extremely small quantities --- such as the neutrino mass-squared differences --- by observing the \textit{beats} of the different neutrino flavors.
Searches for distortions of the neutrino oscillation pattern arising from LV have been performed by almost all neutrino oscillation experiments~\cite{Kostelecky:2008ts}. Among them, natural sources --- such as solar neutrinos, atmospheric neutrinos, and astrophysical neutrinos --- have advantages due to their very long baselines and/or higher attainable energies. Atmospheric neutrinos can be produced on the other side of the Earth and penetrate the full Earth diameter (12742~km), providing the largest possible interferometer on the Earth with which to study neutrino oscillations. The energy of these neutrinos reaches of order 50~TeV or more~\cite{Fedynitch:2018cbl}, corresponding to the highest-energy neutrinos produced by cosmic-ray particles arriving at the Earth. The atmospheric neutrino flux below around 50~TeV is produced predominantly by pion and kaon decays and is called the ``conventional'' atmospheric neutrino flux. This flux is relatively well understood and has been measured, in contrast to the high-energy atmospheric neutrinos produced by charmed-meson decays, which are predicted with larger errors and have avoided detection so far. Furthermore, the astrophysical neutrino flux starts to overtake the atmospheric neutrino flux at around 50~TeV. Thus, these conventional atmospheric neutrinos can be used to search for LV. The concept of such a search is shown in Fig.~\ref{fig:atmo},~left. The LV oscillatory effect can be searched for in two ways, depending on the assumptions about the largest non-zero LV terms. On one hand, Super-Kamiokande~\cite{Super-Kamiokande:2014exs} and IceCube-40 (the partially instrumented IceCube)~\cite{IceCube:2010fyu} searched for signatures of anisotropy in atmospheric neutrinos due to LV. On the other hand, AMANDA-II~\cite{IceCube:2009ckd} and IceCube~\cite{IceCube:2017qyp} looked for spectral distortions due to LV.
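The scale of the attainable sensitivity can be estimated by requiring the LV-induced oscillation phase to be of order one over the available baseline. In the effective-Hamiltonian picture, a dimension-$d$ coefficient enters with $E^{d-3}$, so a dimension-six coefficient contributes a phase of order $c^{(6)} E^{3} L$ in natural units; the numbers below (50~TeV neutrinos crossing the Earth diameter) are illustrative assumptions:

```python
# Rough sensitivity estimate for a dimension-six LV coefficient from
# atmospheric neutrinos: require phase ~ c6 * E^3 * L ~ 1 (natural units).
# The E^(d-3) scaling and the chosen numbers are illustrative assumptions.
HBARC = 1.973e-16          # GeV * m, conversion factor
E = 5e4                    # 50 TeV, top of the conventional flux, in GeV
L_m = 1.2742e7             # Earth diameter in metres
L = L_m / HBARC            # baseline in GeV^-1 (~6.5e22)
c6 = 1.0 / (E**3 * L)      # GeV^-2: coefficient giving an O(1) phase
print(f"{c6:.1e} GeV^-2")  # ~1e-37, comparable to the ~1e-36 scale discussed in the text
```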
Fig.~\ref{fig:atmo},~right, shows the spectrum distortion due to the presence of an interaction between neutrinos and an isotropic LV background field. One can see the very high sensitivity of this approach, especially for high-dimension LV operators. Here, atmospheric neutrinos detected by IceCube are sensitive to dimension-six LV operators down to $\sim 10^{-36}~{\rm GeV}^{-2}$, making these neutrinos one of the most sensitive probes of LV. To use the highest-energy atmospheric neutrino data, IceCube used the up-going muon data sample for this analysis~\cite{IceCube:2015qii}. These muons are created by neutrino interactions in the rock surrounding the detector or in the ice of the Antarctic glacier. A signal of LV would be exhibited as a spectrum distortion of the high-energy muons. As seen in Fig.~\ref{fig:atmo}, right, the data are consistent with unity and there is no obvious sign of LV. This search does not find nonzero LV and produces a limit that reaches down to $\sim 10^{-24}~{\rm GeV}$ for the dimension-three operator, or $\sim 10^{-36}~{\rm GeV^{-2}}$ for the dimension-six operator~\cite{IceCube:2017qyp}. These are among the best constraints on LV across systems ranging from table-top experiments to cosmology. \end{paracol} \nointerlineskip \begin{figure}[H] \widefigure \includegraphics[width=7.5 cm]{Definitions/Fig1_v3} \includegraphics[width=7.5 cm]{Definitions/Fig2_v2} \caption{Left, artistic illustration of the search for LV with atmospheric neutrino oscillations. Atmospheric neutrinos are produced in the upper atmosphere, and their flavors may be converted due to couplings between neutrinos and LV background fields as they propagate. The effects induced by the new physics are negligible if neutrinos travel a short distance, namely for neutrinos entering the IceCube volume from the horizontal direction.
However, neutrinos produced near the northern sky travel a long distance before they reach IceCube, and they are significantly more impacted by interactions with the LV background fields. Right, expected atmospheric neutrino oscillation probability ratio as a function of energy due to LV. Here, the vertical axis is the double ratio of the oscillation probabilities for neutrinos from the vertical and horizontal directions. The no-LV case is normalized to unity in this figure. Nonzero LV modifies this ratio, and larger deviations occur for larger LV couplings. The figure shows the sensitivity to $|c_{\mu\tau}^{(6)}|$, one of the dimension-six operators that parameterize LV and to which this analysis is sensitive. Figures are adapted from~\cite{IceCube:2017qyp}. \label{fig:atmo}} \end{figure} \begin{paracol}{2} \switchcolumn \section{Tests of Lorentz violation with neutrino mixings} Astrophysical neutrinos constitute extremely-long-baseline neutrino oscillation experiments. In these systems, neutrino coherence depends on the details of the astrophysical neutrino source, the detection method, and the propagation distance. For example, for the observed high-energy astrophysical neutrino flux, which is dominated by extragalactic sources, neutrino oscillations are not observable due to the relatively poor energy resolution and the extremely large ratio of baseline to energy. In this scenario, we are only able to observe neutrino mixing instead of oscillations among neutrino states. However, even if we cannot resolve the neutrino oscillation pattern, new-physics effects such as LV remain observable, because the propagation eigenstates and the detection eigenstates are not the same. The SNO experiment searched for LV in the annual modulation of the solar neutrino signal~\cite{SNO:2018mge}. Assuming a non-isotropic, static LV background field within the solar system, neutrinos propagating in one direction may be affected differently from others.
A search for such a signal must control for all other natural modulations of the solar neutrino signal, including the eccentricity of the Earth's orbit and the day-night effect caused by propagation through the Earth's matter. The sensitivity of this search reaches of order $\sim 10^{-21}~{\rm GeV}$ in dimension-three LV coefficients. High-energy astrophysical neutrinos offer even longer baselines. Most of these neutrinos do not have identified sources, and the flux is isotropic. Moreover, the source candidates populate distances from the Earth of order 100~Mpc or more. In the high-energy starting event (HESE) sample of IceCube~\cite{IceCube:2020wum}, the energy of these neutrinos ranges from around 60~TeV to 2~PeV. These high-energy neutrinos can push the search for higher-dimension LV operators. Fig.~\ref{fig:astro}, left, shows the sensitivities of different systems to LV operators. High-energy astrophysical neutrino flavors are expected to offer the most sensitive LV searches for dimension-five and dimension-six operators~\cite{Katori:2019xpc}. \end{paracol} \nointerlineskip \begin{figure}[H] \centering \widefigure \includegraphics[width=8 cm]{LVlimit_color} \includegraphics[width=7 cm]{Definitions/flavor_scan_data_bunnies_7yr_steps21_Nov_fixMuNorm_inel_gf_source_shaded_serif_paper2c_contour2c_tau} \caption{Left, the LV sensitivities of different systems. The sensitivity is normalized to the Planck energy ($E_P=1.22\times 10^{19}~{\rm GeV}$), assuming LV arises from Planck-scale physics. This means that the natural scale of a dimension-five LV operator is $1/E_P$, that of a dimension-six LV operator is $1/E_P^2$, and so on, and the sensitivity is normalized so that these numbers appear as unity. Lines are gravitational Cherenkov emission~\cite{Moore:2001bv,Kostelecky:2015dpa}, GRB vacuum birefringence~\cite{Kostelecky:2013rv}, UHECRs~\cite{Maccione:2009ju}, atmospheric neutrino oscillations~\cite{IceCube:2017qyp}, and the expected sensitivity from the high-energy astrophysical neutrino flavors.
Figure is adapted from~\cite{Katori:2019xpc}. Right, HESE flavor ratio $(\nu_e:\nu_\mu:\nu_\tau)$ measurement by IceCube. Each corner represents a state dominated by electron, muon, or tau neutrinos. The standard scenarios (dotted-line area) give roughly $(\nu_e:\nu_\mu:\nu_\tau)\sim(1:1:1)$ on the Earth regardless of the assumed flavor ratio at the source. This means that the expected flavor ratio on the Earth is always around the center of this plot under standard assumptions. On the other hand, the current data contours enclose a large area, so different standard scenarios cannot be distinguished. Figure is adapted from~\cite{IceCube:2020abv}. \label{fig:astro}} \end{figure} \begin{paracol}{2} \switchcolumn Fig.~\ref{fig:astro}, right, shows the current status of the high-energy neutrino flavor ratio measurements. Since the statistics of high-energy neutrinos are low, flavors are measured integrated over the whole spectrum and normalized to the total flux; namely, the flavor ratio $(\nu_e:\nu_\mu:\nu_\tau)$ is reported. Astrophysical neutrino production models give some information about the neutrino flavor composition at the source; the most likely scenario is some combination of electron and muon neutrinos. Two extreme cases are production dominated by electron neutrinos, $(1:0:0)$, or by muon neutrinos, $(0:1:0)$, and all other possibilities lie between these two models. Remarkably, all of these scenarios yield more or less the same flavor ratio at the Earth after neutrino mixing, namely $\sim (1:1:1)$~\cite{Arguelles:2015dca,Bustamante:2015waa}. All of them predict a flavor ratio at the Earth near the center of the flavor triangle, and the spread of the central region is related to the current uncertainty of the mixing angles; for projections see~\cite{Song:2020nfh}.
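The statement that standard mixing drives very different source compositions toward $\sim(1:1:1)$ at the Earth can be checked numerically with averaged oscillations, $P_{\alpha\beta}=\sum_i |U_{\alpha i}|^2 |U_{\beta i}|^2$. The sketch below uses illustrative best-fit mixing angles with $\delta_{CP}=0$ assumed for simplicity:

```python
import math

# Illustrative PMNS best-fit angles in degrees; delta_CP = 0 assumed,
# so the mixing matrix is real.
t12, t23, t13 = 33.4, 49.0, 8.6
s12, c12 = math.sin(math.radians(t12)), math.cos(math.radians(t12))
s23, c23 = math.sin(math.radians(t23)), math.cos(math.radians(t23))
s13, c13 = math.sin(math.radians(t13)), math.cos(math.radians(t13))

# |U_{alpha i}|^2 for the real PMNS matrix, rows = (e, mu, tau).
U2 = [
    [(c12 * c13)**2, (s12 * c13)**2, s13**2],
    [(-s12 * c23 - c12 * s23 * s13)**2,
     (c12 * c23 - s12 * s23 * s13)**2, (s23 * c13)**2],
    [(s12 * s23 - c12 * c23 * s13)**2,
     (-c12 * s23 - s12 * c23 * s13)**2, (c23 * c13)**2],
]

def earth_ratio(source):
    """Averaged-oscillation flavor composition at Earth for a source ratio."""
    w = [sum(source[a] * U2[a][i] for a in range(3)) for i in range(3)]
    return [sum(w[i] * U2[b][i] for i in range(3)) for b in range(3)]

# Pion-decay source (1:2:0), normalized.
f = earth_ratio([1 / 3, 2 / 3, 0.0])
print([round(x, 3) for x in f])  # each component close to 1/3
```

Repeating this for the extreme compositions $(1:0:0)$ and $(0:1:0)$ moves the Earth ratio only modestly away from the center of the flavor triangle, which is why current flavor contours cannot yet distinguish the standard scenarios.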
On the other hand, the current data enclose a large region, and it is not easy to distinguish any particular scenario~\cite{IceCube:2015rro,IceCube:2015gsk,IceCube:2018pgc,IceCube:2020abv}. Thus, it is necessary to shrink this contour to measure possible deviations of the flavor ratio from the standard case. This requires larger sample sizes and better algorithms to measure neutrino flavors in neutrino telescopes~\cite{Song:2020nfh}. Many different types of new physics can be probed through the effective-operator approach with astrophysical neutrino flavor measurements, including neutrino-dark matter interactions~\cite{Miranda:2013wla,deSalas:2016svi,Farzan:2018pnk}, neutrino-dark energy interactions~\cite{Ando:2009ts,Klop:2017dim}, neutrino self-interactions~\cite{DiFranzo:2015qea,Cherry:2016jol,Creque-Sarbinowski:2020qhz}, and neutrino long-range forces~\cite{Bustamante:2018mzu}. The first IceCube results testing these models were published recently~\cite{IceCube:2021tdn}. To summarize, the search for signatures of LV with high-energy astrophysical neutrinos has just begun. The sensitivity to certain operators exceeds that of any known system and reaches the Planck scale. Thus, these probes have great potential for the discovery of violations of fundamental space-time symmetries. \section*{Acknowledgement} We thank Rogan Clark for a careful reading of this manuscript. CAA is supported by the Faculty of Arts and Sciences of Harvard University and the Alfred P. Sloan Foundation. TK is supported by the Science and Technology Facilities Council (UK). \end{paracol} \reftitle{References} \externalbibliography{yes}